.Rproj.user
_version.py
*.bak
+*.log
arvados-snakeoil-ca.pem
.vagrant
sdk/python/tests/fed-migrate/*.cwl
sdk/python/tests/fed-migrate/*.cwlex
doc/install/*.xlsx
+sdk/cwl/tests/wf/hello.txt
+sdk/cwl/tests/wf/indir1/hello2.txt
\ No newline at end of file
Veritas Genetics, Inc. <*@veritasgenetics.com>
Curii Corporation, Inc. <*@curii.com>
Dante Tsang <dante@dantetsang.com>
-Codex Genetics Ltd <info@codexgenetics.com>
\ No newline at end of file
+Codex Genetics Ltd <info@codexgenetics.com>
+Bruno P. Kinoshita <brunodepaulak@yahoo.com.br>
Those interested in contributing should begin by joining the [Arvados community
channel](https://gitter.im/arvados/community) and telling us about your interest.
-Contributers should also create an account at https://dev.arvados.org
+Contributors should also create an account at https://dev.arvados.org
to be able to create and comment on bug tracker issues. The
Arvados public bug tracker is located at
https://dev.arvados.org/projects/arvados/issues .
-Contributers may also be interested in the [development road map](https://dev.arvados.org/issues/gantt?utf8=%E2%9C%93&set_filter=1&gantt=1&f%5B%5D=project_id&op%5Bproject_id%5D=%3D&v%5Bproject_id%5D%5B%5D=49&f%5B%5D=&zoom=1).
+Contributors may also be interested in the [development road map](https://dev.arvados.org/issues/gantt?utf8=%E2%9C%93&set_filter=1&gantt=1&f%5B%5D=project_id&op%5Bproject_id%5D=%3D&v%5Bproject_id%5D%5B%5D=49&f%5B%5D=&zoom=1).
# Development
Git repositories for primary development are located at
https://git.arvados.org/ and can also be browsed at
https://dev.arvados.org/projects/arvados/repository . Every push to
-the master branch is also mirrored to Github at
+the main branch is also mirrored to GitHub at
https://github.com/arvados/arvados .
Visit [Hacking Arvados](https://dev.arvados.org/projects/arvados/wiki/Hacking) for
2. Clone your fork, make your changes, commit to your fork.
3. Every commit message must have a DCO sign-off and every file must have a SPDX license (see below).
4. Add yourself to the [AUTHORS](AUTHORS) file
-5. When your fork is ready, through Github, Create a Pull Request against `arvados:master`
+5. When your fork is ready, create a pull request on GitHub against `arvados:main`
6. Notify the core team about your pull request through the [Arvados development
channel](https://gitter.im/arvados/development) or by other means.
7. A member of the core team will review the pull request. They may have questions or comments, or request changes.
8. When the contribution is ready, a member of the core team will
-merge the pull request into the master branch, which will
+merge the pull request into the main branch, which will
automatically resolve the pull request.
The Arvados project does not require a contributor agreement in advance, but does require that each commit message include a [Developer Certificate of Origin](https://dev.arvados.org/projects/arvados/wiki/Developer_Certificate_Of_Origin). Please ensure *every git commit message* includes `Arvados-DCO-1.1-Signed-off-by`. If you have already made commits without it, fix them with `git commit --amend` or `git rebase`.
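The sign-off workflow above can be sketched in a throwaway repository. This is a minimal example, not project tooling; the author name, email, branch state, and commit messages below are all hypothetical:

```shell
# Work in a temporary repository so nothing real is touched.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name "Jane Smith"
git config user.email "jane@example.com"

# Include the DCO line as the final line of the commit message:
git commit --allow-empty -m "Fix typo in docs

Arvados-DCO-1.1-Signed-off-by: Jane Smith <jane@example.com>"

# Forgot it on the most recent commit? Rewrite the message in place:
git commit --amend -m "Fix typo in docs

Arvados-DCO-1.1-Signed-off-by: Jane Smith <jane@example.com>"

# Confirm the trailer is present before pushing:
git log -1 --format=%B | grep Arvados-DCO-1.1-Signed-off-by
```

For sign-offs missing on older commits, an interactive `git rebase -i` lets you mark each affected commit for `reword` and add the same trailer.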
Continuous integration is hosted at https://ci.arvados.org/
-Currently, external contributers cannot trigger builds. We are investigating integration with Github pull requests for the future.
+Currently, external contributors cannot trigger builds. We are investigating integration with GitHub pull requests for the future.
[![Build Status](https://ci.arvados.org/buildStatus/icon?job=run-tests)](https://ci.arvados.org/job/run-tests/)
This enables machine processing of license information based on the SPDX
License Identifiers that are available here: http://spdx.org/licenses/
-The full license text for each license is available in this directory:
+The full license text for each license is appended below, and is also available
+in this directory:
AGPL-3.0: agpl-3.0.txt
Apache-2.0: apache-2.0.txt
As a general rule, code in the sdk/ directory is licensed Apache-2.0,
documentation in the doc/ directory is licensed CC-BY-SA-3.0, and
-everything else is licensed AGPL-3.0.
\ No newline at end of file
+everything else is licensed AGPL-3.0.
+
+###############################################################################
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+###############################################################################
+
+ GNU AFFERO GENERAL PUBLIC LICENSE
+ Version 3, 19 November 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+ Preamble
+
+ The GNU Affero General Public License is a free, copyleft license for
+software and other kinds of works, specifically designed to ensure
+cooperation with the community in the case of network server software.
+
+ The licenses for most software and other practical works are designed
+to take away your freedom to share and change the works. By contrast,
+our General Public Licenses are intended to guarantee your freedom to
+share and change all versions of a program--to make sure it remains free
+software for all its users.
+
+ When we speak of free software, we are referring to freedom, not
+price. Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+them if you wish), that you receive source code or can get it if you
+want it, that you can change the software or use pieces of it in new
+free programs, and that you know you can do these things.
+
+ Developers that use our General Public Licenses protect your rights
+with two steps: (1) assert copyright on the software, and (2) offer
+you this License which gives you legal permission to copy, distribute
+and/or modify the software.
+
+ A secondary benefit of defending all users' freedom is that
+improvements made in alternate versions of the program, if they
+receive widespread use, become available for other developers to
+incorporate. Many developers of free software are heartened and
+encouraged by the resulting cooperation. However, in the case of
+software used on network servers, this result may fail to come about.
+The GNU General Public License permits making a modified version and
+letting the public access it on a server without ever releasing its
+source code to the public.
+
+ The GNU Affero General Public License is designed specifically to
+ensure that, in such cases, the modified source code becomes available
+to the community. It requires the operator of a network server to
+provide the source code of the modified version running there to the
+users of that server. Therefore, public use of a modified version, on
+a publicly accessible server, gives the public access to the source
+code of the modified version.
+
+ An older license, called the Affero General Public License and
+published by Affero, was designed to accomplish similar goals. This is
+a different license, not a version of the Affero GPL, but Affero has
+released a new version of the Affero GPL which permits relicensing under
+this license.
+
+ The precise terms and conditions for copying, distribution and
+modification follow.
+
+ TERMS AND CONDITIONS
+
+ 0. Definitions.
+
+ "This License" refers to version 3 of the GNU Affero General Public License.
+
+ "Copyright" also means copyright-like laws that apply to other kinds of
+works, such as semiconductor masks.
+
+ "The Program" refers to any copyrightable work licensed under this
+License. Each licensee is addressed as "you". "Licensees" and
+"recipients" may be individuals or organizations.
+
+ To "modify" a work means to copy from or adapt all or part of the work
+in a fashion requiring copyright permission, other than the making of an
+exact copy. The resulting work is called a "modified version" of the
+earlier work or a work "based on" the earlier work.
+
+ A "covered work" means either the unmodified Program or a work based
+on the Program.
+
+ To "propagate" a work means to do anything with it that, without
+permission, would make you directly or secondarily liable for
+infringement under applicable copyright law, except executing it on a
+computer or modifying a private copy. Propagation includes copying,
+distribution (with or without modification), making available to the
+public, and in some countries other activities as well.
+
+ To "convey" a work means any kind of propagation that enables other
+parties to make or receive copies. Mere interaction with a user through
+a computer network, with no transfer of a copy, is not conveying.
+
+ An interactive user interface displays "Appropriate Legal Notices"
+to the extent that it includes a convenient and prominently visible
+feature that (1) displays an appropriate copyright notice, and (2)
+tells the user that there is no warranty for the work (except to the
+extent that warranties are provided), that licensees may convey the
+work under this License, and how to view a copy of this License. If
+the interface presents a list of user commands or options, such as a
+menu, a prominent item in the list meets this criterion.
+
+ 1. Source Code.
+
+ The "source code" for a work means the preferred form of the work
+for making modifications to it. "Object code" means any non-source
+form of a work.
+
+ A "Standard Interface" means an interface that either is an official
+standard defined by a recognized standards body, or, in the case of
+interfaces specified for a particular programming language, one that
+is widely used among developers working in that language.
+
+ The "System Libraries" of an executable work include anything, other
+than the work as a whole, that (a) is included in the normal form of
+packaging a Major Component, but which is not part of that Major
+Component, and (b) serves only to enable use of the work with that
+Major Component, or to implement a Standard Interface for which an
+implementation is available to the public in source code form. A
+"Major Component", in this context, means a major essential component
+(kernel, window system, and so on) of the specific operating system
+(if any) on which the executable work runs, or a compiler used to
+produce the work, or an object code interpreter used to run it.
+
+ The "Corresponding Source" for a work in object code form means all
+the source code needed to generate, install, and (for an executable
+work) run the object code and to modify the work, including scripts to
+control those activities. However, it does not include the work's
+System Libraries, or general-purpose tools or generally available free
+programs which are used unmodified in performing those activities but
+which are not part of the work. For example, Corresponding Source
+includes interface definition files associated with source files for
+the work, and the source code for shared libraries and dynamically
+linked subprograms that the work is specifically designed to require,
+such as by intimate data communication or control flow between those
+subprograms and other parts of the work.
+
+ The Corresponding Source need not include anything that users
+can regenerate automatically from other parts of the Corresponding
+Source.
+
+ The Corresponding Source for a work in source code form is that
+same work.
+
+ 2. Basic Permissions.
+
+ All rights granted under this License are granted for the term of
+copyright on the Program, and are irrevocable provided the stated
+conditions are met. This License explicitly affirms your unlimited
+permission to run the unmodified Program. The output from running a
+covered work is covered by this License only if the output, given its
+content, constitutes a covered work. This License acknowledges your
+rights of fair use or other equivalent, as provided by copyright law.
+
+ You may make, run and propagate covered works that you do not
+convey, without conditions so long as your license otherwise remains
+in force. You may convey covered works to others for the sole purpose
+of having them make modifications exclusively for you, or provide you
+with facilities for running those works, provided that you comply with
+the terms of this License in conveying all material for which you do
+not control copyright. Those thus making or running the covered works
+for you must do so exclusively on your behalf, under your direction
+and control, on terms that prohibit them from making any copies of
+your copyrighted material outside their relationship with you.
+
+ Conveying under any other circumstances is permitted solely under
+the conditions stated below. Sublicensing is not allowed; section 10
+makes it unnecessary.
+
+ 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
+
+ No covered work shall be deemed part of an effective technological
+measure under any applicable law fulfilling obligations under article
+11 of the WIPO copyright treaty adopted on 20 December 1996, or
+similar laws prohibiting or restricting circumvention of such
+measures.
+
+ When you convey a covered work, you waive any legal power to forbid
+circumvention of technological measures to the extent such circumvention
+is effected by exercising rights under this License with respect to
+the covered work, and you disclaim any intention to limit operation or
+modification of the work as a means of enforcing, against the work's
+users, your or third parties' legal rights to forbid circumvention of
+technological measures.
+
+ 4. Conveying Verbatim Copies.
+
+ You may convey verbatim copies of the Program's source code as you
+receive it, in any medium, provided that you conspicuously and
+appropriately publish on each copy an appropriate copyright notice;
+keep intact all notices stating that this License and any
+non-permissive terms added in accord with section 7 apply to the code;
+keep intact all notices of the absence of any warranty; and give all
+recipients a copy of this License along with the Program.
+
+ You may charge any price or no price for each copy that you convey,
+and you may offer support or warranty protection for a fee.
+
+ 5. Conveying Modified Source Versions.
+
+ You may convey a work based on the Program, or the modifications to
+produce it from the Program, in the form of source code under the
+terms of section 4, provided that you also meet all of these conditions:
+
+ a) The work must carry prominent notices stating that you modified
+ it, and giving a relevant date.
+
+ b) The work must carry prominent notices stating that it is
+ released under this License and any conditions added under section
+ 7. This requirement modifies the requirement in section 4 to
+ "keep intact all notices".
+
+ c) You must license the entire work, as a whole, under this
+ License to anyone who comes into possession of a copy. This
+ License will therefore apply, along with any applicable section 7
+ additional terms, to the whole of the work, and all its parts,
+ regardless of how they are packaged. This License gives no
+ permission to license the work in any other way, but it does not
+ invalidate such permission if you have separately received it.
+
+ d) If the work has interactive user interfaces, each must display
+ Appropriate Legal Notices; however, if the Program has interactive
+ interfaces that do not display Appropriate Legal Notices, your
+ work need not make them do so.
+
+ A compilation of a covered work with other separate and independent
+works, which are not by their nature extensions of the covered work,
+and which are not combined with it such as to form a larger program,
+in or on a volume of a storage or distribution medium, is called an
+"aggregate" if the compilation and its resulting copyright are not
+used to limit the access or legal rights of the compilation's users
+beyond what the individual works permit. Inclusion of a covered work
+in an aggregate does not cause this License to apply to the other
+parts of the aggregate.
+
+ 6. Conveying Non-Source Forms.
+
+ You may convey a covered work in object code form under the terms
+of sections 4 and 5, provided that you also convey the
+machine-readable Corresponding Source under the terms of this License,
+in one of these ways:
+
+ a) Convey the object code in, or embodied in, a physical product
+ (including a physical distribution medium), accompanied by the
+ Corresponding Source fixed on a durable physical medium
+ customarily used for software interchange.
+
+ b) Convey the object code in, or embodied in, a physical product
+ (including a physical distribution medium), accompanied by a
+ written offer, valid for at least three years and valid for as
+ long as you offer spare parts or customer support for that product
+ model, to give anyone who possesses the object code either (1) a
+ copy of the Corresponding Source for all the software in the
+ product that is covered by this License, on a durable physical
+ medium customarily used for software interchange, for a price no
+ more than your reasonable cost of physically performing this
+ conveying of source, or (2) access to copy the
+ Corresponding Source from a network server at no charge.
+
+ c) Convey individual copies of the object code with a copy of the
+ written offer to provide the Corresponding Source. This
+ alternative is allowed only occasionally and noncommercially, and
+ only if you received the object code with such an offer, in accord
+ with subsection 6b.
+
+ d) Convey the object code by offering access from a designated
+ place (gratis or for a charge), and offer equivalent access to the
+ Corresponding Source in the same way through the same place at no
+ further charge. You need not require recipients to copy the
+ Corresponding Source along with the object code. If the place to
+ copy the object code is a network server, the Corresponding Source
+ may be on a different server (operated by you or a third party)
+ that supports equivalent copying facilities, provided you maintain
+ clear directions next to the object code saying where to find the
+ Corresponding Source. Regardless of what server hosts the
+ Corresponding Source, you remain obligated to ensure that it is
+ available for as long as needed to satisfy these requirements.
+
+ e) Convey the object code using peer-to-peer transmission, provided
+ you inform other peers where the object code and Corresponding
+ Source of the work are being offered to the general public at no
+ charge under subsection 6d.
+
+ A separable portion of the object code, whose source code is excluded
+from the Corresponding Source as a System Library, need not be
+included in conveying the object code work.
+
+ A "User Product" is either (1) a "consumer product", which means any
+tangible personal property which is normally used for personal, family,
+or household purposes, or (2) anything designed or sold for incorporation
+into a dwelling. In determining whether a product is a consumer product,
+doubtful cases shall be resolved in favor of coverage. For a particular
+product received by a particular user, "normally used" refers to a
+typical or common use of that class of product, regardless of the status
+of the particular user or of the way in which the particular user
+actually uses, or expects or is expected to use, the product. A product
+is a consumer product regardless of whether the product has substantial
+commercial, industrial or non-consumer uses, unless such uses represent
+the only significant mode of use of the product.
+
+ "Installation Information" for a User Product means any methods,
+procedures, authorization keys, or other information required to install
+and execute modified versions of a covered work in that User Product from
+a modified version of its Corresponding Source. The information must
+suffice to ensure that the continued functioning of the modified object
+code is in no case prevented or interfered with solely because
+modification has been made.
+
+ If you convey an object code work under this section in, or with, or
+specifically for use in, a User Product, and the conveying occurs as
+part of a transaction in which the right of possession and use of the
+User Product is transferred to the recipient in perpetuity or for a
+fixed term (regardless of how the transaction is characterized), the
+Corresponding Source conveyed under this section must be accompanied
+by the Installation Information. But this requirement does not apply
+if neither you nor any third party retains the ability to install
+modified object code on the User Product (for example, the work has
+been installed in ROM).
+
+ The requirement to provide Installation Information does not include a
+requirement to continue to provide support service, warranty, or updates
+for a work that has been modified or installed by the recipient, or for
+the User Product in which it has been modified or installed. Access to a
+network may be denied when the modification itself materially and
+adversely affects the operation of the network or violates the rules and
+protocols for communication across the network.
+
+ Corresponding Source conveyed, and Installation Information provided,
+in accord with this section must be in a format that is publicly
+documented (and with an implementation available to the public in
+source code form), and must require no special password or key for
+unpacking, reading or copying.
+
+ 7. Additional Terms.
+
+ "Additional permissions" are terms that supplement the terms of this
+License by making exceptions from one or more of its conditions.
+Additional permissions that are applicable to the entire Program shall
+be treated as though they were included in this License, to the extent
+that they are valid under applicable law. If additional permissions
+apply only to part of the Program, that part may be used separately
+under those permissions, but the entire Program remains governed by
+this License without regard to the additional permissions.
+
+ When you convey a copy of a covered work, you may at your option
+remove any additional permissions from that copy, or from any part of
+it. (Additional permissions may be written to require their own
+removal in certain cases when you modify the work.) You may place
+additional permissions on material, added by you to a covered work,
+for which you have or can give appropriate copyright permission.
+
+ Notwithstanding any other provision of this License, for material you
+add to a covered work, you may (if authorized by the copyright holders of
+that material) supplement the terms of this License with terms:
+
+ a) Disclaiming warranty or limiting liability differently from the
+ terms of sections 15 and 16 of this License; or
+
+ b) Requiring preservation of specified reasonable legal notices or
+ author attributions in that material or in the Appropriate Legal
+ Notices displayed by works containing it; or
+
+ c) Prohibiting misrepresentation of the origin of that material, or
+ requiring that modified versions of such material be marked in
+ reasonable ways as different from the original version; or
+
+ d) Limiting the use for publicity purposes of names of licensors or
+ authors of the material; or
+
+ e) Declining to grant rights under trademark law for use of some
+ trade names, trademarks, or service marks; or
+
+ f) Requiring indemnification of licensors and authors of that
+ material by anyone who conveys the material (or modified versions of
+ it) with contractual assumptions of liability to the recipient, for
+ any liability that these contractual assumptions directly impose on
+ those licensors and authors.
+
+ All other non-permissive additional terms are considered "further
+restrictions" within the meaning of section 10. If the Program as you
+received it, or any part of it, contains a notice stating that it is
+governed by this License along with a term that is a further
+restriction, you may remove that term. If a license document contains
+a further restriction but permits relicensing or conveying under this
+License, you may add to a covered work material governed by the terms
+of that license document, provided that the further restriction does
+not survive such relicensing or conveying.
+
+ If you add terms to a covered work in accord with this section, you
+must place, in the relevant source files, a statement of the
+additional terms that apply to those files, or a notice indicating
+where to find the applicable terms.
+
+ Additional terms, permissive or non-permissive, may be stated in the
+form of a separately written license, or stated as exceptions;
+the above requirements apply either way.
+
+ 8. Termination.
+
+ You may not propagate or modify a covered work except as expressly
+provided under this License. Any attempt otherwise to propagate or
+modify it is void, and will automatically terminate your rights under
+this License (including any patent licenses granted under the third
+paragraph of section 11).
+
+ However, if you cease all violation of this License, then your
+license from a particular copyright holder is reinstated (a)
+provisionally, unless and until the copyright holder explicitly and
+finally terminates your license, and (b) permanently, if the copyright
+holder fails to notify you of the violation by some reasonable means
+prior to 60 days after the cessation.
+
+ Moreover, your license from a particular copyright holder is
+reinstated permanently if the copyright holder notifies you of the
+violation by some reasonable means, this is the first time you have
+received notice of violation of this License (for any work) from that
+copyright holder, and you cure the violation prior to 30 days after
+your receipt of the notice.
+
+ Termination of your rights under this section does not terminate the
+licenses of parties who have received copies or rights from you under
+this License. If your rights have been terminated and not permanently
+reinstated, you do not qualify to receive new licenses for the same
+material under section 10.
+
+ 9. Acceptance Not Required for Having Copies.
+
+ You are not required to accept this License in order to receive or
+run a copy of the Program. Ancillary propagation of a covered work
+occurring solely as a consequence of using peer-to-peer transmission
+to receive a copy likewise does not require acceptance. However,
+nothing other than this License grants you permission to propagate or
+modify any covered work. These actions infringe copyright if you do
+not accept this License. Therefore, by modifying or propagating a
+covered work, you indicate your acceptance of this License to do so.
+
+ 10. Automatic Licensing of Downstream Recipients.
+
+ Each time you convey a covered work, the recipient automatically
+receives a license from the original licensors, to run, modify and
+propagate that work, subject to this License. You are not responsible
+for enforcing compliance by third parties with this License.
+
+ An "entity transaction" is a transaction transferring control of an
+organization, or substantially all assets of one, or subdividing an
+organization, or merging organizations. If propagation of a covered
+work results from an entity transaction, each party to that
+transaction who receives a copy of the work also receives whatever
+licenses to the work the party's predecessor in interest had or could
+give under the previous paragraph, plus a right to possession of the
+Corresponding Source of the work from the predecessor in interest, if
+the predecessor has it or can get it with reasonable efforts.
+
+ You may not impose any further restrictions on the exercise of the
+rights granted or affirmed under this License. For example, you may
+not impose a license fee, royalty, or other charge for exercise of
+rights granted under this License, and you may not initiate litigation
+(including a cross-claim or counterclaim in a lawsuit) alleging that
+any patent claim is infringed by making, using, selling, offering for
+sale, or importing the Program or any portion of it.
+
+ 11. Patents.
+
+ A "contributor" is a copyright holder who authorizes use under this
+License of the Program or a work on which the Program is based. The
+work thus licensed is called the contributor's "contributor version".
+
+ A contributor's "essential patent claims" are all patent claims
+owned or controlled by the contributor, whether already acquired or
+hereafter acquired, that would be infringed by some manner, permitted
+by this License, of making, using, or selling its contributor version,
+but do not include claims that would be infringed only as a
+consequence of further modification of the contributor version. For
+purposes of this definition, "control" includes the right to grant
+patent sublicenses in a manner consistent with the requirements of
+this License.
+
+ Each contributor grants you a non-exclusive, worldwide, royalty-free
+patent license under the contributor's essential patent claims, to
+make, use, sell, offer for sale, import and otherwise run, modify and
+propagate the contents of its contributor version.
+
+ In the following three paragraphs, a "patent license" is any express
+agreement or commitment, however denominated, not to enforce a patent
+(such as an express permission to practice a patent or covenant not to
+sue for patent infringement). To "grant" such a patent license to a
+party means to make such an agreement or commitment not to enforce a
+patent against the party.
+
+ If you convey a covered work, knowingly relying on a patent license,
+and the Corresponding Source of the work is not available for anyone
+to copy, free of charge and under the terms of this License, through a
+publicly available network server or other readily accessible means,
+then you must either (1) cause the Corresponding Source to be so
+available, or (2) arrange to deprive yourself of the benefit of the
+patent license for this particular work, or (3) arrange, in a manner
+consistent with the requirements of this License, to extend the patent
+license to downstream recipients. "Knowingly relying" means you have
+actual knowledge that, but for the patent license, your conveying the
+covered work in a country, or your recipient's use of the covered work
+in a country, would infringe one or more identifiable patents in that
+country that you have reason to believe are valid.
+
+ If, pursuant to or in connection with a single transaction or
+arrangement, you convey, or propagate by procuring conveyance of, a
+covered work, and grant a patent license to some of the parties
+receiving the covered work authorizing them to use, propagate, modify
+or convey a specific copy of the covered work, then the patent license
+you grant is automatically extended to all recipients of the covered
+work and works based on it.
+
+ A patent license is "discriminatory" if it does not include within
+the scope of its coverage, prohibits the exercise of, or is
+conditioned on the non-exercise of one or more of the rights that are
+specifically granted under this License. You may not convey a covered
+work if you are a party to an arrangement with a third party that is
+in the business of distributing software, under which you make payment
+to the third party based on the extent of your activity of conveying
+the work, and under which the third party grants, to any of the
+parties who would receive the covered work from you, a discriminatory
+patent license (a) in connection with copies of the covered work
+conveyed by you (or copies made from those copies), or (b) primarily
+for and in connection with specific products or compilations that
+contain the covered work, unless you entered into that arrangement,
+or that patent license was granted, prior to 28 March 2007.
+
+ Nothing in this License shall be construed as excluding or limiting
+any implied license or other defenses to infringement that may
+otherwise be available to you under applicable patent law.
+
+ 12. No Surrender of Others' Freedom.
+
+ If conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License. If you cannot convey a
+covered work so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you may
+not convey it at all. For example, if you agree to terms that obligate you
+to collect a royalty for further conveying from those to whom you convey
+the Program, the only way you could satisfy both those terms and this
+License would be to refrain entirely from conveying the Program.
+
+ 13. Remote Network Interaction; Use with the GNU General Public License.
+
+ Notwithstanding any other provision of this License, if you modify the
+Program, your modified version must prominently offer all users
+interacting with it remotely through a computer network (if your version
+supports such interaction) an opportunity to receive the Corresponding
+Source of your version by providing access to the Corresponding Source
+from a network server at no charge, through some standard or customary
+means of facilitating copying of software. This Corresponding Source
+shall include the Corresponding Source for any work covered by version 3
+of the GNU General Public License that is incorporated pursuant to the
+following paragraph.
+
+ Notwithstanding any other provision of this License, you have
+permission to link or combine any covered work with a work licensed
+under version 3 of the GNU General Public License into a single
+combined work, and to convey the resulting work. The terms of this
+License will continue to apply to the part which is the covered work,
+but the work with which it is combined will remain governed by version
+3 of the GNU General Public License.
+
+ 14. Revised Versions of this License.
+
+ The Free Software Foundation may publish revised and/or new versions of
+the GNU Affero General Public License from time to time. Such new versions
+will be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+ Each version is given a distinguishing version number. If the
+Program specifies that a certain numbered version of the GNU Affero General
+Public License "or any later version" applies to it, you have the
+option of following the terms and conditions either of that numbered
+version or of any later version published by the Free Software
+Foundation. If the Program does not specify a version number of the
+GNU Affero General Public License, you may choose any version ever published
+by the Free Software Foundation.
+
+ If the Program specifies that a proxy can decide which future
+versions of the GNU Affero General Public License can be used, that proxy's
+public statement of acceptance of a version permanently authorizes you
+to choose that version for the Program.
+
+ Later license versions may give you additional or different
+permissions. However, no additional obligations are imposed on any
+author or copyright holder as a result of your choosing to follow a
+later version.
+
+ 15. Disclaimer of Warranty.
+
+ THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
+APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
+HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
+OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
+THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
+IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
+ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+ 16. Limitation of Liability.
+
+ IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
+THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
+GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
+USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
+DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
+PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
+EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
+SUCH DAMAGES.
+
+ 17. Interpretation of Sections 15 and 16.
+
+ If the disclaimer of warranty and limitation of liability provided
+above cannot be given local legal effect according to their terms,
+reviewing courts shall apply local law that most closely approximates
+an absolute waiver of all civil liability in connection with the
+Program, unless a warranty or assumption of liability accompanies a
+copy of the Program in return for a fee.
+
+ END OF TERMS AND CONDITIONS
+
+ How to Apply These Terms to Your New Programs
+
+ If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+ To do so, attach the following notices to the program. It is safest
+to attach them to the start of each source file to most effectively
+state the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+ <one line to give the program's name and a brief idea of what it does.>
+ Copyright (C) <year> <name of author>
+
+ This program is free software: you can redistribute it and/or modify
+ it under the terms of the GNU Affero General Public License as published by
+ the Free Software Foundation, either version 3 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU Affero General Public License for more details.
+
+ You should have received a copy of the GNU Affero General Public License
+ along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+Also add information on how to contact you by electronic and paper mail.
+
+ If your software can interact with users remotely through a computer
+network, you should also make sure that it provides a way for users to
+get its source. For example, if your program is a web application, its
+interface could display a "Source" link that leads users to an archive
+of the code. There are many ways you could offer source, and different
+solutions will be better for different programs; see section 13 for the
+specific requirements.
+
+ You should also get your employer (if you work as a programmer) or school,
+if any, to sign a "copyright disclaimer" for the program, if necessary.
+For more information on this, and how to apply and follow the GNU AGPL, see
+<http://www.gnu.org/licenses/>.
+
+###############################################################################
+
+Attribution-ShareAlike 3.0 Unported
+
+ CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE LEGAL SERVICES. DISTRIBUTION OF THIS LICENSE DOES NOT CREATE AN ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES REGARDING THE INFORMATION PROVIDED, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM ITS USE.
+
+License
+
+THE WORK (AS DEFINED BELOW) IS PROVIDED UNDER THE TERMS OF THIS CREATIVE COMMONS PUBLIC LICENSE ("CCPL" OR "LICENSE"). THE WORK IS PROTECTED BY COPYRIGHT AND/OR OTHER APPLICABLE LAW. ANY USE OF THE WORK OTHER THAN AS AUTHORIZED UNDER THIS LICENSE OR COPYRIGHT LAW IS PROHIBITED.
+
+BY EXERCISING ANY RIGHTS TO THE WORK PROVIDED HERE, YOU ACCEPT AND AGREE TO BE BOUND BY THE TERMS OF THIS LICENSE. TO THE EXTENT THIS LICENSE MAY BE CONSIDERED TO BE A CONTRACT, THE LICENSOR GRANTS YOU THE RIGHTS CONTAINED HERE IN CONSIDERATION OF YOUR ACCEPTANCE OF SUCH TERMS AND CONDITIONS.
+
+1. Definitions
+
+ "Adaptation" means a work based upon the Work, or upon the Work and other pre-existing works, such as a translation, adaptation, derivative work, arrangement of music or other alterations of a literary or artistic work, or phonogram or performance and includes cinematographic adaptations or any other form in which the Work may be recast, transformed, or adapted including in any form recognizably derived from the original, except that a work that constitutes a Collection will not be considered an Adaptation for the purpose of this License. For the avoidance of doubt, where the Work is a musical work, performance or phonogram, the synchronization of the Work in timed-relation with a moving image ("synching") will be considered an Adaptation for the purpose of this License.
+ "Collection" means a collection of literary or artistic works, such as encyclopedias and anthologies, or performances, phonograms or broadcasts, or other works or subject matter other than works listed in Section 1(f) below, which, by reason of the selection and arrangement of their contents, constitute intellectual creations, in which the Work is included in its entirety in unmodified form along with one or more other contributions, each constituting separate and independent works in themselves, which together are assembled into a collective whole. A work that constitutes a Collection will not be considered an Adaptation (as defined below) for the purposes of this License.
+ "Creative Commons Compatible License" means a license that is listed at https://creativecommons.org/compatiblelicenses that has been approved by Creative Commons as being essentially equivalent to this License, including, at a minimum, because that license: (i) contains terms that have the same purpose, meaning and effect as the License Elements of this License; and, (ii) explicitly permits the relicensing of adaptations of works made available under that license under this License or a Creative Commons jurisdiction license with the same License Elements as this License.
+ "Distribute" means to make available to the public the original and copies of the Work or Adaptation, as appropriate, through sale or other transfer of ownership.
+ "License Elements" means the following high-level license attributes as selected by Licensor and indicated in the title of this License: Attribution, ShareAlike.
+ "Licensor" means the individual, individuals, entity or entities that offer(s) the Work under the terms of this License.
+ "Original Author" means, in the case of a literary or artistic work, the individual, individuals, entity or entities who created the Work or if no individual or entity can be identified, the publisher; and in addition (i) in the case of a performance the actors, singers, musicians, dancers, and other persons who act, sing, deliver, declaim, play in, interpret or otherwise perform literary or artistic works or expressions of folklore; (ii) in the case of a phonogram the producer being the person or legal entity who first fixes the sounds of a performance or other sounds; and, (iii) in the case of broadcasts, the organization that transmits the broadcast.
+ "Work" means the literary and/or artistic work offered under the terms of this License including without limitation any production in the literary, scientific and artistic domain, whatever may be the mode or form of its expression including digital form, such as a book, pamphlet and other writing; a lecture, address, sermon or other work of the same nature; a dramatic or dramatico-musical work; a choreographic work or entertainment in dumb show; a musical composition with or without words; a cinematographic work to which are assimilated works expressed by a process analogous to cinematography; a work of drawing, painting, architecture, sculpture, engraving or lithography; a photographic work to which are assimilated works expressed by a process analogous to photography; a work of applied art; an illustration, map, plan, sketch or three-dimensional work relative to geography, topography, architecture or science; a performance; a broadcast; a phonogram; a compilation of data to the extent it is protected as a copyrightable work; or a work performed by a variety or circus performer to the extent it is not otherwise considered a literary or artistic work.
+ "You" means an individual or entity exercising rights under this License who has not previously violated the terms of this License with respect to the Work, or who has received express permission from the Licensor to exercise rights under this License despite a previous violation.
+ "Publicly Perform" means to perform public recitations of the Work and to communicate to the public those public recitations, by any means or process, including by wire or wireless means or public digital performances; to make available to the public Works in such a way that members of the public may access these Works from a place and at a place individually chosen by them; to perform the Work to the public by any means or process and the communication to the public of the performances of the Work, including by public digital performance; to broadcast and rebroadcast the Work by any means including signs, sounds or images.
+ "Reproduce" means to make copies of the Work by any means including without limitation by sound or visual recordings and the right of fixation and reproducing fixations of the Work, including storage of a protected performance or phonogram in digital form or other electronic medium.
+
+2. Fair Dealing Rights. Nothing in this License is intended to reduce, limit, or restrict any uses free from copyright or rights arising from limitations or exceptions that are provided for in connection with the copyright protection under copyright law or other applicable laws.
+
+3. License Grant. Subject to the terms and conditions of this License, Licensor hereby grants You a worldwide, royalty-free, non-exclusive, perpetual (for the duration of the applicable copyright) license to exercise the rights in the Work as stated below:
+
+ to Reproduce the Work, to incorporate the Work into one or more Collections, and to Reproduce the Work as incorporated in the Collections;
+ to create and Reproduce Adaptations provided that any such Adaptation, including any translation in any medium, takes reasonable steps to clearly label, demarcate or otherwise identify that changes were made to the original Work. For example, a translation could be marked "The original work was translated from English to Spanish," or a modification could indicate "The original work has been modified.";
+ to Distribute and Publicly Perform the Work including as incorporated in Collections; and,
+ to Distribute and Publicly Perform Adaptations.
+
+ For the avoidance of doubt:
+ Non-waivable Compulsory License Schemes. In those jurisdictions in which the right to collect royalties through any statutory or compulsory licensing scheme cannot be waived, the Licensor reserves the exclusive right to collect such royalties for any exercise by You of the rights granted under this License;
+ Waivable Compulsory License Schemes. In those jurisdictions in which the right to collect royalties through any statutory or compulsory licensing scheme can be waived, the Licensor waives the exclusive right to collect such royalties for any exercise by You of the rights granted under this License; and,
+ Voluntary License Schemes. The Licensor waives the right to collect royalties, whether individually or, in the event that the Licensor is a member of a collecting society that administers voluntary licensing schemes, via that society, from any exercise by You of the rights granted under this License.
+
+The above rights may be exercised in all media and formats whether now known or hereafter devised. The above rights include the right to make such modifications as are technically necessary to exercise the rights in other media and formats. Subject to Section 8(f), all rights not expressly granted by Licensor are hereby reserved.
+
+4. Restrictions. The license granted in Section 3 above is expressly made subject to and limited by the following restrictions:
+
+ You may Distribute or Publicly Perform the Work only under the terms of this License. You must include a copy of, or the Uniform Resource Identifier (URI) for, this License with every copy of the Work You Distribute or Publicly Perform. You may not offer or impose any terms on the Work that restrict the terms of this License or the ability of the recipient of the Work to exercise the rights granted to that recipient under the terms of the License. You may not sublicense the Work. You must keep intact all notices that refer to this License and to the disclaimer of warranties with every copy of the Work You Distribute or Publicly Perform. When You Distribute or Publicly Perform the Work, You may not impose any effective technological measures on the Work that restrict the ability of a recipient of the Work from You to exercise the rights granted to that recipient under the terms of the License. This Section 4(a) applies to the Work as incorporated in a Collection, but this does not require the Collection apart from the Work itself to be made subject to the terms of this License. If You create a Collection, upon notice from any Licensor You must, to the extent practicable, remove from the Collection any credit as required by Section 4(c), as requested. If You create an Adaptation, upon notice from any Licensor You must, to the extent practicable, remove from the Adaptation any credit as required by Section 4(c), as requested.
+ You may Distribute or Publicly Perform an Adaptation only under the terms of: (i) this License; (ii) a later version of this License with the same License Elements as this License; (iii) a Creative Commons jurisdiction license (either this or a later license version) that contains the same License Elements as this License (e.g., Attribution-ShareAlike 3.0 US)); (iv) a Creative Commons Compatible License. If you license the Adaptation under one of the licenses mentioned in (iv), you must comply with the terms of that license. If you license the Adaptation under the terms of any of the licenses mentioned in (i), (ii) or (iii) (the "Applicable License"), you must comply with the terms of the Applicable License generally and the following provisions: (I) You must include a copy of, or the URI for, the Applicable License with every copy of each Adaptation You Distribute or Publicly Perform; (II) You may not offer or impose any terms on the Adaptation that restrict the terms of the Applicable License or the ability of the recipient of the Adaptation to exercise the rights granted to that recipient under the terms of the Applicable License; (III) You must keep intact all notices that refer to the Applicable License and to the disclaimer of warranties with every copy of the Work as included in the Adaptation You Distribute or Publicly Perform; (IV) when You Distribute or Publicly Perform the Adaptation, You may not impose any effective technological measures on the Adaptation that restrict the ability of a recipient of the Adaptation from You to exercise the rights granted to that recipient under the terms of the Applicable License. This Section 4(b) applies to the Adaptation as incorporated in a Collection, but this does not require the Collection apart from the Adaptation itself to be made subject to the terms of the Applicable License.
+    If You Distribute, or Publicly Perform the Work or any Adaptations or Collections, You must, unless a request has been made pursuant to Section 4(a), keep intact all copyright notices for the Work and provide, reasonable to the medium or means You are utilizing: (i) the name of the Original Author (or pseudonym, if applicable) if supplied, and/or if the Original Author and/or Licensor designate another party or parties (e.g., a sponsor institute, publishing entity, journal) for attribution ("Attribution Parties") in Licensor's copyright notice, terms of service or by other reasonable means, the name of such party or parties; (ii) the title of the Work if supplied; (iii) to the extent reasonably practicable, the URI, if any, that Licensor specifies to be associated with the Work, unless such URI does not refer to the copyright notice or licensing information for the Work; and (iv) , consistent with Ssection 3(b), in the case of an Adaptation, a credit identifying the use of the Work in the Adaptation (e.g., "French translation of the Work by Original Author," or "Screenplay based on original Work by Original Author"). The credit required by this Section 4(c) may be implemented in any reasonable manner; provided, however, that in the case of a Adaptation or Collection, at a minimum such credit will appear, if a credit for all contributing authors of the Adaptation or Collection appears, then as part of these credits and in a manner at least as prominent as the credits for the other contributing authors. For the avoidance of doubt, You may only use the credit required by this Section for the purpose of attribution in the manner set out above and, by exercising Your rights under this License, You may not implicitly or explicitly assert or imply any connection with, sponsorship or endorsement by the Original Author, Licensor and/or Attribution Parties, as appropriate, of You or Your use of the Work, without the separate, express prior written permission of the Original Author, Licensor and/or Attribution Parties.
+ Except as otherwise agreed in writing by the Licensor or as may be otherwise permitted by applicable law, if You Reproduce, Distribute or Publicly Perform the Work either by itself or as part of any Adaptations or Collections, You must not distort, mutilate, modify or take other derogatory action in relation to the Work which would be prejudicial to the Original Author's honor or reputation. Licensor agrees that in those jurisdictions (e.g. Japan), in which any exercise of the right granted in Section 3(b) of this License (the right to make Adaptations) would be deemed to be a distortion, mutilation, modification or other derogatory action prejudicial to the Original Author's honor and reputation, the Licensor will waive or not assert, as appropriate, this Section, to the fullest extent permitted by the applicable national law, to enable You to reasonably exercise Your right under Section 3(b) of this License (right to make Adaptations) but not otherwise.
+
+5. Representations, Warranties and Disclaimer
+
+UNLESS OTHERWISE MUTUALLY AGREED TO BY THE PARTIES IN WRITING, LICENSOR OFFERS THE WORK AS-IS AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE WORK, EXPRESS, IMPLIED, STATUTORY OR OTHERWISE, INCLUDING, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTIBILITY, FITNESS FOR A PARTICULAR PURPOSE, NONINFRINGEMENT, OR THE ABSENCE OF LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OF ABSENCE OF ERRORS, WHETHER OR NOT DISCOVERABLE. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO SUCH EXCLUSION MAY NOT APPLY TO YOU.
+
+6. Limitation on Liability. EXCEPT TO THE EXTENT REQUIRED BY APPLICABLE LAW, IN NO EVENT WILL LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY FOR ANY SPECIAL, INCIDENTAL, CONSEQUENTIAL, PUNITIVE OR EXEMPLARY DAMAGES ARISING OUT OF THIS LICENSE OR THE USE OF THE WORK, EVEN IF LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
+
+7. Termination
+
+ This License and the rights granted hereunder will terminate automatically upon any breach by You of the terms of this License. Individuals or entities who have received Adaptations or Collections from You under this License, however, will not have their licenses terminated provided such individuals or entities remain in full compliance with those licenses. Sections 1, 2, 5, 6, 7, and 8 will survive any termination of this License.
+ Subject to the above terms and conditions, the license granted here is perpetual (for the duration of the applicable copyright in the Work). Notwithstanding the above, Licensor reserves the right to release the Work under different license terms or to stop distributing the Work at any time; provided, however that any such election will not serve to withdraw this License (or any other license that has been, or is required to be, granted under the terms of this License), and this License will continue in full force and effect unless terminated as stated above.
+
+8. Miscellaneous
+
+ Each time You Distribute or Publicly Perform the Work or a Collection, the Licensor offers to the recipient a license to the Work on the same terms and conditions as the license granted to You under this License.
+ Each time You Distribute or Publicly Perform an Adaptation, Licensor offers to the recipient a license to the original Work on the same terms and conditions as the license granted to You under this License.
+ If any provision of this License is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this License, and without further action by the parties to this agreement, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable.
+ No term or provision of this License shall be deemed waived and no breach consented to unless such waiver or consent shall be in writing and signed by the party to be charged with such waiver or consent.
+ This License constitutes the entire agreement between the parties with respect to the Work licensed here. There are no understandings, agreements or representations with respect to the Work not specified here. Licensor shall not be bound by any additional provisions that may appear in any communication from You. This License may not be modified without the mutual written agreement of the Licensor and You.
+ The rights granted under, and the subject matter referenced, in this License were drafted utilizing the terminology of the Berne Convention for the Protection of Literary and Artistic Works (as amended on September 28, 1979), the Rome Convention of 1961, the WIPO Copyright Treaty of 1996, the WIPO Performances and Phonograms Treaty of 1996 and the Universal Copyright Convention (as revised on July 24, 1971). These rights and subject matter take effect in the relevant jurisdiction in which the License terms are sought to be enforced according to the corresponding provisions of the implementation of those treaty provisions in the applicable national law. If the standard suite of rights granted under applicable copyright law includes additional rights not granted under this License, such additional rights are deemed to be included in the License; this License is not intended to restrict the license of any rights under applicable law.
+
+ Creative Commons Notice
+
+ Creative Commons is not a party to this License, and makes no warranty whatsoever in connection with the Work. Creative Commons will not be liable to You or any party on any legal theory for any damages whatsoever, including without limitation any general, special, incidental or consequential damages arising in connection to this license. Notwithstanding the foregoing two (2) sentences, if Creative Commons has expressly identified itself as the Licensor hereunder, it shall have all rights and obligations of Licensor.
+
+ Except for the limited purpose of indicating to the public that the Work is licensed under the CCPL, Creative Commons does not authorize the use by either party of the trademark "Creative Commons" or any related trademark or logo of Creative Commons without the prior written consent of Creative Commons. Any permitted use will be in compliance with Creative Commons' then-current trademark usage guidelines, as may be published on its website or otherwise made available upon request from time to time. For the avoidance of doubt, this trademark restriction does not form part of the License.
+
+ Creative Commons may be contacted at https://creativecommons.org/.
source 'https://rubygems.org'
gem 'rails', '~> 5.2.0'
-gem 'arvados', git: 'https://github.com/arvados/arvados.git', glob: 'sdk/ruby/arvados.gemspec'
+gem 'arvados', '~> 2.1.5'
gem 'activerecord-nulldb-adapter', git: 'https://github.com/arvados/nulldb'
gem 'multi_json'
# See: https://github.com/rails/sprockets-rails/issues/443
gem 'sprockets', '~> 3.0'
-# Fast app boot times
-gem 'bootsnap', require: false
-
# Note: keeping this out of the "group :assets" section "may" allow us
# to use CoffeeScript for UJS responses. It also prevents a
# warning/problem when running tests: "WARN: tilt autoloading
-GIT
- remote: https://github.com/arvados/arvados.git
- revision: c210114aa8c77ba0bb8e4d487fc1507b40f9560f
- glob: sdk/ruby/arvados.gemspec
- specs:
- arvados (1.5.0.pre20200114202620)
- activesupport (>= 3)
- andand (~> 1.3, >= 1.3.3)
- arvados-google-api-client (>= 0.7, < 0.8.9)
- faraday (< 0.16)
- i18n (~> 0)
- json (>= 1.7.7, < 3)
- jwt (>= 0.1.5, < 2)
-
GIT
remote: https://github.com/arvados/nulldb
revision: d8e0073b665acdd2537c5eb15178a60f02f4b413
remote: https://rubygems.org/
specs:
RedCloth (4.3.2)
- actioncable (5.2.4.3)
- actionpack (= 5.2.4.3)
+ actioncable (5.2.6)
+ actionpack (= 5.2.6)
nio4r (~> 2.0)
websocket-driver (>= 0.6.1)
- actionmailer (5.2.4.3)
- actionpack (= 5.2.4.3)
- actionview (= 5.2.4.3)
- activejob (= 5.2.4.3)
+ actionmailer (5.2.6)
+ actionpack (= 5.2.6)
+ actionview (= 5.2.6)
+ activejob (= 5.2.6)
mail (~> 2.5, >= 2.5.4)
rails-dom-testing (~> 2.0)
- actionpack (5.2.4.3)
- actionview (= 5.2.4.3)
- activesupport (= 5.2.4.3)
+ actionpack (5.2.6)
+ actionview (= 5.2.6)
+ activesupport (= 5.2.6)
rack (~> 2.0, >= 2.0.8)
rack-test (>= 0.6.3)
rails-dom-testing (~> 2.0)
rails-html-sanitizer (~> 1.0, >= 1.0.2)
- actionview (5.2.4.3)
- activesupport (= 5.2.4.3)
+ actionview (5.2.6)
+ activesupport (= 5.2.6)
builder (~> 3.1)
erubi (~> 1.4)
rails-dom-testing (~> 2.0)
rails-html-sanitizer (~> 1.0, >= 1.0.3)
- activejob (5.2.4.3)
- activesupport (= 5.2.4.3)
+ activejob (5.2.6)
+ activesupport (= 5.2.6)
globalid (>= 0.3.6)
- activemodel (5.2.4.3)
- activesupport (= 5.2.4.3)
- activerecord (5.2.4.3)
- activemodel (= 5.2.4.3)
- activesupport (= 5.2.4.3)
+ activemodel (5.2.6)
+ activesupport (= 5.2.6)
+ activerecord (5.2.6)
+ activemodel (= 5.2.6)
+ activesupport (= 5.2.6)
arel (>= 9.0)
- activestorage (5.2.4.3)
- actionpack (= 5.2.4.3)
- activerecord (= 5.2.4.3)
- marcel (~> 0.3.1)
- activesupport (5.2.4.3)
+ activestorage (5.2.6)
+ actionpack (= 5.2.6)
+ activerecord (= 5.2.6)
+ marcel (~> 1.0.0)
+ activesupport (5.2.6)
concurrent-ruby (~> 1.0, >= 1.0.2)
i18n (>= 0.7, < 2)
minitest (~> 5.1)
tzinfo (~> 1.1)
- addressable (2.7.0)
+ addressable (2.8.0)
public_suffix (>= 2.0.2, < 5.0)
andand (1.3.3)
angularjs-rails (1.3.15)
arel (9.0.0)
+ arvados (2.1.5)
+ activesupport (>= 3)
+ andand (~> 1.3, >= 1.3.3)
+ arvados-google-api-client (>= 0.7, < 0.8.9)
+ faraday (< 0.16)
+ i18n (~> 0)
+ json (>= 1.7.7, < 3)
+ jwt (>= 0.1.5, < 2)
arvados-google-api-client (0.8.7.4)
activesupport (>= 3.2, < 5.3)
addressable (~> 2.3)
multi_json (>= 1.0.0)
autoprefixer-rails (9.5.1.1)
execjs
- bootsnap (1.4.7)
- msgpack (~> 1.0)
bootstrap-sass (3.4.1)
autoprefixer-rails (>= 5.2.1)
sassc (>= 2.0.0)
execjs
coffee-script-source (1.12.2)
commonjs (0.2.7)
- concurrent-ruby (1.1.6)
+ concurrent-ruby (1.1.9)
crass (1.0.6)
deep_merge (1.2.1)
docile (1.3.1)
- erubi (1.9.0)
+ erubi (1.10.0)
execjs (2.7.0)
extlib (0.9.16)
faraday (0.15.4)
rails-dom-testing (>= 1, < 3)
railties (>= 4.2.0)
thor (>= 0.14, < 2.0)
- json (2.3.0)
+ json (2.5.1)
jwt (1.5.6)
launchy (2.4.3)
addressable (~> 2.3)
railties (>= 4)
request_store (~> 1.0)
logstash-event (1.2.02)
- loofah (2.6.0)
+ loofah (2.10.0)
crass (~> 1.0.2)
nokogiri (>= 1.5.9)
mail (2.7.1)
mini_mime (>= 0.1.1)
- marcel (0.3.3)
- mimemagic (~> 0.3.2)
+ marcel (1.0.1)
memoist (0.16.2)
metaclass (0.0.4)
method_source (1.0.0)
mime-types (3.2.2)
mime-types-data (~> 3.2015)
mime-types-data (3.2019.0331)
- mimemagic (0.3.5)
- mini_mime (1.0.2)
- mini_portile2 (2.4.0)
+ mini_mime (1.1.0)
+ mini_portile2 (2.6.1)
minitest (5.10.3)
mocha (1.8.0)
metaclass (~> 0.0.1)
morrisjs-rails (0.5.1.2)
railties (> 3.1, < 6)
- msgpack (1.3.3)
multi_json (1.15.0)
multipart-post (2.1.1)
net-scp (2.0.0)
net-ssh (5.2.0)
net-ssh-gateway (2.0.0)
net-ssh (>= 4.0.0)
- nio4r (2.5.2)
- nokogiri (1.10.10)
- mini_portile2 (~> 2.4.0)
+ nio4r (2.5.7)
+ nokogiri (1.12.5)
+ mini_portile2 (~> 2.6.1)
+ racc (~> 1.4)
npm-rails (0.2.1)
rails (>= 3.2)
oj (3.7.12)
cliver (~> 0.3.1)
multi_json (~> 1.0)
websocket-driver (>= 0.2.0)
- public_suffix (4.0.5)
+ public_suffix (4.0.6)
+ racc (1.6.0)
rack (2.2.3)
rack-mini-profiler (1.0.2)
rack (>= 1.2.0)
rack-test (1.1.0)
rack (>= 1.0, < 3)
- rails (5.2.4.3)
- actioncable (= 5.2.4.3)
- actionmailer (= 5.2.4.3)
- actionpack (= 5.2.4.3)
- actionview (= 5.2.4.3)
- activejob (= 5.2.4.3)
- activemodel (= 5.2.4.3)
- activerecord (= 5.2.4.3)
- activestorage (= 5.2.4.3)
- activesupport (= 5.2.4.3)
+ rails (5.2.6)
+ actioncable (= 5.2.6)
+ actionmailer (= 5.2.6)
+ actionpack (= 5.2.6)
+ actionview (= 5.2.6)
+ activejob (= 5.2.6)
+ activemodel (= 5.2.6)
+ activerecord (= 5.2.6)
+ activestorage (= 5.2.6)
+ activesupport (= 5.2.6)
bundler (>= 1.3.0)
- railties (= 5.2.4.3)
+ railties (= 5.2.6)
sprockets-rails (>= 2.0.0)
rails-controller-testing (1.0.4)
actionpack (>= 5.0.1.x)
rails-html-sanitizer (1.3.0)
loofah (~> 2.3)
rails-perftest (0.0.7)
- railties (5.2.4.3)
- actionpack (= 5.2.4.3)
- activesupport (= 5.2.4.3)
+ railties (5.2.6)
+ actionpack (= 5.2.6)
+ activesupport (= 5.2.6)
method_source
rake (>= 0.8.7)
thor (>= 0.19.0, < 2.0)
- rake (13.0.1)
+ rake (13.0.3)
raphael-rails (2.1.2)
rb-fsevent (0.10.3)
rb-inotify (0.10.0)
sprockets (3.7.2)
concurrent-ruby (~> 1.0)
rack (> 1, < 3)
- sprockets-rails (3.2.1)
+ sprockets-rails (3.2.2)
actionpack (>= 4.0)
activesupport (>= 4.0)
sprockets (>= 3.0.0)
therubyracer (0.12.3)
libv8 (~> 3.16.14.15)
ref
- thor (1.0.1)
+ thor (1.1.0)
thread_safe (0.3.6)
tilt (2.0.9)
- tzinfo (1.2.7)
+ tzinfo (1.2.9)
thread_safe (~> 0.1)
uglifier (2.7.2)
execjs (>= 0.3.0)
json (>= 1.8.0)
- websocket-driver (0.7.3)
+ websocket-driver (0.7.4)
websocket-extensions (>= 0.1.0)
websocket-extensions (0.1.5)
xpath (2.1.0)
activerecord-nulldb-adapter!
andand
angularjs-rails (~> 1.3.8)
- arvados!
- bootsnap
+ arvados (~> 2.1.5)
bootstrap-sass (~> 3.4.1)
bootstrap-tab-history-rails
bootstrap-x-editable-rails
uglifier (~> 2.0)
BUNDLED WITH
- 1.17.3
+ 2.2.19
filters: [['group_class', '=', 'project']],
description: 'project',
},
+ {
+ wb_path: 'projects',
+ api_path: 'arvados/v1/groups',
+ filters: [['group_class', '=', 'filter']],
+ description: 'project',
+ },
{
wb_path: 'collections',
api_path: 'arvados/v1/collections',
@object.link_class == 'name' and
ArvadosBase::resource_class_for_uuid(@object.head_uuid) == Collection
redirect_to collection_path(id: @object.uuid)
- elsif @object.is_a?(Group) and @object.group_class == 'project'
+ elsif @object.is_a?(Group) and (@object.group_class == 'project' or @object.group_class == 'filter')
redirect_to project_path(id: @object.uuid)
elsif @object
redirect_to @object
# exception here than in a template.)
unless current_user.nil?
begin
- my_starred_projects current_user
+ my_starred_projects current_user, 'project'
build_my_wanted_projects_tree current_user
rescue ArvadosApiClient::ApiError
# Fall back to the default-setting code later.
if objects.respond_to?(:result_offset) and
objects.respond_to?(:result_limit)
next_offset = objects.result_offset + objects.result_limit
- if objects.respond_to?(:items_available) and (next_offset < objects.items_available)
+ if objects.respond_to?(:items_available) and (objects.items_available != nil) and (next_offset < objects.items_available)
next_offset
elsif @objects.results.size > 0 and (params[:count] == 'none' or
(params[:controller] == 'search' and params[:action] == 'choose'))
helper_method :all_projects
def all_projects
@all_projects ||= Group.
- filter([['group_class','=','project']]).order('name')
+ filter([['group_class','IN',['project','filter']]]).order('name')
end
helper_method :my_projects
end
helper_method :my_starred_projects
- def my_starred_projects user
+ def my_starred_projects user, group_class
return if defined?(@starred_projects) && @starred_projects
links = Link.filter([['owner_uuid', 'in', ["#{Rails.configuration.ClusterID}-j7d0g-publicfavorites", user.uuid]],
['link_class', '=', 'star'],
['head_uuid', 'is_a', 'arvados#group']]).with_count("none").select(%w(head_uuid))
uuids = links.collect { |x| x.head_uuid }
- starred_projects = Group.filter([['uuid', 'in', uuids]]).order('name').with_count("none")
+ if group_class == ""
+ starred_projects = Group.filter([['uuid', 'in', uuids]]).order('name').with_count("none")
+ else
+ starred_projects = Group.filter([['uuid', 'in', uuids],['group_class', '=', group_class]]).order('name').with_count("none")
+ end
@starred_projects = starred_projects.results
end
@too_many_projects = false
@reached_level_limit = false
while from_top.size <= page_size*2
- current_level = Group.filter([['group_class','=','project'],
+ current_level = Group.filter([['group_class','IN',['project','filter']],
['owner_uuid', 'in', uuids]])
.order('name').limit(page_size*2)
break if current_level.results.size == 0
class GroupsController < ApplicationController
def index
- @groups = Group.filter [['group_class', '!=', 'project']]
+ @groups = Group.filter [['group_class', '!=', 'project'], ['group_class', '!=', 'filter']]
@group_uuids = @groups.collect &:uuid
@links_from = Link.where(link_class: 'permission', tail_uuid: @group_uuids).with_count("none")
@links_to = Link.where(link_class: 'permission', head_uuid: @group_uuids).with_count("none")
end
def show
- if @object.group_class == 'project'
+ if @object.group_class == 'project' or @object.group_class == 'filter'
redirect_to(project_path(@object))
else
super
skip_before_action :ensure_arvados_api_exists
def destroy
+ token = session[:arvados_api_token]
session.clear
- redirect_to arvados_api_client.arvados_logout_url(return_to: root_url)
+ redirect_to arvados_api_client.arvados_logout_url(return_to: root_url, api_token: token)
end
def logged_out
raw(link_name)
else
controller_class = resource_class.to_s.tableize
- if controller_class.eql?('groups') and object.andand.group_class.eql?('project')
+ if controller_class.eql?('groups') and (object.andand.group_class.eql?('project') or object.andand.group_class.eql?('filter'))
controller_class = 'projects'
end
(link_to raw(link_name), { controller: controller_class, action: 'show', id: ((opts[:name_link].andand.uuid) || link_uuid) }, style_opts) + raw(tags)
api_params[:filters] = @filters if @filters
api_params[:distinct] = @distinct if @distinct
api_params[:include_trash] = @include_trash if @include_trash
+ api_params[:cluster_id] = Rails.configuration.ClusterID
if @fetch_multiple_pages
# Default limit to (effectively) api server's MAX_LIMIT
api_params[:limit] = 2**(0.size*8 - 1) - 1
ret
end
+ def editable?
+ if group_class == 'filter'
+ return false
+ end
+ super
+ end
+
def contents params={}
res = arvados_api_client.api self.class, "/#{self.uuid}/contents", {
_method: 'GET'
end
def class_for_display
- group_class == 'project' ? 'Project' : super
+ (group_class == 'project' or group_class == 'filter') ? 'Project' : super
end
def textile_attributes
SPDX-License-Identifier: AGPL-3.0 %>
-<% starred_projects = my_starred_projects current_user%>
+<% starred_projects = my_starred_projects current_user, '' %>
<% if starred_projects.andand.any? %>
<li role="presentation" class="dropdown-header">
My favorite projects
<li role="menuitem"><a href="/groups">
<i class="fa fa-lg fa-users fa-fw"></i> Groups
</a></li>
- <li role="menuitem"><a href="/nodes">
- <i class="fa fa-lg fa-cloud fa-fw"></i> Compute nodes
- </a></li>
<li role="menuitem"><a href="/keep_services">
<i class="fa fa-lg fa-exchange fa-fw"></i> Keep services
</a></li>
- <li role="menuitem"><a href="/keep_disks">
- <i class="fa fa-lg fa-hdd-o fa-fw"></i> Keep disks
- </a></li>
</ul>
</li>
<% end %>
<div class="modal-body">
<div class="selectable-container" style="height: 15em; overflow-y: scroll">
- <% starred_projects = my_starred_projects current_user%>
+ <% starred_projects = my_starred_projects current_user, 'project' %>
<% if starred_projects.andand.any? %>
<% writable_projects = starred_projects.select(&:editable?) %>
<% writable_projects.each do |projectnode| %>
<%= render_editable_attribute @object, 'name', nil, { 'data-emptytext' => "New project" } %>
<% end %>
</h2>
+ <% if @object.class == Group and @object.group_class == 'filter' %>
+ This is a filter group.
+ <% end %>
<% end %>
<%
<div id="#manage_current_token" class="panel-body">
<p>The Arvados API token is a secret key that enables the Arvados SDKs to access Arvados with the proper permissions. For more information see <%= link_to raw('Getting an API token'), "#{Rails.configuration.Workbench.ArvadosDocsite}/user/reference/api-tokens.html", target: "_blank"%>.</p>
<p>Paste the following lines at a shell prompt to set up the necessary environment for Arvados SDKs to authenticate to your <b><%= current_user.username %></b> account.</p>
+<%
+ wb2_url = nil
+ if Rails.configuration.Services.Workbench2.ExternalURL != URI("")
+ wb2_url = Rails.configuration.Services.Workbench2.ExternalURL.to_s
+ wb2_url += '/' if wb2_url[-1] != '/'
+ wb2_url += "token?api_token=" + Thread.current[:arvados_api_token]
+ end
+%>
+<p><b>IMPORTANT:</b> This token will expire when you log out. If you need a token for a long-running process, it is recommended that you <% if wb2_url %><a href="<%= wb2_url %>">get a token from Workbench2's Get API token dialog</a>.<% else %> create a new token using the CLI tools.<% end %></p>
<pre>
HISTIGNORE=$HISTIGNORE:'export ARVADOS_API_TOKEN=*'
<% content_for :breadcrumbs do raw '<!-- -->' end %>
-<div class="row">
- <div class="col-sm-8 col-sm-push-4" style="margin-top: 1em">
- <div class="well clearfix">
- <%= image_tag "dax.png", style: "width: 112px; height: 150px; margin-right: 2em", class: 'pull-left' %>
-
- <h3 style="margin-top:0">Please log in.</h3>
-
- <p>
+<%= javascript_tag do %>
+ function controller_password_authenticate(event) {
+ event.preventDefault()
+ document.getElementById('login-authenticate-error').innerHTML = '';
+ fetch('<%= "#{Rails.configuration.Services.Controller.ExternalURL}" %>arvados/v1/users/authenticate', {
+ method: 'POST',
- The "Log in" button below will show you a Google sign-in page.
- After you assure Google that you want to log in here with your
- Google account, you will be redirected back here to
- <%= Rails.configuration.Workbench.SiteName %>.
+ headers: {'Content-Type': 'application/json'},
+ body: JSON.stringify({
+ username: document.getElementById('login-username').value,
+ password: document.getElementById('login-password').value,
+ }),
+ }).then(function(resp) {
+ if (!resp.ok) {
+ resp.json().then(function(respj) {
+ document.getElementById('login-authenticate-error').innerHTML = "<p>"+respj.errors[0]+"</p>";
+ });
+ return;
+ }
- </p><p>
+ var redir = document.getElementById('login-return-to').value
+ if (redir.indexOf('?') > 0) {
+ redir += '&'
+ } else {
+ redir += '?'
+ }
+ resp.json().then(function(respj) {
+ document.location = redir + "api_token=v2/" + respj.uuid + "/" + respj.api_token;
+ });
+ });
+ }
+ function clear_authenticate_error() {
+ document.getElementById('login-authenticate-error').innerHTML = "";
+ }
+<% end %>
- If you have never used <%= Rails.configuration.Workbench.SiteName %>
- before, logging in for the first time will automatically
- create a new account.
-
- </p><p>
+<div class="row">
+ <div class="col-sm-8 col-sm-push-4" style="margin-top: 1em">
+ <div class="well clearfix">
- <i><%= Rails.configuration.Workbench.SiteName %> uses your name and
- email address only for identification, and does not retrieve
- any other personal information from Google.</i>
+ <%= raw(Rails.configuration.Workbench.WelcomePageHTML) %>
- </p>
- <%# Todo: add list of external authentications providers to
- discovery document, then generate the option list here. Right
- now, don't provide 'auth_provider' to get the default one. %>
+ <% case %>
+ <% when Rails.configuration.Login.PAM.Enable,
+ Rails.configuration.Login.LDAP.Enable,
+ Rails.configuration.Login.Test.Enable %>
+ <form id="login-form-tag" onsubmit="controller_password_authenticate(event)">
+ <p>username <input type="text" class="form-control" name="login-username"
+ value="" id="login-username" style="width: 50%"
+ oninput="clear_authenticate_error()"></input></p>
+ <p>password <input type="password" class="form-control" name="login-password" value=""
+ id="login-password" style="width: 50%"
+ oninput="clear_authenticate_error()"></input></p>
+ <input type="hidden" name="return_to" value="<%= "#{Rails.configuration.Services.Workbench1.ExternalURL}" %>" id="login-return-to">
+ <span style="color: red"><p id="login-authenticate-error"></p></span>
+ <button type="submit" class="btn btn-primary">Log in</button>
+ </form>
+ <% else %>
<div class="pull-right">
<%= link_to arvados_api_client.arvados_login_url(return_to: request.url), class: "btn btn-primary" do %>
Log in to <%= Rails.configuration.Workbench.SiteName %>
<i class="fa fa-fw fa-arrow-circle-right"></i>
<% end %>
</div>
+ <% end %>
+
</div>
</div>
</div>
+++ /dev/null
-#!/usr/bin/env ruby
-# Copyright (C) The Arvados Authors. All rights reserved.
-#
-# SPDX-License-Identifier: AGPL-3.0
-
-APP_ROOT = File.expand_path('..', __dir__)
-Dir.chdir(APP_ROOT) do
- begin
- exec "yarnpkg #{ARGV.join(" ")}"
- rescue Errno::ENOENT
- $stderr.puts "Yarn executable was not detected in the system."
- $stderr.puts "Download Yarn at https://yarnpkg.com/en/docs/install"
- exit 1
- end
-end
module ArvadosWorkbench
class Application < Rails::Application
+ # The following is to avoid SafeYAML's warning message
+ SafeYAML::OPTIONS[:default_mode] = :safe
require_relative "arvados_config.rb"
# Load the defaults, used by config:migrate and fallback loading
# legacy application.yml
-Open3.popen2("arvados-server", "config-dump", "-config=-", "-skip-legacy") do |stdin, stdout, status_thread|
- stdin.write("Clusters: {xxxxx: {}}")
- stdin.close
- confs = YAML.load(stdout, deserialize_symbols: false)
- clusterID, clusterConfig = confs["Clusters"].first
- $arvados_config_defaults = clusterConfig
- $arvados_config_defaults["ClusterID"] = clusterID
+defaultYAML, stderr, status = Open3.capture3("arvados-server", "config-dump", "-config=-", "-skip-legacy", stdin_data: "Clusters: {xxxxx: {}}")
+if !status.success?
+ puts stderr
+ raise "error loading config: #{status}"
end
-
-# Load the global config file
-Open3.popen2("arvados-server", "config-dump", "-skip-legacy") do |stdin, stdout, status_thread|
- confs = YAML.load(stdout, deserialize_symbols: false)
- if confs && !confs.empty?
- # config-dump merges defaults with user configuration, so every
- # key should be set.
- clusterID, clusterConfig = confs["Clusters"].first
- $arvados_config_global = clusterConfig
- $arvados_config_global["ClusterID"] = clusterID
- else
- # config-dump failed, assume we will be loading from legacy
- # application.yml, initialize with defaults.
- $arvados_config_global = $arvados_config_defaults.deep_dup
+confs = YAML.load(defaultYAML, deserialize_symbols: false)
+clusterID, clusterConfig = confs["Clusters"].first
+$arvados_config_defaults = clusterConfig
+$arvados_config_defaults["ClusterID"] = clusterID
+
+if ENV["ARVADOS_CONFIG"] == "none"
+ # Don't load config. This magic value is set by packaging scripts so
+ # they can run "rake assets:precompile" without a real config.
+ $arvados_config_global = $arvados_config_defaults.deep_dup
+else
+ # Load the global config file
+ Open3.popen2("arvados-server", "config-dump", "-skip-legacy") do |stdin, stdout, status_thread|
+ confs = YAML.load(stdout, deserialize_symbols: false)
+ if confs && !confs.empty?
+ # config-dump merges defaults with user configuration, so every
+ # key should be set.
+ clusterID, clusterConfig = confs["Clusters"].first
+ $arvados_config_global = clusterConfig
+ $arvados_config_global["ClusterID"] = clusterID
+ else
+ # config-dump failed, assume we will be loading from legacy
+ # application.yml, initialize with defaults.
+ $arvados_config_global = $arvados_config_defaults.deep_dup
+ end
end
end
ConfigLoader.copy_into_config $arvados_config, config
ConfigLoader.copy_into_config $remaining_config, config
secrets.secret_key_base = $arvados_config["Workbench"]["SecretKeyBase"]
- ConfigValidators.validate_wb2_url_config()
- ConfigValidators.validate_download_config()
-
+ if ENV["ARVADOS_CONFIG"] != "none"
+ ConfigValidators.validate_wb2_url_config()
+ ConfigValidators.validate_download_config()
+ end
end
ENV['BUNDLE_GEMFILE'] ||= File.expand_path('../../Gemfile', __FILE__)
require 'bundler/setup' if File.exists?(ENV['BUNDLE_GEMFILE'])
-require 'bootsnap/setup' # Speed up boot time by caching expensive operations.
# Use ARVADOS_API_TOKEN environment variable (if set) in console
require 'rails'
config.lograge.formatter = Lograge::Formatters::Logstash.new
config.lograge.custom_options = lambda do |event|
payload = {
+ ClusterID: Rails.configuration.ClusterID,
request_id: event.payload[:request_id],
}
# Also log params (minus the pseudo-params added by Rails). But if
case "$TARGET" in
centos*)
- fpm_depends+=(git bison make automake gcc gcc-c++ graphviz)
+ fpm_depends+=(git bison make automake gcc gcc-c++ graphviz shared-mime-info)
+ ;;
+ ubuntu1804)
+ fpm_depends+=(git g++ bison zlib1g-dev make graphviz shared-mime-info)
+ fpm_conflicts+=(ruby-bundler)
;;
debian* | ubuntu*)
- fpm_depends+=(git g++ bison zlib1g-dev make graphviz)
+ fpm_depends+=(git g++ bison zlib1g-dev make graphviz shared-mime-info)
;;
esac
use_token user
ctrl = ProjectsController.new
current_user = User.find(api_fixture('users')[user]['uuid'])
- my_starred_project = ctrl.send :my_starred_projects, current_user
+ my_starred_project = ctrl.send :my_starred_projects, current_user, ''
assert_equal(size, my_starred_project.andand.size)
ctrl2 = ProjectsController.new
current_user = User.find(api_fixture('users')[user]['uuid'])
- my_starred_project = ctrl2.send :my_starred_projects, current_user
+ my_starred_project = ctrl2.send :my_starred_projects, current_user, ''
assert_equal(size, my_starred_project.andand.size)
end
end
use_token :project_viewer
current_user = User.find(api_fixture('users')['project_viewer']['uuid'])
ctrl = ProjectsController.new
- my_starred_project = ctrl.send :my_starred_projects, current_user
+ my_starred_project = ctrl.send :my_starred_projects, current_user, ''
assert_equal(0, my_starred_project.andand.size)
# share it again
# verify that the project is again included in starred projects
use_token :project_viewer
ctrl = ProjectsController.new
- my_starred_project = ctrl.send :my_starred_projects, current_user
+ my_starred_project = ctrl.send :my_starred_projects, current_user, ''
assert_equal(1, my_starred_project.andand.size)
end
end
[
['foo', 10, 25,
['/pipeline_instances/zzzzz-d1hrv-1xfj6xkicf2muk2',
- '/pipeline_instances/zzzzz-d1hrv-jobspeccomponts',
+ '/pipeline_instances/zzzzz-d1hrv-1yfj61234abcdk4',
'/jobs/zzzzz-8i9sb-grx15v5mjnsyxk7'],
['/pipeline_instances/zzzzz-d1hrv-1yfj61234abcdk3',
'/jobs/zzzzz-8i9sb-n7omg50bvt0m1nf',
'/container_requests/zzzzz-xvhdp-cr4completedcr2']],
['pipeline_with_tagged_collection_input', 1, 1,
['/pipeline_instances/zzzzz-d1hrv-1yfj61234abcdk3'],
- ['/pipeline_instances/zzzzz-d1hrv-jobspeccomponts',
+ ['/pipeline_instances/zzzzz-d1hrv-1yfj61234abcdk4',
'/jobs/zzzzz-8i9sb-pshmckwoma9plh7',
'/jobs/zzzzz-8i9sb-n7omg50bvt0m1nf',
'/container_requests/zzzzz-xvhdp-cr4completedcr2']],
['no_such_match', 0, 0,
[],
- ['/pipeline_instances/zzzzz-d1hrv-jobspeccomponts',
+ ['/pipeline_instances/zzzzz-d1hrv-1yfj61234abcdk4',
'/jobs/zzzzz-8i9sb-pshmckwoma9plh7',
'/jobs/zzzzz-8i9sb-n7omg50bvt0m1nf',
'/container_requests/zzzzz-xvhdp-cr4completedcr2']],
].each do |search_filter, expected_min, expected_max, expected, not_expected|
test "all_processes page for search filter '#{search_filter}'" do
- work_units_index(filters: [['any','@@', search_filter]], show_children: true)
+ work_units_index(filters: [['any','ilike', "%#{search_filter}%"]], show_children: true)
assert_response :success
# Verify that expected number of processes are found
if !user
assert page.has_text?('Please log in'), 'Not found text - Please log in'
- assert page.has_text?('The "Log in" button below will show you a Google sign-in page'), 'Not found text - google sign in page'
+ assert page.has_text?('If you have never used Arvados Workbench before'), 'Not found text - If you have never'
assert page.has_no_text?('My projects'), 'Found text - My projects'
- assert page.has_link?("Log in to #{Rails.configuration.Workbench.SiteName}"), 'Not found text - log in to'
+ assert page.has_link?("Log in"), 'Not found text - Log in'
elsif user['is_active']
if profile_config && !has_profile
assert page.has_text?('Save profile'), 'No text - Save profile'
['SSH keys', nil, 'public_key'],
['Links', nil, 'link_class'],
['Groups', nil, 'All users'],
- ['Compute nodes', nil, 'ping_secret'],
['Keep services', nil, 'service_ssl_flag'],
- ['Keep disks', nil, 'bytes_free'],
].each do |page_name, add_button_text, look_for|
test "test system menu #{page_name} link" do
visit page_with_token('admin')
test "trying to use expired token redirects to login page" do
visit page_with_token('expired_trustedclient')
- buttons = all("a.btn", text: /Log in/)
+ buttons = all("button.btn", text: /Log in/)
assert_equal(1, buttons.size, "Failed to find one login button")
- login_link = buttons.first[:href]
- assert_match(%r{//[^/]+/login}, login_link)
- assert_no_match(/\bapi_token=/, login_link)
end
end
[[true, 25, 100,
['/pipeline_instances/zzzzz-d1hrv-1yfj61234abcdk3',
- '/pipeline_instances/zzzzz-d1hrv-jobspeccomponts',
+ '/pipeline_instances/zzzzz-d1hrv-1yfj61234abcdk4',
'/jobs/zzzzz-8i9sb-grx15v5mjnsyxk7',
'/jobs/zzzzz-8i9sb-n7omg50bvt0m1nf',
'/container_requests/zzzzz-xvhdp-cr4completedcr2',
'/container_requests/zzzzz-xvhdp-oneof60crs00001']],
[false, 25, 100,
['/pipeline_instances/zzzzz-d1hrv-1yfj61234abcdk3',
- '/pipeline_instances/zzzzz-d1hrv-jobspeccomponts',
+ '/pipeline_instances/zzzzz-d1hrv-1yfj61234abcdk4',
'/container_requests/zzzzz-xvhdp-cr4completedcr2'],
['/pipeline_instances/zzzzz-d1hrv-scarxiyajtshq3l',
'/container_requests/zzzzz-xvhdp-oneof60crs00001',
echo processing $outputdir/$cleaned_test-$build.txt creating $outputdir/$cleaned_test.csv
echo $(grep ^Completed $outputdir/$cleaned_test-$build.txt | perl -n -e '/^Completed (.*) in [0-9]+ms.*$/;print "".++$line."-$1,";' | perl -p -e 's/,$//g'|tr " " "_" ) > $outputdir/$cleaned_test.csv
echo $(grep ^Completed $outputdir/$cleaned_test-$build.txt | perl -n -e '/^Completed.*in ([0-9]+)ms.*$/;print "$1,";' | perl -p -e 's/,$//g' ) >> $outputdir/$cleaned_test.csv
- #echo URL=https://ci.curoverse.com/view/job/arvados-api-server/ws/apps/workbench/log/$cleaned_test-$build.txt/*view*/ >> $outputdir/$test.properties
else
echo "$test wasn't found in $file"
cleaned_test=$(echo $test | tr -d ",.:;/")
--- /dev/null
+#!/bin/bash
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+# When run with WORKSPACE pointing at a git checkout of arvados, this script
+# calculates the package version of an Arvados component.
+
+# set to --no-cache-dir to disable pip caching
+CACHE_FLAG=
+STDOUT_IF_DEBUG=/dev/null
+STDERR_IF_DEBUG=/dev/null
+DASHQ_UNLESS_DEBUG=-q
+ITERATION="${ARVADOS_BUILDING_ITERATION:-1}"
+
+. `dirname "$(readlink -f "$0")"`/run-library.sh
+
+TYPE_LANG=$1
+SRC_PATH=$2
+
+if [[ "$TYPE_LANG" == "" ]] || [[ "$SRC_PATH" == "" ]]; then
+ echo "Syntax: $0 <lang> <src_path>"
+ echo
+ echo "Example: $0 go cmd/arvados-client"
+ echo "Example: $0 python3 services/fuse"
+ echo
+ exit 1
+fi
+
+if [[ "$WORKSPACE" == "" ]]; then
+ echo "The WORKSPACE environment variable must be set, pointing at the root of the arvados git tree"
+ exit 1
+fi
+
+
+debug_echo "package_go_binary $SRC_PATH"
+
+if [[ "$TYPE_LANG" == "go" ]]; then
+ calculate_go_package_version go_package_version $SRC_PATH
+ echo "${go_package_version}-${ITERATION}"
+elif [[ "$TYPE_LANG" == "python3" ]]; then
+
+ cd $WORKSPACE/$SRC_PATH
+
+ rm -rf dist/*
+
+ # Get the latest setuptools
+ if ! pip3 install $DASHQ_UNLESS_DEBUG $CACHE_FLAG -U 'setuptools<45'; then
+ echo "Error, unable to upgrade setuptools with"
+ echo " pip3 install $DASHQ_UNLESS_DEBUG $CACHE_FLAG -U 'setuptools<45'"
+ exit 1
+ fi
+ # filter a useless warning (when building the cwltest package) from the stderr output
+ if ! python3 setup.py $DASHQ_UNLESS_DEBUG sdist 2> >(grep -v 'warning: no previously-included files matching' |grep -v 'for version number calculation'); then
+ echo "Error, unable to run python3 setup.py sdist for $SRC_PATH"
+ exit 1
+ fi
+
+ PYTHON_VERSION=$(awk '($1 == "Version:"){print $2}' *.egg-info/PKG-INFO)
+ UNFILTERED_PYTHON_VERSION=$(echo -n $PYTHON_VERSION | sed s/\.dev/~dev/g |sed 's/\([0-9]\)rc/\1~rc/g')
+
+ echo "${UNFILTERED_PYTHON_VERSION}-${ITERATION}"
+fi
+
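The version-mangling step in the script above converts a Python sdist version into a package-friendly string. A minimal standalone sketch of that sed pipeline, using a hypothetical example version (the real script reads it from `PKG-INFO`):

```shell
# Hypothetical example version; the real script reads it from *.egg-info/PKG-INFO.
PYTHON_VERSION="2.3.1.dev20210901"
# Same sed pipeline as in the script: ".dev" -> "~dev" and e.g. "1rc2" -> "1~rc2",
# turning a PEP 440 version into a deb/rpm-friendly one.
UNFILTERED_PYTHON_VERSION=$(echo -n "$PYTHON_VERSION" | sed s/\.dev/~dev/g | sed 's/\([0-9]\)rc/\1~rc/g')
# Release-candidate versions are rewritten the same way.
RC_VERSION=$(echo -n "2.3.1rc2" | sed s/\.dev/~dev/g | sed 's/\([0-9]\)rc/\1~rc/g')
echo "$UNFILTERED_PYTHON_VERSION $RC_VERSION"
```

The `~` separator matters because both dpkg and rpm sort `~` before anything else, so `2.3.1~dev...` and `2.3.1~rc2` compare as older than the final `2.3.1` release.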
#
# SPDX-License-Identifier: AGPL-3.0
-all: centos7/generated debian10/generated ubuntu1604/generated ubuntu1804/generated ubuntu2004/generated
+all: centos7/generated debian10/generated debian11/generated ubuntu1804/generated ubuntu2004/generated
centos7/generated: common-generated-all
test -d centos7/generated || mkdir centos7/generated
test -d debian10/generated || mkdir debian10/generated
cp -f -rlt debian10/generated common-generated/*
-ubuntu1604/generated: common-generated-all
- test -d ubuntu1604/generated || mkdir ubuntu1604/generated
- cp -f -rlt ubuntu1604/generated common-generated/*
+debian11/generated: common-generated-all
+ test -d debian11/generated || mkdir debian11/generated
+ cp -f -rlt debian11/generated common-generated/*
ubuntu1804/generated: common-generated-all
test -d ubuntu1804/generated || mkdir ubuntu1804/generated
test -d ubuntu2004/generated || mkdir ubuntu2004/generated
cp -f -rlt ubuntu2004/generated common-generated/*
-GOTARBALL=go1.13.4.linux-amd64.tar.gz
+GOTARBALL=go1.17.1.linux-amd64.tar.gz
NODETARBALL=node-v10.23.1-linux-x64.tar.xz
RVMKEY1=mpapis.asc
RVMKEY2=pkuczynski.asc
curl -L https://get.rvm.io | bash -s stable && \
/usr/local/rvm/bin/rvm install 2.5 && \
/usr/local/rvm/bin/rvm alias create default ruby-2.5 && \
- /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.0.2 && \
+ /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.2.19 && \
/usr/local/rvm/bin/rvm-exec default gem install fpm --version 1.10.2
# Install Bash 4.4.12 // see https://dev.arvados.org/issues/15612
&& ln -sf /usr/local/src/bash-4.4.12/bash /bin/bash
# Install golang binary
-ADD generated/go1.13.4.linux-amd64.tar.gz /usr/local/
+ADD generated/go1.17.1.linux-amd64.tar.gz /usr/local/
RUN ln -s /usr/local/go/bin/go /usr/local/bin/
# Install nodejs and npm
ENV DEBIAN_FRONTEND noninteractive
# Install dependencies.
-RUN /usr/bin/apt-get update && /usr/bin/apt-get install -q -y python3 python3-setuptools python3-pip libcurl4-gnutls-dev curl git procps libattr1-dev libfuse-dev libgnutls28-dev libpq-dev unzip python3-venv python3-dev libpam-dev
+RUN /usr/bin/apt-get update && /usr/bin/apt-get install -q -y python3 python3-setuptools python3-pip libcurl4-gnutls-dev curl git procps libattr1-dev libfuse-dev libgnutls28-dev libpq-dev unzip python3-venv python3-dev libpam-dev equivs
# Install virtualenv
RUN /usr/bin/pip3 install 'virtualenv<20'
curl -L https://get.rvm.io | bash -s stable && \
/usr/local/rvm/bin/rvm install 2.5 && \
/usr/local/rvm/bin/rvm alias create default ruby-2.5 && \
- /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.0.2 && \
+ /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.2.19 && \
/usr/local/rvm/bin/rvm-exec default gem install fpm --version 1.10.2
# Install golang binary
-ADD generated/go1.13.4.linux-amd64.tar.gz /usr/local/
+ADD generated/go1.17.1.linux-amd64.tar.gz /usr/local/
RUN ln -s /usr/local/go/bin/go /usr/local/bin/
# Install nodejs and npm
#
# SPDX-License-Identifier: AGPL-3.0
-FROM ubuntu:xenial
+# Don't use debian:11 here, since the word 'bullseye' is used for rvm precompiled binaries
+FROM debian:bullseye
MAINTAINER Arvados Package Maintainers <packaging@arvados.org>
ENV DEBIAN_FRONTEND noninteractive
# Install dependencies.
-RUN /usr/bin/apt-get update && /usr/bin/apt-get install -q -y python3 python-setuptools python3-setuptools python3-pip libcurl4-gnutls-dev libgnutls-dev curl git libattr1-dev libfuse-dev libpq-dev unzip tzdata python3-venv python3-dev libpam-dev
+RUN /usr/bin/apt-get update && /usr/bin/apt-get install -q -y python3 python3-setuptools python3-pip libcurl4-gnutls-dev curl git procps libattr1-dev libfuse-dev libgnutls28-dev libpq-dev unzip python3-venv python3-dev libpam-dev equivs
# Install virtualenv
RUN /usr/bin/pip3 install 'virtualenv<20'
curl -L https://get.rvm.io | bash -s stable && \
/usr/local/rvm/bin/rvm install 2.5 && \
/usr/local/rvm/bin/rvm alias create default ruby-2.5 && \
- /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.0.2 && \
+ echo "gem: --no-document" >> /etc/gemrc && \
+ /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.2.19 && \
/usr/local/rvm/bin/rvm-exec default gem install fpm --version 1.10.2
# Install golang binary
-ADD generated/go1.13.4.linux-amd64.tar.gz /usr/local/
+ADD generated/go1.17.1.linux-amd64.tar.gz /usr/local/
RUN ln -s /usr/local/go/bin/go /usr/local/bin/
# Install nodejs and npm
RUN git clone --depth 1 git://git.arvados.org/arvados.git /tmp/arvados && cd /tmp/arvados/services/api && /usr/local/rvm/bin/rvm-exec default bundle && cd /tmp/arvados/apps/workbench && /usr/local/rvm/bin/rvm-exec default bundle
ENV WORKSPACE /arvados
-CMD ["/usr/local/rvm/bin/rvm-exec", "default", "bash", "/jenkins/run-build-packages.sh", "--target", "ubuntu1604"]
+CMD ["/usr/local/rvm/bin/rvm-exec", "default", "bash", "/jenkins/run-build-packages.sh", "--target", "debian11"]
ENV DEBIAN_FRONTEND noninteractive
# Install dependencies.
-RUN /usr/bin/apt-get update && /usr/bin/apt-get install -q -y python3 python3-pip libcurl4-gnutls-dev libgnutls28-dev curl git libattr1-dev libfuse-dev libpq-dev unzip tzdata python3-venv python3-dev libpam-dev
+RUN /usr/bin/apt-get update && /usr/bin/apt-get install -q -y python3 python3-pip libcurl4-gnutls-dev libgnutls28-dev curl git libattr1-dev libfuse-dev libpq-dev unzip tzdata python3-venv python3-dev libpam-dev equivs
# Install virtualenv
RUN /usr/bin/pip3 install 'virtualenv<20'
curl -L https://get.rvm.io | bash -s stable && \
/usr/local/rvm/bin/rvm install 2.5 && \
/usr/local/rvm/bin/rvm alias create default ruby-2.5 && \
- /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.0.2 && \
+ /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.2.19 && \
/usr/local/rvm/bin/rvm-exec default gem install fpm --version 1.10.2
# Install golang binary
-ADD generated/go1.13.4.linux-amd64.tar.gz /usr/local/
+ADD generated/go1.17.1.linux-amd64.tar.gz /usr/local/
RUN ln -s /usr/local/go/bin/go /usr/local/bin/
# Install nodejs and npm
ENV DEBIAN_FRONTEND noninteractive
# Install dependencies.
-RUN /usr/bin/apt-get update && /usr/bin/apt-get install -q -y python3 python3-pip libcurl4-gnutls-dev libgnutls28-dev curl git libattr1-dev libfuse-dev libpq-dev unzip tzdata python3-venv python3-dev libpam-dev
+RUN /usr/bin/apt-get update && /usr/bin/apt-get install -q -y python3 python3-pip libcurl4-gnutls-dev libgnutls28-dev curl git libattr1-dev libfuse-dev libpq-dev unzip tzdata python3-venv python3-dev libpam-dev shared-mime-info equivs
# Install virtualenv
RUN /usr/bin/pip3 install 'virtualenv<20'
curl -L https://get.rvm.io | bash -s stable && \
/usr/local/rvm/bin/rvm install 2.5 && \
/usr/local/rvm/bin/rvm alias create default ruby-2.5 && \
- /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.0.2 && \
+ /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.2.19 && \
/usr/local/rvm/bin/rvm-exec default gem install fpm --version 1.10.2
# Install golang binary
-ADD generated/go1.13.4.linux-amd64.tar.gz /usr/local/
+ADD generated/go1.17.1.linux-amd64.tar.gz /usr/local/
RUN ln -s /usr/local/go/bin/go /usr/local/bin/
# Install nodejs and npm
#
# SPDX-License-Identifier: AGPL-3.0
-all: centos7/generated debian10/generated ubuntu1604/generated ubuntu1804/generated ubuntu2004/generated
+all: centos7/generated debian10/generated debian11/generated ubuntu1804/generated ubuntu2004/generated
centos7/generated: common-generated-all
test -d centos7/generated || mkdir centos7/generated
test -d debian10/generated || mkdir debian10/generated
cp -f -rlt debian10/generated common-generated/*
-ubuntu1604/generated: common-generated-all
- test -d ubuntu1604/generated || mkdir ubuntu1604/generated
- cp -f -rlt ubuntu1604/generated common-generated/*
+debian11/generated: common-generated-all
+ test -d debian11/generated || mkdir debian11/generated
+ cp -f -rlt debian11/generated common-generated/*
ubuntu1804/generated: common-generated-all
test -d ubuntu1804/generated || mkdir ubuntu1804/generated
gpg --import --no-tty /tmp/mpapis.asc && \
gpg --import --no-tty /tmp/pkuczynski.asc && \
curl -L https://get.rvm.io | bash -s stable && \
- /usr/local/rvm/bin/rvm install 2.3 && \
- /usr/local/rvm/bin/rvm alias create default ruby-2.3 && \
- /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.0.2
+ /usr/local/rvm/bin/rvm install 2.5 && \
+ /usr/local/rvm/bin/rvm alias create default ruby-2.5 && \
+ /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.2.19
# Install Bash 4.4.12 // see https://dev.arvados.org/issues/15612
RUN cd /usr/local/src \
curl -L https://get.rvm.io | bash -s stable && \
/usr/local/rvm/bin/rvm install 2.5 && \
/usr/local/rvm/bin/rvm alias create default ruby-2.5 && \
- /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.0.2
+ /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.2.19
# udev daemon can't start in a container, so don't try.
RUN mkdir -p /etc/udev/disabled
#
# SPDX-License-Identifier: AGPL-3.0
-FROM ubuntu:xenial
+FROM debian:bullseye
MAINTAINER Arvados Package Maintainers <packaging@arvados.org>
ENV DEBIAN_FRONTEND noninteractive
# Install dependencies
RUN apt-get update && \
- apt-get -y install --no-install-recommends curl ca-certificates
+ apt-get -y install --no-install-recommends curl ca-certificates gpg procps gpg-agent
# Install RVM
ADD generated/mpapis.asc /tmp/
curl -L https://get.rvm.io | bash -s stable && \
/usr/local/rvm/bin/rvm install 2.5 && \
/usr/local/rvm/bin/rvm alias create default ruby-2.5 && \
- /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.0.2
+ echo "gem: --no-document" >> /etc/gemrc && \
+ /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.2.19
# udev daemon can't start in a container, so don't try.
RUN mkdir -p /etc/udev/disabled
-RUN echo "deb file:///arvados/packages/ubuntu1604/ /" >>/etc/apt/sources.list
-
-# Add preferences file for the Arvados packages. This pins Arvados
-# packages at priority 501, so that older python dependency versions
-# are preferred in those cases where we need them
-ADD etc-apt-preferences.d-arvados /etc/apt/preferences.d/arvados
+RUN echo "deb file:///arvados/packages/debian11/ /" >>/etc/apt/sources.list
+++ /dev/null
-Package: *
-Pin: release o=Arvados
-Pin-Priority: 501
curl -L https://get.rvm.io | bash -s stable && \
/usr/local/rvm/bin/rvm install 2.5 && \
/usr/local/rvm/bin/rvm alias create default ruby-2.5 && \
- /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.0.2
+ /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.2.19
# udev daemon can't start in a container, so don't try.
RUN mkdir -p /etc/udev/disabled
curl -L https://get.rvm.io | bash -s stable && \
/usr/local/rvm/bin/rvm install 2.5 && \
/usr/local/rvm/bin/rvm alias create default ruby-2.5 && \
- /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.0.2
+ /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.2.19
# udev daemon can't start in a container, so don't try.
RUN mkdir -p /etc/udev/disabled
+++ /dev/null
-deb-common-test-packages.sh
\ No newline at end of file
+++ /dev/null
-deb-common-test-packages.sh
\ No newline at end of file
+++ /dev/null
-deb-common-test-packages.sh
\ No newline at end of file
* After it installs the core configuration files (database.yml, application.yml, and production.rb) to /etc/arvados/server, it calls setup_extra_conffiles. By default this is a noop function (in step2.sh).
* Before it restarts nginx, it calls setup_before_nginx_restart. By default this is a noop function (in step2.sh). API server defines this to set up the internal git repository, if necessary.
* $RAILSPKG_DATABASE_LOAD_TASK defines the Rake task to load the database. API server uses db:structure:load. Workbench doesn't set this, which causes the postinst to skip all database work.
-* If $RAILSPKG_SUPPORTS_CONFIG_CHECK != 1, it won't run the config:check rake task.
# initialize git_internal_dir
# usually /var/lib/arvados/internal.git (set in application.default.yml )
if [ "$APPLICATION_READY" = "1" ]; then
- GIT_INTERNAL_DIR=$($COMMAND_PREFIX bundle exec rake config:dump 2>&1 | grep GitInternalDir | awk '{ print $2 }' |tr -d '"')
+ GIT_INTERNAL_DIR=$($COMMAND_PREFIX bin/rake config:dump 2>&1 | grep GitInternalDir | awk '{ print $2 }' |tr -d '"')
if [ ! -e "$GIT_INTERNAL_DIR" ]; then
run_and_report "Creating git_internal_dir '$GIT_INTERNAL_DIR'" \
mkdir -p "$GIT_INTERNAL_DIR"
}
prepare_database() {
- DB_MIGRATE_STATUS=`$COMMAND_PREFIX bundle exec rake db:migrate:status 2>&1 || true`
+ DB_MIGRATE_STATUS=`$COMMAND_PREFIX bin/rake db:migrate:status 2>&1 || true`
if echo "$DB_MIGRATE_STATUS" | grep -qF 'Schema migrations table does not exist yet.'; then
# The database exists, but the migrations table doesn't.
- run_and_report "Setting up database" $COMMAND_PREFIX bundle exec \
- rake "$RAILSPKG_DATABASE_LOAD_TASK" db:seed
+ run_and_report "Setting up database" $COMMAND_PREFIX bin/rake \
+ "$RAILSPKG_DATABASE_LOAD_TASK" db:seed
elif echo "$DB_MIGRATE_STATUS" | grep -q '^database: '; then
run_and_report "Running db:migrate" \
- $COMMAND_PREFIX bundle exec rake db:migrate
+ $COMMAND_PREFIX bin/rake db:migrate
elif echo "$DB_MIGRATE_STATUS" | grep -q 'database .* does not exist'; then
if ! run_and_report "Running db:setup" \
- $COMMAND_PREFIX bundle exec rake db:setup 2>/dev/null; then
+ $COMMAND_PREFIX bin/rake db:setup 2>/dev/null; then
echo "Warning: unable to set up database." >&2
DATABASE_READY=0
fi
cd "$RELEASE_PATH"
export RAILS_ENV=production
- if ! $COMMAND_PREFIX bundle --version >/dev/null; then
- run_and_report "Installing bundler" $COMMAND_PREFIX gem install bundler --version 1.17.3
+ if ! $COMMAND_PREFIX bundle --version >/dev/null 2>&1; then
+ run_and_report "Installing bundler" $COMMAND_PREFIX gem install bundler --version 2.2.19 --no-document
fi
+ run_and_report "Running bundle config set --local path $SHARED_PATH/vendor_bundle" \
+ $COMMAND_PREFIX bin/bundle config set --local path $SHARED_PATH/vendor_bundle
+
run_and_report "Running bundle install" \
- $COMMAND_PREFIX bundle install --path $SHARED_PATH/vendor_bundle --local --quiet
+ $COMMAND_PREFIX bin/bundle install --local --quiet
echo -n "Ensuring directory and file permissions ..."
# Ensure correct ownership of a few files
prepare_database
fi
- if [ 11 = "$RAILSPKG_SUPPORTS_CONFIG_CHECK$APPLICATION_READY" ]; then
+ if [ -e /etc/arvados/config.yml ]; then
+ # warn about config errors (deprecated/removed keys from
+ # previous version, etc)
run_and_report "Checking configuration for completeness" \
- $COMMAND_PREFIX bundle exec rake config:check || APPLICATION_READY=0
- fi
-
- # precompile assets; thankfully this does not take long
- if [ "$APPLICATION_READY" = "1" ]; then
- run_and_report "Precompiling assets" \
- $COMMAND_PREFIX bundle exec rake assets:precompile -q -s 2>/dev/null \
- || APPLICATION_READY=0
+ $COMMAND_PREFIX bin/rake config:check || APPLICATION_READY=0
else
- echo "Precompiling assets... skipped."
+ APPLICATION_READY=0
fi
+
chown -R "$WWW_OWNER:" $RELEASE_PATH/tmp
setup_before_nginx_restart
PACKAGE BUILD ERROR: $0 is missing package metadata.
-This package is buggy. Please mail <support@curoverse.com> to let
+This package is buggy. Please mail <packaging@arvados.org> to let
us know the name and version number of the package you tried to
install, and we'll get it fixed.
RELEASE_CONFIG_PATH=$RELEASE_PATH/config
SHARED_PATH=$INSTALL_PATH/shared
-RAILSPKG_SUPPORTS_CONFIG_CHECK=${RAILSPKG_SUPPORTS_CONFIG_CHECK:-1}
if ! type setup_extra_conffiles >/dev/null 2>&1; then
setup_extra_conffiles() { return; }
fi
# docker always creates a local 'latest' tag, and we don't want to push that
# tag in every case. Remove it.
docker rmi $1:latest
+
+ GITHEAD=$(cd $WORKSPACE && git log --format=%H -n1 HEAD)
+
if [[ ! -z "$tags" ]]
then
for tag in $( echo $tags|tr "," " " )
do
- $DOCKER tag $1 $1:$tag
+ $DOCKER tag $1:$GITHEAD $1:$tag
done
fi
cd "$WORKSPACE"
if [[ -z "$ARVADOS_BUILDING_VERSION" ]] && ! [[ -z "$version_tag" ]]; then
- ARVADOS_BUILDING_VERSION="$version_tag"
- ARVADOS_BUILDING_ITERATION="1"
+ export ARVADOS_BUILDING_VERSION="$version_tag"
+ export ARVADOS_BUILDING_ITERATION="1"
fi
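The hunk above changes plain assignments to `export`ed ones. The distinction matters because unexported shell variables are invisible to child processes, such as the per-package build scripts this script invokes. A minimal sketch with hypothetical variable names:

```shell
# Plain shell variables are not inherited by child processes;
# exported ones are. This is why the diff adds "export".
FOO_PLAIN="x"
export FOO_EXPORTED="y"
# Ask a child shell what it can see of each variable.
CHILD_SEES=$(bash -c 'echo "${FOO_PLAIN:-unset} ${FOO_EXPORTED:-unset}"')
echo "$CHILD_SEES"
```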
# This defines python_sdk_version and cwl_runner_version with python-style
elif ! [[ "$2" =~ (.*)-(.*) ]]; then
echo >&2 "FATAL: --build-version '$2' does not include an iteration. Try '${2}-1'?"
exit 1
+ elif ! [[ "$2" =~ ^[0-9]+\.[0-9]+\.[0-9]+(\.[0-9]+|)(~rc[0-9]+|~dev[0-9]+|)-[0-9]+$ ]]; then
+ echo >&2 "FATAL: --build-version '$2' is invalid, must match pattern ^[0-9]+\.[0-9]+\.[0-9]+(\.[0-9]+|)(~rc[0-9]+|~dev[0-9]+|)-[0-9]+$"
+ exit 1
else
+ [[ "$2" =~ (.*)-(.*) ]]
ARVADOS_BUILDING_VERSION="${BASH_REMATCH[1]}"
ARVADOS_BUILDING_ITERATION="${BASH_REMATCH[2]}"
fi
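The split above relies on bash's `BASH_REMATCH` array populated by the `[[ ... =~ ... ]]` test. Because the first `(.*)` is greedy, the version/iteration separator is the *last* hyphen, so `~rc`/`~dev` suffixes stay with the version. A sketch with a hypothetical argument value:

```shell
# Split a --build-version argument like "2.3.0~rc1-2" into the upstream
# version and the package iteration, mirroring the parsing above.
BUILD_VERSION="2.3.0~rc1-2"
if [[ "$BUILD_VERSION" =~ (.*)-(.*) ]]; then
    # Greedy first group: everything up to the last "-" is the version.
    ARVADOS_BUILDING_VERSION="${BASH_REMATCH[1]}"
    ARVADOS_BUILDING_ITERATION="${BASH_REMATCH[2]}"
fi
echo "$ARVADOS_BUILDING_VERSION $ARVADOS_BUILDING_ITERATION"
```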
arvados-client
arvados-controller
arvados-dispatch-cloud
+ arvados-dispatch-lsf
arvados-docker-cleaner
arvados-git-httpd
arvados-health
"Arvados cluster controller daemon"
package_go_binary cmd/arvados-server arvados-dispatch-cloud \
"Arvados cluster cloud dispatch"
+package_go_binary cmd/arvados-server arvados-dispatch-lsf \
+ "Dispatch Arvados containers to an LSF cluster"
package_go_binary services/arv-git-httpd arvados-git-httpd \
"Provide authenticated http access to Arvados-hosted git repositories"
package_go_binary services/crunch-dispatch-local crunch-dispatch-local \
"Rebalance and garbage-collect data blocks stored in Arvados Keep"
package_go_binary services/keepproxy keepproxy \
"Make a Keep cluster accessible to clients that are not on the LAN"
-package_go_binary services/keepstore keepstore \
+package_go_binary cmd/arvados-server keepstore \
"Keep storage daemon, accessible to clients on the LAN"
package_go_binary services/keep-web keep-web \
"Static web hosting service for user data stored in Arvados Keep"
# The Arvados user activity tool
fpm_build_virtualenv "arvados-user-activity" "tools/user-activity" "python3"
+# The python->python3 metapackages
+build_metapackage "arvados-fuse" "services/fuse"
+build_metapackage "arvados-python-client" "sdk/python"
+build_metapackage "arvados-cwl-runner" "sdk/cwl"
+build_metapackage "crunchstat-summary" "tools/crunchstat-summary"
+build_metapackage "arvados-docker-cleaner" "services/dockercleaner"
+build_metapackage "arvados-user-activity" "tools/user-activity"
+
# The cwltest package, which lives out of tree
cd "$WORKSPACE"
if [[ -e "$WORKSPACE/cwltest" ]]; then
# signal to our build script that we want a cwltest executable installed in /usr/bin/
mkdir cwltest/bin && touch cwltest/bin/cwltest
fpm_build_virtualenv "cwltest" "cwltest" "python3"
+# The python->python3 metapackage
+build_metapackage "cwltest" "cwltest"
+cd "$WORKSPACE"
rm -rf "$WORKSPACE/cwltest"
calculate_go_package_version arvados_server_version cmd/arvados-server
mv /tmp/x /etc/arvados/config.yml
perl -p -i -e 'BEGIN{undef $/;} s/WebDAV(.*?):\n( *)ExternalURL: ""/WebDAV$1:\n$2ExternalURL: "example.com"/g' /etc/arvados/config.yml
- RAILS_ENV=production RAILS_GROUPS=assets bundle exec rake npm:install >"$STDOUT_IF_DEBUG"
- RAILS_ENV=production RAILS_GROUPS=assets bundle exec rake assets:precompile >"$STDOUT_IF_DEBUG"
+ ARVADOS_CONFIG=none RAILS_ENV=production RAILS_GROUPS=assets bin/rake npm:install >"$STDOUT_IF_DEBUG"
+ ARVADOS_CONFIG=none RAILS_ENV=production RAILS_GROUPS=assets bin/rake assets:precompile >"$STDOUT_IF_DEBUG"
# Remove generated configuration files so they don't go in the package.
rm -rf /etc/arvados/
elif [[ "$FORMAT" == "deb" ]]; then
declare -A dd
dd[debian10]=buster
- dd[ubuntu1604]=xenial
+ dd[debian11]=bullseye
dd[ubuntu1804]=bionic
dd[ubuntu2004]=focal
D=${dd[$TARGET]}
LICENSE_STRING=`grep license $WORKSPACE/$PKG_DIR/setup.py|cut -f2 -d=|sed -e "s/[',\\"]//g"`
COMMAND_ARR+=('--license' "$LICENSE_STRING")
- if [[ "$FORMAT" != "rpm" ]]; then
- COMMAND_ARR+=('--conflicts' "python-$PKG")
+ if [[ "$FORMAT" == "rpm" ]]; then
+ # Make sure to conflict with the old rh-python36 packages we used to publish
+ COMMAND_ARR+=('--conflicts' "rh-python36-python-$PKG")
fi
if [[ "$DEBUG" != "0" ]]; then
COMMAND_ARR+=('--depends' "$i")
done
+ COMMAND_ARR+=('--replaces' "python-$PKG")
+
# make sure the systemd service file ends up in the right place
# used by arvados-docker-cleaner
if [[ -e "${systemd_unit}" ]]; then
fi
# the python3-arvados-cwl-runner package comes with cwltool, expose that version
- if [[ -e "$WORKSPACE/$PKG_DIR/dist/build/usr/share/$python/dist/python-arvados-cwl-runner/bin/cwltool" ]]; then
- COMMAND_ARR+=("usr/share/$python/dist/python-arvados-cwl-runner/bin/cwltool=/usr/bin/")
+ if [[ -e "$WORKSPACE/$PKG_DIR/dist/build/usr/share/$python/dist/$PYTHON_PKG/bin/cwltool" ]]; then
+ COMMAND_ARR+=("usr/share/$python/dist/$PYTHON_PKG/bin/cwltool=/usr/bin/")
fi
COMMAND_ARR+=(".")
echo
}
+# build_metapackage builds metapackages that ease the Python 2 to Python 3 package migration
+build_metapackage() {
+ # base package name (e.g. arvados-python-client)
+ BASE_NAME=$1
+ shift
+ PKG_DIR=$1
+ shift
+
+ if [[ -n "$ONLY_BUILD" ]] && [[ "python-$BASE_NAME" != "$ONLY_BUILD" ]]; then
+ return 0
+ fi
+
+ if [[ "$ARVADOS_BUILDING_ITERATION" == "" ]]; then
+ ARVADOS_BUILDING_ITERATION=1
+ fi
+
+ if [[ -z "$ARVADOS_BUILDING_VERSION" ]]; then
+ cd $WORKSPACE/$PKG_DIR
+ pwd
+ rm -rf dist/*
+
+ # Get the latest setuptools
+ if ! pip3 install $DASHQ_UNLESS_DEBUG $CACHE_FLAG -U 'setuptools<45'; then
+ echo "Error, unable to upgrade setuptools with"
+ echo " pip3 install $DASHQ_UNLESS_DEBUG $CACHE_FLAG -U 'setuptools<45'"
+ exit 1
+ fi
+ # filter a useless warning (when building the cwltest package) from the stderr output
+ if ! python3 setup.py $DASHQ_UNLESS_DEBUG sdist 2> >(grep -v 'warning: no previously-included files matching'); then
+ echo "Error, unable to run python3 setup.py sdist for $BASE_NAME"
+ exit 1
+ fi
+
+ PYTHON_VERSION=$(awk '($1 == "Version:"){print $2}' *.egg-info/PKG-INFO)
+ UNFILTERED_PYTHON_VERSION=$(echo -n $PYTHON_VERSION | sed s/\.dev/~dev/g |sed 's/\([0-9]\)rc/\1~rc/g')
+
+ else
+ UNFILTERED_PYTHON_VERSION=$ARVADOS_BUILDING_VERSION
+ PYTHON_VERSION=$(echo -n $ARVADOS_BUILDING_VERSION | sed s/~dev/.dev/g | sed s/~rc/rc/g)
+ fi
+
+ cd - >$STDOUT_IF_DEBUG
+ if [[ -d "$BASE_NAME" ]]; then
+ rm -rf $BASE_NAME
+ fi
+ mkdir $BASE_NAME
+ cd $BASE_NAME
+
+ if [[ "$FORMAT" == "deb" ]]; then
+ cat >ns-control <<EOF
+Section: misc
+Priority: optional
+Standards-Version: 3.9.2
+
+Package: python-${BASE_NAME}
+Version: ${PYTHON_VERSION}-${ARVADOS_BUILDING_ITERATION}
+Maintainer: Arvados Package Maintainers <packaging@arvados.org>
+Depends: python3-${BASE_NAME}
+Description: metapackage to ease the upgrade to the Python 3 version of ${BASE_NAME}
+ This package is a metapackage that will automatically install the new version of
+ ${BASE_NAME} which is Python 3 based and has a different name.
+EOF
+
+ /usr/bin/equivs-build ns-control
+ if [[ $? -ne 0 ]]; then
+ echo "Error running 'equivs-build ns-control', is the 'equivs' package installed?"
+ return 1
+ fi
+ elif [[ "$FORMAT" == "rpm" ]]; then
+ cat >meta.spec <<EOF
+Summary: metapackage to ease the upgrade to the Python 3 version of ${BASE_NAME}
+Name: python-${BASE_NAME}
+Version: ${PYTHON_VERSION}
+Release: ${ARVADOS_BUILDING_ITERATION}
+License: distributable
+
+Requires: python3-${BASE_NAME}
+
+%description
+This package is a metapackage that will automatically install the new version of
+python-${BASE_NAME} which is Python 3 based and has a different name.
+
+%prep
+
+%build
+
+%clean
+
+%install
+
+%post
+
+%files
+
+
+%changelog
+* Mon Apr 12 2021 Arvados Package Maintainers <packaging@arvados.org>
+- initial release
+EOF
+
+ /usr/bin/rpmbuild -ba meta.spec
+ if [[ $? -ne 0 ]]; then
+ echo "Error running 'rpmbuild -ba meta.spec', is the 'rpm-build' package installed?"
+ return 1
+ else
+ mv /root/rpmbuild/RPMS/x86_64/python-${BASE_NAME}*.${FORMAT} .
+ if [[ $? -ne 0 ]]; then
+ echo "Error finding rpm file output of 'rpmbuild -ba meta.spec'"
+ return 1
+ fi
+ fi
+ else
+ echo "Unknown format"
+ return 1
+ fi
+
+ if [[ $EXITCODE -ne 0 ]]; then
+ return 1
+ else
+ echo `ls *$FORMAT`
+ mv *$FORMAT $WORKSPACE/packages/$TARGET/
+ fi
+
+ # clean up
+ cd - >$STDOUT_IF_DEBUG
+ if [[ -d "$BASE_NAME" ]]; then
+ rm -rf $BASE_NAME
+ fi
+}
+
# Build packages for everything
fpm_build () {
# Source dir where fpm-info.sh (if any) will be found.
declare -a fpm_args=()
declare -a build_depends=()
declare -a fpm_depends=()
+ declare -a fpm_conflicts=()
declare -a fpm_exclude=()
if [[ ! -d "$SRC_DIR" ]]; then
echo >&2 "BUG: looking in wrong dir for fpm-info.sh: $pkgdir"
for i in "${fpm_depends[@]}"; do
COMMAND_ARR+=('--depends' "$i")
done
+ for i in "${fpm_conflicts[@]}"; do
+ COMMAND_ARR+=('--conflicts' "$i")
+ done
for i in "${fpm_exclude[@]}"; do
COMMAND_ARR+=('--exclude' "$i")
done
|| fatal 'rvm gemset setup'
rvm env
- (bundle version | grep -q 2.0.2) || gem install bundler -v 2.0.2
+ (bundle version | grep -q 2.2.19) || gem install bundler -v 2.2.19
bundle="$(which bundle)"
echo "$bundle"
- "$bundle" version | grep 2.0.2 || fatal 'install bundler'
+ "$bundle" version | grep 2.2.19 || fatal 'install bundler'
else
# When our "bundle install"s need to install new gems to
# satisfy dependencies, we want them to go where "gem install
(
export HOME=$GEMHOME
bundlers="$(gem list --details bundler)"
- versions=(1.16.6 1.17.3 2.0.2)
+ versions=(2.2.19)
for v in ${versions[@]}; do
if ! echo "$bundlers" | fgrep -q "($v)"; then
gem install --user $(for v in ${versions[@]}; do echo bundler:${v}; done)
do_install services/api
do_install services/arv-git-httpd go
do_install services/keepproxy go
- do_install services/keepstore go
do_install services/keep-web go
- do_install services/ws go
}
install_all() {
do_test apps/workbench_profile
}
+test_go() {
+ do_test gofmt
+ for g in "${gostuff[@]}"
+ do
+ do_test "$g" go
+ done
+}
+
help_interactive() {
echo "== Interactive commands:"
echo "TARGET (short for 'test DIR')"
#
# 1. commit is directly tagged. print that.
#
-# 2. commit is on master or a development branch, the nearest tag is older
-# than commit where this branch joins master.
+# 2. commit is on main or a development branch, the nearest tag is older
+# than commit where this branch joins main.
# -> take greatest version tag in repo X.Y.Z and assign X.(Y+1).0
#
# 3. commit is on a release branch, the nearest tag is newer
-# than the commit where this branch joins master.
+# than the commit where this branch joins main.
# -> take nearest tag X.Y.Z and assign X.Y.(Z+1)
tagged=$(git tag --points-at "$commit")
echo $tagged
else
# 1. get the nearest tag with 'git describe'
- # 2. get the merge base between this commit and master
+ # 2. get the merge base between this commit and main
# 3. if the tag is an ancestor of the merge base,
# (tag is older than merge base) increment minor version
# else, tag is newer than merge base, so increment point version
nearest_tag=$(git describe --tags --abbrev=0 --match "$versionglob" "$commit")
- merge_base=$(git merge-base origin/master "$commit")
+ merge_base=$(git merge-base origin/main "$commit")
if git merge-base --is-ancestor "$nearest_tag" "$merge_base" ; then
- # x.(y+1).0.devTIMESTAMP, where x.y.z is the newest version that does not contain $commit
+ # x.(y+1).0~devTIMESTAMP, where x.y.z is the newest version that does not contain $commit
# grep reads the list of tags (-f) that contain $commit and filters them out (-v)
# this prevents a newer tag from retroactively changing the versions of everything before it
- v=$(git tag | grep -vFf <(git tag --contains "$commit") | sort -Vr | head -n1 | perl -pe 's/\.(\d+)\.\d+/".".($1+1).".0"/e')
+ v=$(git tag | grep -vFf <(git tag --contains "$commit") | sort -Vr | head -n1 | perl -pe 's/(\d+)\.(\d+)\.\d+.*/"$1.".($2+1).".0"/e')
else
- # x.y.(z+1).devTIMESTAMP, where x.y.z is the latest released ancestor of $commit
+ # x.y.(z+1)~devTIMESTAMP, where x.y.z is the latest released ancestor of $commit
v=$(echo $nearest_tag | perl -pe 's/(\d+)$/$1+1/e')
fi
isodate=$(TZ=UTC git log -n1 --format=%cd --date=iso "$commit")
-Creative Commons Legal Code
-Attribution-ShareAlike 3.0 United States
+Attribution-ShareAlike 3.0 Unported
+
+ CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE LEGAL SERVICES. DISTRIBUTION OF THIS LICENSE DOES NOT CREATE AN ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES REGARDING THE INFORMATION PROVIDED, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM ITS USE.
License
-THE WORK (AS DEFINED BELOW) IS PROVIDED UNDER THE TERMS OF THIS CREATIVE
-COMMONS PUBLIC LICENSE ("CCPL" OR "LICENSE"). THE WORK IS PROTECTED BY
-COPYRIGHT AND/OR OTHER APPLICABLE LAW. ANY USE OF THE WORK OTHER THAN AS
-AUTHORIZED UNDER THIS LICENSE OR COPYRIGHT LAW IS PROHIBITED.
+THE WORK (AS DEFINED BELOW) IS PROVIDED UNDER THE TERMS OF THIS CREATIVE COMMONS PUBLIC LICENSE ("CCPL" OR "LICENSE"). THE WORK IS PROTECTED BY COPYRIGHT AND/OR OTHER APPLICABLE LAW. ANY USE OF THE WORK OTHER THAN AS AUTHORIZED UNDER THIS LICENSE OR COPYRIGHT LAW IS PROHIBITED.
-BY EXERCISING ANY RIGHTS TO THE WORK PROVIDED HERE, YOU ACCEPT AND AGREE TO BE
-BOUND BY THE TERMS OF THIS LICENSE. TO THE EXTENT THIS LICENSE MAY BE
-CONSIDERED TO BE A CONTRACT, THE LICENSOR GRANTS YOU THE RIGHTS CONTAINED HERE
-IN CONSIDERATION OF YOUR ACCEPTANCE OF SUCH TERMS AND CONDITIONS.
+BY EXERCISING ANY RIGHTS TO THE WORK PROVIDED HERE, YOU ACCEPT AND AGREE TO BE BOUND BY THE TERMS OF THIS LICENSE. TO THE EXTENT THIS LICENSE MAY BE CONSIDERED TO BE A CONTRACT, THE LICENSOR GRANTS YOU THE RIGHTS CONTAINED HERE IN CONSIDERATION OF YOUR ACCEPTANCE OF SUCH TERMS AND CONDITIONS.
1. Definitions
- a. "Collective Work" means a work, such as a periodical issue, anthology or
- encyclopedia, in which the Work in its entirety in unmodified form, along
- with one or more other contributions, constituting separate and independent
- works in themselves, are assembled into a collective whole. A work that
- constitutes a Collective Work will not be considered a Derivative Work (as
- defined below) for the purposes of this License.
-
- b. "Creative Commons Compatible License" means a license that is listed at
- http://creativecommons.org/compatiblelicenses that has been approved by
- Creative Commons as being essentially equivalent to this License,
- including, at a minimum, because that license: (i) contains terms that have
- the same purpose, meaning and effect as the License Elements of this
- License; and, (ii) explicitly permits the relicensing of derivatives of
- works made available under that license under this License or either a
- Creative Commons unported license or a Creative Commons jurisdiction
- license with the same License Elements as this License.
-
- c. "Derivative Work" means a work based upon the Work or upon the Work and
- other pre-existing works, such as a translation, musical arrangement,
- dramatization, fictionalization, motion picture version, sound recording,
- art reproduction, abridgment, condensation, or any other form in which the
- Work may be recast, transformed, or adapted, except that a work that
- constitutes a Collective Work will not be considered a Derivative Work for
- the purpose of this License. For the avoidance of doubt, where the Work is
- a musical composition or sound recording, the synchronization of the Work
- in timed-relation with a moving image ("synching") will be considered a
- Derivative Work for the purpose of this License.
-
- d. "License Elements" means the following high-level license attributes as
- selected by Licensor and indicated in the title of this License:
- Attribution, ShareAlike.
-
- e. "Licensor" means the individual, individuals, entity or entities that
- offers the Work under the terms of this License.
-
- f. "Original Author" means the individual, individuals, entity or entities who
- created the Work.
-
- g. "Work" means the copyrightable work of authorship offered under the terms
- of this License.
-
- h. "You" means an individual or entity exercising rights under this License
- who has not previously violated the terms of this License with respect to
- the Work, or who has received express permission from the Licensor to
- exercise rights under this License despite a previous violation.
-
-2. Fair Use Rights. Nothing in this license is intended to reduce, limit, or
-restrict any rights arising from fair use, first sale or other limitations on
-the exclusive rights of the copyright owner under copyright law or other
-applicable laws.
-
-3. License Grant. Subject to the terms and conditions of this License, Licensor
-hereby grants You a worldwide, royalty-free, non-exclusive, perpetual (for the
-duration of the applicable copyright) license to exercise the rights in the
-Work as stated below:
-
- a. to reproduce the Work, to incorporate the Work into one or more Collective
- Works, and to reproduce the Work as incorporated in the Collective Works;
+ "Adaptation" means a work based upon the Work, or upon the Work and other pre-existing works, such as a translation, adaptation, derivative work, arrangement of music or other alterations of a literary or artistic work, or phonogram or performance and includes cinematographic adaptations or any other form in which the Work may be recast, transformed, or adapted including in any form recognizably derived from the original, except that a work that constitutes a Collection will not be considered an Adaptation for the purpose of this License. For the avoidance of doubt, where the Work is a musical work, performance or phonogram, the synchronization of the Work in timed-relation with a moving image ("synching") will be considered an Adaptation for the purpose of this License.
+ "Collection" means a collection of literary or artistic works, such as encyclopedias and anthologies, or performances, phonograms or broadcasts, or other works or subject matter other than works listed in Section 1(f) below, which, by reason of the selection and arrangement of their contents, constitute intellectual creations, in which the Work is included in its entirety in unmodified form along with one or more other contributions, each constituting separate and independent works in themselves, which together are assembled into a collective whole. A work that constitutes a Collection will not be considered an Adaptation (as defined below) for the purposes of this License.
+ "Creative Commons Compatible License" means a license that is listed at https://creativecommons.org/compatiblelicenses that has been approved by Creative Commons as being essentially equivalent to this License, including, at a minimum, because that license: (i) contains terms that have the same purpose, meaning and effect as the License Elements of this License; and, (ii) explicitly permits the relicensing of adaptations of works made available under that license under this License or a Creative Commons jurisdiction license with the same License Elements as this License.
+ "Distribute" means to make available to the public the original and copies of the Work or Adaptation, as appropriate, through sale or other transfer of ownership.
+ "License Elements" means the following high-level license attributes as selected by Licensor and indicated in the title of this License: Attribution, ShareAlike.
+ "Licensor" means the individual, individuals, entity or entities that offer(s) the Work under the terms of this License.
+ "Original Author" means, in the case of a literary or artistic work, the individual, individuals, entity or entities who created the Work or if no individual or entity can be identified, the publisher; and in addition (i) in the case of a performance the actors, singers, musicians, dancers, and other persons who act, sing, deliver, declaim, play in, interpret or otherwise perform literary or artistic works or expressions of folklore; (ii) in the case of a phonogram the producer being the person or legal entity who first fixes the sounds of a performance or other sounds; and, (iii) in the case of broadcasts, the organization that transmits the broadcast.
+ "Work" means the literary and/or artistic work offered under the terms of this License including without limitation any production in the literary, scientific and artistic domain, whatever may be the mode or form of its expression including digital form, such as a book, pamphlet and other writing; a lecture, address, sermon or other work of the same nature; a dramatic or dramatico-musical work; a choreographic work or entertainment in dumb show; a musical composition with or without words; a cinematographic work to which are assimilated works expressed by a process analogous to cinematography; a work of drawing, painting, architecture, sculpture, engraving or lithography; a photographic work to which are assimilated works expressed by a process analogous to photography; a work of applied art; an illustration, map, plan, sketch or three-dimensional work relative to geography, topography, architecture or science; a performance; a broadcast; a phonogram; a compilation of data to the extent it is protected as a copyrightable work; or a work performed by a variety or circus performer to the extent it is not otherwise considered a literary or artistic work.
+ "You" means an individual or entity exercising rights under this License who has not previously violated the terms of this License with respect to the Work, or who has received express permission from the Licensor to exercise rights under this License despite a previous violation.
+ "Publicly Perform" means to perform public recitations of the Work and to communicate to the public those public recitations, by any means or process, including by wire or wireless means or public digital performances; to make available to the public Works in such a way that members of the public may access these Works from a place and at a place individually chosen by them; to perform the Work to the public by any means or process and the communication to the public of the performances of the Work, including by public digital performance; to broadcast and rebroadcast the Work by any means including signs, sounds or images.
+ "Reproduce" means to make copies of the Work by any means including without limitation by sound or visual recordings and the right of fixation and reproducing fixations of the Work, including storage of a protected performance or phonogram in digital form or other electronic medium.
- b. to create and reproduce Derivative Works provided that any such
- Derivative Work, including any translation in any medium, takes reasonable
- steps to clearly label, demarcate or otherwise identify that changes were
- made to the original Work. For example, a translation could be marked "The
- original work was translated from English to Spanish," or a modification
- could indicate "The original work has been modified.";
+2. Fair Dealing Rights. Nothing in this License is intended to reduce, limit, or restrict any uses free from copyright or rights arising from limitations or exceptions that are provided for in connection with the copyright protection under copyright law or other applicable laws.
- c. to distribute copies or phonorecords of, display publicly, perform
- publicly, and perform publicly by means of a digital audio transmission the
- Work including as incorporated in Collective Works;
+3. License Grant. Subject to the terms and conditions of this License, Licensor hereby grants You a worldwide, royalty-free, non-exclusive, perpetual (for the duration of the applicable copyright) license to exercise the rights in the Work as stated below:
- d. to distribute copies or phonorecords of, display publicly, perform
- publicly, and perform publicly by means of a digital audio transmission
- Derivative Works.
+ to Reproduce the Work, to incorporate the Work into one or more Collections, and to Reproduce the Work as incorporated in the Collections;
+ to create and Reproduce Adaptations provided that any such Adaptation, including any translation in any medium, takes reasonable steps to clearly label, demarcate or otherwise identify that changes were made to the original Work. For example, a translation could be marked "The original work was translated from English to Spanish," or a modification could indicate "The original work has been modified.";
+ to Distribute and Publicly Perform the Work including as incorporated in Collections; and,
+ to Distribute and Publicly Perform Adaptations.
- e. For the avoidance of doubt, where the Work is a musical composition:
+ For the avoidance of doubt:
+ Non-waivable Compulsory License Schemes. In those jurisdictions in which the right to collect royalties through any statutory or compulsory licensing scheme cannot be waived, the Licensor reserves the exclusive right to collect such royalties for any exercise by You of the rights granted under this License;
+ Waivable Compulsory License Schemes. In those jurisdictions in which the right to collect royalties through any statutory or compulsory licensing scheme can be waived, the Licensor waives the exclusive right to collect such royalties for any exercise by You of the rights granted under this License; and,
+ Voluntary License Schemes. The Licensor waives the right to collect royalties, whether individually or, in the event that the Licensor is a member of a collecting society that administers voluntary licensing schemes, via that society, from any exercise by You of the rights granted under this License.
- i. Performance Royalties Under Blanket Licenses. Licensor waives the
- exclusive right to collect, whether individually or, in the event that
- Licensor is a member of a performance rights society (e.g. ASCAP, BMI,
- SESAC), via that society, royalties for the public performance or
- public digital performance (e.g. webcast) of the Work.
-
- ii. Mechanical Rights and Statutory Royalties. Licensor waives the
- exclusive right to collect, whether individually or via a music rights
- agency or designated agent (e.g. Harry Fox Agency), royalties for any
- phonorecord You create from the Work ("cover version") and distribute,
- subject to the compulsory license created by 17 USC Section 115 of the
- US Copyright Act (or the equivalent in other jurisdictions).
-
- f. Webcasting Rights and Statutory Royalties. For the avoidance of doubt,
- where the Work is a sound recording, Licensor waives the exclusive right to
- collect, whether individually or via a performance-rights society
- (e.g. SoundExchange), royalties for the public digital performance
- (e.g. webcast) of the Work, subject to the compulsory license created by 17
- USC Section 114 of the US Copyright Act (or the equivalent in other
- jurisdictions).
-
-The above rights may be exercised in all media and formats whether now known or
-hereafter devised. The above rights include the right to make such
-modifications as are technically necessary to exercise the rights in other
-media and formats. All rights not expressly granted by Licensor are hereby
-reserved.
+The above rights may be exercised in all media and formats whether now known or hereafter devised. The above rights include the right to make such modifications as are technically necessary to exercise the rights in other media and formats. Subject to Section 8(f), all rights not expressly granted by Licensor are hereby reserved.
4. Restrictions. The license granted in Section 3 above is expressly made subject to and limited by the following restrictions:
- a. You may distribute, publicly display, publicly perform, or publicly
- digitally perform the Work only under the terms of this License, and You
- must include a copy of, or the Uniform Resource Identifier for, this
- License with every copy or phonorecord of the Work You distribute, publicly
- display, publicly perform, or publicly digitally perform. You may not offer
- or impose any terms on the Work that restrict the terms of this License or
- the ability of a recipient of the Work to exercise of the rights granted to
- that recipient under the terms of the License. You may not sublicense the
- Work. You must keep intact all notices that refer to this License and to
- the disclaimer of warranties. When You distribute, publicly display,
- publicly perform, or publicly digitally perform the Work, You may not
- impose any technological measures on the Work that restrict the ability of
- a recipient of the Work from You to exercise of the rights granted to that
- recipient under the terms of the License. This Section 4(a) applies to the
- Work as incorporated in a Collective Work, but this does not require the
- Collective Work apart from the Work itself to be made subject to the terms
- of this License. If You create a Collective Work, upon notice from any
- Licensor You must, to the extent practicable, remove from the Collective
- Work any credit as required by Section 4(c), as requested. If You create a
- Derivative Work, upon notice from any Licensor You must, to the extent
- practicable, remove from the Derivative Work any credit as required by
- Section 4(c), as requested.
-
- b. You may distribute, publicly display, publicly perform, or publicly
- digitally perform a Derivative Work only under: (i) the terms of this
- License; (ii) a later version of this License with the same License
- Elements as this License; (iii) either the Creative Commons (Unported)
- license or a Creative Commons jurisdiction license (either this or a later
- license version) that contains the same License Elements as this License
- (e.g. Attribution-ShareAlike 3.0 (Unported)); (iv) a Creative Commons
- Compatible License. If you license the Derivative Work under one of the
- licenses mentioned in (iv), you must comply with the terms of that
- license. If you license the Derivative Work under the terms of any of the
- licenses mentioned in (i), (ii) or (iii) (the "Applicable License"), you
- must comply with the terms of the Applicable License generally and with the
- following provisions: (I) You must include a copy of, or the Uniform
- Resource Identifier for, the Applicable License with every copy or
- phonorecord of each Derivative Work You distribute, publicly display,
- publicly perform, or publicly digitally perform; (II) You may not offer or
- impose any terms on the Derivative Works that restrict the terms of the
- Applicable License or the ability of a recipient of the Work to exercise
- the rights granted to that recipient under the terms of the Applicable
- License; (III) You must keep intact all notices that refer to the
- Applicable License and to the disclaimer of warranties; and, (IV) when You
- distribute, publicly display, publicly perform, or publicly digitally
- perform the Work, You may not impose any technological measures on the
- Derivative Work that restrict the ability of a recipient of the Derivative
- Work from You to exercise the rights granted to that recipient under the
- terms of the Applicable License. This Section 4(b) applies to the
- Derivative Work as incorporated in a Collective Work, but this does not
- require the Collective Work apart from the Derivative Work itself to be
- made subject to the terms of the Applicable License.
-
- c. If You distribute, publicly display, publicly perform, or publicly
- digitally perform the Work (as defined in Section 1 above) or any
- Derivative Works (as defined in Section 1 above) or Collective Works (as
- defined in Section 1 above), You must, unless a request has been made
- pursuant to Section 4(a), keep intact all copyright notices for the Work
- and provide, reasonable to the medium or means You are utilizing: (i) the
- name of the Original Author (or pseudonym, if applicable) if supplied,
- and/or (ii) if the Original Author and/or Licensor designate another party
- or parties (e.g. a sponsor institute, publishing entity, journal) for
- attribution ("Attribution Parties") in Licensor's copyright notice, terms
- of service or by other reasonable means, the name of such party or parties;
- the title of the Work if supplied; to the extent reasonably practicable,
- the Uniform Resource Identifier, if any, that Licensor specifies to be
- associated with the Work, unless such URI does not refer to the copyright
- notice or licensing information for the Work; and, consistent with Section
- 3(b) in the case of a Derivative Work, a credit identifying the use of the
- Work in the Derivative Work (e.g., "French translation of the Work by
- Original Author," or "Screenplay based on original Work by Original
- Author"). The credit required by this Section 4(c) may be implemented in
- any reasonable manner; provided, however, that in the case of a Derivative
- Work or Collective Work, at a minimum such credit will appear, if a credit
- for all contributing authors of the Derivative Work or Collective Work
- appears, then as part of these credits and in a manner at least as
- prominent as the credits for the other contributing authors. For the
- avoidance of doubt, You may only use the credit required by this Section
- for the purpose of attribution in the manner set out above and, by
- exercising Your rights under this License, You may not implicitly or
- explicitly assert or imply any connection with, sponsorship or endorsement
- by the Original Author, Licensor and/or Attribution Parties, as
- appropriate, of You or Your use of the Work, without the separate, express
- prior written permission of the Original Author, Licensor and/or
- Attribution Parties.
-
+ You may Distribute or Publicly Perform the Work only under the terms of this License. You must include a copy of, or the Uniform Resource Identifier (URI) for, this License with every copy of the Work You Distribute or Publicly Perform. You may not offer or impose any terms on the Work that restrict the terms of this License or the ability of the recipient of the Work to exercise the rights granted to that recipient under the terms of the License. You may not sublicense the Work. You must keep intact all notices that refer to this License and to the disclaimer of warranties with every copy of the Work You Distribute or Publicly Perform. When You Distribute or Publicly Perform the Work, You may not impose any effective technological measures on the Work that restrict the ability of a recipient of the Work from You to exercise the rights granted to that recipient under the terms of the License. This Section 4(a) applies to the Work as incorporated in a Collection, but this does not require the Collection apart from the Work itself to be made subject to the terms of this License. If You create a Collection, upon notice from any Licensor You must, to the extent practicable, remove from the Collection any credit as required by Section 4(c), as requested. If You create an Adaptation, upon notice from any Licensor You must, to the extent practicable, remove from the Adaptation any credit as required by Section 4(c), as requested.
+ You may Distribute or Publicly Perform an Adaptation only under the terms of: (i) this License; (ii) a later version of this License with the same License Elements as this License; (iii) a Creative Commons jurisdiction license (either this or a later license version) that contains the same License Elements as this License (e.g., Attribution-ShareAlike 3.0 US)); (iv) a Creative Commons Compatible License. If you license the Adaptation under one of the licenses mentioned in (iv), you must comply with the terms of that license. If you license the Adaptation under the terms of any of the licenses mentioned in (i), (ii) or (iii) (the "Applicable License"), you must comply with the terms of the Applicable License generally and the following provisions: (I) You must include a copy of, or the URI for, the Applicable License with every copy of each Adaptation You Distribute or Publicly Perform; (II) You may not offer or impose any terms on the Adaptation that restrict the terms of the Applicable License or the ability of the recipient of the Adaptation to exercise the rights granted to that recipient under the terms of the Applicable License; (III) You must keep intact all notices that refer to the Applicable License and to the disclaimer of warranties with every copy of the Work as included in the Adaptation You Distribute or Publicly Perform; (IV) when You Distribute or Publicly Perform the Adaptation, You may not impose any effective technological measures on the Adaptation that restrict the ability of a recipient of the Adaptation from You to exercise the rights granted to that recipient under the terms of the Applicable License. This Section 4(b) applies to the Adaptation as incorporated in a Collection, but this does not require the Collection apart from the Adaptation itself to be made subject to the terms of the Applicable License.
+ If You Distribute, or Publicly Perform the Work or any Adaptations or Collections, You must, unless a request has been made pursuant to Section 4(a), keep intact all copyright notices for the Work and provide, reasonable to the medium or means You are utilizing: (i) the name of the Original Author (or pseudonym, if applicable) if supplied, and/or if the Original Author and/or Licensor designate another party or parties (e.g., a sponsor institute, publishing entity, journal) for attribution ("Attribution Parties") in Licensor's copyright notice, terms of service or by other reasonable means, the name of such party or parties; (ii) the title of the Work if supplied; (iii) to the extent reasonably practicable, the URI, if any, that Licensor specifies to be associated with the Work, unless such URI does not refer to the copyright notice or licensing information for the Work; and (iv) , consistent with Ssection 3(b), in the case of an Adaptation, a credit identifying the use of the Work in the Adaptation (e.g., "French translation of the Work by Original Author," or "Screenplay based on original Work by Original Author"). The credit required by this Section 4(c) may be implemented in any reasonable manner; provided, however, that in the case of a Adaptation or Collection, at a minimum such credit will appear, if a credit for all contributing authors of the Adaptation or Collection appears, then as part of these credits and in a manner at least as prominent as the credits for the other contributing authors. 
For the avoidance of doubt, You may only use the credit required by this Section for the purpose of attribution in the manner set out above and, by exercising Your rights under this License, You may not implicitly or explicitly assert or imply any connection with, sponsorship or endorsement by the Original Author, Licensor and/or Attribution Parties, as appropriate, of You or Your use of the Work, without the separate, express prior written permission of the Original Author, Licensor and/or Attribution Parties.
+ Except as otherwise agreed in writing by the Licensor or as may be otherwise permitted by applicable law, if You Reproduce, Distribute or Publicly Perform the Work either by itself or as part of any Adaptations or Collections, You must not distort, mutilate, modify or take other derogatory action in relation to the Work which would be prejudicial to the Original Author's honor or reputation. Licensor agrees that in those jurisdictions (e.g. Japan), in which any exercise of the right granted in Section 3(b) of this License (the right to make Adaptations) would be deemed to be a distortion, mutilation, modification or other derogatory action prejudicial to the Original Author's honor and reputation, the Licensor will waive or not assert, as appropriate, this Section, to the fullest extent permitted by the applicable national law, to enable You to reasonably exercise Your right under Section 3(b) of this License (right to make Adaptations) but not otherwise.
5. Representations, Warranties and Disclaimer
-UNLESS OTHERWISE MUTUALLY AGREED TO BY THE PARTIES IN WRITING, LICENSOR OFFERS
-THE WORK AS-IS AND ONLY TO THE EXTENT OF ANY RIGHTS HELD IN THE LICENSED WORK
-BY THE LICENSOR. THE LICENSOR MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY
-KIND CONCERNING THE WORK, EXPRESS, IMPLIED, STATUTORY OR OTHERWISE, INCLUDING,
-WITHOUT LIMITATION, WARRANTIES OF TITLE, MARKETABILITY, MERCHANTIBILITY,
-FITNESS FOR A PARTICULAR PURPOSE, NONINFRINGEMENT, OR THE ABSENCE OF LATENT OR
-OTHER DEFECTS, ACCURACY, OR THE PRESENCE OF ABSENCE OF ERRORS, WHETHER OR NOT
-DISCOVERABLE. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED
-WARRANTIES, SO SUCH EXCLUSION MAY NOT APPLY TO YOU.
+UNLESS OTHERWISE MUTUALLY AGREED TO BY THE PARTIES IN WRITING, LICENSOR OFFERS THE WORK AS-IS AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE WORK, EXPRESS, IMPLIED, STATUTORY OR OTHERWISE, INCLUDING, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTIBILITY, FITNESS FOR A PARTICULAR PURPOSE, NONINFRINGEMENT, OR THE ABSENCE OF LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OF ABSENCE OF ERRORS, WHETHER OR NOT DISCOVERABLE. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO SUCH EXCLUSION MAY NOT APPLY TO YOU.
-6. Limitation on Liability. EXCEPT TO THE EXTENT REQUIRED BY APPLICABLE LAW, IN
-NO EVENT WILL LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY FOR ANY SPECIAL,
-INCIDENTAL, CONSEQUENTIAL, PUNITIVE OR EXEMPLARY DAMAGES ARISING OUT OF THIS
-LICENSE OR THE USE OF THE WORK, EVEN IF LICENSOR HAS BEEN ADVISED OF THE
-POSSIBILITY OF SUCH DAMAGES.
+6. Limitation on Liability. EXCEPT TO THE EXTENT REQUIRED BY APPLICABLE LAW, IN NO EVENT WILL LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY FOR ANY SPECIAL, INCIDENTAL, CONSEQUENTIAL, PUNITIVE OR EXEMPLARY DAMAGES ARISING OUT OF THIS LICENSE OR THE USE OF THE WORK, EVEN IF LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
7. Termination
- a. This License and the rights granted hereunder will terminate automatically
- upon any breach by You of the terms of this License. Individuals or
- entities who have received Derivative Works or Collective Works from You
- under this License, however, will not have their licenses terminated
- provided such individuals or entities remain in full compliance with those
- licenses. Sections 1, 2, 5, 6, 7, and 8 will survive any termination of
- this License.
-
- b. Subject to the above terms and conditions, the license granted here is
- perpetual (for the duration of the applicable copyright in the
- Work). Notwithstanding the above, Licensor reserves the right to release
- the Work under different license terms or to stop distributing the Work at
- any time; provided, however that any such election will not serve to
- withdraw this License (or any other license that has been, or is required
- to be, granted under the terms of this License), and this License will
- continue in full force and effect unless terminated as stated above.
+ This License and the rights granted hereunder will terminate automatically upon any breach by You of the terms of this License. Individuals or entities who have received Adaptations or Collections from You under this License, however, will not have their licenses terminated provided such individuals or entities remain in full compliance with those licenses. Sections 1, 2, 5, 6, 7, and 8 will survive any termination of this License.
+ Subject to the above terms and conditions, the license granted here is perpetual (for the duration of the applicable copyright in the Work). Notwithstanding the above, Licensor reserves the right to release the Work under different license terms or to stop distributing the Work at any time; provided, however that any such election will not serve to withdraw this License (or any other license that has been, or is required to be, granted under the terms of this License), and this License will continue in full force and effect unless terminated as stated above.
8. Miscellaneous
- a. Each time You distribute or publicly digitally perform the Work (as defined
- in Section 1 above) or a Collective Work (as defined in Section 1 above),
- the Licensor offers to the recipient a license to the Work on the same
- terms and conditions as the license granted to You under this License.
-
- b. Each time You distribute or publicly digitally perform a Derivative Work,
- Licensor offers to the recipient a license to the original Work on the same
- terms and conditions as the license granted to You under this License.
-
- c. If any provision of this License is invalid or unenforceable under
- applicable law, it shall not affect the validity or enforceability of the
- remainder of the terms of this License, and without further action by the
- parties to this agreement, such provision shall be reformed to the minimum
- extent necessary to make such provision valid and enforceable.
-
- d. No term or provision of this License shall be deemed waived and no breach
- consented to unless such waiver or consent shall be in writing and signed
- by the party to be charged with such waiver or consent.
-
- e. This License constitutes the entire agreement between the parties with
- respect to the Work licensed here. There are no understandings, agreements
- or representations with respect to the Work not specified here. Licensor
- shall not be bound by any additional provisions that may appear in any
- communication from You. This License may not be modified without the mutual
- written agreement of the Licensor and You.
+ Each time You Distribute or Publicly Perform the Work or a Collection, the Licensor offers to the recipient a license to the Work on the same terms and conditions as the license granted to You under this License.
+ Each time You Distribute or Publicly Perform an Adaptation, Licensor offers to the recipient a license to the original Work on the same terms and conditions as the license granted to You under this License.
+ If any provision of this License is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this License, and without further action by the parties to this agreement, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable.
+ No term or provision of this License shall be deemed waived and no breach consented to unless such waiver or consent shall be in writing and signed by the party to be charged with such waiver or consent.
+ This License constitutes the entire agreement between the parties with respect to the Work licensed here. There are no understandings, agreements or representations with respect to the Work not specified here. Licensor shall not be bound by any additional provisions that may appear in any communication from You. This License may not be modified without the mutual written agreement of the Licensor and You.
+ The rights granted under, and the subject matter referenced, in this License were drafted utilizing the terminology of the Berne Convention for the Protection of Literary and Artistic Works (as amended on September 28, 1979), the Rome Convention of 1961, the WIPO Copyright Treaty of 1996, the WIPO Performances and Phonograms Treaty of 1996 and the Universal Copyright Convention (as revised on July 24, 1971). These rights and subject matter take effect in the relevant jurisdiction in which the License terms are sought to be enforced according to the corresponding provisions of the implementation of those treaty provisions in the applicable national law. If the standard suite of rights granted under applicable copyright law includes additional rights not granted under this License, such additional rights are deemed to be included in the License; this License is not intended to restrict the license of any rights under applicable law.
-Creative Commons Notice
+ Creative Commons Notice
- Creative Commons is not a party to this License, and makes no warranty
- whatsoever in connection with the Work. Creative Commons will not be liable
- to You or any party on any legal theory for any damages whatsoever,
- including without limitation any general, special, incidental or
- consequential damages arising in connection to this
- license. Notwithstanding the foregoing two (2) sentences, if Creative
- Commons has expressly identified itself as the Licensor hereunder, it shall
- have all rights and obligations of Licensor.
+ Creative Commons is not a party to this License, and makes no warranty whatsoever in connection with the Work. Creative Commons will not be liable to You or any party on any legal theory for any damages whatsoever, including without limitation any general, special, incidental or consequential damages arising in connection to this license. Notwithstanding the foregoing two (2) sentences, if Creative Commons has expressly identified itself as the Licensor hereunder, it shall have all rights and obligations of Licensor.
- Except for the limited purpose of indicating to the public that the Work is
- licensed under the CCPL, Creative Commons does not authorize the use by
- either party of the trademark "Creative Commons" or any related trademark
- or logo of Creative Commons without the prior written consent of Creative
- Commons. Any permitted use will be in compliance with Creative Commons'
- then-current trademark usage guidelines, as may be published on its website
- or otherwise made available upon request from time to time. For the
- avoidance of doubt, this trademark restriction does not form part of this
- License.
+ Except for the limited purpose of indicating to the public that the Work is licensed under the CCPL, Creative Commons does not authorize the use by either party of the trademark "Creative Commons" or any related trademark or logo of Creative Commons without the prior written consent of Creative Commons. Any permitted use will be in compliance with Creative Commons' then-current trademark usage guidelines, as may be published on its website or otherwise made available upon request from time to time. For the avoidance of doubt, this trademark restriction does not form part of this License.
- Creative Commons may be contacted at http://creativecommons.org/.
+ Creative Commons may be contacted at https://creativecommons.org/.
"git.arvados.org/arvados.git/lib/cmd"
"git.arvados.org/arvados.git/lib/costanalyzer"
"git.arvados.org/arvados.git/lib/deduplicationreport"
+ "git.arvados.org/arvados.git/lib/diagnostics"
"git.arvados.org/arvados.git/lib/mount"
)
"costanalyzer": costanalyzer.Command,
"shell": shellCommand{},
"connect-ssh": connectSSHCommand{},
+ "diagnostics": diagnostics.Command{},
})
)
"strings"
"syscall"
+ "git.arvados.org/arvados.git/lib/cmd"
"git.arvados.org/arvados.git/lib/controller/rpc"
"git.arvados.org/arvados.git/sdk/go/arvados"
)
func (shellCommand) RunCommand(prog string, args []string, stdin io.Reader, stdout, stderr io.Writer) int {
f := flag.NewFlagSet(prog, flag.ContinueOnError)
- f.SetOutput(stderr)
- f.Usage = func() {
- _, prog := filepath.Split(prog)
- fmt.Fprint(stderr, prog+`: open an interactive shell on a running container.
-
-Usage: `+prog+` [options] [username@]container-uuid [ssh-options] [remote-command [args...]]
-
-Options:
-`)
- f.PrintDefaults()
- }
detachKeys := f.String("detach-keys", "ctrl-],ctrl-]", "set detach key sequence, as in docker-attach(1)")
- err := f.Parse(args)
- if err != nil {
- fmt.Fprintln(stderr, err)
- return 2
- }
-
- if f.NArg() < 1 {
- f.Usage()
+ if ok, code := cmd.ParseFlags(f, prog, args, "[username@]container-uuid [ssh-options] [remote-command [args...]]", stderr); !ok {
+ return code
+ } else if f.NArg() < 1 {
+ fmt.Fprintf(stderr, "missing required argument: container-uuid (try -help)\n")
return 2
}
target := f.Args()[0]
// kex_exchange_identification: Connection closed by remote host
// Connection closed by UNKNOWN port 65535
// exit status 255
+ //
+ // In case our target is a container request, the probe also
+ // resolves it to a container, so we don't connect to two
+ // different containers in a race.
+ var probetarget bytes.Buffer
exitcode := connectSSHCommand{}.RunCommand(
"arvados-client connect-ssh",
[]string{"-detach-keys=" + *detachKeys, "-probe-only=true", target},
- &bytes.Buffer{}, &bytes.Buffer{}, stderr)
+ &bytes.Buffer{}, &probetarget, stderr)
if exitcode != 0 {
return exitcode
}
+ target = strings.Trim(probetarget.String(), "\n")
selfbin, err := os.Readlink("/proc/self/exe")
if err != nil {
`)
f.PrintDefaults()
}
- probeOnly := f.Bool("probe-only", false, "do not transfer IO, just exit 0 immediately if tunnel setup succeeds")
+ probeOnly := f.Bool("probe-only", false, "do not transfer IO, just setup tunnel, print target UUID, and exit")
detachKeys := f.String("detach-keys", "", "set detach key sequence, as in docker-attach(1)")
- if err := f.Parse(args); err != nil {
- fmt.Fprintln(stderr, err)
- return 2
+ if ok, code := cmd.ParseFlags(f, prog, args, "[username@]container-uuid", stderr); !ok {
+ return code
} else if f.NArg() != 1 {
- f.Usage()
+ fmt.Fprintf(stderr, "missing required argument: [username@]container-uuid\n")
return 2
}
targetUUID := f.Args()[0]
defer sshconn.Conn.Close()
if *probeOnly {
+ fmt.Fprintln(stdout, targetUUID)
return 0
}
"crypto/hmac"
"crypto/sha256"
"fmt"
+ "io/ioutil"
+ "net"
+ "net/http"
"net/url"
"os"
"os/exec"
+ "strings"
+ "sync"
+ "syscall"
+ "time"
"git.arvados.org/arvados.git/lib/controller/rpc"
"git.arvados.org/arvados.git/lib/crunchrun"
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
check "gopkg.in/check.v1"
)
ContainerUUID: uuid,
Address: "0.0.0.0:0",
AuthSecret: authSecret,
+ // Just forward connections to localhost instead of a
+ // container, so we can test without running a
+ // container.
+ ContainerIPAddress: func() (string, error) { return "0.0.0.0", nil },
}
err := gw.Start()
c.Assert(err, check.IsNil)
c.Check(cmd.Run(), check.NotNil)
c.Log(stderr.String())
c.Check(stderr.String(), check.Matches, `(?ms).*(No such container: theperthcountyconspiracy|exec: \"docker\": executable file not found in \$PATH).*`)
+
+ // Set up an http server, and try using "arvados-client shell"
+ // to forward traffic to it.
+ httpTarget := &httpserver.Server{}
+ httpTarget.Handler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ c.Logf("httpTarget.Handler: incoming request: %s %s", r.Method, r.URL)
+ if r.URL.Path == "/foo" {
+ fmt.Fprintln(w, "bar baz")
+ } else {
+ w.WriteHeader(http.StatusNotFound)
+ }
+ })
+ err = httpTarget.Start()
+ c.Assert(err, check.IsNil)
+
+ ln, err := net.Listen("tcp", ":0")
+ c.Assert(err, check.IsNil)
+ _, forwardedPort, _ := net.SplitHostPort(ln.Addr().String())
+ ln.Close()
+
+ stdout.Reset()
+ stderr.Reset()
+ ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(10*time.Second))
+ defer cancel()
+ cmd = exec.CommandContext(ctx,
+ "go", "run", ".", "shell", uuid,
+ "-L", forwardedPort+":"+httpTarget.Addr,
+ "-o", "controlpath=none",
+ "-o", "userknownhostsfile="+c.MkDir()+"/known_hosts",
+ "-N",
+ )
+ c.Logf("cmd.Args: %s", cmd.Args)
+ cmd.Env = append(cmd.Env, os.Environ()...)
+ cmd.Env = append(cmd.Env, "ARVADOS_API_TOKEN="+arvadostest.ActiveTokenV2)
+ cmd.Stdout = &stdout
+ cmd.Stderr = &stderr
+ cmd.Start()
+
+ forwardedURL := fmt.Sprintf("http://localhost:%s/foo", forwardedPort)
+
+ for range time.NewTicker(time.Second / 20).C {
+ resp, err := http.Get(forwardedURL)
+ if err != nil {
+ if !strings.Contains(err.Error(), "connect") {
+ c.Fatal(err)
+ } else if ctx.Err() != nil {
+ if cmd.Process.Signal(syscall.Signal(0)) != nil {
+ c.Error("OpenSSH exited")
+ } else {
+ c.Errorf("timed out trying to connect: %s", err)
+ }
+ c.Logf("OpenSSH stdout:\n%s", stdout.String())
+ c.Logf("OpenSSH stderr:\n%s", stderr.String())
+ c.FailNow()
+ }
+ // Retry until OpenSSH starts listening
+ continue
+ }
+ c.Check(resp.StatusCode, check.Equals, http.StatusOK)
+ body, err := ioutil.ReadAll(resp.Body)
+ c.Check(err, check.IsNil)
+ c.Check(string(body), check.Equals, "bar baz\n")
+ break
+ }
+
+ var wg sync.WaitGroup
+ for i := 0; i < 10; i++ {
+ wg.Add(1)
+ go func() {
+ defer wg.Done()
+ resp, err := http.Get(forwardedURL)
+ if !c.Check(err, check.IsNil) {
+ return
+ }
+ body, err := ioutil.ReadAll(resp.Body)
+ c.Check(err, check.IsNil)
+ c.Check(string(body), check.Equals, "bar baz\n")
+ }()
+ }
+ wg.Wait()
}
func main() {
if len(os.Args) < 2 || strings.HasPrefix(os.Args[1], "-") {
- parseFlags([]string{"-help"})
+ parseFlags(os.Args[0], []string{"-help"}, os.Stderr)
os.Exit(2)
}
os.Exit(handler.RunCommand(os.Args[0], os.Args[1:], os.Stdin, os.Stdout, os.Stderr))
func (cf cmdFunc) RunCommand(prog string, args []string, stdin io.Reader, stdout, stderr io.Writer) int {
logger := ctxlog.New(stderr, "text", "info")
ctx := ctxlog.Context(context.Background(), logger)
- opts, err := parseFlags(args)
- if err != nil {
- logger.WithError(err).Error("error parsing command line flags")
- return 1
+ opts, ok, code := parseFlags(prog, args, stderr)
+ if !ok {
+ return code
}
- err = cf(ctx, opts, stdin, stdout, stderr)
+ err := cf(ctx, opts, stdin, stdout, stderr)
if err != nil {
logger.WithError(err).Error("failed")
return 1
Vendor string
}
-func parseFlags(args []string) (opts, error) {
+func parseFlags(prog string, args []string, stderr io.Writer) (_ opts, ok bool, exitCode int) {
opts := opts{
SourceDir: ".",
TargetOS: "debian:10",
`)
flags.PrintDefaults()
}
- err := flags.Parse(args)
- if err != nil {
- return opts, err
- }
- if len(flags.Args()) > 0 {
- return opts, fmt.Errorf("unrecognized command line arguments: %v", flags.Args())
+ if ok, code := cmd.ParseFlags(flags, prog, args, "", stderr); !ok {
+ return opts, false, code
}
if opts.SourceDir == "" {
d, err := os.Getwd()
if err != nil {
- return opts, fmt.Errorf("Getwd: %w", err)
+ fmt.Fprintf(stderr, "error getting current working directory: %s\n", err)
+ return opts, false, 1
}
opts.SourceDir = d
}
opts.PackageDir = filepath.Clean(opts.PackageDir)
- opts.SourceDir, err = filepath.Abs(opts.SourceDir)
+ abs, err := filepath.Abs(opts.SourceDir)
if err != nil {
- return opts, err
+ fmt.Fprintf(stderr, "error resolving source dir %q: %s\n", opts.SourceDir, err)
+ return opts, false, 1
}
- return opts, nil
+ opts.SourceDir = abs
+ return opts, true, 0
}
opts.TargetOS,
"bash", "-c", `
set -e -o pipefail
-apt-get update
+apt-get --allow-releaseinfo-change update
apt-get install -y --no-install-recommends dpkg-dev eatmydata
mkdir /tmp/pkg
ln -s /pkg/*.deb /tmp/pkg/
(cd /tmp/pkg; dpkg-scanpackages --multiversion . | gzip > Packages.gz)
echo >/etc/apt/sources.list.d/arvados-local.list "deb [trusted=yes] file:///tmp/pkg ./"
-apt-get update
+apt-get --allow-releaseinfo-change update
eatmydata apt-get install -y --no-install-recommends arvados-server-easy postgresql
eatmydata apt-get remove -y dpkg-dev
"bash", "-c", `
set -e -o pipefail
PATH="/var/lib/arvados/bin:$PATH"
-apt-get update
+apt-get --allow-releaseinfo-change update
apt-get install -y --no-install-recommends dpkg-dev
mkdir /tmp/pkg
ln -s /pkg/*.deb /tmp/pkg/
echo
echo >/etc/apt/sources.list.d/arvados-local.list "deb [trusted=yes] file:///tmp/pkg ./"
-apt-get update
+apt-get --allow-releaseinfo-change update
eatmydata apt-get install --reinstall -y --no-install-recommends arvados-server-easy`+versionsuffix+`
SUDO_FORCE_REMOVE=yes apt-get autoremove -y
After=network.target
AssertPathExists=/etc/arvados/config.yml
-# systemd==229 (ubuntu:xenial) obeys StartLimitInterval in the [Unit] section
-StartLimitInterval=0
-
# systemd>=230 (debian:9) obeys StartLimitIntervalSec in the [Unit] section
StartLimitIntervalSec=0
After=network.target
AssertPathExists=/etc/arvados/config.yml
-# systemd==229 (ubuntu:xenial) obeys StartLimitInterval in the [Unit] section
-StartLimitInterval=0
-
# systemd>=230 (debian:9) obeys StartLimitIntervalSec in the [Unit] section
StartLimitIntervalSec=0
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+[Unit]
+Description=arvados-dispatch-lsf
+Documentation=https://doc.arvados.org/
+After=network.target
+AssertPathExists=/etc/arvados/config.yml
+
+# systemd>=230 (debian:9) obeys StartLimitIntervalSec in the [Unit] section
+StartLimitIntervalSec=0
+
+[Service]
+Type=notify
+EnvironmentFile=-/etc/arvados/environment
+ExecStart=/usr/bin/arvados-dispatch-lsf
+# Set a reasonable default for the open file limit
+LimitNOFILE=65536
+Restart=always
+RestartSec=1
+
+# systemd<=219 (centos:7, debian:8, ubuntu:trusty) obeys StartLimitInterval in the [Service] section
+StartLimitInterval=0
+
+[Install]
+WantedBy=multi-user.target
After=network.target
AssertPathExists=/etc/arvados/config.yml
-# systemd==229 (ubuntu:xenial) obeys StartLimitInterval in the [Unit] section
-StartLimitInterval=0
-
# systemd>=230 (debian:9) obeys StartLimitIntervalSec in the [Unit] section
StartLimitIntervalSec=0
"git.arvados.org/arvados.git/lib/crunchrun"
"git.arvados.org/arvados.git/lib/dispatchcloud"
"git.arvados.org/arvados.git/lib/install"
+ "git.arvados.org/arvados.git/lib/lsf"
"git.arvados.org/arvados.git/lib/recovercollection"
+ "git.arvados.org/arvados.git/services/keepstore"
"git.arvados.org/arvados.git/services/ws"
)
"controller": controller.Command,
"crunch-run": crunchrun.Command,
"dispatch-cloud": dispatchcloud.Command,
+ "dispatch-lsf": lsf.DispatchCommand,
"install": install.Command,
"init": install.InitCommand,
+ "keepstore": keepstore.Command,
"recover-collection": recovercollection.Command,
"ws": ws.Command,
})
Description=Arvados Keep Storage Daemon
Documentation=https://doc.arvados.org/
After=network.target
-
-# systemd==229 (ubuntu:xenial) obeys StartLimitInterval in the [Unit] section
-StartLimitInterval=0
+AssertPathExists=/etc/arvados/config.yml
# systemd>=230 (debian:9) obeys StartLimitIntervalSec in the [Unit] section
StartLimitIntervalSec=0
# generating the documentation for the SDKs, which (the R docs
# especially) take a fair bit of time and slow down the edit-preview
# cycle.
+#
+# To generate and view the documentation locally, run this command
+#
+# rake && sensible-browser .site/index.html
+#
+# Or alternatively:
+#
+# baseurl=http://localhost:8000 rake && rake run
+#
+# and then visit http://localhost:8000 in a browser.
require "rubygems"
require "colorize"
Dir.chdir(".site") do
`which linkchecker`
if $? == 0
- system "linkchecker index.html --ignore-url='!file://'" or exit $?.exitstatus
+ # we need --check-extern to check relative links, weird but true
+ system "linkchecker index.html --check-extern --ignore-url='!file://'" or exit $?.exitstatus
else
puts "Warning: linkchecker not found, skipping run".colorize(:light_red)
end
- user/cwl/cwl-extensions.html.textile.liquid
- user/cwl/federated-workflows.html.textile.liquid
- user/cwl/cwl-versions.html.textile.liquid
+ - user/cwl/crunchstat-summary.html.textile.liquid
+ - user/cwl/costanalyzer.html.textile.liquid
+ - user/debugging/container-shell-access.html.textile.liquid
- Working with git repositories:
- user/tutorials/add-new-repository.html.textile.liquid
- user/tutorials/git-arvados-guide.html.textile.liquid
- sdk/java-v2/index.html.textile.liquid
- sdk/java-v2/example.html.textile.liquid
- sdk/java-v2/javadoc.html.textile.liquid
- - Java v1:
- - sdk/java/index.html.textile.liquid
- - sdk/java/example.html.textile.liquid
- Perl:
- sdk/perl/index.html.textile.liquid
- sdk/perl/example.html.textile.liquid
- api/keep-webdav.html.textile.liquid
- api/keep-s3.html.textile.liquid
- api/keep-web-urls.html.textile.liquid
+ - api/projects.html.textile.liquid
- api/methods/collections.html.textile.liquid
- api/methods/repositories.html.textile.liquid
- Container engine:
- architecture/manifest-format.html.textile.liquid
- Computation with Crunch:
- api/execution.html.textile.liquid
+ - architecture/dispatchcloud.html.textile.liquid
+ - architecture/singularity.html.textile.liquid
- Other:
- api/permission-model.html.textile.liquid
- architecture/federation.html.textile.liquid
- Data Management:
- admin/collection-versioning.html.textile.liquid
- admin/collection-managed-properties.html.textile.liquid
+ - admin/restricting-upload-download.html.textile.liquid
- admin/keep-balance.html.textile.liquid
- admin/controlling-container-reuse.html.textile.liquid
- admin/logs-table-management.html.textile.liquid
- - admin/workbench2-vocabulary.html.textile.liquid
+ - admin/metadata-vocabulary.html.textile.liquid
- admin/storage-classes.html.textile.liquid
- admin/keep-recovering-data.html.textile.liquid
+ - admin/keep-measuring-deduplication.html.textile.liquid
- Cloud:
- admin/spot-instances.html.textile.liquid
- admin/cloudtest.html.textile.liquid
- install/config.html.textile.liquid
- admin/config-migration.html.textile.liquid
- admin/config.html.textile.liquid
+ - admin/config-urls.html.textile.liquid
- Core:
- install/install-api-server.html.textile.liquid
- Keep:
- install/install-shell-server.html.textile.liquid
- install/install-webshell.html.textile.liquid
- install/install-arv-git-httpd.html.textile.liquid
- - Containers API (cloud):
+ - Containers API (all):
- install/install-jobs-image.html.textile.liquid
+ - Containers API (cloud):
- install/crunch2-cloud/install-compute-node.html.textile.liquid
- install/crunch2-cloud/install-dispatch-cloud.html.textile.liquid
- - Containers API (slurm):
+ - Compute nodes (Slurm or LSF):
+ - install/crunch2/install-compute-node-docker.html.textile.liquid
+ - install/crunch2/install-compute-node-singularity.html.textile.liquid
+ - Containers API (Slurm):
- install/crunch2-slurm/install-dispatch.html.textile.liquid
- install/crunch2-slurm/configure-slurm.html.textile.liquid
- - install/crunch2-slurm/install-compute-node.html.textile.liquid
- install/crunch2-slurm/install-test.html.textile.liquid
+ - Containers API (LSF):
+ - install/crunch2-lsf/install-dispatch.html.textile.liquid
+ - Additional configuration:
+ - install/container-shell-access.html.textile.liquid
- External dependencies:
- install/install-postgresql.html.textile.liquid
- install/ruby.html.textile.liquid
--- /dev/null
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+{% if site.current_version and site.current_version != 'main' %}
+{% assign branchname = site.current_version | slice: 0, 3 | append: '-dev' %}
+{% else %}
+{% assign branchname = 'main' %}
+{% endif %}
|vcpus|integer|Number of cores to be used to run this process.|Optional. However, a ContainerRequest that is in "Committed" state must provide this.|
|keep_cache_ram|integer|Number of keep cache bytes to be used to run this process.|Optional.|
|API|boolean|When set, ARVADOS_API_HOST and ARVADOS_API_TOKEN will be set, and container will have networking enabled to access the Arvados API server.|Optional.|
+|cuda_driver_version|string|Minimum CUDA driver version.|Optional.|
+|cuda_hardware_capability|string|Minimum CUDA hardware capability.|Optional.|
+|cuda_device_count|integer|Number of GPUs to request.|Optional.|
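Purely as an illustration (the values are hypothetical), a @runtime_constraints@ block using the keys from the table above might look like:

```json
{
  "runtime_constraints": {
    "vcpus": 2,
    "keep_cache_ram": 268435456,
    "API": true,
    "cuda_driver_version": "11.0",
    "cuda_hardware_capability": "7.5",
    "cuda_device_count": 1
  }
}
```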
h2. Scheduling parameters
-Parameters to be passed to the container scheduler (e.g., SLURM) when running a container.
-Parameters to be passed to the container scheduler (e.g., SLURM) when running a container.
+Parameters to be passed to the container scheduler (e.g., Slurm) when running a container.
table(table table-bordered table-condensed).
|_. Key|_. Type|_. Description|_. Notes|
<notextile>
<pre><code>~$ <span class="userinput">cd /var/www/arvados-api/current</span>
-$ <span class="userinput">sudo -u <b>webserver-user</b> RAILS_ENV=production bundle exec script/create_superuser_token.rb</span>
+$ <span class="userinput">sudo -u <b>webserver-user</b> RAILS_ENV=production bin/bundle exec script/create_superuser_token.rb</span>
zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz
</code></pre>
</notextile>
<pre><code>pub rsa2048 2010-11-15 [SC]
B2DA 2991 656E B4A5 0314 CA2B 5716 5911 1078 ECD7
uid [ unknown] Arvados Automatic Signing Key <sysadmin@arvados.org>
-uid [ unknown] Curoverse, Inc Automatic Signing Key <sysadmin@curoverse.com>
sub rsa2048 2010-11-15 [E]
</code></pre>
</notextile>
h3. Changing ulimits
Docker containers inherit ulimits from the Docker daemon. However, the ulimits for a single Unix daemon may not accommodate a long-running Crunch job. You may want to increase default limits for compute containers by passing @--default-ulimit@ options to the Docker daemon. For example, to allow containers to open 10,000 files, set @--default-ulimit nofile=10000:10000@.
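As a sketch (the path and limits are illustrative, not required values), the same daemon-wide default can be set in @/etc/docker/daemon.json@ instead of passing @--default-ulimit@ on the command line; restart the Docker daemon after editing:

```json
{
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Soft": 10000,
      "Hard": 10000
    }
  }
}
```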
+
+h2. Troubleshooting
+
+h3. Workflows fail with @ValidationException: Not found: '/var/lib/cwl/workflow.json#main'@
+
+A possible configuration error is having Docker installed as a @snap@ package rather than a @deb@ package. This is a problem because @snap@ packages are partially containerized and may have a different view of the filesystem than @crunch-run@. This produces confusing problems; for example, directory bind mounts sent to Docker may appear empty (instead of containing the intended files), resulting in unexpected "file not found" errors.
+
+To check for this situation, run @snap list@ and look for @docker@. If found, run @snap remove docker@ and follow the instructions above to "install Docker Engine":#install_docker .
--- /dev/null
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+If you plan to use custom certificates, set the variable <i>USE_LETSENCRYPT=no</i> and copy your certificates to the directory specified with the variable @CUSTOM_CERTS_DIR@ (usually "./certs") in the remote directory where you copied the @provision.sh@ script. From this directory, the provision script will install the certificates required for the role you're installing.
+
+The script expects cert/key files with these basenames (matching the role name, except for <i>keepweb</i>, which is split into <i>download</i> and <i>collections</i>):
+
+* "controller"
+* "websocket"
+* "workbench"
+* "workbench2"
+* "webshell"
+* "download" # Part of keepweb
+* "collections" # Part of keepweb
+* "keepproxy"
+
+I.e., for 'keepproxy', the script will look for
+
+<notextile>
+<pre><code>${CUSTOM_CERTS_DIR}/keepproxy.crt
+${CUSTOM_CERTS_DIR}/keepproxy.key
+</code></pre>
+</notextile>
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
+<notextile>
+<pre><code># <span class="userinput">apt-get --no-install-recommends install curl gnupg2 ca-certificates</span>
+# <span class="userinput">curl https://apt.arvados.org/pubkey.gpg -o /etc/apt/trusted.gpg.d/arvados.asc</span>
+</code></pre>
+</notextile>
+The Arvados package signing GPG key is also available via the keyservers, though they can be unreliable. To retrieve the signing key via keyserver.ubuntu.com:
<notextile>
-<pre><code># <span class="userinput">apt-get --no-install-recommends install gnupg</span>
-# <span class="userinput">/usr/bin/apt-key adv --keyserver pool.sks-keyservers.net --recv 1078ECD7</span>
-</code></pre>
# <span class=">
+<pre><code># <span class="userinput">/usr/bin/apt-key adv --keyserver keyserver.ubuntu.com --recv 1078ECD7</span></code></pre>
</notextile>
This template recognizes four variables:
* railshost: The hostname included in the prompt, to let the user know where to run the command. If this is the empty string, no hostname will be displayed. Default "apiserver".
* railsdir: The directory included in the prompt, to let the user know where to run the command. Default "/var/www/arvados-api/current".
-* railscmd: The full command to run. Default "bundle exec rails console".
+* railscmd: The full command to run. Default "bin/rails console".
* railsout: The expected output of the command, if any.
{% endcomment %} Change *@webserver-user@* to the user that runs your web server process. If you install Phusion Passenger as we recommend, this is *@www-data@* on Debian-based systems, and *@nginx@* on Red Hat-based systems.
{% endunless %}
{% unless railscmd %}
- {% assign railscmd = "bundle exec rails console" %}
+ {% assign railscmd = "bin/rails console" %}
{% endunless %}
<notextile>
h3. Debian and Ubuntu
-Ubuntu 16.04 (xenial) ships with Ruby 2.3, which is not supported by Arvados. Use "RVM":#rvm to install Ruby 2.5 or later.
-
-Debian 10 (buster) and Ubuntu 18.04 (bionic) and later ship with Ruby 2.5, which is supported by Arvados.
+Debian 10 (buster) and Ubuntu 18.04 (bionic) and later ship with Ruby 2.5 or newer, which is sufficient for Arvados.
<notextile>
-<pre><code># <span class="userinput">apt-get --no-install-recommends install ruby ruby-dev bundler</span></code></pre>
+<pre><code># <span class="userinput">apt-get --no-install-recommends install ruby ruby-dev</span></code></pre>
</notextile>
h2(#rvm). Option 2: Install with RVM
apt-get --no-install-recommends install gpg curl
</pre>
-h3. Install RVM
+h3. Install RVM, Ruby and Bundler
<notextile>
-<pre><code># <span class="userinput">gpg --keyserver pool.sks-keyservers.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB
+<pre><code># <span class="userinput">gpg --keyserver pgp.mit.edu --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB
\curl -sSL https://get.rvm.io | bash -s stable --ruby=2.5
</span></code></pre></notextile>
+This command installs the latest Ruby 2.5.x release, as well as the @gem@ and @bundle@ commands.
+
To use Ruby installed from RVM, load it in an open shell like this:
<notextile>
-<pre><code><span class="userinput">. /usr/local/rvm/scripts/rvm
+<pre><code><span class="userinput">source /usr/local/rvm/scripts/rvm
</span></code></pre></notextile>
Alternately you can use @rvm-exec@ (the first parameter is the ruby version to use, or "default"), for example:
<notextile>
-<pre><code><span class="userinput">rvm-exec default rails console
+<pre><code><span class="userinput">rvm-exec default ruby -v
</span></code></pre></notextile>
-Finally, install Bundler:
-
-<notextile>
-<pre><code>~$ <span class="userinput">gem install bundler</span>
-</code></pre></notextile>
-
h2(#fromsource). Option 3: Install from source
-Install prerequisites for Debian 10:
+Install prerequisites for Debian 10, Ubuntu 18.04 and Ubuntu 20.04:
<notextile>
<pre><code><span class="userinput">sudo apt-get install \
make automake libtool bison sqlite-devel tar
</span></code></pre></notextile>
-Install prerequisites for Ubuntu 16.04:
-
-<notextile>
-<pre><code><span class="userinput">sudo apt-get install \
- bison build-essential gettext libcurl3 \
- libcurl3-openssl-dev libpcre3-dev libreadline-dev \
- libssl-dev libxslt1.1 zlib1g-dev
-</span></code></pre></notextile>
-
Build and install Ruby:
<notextile>
<pre><code><span class="userinput">mkdir -p ~/src
cd ~/src
-curl -f http://cache.ruby-lang.org/pub/ruby/2.5/ruby-2.5.5.tar.gz | tar xz
-cd ruby-2.5.5
+curl -f http://cache.ruby-lang.org/pub/ruby/2.5/ruby-2.5.8.tar.gz | tar xz
+cd ruby-2.5.8
./configure --disable-install-rdoc
make
sudo make install
+# Make sure the post install script can find the gem and ruby executables
+sudo ln -s /usr/local/bin/gem /usr/bin/gem
+sudo ln -s /usr/local/bin/ruby /usr/bin/ruby
+# Install bundler
sudo -i gem install bundler</span>
</code></pre></notextile>
-{
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}{
"strict_tags": false,
"tags": {
"IDTAGANIMALS": {
--- /dev/null
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+h2(#singularity_mksquashfs_configuration). Singularity mksquashfs configuration
+
+{% if show_docker_warning != nil %}
+{% include 'notebox_begin_warning' %}
+This section is only relevant when using Singularity. Skip this section when using Docker.
+{% include 'notebox_end' %}
+{% endif %}
+
+Docker images are converted on the fly by @mksquashfs@, which can consume a considerable amount of RAM. The RAM usage of mksquashfs can be restricted in @/etc/singularity/singularity.conf@ with a line like @mksquashfs mem = 256M@. The amount of memory made available for mksquashfs should be configured lower than the smallest amount of memory requested by a container on the cluster to avoid the conversion being killed for using too much memory. The default memory allocation in CWL is 256M, so that is also a good choice for the @mksquashfs mem@ setting.
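For example (paths per the paragraph above; pick a value no larger than your cluster's smallest container memory request), the relevant line in @/etc/singularity/singularity.conf@ would be:

```ini
# Cap RAM used by mksquashfs during on-the-fly image conversion
mksquashfs mem = 256M
```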
{% endcomment %}
{% include 'notebox_begin' %}
-This tutorial assumes that you have access to the "Arvados command line tools":/user/getting_started/setup-cli.html and have set the "API token":{{site.baseurl}}/user/reference/api-tokens.html and confirmed a "working environment.":{{site.baseurl}}/user/getting_started/check-environment.html .
+This tutorial assumes that you have access to the "Arvados command line tools":{{ site.baseurl }}/user/getting_started/setup-cli.html and have set the "API token":{{site.baseurl}}/user/reference/api-tokens.html and confirmed a "working environment.":{{site.baseurl}}/user/getting_started/check-environment.html .
{% include 'notebox_end' %}
If there's a need to prevent a non-admin user from modifying a specific property, even by its owner, the @Protected@ attribute can be set to @true@, like so:
+<pre>
+Collections:
+ ManagedProperties:
+ sample_id: {Protected: true}
+</pre>
+
+This configuration won't assign a @sample_id@ property on collection creation, but if the user adds it to any collection, its value is protected from that point on.
+
+Another use case would be to protect properties that were automatically assigned by the system:
+
<pre>
Collections:
ManagedProperties:
responsible_person_uuid: {Function: original_owner, Protected: true}
</pre>
-This property can be applied to any of the defined managed properties. If missing, it's assumed as being @false@ by default.
+If missing, the @Protected@ attribute is assumed to be @false@ by default.
h3. Supporting example scripts
# is older than the amount of seconds defined on PreserveVersionIfIdle,
# a snapshot of the collection's previous state is created and linked to
# the current collection.
- CollectionVersioning: false
+ CollectionVersioning: true
# This setting controls the auto-save aspect of collection versioning, and can be set to:
# 0s = auto-create a new version on every update.
# -1s = never auto-create new versions.
# > 0s = auto-create a new version when older than the specified number of seconds.
- PreserveVersionIfIdle: -1s
+ PreserveVersionIfIdle: 10s
</pre>
Note that if you set @CollectionVersioning@ to @false@ after being enabled, old versions will still be accessible, but further changes will not be versioned.
Change to the API server directory and use the following commands:
<pre>
-$ RAILS_ENV=production bundle exec rake config:migrate > config.yml
+$ RAILS_ENV=production bin/rake config:migrate > config.yml
$ cp config.yml /etc/arvados/config.yml
</pre>
If you wish to update @config.yml@ configuration by hand, or check that everything has been migrated, use @config:diff@ to print configuration items that differ between @application.yml@ and the system @config.yml@.
<pre>
-$ RAILS_ENV=production bundle exec rake config:diff
+$ RAILS_ENV=production bin/rake config:diff
</pre>
This command will also report if no migrations are required.
Change to the workbench server directory and use the following commands:
<pre>
-$ RAILS_ENV=production bundle exec rake config:migrate > config.yml
+$ RAILS_ENV=production bin/rake config:migrate > config.yml
$ cp config.yml /etc/arvados/config.yml
</pre>
If you wish to update @config.yml@ configuration by hand, or check that everything has been migrated, use @config:diff@ to print configuration items that differ between @application.yml@ and the system @config.yml@.
<pre>
-$ RAILS_ENV=production bundle exec rake config:diff
+$ RAILS_ENV=production bin/rake config:diff
</pre>
This command will also report if no migrations are required.
# After applying changes, re-run @arvados-server config-check@ again to check for additional warnings and recommendations.
# When you are satisfied, delete the legacy config file, restart the service, and check its startup logs.
# Copy the updated @config.yml@ file to your next node, and repeat the process there.
+# When you have a @config.yml@ file that includes all volumes on all keepstores, it is important to add a @Rendezvous@ parameter to the InternalURLs entries to make sure the old volume identifiers line up with the new config. If you don't do this, @keep-balance@ will want to shuffle all the existing data around to match the new volume order. The @Rendezvous@ value should be the last 15 characters of the keepstore's UUID in the old configuration. Here's an example:
+
+<notextile>
+<pre><code>Clusters:
+ xxxxx:
+ Services:
+ Keepstore:
+ InternalURLs:
+ "http://keep1.xxxxx.arvadosapi.com:25107": {Rendezvous: "eim6eefaibesh3i"}
+ "http://keep2.xxxxx.arvadosapi.com:25107": {Rendezvous: "yequoodalai7ahg"}
+ "http://keep3.xxxxx.arvadosapi.com:25107": {Rendezvous: "eipheho6re1shou"}
+ "http://keep4.xxxxx.arvadosapi.com:25107": {Rendezvous: "ahk7chahthae3oo"}
+</code></pre>
+</notextile>
+
+In this example, the keepstore with the name @keep1@ had the UUID @xxxxx-bi6l4-eim6eefaibesh3i@ in the old configuration.
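
The @Rendezvous@ value can be derived mechanically from the old UUID. A small shell sketch, using the example UUID above:

```shell
# A keepstore UUID from the old configuration (the example from the text above).
uuid="xxxxx-bi6l4-eim6eefaibesh3i"

# The Rendezvous value is the last 15 characters of that UUID.
rendezvous=$(printf '%s' "$uuid" | tail -c 15)
echo "$rendezvous"   # → eim6eefaibesh3i
```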
After migrating and removing all legacy config files, make sure the @/etc/arvados/config.yml@ file is identical across all system nodes -- API server, keepstore, etc. -- and restart all services to make sure they are using the latest configuration.
--- /dev/null
+---
+layout: default
+navsection: installguide
+title: InternalURLs and ExternalURL
+...
+
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+The Arvados configuration is stored at @/etc/arvados/config.yml@. See the "Configuration reference":config.html for more detail.
+
+The @Services@ section lists a number of Arvados services, each with an @InternalURLs@ and/or @ExternalURL@ configuration key. This document explains the precise meaning of these configuration keys, and how they are used by the Arvados services.
+
+The @ExternalURL@ is the address where the service should be reachable by clients, both from inside and from outside the Arvados cluster. Some services do not expose an Arvados API, only Prometheus metrics. In that case, @ExternalURL@ is not used.
+
+The keys under @InternalURLs@ are addresses that are used by the reverse proxy (e.g. Nginx) that fronts Arvados services. The exception is the @Keepstore@ service, where clients connect directly to the addresses listed under @InternalURLs@. If a service is not fronted by a reverse proxy, e.g. when its endpoint only exposes Prometheus metrics, the intention is that metrics are collected directly from the endpoints defined in @InternalURLs@.
+
+@InternalURLs@ are also used by the service itself to figure out which address/port to listen on.
+
+If the Arvados service lives behind a reverse proxy (e.g. Nginx), configuring the reverse proxy and the @InternalURLs@ and @ExternalURL@ values must be done in concert.
+
+h2. Overview
+
+<div class="offset1">
+table(table table-bordered table-condensed).
+|_.Service |_.ExternalURL required? |_.InternalURLs required?|_.InternalURLs must be reachable from other cluster nodes?|_.Note|
+|railsapi |no |yes|no ^1^|InternalURLs only used by Controller|
+|controller |yes |yes|no ^2^|InternalURLs only used by reverse proxy (e.g. Nginx)|
+|arvados-dispatch-cloud|no |yes|no ^3^|InternalURLs only used to expose Prometheus metrics|
+|arvados-dispatch-lsf|no |yes|no ^3^|InternalURLs only used to expose Prometheus metrics|
+|git-http |yes |yes|no ^2^|InternalURLs only used by reverse proxy (e.g. Nginx)|
+|git-ssh |yes |no |no ||
+|keepproxy |yes |yes|no ^2^|InternalURLs only used by reverse proxy (e.g. Nginx)|
+|keepstore |no |yes|yes |All clients connect to InternalURLs|
+|keep-balance |no |yes|no ^3^|InternalURLs only used to expose Prometheus metrics|
+|keep-web |yes |yes|no ^2^|InternalURLs only used by reverse proxy (e.g. Nginx)|
+|websocket |yes |yes|no ^2^|InternalURLs only used by reverse proxy (e.g. Nginx)|
+|workbench1 |yes |no|no ||
+|workbench2 |yes |no|no ||
+</div>
+
+^1^ If @Controller@ runs on a different host than @RailsAPI@, the @InternalURLs@ will need to be reachable from the host that runs @Controller@.
+^2^ If the reverse proxy (e.g. Nginx) does not run on the same host as the Arvados service it fronts, the @InternalURLs@ will need to be reachable from the host that runs the reverse proxy.
+^3^ If the Prometheus metrics are not collected from the same machine that runs the service, the @InternalURLs@ will need to be reachable from the host that collects the metrics.
+
+When @InternalURLs@ do not need to be reachable from other nodes, it is most secure to use loopback addresses as @InternalURLs@, e.g. @http://127.0.0.1:9005@.
+
+It is recommended to use a split-horizon DNS setup where the hostnames specified in @ExternalURL@ resolve to an internal IP address from inside the Arvados cluster, and a publicly routed external IP address when resolved from outside the cluster. This simplifies firewalling and provides optimally efficient traffic routing. In a cloud environment where traffic that flows via public IP addresses is charged, using split-horizon DNS can also avoid unnecessary expense.
+
+h2. Examples
+
+The remainder of this document walks through a number of examples to provide more detail.
+
+h3. Keep-balance
+
+Consider this section for the @Keep-balance@ service:
+
+{% codeblock as yaml %}
+ Keepbalance:
+ InternalURLs:
+ "http://ip-10-0-1-233.internal:9005/": {}
+{% endcodeblock %}
+
+@Keep-balance@ has an API endpoint, but it is only used to expose "Prometheus":https://prometheus.io metrics.
+
+There is no @ExternalURL@ key because @Keep-balance@ does not expose an Arvados API, so no Arvados services need to connect to @Keep-balance@.
+
+The value for @InternalURLs@ tells the @Keep-balance@ service to start up and listen on port 9005, if it is started on a host where @ip-10-0-1-233.internal@ resolves to a local IP address. If @Keep-balance@ is started on a machine where the @ip-10-0-1-233.internal@ hostname does not resolve to a local IP address, it would refuse to start up, because it would not be able to find a local IP address to listen on.
+
+It is also possible to use IP addresses in @InternalURLs@, for example:
+
+{% codeblock as yaml %}
+ Keepbalance:
+ InternalURLs:
+ "http://127.0.0.1:9005/": {}
+{% endcodeblock %}
+
+In this example, @Keep-balance@ would start up and listen on port 9005 at the @127.0.0.1@ IP address. Prometheus would only be able to access the @Keep-balance@ metrics if it could reach that IP and port, e.g. if it runs on the same machine.
+
+Finally, it is also possible to listen on all interfaces, for example:
+
+{% codeblock as yaml %}
+ Keepbalance:
+ InternalURLs:
+ "http://0.0.0.0:9005/": {}
+{% endcodeblock %}
+
+In this case, @Keep-balance@ will listen on port 9005 on all IP addresses local to the machine.
+
+h3. Keepstore
+
+Consider this section for the @Keepstore@ service:
+
+{% codeblock as yaml %}
+ Keepstore:
+ InternalURLs:
+ "http://keep0.ClusterID.example.com:25107": {}
+ "http://keep1.ClusterID.example.com:25107": {}
+{% endcodeblock %}
+
+There is no @ExternalURL@ key because @Keepstore@ is only accessed from inside the Arvados cluster. For access from outside, all traffic goes via @Keepproxy@.
+
+When @Keepstore@ is installed on the host where @keep0.ClusterID.example.com@ resolves to a local IP address, it will listen on port 25107 on that IP address. Likewise on the @keep1.ClusterID.example.com@ host. On all other systems, @Keepstore@ will refuse to start.
+
+h3. Keepproxy
+
+Consider this section for the @Keepproxy@ service:
+
+{% codeblock as yaml %}
+ Keepproxy:
+ ExternalURL: https://keep.ClusterID.example.com
+ InternalURLs:
+ "http://localhost:25107": {}
+{% endcodeblock %}
+
+The @ExternalURL@ advertised is @https://keep.ClusterID.example.com@. The @Keepproxy@ service will start up on @localhost@ port 25107, however. This is possible because we also configure Nginx to terminate SSL and sit in front of the @Keepproxy@ service:
+
+<notextile><pre><code>upstream keepproxy {
+ server 127.0.0.1:<span class="userinput">25107</span>;
+}
+
+server {
+ listen 443 ssl;
+ server_name <span class="userinput">keep.ClusterID.example.com</span>;
+
+ proxy_connect_timeout 90s;
+ proxy_read_timeout 300s;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_http_version 1.1;
+ proxy_request_buffering off;
+ proxy_max_temp_file_size 0;
+
+ ssl_certificate <span class="userinput">/YOUR/PATH/TO/cert.pem</span>;
+ ssl_certificate_key <span class="userinput">/YOUR/PATH/TO/cert.key</span>;
+
+ # Clients need to be able to upload blocks of data up to 64MiB in size.
+ client_max_body_size 64m;
+
+ location / {
+ proxy_pass http://keepproxy;
+ }
+}
+</code></pre></notextile>
+
+If a client connects to the @Keepproxy@ service, it will talk to Nginx which will reverse proxy the traffic to the @Keepproxy@ service.
+
+h3. Workbench
+
+Consider this section for the @Workbench@ service:
+
+{% codeblock as yaml %}
+ Workbench1:
+ ExternalURL: "https://workbench.ClusterID.example.com"
+{% endcodeblock %}
+
+The @ExternalURL@ advertised is @https://workbench.ClusterID.example.com@. There is no value for @InternalURLs@ because Workbench1 is a Rails application served by Passenger. The only client connecting to the Passenger process is the reverse proxy (e.g. Nginx), and the listening host/port is configured in its configuration:
+
+<notextile><pre><code>
+server {
+ listen 443 ssl;
+ server_name workbench.ClusterID.example.com;
+
+ ssl_certificate /YOUR/PATH/TO/cert.pem;
+ ssl_certificate_key /YOUR/PATH/TO/cert.key;
+
+ root /var/www/arvados-workbench/current/public;
+ index index.html;
+
+ passenger_enabled on;
+ # If you're using RVM, uncomment the line below.
+ #passenger_ruby /usr/local/rvm/wrappers/default/ruby;
+
+ # `client_max_body_size` should match the corresponding setting in
+  # API.MaxRequestSize in config.yml and in the Controller's Nginx configuration.
+ client_max_body_size 128m;
+}
+</code></pre></notextile>
+
+h3. API server
+
+Consider this section for the @RailsAPI@ service:
+
+{% codeblock as yaml %}
+ RailsAPI:
+ InternalURLs:
+ "http://localhost:8004": {}
+{% endcodeblock %}
+
+There is no @ExternalURL@ defined because the @RailsAPI@ is not directly accessible and does not need to advertise a URL: all traffic to it flows via @Controller@, which is the only client that talks to it.
+
+The @RailsAPI@ service is also a Rails application, and its listening host/port is defined in the Nginx configuration:
+
+<notextile><pre><code>
+server {
+ # This configures the Arvados API server. It is written using Ruby
+ # on Rails and uses the Passenger application server.
+
+ listen localhost:8004;
+ server_name localhost-api;
+
+ root /var/www/arvados-api/current/public;
+ index index.html index.htm index.php;
+
+ passenger_enabled on;
+
+ # If you are using RVM, uncomment the line below.
+ # If you're using system ruby, leave it commented out.
+ #passenger_ruby /usr/local/rvm/wrappers/default/ruby;
+
+ # This value effectively limits the size of API objects users can
+ # create, especially collections. If you change this, you should
+ # also ensure the following settings match it:
+ # * `client_max_body_size` in the previous server section
+ # * `API.MaxRequestSize` in config.yml
+ client_max_body_size 128m;
+}
+</code></pre></notextile>
+
+Why, then, specify @InternalURLs@ for the @RailsAPI@ service at all? Because this is how the @Controller@ service locates the @RailsAPI@ service it should talk to. Since this connection is internal to the Arvados cluster, @Controller@ uses @InternalURLs@ to find the @RailsAPI@ endpoint.
+
+h3. Controller
+
+Consider this section for the @Controller@ service:
+
+{% codeblock as yaml %}
+ Controller:
+ InternalURLs:
+ "http://localhost:8003": {}
+ ExternalURL: "https://ClusterID.example.com"
+{% endcodeblock %}
+
+The @ExternalURL@ advertised is @https://ClusterID.example.com@. The @Controller@ service will start up on @localhost@ port 8003. Nginx is configured to sit in front of the @Controller@ service and terminates SSL:
+
+<notextile><pre><code>
+# This is the port where nginx expects to contact arvados-controller.
+upstream controller {
+ server localhost:8003 fail_timeout=10s;
+}
+
+server {
+ # This configures the public https port that clients will actually connect to,
+ # the request is reverse proxied to the upstream 'controller'
+
+ listen 443 ssl;
+ server_name ClusterID.example.com;
+
+ ssl_certificate /YOUR/PATH/TO/cert.pem;
+ ssl_certificate_key /YOUR/PATH/TO/cert.key;
+
+ # Refer to the comment about this setting in the passenger (arvados
+ # api server) section of your Nginx configuration.
+ client_max_body_size 128m;
+
+ location / {
+ proxy_pass http://controller;
+ proxy_redirect off;
+ proxy_connect_timeout 90s;
+ proxy_read_timeout 300s;
+
+ proxy_set_header Host $http_host;
+ proxy_set_header Upgrade $http_upgrade;
+ proxy_set_header Connection "upgrade";
+ proxy_set_header X-External-Client $external_client;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header X-Forwarded-Proto https;
+ proxy_set_header X-Real-IP $remote_addr;
+ }
+}
+</code></pre></notextile>
+
+
--- /dev/null
+---
+layout: default
+navsection: admin
+title: "Measuring deduplication"
+...
+
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+The @arvados-client@ tool can be used to generate a deduplication report across an arbitrary number of collections. It can be installed from packages (@apt install arvados-client@ or @yum install arvados-client@).
+
+h2(#syntax). Syntax
+
+<notextile>
+<pre><code>~$ <span class="userinput">arvados-client deduplication-report -h</span>
+Usage:
+ arvados-client deduplication-report [options ...] <collection-uuid> <collection-uuid> ...
+
+ arvados-client deduplication-report [options ...] <collection-pdh>,<collection_uuid> \
+ <collection-pdh>,<collection_uuid> ...
+
+ This program analyzes the overlap in blocks used by 2 or more collections. It
+ prints a deduplication report that shows the nominal space used by the
+ collections, as well as the actual size and the amount of space that is saved
+ by Keep's deduplication.
+
+ The list of collections may be provided in two ways. A list of collection
+ uuids is sufficient. Alternatively, the PDH for each collection may also be
+  provided. This will greatly speed up the operation when the list contains
+ multiple collections with the same PDH.
+
+ Exit status will be zero if there were no errors generating the report.
+
+Example:
+
+ Use the 'arv' and 'jq' commands to get the list of the 100
+ largest collections and generate the deduplication report:
+
+ arv collection list --order 'file_size_total desc' --limit 100 | \
+ jq -r '.items[] | [.portable_data_hash,.uuid] |@csv' | \
+ sed -e 's/"//g'|tr '\n' ' ' | \
+ xargs arvados-client deduplication-report
+
+Options:
+ -log-level string
+ logging level (debug, info, ...) (default "info")
+</code>
+</pre>
+</notextile>
+
+The usual environment variables (@ARVADOS_API_HOST@ and @ARVADOS_API_TOKEN@) need to be set for the deduplication report to be generated. To get cluster-wide results, an admin token will need to be supplied. Users can also run this report, but it will only include collections their token is able to read.
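
For example (the host and token below are placeholders; substitute your cluster's values):

```shell
# Placeholder credentials -- substitute your own API host and token.
export ARVADOS_API_HOST=ClusterID.example.com
export ARVADOS_API_TOKEN=v2/xxxxx-gj3su-0123456789abcde/notarealtoken
# The report tool reads these from the environment, e.g.:
#   arvados-client deduplication-report <collection-uuid> <collection-uuid> ...
```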
+
+Example output (with uuids and portable data hashes obscured) from a small Arvados cluster:
+
+<notextile>
+<pre><code>~$ <span class="userinput">arv collection list --order 'file_size_total desc' --limit 10 | jq -r '.items[] | [.portable_data_hash,.uuid] |@csv' |sed -e 's/"//g'|tr '\n' ' ' |xargs arvados-client deduplication-report</span>
+Collection _____-_____-_______________: pdh ________________________________+5003343; nominal size 7382073267640 (6.7 TiB); file count 2796
+Collection _____-_____-_______________: pdh ________________________________+4961919; nominal size 6989909625775 (6.4 TiB); file count 5592
+Collection _____-_____-_______________: pdh ________________________________+1903643; nominal size 2677933564052 (2.4 TiB); file count 2796
+Collection _____-_____-_______________: pdh ________________________________+1903643; nominal size 2677933564052 (2.4 TiB); file count 2796
+Collection _____-_____-_______________: pdh ________________________________+137710; nominal size 191858151583 (179 GiB); file count 201
+Collection _____-_____-_______________: pdh ________________________________+137636; nominal size 191858101962 (179 GiB); file count 200
+Collection _____-_____-_______________: pdh ________________________________+135350; nominal size 191715427388 (178 GiB); file count 201
+Collection _____-_____-_______________: pdh ________________________________+135276; nominal size 191715384167 (178 GiB); file count 200
+Collection _____-_____-_______________: pdh ________________________________+135350; nominal size 191707276684 (178 GiB); file count 201
+Collection _____-_____-_______________: pdh ________________________________+135276; nominal size 191707233463 (178 GiB); file count 200
+
+Collections: 10
+Nominal size of stored data: 20878411596766 bytes (19 TiB)
+Actual size of stored data: 17053104444050 bytes (16 TiB)
+Saved by Keep deduplication: 3825307152716 bytes (3.5 TiB)
+
+</code>
+</pre>
+</notextile>
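
As a quick sanity check on the example report, the space saved by deduplication is simply the nominal size minus the actual size:

```shell
# Figures taken from the example report above, in bytes.
nominal=20878411596766
actual=17053104444050
saved=$((nominal - actual))
echo "$saved"   # → 3825307152716
```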
h2(#check_collection_versioning). Consider collection versioning
-Arvados supports collection versioning. If it has been "enabled":{{ site.baseurl }}/admin/collection-versioning.html on your cluster, the deleted collection may be recoverable from an older version. See "Using collection versioning":{{ site.baseurl }}/user/topics/collection-versioning.html for details.
+Arvados supports collection versioning. If it has not been "disabled":{{ site.baseurl }}/admin/collection-versioning.html on your cluster, the deleted collection may be recoverable from an older version. See "Using collection versioning":{{ site.baseurl }}/user/topics/collection-versioning.html for details.
h2(#recover_collection). Recovering collections
* Delete the affected collections so that job reuse doesn't attempt to reuse them (it's likely that if one block is missing, they all are, so they're unlikely to contain any useful data)
* Resubmit any container requests for which you want the output collections regenerated
-The Arvados repository contains a tool that can be used to generate a report to help with this task at "arvados/tools/keep-xref/keep-xref.py":https://github.com/arvados/arvados/blob/master/tools/keep-xref/keep-xref.py
+The Arvados repository contains a tool that can be used to generate a report to help with this task at "arvados/tools/keep-xref/keep-xref.py":https://github.com/arvados/arvados/blob/main/tools/keep-xref/keep-xref.py
---
layout: default
navsection: admin
-title: User properties vocabulary
+title: Metadata vocabulary
...
{% comment %}
Many Arvados objects (like collections and projects) can store metadata as properties that in turn can be used in searches allowing a flexible way of organizing data inside the system.
-The Workbench2 user interface enables the site adminitrator to set up a properties vocabulary formal definition so that users can select from predefined key/value pairs of properties, offering the possibility to add different terms for the same concept.
+Arvados enables the site administrator to set up a formal metadata vocabulary definition so that users can select from predefined key/value pairs of properties in client UIs such as Workbench2, with the possibility of defining different terms for the same concept.
-h2. Workbench2 configuration
+The Controller service loads and caches the configured vocabulary file in memory at startup, exporting it on a dedicated endpoint. It periodically checks the local copy for updates and refreshes its cache if validation passes.
-Workbench2 retrieves the vocabulary file URL from the cluster config as shown:
+h2. Configuration
+
+The site administrator should place the JSON vocabulary file on the same host as the controller service and set up the config file as follows:
<notextile>
<pre><code>Clusters:
zzzzz:
- Workbench:
- VocabularyURL: <span class="userinput">https://site.example.com/vocabulary.json</span>
+ API:
+ VocabularyPath: <span class="userinput">/etc/arvados/vocabulary.json</span>
</code></pre>
</notextile>
The following is an example of a vocabulary definition:
{% codeblock as json %}
-{% include 'wb2_vocabulary_example' %}
+{% include 'metadata_vocabulary_example' %}
{% endcodeblock %}
-If the @strict_tags@ flag at the root level is @true@, it will restrict the users from saving property keys other than the ones defined in the vocabulary. Take notice that this restriction is at the client level on Workbench2, it doesn't limit the user's ability to set any arbitrary property via other means (e.g. Python SDK or CLI commands)
+For clients to be able to query the vocabulary definition, a special endpoint is exposed on the @controller@ service: @/arvados/v1/vocabulary@. This endpoint doesn't require authentication and returns the vocabulary definition in JSON format.
+
+If the @strict_tags@ flag at the root level is @true@, it will restrict the users from saving property keys other than the ones defined in the vocabulary. This restriction is enforced at the backend level to ensure consistency across different clients.
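
To illustrate the effect of @strict_tags@, here is a toy sketch (not Arvados code) of the kind of check the backend performs: keys outside the vocabulary are rejected.

```shell
# Toy vocabulary: the property key IDs from the example definition.
vocab="IDTAGANIMALS IDTAGCOMMENT IDTAGIMPORTANCES"

# With strict_tags: true, only keys present in the vocabulary are accepted.
check_key() {
  case " $vocab " in
    *" $1 "*) echo allowed ;;
    *)        echo rejected ;;
  esac
}

check_key IDTAGCOMMENT   # → allowed
check_key CUSTOMKEY      # → rejected
```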
Inside the @tags@ member, IDs are defined (@IDTAGANIMALS@, @IDTAGCOMMENT@, @IDTAGIMPORTANCES@) and can have any format that the current application requires. Every key will declare at least a @labels@ list with zero or more label objects.
|arvados-api-server||
|arvados-controller|✓|
|arvados-dispatch-cloud|✓|
+|arvados-dispatch-lsf|✓|
|arvados-git-httpd||
|arvados-ws|✓|
|composer||
--- /dev/null
+---
+layout: default
+navsection: admin
+title: Restricting upload or download
+...
+
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+For some use cases, you may want to limit the ability of users to upload or download data from outside the cluster. (By "outside" we mean from networks other than the cluster's own private network). For example, this makes it possible to share restricted data sets with users so that they may run their own data analysis on the cluster, while preventing them from easily downloading the data set to their local workstation.
+
+This feature exists in addition to the existing Arvados permission system. Users can only download from collections they have @read@ access to, and can only upload to projects and collections they have @write@ access to.
+
+There are two services involved in accessing data from outside the cluster.
+
+h2. Keepproxy Permissions
+
+Permitting @keepproxy@ makes it possible to use @arv-put@ and @arv-get@, and to upload from Workbench 1. It works in terms of individual 64 MiB Keep blocks. It prints a log line each time a user uploads or downloads an individual block. Those logs are usually stored by @journald@ or @syslog@.
+
+The default policy allows anyone to upload or download.
+
+<pre>
+ Collections:
+ KeepproxyPermission:
+ User:
+ Download: true
+ Upload: true
+ Admin:
+ Download: true
+ Upload: true
+</pre>
+
+h2. WebDAV and S3 API Permissions
+
+Permitting @WebDAV@ makes it possible to use WebDAV and the S3 API, to download from Workbench 1, and to upload/download with Workbench 2. It works in terms of individual files. It prints a log line each time a user uploads or downloads a file. When @WebDAVLogEvents@ is enabled (the default), it also adds an entry to the API server @logs@ table.
+
+When a user attempts to upload or download from a service without permission, they will receive a @403 Forbidden@ response. This only applies to file content.
+
+Denying download permission does not deny access to XML file listings via PROPFIND, or to auto-generated HTML documents containing file listings.
+
+Denying upload permission does not deny other operations that modify collections without directly accessing file content, such as MOVE and COPY.
+
+The default policy allows anyone to upload or download.
+
+<pre>
+ Collections:
+ WebDAVPermission:
+ User:
+ Download: true
+ Upload: true
+ Admin:
+ Download: true
+ Upload: true
+ WebDAVLogEvents: true
+ </pre>
+
+When a user or admin creates a sharing link, a custom scoped token is embedded in that link. This effectively allows anonymous user access to the associated data via that link. These custom scoped tokens are always treated as user tokens for the purposes of restricting download access, even when created by an admin user. In other words, these custom scoped tokens, when used in a sharing link, are always subject to the value of the @WebDAVPermission/User/Download@ configuration setting.
+
+If that custom scoped token is used with @arv-get@, its use will be subject to the value of the @KeepproxyPermission/User/Download@ configuration setting.
+
+h2. Shell node and container permissions
+
+Be aware that even when upload and download from outside the network is not allowed, a user who has access to a shell node or runs a container still has internal access to Keep. (This is necessary to be able to run workflows). From the shell node or container, a user could send data outside the network by some other method, although this requires more intent than accidentally clicking on a link and downloading a file. It is possible to set up a firewall to prevent shell and compute nodes from making connections to hosts outside the private network. Exactly how to configure firewalls is out of scope for this page, as it depends on the specific network infrastructure of your cluster.
+
+h2. Choosing a policy
+
+This distinction between WebDAV and Keepproxy is important for auditing. WebDAV records 'upload' and 'download' events on the API server that are included in the "User Activity Report":user-activity.html, whereas @keepproxy@ only logs upload and download of individual blocks, which require a reverse lookup to determine the collection(s) and file(s) a block is associated with.
+
+You set separate permissions for @WebDAV@ and @Keepproxy@, with separate policies for regular users and admin users.
+
+These policies apply only to access from outside the cluster, e.g. using Workbench or the Arvados CLI tools.
+
+The @WebDAVLogEvents@ option should be enabled if you intend to run the "User Activity Report":user-activity.html . If you don't need audits, or you are running a site that mostly serves public data to anonymous downloaders, you can disable it to avoid the extra API server request.
+
+h3. Audited downloads
+
+For ease of access auditing, this policy prevents downloads using @arv-get@. Downloads through WebDAV and S3 API are permitted, but logged. Uploads are allowed.
+
+<pre>
+ Collections:
+ WebDAVPermission:
+ User:
+ Download: true
+ Upload: true
+ Admin:
+ Download: true
+ Upload: true
+
+ KeepproxyPermission:
+ User:
+ Download: false
+ Upload: true
+ Admin:
+ Download: false
+ Upload: true
+ WebDAVLogEvents: true
+</pre>
+
+h3. Disallow downloads by regular users
+
+This policy prevents regular users (non-admin) from downloading data. Uploading is allowed. This supports the case where restricted data sets are shared with users so that they may run their own data analysis on the cluster, while preventing them from downloading the data set to their local workstation. Be aware that users won't be able to download the results of their analysis, either, requiring an admin in the loop or some other process to release results.
+
+<pre>
+ Collections:
+ WebDAVPermission:
+ User:
+ Download: false
+ Upload: true
+ Admin:
+ Download: true
+ Upload: true
+
+ KeepproxyPermission:
+ User:
+ Download: false
+ Upload: true
+ Admin:
+ Download: true
+ Upload: true
+ WebDAVLogEvents: true
+</pre>
+
+h3. Disallow uploads by regular users
+
+This policy is suitable for an installation where data is being shared with a group of users who are allowed to download the data, but not permitted to store their own data on the cluster.
+
+<pre>
+ Collections:
+ WebDAVPermission:
+ User:
+ Download: true
+ Upload: false
+ Admin:
+ Download: true
+ Upload: true
+
+ KeepproxyPermission:
+ User:
+ Download: true
+ Upload: false
+ Admin:
+ Download: true
+ Upload: true
+ WebDAVLogEvents: true
+</pre>
+
+
+h2. Accessing the audit log
+
+When @WebDAVLogEvents@ is enabled, uploads and downloads of files are logged in the Arvados audit log. These events are included in the "User Activity Report":user-activity.html. The audit log can also be accessed via the API, SDKs or command line. For example, to show the 100 most recent file downloads:
+
+<pre>
+arv log list --filters '[["event_type","=","file_download"]]' -o 'created_at desc' -l 100
+</pre>
+
+For uploads, use the @file_upload@ event type.
+
+Note that this only covers upload and download activity via WebDAV, S3, Workbench 1 (download only) and Workbench 2.
+
+File upload in Workbench 1 and the @arv-get@ and @arv-put@ tools use @Keepproxy@, which does not log activity to the audit log because it operates at the block level, not the file level. @Keepproxy@ records the uuid of the user that owns the token used in the request in its system logs. Those logs are usually stored by @journald@ or @syslog@. A typical log line for such a block download looks like this:
+
+<pre>
+Jul 20 15:03:38 workbench.xxxx1.arvadosapi.com keepproxy[63828]: {"level":"info","locator":"abcdefghijklmnopqrstuvwxyz012345+53251584","msg":"Block download","time":"2021-07-20T15:03:38.458792300Z","user_full_name":"Albert User","user_uuid":"ce8i5-tpzed-abcdefghijklmno"}
+</pre>
+
+It is possible to do a reverse lookup from the locator to find all matching collections: the @manifest_text@ field of a collection lists all the block locators that are part of the collection. The @manifest_text@ field also provides the relevant filename in the collection. Because this lookup is rather involved and there is no automated tool to do it, we recommend disabling @KeepproxyPermission/User/Download@ and @KeepproxyPermission/User/Upload@ for sites where the audit log is important and @arv-get@ and @arv-put@ are not essential.
Storage classes (alternately known as "storage tiers") allow you to control which volumes should be used to store particular collection data blocks. This can be used to implement data storage policies such as moving data to archival storage.
-The storage classes for each volume are set in the per-volume "keepstore configuration":{{site.baseurl}}/install/install-keepstore.html
+In the default Arvados configuration, with no storage classes specified in the configuration file, all volumes belong to a single implicit storage class called "default". Apart from that, names of storage classes are internal to the cluster and decided by the administrator. Other than the implicit "default" class, Arvados currently does not define any standard storage class names.
+
+To use multiple storage classes, update the @StorageClasses@ and @Volumes@ sections of your configuration file.
+* Every storage class you use (including "default") must be defined in the @StorageClasses@ section.
+* At least one storage class in the @StorageClasses@ section must be marked @Default: true@. When a client/user does not specify storage classes when creating a new collection, the default storage classes are used implicitly.
+* If some storage classes are faster or cheaper to access than others, assign a higher @Priority@ to the faster ones. When reading data, volumes with high priority storage classes are searched first.
+
+Example:
<pre>
+ StorageClasses:
+
+ default:
+ # When reading a block that is stored on multiple volumes,
+ # prefer a volume with this class.
+ Priority: 20
+
+ # When a client does not specify a storage class when saving a
+ # new collection, use this one.
+ Default: true
+
+ archival:
+ Priority: 10
+
Volumes:
+
ClusterID-nyw5e-000000000000000:
# This volume is in the "default" storage class.
StorageClasses:
default: true
+
ClusterID-nyw5e-000000000000001:
- # Specify this volume is in the "archival" storage class.
+ # This volume is in the "archival" storage class.
StorageClasses:
archival: true
</pre>
-Names of storage classes are internal to the cluster and decided by the administrator. Aside from "default", Arvados currently does not define any standard storage class names.
+Refer to the "configuration reference":{{site.baseurl}}/admin/config.html for more details.
h3. Using storage classes
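+
+For example, assuming a storage class named "archival" is configured as in the example above, a collection can be uploaded directly to that class with the @--storage-classes@ option of @arv-put@ (a sketch; substitute the class names defined in your own configuration):
+
+<pre>
+arv-put --storage-classes=archival myfile.dat
+</pre>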
h3. Storage management notes
-The "keep-balance":{{site.baseurl}}/install/install-keep-balance.html service is responsible for deciding which blocks should be placed on which keepstore volumes. As part of the rebalancing behavior, it will determine where a block should go in order to satisfy the desired storage classes, and issue pull requests to copy the block from its original volume to the desired volume. The block will subsequently be moved to trash on the original volume.
+When uploading data, if a data block cannot be uploaded to all desired storage classes, the upload fails with a fatal error. Data blocks will not be uploaded to volumes that do not have the desired storage class.
-If a block appears in multiple collections with different storage classes, the block will be stored in separate volumes for each storage class, even if that results in overreplication, unless there is a volume which has all the desired storage classes.
+If you change the storage classes for a collection, the data is not moved immediately. The "keep-balance":{{site.baseurl}}/install/install-keep-balance.html service is responsible for deciding which blocks should be placed on which keepstore volumes. As part of the rebalancing behavior, it will determine where a block should go in order to satisfy the desired storage classes, and issue pull requests to copy the block from its original volume to the desired volume. The block will subsequently be moved to trash on the original volume.
-If a collection has a desired storage class which is not available in any keepstore volume, the collection's blocks will remain in place, and an error will appear in the @keep-balance@ logs.
+If a block is assigned to multiple storage classes, the block will be stored on @desired_replication@ volumes for each storage class, even if that results in overreplication.
-This feature does not provide a hard guarantee on where data will be stored. Data may be written to default storage and moved to the desired storage class later. If controlling data locality is a hard requirement (such as legal restrictions on the location of data) we recommend setting up multiple Arvados clusters.
+If a collection has a desired storage class which is not available in any keepstore volume, the collection's blocks will remain in place, and an error will appear in the @keep-balance@ logs.
---
layout: default
navsection: admin
-title: Setting token expiration policy
+title: Automatic logout and token expiration
...
{% comment %}
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-When a user logs in to Workbench, they receive a newly created token that grants access to the Arvados API on behalf of that user. By default, this token does not expire until the user explicitly logs off.
+When a user logs in to Workbench, they receive a newly created token (a long string of random characters) which grants access to the Arvados API on behalf of that user. In the default configuration, this token does not expire until the user explicitly logs out.
-Security policies, such as for GxP Compliance, may require that tokens expire by default in order to limit the risk associated with a token being leaked.
+Security policies, such as those required to comply with regulations such as HIPAA and GxP, may include policies for "automatic logoff". In order to limit the window of risk associated with unauthorized access of the desktop of an Arvados user, or a token being leaked, Arvados offers options for automatic logout from the web app, and to configure access tokens to expire by default.
-The @Login.TokenLifetime@ configuration enables the administrator to set a expiration lifetime for tokens granted through the login flow.
+The @Workbench.IdleTimeout@, @Login.TokenLifetime@, and @API.MaxTokenLifetime@ options give the administrator ways to control automatic expiration of tokens granted through the login flow.
-h2. Setting token expiration
+If you are looking for information on how to expire a token manually, see how to "delete a single token":user-management-cli.html#delete-token and "delete all tokens belonging to a user":user-management-cli.html#delete-all-tokens .
-Suppose that the organization's security policy requires that user sessions should not be valid for more than 12 hours, the cluster configuration should be set like the following:
+h2. Automatic logout
+
+Use @Workbench.IdleTimeout@ to configure Workbench 2 for automatic logout after a period of idle time. For example, this configuration would log the user out after five minutes of no keyboard or pointer activity:
+
+<pre>
+Clusters:
+ zzzzz:
+ ...
+ Workbench:
+ IdleTimeout: 5m
+ ...
+</pre>
+
+When idle timeout is set, several behaviors and considerations apply:
+
+* The user will be automatically logged out after a period of inactivity. When the automatic logout happens, the token associated with that session will be revoked.
+* Users should use the "open in new tab" functionality of Workbench 2. This will share the same token between tabs without requiring the user to log in again. Logging out will apply to all browser tabs that use the same token.
+* If the user closes a Workbench tab without first logging out, the browser will forget the token, but not expire the token (this is desirable if the user has several tabs open).
+* If the user closes all Workbench tabs, they will be required to log in again.
+* This only affects browser behavior. Automatic logout should be used together with the automatic token expiration described below.
+
+The default value for @Workbench.IdleTimeout@ is zero, which disables auto-logout.
+
+h2. Automatic expiration of login tokens
+
+Use @Login.TokenLifetime@ to set the lifetime for tokens issued through the login process. This is the maximum amount of time a user can maintain a session before having to log in again. This setting applies to both regular and admin user logins. Here is an example configuration that would require the user to log in again after 12 hours:
<pre>
Clusters:
...
</pre>
-With this configuration, users will have to re-login every 12 hours.
+This is independent of @Workbench.IdleTimeout@. Even if Workbench auto-logout is disabled, this option will ensure that the user is always required to log in again after the configured amount of time.
+
+h2. Untrusted login tokens
+
+<pre>
+Clusters:
+ zzzzz:
+ ...
+ Login:
+ TrustLoginTokens: false
+ ...
+</pre>
+
+When @TrustLoginTokens@ is @false@, tokens issued through login will be "untrusted" by default. Untrusted tokens cannot be used to list other tokens issued to the user, and cannot be used to grant new tokens. This stops an attacker from leveraging a leaked token to acquire other tokens, but also interferes with some Workbench features that create new tokens on behalf of the user.
+
+The default value of @Login.TokenLifetime@ is zero, meaning login tokens do not expire (unless @API.MaxTokenLifetime@ is set).
+
+h2. Automatic expiration of all tokens
+
+Use @API.MaxTokenLifetime@ to set the maximum lifetime for any access token created by regular (non-admin) users. For example, this configuration would require that all tokens expire after 24 hours:
+
+<pre>
+Clusters:
+ zzzzz:
+ ...
+ API:
+ MaxTokenLifetime: 24h
+ ...
+</pre>
+
+Tokens created without an explicit expiration time, or with an expiration time exceeding the maximum, will have their expiration set to @API.MaxTokenLifetime@.
+
+Similar to @Login.TokenLifetime@, this option ensures that the user is always required to log in again after the configured amount of time.
+
+Unlike @Login.TokenLifetime@, this applies to all API operations that manipulate tokens, regardless of whether the token was created by logging in, or by using the API. If @Login.TokenLifetime@ is greater than @API.MaxTokenLifetime@, @API.MaxTokenLifetime@ takes precedence.
+
+Admin users are permitted to create tokens with expiration times further in the future than @MaxTokenLifetime@.
+
+The default value of @API.MaxTokenLifetime@ is zero, which means there is no maximum token lifetime.
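+
+For example, an admin can use the @SystemRootToken@ to create a token with an explicit expiration date beyond the maximum (a sketch; the @owner_uuid@ and timestamp are placeholders to replace with real values):
+
+<pre>
+$ ARVADOS_API_TOKEN=systemroottoken arv api_client_authorization create \
+    --api-client-authorization '{"owner_uuid": "zzzzz-tpzed-xxxxxxxxxxxxxxx", "expires_at": "2030-01-01T00:00:00Z"}'
+</pre>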
+
+h2. Choosing a policy
+
+@Workbench.IdleTimeout@ only affects browser behavior. It is strongly recommended that automatic browser logout be used together with @Login.TokenLifetime@, which is enforced on the API side.
+
+@TrustLoginTokens: true@ (default value) is less restrictive. Be aware that an unrestricted token can be "refreshed" to gain access for an indefinite period. This means, during the window that the token is valid, the user is permitted to create a new token, which will have a new expiration further in the future (of course, once the token has expired, this is no longer possible). Unrestricted tokens are required for some Workbench features, as well as ease of use in other contexts, such as the Arvados command line. This option is recommended if many users will interact with the system through the command line.
+
+@TrustLoginTokens: false@ is more restrictive. A token obtained by logging into Workbench cannot be "refreshed" to gain access for an indefinite period. However, it interferes with some Workbench features, as well as ease of use in other contexts, such as the Arvados command line. This option is recommended only if most users will only ever interact with the system through Workbench or WebShell. For users or service accounts that need tokens with fewer restrictions, the admin can "create a token at the command line":user-management-cli.html#create-token using the @SystemRootToken@.
-When this configuration is active, the workbench client will also be "untrusted" by default. This means tokens issued to workbench cannot be used to list other tokens issued to the user, and cannot be used to grant new tokens. This stops an attacker from leveraging a leaked token to aquire other tokens.
+In every case, admin users may always create tokens with expiration dates far in the future.
-The default @TokenLifetime@ is zero, which disables this feature.
+These policies do not apply to tokens created by the API server for the purposes of authorizing a container to run, as those tokens are automatically expired when the container is finished.
h2. Applying policy to existing tokens
-If you have an existing Arvados installation and want to set a token lifetime policy, there may be user tokens already granted. The administrator can use the following @rake@ tasks to enforce the new policy.
+If you have an existing Arvados installation and want to set a token lifetime policy, there may be long-lived user tokens already granted. The administrator can use the following @rake@ tasks to enforce the new policy.
The @db:check_long_lived_tokens@ task will list which users have tokens with no expiration date.
<notextile>
-<pre><code># <span class="userinput">bundle exec rake db:check_long_lived_tokens</span>
+<pre><code># <span class="userinput">bin/rake db:check_long_lived_tokens</span>
Found 6 long-lived tokens from users:
user2,user2@example.com,zzzzz-tpzed-5vzt5wc62k46p6r
admin,admin@example.com,zzzzz-tpzed-6drplgwq9nm5cox
To apply the new policy to existing tokens, use the @db:fix_long_lived_tokens@ task.
<notextile>
-<pre><code># <span class="userinput">bundle exec rake db:fix_long_lived_tokens</span>
+<pre><code># <span class="userinput">bin/rake db:fix_long_lived_tokens</span>
Setting token expiration to: 2020-08-25 03:30:50 +0000
6 tokens updated.
</code></pre>
<div class="releasenotes">
</notextile>
-h2(#main). development main (as of 2020-12-10)
+h2(#main). development main (as of 2021-11-10)
-"Upgrading from 2.1.0":#v2_1_0
+"previous: Upgrading from 2.3.0":#v2_3_0
+
+h3. Role groups are visible to all users by default
+
+The permission model has changed such that all role groups are visible to all active users. This enables users to share objects with groups they don't belong to. To preserve the previous behavior, where role groups are only visible to members and admins, add @RoleGroupsVisibleToAll: false@ to the @Users@ section of your configuration file.
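+
+A minimal config fragment to restore the previous behavior (the cluster ID @zzzzz@ is a placeholder):
+
+<pre>
+Clusters:
+  zzzzz:
+    Users:
+      RoleGroupsVisibleToAll: false
+</pre>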
+
+h3. Default LSF arguments have changed
+
+If you use LSF and your configuration specifies @Containers.LSF.BsubArgumentsList@, you should update it to include the new arguments (@"-R", "select[mem>=%MMB]", ...@, see "configuration reference":{{site.baseurl}}/admin/config.html). Otherwise, containers that are too big to run on any LSF host will remain in the LSF queue instead of being cancelled.
+
+h3. Previously trashed role groups will be deleted
+
+Due to a bug in previous versions, the @DELETE@ operation on a role group caused the group to be flagged as trash in the database, but it continued to grant permissions regardless. After upgrading, any role groups that had been trashed this way will be deleted. This might surprise some users if they were relying on permissions that were still in effect due to this bug. Future @DELETE@ operations on a role group will immediately delete the group and revoke the associated permissions.
+
+h3. Users are visible to other users by default
+
+When a new user is set up (either via the @AutoSetupNewUsers@ config or via the Workbench admin interface) the user immediately becomes visible to other users. To revert to the previous behavior, where the administrator must add two users to the same group using the Workbench admin interface in order for the users to see each other, change the new @Users.ActivatedUsersAreVisibleToOthers@ config to @false@.
+
+h3. Dedicated keepstore process for each container
+
+When Arvados runs a container via @arvados-dispatch-cloud@, the @crunch-run@ supervisor process now brings up its own keepstore server to handle I/O for mounted collections, outputs, and logs. With the default configuration, the keepstore process allocates one 64 MiB block buffer per VCPU requested by the container. For most workloads this will increase throughput, reduce total network traffic, and make it possible to run more containers at once without provisioning additional keepstore nodes to handle the I/O load.
+* If you have containers that can effectively handle multiple I/O threads per VCPU, consider increasing the @Containers.LocalKeepBlobBuffersPerVCPU@ value.
+* If you already have a robust permanent keepstore infrastructure, you can set @Containers.LocalKeepBlobBuffersPerVCPU@ to 0 to disable this feature and preserve the previous behavior of sending container I/O traffic to your separately provisioned keepstore servers.
+* This feature is enabled only if no volumes use @AccessViaHosts@, and no volumes have underlying @Replication@ less than @Collections.DefaultReplication@. If the feature is configured but cannot be enabled due to an incompatible volume configuration, this will be noted in the @crunch-run.txt@ file in the container log.
+
+h3. Backend support for vocabulary checking
+
+If your installation uses the vocabulary feature on Workbench2, you will need to update the cluster configuration by moving the vocabulary definition file to the node where @controller@ runs, and setting the @API.VocabularyPath@ configuration parameter to the local path where the file was placed.
+This will enable the vocabulary checking cluster-wide, including Workbench2. The @Workbench.VocabularyURL@ configuration parameter is deprecated and will be removed in a future release.
+You can read more about how this feature works on the "admin page":{{site.baseurl}}/admin/metadata-vocabulary.html.
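+
+A minimal config fragment (the file path is an example; use the location where you placed the vocabulary file on the controller node):
+
+<pre>
+Clusters:
+  zzzzz:
+    API:
+      VocabularyPath: /etc/arvados/vocabulary.json
+</pre>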
+
+h2(#v2_3_0). v2.3.0 (2021-10-27)
+
+"previous: Upgrading to 2.2.0":#v2_2_0
+
+h3. Ubuntu 18.04 packages for arvados-api-server and arvados-workbench now conflict with ruby-bundler
+
+Ubuntu 18.04 ships with Bundler version 1.16.1, which is no longer compatible with the Gemfiles in the Arvados packages (made with Bundler 2.2.19). The Ubuntu 18.04 packages for arvados-api-server and arvados-workbench now conflict with the ruby-bundler package to work around this issue. The post-install scripts for arvados-api-server and arvados-workbench install the proper version of Bundler as a gem.
+
+h3. Removed unused @update_uuid@ endpoint for users
+
+The @update_uuid@ endpoint was superseded by the "link accounts feature":{{site.baseurl}}/admin/link-accounts.html, so it's no longer available.
+
+h3. Removed deprecated '@@' search operator
+
+The '@@' full text search operator, previously deprecated, has been removed. To perform a string search across multiple columns, use the 'ilike' operator on 'any' column as described in the "available list method filter section":{{site.baseurl}}/api/methods.html#substringsearchfilter of the API documentation.
+
+h3. Storage classes must be defined explicitly
+
+If your configuration uses the StorageClasses attribute on any Keep volumes, you must add a new @StorageClasses@ section that lists all of your storage classes. Refer to the updated documentation about "configuring storage classes":{{site.baseurl}}/admin/storage-classes.html for details.
+
+h3. keep-balance requires access to PostgreSQL
+
+Make sure the keep-balance process can connect to your PostgreSQL server using the settings in your config file. (In previous versions, keep-balance accessed the database through controller instead of connecting to the database server directly.)
+
+h3. crunch-dispatch-local now requires config.yml
+
+The @crunch-dispatch-local@ dispatcher now reads the API host and token from the system-wide @/etc/arvados/config.yml@ . It will fail to start if that file is not found or not readable.
+
+h3. Multi-file docker image collections
+
+Typically a docker image collection contains a single @.tar@ file at the top level. Handling of atypical cases has changed. If a docker image collection contains files with extensions other than @.tar@, they will be ignored (previously they could cause errors). If a docker image collection contains multiple @.tar@ files, it will cause an error at runtime, "cannot choose from multiple tar files in image collection" (previously one of the @.tar@ files was selected). Subdirectories are ignored. The @arv keep docker@ command always creates a collection with a single @.tar@ file, and never uses subdirectories, so this change will not affect most users.
+
+h2(#v2_2_0). v2.2.0 (2021-06-03)
+
+"previous: Upgrading to 2.1.0":#v2_1_0
+
+h3. New spelling of S3 credential configs
+
+If you use the S3 driver for Keep volumes and specify credentials in your configuration file (as opposed to using an IAM role), you should change the spelling of the @AccessKey@ and @SecretKey@ config keys to @AccessKeyID@ and @SecretAccessKey@. If you don't update them, the previous spellings will still be accepted, but warnings will be logged at server startup.
h3. New proxy parameters for arvados-controller
Now that Python 3 is part of the base repository in CentOS 7, the Python 3 dependency for Centos7 Arvados packages was changed from SCL rh-python36 to python3.
+h3. ForceLegacyAPI14 option removed
+
+The ForceLegacyAPI14 configuration option has been removed. In the unlikely event it is mentioned in your config file, remove it to avoid "deprecated/unknown config" warning logs.
+
h2(#v2_1_0). v2.1.0 (2020-10-13)
-"Upgrading from 2.0.0":#v2_0_0
+"previous: Upgrading to 2.0.0":#v2_0_0
h3. LoginCluster conflicts with other Login providers
A satellite cluster that delegates its user login to a central user database must only have `Login.LoginCluster` set, or it will return an error. This is a change in behavior, previously it would return an error if another login provider was _not_ configured, even though the provider would never be used.
+h3. Minimum supported Python version is now 3.5
+
+We no longer publish Python 2 based distribution packages for our Python components. There are equivalent packages based on Python 3, but their names are slightly different. If you were using the Python 2 based packages, you can install the Python 3 based package as a drop-in replacement. On Debian and Ubuntu:
+
+<pre>
+ apt remove python-arvados-fuse && apt install python3-arvados-fuse
+ apt remove python-arvados-python-client && apt install python3-arvados-python-client
+ apt remove python-arvados-cwl-runner && apt install python3-arvados-cwl-runner
+ apt remove python-crunchstat-summary && apt install python3-crunchstat-summary
+ apt remove python-cwltest && apt install python3-cwltest
+</pre>
+
+On CentOS:
+
+<pre>
+ yum remove python-arvados-fuse && yum install python3-arvados-fuse
+ yum remove python-arvados-python-client && yum install python3-arvados-python-client
+ yum remove python-arvados-cwl-runner && yum install python3-arvados-cwl-runner
+ yum remove python-crunchstat-summary && yum install python3-crunchstat-summary
+ yum remove python-cwltest && yum install python3-cwltest
+</pre>
+
h3. Minimum supported Ruby version is now 2.5
The minimum supported Ruby version is now 2.5. If you are running Arvados on Debian 9 or Ubuntu 16.04, you may need to switch to using RVM or upgrade your OS. See "Install Ruby and Bundler":../install/ruby.html for more information.
h2(#v2_0_0). v2.0.0 (2020-02-07)
-"Upgrading from 1.4":#v1_4_1
+"previous: Upgrading to 1.4.1":#v1_4_1
Arvados 2.0 is a major upgrade, with many changes. Please read these upgrade notes carefully before you begin.
h3. New property vocabulary format for Workbench2
-(feature "#14151":https://dev.arvados.org/issues/14151) Workbench2 supports a new vocabulary format and it isn't compatible with the previous one, please read the "workbench2 vocabulary format admin page":{{site.baseurl}}/admin/workbench2-vocabulary.html for more information.
+(feature "#14151":https://dev.arvados.org/issues/14151) Workbench2 supports a new vocabulary format and it isn't compatible with the previous one, please read the "metadata vocabulary format admin page":{{site.baseurl}}/admin/metadata-vocabulary.html for more information.
h3. Cloud installations only: node manager replaced by arvados-dispatch-cloud
h2(#v1_4_1). v1.4.1 (2019-09-20)
-"Upgrading from 1.4.0":#v1_4_0
+"previous: Upgrading to 1.4.0":#v1_4_0
h3. Centos7 Python 3 dependency upgraded to rh-python36
h2(#v1_4_0). v1.4.0 (2019-06-05)
-"Upgrading from 1.3.3":#v1_3_3
+"previous: Upgrading to 1.3.3":#v1_3_3
h3. Populating the new file_count and file_size_total columns on the collections table
As part of story "#9945":https://dev.arvados.org/issues/9945, the distribution packaging (deb/rpm) of our Python packages has changed. These packages now include a built-in virtualenv to reduce dependencies on system packages. We have also stopped packaging and publishing backports for all the Python dependencies of our packages, as they are no longer needed.
-One practical consequence of this change is that the use of the Arvados Python SDK (aka "import arvados") will require a tweak if the SDK was installed from a distribution package. It now requires the loading of the virtualenv environment from our packages. The "Install documentation for the Arvados Python SDK":/sdk/python/sdk-python.html reflects this change. This does not affect the use of the command line tools (e.g. arv-get, etc.).
+One practical consequence of this change is that the use of the Arvados Python SDK (aka "import arvados") will require a tweak if the SDK was installed from a distribution package. It now requires the loading of the virtualenv environment from our packages. The "Install documentation for the Arvados Python SDK":{{ site.baseurl }}/sdk/python/sdk-python.html reflects this change. This does not affect the use of the command line tools (e.g. arv-get, etc.).
Python scripts that rely on the distribution Arvados Python SDK packages to import the Arvados SDK will need to be tweaked to load the correct Python environment.
h2(#v1_3_3). v1.3.3 (2019-05-14)
-"Upgrading from 1.3.0":#v1_3_0
+"previous: Upgrading to 1.3.0":#v1_3_0
This release corrects a potential data loss issue, if you are running Arvados 1.3.0 or 1.3.1 we strongly recommended disabling @keep-balance@ until you can upgrade to 1.3.3 or 1.4.0. With keep-balance disabled, there is no chance of data loss.
-We've put together a "wiki page":https://dev.arvados.org/projects/arvados/wiki/Recovering_lost_data which outlines how to recover blocks which have been put in the trash, but not yet deleted, as well as how to identify any collections which have missing blocks so that they can be regenerated. The keep-balance component has been enhanced to provide a list of missing blocks and affected collections and we've provided a "utility script":https://github.com/arvados/arvados/blob/master/tools/keep-xref/keep-xref.py which can be used to identify the workflows that generated those collections and who ran those workflows, so that they can be rerun.
+We've put together a "wiki page":https://dev.arvados.org/projects/arvados/wiki/Recovering_lost_data which outlines how to recover blocks which have been put in the trash, but not yet deleted, as well as how to identify any collections which have missing blocks so that they can be regenerated. The keep-balance component has been enhanced to provide a list of missing blocks and affected collections and we've provided a "utility script":https://github.com/arvados/arvados/blob/main/tools/keep-xref/keep-xref.py which can be used to identify the workflows that generated those collections and who ran those workflows, so that they can be rerun.
h2(#v1_3_0). v1.3.0 (2018-12-05)
-"Upgrading from 1.2":#v1_2_0
+"previous: Upgrading to 1.2":#v1_2_0
This release includes several database migrations, which will be executed automatically as part of the API server upgrade. On large Arvados installations, these migrations will take a while. We've seen the upgrade take 30 minutes or more on installations with a lot of collections.
h2(#v1_2_0). v1.2.0 (2018-09-05)
-"Upgrading from 1.1.2 or 1.1.3":#v1_1_2
+"previous: Upgrading to 1.1.2 or 1.1.3":#v1_1_2
h3. Regenerate Postgres table statistics
h2(#v1_1_4). v1.1.4 (2018-04-10)
-"Upgrading from 1.1.3":#v1_1_3
+"previous: Upgrading to 1.1.3":#v1_1_3
h3. arvados-cwl-runner regressions (2018-04-05)
h2(#v1_1_2). v1.1.2 (2017-12-22)
-"Upgrading from 1.1.0 or 1.1.1":#v1_1_0
+"previous: Upgrading to 1.1.0 or 1.1.1":#v1_1_0
h3. The minimum version for Postgres is now 9.4 (2017-12-08)
This installation method is recommended to make the CLI tools available system-wide. It can coexist with the installation method described in option 2, below.
-First, configure the "Arvados package repositories":../../install/packages.html
+First, configure the "Arvados package repositories":{{ site.baseurl }}/install/packages.html
{% assign arvados_component = 'python3-arvados-user-activity' %}
Step 3: Run @pip install .@ in an appropriate installation environment, such as a @virtualenv@.
-Note: depends on the "Arvados Python SDK":../sdk/python/sdk-python.html and its associated build prerequisites (e.g. @pycurl@).
+Note: depends on the "Arvados Python SDK":{{ site.baseurl }}/sdk/python/sdk-python.html and its associated build prerequisites (e.g. @pycurl@).
h2. Usage
ARVADOS_API_TOKEN=v2/zzzzz-gj3su-yyyyyyyyyyyyyyy/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
</pre>
-h3(#delete-token). Delete a token
+h3(#delete-token). Delete a single token
-If you need to revoke a token, for example the token is leaked to an unauthorized party, you can delete the token at the command line.
+As a user or admin, if you need to revoke a specific, known token, for example a token that may have been leaked to an unauthorized party, you can delete it at the command line.
-1. First, determine the token UUID. If it is a "v2" format token (starts with "v2/") then the token UUID is middle section between the two slashes. For example:
+First, determine the token UUID. If it is a "v2" format token (starts with "v2/") then the token UUID is the middle section between the two slashes. For example:
<pre>
v2/zzzzz-gj3su-yyyyyyyyyyyyyyy/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
If you have a "bare" token (only the secret part) then, as an admin, you need to query the token to get the uuid:
<pre>
-$ ARVADOS_API_TOKEN=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx arv api_client_authorization current
-{
- "href":"/api_client_authorizations/x33hz-gj3su-fk8nbj4byptz6ma",
- "kind":"arvados#apiClientAuthorization",
- "etag":"77wktnitqeelbgb4riv84zi2q",
- "uuid":"zzzzz-gj3su-yyyyyyyyyyyyyyy",
- "owner_uuid":"zzzzz-tpzed-j8w1ymjsn4vf4v4",
- "created_at":"2020-09-25T15:19:48.606984000Z",
- "modified_by_client_uuid":null,
- "modified_by_user_uuid":null,
- "modified_at":null,
- "user_id":3,
- "api_client_id":1,
- "api_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
- "created_by_ip_address":null,
- "default_owner_uuid":null,
- "expires_at":null,
- "last_used_at":null,
- "last_used_by_ip_address":null,
- "scopes":[
- "all"
- ]
-}
+$ ARVADOS_API_TOKEN=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx arv --format=uuid api_client_authorization current
+zzzzz-gj3su-yyyyyyyyyyyyyyy
+</pre>
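The UUID extraction described above can be sketched in a few lines (the helper below is illustrative, not part of the @arv@ CLI):

```python
def token_uuid(token):
    """Extract the token UUID from a "v2" format Arvados token,
    i.e. the middle section of "v2/<token uuid>/<secret>"."""
    parts = token.split("/")
    if len(parts) == 3 and parts[0] == "v2":
        return parts[1]
    raise ValueError("bare token: query the API for its UUID instead")

print(token_uuid("v2/zzzzz-gj3su-yyyyyyyyyyyyyyy/xxxxxxxxxxxxxxxxxxxxxxxxxx"))
```

A bare token raises, matching the admin query workflow above for tokens whose UUID is unknown.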
+
+Now you can delete the token:
+
+<pre>
+$ ARVADOS_API_TOKEN=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx arv api_client_authorization delete --uuid zzzzz-gj3su-yyyyyyyyyyyyyyy
</pre>
-2. Now use the token to delete itself:
+h3(#delete-all-tokens). Delete all tokens belonging to a user
+
+First, "obtain a valid token for the user.":#create-token
+
+Then, use that token to get all the user's tokens, and delete each one:
<pre>
-$ ARVADOS_API_TOKEN=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx arv api_client_authorization delete --uuid zzzzz-gj3su-yyyyyyyyyyyyyyy
+$ export ARVADOS_API_TOKEN=xxxxtoken-belonging-to-user-whose-tokens-will-be-deletedxxxxxxxx ; \
+for uuid in $(arv --format=uuid api_client_authorization list) ; do \
+arv api_client_authorization delete --uuid $uuid ; \
+done
</pre>
h2. Adding Permissions
# To submit work, create a "container request":{{site.baseurl}}/api/methods/container_requests.html in the @Committed@ state.
# The system will fulfill the container request by creating or reusing a "Container object":{{site.baseurl}}/api/methods/containers.html and assigning it to the @container_uuid@ field. If the same request has been submitted in the past, it may reuse an existing container. The reuse behavior can be suppressed with @use_existing: false@ in the container request.
-# The dispatcher process will notice a new container in @Queued@ state and submit a container executor to the underlying work queuing system (such as SLURM).
+# The dispatcher process will notice a new container in @Queued@ state and submit a container executor to the underlying work queuing system (such as Slurm).
# The container executes. Upon termination the container goes into the @Complete@ state. If the container execution was interrupted or lost due to system failure, it will go into the @Cancelled@ state.
# When the container associated with the container request is completed, the container request will go into the @Final@ state.
# The @output_uuid@ field of the container request contains the uuid of the output collection produced by the container request.
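The relationship between container state and container request state in the steps above can be sketched as follows (illustrative only; the real state machine lives in the API server):

```python
def request_state(container_state):
    """Per the steps above: a container request stays Committed while
    its container is pending or running, and becomes Final once the
    container reaches a terminal state (Complete or Cancelled)."""
    return "Final" if container_state in ("Complete", "Cancelled") else "Committed"

print(request_state("Queued"))     # Committed
print(request_state("Complete"))  # Final
```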
When serving files that will render directly in the browser, it is important to properly configure the keep-web service to mitigate cross-site-scripting (XSS) attacks. An HTML page can be stored in a collection. If an attacker causes a victim to visit that page through Workbench, the HTML will be rendered by the browser. If all collections are served at the same domain, the browser will consider collections as coming from the same origin, which will grant access to the same browsing data (cookies and local storage). This would enable malicious Javascript on that page to access Arvados on behalf of the victim.
-This can be mitigated by having separate domains for each collection, or limiting preview to circumstances where the collection is not accessed with the user's regular full-access token. For cluster administrators that understand the risks, this protection can also be turned off.
+This can be mitigated by having separate domains for each collection, or limiting preview to circumstances where the collection is not accessed with the user's regular full-access token. For clusters where this risk is acceptable, this protection can also be turned off by setting the @Collections/TrustAllContent@ configuration flag to true. See the "configuration reference":../admin/config.html for more detail.
The following "same origin" URL patterns are supported for public collections and collections shared anonymously via secret links (i.e., collections which can be served by keep-web without making use of any implicit credentials like cookies). See "Same-origin URLs" below.
This mainly affects Workbench's ability to show inline content, so it should be taken into account when configuring both services' URL schemes.
-You can read more about the definition of a _same-site_ request at the "RFC 6265bis-03 page":https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-03#section-5.2
\ No newline at end of file
+You can read more about the definition of a _same-site_ request at the "RFC 6265bis-03 page":https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-03#section-5.2
|_. Argument |_. Type |_. Description |_. Location |
|{resource_type}|object|Name is the singular form of the resource type, e.g., for the "collections" resource, this argument is "collection"|body|
|{cluster_id}|string|Optional, the cluster on which to create the object if not the current cluster.|query|
+|select |array |Attributes of the new object to return in the response (by default, all available attributes are returned).
+Example: @["uuid","name","modified_at"]@|query|
h2. delete
table(table table-bordered table-condensed).
|_. Argument |_. Type |_. Description |_. Location |
{background:#ccffcc}.|uuid|string|The UUID of the object in question.|path|
+|select |array |Attributes of the deleted object to return in the response (by default, all available attributes are returned).
+Example: @["uuid","name","modified_at"]@|query|
h2. get
table(table table-bordered table-condensed).
|_. Argument |_. Type |_. Description |_. Location |
{background:#ccffcc}.|uuid|string|The UUID of the object in question.|path|
+|select |array |Attributes of the object to return in the response (by default, all available attributes are returned).
+Example: @["uuid","name","modified_at"]@|query|
h2(#index). list
-The @list@ method requests an list of resources of that type. It corresponds to the HTTP request @GET /arvados/v1/resource_type@. All resources support "list" method unless otherwise noted.
+The @list@ method requests a list of resources of that type. It corresponds to the HTTP request @GET /arvados/v1/resource_type@. All resources support the @list@ method unless otherwise noted.
Arguments:
|order |array |Attributes to use as sort keys to determine the order resources are returned, each optionally followed by @asc@ or @desc@ to indicate ascending or descending order. (If not specified, it will be ascending).
Example: @["head_uuid asc","modified_at desc"]@
Default: @["modified_at desc", "uuid asc"]@|query|
-|select |array |Set of attributes to include in the response.
-Example: @["head_uuid","tail_uuid"]@
-Default: all available attributes. As a special case, collections do not return "manifest_text" unless explicitly selected.|query|
-|distinct|boolean|@true@: (default) do not return duplicate objects
-@false@: permitted to return duplicates|query|
+|select |array |Attributes of each object to return in the response (by default, all available attributes are returned, except collections, which do not return @manifest_text@ unless explicitly selected).
+Example: @["uuid","name","modified_at"]@|query|
+|distinct|boolean|When @true@, multiple records with equal values for all selected attributes (see @select@) are returned as a single response entry.
+Default is @false@.|query|
|count|string|@"exact"@ (default): Include an @items_available@ response field giving the number of distinct matching items that can be retrieved (irrespective of @limit@ and @offset@ arguments).
@"none"@: Omit the @items_available@ response field. This option will produce a faster response.|query|
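As a sketch of how the arguments above travel over HTTP: array-valued parameters (@filters@, @select@, @order@) are passed JSON-encoded, as Arvados clients typically do. The helper name is illustrative:

```python
import json
from urllib.parse import urlencode

def list_query(filters=None, select=None, order=None, limit=None, count="exact"):
    """Sketch: encode list-method arguments into a query string.
    Array-valued arguments are JSON-encoded strings."""
    params = {"count": count}
    for name, value in (("filters", filters), ("select", select), ("order", order)):
        if value is not None:
            params[name] = json.dumps(value)
    if limit is not None:
        params["limit"] = limit
    return urlencode(params)

qs = list_query(select=["uuid", "name", "modified_at"],
                order=["modified_at desc", "uuid asc"], limit=10)
print(qs)
```

The same parameters may instead be sent urlencoded in the request body.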
|1|operator|string|Comparison operator|@>@, @>=@, @like@, @not in@|
|2|operand|string, array, or null|Value to compare with the resource attribute|@"d00220fb%"@, @"1234"@, @["foo","bar"]@, @nil@|
-The following operators are available.[1]
+The following operators are available.
table(table table-bordered table-condensed).
|_. Operator|_. Operand type|_. Description|_. Example|
-|@=@, @!=@|string, number, timestamp, or null|Equality comparison|@["tail_uuid","=","xyzzy-j7d0g-fffffffffffffff"]@ @["tail_uuid","!=",null]@|
+|@=@, @!=@, @<>@|string, number, timestamp, JSON-encoded array, JSON-encoded object, or null|Equality comparison|@["tail_uuid","=","xyzzy-j7d0g-fffffffffffffff"]@
+@["tail_uuid","!=",null]@
+@["storage_classes_desired","=","[\"default\"]"]@|
|@<@, @<=@, @>=@, @>@|string, number, or timestamp|Ordering comparison|@["script_version",">","123"]@|
|@like@, @ilike@|string|SQL pattern match. Single character match is @_@ and wildcard is @%@. The @ilike@ operator is case-insensitive|@["script_version","like","d00220fb%"]@|
|@in@, @not in@|array of strings|Set membership|@["script_version","in",["main","d00220fb38d4b85ca8fc28a8151702a2b9d1dec5"]]@|
|@is_a@|string|Arvados object type|@["head_uuid","is_a","arvados#collection"]@|
-|@exists@|string|Test if a subproperty is present.|@["properties","exists","my_subproperty"]@|
-
-Note:
+|@exists@|string|Presence of subproperty|@["properties","exists","my_subproperty"]@|
+|@contains@|string, array of strings|Presence of one or more keys or array elements|@["storage_classes_desired", "contains", ["foo", "bar"]]@ (matches both @["foo", "bar"]@ and @["foo", "bar", "baz"]@)
+(note @[..., "contains", "foo"]@ is also accepted, and is equivalent to @[..., "contains", ["foo"]]@)|
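A sketch of how some of the operators above behave when applied to a single attribute value (illustrative only; the real evaluation happens server-side, in SQL):

```python
import re

def sql_like(value, pattern, case_insensitive=False):
    # SQL pattern: "_" matches one character, "%" matches any run.
    regex = "".join(
        ".*" if c == "%" else "." if c == "_" else re.escape(c)
        for c in pattern)
    flags = re.IGNORECASE if case_insensitive else 0
    return re.fullmatch(regex, value, flags) is not None

def matches(value, operator, operand):
    """Sketch of a few operator semantics from the table above."""
    if operator in ("=", "!=", "<>"):
        return (value == operand) == (operator == "=")
    if operator == "like":
        return sql_like(value, operand)
    if operator == "ilike":
        return sql_like(value, operand, case_insensitive=True)
    if operator in ("in", "not in"):
        return (value in operand) == (operator == "in")
    raise ValueError("operator not sketched: %r" % (operator,))

print(matches("d00220fb38d4b85c", "like", "d00220fb%"))     # True
print(matches("main", "in", ["main", "d00220fb38d4b85c"]))  # True
```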
h4(#substringsearchfilter). Filtering using substring search
|@like@, @ilike@|string|SQL pattern match, single character match is @_@ and wildcard is @%@, ilike is case-insensitive|@["properties.my_subproperty", "like", "d00220fb%"]@|
|@in@, @not in@|array of strings|Set membership|@["properties.my_subproperty", "in", ["fizz", "buzz"]]@|
|@exists@|boolean|Test if a subproperty is present or not (determined by operand).|@["properties.my_subproperty", "exists", true]@|
-|@contains@|string, number|Filter where subproperty has a value either by exact match or value is element of subproperty list.|@["foo", "contains", "bar"]@ will find both @{"foo": "bar"}@ and @{"foo": ["bar", "baz"]}@.|
+|@contains@|string, number|Filter where the subproperty has a value, either by exact match or because the value is an element of a subproperty list.|@["properties.foo", "contains", "bar"]@ will find both @{"foo": "bar"}@ and @{"foo": ["bar", "baz"]}@.|
Note that exclusion filters @!=@ and @not in@ will return records for which the property is not defined at all. To restrict filtering to records on which the subproperty is defined, combine with an @exists@ filter.
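The subproperty semantics above, including the behavior of exclusion filters on records lacking the property, can be sketched like this (illustrative only; the real evaluation happens server-side):

```python
def subproperty_match(properties, key, operator, operand):
    """Sketch of the subproperty filter semantics in the table above."""
    if operator == "exists":
        # The operand says whether the subproperty should be present.
        return (key in properties) == operand
    if key not in properties:
        # Exclusion filters match records lacking the property entirely.
        return operator in ("!=", "not in")
    value = properties[key]
    if operator == "contains":
        # Exact match, or membership when the subproperty is a list.
        return value == operand or (isinstance(value, list) and operand in value)
    if operator in ("=", "!="):
        return (value == operand) == (operator == "=")
    if operator in ("in", "not in"):
        return (value in operand) == (operator == "in")
    raise ValueError("operator not sketched: %r" % (operator,))

print(subproperty_match({"foo": ["bar", "baz"]}, "foo", "contains", "bar"))  # True
```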
+h4(#filterexpression). Filtering using boolean expressions
+
+In addition to the three-element array form described above, a string containing a boolean expression is also accepted. The following restrictions apply:
+* The expression must contain exactly one operator.
+* The operator must be @=@, @<@, @<=@, @>@, or @>=@.
+* There must be exactly one pair of parentheses, surrounding the entire expression.
+* Each operand must be the name of a numeric attribute like @replication_desired@ (literal values like @3@ and non-numeric attributes like @uuid@ are not accepted).
+* The expression must not contain whitespace other than an ASCII space (newline and tab characters are not accepted).
+
+Examples:
+* @(replication_desired > replication_confirmed)@
+* @(replication_desired = replication_confirmed)@
+
+Both types of filter (boolean expressions and @[attribute, operator, operand]@ filters) can be combined in the same API call. Example:
+* @{"filters": ["(replication_desired > replication_confirmed)", ["replication_desired", "<", 2]]}@
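The syntactic restrictions above can be checked mechanically. A sketch (it validates only the listed syntax rules; it cannot know which attribute names are numeric on a given resource type):

```python
import re

# One fully parenthesized comparison of two attribute names,
# with only ASCII spaces allowed as whitespace.
EXPR = re.compile(r"\([ ]*[a-z][a-z0-9_]*[ ]*(<=|>=|=|<|>)[ ]*[a-z][a-z0-9_]*[ ]*\)")

def valid_expression_filter(expr):
    return EXPR.fullmatch(expr) is not None

print(valid_expression_filter("(replication_desired > replication_confirmed)"))  # True
print(valid_expression_filter("(replication_desired > 3)"))                      # False
```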
+
h4. Federated listing
Federated listing forwards a request to multiple clusters and combines the results. Currently only a very restricted form of the "list" method is supported.
|_. Argument |_. Type |_. Description |_. Location |
{background:#ccffcc}.|uuid|string|The UUID of the resource in question.|path||
|{resource_type}|object||query||
-
-fn1^. NOTE: The filter operator for full-text search (@@) which previously worked (but was undocumented) is deprecated and will be removed in a future release.
+|select |array |Attributes of the updated object to return in the response (by default, all available attributes are returned).
+Example: @["uuid","name","modified_at"]@|query|
|manifest_text|text|||
|replication_desired|number|Minimum storage replication level desired for each data block referenced by this collection. A value of @null@ signifies that the site default replication level (typically 2) is desired.|@2@|
|replication_confirmed|number|Replication level most recently confirmed by the storage system. This field is null when a collection is first created, and is reset to null when the manifest_text changes in a way that introduces a new data block. An integer value indicates the replication level of the _least replicated_ data block in the collection.|@2@, null|
-|replication_confirmed_at|datetime|When replication_confirmed was confirmed. If replication_confirmed is null, this field is also null.||
+|replication_confirmed_at|datetime|When @replication_confirmed@ was confirmed. If @replication_confirmed@ is null, this field is also null.||
+|storage_classes_desired|list|An optional list of storage class names where the blocks should be saved. If not provided, the cluster's default storage class(es) will be set.|@['archival']@|
+|storage_classes_confirmed|list|Storage classes most recently confirmed by the storage system. This field is an empty list when a collection is first created.|@['archival']@, @[]@|
+|storage_classes_confirmed_at|datetime|When @storage_classes_confirmed@ was confirmed. If @storage_classes_confirmed@ is @[]@, this field is null.||
|trash_at|datetime|If @trash_at@ is non-null and in the past, this collection will be hidden from API calls. May be untrashed.||
|delete_at|datetime|If @delete_at@ is non-null and in the past, the collection may be permanently deleted.||
|is_trashed|boolean|True if @trash_at@ is in the past, false if not.||
h3. get
-Gets a Collection's metadata by UUID or portable data hash. When making a request by portable data hash, the returned record will only have the @portable_data_hash@ and @manifest_text@.
+Gets a Collection's metadata by UUID or portable data hash. When making a request by portable data hash, attributes other than @portable_data_hash@ and @manifest_text@ are not returned, even when requested explicitly using the @select@ parameter.
Arguments:
table(table table-bordered table-condensed).
|_. Argument |_. Type |_. Description |_. Location |_. Example |
-{background:#ccffcc}.|uuid|string|The UUID of the Collection in question.|path||
+{background:#ccffcc}.|uuid|string|The UUID or portable data hash of the Collection in question.|path||
h3. list
|runtime_token|string|A v2 token to be passed into the container itself, used to access Keep-backed mounts, etc. |Not returned in API responses. Reset to null when state is "Complete" or "Cancelled".|
|runtime_user_uuid|string|The user permission that will be granted to this container.||
|runtime_auth_scopes|array of string|The scopes associated with the auth token used to run this container.||
+|output_storage_classes|array of strings|The storage classes that will be used for the log and output collections of this container request|default is @["default"]@|
h2(#priority). Priority
Priority 1000 is the highest priority.
-The actual order that containers execute is determined by the underlying scheduling software (e.g. SLURM) and may be based on a combination of container priority, submission time, available resources, and other factors.
+The actual order that containers execute is determined by the underlying scheduling software (e.g. Slurm) and may be based on a combination of container priority, submission time, available resources, and other factors.
In the current implementation, the magnitude of difference in priority between two containers affects the weight of priority vs age in determining scheduling order. If two containers have only a small difference in priority (for example, 500 and 501) and the lower priority container has a longer queue time, the lower priority container may be scheduled before the higher priority container. Use a greater magnitude difference (for example, 500 and 600) to give higher weight to priority over queue time.
|runtime_token|string|A v2 token to be passed into the container itself, used to access Keep-backed mounts, etc.|Not returned in API responses. Reset to null when state is "Complete" or "Cancelled".|
|gateway_address|string|Address (host:port) of gateway server.|Internal use only.|
|interactive_session_started|boolean|Indicates whether @arvados-client shell@ has been used to run commands in the container, which may have altered the container's behavior and output.||
+|output_storage_classes|array of strings|The storage classes that will be used for the log and output collections of this container||
h2(#container_states). Container states
|error|string|The existence of this key indicates the container will definitely fail, or has already failed.|Optional.|
|warning|string|Indicates something unusual happened or is currently happening, but isn't considered fatal.|Optional.|
|activity|string|A message for the end user about what state the container is currently in.|Optional.|
-|errorDetails|string|Additional structured error details.|Optional.|
-|warningDetails|string|Additional structured warning details.|Optional.|
+|errorDetail|string|Additional structured error details.|Optional.|
+|warningDetail|string|Additional structured warning details.|Optional.|
h2(#scheduling_parameters). {% include 'container_scheduling_parameters' %}
table(table table-bordered table-condensed).
|_. Attribute|_. Type|_. Description|_. Example|
|name|string|||
-|group_class|string|Type of group. This does not affect behavior, but determines how the group is presented in the user interface. For example, @project@ indicates that the group should be displayed by Workbench and arv-mount as a project for organizing and naming objects.|@"project"@
-null|
+|group_class|string|Type of group. @project@ and @filter@ indicate that the group should be displayed by Workbench and arv-mount as a project for organizing and naming objects. @role@ is used as part of the "permission system":{{site.baseurl}}/api/permission-model.html. |@"filter"@
+@"project"@
+@"role"@|
|description|text|||
|properties|hash|User-defined metadata, may be used in queries using "subproperty filters":{{site.baseurl}}/api/methods.html#subpropertyfilters ||
|writable_by|array|List of UUID strings identifying Users and other Groups that have write permission for this Group. Only users who are allowed to administer the Group will receive a full list. Other users will receive a partial list that includes the Group's owner_uuid and (if applicable) their own user UUID.||
|delete_at|datetime|If @delete_at@ is non-null and in the past, the group and all objects directly or indirectly owned by the group may be permanently deleted.||
|is_trashed|datetime|True if @trash_at@ is in the past, false if not.||
+@filter@ groups are virtual groups; they cannot own other objects. Filter groups have a special @properties@ field named @filters@, which must be an array of filter conditions. See "list method filters":{{site.baseurl}}/api/methods.html#filters for details on the syntax of valid filters, but keep in mind that the attributes must include the object type (@collections@, @container_requests@, @groups@, @workflows@), separated with a dot from the field to be filtered on.
+
+Filters are applied with an implied *and* between them, but each filter only applies to the object type specified. The results are subject to the usual access controls - they are a subset of all objects the user can see. Here is an example:
+
+<pre>
+ "properties":{
+ "filters":[
+ [
+ "groups.name",
+ "like",
+ "Public%"
+ ]
+ ]
+ },
+</pre>
+
+This @filter@ group will return all groups (projects) that have a name starting with the word @Public@ and are visible to the user issuing the query. Because each filter only applies to one object type, it will also return all objects of other types that the user can see, unfiltered.
+
+The @is_a@ filter operator is of particular interest for limiting the contents of a @filter@ group to the desired object types. When the @is_a@ operator is used, the attribute must be @uuid@. The operand may be a string or an array; an array means objects of any of the listed types will match the filter. This example will return all groups (projects) that have a name starting with the word @Public@, as well as all collections that are in the project with uuid @zzzzz-j7d0g-0123456789abcde@.
+
+<pre>
+ "properties":{
+ "filters":[
+ [
+ "groups.name",
+ "like",
+ "Public%"
+ ],
+ [
+ "collections.owner_uuid",
+ "=",
+ "zzzzz-j7d0g-0123456789abcde"
+ ],
+ [
+ "uuid",
+ "is_a",
+ [
+ "arvados#group",
+ "arvados#collection"
+ ]
+ ]
+ ]
+ },
+</pre>
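The per-type behavior of the example above can be sketched against a handful of sample records (illustrative only; object names and owner uuids below are invented, and the real query also applies permission checks):

```python
objects = [
    {"kind": "arvados#group", "name": "Public data",
     "owner_uuid": "zzzzz-j7d0g-aaaaaaaaaaaaaaa"},
    {"kind": "arvados#group", "name": "Internal data",
     "owner_uuid": "zzzzz-j7d0g-aaaaaaaaaaaaaaa"},
    {"kind": "arvados#collection", "name": "reads",
     "owner_uuid": "zzzzz-j7d0g-0123456789abcde"},
    {"kind": "arvados#collection", "name": "other reads",
     "owner_uuid": "zzzzz-j7d0g-bbbbbbbbbbbbbbb"},
]

def in_filter_group(obj):
    # Each condition applies only to its own object type; the is_a
    # condition excludes all other types.
    if obj["kind"] == "arvados#group":
        return obj["name"].startswith("Public")   # groups.name like "Public%"
    if obj["kind"] == "arvados#collection":
        return obj["owner_uuid"] == "zzzzz-j7d0g-0123456789abcde"
    return False

contents = [o["name"] for o in objects if in_filter_group(o)]
print(contents)  # ['Public data', 'reads']
```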
+
h2. Methods
See "Common resource methods":{{site.baseurl}}/api/methods.html for more information about @create@, @delete@, @get@, @list@, and @update@.
|_. Argument |_. Type |_. Description |_. Location |_. Example |
{background:#ccffcc}.|uuid|string|The UUID of the Link in question.|path||
|link|object||query||
+
+h3. get_permissions
+
+Get all permission links that point directly to the given UUID (in the @head_uuid@ field). The requesting user must have @can_manage@ permission or be an admin.
+
+Arguments:
+
+table(table table-bordered table-condensed).
+|_. Argument |_. Type |_. Description |_. Location |_. Example |
+{background:#ccffcc}.|uuid|string|The UUID of the object.|path||
{background:#ccffcc}.|uuid|string|The UUID of the User in question.|path||
|user|object|The new attributes.|query||
-h3(#update_uuid). update_uuid
-
-Change the UUID of an existing user, updating all database references accordingly.
-
-This method can only be used by an admin user. It should only be used when the affected user is idle. New references to the affected user that are established _while the update_uuid operation is in progress_ might not be migrated as expected.
-
-Arguments:
-
-table(table table-bordered table-condensed).
-|_. Argument |_. Type |_. Description |_. Location |_. Example |
-{background:#ccffcc}.|uuid|string|The current UUID of the user in question.|path|@zzzzz-tpzed-12345abcde12345@|
-{background:#ccffcc}.|new_uuid|string|The desired new UUID. It is an error to use a UUID belonging to an existing user.|query|@zzzzz-tpzed-abcde12345abcde@|
-
h3. setup
Set up a user. Adds the user to the "All users" group. Enables the user to invoke @activate@. See "user management":{{site.baseurl}}/admin/user-management.html for details.
|old_user_uuid|uuid|The uuid of the "old" account|query||
|new_owner_uuid|uuid|The uuid of a project to which objects owned by the "old" user will be reassigned.|query||
|redirect_to_new_user|boolean|If true, also redirect login and reassign authorization credentials from "old" user to the "new" user|query||
+
+h3. authenticate
+
+Create a new API token based on username/password credentials. Returns an "API client authorization":api_client_authorizations.html object containing the API token, or an "error object.":../requests.html#errors
+
+Valid credentials are determined by the choice of "configured login backend.":{{site.baseurl}}/install/setup-login.html
+
+Note: this endpoint cannot be used with login backends that use web-based third party authentication, such as Google or OpenID Connect.
+
+Arguments:
+
+table(table table-bordered table-condensed).
+|_. Argument |_. Type |_. Description |_. Location |_. Example |
+{background:#ccffcc}.|username|string|The username.|body||
+{background:#ccffcc}.|password|string|The password.|body||
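A minimal sketch of building such a request, assuming the @authenticate@ method lives under the users resource at @/arvados/v1/users/authenticate@ (the helper name is illustrative, and sending the request is left to any HTTP client):

```python
from urllib.parse import urlencode

def authenticate_request(base_url, username, password):
    """Build the POST request for the authenticate method described
    above; credentials go in the request body, urlencoded."""
    url = base_url.rstrip("/") + "/arvados/v1/users/authenticate"
    body = urlencode({"username": username, "password": password})
    return url, body

url, body = authenticate_request("https://192.168.5.2:8000", "alice", "s3cret")
print(url)
```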
h2. Ownership
-All Arvados objects have an @owner_uuid@ field. Valid uuid types for @owner_uuid@ are "User" and "Group". For Group, the @group_class@ must be a "project".
+All Arvados objects have an @owner_uuid@ field. Valid uuid types for @owner_uuid@ are "User" and "Group". In the case of a Group, the @group_class@ must be "project".
The User or Group specified by @owner_uuid@ has *can_manage* permission on the object. This permission is one way: an object that is owned does not get any special permissions on the User or Group that owns it.
If a User has *can_manage* permission on some object, the user has the ability to read, create, update and delete permission links with @head_uuid@ of the managed object. In other words, the user has the ability to modify the permission grants on the object.
-The *can_login* @name@ is only meaningful on a permission link with with @tail_uuid@ a user UUID and @head_uuid@ a Virtual Machine UUID. A permission link of this type gives the user UUID permission to log into the Virtual Machine UUID. The username for the VM is specified in the @properties@ field. Group membership can be specified that way as well, optionally. See the "VM login section on the CLI cheat sheet":/install/cheat_sheet.html#vm-login for an example.
+The *can_login* @name@ is only meaningful on a permission link with @tail_uuid@ a user UUID and @head_uuid@ a Virtual Machine UUID. A permission link of this type gives the user UUID permission to log into the Virtual Machine UUID. The username for the VM is specified in the @properties@ field. Optionally, group membership can be specified the same way. See the "VM login section on the 'User management at the CLI' page":{{ site.baseurl }}/admin/user-management-cli.html#vm-login for an example.
h3. Transitive permissions
A "project" is a subtype of Group that is displayed as a "Project" in Workbench, and as a directory by @arv-mount@.
* A project can own things (appear in @owner_uuid@)
* A project can be owned by a user or another project.
-* The name of a project is unique only among projects with the same owner_uuid.
+* The name of a project is unique only among projects and filters with the same owner_uuid.
* Projects can be targets (@head_uuid@) of permission links, but not origins (@tail_uuid@). Putting a project in a @tail_uuid@ field is an error.
+A "filter" is a subtype of Group that is displayed as a "Project" in Workbench, and as a directory by @arv-mount@. See "the groups API documentation":{{ site.baseurl }}/api/methods/groups.html for more information.
+* A filter group cannot own things (cannot appear in @owner_uuid@). Putting a filter group in an @owner_uuid@ field is an error.
+* A filter group can be owned by a user or a project.
+* The name of a filter is unique only among projects and filters with the same owner_uuid.
+* Filters can be targets (@head_uuid@) of permission links, but not origins (@tail_uuid@). Putting a filter in a @tail_uuid@ field is an error.
+
A "role" is a subtype of Group that is treated in Workbench as a group of users who have permissions in common (typically an organizational group).
* A role cannot own things (cannot appear in @owner_uuid@). Putting a role in an @owner_uuid@ field is an error.
* All roles are owned by the system user.
* The name of a role is unique across a single Arvados cluster.
* Roles can be both targets (@head_uuid@) and origins (@tail_uuid@) of permission links.
+* By default, all roles are visible to all active users. However, if the configuration entry @Users.RoleGroupsVisibleToAll@ is @false@, visibility is determined by normal permission rules, _i.e._, a role is only visible to users who have that role, and to admins.
h3. Access through Roles
--- /dev/null
+---
+layout: default
+navsection: api
+title: "Projects and filter groups"
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+Arvados @projects@ are used to organize objects. Projects can contain @collections@, @container requests@, @workflows@, etc. Projects can also contain other projects. An object is part of a project if the @owner_uuid@ of the object is set to the uuid of the project.
+
+Projects are implemented as a subtype of the Arvados @group@ object type, with @group_class@ set to the value "project". More information is available in the "groups API reference":{{ site.baseurl }}/api/methods/groups.html.
+
+Projects can be manipulated via Workbench, the cli tools, the SDKs, and the Arvados APIs.
+
+h2. The home project
+
+Each user has a @home project@, which is implemented differently from other projects. It is a virtual project comprising all objects owned by the user, in other words, all objects with @owner_uuid@ set to the @uuid@ of the user. The home project is accessible via Workbench, which makes it easy to view its contents and to move objects to and from the home project. The home project is also accessible via FUSE, WebDAV and the S3 interface.
+
+The same thing can be done via the APIs. To put something in a user's home project via the cli or SDKs, one would set the @owner_uuid@ of the object to the user's @uuid@. This also implies that this user now has full ownership and control over that object.
+
+The contents of the home project can be accessed with the @group contents@ API, e.g. via the cli with this command:
+<pre>arv group contents --uuid zzzzz-tpzed-123456789012345</pre>
+In this command, @zzzzz-tpzed-123456789012345@ is a @user@ uuid, which is unusual because we are using it as the argument to a @groups@ API. The @group contents@ API is normally used with a @group@ uuid.
+
+Because the home project is a virtual project, other operations via the @groups@ API are not supported.
+
+h2. Filter groups
+
+Filter groups are another type of virtual project. They are implemented as an Arvados @group@ object with @group_class@ set to the value "filter".
+
+Filter groups define one or more filters which are applied to all objects that the current user can see, and returned as the contents of the @group@. Filter groups are described in more detail in the "groups API reference":{{site.baseurl}}/api/methods/groups.html, and the rules for creating valid filters are the same as for "list method filters":{{site.baseurl}}/api/methods.html#filters.
+
+Filter groups are accessible (read-only) via Workbench and the Arvados FUSE mount, WebDAV and S3 interfaces. Filter groups must currently be defined via the API, SDK or cli; there is no Workbench support yet.
+
+As an example, create a filter group with the @arv@ cli:
+
+<notextile>
+<pre><code>~$ <span class="userinput"> FILTER_GROUP_UUID=`arv -s group create --group '{
+ "group_class":"filter",
+ "name":"my filter group",
+ "properties":{
+ "filters":
+ [
+ ["collections.name","ilike","%test%"],
+ ["uuid","is_a","arvados#collection"]
+ ]
+ }
+  }'`</span>
+</code>
+</pre>
+</notextile>
+This filter group will contain all collections visible to the current user whose name matches the word @test@ (case insensitive).
+
+To see how this works via the Keep FUSE mount, create a few matching (and non-matching) collections:
+
+<notextile>
+<pre><code>~$ <span class="userinput">arv collection create --collection '{"name":"empty test collection 1"}'</span>
+~$ <span class="userinput">arv collection create --collection '{"name":"another empty collection"}'</span>
+~$ <span class="userinput">arv collection create --collection '{"name":"empty Test collection 2"}'</span>
+~$ <span class="userinput">mkdir -p keep</span>
+~$ <span class="userinput">arv-mount keep</span>
+~$ <span class="userinput">ls keep/by_id/$FILTER_GROUP_UUID/ -C1</span>
+'empty test collection 1'
+'empty Test collection 2'</code>
+</pre>
+</notextile>
API requests must provide the API token using the @Authorization@ header in the following format:
<pre>
-$ curl -v -H "Authorization: OAuth2 xxxxapitokenxxxx" https://192.168.5.2:8000/arvados/v1/collections
+$ curl -v -H "Authorization: Bearer xxxxapitokenxxxx" https://192.168.5.2:8000/arvados/v1/collections
> GET /arvados/v1/collections HTTP/1.1
> ...
-> Authorization: OAuth2 xxxxapitokenxxxx
+> Authorization: Bearer xxxxapitokenxxxx
> ...
</pre>
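The same header can be set from any HTTP client library. As a minimal sketch (the host and token below are placeholders, matching the example above, not a real cluster):

```python
import urllib.request

API_HOST = "192.168.5.2:8000"    # example host from the text above
API_TOKEN = "xxxxapitokenxxxx"   # placeholder token

# Build a request carrying the API token in the Authorization header.
req = urllib.request.Request(
    f"https://{API_HOST}/arvados/v1/collections",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)
print(req.get_header("Authorization"))
```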
+On a cluster configured to use an OpenID Connect provider (other than Google) as a login backend, Arvados can be configured to accept an OpenID Connect access token in place of an Arvados API token. OIDC access tokens are also accepted by a cluster that delegates login to another cluster (LoginCluster) which in turn has this feature configured. See @Login.OpenIDConnect.AcceptAccessTokenScope@ in the "default config.yml file":{{site.baseurl}}/admin/config.html for details.
+
+<pre>
+$ curl -v -H "Authorization: Bearer xxxx-openid-connect-access-token-xxxx" https://192.168.5.2:8000/arvados/v1/collections
+</pre>
+
h3. Parameters
Request parameters may be provided in one of two ways: in the query string of the request URI, or in the body of the request with @application/x-www-form-urlencoded@ encoding. If parameters are provided in both places, their values will be merged. Parameter names must be unique. If a parameter appears multiple times, the behavior is undefined.
Results are returned JSON-encoded in the response body.
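The two parameter-passing styles are equivalent. A small sketch (endpoint and values are illustrative):

```python
from urllib.parse import urlencode

params = {"limit": "1", "filters": '[["name","=","empty collection"]]'}

# Style 1: parameters in the query string of the request URI.
query_style = "https://192.168.5.2:8000/arvados/v1/collections?" + urlencode(params)

# Style 2: the same parameters form-encoded in the request body
# (sent with Content-Type: application/x-www-form-urlencoded).
body_style = urlencode(params).encode()

print(query_style)
```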
-h3. Errors
+h3(#errors). Errors
If a request cannot be fulfilled, the API will return 4xx or 5xx HTTP status code. Be aware that the API server may return a 404 (Not Found) status for resources that exist but for which the client does not have read access. The API will also return an error record:
h3. Create a new record
<pre>
-$ curl -v -X POST --data-urlencode 'collection={"name":"empty collection"}' -H "Authorization: OAuth2 oz0os4nyudswvglxhdlnrgnuelxptmj7qu7dpwvyz3g9ocqtr" https://192.168.5.2:8000/arvados/v1/collections | jq .
+$ curl -v -X POST --data-urlencode 'collection={"name":"empty collection"}' -H "Authorization: Bearer oz0os4nyudswvglxhdlnrgnuelxptmj7qu7dpwvyz3g9ocqtr" https://192.168.5.2:8000/arvados/v1/collections | jq .
> POST /arvados/v1/collections HTTP/1.1
> User-Agent: curl/7.38.0
> Host: 192.168.5.2:8000
> Accept: */*
-> Authorization: OAuth2 oz0os4nyudswvglxhdlnrgnuelxptmj7qu7dpwvyz3g9ocqtr
+> Authorization: Bearer oz0os4nyudswvglxhdlnrgnuelxptmj7qu7dpwvyz3g9ocqtr
> Content-Length: 54
> Content-Type: application/x-www-form-urlencoded
>
h3. Delete a record
<pre>
-$ curl -X DELETE -v -H "Authorization: OAuth2 oz0os4nyudswvglxhdlnrgnuelxptmj7qu7dpwvyz3g9ocqtr" https://192.168.5.2:8000/arvados/v1/collections/962eh-4zz18-m1ma0mxxfg3mbcc | jq .
+$ curl -X DELETE -v -H "Authorization: Bearer oz0os4nyudswvglxhdlnrgnuelxptmj7qu7dpwvyz3g9ocqtr" https://192.168.5.2:8000/arvados/v1/collections/962eh-4zz18-m1ma0mxxfg3mbcc | jq .
> DELETE /arvados/v1/collections/962eh-4zz18-m1ma0mxxfg3mbcc HTTP/1.1
> User-Agent: curl/7.38.0
> Host: 192.168.5.2:8000
> Accept: */*
-> Authorization: OAuth2 oz0os4nyudswvglxhdlnrgnuelxptmj7qu7dpwvyz3g9ocqtr
+> Authorization: Bearer oz0os4nyudswvglxhdlnrgnuelxptmj7qu7dpwvyz3g9ocqtr
>
< HTTP/1.1 200 OK
< Content-Type: application/json; charset=utf-8
h3. Get a specific record
<pre>
-$ curl -v -H "Authorization: OAuth2 oz0os4nyudswvglxhdlnrgnuelxptmj7qu7dpwvyz3g9ocqtr" https://192.168.5.2:8000/arvados/v1/collections/962eh-4zz18-xi32mpz2621o8km | jq .
+$ curl -v -H "Authorization: Bearer oz0os4nyudswvglxhdlnrgnuelxptmj7qu7dpwvyz3g9ocqtr" https://192.168.5.2:8000/arvados/v1/collections/962eh-4zz18-xi32mpz2621o8km | jq .
> GET /arvados/v1/collections/962eh-4zz18-xi32mpz2621o8km HTTP/1.1
> User-Agent: curl/7.38.0
> Host: 192.168.5.2:8000
> Accept: */*
-> Authorization: OAuth2 oz0os4nyudswvglxhdlnrgnuelxptmj7qu7dpwvyz3g9ocqtr
+> Authorization: Bearer oz0os4nyudswvglxhdlnrgnuelxptmj7qu7dpwvyz3g9ocqtr
>
< HTTP/1.1 200 OK
< Content-Type: application/json; charset=utf-8
(Note: the returned result is truncated.)
<pre>
-$ curl -v -G --data-urlencode 'filters=[["created_at",">","2016-11-08T21:38:24.124834000Z"]]' -H "Authorization: OAuth2 oz0os4nyudswvglxhdlnrgnuelxptmj7qu7dpwvyz3g9ocqtr" https://192.168.5.2:8000/arvados/v1/collections | jq .
+$ curl -v -G --data-urlencode 'filters=[["created_at",">","2016-11-08T21:38:24.124834000Z"]]' -H "Authorization: Bearer oz0os4nyudswvglxhdlnrgnuelxptmj7qu7dpwvyz3g9ocqtr" https://192.168.5.2:8000/arvados/v1/collections | jq .
> GET /arvados/v1/collections?filters=%5B%5B%22uuid%22%2C%20%22%3D%22%2C%20%22962eh-4zz18-xi32mpz2621o8km%22%5D%5D HTTP/1.1
> User-Agent: curl/7.38.0
> Host: 192.168.5.2:8000
> Accept: */*
-> Authorization: OAuth2 oz0os4nyudswvglxhdlnrgnuelxptmj7qu7dpwvyz3g9ocqtr
+> Authorization: Bearer oz0os4nyudswvglxhdlnrgnuelxptmj7qu7dpwvyz3g9ocqtr
>
< HTTP/1.1 200 OK
< Content-Type: application/json; charset=utf-8
h3. Update a field
<pre>
-$ curl -v -X PUT --data-urlencode 'collection={"name":"rna.SRR948778.bam"}' -H "Authorization: OAuth2 oz0os4nyudswvglxhdlnrgnuelxptmj7qu7dpwvyz3g9ocqtr" https://192.168.5.2:8000/arvados/v1/collections/962eh-4zz18-xi32mpz2621o8km | jq .
+$ curl -v -X PUT --data-urlencode 'collection={"name":"rna.SRR948778.bam"}' -H "Authorization: Bearer oz0os4nyudswvglxhdlnrgnuelxptmj7qu7dpwvyz3g9ocqtr" https://192.168.5.2:8000/arvados/v1/collections/962eh-4zz18-xi32mpz2621o8km | jq .
> PUT /arvados/v1/collections/962eh-4zz18-xi32mpz2621o8km HTTP/1.1
> User-Agent: curl/7.38.0
> Host: 192.168.5.2:8000
> Accept: */*
-> Authorization: OAuth2 oz0os4nyudswvglxhdlnrgnuelxptmj7qu7dpwvyz3g9ocqtr
+> Authorization: Bearer oz0os4nyudswvglxhdlnrgnuelxptmj7qu7dpwvyz3g9ocqtr
> Content-Length: 53
> Content-Type: application/x-www-form-urlencoded
>
All requests to the API server must have an API token. API tokens can be issued by going through the login flow, or created via the API. At this time, only browser-based applications can log in with an email address and password. Command line applications and services must use an API token provided via the @ARVADOS_API_TOKEN@ environment variable or configuration file.
-h2. Browser login
+h2. Login
-Browser based applications can perform log in via the following highlevel flow:
+Browser-based applications can log in using one of two flows:
-# The web application presents a "login" link to @/login@ on the API server with a @return_to@ parameter provided in the query portion of the URL. For example @https://{{ site.arvados_api_host }}/login?return_to=XXX@ , where @return_to=XXX@ is the URL of the login page for the web application.
-# The "login" link takes the browser to the login page (this may involve several redirects)
-# The user logs in. API server authenticates the user and issues a new API token.
-# The browser is redirected to the login page URL provided in @return_to=XXX@ with the addition of @?api_token=xxxxapitokenxxxx@.
-# The web application gets the login request with the included authorization token.
+h3. Authenticate via a third party
-!{{site.baseurl}}/images/Session_Establishment.svg!
+# The web application instructs the user to follow a link to the @/login@ endpoint on the API server. This link should include a @return_to@ parameter in the query portion of the URL, for example @https://{{ site.arvados_api_host }}/login?return_to=XXX@, where @return_to=XXX@ is a page in the web application.
+# The @/login@ endpoint redirects the user to the configured third party authentication provider (e.g. Google or other OpenID Connect provider).
+# The user logs in to the third party provider and is then redirected back to the API server.
+# The API server authenticates the user, issues a new API token, and redirects the browser to the URL provided in @return_to=XXX@ with the addition of @?api_token=xxxxapitokenxxxx@.
+# The web application gets the authorization token from the query and uses it to access the API server on the user's behalf.
+
+h3. Direct username/password authentication
+
+# The web application presents username and password fields.
+# When the submit button is pressed, the browser sends (via JavaScript) a POST request to @/arvados/v1/users/authenticate@
+** The request payload type is @application/javascript@
+** The request body is a JSON object with @username@ and @password@ fields.
+# The API server receives the username and password, authenticates them with the upstream provider (such as LDAP or PAM), and responds with the @api_client_authorization@ object for the new API token.
+# The web application receives the authorization token in the response and uses it to access the API server on the user's behalf.
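The request in step 2 can be sketched as follows. This is only an illustration of the shape of the request; the host and credentials are placeholders, and a real client would send it over HTTPS with its own cluster's hostname:

```python
import json

def build_authenticate_request(api_host, username, password):
    # Step 2: POST a JSON body with "username" and "password" fields
    # to /arvados/v1/users/authenticate.
    return {
        "method": "POST",
        "url": f"https://{api_host}/arvados/v1/users/authenticate",
        "body": json.dumps({"username": username, "password": password}),
    }

req = build_authenticate_request("192.168.5.2:8000", "alice", "s3cret")
print(req["url"])
```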
+
+h3. Using an OpenID Connect access token
-The "browser authentication process is documented in detail on the Arvados wiki.":https://dev.arvados.org/projects/arvados/wiki/Workbench_authentication_process
+A cluster that uses OpenID Connect as a login provider can be configured to accept OIDC access tokens as well as Arvados API tokens (this is disabled by default; see @Login.OpenIDConnect.AcceptAccessToken@ in the "default config.yml file":{{site.baseurl}}/admin/config.html).
+# The client obtains an access token from the OpenID Connect provider via some method outside of Arvados.
+# The client presents the access token with an Arvados API request (e.g., request header @Authorization: Bearer xxxxaccesstokenxxxx@).
+# Depending on configuration, the API server decodes the access token (which must be a signed JWT) and confirms that it includes the required scope (see @Login.OpenIDConnect.AcceptAccessTokenScope@ in the "default config.yml file":{{site.baseurl}}/admin/config.html).
+# The API server uses the provider's UserInfo endpoint to validate the presented token.
+# If the token is valid, it is cached in the Arvados database and accepted in subsequent API calls for the next 10 minutes.
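The caching behavior in the last two steps can be sketched as follows. This is an illustration of the idea, not Arvados source code: a validated access token is remembered for 10 minutes so the UserInfo endpoint is not contacted on every request.

```python
import time

CACHE_TTL = 10 * 60  # seconds: accepted tokens are cached for 10 minutes

class TokenCache:
    def __init__(self, validate):
        self.validate = validate   # stands in for the UserInfo endpoint call
        self.cache = {}            # token -> expiry timestamp

    def check(self, token, now=None):
        now = time.time() if now is None else now
        if self.cache.get(token, 0) > now:
            return True            # still within the 10-minute window
        if self.validate(token):
            self.cache[token] = now + CACHE_TTL
            return True
        return False

calls = []
cache = TokenCache(lambda tok: calls.append(tok) or True)
cache.check("jwt-xyz", now=0)     # validated against the provider
cache.check("jwt-xyz", now=300)   # served from cache, no second call
print(len(calls))
```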
+
+h3. Diagram
+
+!{{site.baseurl}}/images/Session_Establishment.svg!
h2. User activation
--- /dev/null
+---
+layout: default
+navsection: architecture
+title: Dispatching containers to cloud VMs
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+The arvados-dispatch-cloud component runs Arvados user containers on generic public cloud infrastructure by automatically creating and destroying VMs (“instances”) of various sizes according to demand, preparing the instances’ runtime environments, and running containers on them.
+
+This does not use a cloud provider’s container-execution service.
+
+h2. Overview
+
+In this diagram, the black edges show interactions involved in starting a VM instance and running a container. The blue edges show the “container shell” communication channel.
+
+!{max-width:40em}{{site.baseurl}}/architecture/dispatchcloud.svg!
+
+{% comment %}
+# svg generated using https://graphviz.it/
+digraph {
+ subgraph cluster_cloudvm {
+ node [color=black] [fillcolor=white] [style=filled];
+ style = filled;
+ color = lightgrey;
+ label = "cloud instance (VM)";
+ "SSH server" -> "crunch-run" [label = "start crunch-run"];
+ "crunch-run" -> docker [label = "create container"];
+ "crunch-run" -> docker [label = "shell"] [color = blue] [fontcolor = blue];
+ "crunch-run" -> container [label = "tcp/http"] [color = blue] [fontcolor = blue];
+ docker -> container;
+ }
+ "cloud provider" [shape=box] [style=dashed];
+ dispatcher -> controller [label = "get container queue"];
+ dispatcher -> "cloud provider" [label = "create/destroy/list VMs"];
+ "cloud provider" -> "SSH server" [label = "add authorized_keys"];
+ "crunch-run" -> controller [label = "update\ngateway ip:port,\ncontainer state,\noutput, ..."];
+ client -> controller [label = "shell/tcp/http (https tunnel)"] [color = blue] [fontcolor = blue];
+ controller -> "crunch-run" [label = "shell/tcp/http (https tunnel)"] [color = blue] [fontcolor = blue];
+ dispatcher -> "SSH server" [label = "start crunch-run"];
+}
+{% endcomment %}
+
+h2. Scheduling
+
+The dispatcher periodically polls arvados-controller to get a list of containers that are ready to run. Whenever this list changes, the dispatcher runs a scheduling loop that selects a suitable instance type for each container, allocates the highest priority containers to idle instances, requests new instances if needed, and shuts down instances that have been idle for longer than the configured idle timeout. Currently the dispatcher only runs one container at a time on an instance, even if the instance has enough RAM and CPUs to accommodate more.
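The scheduling pass above can be sketched in simplified form: pick a suitable instance type for each container, give idle instances to the highest-priority containers first, and note which new instances would be needed. The instance types, sizes, and container fields below are invented for the example (the real dispatcher is written in Go and is considerably more involved).

```python
INSTANCE_TYPES = [  # (name, cores, ram_gb, hourly price), cheapest first
    ("small", 2, 8, 0.10),
    ("large", 16, 64, 0.80),
]

def choose_type(container):
    # Cheapest type with enough cores and RAM for the container.
    for name, cores, ram, price in INSTANCE_TYPES:
        if cores >= container["vcpus"] and ram >= container["ram_gb"]:
            return name
    return None  # no suitable type: container stays queued

def schedule(queue, idle_instances):
    # One container per instance, highest priority first.
    plan, to_create = [], []
    for ctr in sorted(queue, key=lambda c: -c["priority"]):
        itype = choose_type(ctr)
        if itype is None:
            continue
        if idle_instances.get(itype, 0) > 0:
            idle_instances[itype] -= 1
            plan.append((ctr["uuid"], itype, "run on idle instance"))
        else:
            to_create.append(itype)   # request a new instance
    return plan, to_create

queue = [
    {"uuid": "ctr1", "priority": 5, "vcpus": 1, "ram_gb": 4},
    {"uuid": "ctr2", "priority": 9, "vcpus": 8, "ram_gb": 32},
]
plan, to_create = schedule(queue, {"small": 1})
print(plan, to_create)
```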
+
+h2. Creating instances
+
+When creating a new instance, the dispatcher uses the cloud provider’s metadata feature to add a tag with key “InstanceSetID” and a value derived from its Arvados authentication token. This enables the dispatcher to recognize and reconnect to existing instances that belong to it, and continue monitoring existing containers, after a restart or upgrade.
+
+When using the Azure cloud service, the dispatcher needs to first create a new network interface, then attach it to a new instance. The network interface is also tagged with “InstanceSetID”.
+
+If the cloud provider returns a rate-limiting error when creating a new instance, the dispatcher avoids requesting new instances for a short period, and shuts down idle nodes more aggressively (i.e., without waiting for the usual idle timeout to elapse) until a new instance is successfully created.
+
+h2. Recovering state after a restart
+
+Restarting the dispatcher does not interrupt containers that are already running. When the dispatcher starts up, it gets the cloud provider’s current list of instances that have the expected InstanceSetID tag value. It ignores instances without that tag, so it won’t interfere with other VM instances in the same cloud account. It runs the boot probe command on each instance, checks for containers that were started by a previous invocation and are still running, and resumes monitoring. Before dispatching any new containers to a previously existing instance, it ensures the crunch-run program is updated if needed.
+
+h2. Instance boot process
+
+When the cloud provider indicates that a new instance has been created, the dispatcher connects to the instance’s SSH service (see “instance control channel” below) and executes the configured boot probe command. If this fails, the dispatcher retries until the configured boot timeout is reached, then shuts down the instance. When the boot probe succeeds, the dispatcher copies the crunch-run program to the instance, and runs it to check for running containers before reporting the instance’s state as “idle” or “busy”. (Normally of course a freshly booted instance has no containers running, but this covers the case where the dispatcher itself has restarted and containers submitted by the previous dispatcher process are still running.)
+
+The dispatcher and crunch-run programs are both packaged in a single executable file: when the dispatcher copies crunch-run to an instance, it is really copying itself. This ensures the dispatcher is always using the version of crunch-run that it expects.
+
+h2. Boot probe command
+
+The purpose of the boot probe command is to ensure the dispatcher does not try to schedule containers on an instance before the instance is ready, even if its SSH daemon comes up early in the boot process. The default boot probe command, @systemctl is-system-running@, is appropriate for images that use @systemd@ to manage the boot process. Another approach is to use a custom startup script in the VM image that writes a file when it finishes, and a boot probe command that checks for that file, such as @cat /var/run/boot.complete@.
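The retry-until-timeout behavior described above can be sketched as follows; @run_probe@ stands in for executing the configured probe command over SSH, and the fake clock makes the demo deterministic.

```python
def wait_for_boot(run_probe, timeout, clock):
    # Retry the boot probe until it succeeds (exit status 0) or the
    # configured boot timeout elapses.
    deadline = clock() + timeout
    while clock() < deadline:
        if run_probe() == 0:
            return True
    return False   # timed out: the caller shuts the instance down

# Deterministic demo: a fake clock that advances 10s per call, and a
# probe that fails twice before the instance finishes booting.
t = {"now": 0}
def clock():
    t["now"] += 10
    return t["now"]

probe_results = iter([1, 1, 0])
ok = wait_for_boot(lambda: next(probe_results), timeout=300, clock=clock)
print(ok)
```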
+
+h2. Automatic instance shutdown
+
+Normally, the dispatcher shuts down any instance that has remained idle for 1 minute (see TimeoutIdle configuration) but there are some exceptions to this rule. If the cloud provider returns a quota error when trying to create a new instance, the dispatcher shuts down idle nodes right away, in case the idle nodes are contributing to the quota. Also, the operator can use the management API to set an instance’s idle behavior to “drain” or “hold”. “Drain” shuts down the instance as soon as it becomes idle, which can be used to recycle a suspect node without interrupting a running container. “Hold” keeps the instance alive indefinitely without scheduling additional containers on it, which can be used to investigate problems like a failed startup script.
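The shutdown rules above reduce to a small decision function. This is a sketch of the logic as described, not dispatcher source code:

```python
TIMEOUT_IDLE = 60  # seconds, per the 1-minute default (TimeoutIdle)

def should_shut_down(idle_behavior, idle_seconds, at_quota):
    if idle_behavior == "hold":
        return False                     # keep alive indefinitely
    if idle_behavior == "drain":
        return idle_seconds > 0          # shut down as soon as idle
    if at_quota:
        return idle_seconds > 0          # free quota immediately
    return idle_seconds >= TIMEOUT_IDLE  # normal idle timeout

print(should_shut_down("run", 30, at_quota=False))   # within the timeout
print(should_shut_down("run", 30, at_quota=True))    # quota pressure
print(should_shut_down("hold", 3600, at_quota=True)) # operator hold
```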
+
+Each instance is tagged with its current idle behavior (using the tag name “IdleBehavior”), which makes it visible in the cloud provider’s console and ensures the behavior is retained if the dispatcher restarts.
+
+h2. Management API
+
+The dispatcher provides an HTTP management interface, which provides the operator with more visibility and control for purposes of troubleshooting and monitoring. APIs are provided to return details of current VM instances and running/scheduled containers as seen by the dispatcher, immediately terminate containers and instances, and control the on-idle behavior of instances. This interface also provides Prometheus metrics. See the "cloud dispatcher management API":{{site.baseurl}}/api/dispatch.html documentation for details.
+
+h2. Instance control channel (SSH)
+
+The dispatcher uses a multiplexed SSH connection to monitor instance boot progress, install the crunch-run supervisor program, start and stop containers, and detect crashed containers and failing instances. It establishes a persistent SSH connection to each cloud instance when the instance first appears, retrying/reconnecting as needed.
+
+Cloud VMs typically generate a random SSH host key at boot time, making host key verification impossible. To provide some assurance that the dispatcher is connecting to the intended instance, when it creates a new instance the dispatcher generates a random “instance secret”, uses the cloud provider’s bootstrap command feature to save it in @/var/run/arvados-instance-secret@ on the new instance, and executes @cat /var/run/arvados-instance-secret@ to verify the instance’s identity when first connecting to its SSH server. Each instance is also tagged with its instance secret, so it can still be verified after a dispatcher restart.
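The identity check can be sketched as follows; @ssh_output@ stands in for the SSH round trip that runs @cat /var/run/arvados-instance-secret@ on the instance.

```python
import secrets

def new_instance_secret():
    return secrets.token_hex(16)   # random per-instance secret

def verify_instance(expected_secret, ssh_output):
    # Constant-time comparison avoids leaking the secret via timing.
    return secrets.compare_digest(expected_secret, ssh_output.strip())

s = new_instance_secret()
print(verify_instance(s, s + "\n"))   # the intended instance
print(verify_instance(s, "wrong"))    # some other machine
```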
+
+h2. Container communication channel (https tunnel)
+
+The crunch-run program runs a gateway server which facilitates the “container shell” feature without sending traffic through the dispatcher process. The gateway server accepts TLS connections from arvados-controller on a dynamic TCP port (typically in the range 32768-60999, see @sysctl net.ipv4.ip_local_port_range@). Crunch-run saves the selected port, along with the external IP address of the VM instance as seen by the dispatcher, in the @gateway_address@ field in the container record so arvados-controller can connect to it.
+
+On the client host (typically a shell node or a user’s workstation) the @arvados-client shell@ command sends an https “connect” request to arvados-controller, which sends an https “connect” request to the gateway server. These tunnels convey SSH protocol traffic between the user’s SSH client and crunch-run’s built-in SSH server, which uses @docker exec@ to run commands inside the container.
+
+Arvados-controller and crunch-run gateway server authenticate each other using a self-signed certificate and a shared secret based on the cluster-wide @SystemRootToken@. If that token changes (and the dispatcher restarts to load the new token) while a container is running, the container will stop accepting container shell traffic.
+
+h2. Scaling
+
+Architecturally, the dispatcher is _designed_ to accommodate multiple concurrent dispatcher processes on multiple hosts, each using a different authorization token, but such a configuration is not yet supported. Currently, each cluster should run a single dispatcher process. A single process can support thousands of concurrent VM instances.
--- /dev/null
+<svg version="1.1" xmlns="http://www.w3.org/2000/svg" width="542.87pt" height="561.4pt" viewBox="0 0 542.87 561.4"><style type="text/css">.dashed {stroke-dasharray: 5,5} .dotted {stroke-dasharray: 1,5} .overlay {fill: none; pointer-events: all}</style><g><g transform="translate(4, 557.4000244140625) scale(1,1)"><polygon stroke="#fffffe" stroke-opacity="0" fill="#ffffff" points="-4,4 -4,-557.4 538.87,-557.4 538.87,4"></polygon><g class="subgraph"><title>cluster_cloudvm</title><path stroke="#d3d3d3" fill="#d3d3d3" d="M 30.22,-8 L 30.22,-385.8,209.22,-385.8,209.22,-8 Z"></path><text x="119.72" y="-369.2" text-anchor="middle" font-family="'Times-Roman',serif" font-size="14" fill="#000000">cloud instance (VM)</text></g><g class="node"><title>SSH server</title><path stroke="#000000" fill="#ffffff" d="M 93.22,-335 m -54.99,0 a 54.99,18 0 1,0 109.98,0 a 54.99,18 0 1,0 -109.98,0"></path><text x="93.22" y="-330.8" text-anchor="middle" font-family="'Times-Roman',serif" font-size="14" fill="#000000">SSH server</text></g><g class="node"><title>crunch-run</title><path stroke="#000000" fill="#ffffff" d="M 148.22,-195.8 m -53.29,0 a 53.29,18 0 1,0 106.58,0 a 53.29,18 0 1,0 -106.58,0"></path><text x="148.22" y="-191.6" text-anchor="middle" font-family="'Times-Roman',serif" font-size="14" fill="#000000">crunch-run</text></g><g class="relation" style="opacity: 1;"><title>SSH server->crunch-run</title><path stroke="#000000" fill="none" d="M 82.52,-317.19 C 79.55,-311.62,76.72,-305.24,75.2,-299,68.17,-269.97,59.29,-257.07,75.2,-231.8,80.6,-223.24,88.73,-216.72,97.64,-211.78"></path><path class="solid" stroke="#000000" fill="#000000" d="M 99.28,-214.87 L 106.73,-207.34,96.21,-208.58 Z"></path><text x="119.72" y="-261.2" text-anchor="middle" font-family="'Times-Roman',serif" font-size="14" fill="#000000">start crunch-run</text></g><g class="node"><title>docker</title><path stroke="#000000" fill="#ffffff" d="M 85.22,-107 m -37.12,0 a 37.12,18 0 1,0 74.24,0 a 37.12,18 0 1,0 
-74.24,0"></path><text x="85.22" y="-102.8" text-anchor="middle" font-family="'Times-Roman',serif" font-size="14" fill="#000000">docker</text></g><g class="relation" style="opacity: 1;"><title>crunch-run->docker</title><path stroke="#000000" fill="none" d="M 100.13,-187.64 C 83.75,-182.64,67.09,-174.17,57.22,-159.8,51.11,-150.9,54.81,-140.52,61.26,-131.4"></path><path class="solid" stroke="#000000" fill="#000000" d="M 64.14,-133.41 L 67.76,-123.46,58.72,-128.98 Z"></path><path stroke="#0000ff" fill="none" d="M 151.54,-177.75 C 152.71,-167.03,152.46,-153.32,146.22,-143,141.05,-134.47,132.98,-127.82,124.38,-122.72"></path><path class="solid" stroke="#0000ff" fill="#0000ff" d="M 125.8,-119.52 L 115.32,-117.96,122.54,-125.71 Z"></path><text x="102.71" y="-147.2" text-anchor="middle" font-family="'Times-Roman',serif" font-size="14" fill="#000000">create container</text><text x="165.44" y="-147.2" text-anchor="middle" font-family="'Times-Roman',serif" font-size="14" fill="#0000ff">shell</text></g><g class="node"><title>container</title><path stroke="#000000" fill="#ffffff" d="M 119.22,-34 m -46.93,0 a 46.93,18 0 1,0 93.86,0 a 46.93,18 0 1,0 -93.86,0"></path><text x="119.22" y="-29.8" text-anchor="middle" font-family="'Times-Roman',serif" font-size="14" fill="#000000">container</text></g><g class="relation" style="opacity: 1;"><title>crunch-run->container</title><path stroke="#0000ff" fill="none" d="M 168.59,-179 C 174.07,-173.57,179.29,-167.02,182.22,-159.8,185.02,-152.88,184.16,-150.21,182.22,-143,173.79,-111.83,153.88,-80.42,138.69,-59.57"></path><path class="solid" stroke="#0000ff" fill="#0000ff" d="M 141.3,-57.22 L 132.51,-51.32,135.7,-61.42 Z"></path><text x="197.6" y="-102.8" text-anchor="middle" font-family="'Times-Roman',serif" font-size="14" fill="#0000ff">tcp/http</text></g><g class="relation" style="opacity: 1;"><title>docker->container</title><path stroke="#000000" fill="none" d="M 93.27,-89.17 C 97.24,-80.9,102.11,-70.72,106.56,-61.44"></path><path 
class="solid" stroke="#000000" fill="#000000" d="M 109.82,-62.73 L 110.98,-52.2,103.5,-59.71 Z"></path></g><g class="node"><title>controller</title><path stroke="#000000" fill="none" d="M 292.22,-335 m -48.65,0 a 48.65,18 0 1,0 97.3,0 a 48.65,18 0 1,0 -97.3,0"></path><text x="292.22" y="-330.8" text-anchor="middle" font-family="'Times-Roman',serif" font-size="14" fill="#000000">controller</text></g><g class="relation" style="opacity: 1;"><title>crunch-run->controller</title><path stroke="#000000" fill="none" d="M 156.37,-213.74 C 159,-219.43,161.84,-225.84,164.22,-231.8,175.91,-261.13,164.83,-276.77,187.24,-299,200.71,-312.35,219.4,-320.61,237.28,-325.72"></path><path class="solid" stroke="#000000" fill="#000000" d="M 236.54,-329.14 L 247.09,-328.25,238.28,-322.37 Z"></path><text x="233.7" y="-286.4" text-anchor="middle" font-family="'Times-Roman',serif" font-size="14" fill="#000000">update</text><text x="233.7" y="-269.6" text-anchor="middle" font-family="'Times-Roman',serif" font-size="14" fill="#000000">gateway ip:port,</text><text x="233.7" y="-252.8" text-anchor="middle" font-family="'Times-Roman',serif" font-size="14" fill="#000000">container state,</text><text x="233.7" y="-236" text-anchor="middle" font-family="'Times-Roman',serif" font-size="14" fill="#000000">output, ...</text></g><g class="node"><title>cloud provider</title><path class="dashed" stroke="#000000" fill="none" d="M 202.25,-464.6 L 104.18,-464.6,104.18,-428.6,202.25,-428.6 Z"></path><text x="153.22" y="-442.4" text-anchor="middle" font-family="'Times-Roman',serif" font-size="14" fill="#000000">cloud provider</text></g><g class="relation" style="opacity: 1;"><title>cloud provider->SSH server</title><path stroke="#000000" fill="none" d="M 143.84,-428.47 C 134.07,-410.62,118.63,-382.42,107.37,-361.85"></path><path class="solid" stroke="#000000" fill="#000000" d="M 110.4,-360.1 L 102.52,-353,104.26,-363.46 Z"></path><text x="191.95" y="-398" text-anchor="middle" font-family="'Times-Roman',serif" 
font-size="14" fill="#000000">add authorized_keys</text></g><g class="node"><title>dispatcher</title><path stroke="#000000" fill="none" d="M 121.22,-535.4 m -50.94,0 a 50.94,18 0 1,0 101.88,0 a 50.94,18 0 1,0 -101.88,0"></path><text x="121.22" y="-531.2" text-anchor="middle" font-family="'Times-Roman',serif" font-size="14" fill="#000000">dispatcher</text></g><g class="relation" style="opacity: 1;"><title>dispatcher->SSH server</title><path stroke="#000000" fill="none" d="M 84.19,-522.95 C 57.27,-512.46,22.76,-494.01,6.2,-464.6,-15.98,-425.19,28.29,-382.2,61.44,-357.25"></path><path class="solid" stroke="#000000" fill="#000000" d="M 63.56,-360.03 L 69.58,-351.31,59.43,-354.38 Z"></path><text x="50.72" y="-442.4" text-anchor="middle" font-family="'Times-Roman',serif" font-size="14" fill="#000000">start crunch-run</text></g><g class="relation" style="opacity: 1;"><title>dispatcher->cloud provider</title><path stroke="#000000" fill="none" d="M 118.61,-517.02 C 117.76,-506.68,117.95,-493.49,122.2,-482.6,123.51,-479.25,125.3,-476,127.36,-472.94"></path><path class="solid" stroke="#000000" fill="#000000" d="M 130.31,-474.84 L 133.71,-464.8,124.8,-470.53 Z"></path><text x="187.72" y="-486.8" text-anchor="middle" font-family="'Times-Roman',serif" font-size="14" fill="#000000">create/destroy/list VMs</text></g><g class="relation" style="opacity: 1;"><title>dispatcher->controller</title><path stroke="#000000" fill="none" d="M 171.65,-533.15 C 199.28,-529.81,232.11,-521.04,253.22,-499.4,262.74,-489.64,279.17,-406.88,287.33,-363.03"></path><path class="solid" stroke="#000000" fill="#000000" d="M 290.8,-363.51 L 289.17,-353.04,283.91,-362.24 Z"></path><text x="330.03" y="-442.4" text-anchor="middle" font-family="'Times-Roman',serif" font-size="14" fill="#000000">get container queue</text></g><g class="relation" style="opacity: 1;"><title>controller->crunch-run</title><path stroke="#0000ff" fill="none" d="M 295.12,-316.84 C 
297.86,-294.44,298.93,-255.32,278.22,-231.8,261.45,-212.76,235.53,-203.57,211.23,-199.31"></path><path class="solid" stroke="#0000ff" fill="#0000ff" d="M 211.72,-195.84 L 201.31,-197.81,210.68,-202.76 Z"></path><text x="372.04" y="-261.2" text-anchor="middle" font-family="'Times-Roman',serif" font-size="14" fill="#0000ff">shell/tcp/http (https tunnel)</text></g><g class="node"><title>client</title><path stroke="#000000" fill="none" d="M 427.22,-446.6 m -32.48,0 a 32.48,18 0 1,0 64.96,0 a 32.48,18 0 1,0 -64.96,0"></path><text x="427.22" y="-442.4" text-anchor="middle" font-family="'Times-Roman',serif" font-size="14" fill="#000000">client</text></g><g class="relation" style="opacity: 1;"><title>client->controller</title><path stroke="#0000ff" fill="none" d="M 409.57,-431.27 C 386.64,-412.66,346.37,-379.96,319.49,-358.15"></path><path class="solid" stroke="#0000ff" fill="#0000ff" d="M 321.59,-355.34 L 311.62,-351.75,317.17,-360.77 Z"></path><text x="459.04" y="-398" text-anchor="middle" font-family="'Times-Roman',serif" font-size="14" fill="#0000ff">shell/tcp/http (https tunnel)</text></g></g></g></svg>
\ No newline at end of file
h3. Keep clients for data access
In order to access data in Keep, a client is needed to store data in and retrieve data from Keep. Different types of Keep clients exist:
-* a command line client like "@arv-get@":/user/tutorials/tutorial-keep-get.html#download-using-arv or "@arv-put@":/user/tutorials/tutorial-keep.html#upload-using-command
-* a FUSE mount provided by "@arv-mount@":/user/tutorials/tutorial-keep-mount-gnu-linux.html
+* a command line client like "@arv-get@":{{ site.baseurl }}/user/tutorials/tutorial-keep-get.html#download-using-arv or "@arv-put@":{{ site.baseurl }}/user/tutorials/tutorial-keep.html#upload-using-command
+* a FUSE mount provided by "@arv-mount@":{{ site.baseurl }}/user/tutorials/tutorial-keep-mount-gnu-linux.html
* a WebDAV mount provided by @keep-web@
* an S3-compatible endpoint provided by @keep-web@
-* programmatic access via the "Arvados SDKs":/sdk/index.html
+* programmatic access via the "Arvados SDKs":{{ site.baseurl }}/sdk/index.html
-In essense, these clients all do the same thing: they translate file and directory references into requests for Keep blocks and collection manifests. How Keep clients work, and how they use rendezvous hashing, is described in greater detail in "the next section":/architecture/keep-clients.html.
+In essence, these clients all do the same thing: they translate file and directory references into requests for Keep blocks and collection manifests. How Keep clients work, and how they use rendezvous hashing, is described in greater detail in "the next section":{{ site.baseurl }}/architecture/keep-clients.html.
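As a generic illustration of rendezvous (highest-random-weight) hashing, the technique mentioned above: every client independently ranks the servers for a given block and therefore picks the same winners, with no coordination. The real Keep algorithm differs in detail; this only shows the idea, and the server names are invented.

```python
import hashlib

def rank_servers(block_hash, servers):
    # Weight each server by hashing (block, server) together; the ranking
    # is deterministic, so every client computes the same order.
    def weight(server):
        return hashlib.md5((block_hash + server).encode()).hexdigest()
    return sorted(servers, key=weight, reverse=True)

servers = ["keep0", "keep1", "keep2", "keep3"]
order = rank_servers("acbd18db4cc2f85cedef654fccc4a4d8", servers)
print(order[:2])   # the servers to try first for this block
```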
For example, when a request comes in to read a file from Keep, the client will
* request the collection object (including its manifest) from the API server
h3. API server
-The API server stores collection objects and all associated metadata. That includes data about where the blocks for a collection are to be stored, e.g. when "storage classes":/admin/storage-classes.html are configured, as well as the desired and confirmed replication count for each block. It also stores the ACLs that control access to the collections. Finally, the API server provides Keep clients with time-based block signatures for access.
+The API server stores collection objects and all associated metadata. That includes data about where the blocks for a collection are to be stored, e.g. when "storage classes":{{ site.baseurl }}/admin/storage-classes.html are configured, as well as the desired and confirmed replication count for each block. It also stores the ACLs that control access to the collections. Finally, the API server provides Keep clients with time-based block signatures for access.
h3. Keepstore
|_. collection state|_. is_trashed|_. trash_at|_. delete_at|_. get|_. list|_. list?include_trash=true|_. can be modified|
|persisted collection|false |null |null |yes |yes |yes |yes |
|expiring collection|false |future |future |yes |yes |yes |yes |
-|trashed collection|true |past |future |no |no |yes |only is_trashed, trash_at and delete_at attribtues|
+|trashed collection|true |past |future |no |no |yes |only is_trashed, trash_at and delete_at attributes|
|deleted collection|true|past |past |no |no |no |no |
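The table rows can be restated as a small decision function over the three attributes. This is an illustrative sketch, not part of any Arvados SDK; the attribute names mirror the API fields:

```python
from datetime import datetime, timedelta, timezone

def collection_state(is_trashed, trash_at, delete_at, now=None):
    # Classify a collection per the state table above.
    now = now or datetime.now(timezone.utc)
    if not is_trashed:
        return "expiring" if trash_at is not None else "persisted"
    return "deleted" if delete_at <= now else "trashed"

now = datetime.now(timezone.utc)
past, future = now - timedelta(days=1), now + timedelta(days=1)
```

For example, a collection with @is_trashed=true@, @trash_at@ in the past, and @delete_at@ in the future is trashed but still recoverable until @delete_at@ passes.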
h2(#block_lifecycle). Block lifecycle
--- /dev/null
+---
+layout: default
+navsection: architecture
+title: Singularity
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+Arvados can be configured to use "Singularity":https://sylabs.io/singularity/ instead of Docker to execute containers on cloud nodes or a Slurm/LSF cluster. Singularity may be preferable due to its simpler installation and its lack of a long-running daemon process and special system users/groups. For on-premises Slurm/LSF clusters, see the "Set up a compute node with Singularity":{{ site.baseurl }}/install/crunch2/install-compute-node-singularity.html page. For cloud compute clusters, see the "Build a cloud compute node image":{{ site.baseurl }}/install/crunch2-cloud/install-compute-node.html page.
+
+h2. Design overview
+
+When Arvados is configured to use Singularity as the runtime engine for Crunch, containers are executed by Singularity. The images specified in workflows and tool definitions must be Docker images uploaded via @arv-keepdocker@ or @arvados-cwl-runner@. When Singularity is the runtime engine, these images are converted to Singularity format (@.sif@) at runtime, as needed.
+
+To avoid repeating this conversion work unnecessarily, the @.sif@ files are cached in @Keep@. This is done on a per-user basis. If it does not exist yet, a new Arvados project named @.cache@ is automatically created in the user's home project. Similarly, a subproject named @auto-generated singularity images@ will be created in the @.cache@ project. The automatically generated @.sif@ files are stored in collections in that project, with an expiration date two weeks in the future. If the cached image exists when Crunch runs a new container, the expiration date is pushed out, so that it is always two weeks after the most recent start of a container using the image.
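The sliding expiration amounts to: on each use, set the cached collection's @trash_at@ to two weeks after the container's start time. A trivial sketch (illustrative only, not an SDK call):

```python
from datetime import datetime, timedelta, timezone

def refreshed_trash_at(container_start):
    # Push the cached .sif collection's expiry to two weeks after the
    # most recent container start that used the image.
    return container_start + timedelta(weeks=2)

start = datetime(2021, 10, 1, tzinfo=timezone.utc)
```

A cached image that goes unused for two weeks therefore ages out on its own, while an image in regular use is never trashed.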
+
+It is safe to empty out or even remove the @.cache@ project or any of its contents; if necessary, the cache projects and the @.sif@ files will be regenerated automatically.
+
+h2. Notes
+
+* Programs running in Singularity containers may behave differently than when run in Docker, due to differences between Singularity and Docker. For example, the root (image) filesystem is read-only in a Singularity container. Programs that attempt to write outside a designated output or temporary directory are likely to fail.
+
+* When using Singularity as the runtime engine, the compute node needs to have a compatible Singularity executable installed, as well as the @mksquashfs@ program used to convert Docker images to Singularity's @.sif@ format. The Arvados "compute node image build script":{{ site.baseurl }}/install/crunch2-cloud/install-compute-node.html includes these executables since Arvados 2.3.0.
+
+h2. Limitations
+
+Arvados Singularity support is a work in progress. These are the current limitations of the implementation:
+
+* Even when using the Singularity runtime, users' container images are expected to be saved in Docker format. Specifying a @.sif@ file as an image when submitting a container request is not yet supported.
+* Arvados' Singularity implementation does not yet limit the amount of memory available in a container. Each container will have access to all memory on the host where it runs, unless memory use is restricted by Slurm/LSF.
+* The Docker ENTRYPOINT instruction is ignored.
+* Arvados is tested with Singularity version 3.7.4. Other versions may not work.
-<?xml version="1.0" standalone="yes"?>
-<!-- Copyright (C) The Arvados Authors. All rights reserved.
-
-SPDX-License-Identifier: CC-BY-SA-3.0 -->
-
[deleted SVG diagram: vector path data omitted]
0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm9.375 -1.984375q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm15.735046 4.921875l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm3.9069824 0l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm12.6658325 -3.609375l1.640625 0.21875q-0.265625 1.6875 -1.375 2.65625q-1.109375 0.953125 -2.734375 0.953125q-2.015625 0 -3.25 -1.3125q-1.21875 -1.328125 -1.21875 -3.796875q0 -1.59375 0.515625 -2.78125q0.53125 -1.203125 1.609375 -1.796875q1.09375 -0.609375 2.359375 
-0.609375q1.609375 0 2.625 0.8125q1.015625 0.8125 1.3125 2.3125l-1.625 0.25q-0.234375 -1.0 -0.828125 -1.5q-0.59375 -0.5 -1.421875 -0.5q-1.265625 0 -2.0625 0.90625q-0.78125 0.90625 -0.78125 2.859375q0 1.984375 0.765625 2.890625q0.765625 0.890625 1.984375 0.890625q0.984375 0 1.640625 -0.59375q0.65625 -0.609375 0.84375 -1.859375zm9.640625 0.4375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875z" fill-rule="nonzero"></path><path fill="#000000" d="m859.58276 302.12473l0 -8.546875l-1.484375 0l0 -1.3125l1.484375 0l0 -1.046875q0 -0.984375 0.171875 -1.46875q0.234375 -0.65625 0.84375 -1.046875q0.609375 -0.40625 1.703125 -0.40625q0.703125 0 1.5625 0.15625l-0.25 1.46875q-0.515625 -0.09375 -0.984375 -0.09375q-0.765625 0 -1.078125 0.328125q-0.3125 0.3125 -0.3125 1.203125l0 0.90625l1.921875 0l0 1.3125l-1.921875 0l0 8.546875l-1.65625 0zm4.7614136 0l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm5.6033325 -4.921875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 
-3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm9.281921 4.921875l0 -9.859375l1.5 0l0 1.390625q0.453125 -0.71875 1.21875 -1.15625q0.78125 -0.453125 1.765625 -0.453125q1.09375 0 1.796875 0.453125q0.703125 0.453125 0.984375 1.28125q1.171875 -1.734375 3.046875 -1.734375q1.46875 0 2.25 0.8125q0.796875 0.8125 0.796875 2.5l0 6.765625l-1.671875 0l0 -6.203125q0 -1.0 -0.15625 -1.4375q-0.15625 -0.453125 -0.59375 -0.71875q-0.421875 -0.265625 -1.0 -0.265625q-1.03125 0 -1.71875 0.6875q-0.6875 0.6875 -0.6875 2.21875l0 5.71875l-1.671875 0l0 -6.40625q0 -1.109375 -0.40625 -1.65625q-0.40625 -0.5625 -1.34375 -0.5625q-0.703125 0 -1.3125 0.375q-0.59375 0.359375 -0.859375 1.078125q-0.265625 0.71875 -0.265625 2.0625l0 5.109375l-1.671875 0zm19.442871 0l5.234375 -13.59375l1.9375 0l5.5625 13.59375l-2.046875 0l-1.59375 -4.125l-5.6875 0l-1.484375 4.125l-1.921875 0zm3.921875 -5.578125l4.609375 0l-1.40625 -3.78125q-0.65625 -1.703125 -0.96875 -2.8125q-0.265625 1.3125 -0.734375 2.59375l-1.5 4.0zm10.0217285 5.578125l0 -13.59375l5.125 0q1.359375 0 2.078125 0.125q1.0 0.171875 1.671875 0.640625q0.671875 0.46875 1.078125 1.3125q0.421875 0.84375 0.421875 1.84375q0 1.734375 -1.109375 2.9375q-1.09375 1.203125 -3.984375 1.203125l-3.484375 0l0 5.53125l-1.796875 0zm1.796875 -7.140625l3.515625 0q1.75 0 2.46875 -0.640625q0.734375 -0.65625 0.734375 -1.828125q0 -0.859375 -0.4375 -1.46875q-0.421875 -0.609375 -1.125 -0.796875q-0.453125 -0.125 -1.671875 -0.125l-3.484375 0l0 4.859375zm10.9435425 7.140625l0 -13.59375l1.8125 0l0 13.59375l-1.8125 0zm9.460388 -4.375l1.6875 -0.140625q0.125 1.015625 0.5625 1.671875q0.4375 0.65625 1.359375 1.0625q0.9375 0.40625 2.09375 0.40625q1.03125 0 1.8125 -0.3125q0.796875 -0.3125 1.1875 -0.84375q0.390625 -0.53125 0.390625 -1.15625q0 -0.640625 -0.375 -1.109375q-0.375 
-0.484375 -1.234375 -0.8125q-0.546875 -0.21875 -2.421875 -0.65625q-1.875 -0.453125 -2.625 -0.859375q-0.96875 -0.515625 -1.453125 -1.265625q-0.46875 -0.75 -0.46875 -1.6875q0 -1.03125 0.578125 -1.921875q0.59375 -0.90625 1.703125 -1.359375q1.125 -0.46875 2.5 -0.46875q1.515625 0 2.671875 0.484375q1.15625 0.484375 1.765625 1.4375q0.625 0.9375 0.671875 2.140625l-1.71875 0.125q-0.140625 -1.28125 -0.953125 -1.9375q-0.796875 -0.671875 -2.359375 -0.671875q-1.625 0 -2.375 0.609375q-0.75 0.59375 -0.75 1.4375q0 0.734375 0.53125 1.203125q0.515625 0.46875 2.703125 0.96875q2.203125 0.5 3.015625 0.875q1.1875 0.546875 1.75 1.390625q0.578125 0.828125 0.578125 1.921875q0 1.09375 -0.625 2.0625q-0.625 0.953125 -1.796875 1.484375q-1.15625 0.53125 -2.609375 0.53125q-1.84375 0 -3.09375 -0.53125q-1.25 -0.546875 -1.96875 -1.625q-0.703125 -1.078125 -0.734375 -2.453125zm19.584167 1.203125l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094482 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm8.9626465 0l-3.75 -9.859375l1.765625 0l2.125 5.90625q0.34375 0.953125 0.625 1.984375q0.21875 -0.78125 0.625 -1.875l2.1875 -6.015625l1.71875 0l-3.734375 9.859375l-1.5625 0zm13.34375 
-3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094482 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0z" fill-rule="nonzero"></path><path fill="#d9ead3" d="m467.042 484.1076l154.07874 -74.80313l154.07874 74.80313l-154.07874 74.80316z" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m467.042 484.1076l154.07874 -74.80313l154.07874 74.80313l-154.07874 74.80316z" fill-rule="nonzero"></path><path fill="#000000" d="m553.94073 486.65262l1.6875 -0.140625q0.125 1.015625 0.5625 1.671875q0.4375 0.65625 1.359375 1.0625q0.9375 0.40625 2.09375 0.40625q1.03125 0 1.8125 -0.3125q0.796875 -0.3125 1.1875 -0.84375q0.390625 -0.53125 0.390625 -1.15625q0 -0.640625 -0.375 -1.109375q-0.375 -0.484375 -1.234375 -0.8125q-0.546875 -0.21875 -2.421875 -0.65625q-1.875 -0.453125 -2.625 -0.859375q-0.96875 -0.515625 -1.453125 -1.265625q-0.46875 -0.75 -0.46875 -1.6875q0 -1.03125 0.578125 -1.921875q0.59375 -0.90625 1.703125 -1.359375q1.125 -0.46875 2.5 -0.46875q1.515625 0 2.671875 0.484375q1.15625 0.484375 1.765625 1.4375q0.625 0.9375 0.671875 2.140625l-1.71875 0.125q-0.140625 -1.28125 -0.953125 -1.9375q-0.796875 
-0.671875 -2.359375 -0.671875q-1.625 0 -2.375 0.609375q-0.75 0.59375 -0.75 1.4375q0 0.734375 0.53125 1.203125q0.515625 0.46875 2.703125 0.96875q2.203125 0.5 3.015625 0.875q1.1875 0.546875 1.75 1.390625q0.578125 0.828125 0.578125 1.921875q0 1.09375 -0.625 2.0625q-0.625 0.953125 -1.796875 1.484375q-1.15625 0.53125 -2.609375 0.53125q-1.84375 0 -3.09375 -0.53125q-1.25 -0.546875 -1.96875 -1.625q-0.703125 -1.078125 -0.734375 -2.453125zm19.584229 1.203125l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm8.438171 2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 
0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm9.328125 0l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm10.015625 -8.75l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm3.5042114 -4.921875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm9.281982 4.921875l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 
0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0zm22.309021 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm8.000732 5.875l3.59375 -5.125l-3.328125 -4.734375l2.09375 0l1.515625 2.3125q0.421875 0.65625 0.671875 1.109375q0.421875 -0.609375 0.765625 -1.09375l1.65625 -2.328125l1.984375 0l-3.390625 4.640625l3.65625 5.21875l-2.046875 0l-2.03125 -3.0625l-0.53125 -0.828125l-2.59375 3.890625l-2.015625 0zm10.453125 -11.6875l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm3.4572754 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 
0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm13.65625 1.4375l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm0.8552246 -1.4375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm13.125 -0.40625q0 -0.34375 0 -0.5q0 -0.984375 0.265625 -1.703125q0.21875 -0.546875 0.671875 -1.09375q0.328125 -0.390625 1.1875 
-1.15625q0.875 -0.765625 1.125 -1.21875q0.265625 -0.453125 0.265625 -1.0q0 -0.96875 -0.765625 -1.703125q-0.75 -0.734375 -1.859375 -0.734375q-1.0625 0 -1.78125 0.671875q-0.703125 0.65625 -0.9375 2.078125l-1.71875 -0.203125q0.234375 -1.90625 1.375 -2.90625q1.15625 -1.015625 3.03125 -1.015625q2.0 0 3.1875 1.09375q1.1875 1.078125 1.1875 2.609375q0 0.890625 -0.421875 1.640625q-0.40625 0.75 -1.625 1.828125q-0.8125 0.734375 -1.0625 1.078125q-0.25 0.34375 -0.375 0.796875q-0.125 0.4375 -0.140625 1.4375l-1.609375 0zm-0.09375 3.34375l0 -1.90625l1.890625 0l0 1.90625l-1.890625 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m618.7218 154.43832l1.1968384 48.0" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m618.7218 154.43832l1.0472412 42.00186" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m618.11786 196.48135l1.7643433 4.495514l1.5380859 -4.5778503z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m896.65094 455.34122l2.3936768 43.653534" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m896.65094 455.34122l2.0651855 37.662506" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m897.06683 493.09418l1.8977661 4.440857l1.4007568 -4.621704z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m772.80054 281.8714l76.12598 1.669281" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m772.80054 281.8714l70.12744 1.5377502" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m842.8917 285.0605l4.573242 -1.5518494l-4.5007935 -1.750824z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m620.52234 360.3176l1.1968384 48.0" 
fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m620.52234 360.3176l1.0472412 42.00183" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m619.9184 402.36063l1.7643433 4.495514l1.5380859 -4.5778503z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m585.021 367.1076l58.80316 0l0 34.4252l-58.80316 0z" fill-rule="nonzero"></path><path fill="#000000" d="m595.4741 394.02762l0 -13.59375l1.84375 0l7.140625 10.671875l0 -10.671875l1.71875 0l0 13.59375l-1.84375 0l-7.140625 -10.6875l0 10.6875l-1.71875 0zm12.644836 -4.921875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m788.84515 248.8924l58.80316 0l0 34.4252l-58.80316 0z" fill-rule="nonzero"></path><path fill="#000000" d="m803.142 275.81238l0 -5.765625l-5.234375 -7.828125l2.1875 0l2.671875 4.09375q0.75 1.15625 1.390625 2.296875q0.609375 -1.0625 1.484375 -2.40625l2.625 -3.984375l2.109375 0l-5.4375 7.828125l0 5.765625l-1.796875 0zm15.1466675 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 
0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm8.438232 2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375z" fill-rule="nonzero"></path><path fill="#d9ead3" d="m845.084 442.14172l156.34644 0l0 88.59845l-156.34644 0z" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m845.084 442.14172l156.34644 0l0 88.59845l-156.34644 0z" fill-rule="nonzero"></path><path fill="#000000" d="m861.9121 477.0328l0 -1.609375l5.765625 0l0 5.046875q-1.328125 1.0625 -2.75 1.59375q-1.40625 0.53125 -2.890625 0.53125q-2.0 0 -3.640625 -0.859375q-1.625 -0.859375 -2.46875 -2.484375q-0.828125 -1.625 -0.828125 -3.625q0 -1.984375 0.828125 -3.703125q0.828125 -1.71875 2.390625 -2.546875q1.5625 -0.84375 3.59375 -0.84375q1.46875 0 2.65625 0.484375q1.203125 
0.46875 1.875 1.328125q0.671875 0.84375 1.03125 2.21875l-1.625 0.4375q-0.3125 -1.03125 -0.765625 -1.625q-0.453125 -0.59375 -1.296875 -0.953125q-0.84375 -0.359375 -1.875 -0.359375q-1.234375 0 -2.140625 0.375q-0.890625 0.375 -1.453125 1.0q-0.546875 0.609375 -0.84375 1.34375q-0.53125 1.25 -0.53125 2.734375q0 1.8125 0.625 3.046875q0.640625 1.21875 1.828125 1.8125q1.203125 0.59375 2.546875 0.59375q1.171875 0 2.28125 -0.453125q1.109375 -0.453125 1.6875 -0.953125l0 -2.53125l-4.0 0zm14.683289 2.15625l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm12.766357 4.375l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm6.694763 1.5l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm12.9782715 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 
-3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm8.438232 2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm9.375 -1.984375q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 
-0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm15.735107 4.921875l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm3.9069214 0l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm12.6658325 -3.609375l1.640625 0.21875q-0.265625 1.6875 -1.375 2.65625q-1.109375 0.953125 -2.734375 0.953125q-2.015625 0 -3.25 -1.3125q-1.21875 -1.328125 -1.21875 -3.796875q0 -1.59375 0.515625 -2.78125q0.53125 -1.203125 1.609375 -1.796875q1.09375 -0.609375 2.359375 -0.609375q1.609375 0 2.625 0.8125q1.015625 0.8125 1.3125 2.3125l-1.625 0.25q-0.234375 -1.0 -0.828125 -1.5q-0.59375 -0.5 -1.421875 -0.5q-1.265625 0 -2.0625 0.90625q-0.78125 0.90625 -0.78125 2.859375q0 1.984375 0.765625 2.890625q0.765625 0.890625 1.984375 0.890625q0.984375 0 1.640625 -0.59375q0.65625 -0.609375 0.84375 -1.859375zm9.640625 0.4375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 
-0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875z" fill-rule="nonzero"></path><path fill="#000000" d="m855.74023 504.36093l0 -8.546875l-1.484375 0l0 -1.3125l1.484375 0l0 -1.046875q0 -0.984375 0.171875 -1.46875q0.234375 -0.65625 0.84375 -1.046875q0.609375 -0.40625 1.703125 -0.40625q0.703125 0 1.5625 0.15625l-0.25 1.46875q-0.515625 -0.09375 -0.984375 -0.09375q-0.765625 0 -1.078125 0.328125q-0.3125 0.3125 -0.3125 1.203125l0 0.90625l1.921875 0l0 1.3125l-1.921875 0l0 8.546875l-1.65625 0zm4.7614136 0l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm5.6033325 -4.921875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm9.281982 4.921875l0 -9.859375l1.5 0l0 1.390625q0.453125 -0.71875 1.21875 -1.15625q0.78125 -0.453125 1.765625 -0.453125q1.09375 0 1.796875 0.453125q0.703125 0.453125 0.984375 1.28125q1.171875 -1.734375 3.046875 -1.734375q1.46875 0 2.25 0.8125q0.796875 0.8125 0.796875 2.5l0 6.765625l-1.671875 0l0 -6.203125q0 -1.0 -0.15625 -1.4375q-0.15625 -0.453125 -0.59375 -0.71875q-0.421875 -0.265625 -1.0 -0.265625q-1.03125 0 -1.71875 0.6875q-0.6875 0.6875 -0.6875 2.21875l0 5.71875l-1.671875 0l0 -6.40625q0 -1.109375 -0.40625 -1.65625q-0.40625 -0.5625 -1.34375 -0.5625q-0.703125 0 -1.3125 0.375q-0.59375 0.359375 
-0.859375 1.078125q-0.265625 0.71875 -0.265625 2.0625l0 5.109375l-1.671875 0zm19.44281 0l5.234375 -13.59375l1.9375 0l5.5625 13.59375l-2.046875 0l-1.59375 -4.125l-5.6875 0l-1.484375 4.125l-1.921875 0zm3.921875 -5.578125l4.609375 0l-1.40625 -3.78125q-0.65625 -1.703125 -0.96875 -2.8125q-0.265625 1.3125 -0.734375 2.59375l-1.5 4.0zm10.0217285 5.578125l0 -13.59375l5.125 0q1.359375 0 2.078125 0.125q1.0 0.171875 1.671875 0.640625q0.671875 0.46875 1.078125 1.3125q0.421875 0.84375 0.421875 1.84375q0 1.734375 -1.109375 2.9375q-1.09375 1.203125 -3.984375 1.203125l-3.484375 0l0 5.53125l-1.796875 0zm1.796875 -7.140625l3.515625 0q1.75 0 2.46875 -0.640625q0.734375 -0.65625 0.734375 -1.828125q0 -0.859375 -0.4375 -1.46875q-0.421875 -0.609375 -1.125 -0.796875q-0.453125 -0.125 -1.671875 -0.125l-3.484375 0l0 4.859375zm10.9435425 7.140625l0 -13.59375l1.8125 0l0 13.59375l-1.8125 0zm9.460388 -4.375l1.6875 -0.140625q0.125 1.015625 0.5625 1.671875q0.4375 0.65625 1.359375 1.0625q0.9375 0.40625 2.09375 0.40625q1.03125 0 1.8125 -0.3125q0.796875 -0.3125 1.1875 -0.84375q0.390625 -0.53125 0.390625 -1.15625q0 -0.640625 -0.375 -1.109375q-0.375 -0.484375 -1.234375 -0.8125q-0.546875 -0.21875 -2.421875 -0.65625q-1.875 -0.453125 -2.625 -0.859375q-0.96875 -0.515625 -1.453125 -1.265625q-0.46875 -0.75 -0.46875 -1.6875q0 -1.03125 0.578125 -1.921875q0.59375 -0.90625 1.703125 -1.359375q1.125 -0.46875 2.5 -0.46875q1.515625 0 2.671875 0.484375q1.15625 0.484375 1.765625 1.4375q0.625 0.9375 0.671875 2.140625l-1.71875 0.125q-0.140625 -1.28125 -0.953125 -1.9375q-0.796875 -0.671875 -2.359375 -0.671875q-1.625 0 -2.375 0.609375q-0.75 0.59375 -0.75 1.4375q0 0.734375 0.53125 1.203125q0.515625 0.46875 2.703125 0.96875q2.203125 0.5 3.015625 0.875q1.1875 0.546875 1.75 1.390625q0.578125 0.828125 0.578125 1.921875q0 1.09375 -0.625 2.0625q-0.625 0.953125 -1.796875 1.484375q-1.15625 0.53125 -2.609375 0.53125q-1.84375 0 -3.09375 -0.53125q-1.25 -0.546875 -1.96875 -1.625q-0.703125 -1.078125 -0.734375 -2.453125zm19.584167 
1.203125l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094482 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm8.9627075 0l-3.75 -9.859375l1.765625 0l2.125 5.90625q0.34375 0.953125 0.625 1.984375q0.21875 -0.78125 0.625 -1.875l2.1875 -6.015625l1.71875 0l-3.734375 9.859375l-1.5625 0zm13.34375 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094421 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 
0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m768.958 484.1076l76.12598 1.6693115" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m768.958 484.1076l70.12744 1.5377808" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m839.0492 487.2967l4.573242 -1.5518494l-4.5007935 -1.750824z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m785.0026 451.1286l58.80316 0l0 34.4252l-58.80316 0z" fill-rule="nonzero"></path><path fill="#000000" d="m799.2995 478.0486l0 -5.765625l-5.234375 -7.828125l2.1875 0l2.671875 4.09375q0.75 1.15625 1.390625 2.296875q0.609375 -1.0625 1.484375 -2.40625l2.625 -3.984375l2.109375 0l-5.4375 7.828125l0 5.765625l-1.796875 0zm15.1467285 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm8.438171 2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 
2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m1093.5826 486.44095l3.4645996 -377.88977" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m1093.5826 486.44095l3.4645996 -377.88977" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m1005.27295 284.2047l89.60632 1.6378174" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m1005.27295 284.2047l83.6073 1.5281677" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m1088.8501 287.38434l4.567505 -1.5685425l-4.507202 -1.734375z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m1099.9213 111.42519l-391.55908 -2.8661423" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m1099.9213 111.42519l-385.5592 -2.8222198" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m714.37415 106.95129l-4.550049 1.6184692l4.525879 1.684906z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m1001.4304 485.62204l89.60632 1.6378174" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m1001.4304 
485.62204l83.6073 1.5281372" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m1085.0076 488.80167l4.567505 -1.5685425l-4.50708 -1.734375z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m621.1207 558.91077l0.12597656 76.81891" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m621.1207 558.91077l0.1161499 70.81891" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m619.58514 629.73236l1.6591797 4.5354004l1.6442871 -4.5408325z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m579.0289 573.6352l47.338562 0l0 34.42517l-47.338562 0z" fill-rule="nonzero"></path><path fill="#000000" d="m589.482 600.5552l0 -13.59375l1.84375 0l7.140625 10.671875l0 -10.671875l1.71875 0l0 13.59375l-1.84375 0l-7.140625 -10.6875l0 10.6875l-1.71875 0zm12.644836 -4.921875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125z" fill-rule="nonzero"></path><path fill="#ead1dc" d="m545.084 634.39105l156.34644 0l0 70.26776l-156.34644 0z" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m545.084 634.39105l156.34644 0l0 70.26776l-156.34644 0z" fill-rule="nonzero"></path><path fill="#000000" d="m557.92773 654.44495l-3.609375 -13.59375l1.84375 0l2.0625 8.90625q0.34375 1.40625 0.578125 2.78125q0.515625 -2.171875 0.609375 
-2.515625l2.59375 -9.171875l2.171875 0l1.953125 6.875q0.734375 2.5625 1.046875 4.8125q0.265625 -1.28125 0.6875 -2.953125l2.125 -8.734375l1.8125 0l-3.734375 13.59375l-1.734375 0l-2.859375 -10.359375q-0.359375 -1.296875 -0.421875 -1.59375q-0.21875 0.9375 -0.40625 1.59375l-2.890625 10.359375l-1.828125 0zm21.764893 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.078857 5.875l0 -13.59375l1.671875 0l0 13.59375l-1.671875 0zm10.613586 -3.609375l1.640625 0.21875q-0.265625 1.6875 -1.375 2.65625q-1.109375 0.953125 -2.734375 0.953125q-2.015625 0 -3.25 -1.3125q-1.21875 -1.328125 -1.21875 -3.796875q0 -1.59375 0.515625 -2.78125q0.53125 -1.203125 1.609375 -1.796875q1.09375 -0.609375 2.359375 -0.609375q1.609375 0 2.625 0.8125q1.015625 0.8125 1.3125 2.3125l-1.625 0.25q-0.234375 -1.0 -0.828125 -1.5q-0.59375 -0.5 -1.421875 -0.5q-1.265625 0 -2.0625 0.90625q-0.78125 0.90625 -0.78125 2.859375q0 1.984375 0.765625 2.890625q0.765625 0.890625 1.984375 0.890625q0.984375 0 1.640625 -0.59375q0.65625 -0.609375 0.84375 -1.859375zm2.265625 -1.3125q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 
0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm9.281982 4.921875l0 -9.859375l1.5 0l0 1.390625q0.453125 -0.71875 1.21875 -1.15625q0.78125 -0.453125 1.765625 -0.453125q1.09375 0 1.796875 0.453125q0.703125 0.453125 0.984375 1.28125q1.171875 -1.734375 3.046875 -1.734375q1.46875 0 2.25 0.8125q0.796875 0.8125 0.796875 2.5l0 6.765625l-1.671875 0l0 -6.203125q0 -1.0 -0.15625 -1.4375q-0.15625 -0.453125 -0.59375 -0.71875q-0.421875 -0.265625 -1.0 -0.265625q-1.03125 0 -1.71875 0.6875q-0.6875 0.6875 -0.6875 2.21875l0 5.71875l-1.671875 0l0 -6.40625q0 -1.109375 -0.40625 -1.65625q-0.40625 -0.5625 -1.34375 -0.5625q-0.703125 0 -1.3125 0.375q-0.59375 0.359375 -0.859375 1.078125q-0.265625 0.71875 -0.265625 2.0625l0 5.109375l-1.671875 0zm22.290771 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm14.293396 9.65625l0 -13.640625l1.53125 0l0 1.28125q0.53125 -0.75 1.203125 -1.125q0.6875 -0.375 1.640625 -0.375q1.265625 0 2.234375 0.65625q0.96875 0.640625 1.453125 1.828125q0.5 1.1875 0.5 2.59375q0 1.515625 -0.546875 2.734375q-0.546875 1.203125 -1.578125 1.84375q-1.03125 0.640625 -2.171875 0.640625q-0.84375 0 -1.515625 -0.34375q-0.65625 -0.359375 -1.078125 -0.890625l0 4.796875l-1.671875 0zm1.515625 -8.65625q0 1.90625 0.765625 2.8125q0.78125 0.90625 1.875 0.90625q1.109375 0 1.890625 -0.9375q0.796875 -0.9375 0.796875 -2.921875q0 -1.875 
-0.78125 -2.8125q-0.765625 -0.9375 -1.84375 -0.9375q-1.0625 0 -1.890625 1.0q-0.8125 1.0 -0.8125 2.890625zm15.297607 3.65625q-0.9375 0.796875 -1.796875 1.125q-0.859375 0.3125 -1.84375 0.3125q-1.609375 0 -2.484375 -0.78125q-0.875 -0.796875 -0.875 -2.03125q0 -0.734375 0.328125 -1.328125q0.328125 -0.59375 0.859375 -0.953125q0.53125 -0.359375 1.203125 -0.546875q0.5 -0.140625 1.484375 -0.25q2.03125 -0.25 2.984375 -0.578125q0 -0.34375 0 -0.4375q0 -1.015625 -0.46875 -1.4375q-0.640625 -0.5625 -1.90625 -0.5625q-1.171875 0 -1.734375 0.40625q-0.5625 0.40625 -0.828125 1.46875l-1.640625 -0.234375q0.234375 -1.046875 0.734375 -1.6875q0.515625 -0.640625 1.46875 -0.984375q0.96875 -0.359375 2.25 -0.359375q1.265625 0 2.046875 0.296875q0.78125 0.296875 1.15625 0.75q0.375 0.453125 0.515625 1.140625q0.09375 0.421875 0.09375 1.53125l0 2.234375q0 2.328125 0.09375 2.953125q0.109375 0.609375 0.4375 1.171875l-1.75 0q-0.265625 -0.515625 -0.328125 -1.21875zm-0.140625 -3.71875q-0.90625 0.359375 -2.734375 0.625q-1.03125 0.140625 -1.453125 0.328125q-0.421875 0.1875 -0.65625 0.546875q-0.234375 0.359375 -0.234375 0.796875q0 0.671875 0.5 1.125q0.515625 0.4375 1.484375 0.4375q0.96875 0 1.71875 -0.421875q0.75 -0.4375 1.109375 -1.15625q0.265625 -0.578125 0.265625 -1.671875l0 -0.609375zm3.7819824 5.75l1.609375 0.25q0.109375 0.75 0.578125 1.09375q0.609375 0.453125 1.6875 0.453125q1.171875 0 1.796875 -0.46875q0.625 -0.453125 0.859375 -1.28125q0.125 -0.515625 0.109375 -2.15625q-1.09375 1.296875 -2.71875 1.296875q-2.03125 0 -3.15625 -1.46875q-1.109375 -1.46875 -1.109375 -3.515625q0 -1.40625 0.515625 -2.59375q0.515625 -1.203125 1.484375 -1.84375q0.96875 -0.65625 2.265625 -0.65625q1.75 0 2.875 1.40625l0 -1.1875l1.546875 0l0 8.515625q0 2.3125 -0.46875 3.265625q-0.46875 0.96875 -1.484375 1.515625q-1.015625 0.5625 -2.5 0.5625q-1.765625 0 -2.859375 -0.796875q-1.078125 -0.796875 -1.03125 -2.390625zm1.375 -5.921875q0 1.953125 0.765625 2.84375q0.78125 0.890625 1.9375 0.890625q1.140625 0 1.921875 -0.890625q0.78125 
-0.890625 0.78125 -2.78125q0 -1.8125 -0.8125 -2.71875q-0.796875 -0.921875 -1.921875 -0.921875q-1.109375 0 -1.890625 0.90625q-0.78125 0.890625 -0.78125 2.671875zm16.047546 1.9375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875z" fill-rule="nonzero"></path><path fill="#000000" d="m557.1621 676.44495l-3.015625 -9.859375l1.71875 0l1.5625 5.6875l0.59375 2.125q0.03125 -0.15625 0.5 -2.03125l1.578125 -5.78125l1.71875 0l1.46875 5.71875l0.484375 1.890625l0.578125 -1.90625l1.6875 -5.703125l1.625 0l-3.078125 9.859375l-1.734375 0l-1.578125 -5.90625l-0.375 -1.671875l-2.0 7.578125l-1.734375 0zm11.660461 -11.6875l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm7.7854614 -1.5l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm1.5270386 1.5l0 -13.59375l1.671875 0l0 4.875q1.171875 -1.359375 2.953125 -1.359375q1.09375 0 1.890625 0.4375q0.8125 0.421875 1.15625 1.1875q0.359375 0.765625 0.359375 2.203125l0 6.25l-1.671875 0l0 -6.25q0 -1.25 -0.546875 -1.8125q-0.546875 -0.578125 -1.53125 -0.578125q-0.75 0 -1.40625 0.390625q-0.640625 0.375 -0.921875 
1.046875q-0.28125 0.65625 -0.28125 1.8125l0 5.390625l-1.671875 0zm19.215271 -1.5l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm7.9645386 0.28125q-0.9375 0.796875 -1.796875 1.125q-0.859375 0.3125 -1.84375 0.3125q-1.609375 0 -2.484375 -0.78125q-0.875 -0.796875 -0.875 -2.03125q0 -0.734375 0.328125 -1.328125q0.328125 -0.59375 0.859375 -0.953125q0.53125 -0.359375 1.203125 -0.546875q0.5 -0.140625 1.484375 -0.25q2.03125 -0.25 2.984375 -0.578125q0 -0.34375 0 -0.4375q0 -1.015625 -0.46875 -1.4375q-0.640625 -0.5625 -1.90625 -0.5625q-1.171875 0 -1.734375 0.40625q-0.5625 0.40625 -0.828125 1.46875l-1.640625 -0.234375q0.234375 -1.046875 0.734375 -1.6875q0.515625 -0.640625 1.46875 -0.984375q0.96875 -0.359375 2.25 -0.359375q1.265625 0 2.046875 0.296875q0.78125 0.296875 1.15625 0.75q0.375 0.453125 0.515625 1.140625q0.09375 0.421875 0.09375 1.53125l0 2.234375q0 2.328125 0.09375 2.953125q0.109375 0.609375 0.4375 1.171875l-1.75 0q-0.265625 -0.515625 -0.328125 -1.21875zm-0.140625 -3.71875q-0.90625 0.359375 -2.734375 0.625q-1.03125 0.140625 -1.453125 0.328125q-0.421875 0.1875 -0.65625 0.546875q-0.234375 0.359375 -0.234375 0.796875q0 0.671875 0.5 1.125q0.515625 0.4375 1.484375 0.4375q0.96875 0 1.71875 -0.421875q0.75 -0.4375 1.109375 -1.15625q0.265625 -0.578125 0.265625 -1.671875l0 -0.609375zm4.0632324 4.9375l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm5.9313965 0.8125l1.609375 0.25q0.109375 
0.75 0.578125 1.09375q0.609375 0.453125 1.6875 0.453125q1.171875 0 1.796875 -0.46875q0.625 -0.453125 0.859375 -1.28125q0.125 -0.515625 0.109375 -2.15625q-1.09375 1.296875 -2.71875 1.296875q-2.03125 0 -3.15625 -1.46875q-1.109375 -1.46875 -1.109375 -3.515625q0 -1.40625 0.515625 -2.59375q0.515625 -1.203125 1.484375 -1.84375q0.96875 -0.65625 2.265625 -0.65625q1.75 0 2.875 1.40625l0 -1.1875l1.546875 0l0 8.515625q0 2.3125 -0.46875 3.265625q-0.46875 0.96875 -1.484375 1.515625q-1.015625 0.5625 -2.5 0.5625q-1.765625 0 -2.859375 -0.796875q-1.078125 -0.796875 -1.03125 -2.390625zm1.375 -5.921875q0 1.953125 0.765625 2.84375q0.78125 0.890625 1.9375 0.890625q1.140625 0 1.921875 -0.890625q0.78125 -0.890625 0.78125 -2.78125q0 -1.8125 -0.8125 -2.71875q-0.796875 -0.921875 -1.921875 -0.921875q-1.109375 0 -1.890625 0.90625q-0.78125 0.890625 -0.78125 2.671875zm16.047607 1.9375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm12.766357 4.375l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125z" fill-rule="nonzero"></path><path fill="#000000" d="m554.05273 698.44495l5.234375 -13.59375l1.9375 0l5.5625 
13.59375l-2.046875 0l-1.59375 -4.125l-5.6875 0l-1.484375 4.125l-1.921875 0zm3.921875 -5.578125l4.609375 0l-1.40625 -3.78125q-0.65625 -1.703125 -0.96875 -2.8125q-0.265625 1.3125 -0.734375 2.59375l-1.5 4.0zm10.0217285 5.578125l0 -13.59375l5.125 0q1.359375 0 2.078125 0.125q1.0 0.171875 1.671875 0.640625q0.671875 0.46875 1.078125 1.3125q0.421875 0.84375 0.421875 1.84375q0 1.734375 -1.109375 2.9375q-1.09375 1.203125 -3.984375 1.203125l-3.484375 0l0 5.53125l-1.796875 0zm1.796875 -7.140625l3.515625 0q1.75 0 2.46875 -0.640625q0.734375 -0.65625 0.734375 -1.828125q0 -0.859375 -0.4375 -1.46875q-0.421875 -0.609375 -1.125 -0.796875q-0.453125 -0.125 -1.671875 -0.125l-3.484375 0l0 4.859375zm10.9435425 7.140625l0 -13.59375l1.8125 0l0 13.59375l-1.8125 0zm8.601013 0.234375l3.9375 -14.0625l1.34375 0l-3.9375 14.0625l-1.34375 0zm11.585327 -0.234375l0 -13.59375l1.671875 0l0 13.59375l-1.671875 0zm3.5510864 -4.921875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm8.985107 5.734375l1.609375 0.25q0.109375 0.75 0.578125 1.09375q0.609375 0.453125 1.6875 0.453125q1.171875 0 1.796875 -0.46875q0.625 -0.453125 0.859375 -1.28125q0.125 -0.515625 0.109375 -2.15625q-1.09375 1.296875 -2.71875 1.296875q-2.03125 0 -3.15625 -1.46875q-1.109375 -1.46875 -1.109375 -3.515625q0 -1.40625 0.515625 -2.59375q0.515625 -1.203125 1.484375 -1.84375q0.96875 -0.65625 2.265625 -0.65625q1.75 0 2.875 1.40625l0 -1.1875l1.546875 0l0 8.515625q0 2.3125 -0.46875 3.265625q-0.46875 0.96875 -1.484375 1.515625q-1.015625 0.5625 -2.5 0.5625q-1.765625 
0 -2.859375 -0.796875q-1.078125 -0.796875 -1.03125 -2.390625zm1.375 -5.921875q0 1.953125 0.765625 2.84375q0.78125 0.890625 1.9375 0.890625q1.140625 0 1.921875 -0.890625q0.78125 -0.890625 0.78125 -2.78125q0 -1.8125 -0.8125 -2.71875q-0.796875 -0.921875 -1.921875 -0.921875q-1.109375 0 -1.890625 0.90625q-0.78125 0.890625 -0.78125 2.671875zm9.313171 -6.578125l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm4.1292114 0l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m603.2966 782.2992l2.3937378 43.653564" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m603.2966 782.2992l2.0652466 37.662598" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m603.7125 820.0522l1.8977051 4.440857l1.4008179 -4.621704z" fill-rule="evenodd"></path><path fill="#bf9000" d="m512.5171 813.52496l114.74011 -60.960632l114.74017 60.960632l-114.74017 60.96057z" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m512.5171 813.52496l114.74011 -60.960632l114.74017 60.960632l-114.74017 60.96057z" fill-rule="nonzero"></path><path fill="#000000" d="m605.663 816.06995l1.6875 -0.140625q0.125 1.015625 0.5625 1.671875q0.4375 0.65625 1.359375 1.0625q0.9375 0.40625 2.09375 0.40625q1.03125 0 1.8125 -0.3125q0.796875 -0.3125 1.1875 -0.84375q0.390625 -0.53125 0.390625 -1.15625q0 -0.640625 -0.375 -1.109375q-0.375 -0.484375 -1.234375 -0.8125q-0.546875 -0.21875 
-2.421875 -0.65625q-1.875 -0.453125 -2.625 -0.859375q-0.96875 -0.515625 -1.453125 -1.265625q-0.46875 -0.75 -0.46875 -1.6875q0 -1.03125 0.578125 -1.921875q0.59375 -0.90625 1.703125 -1.359375q1.125 -0.46875 2.5 -0.46875q1.515625 0 2.671875 0.484375q1.15625 0.484375 1.765625 1.4375q0.625 0.9375 0.671875 2.140625l-1.71875 0.125q-0.140625 -1.28125 -0.953125 -1.9375q-0.796875 -0.671875 -2.359375 -0.671875q-1.625 0 -2.375 0.609375q-0.75 0.59375 -0.75 1.4375q0 0.734375 0.53125 1.203125q0.515625 0.46875 2.703125 0.96875q2.203125 0.5 3.015625 0.875q1.1875 0.546875 1.75 1.390625q0.578125 0.828125 0.578125 1.921875q0 1.09375 -0.625 2.0625q-0.625 0.953125 -1.796875 1.484375q-1.15625 0.53125 -2.609375 0.53125q-1.84375 0 -3.09375 -0.53125q-1.25 -0.546875 -1.96875 -1.625q-0.703125 -1.078125 -0.734375 -2.453125zm12.4436035 0l1.6875 -0.140625q0.125 1.015625 0.5625 1.671875q0.4375 0.65625 1.359375 1.0625q0.9375 0.40625 2.09375 0.40625q1.03125 0 1.8125 -0.3125q0.796875 -0.3125 1.1875 -0.84375q0.390625 -0.53125 0.390625 -1.15625q0 -0.640625 -0.375 -1.109375q-0.375 -0.484375 -1.234375 -0.8125q-0.546875 -0.21875 -2.421875 -0.65625q-1.875 -0.453125 -2.625 -0.859375q-0.96875 -0.515625 -1.453125 -1.265625q-0.46875 -0.75 -0.46875 -1.6875q0 -1.03125 0.578125 -1.921875q0.59375 -0.90625 1.703125 -1.359375q1.125 -0.46875 2.5 -0.46875q1.515625 0 2.671875 0.484375q1.15625 0.484375 1.765625 1.4375q0.625 0.9375 0.671875 2.140625l-1.71875 0.125q-0.140625 -1.28125 -0.953125 -1.9375q-0.796875 -0.671875 -2.359375 -0.671875q-1.625 0 -2.375 0.609375q-0.75 0.59375 -0.75 1.4375q0 0.734375 0.53125 1.203125q0.515625 0.46875 2.703125 0.96875q2.203125 0.5 3.015625 0.875q1.1875 0.546875 1.75 1.390625q0.578125 0.828125 0.578125 1.921875q0 1.09375 -0.625 2.0625q-0.625 0.953125 -1.796875 1.484375q-1.15625 0.53125 -2.609375 0.53125q-1.84375 0 -3.09375 -0.53125q-1.25 -0.546875 -1.96875 -1.625q-0.703125 -1.078125 -0.734375 -2.453125zm12.5060425 -2.25q0 -3.390625 1.8125 -5.296875q1.828125 -1.921875 4.703125 
-1.921875q1.875 0 3.390625 0.90625q1.515625 0.890625 2.296875 2.5q0.796875 1.609375 0.796875 3.65625q0 2.0625 -0.84375 3.703125q-0.828125 1.625 -2.359375 2.46875q-1.53125 0.84375 -3.296875 0.84375q-1.921875 0 -3.4375 -0.921875q-1.5 -0.9375 -2.28125 -2.53125q-0.78125 -1.609375 -0.78125 -3.40625zm1.859375 0.03125q0 2.453125 1.3125 3.875q1.328125 1.40625 3.3125 1.40625q2.03125 0 3.34375 -1.421875q1.3125 -1.4375 1.3125 -4.0625q0 -1.65625 -0.5625 -2.890625q-0.546875 -1.234375 -1.640625 -1.921875q-1.078125 -0.6875 -2.421875 -0.6875q-1.90625 0 -3.28125 1.3125q-1.375 1.3125 -1.375 4.390625z" fill-rule="nonzero"></path><path fill="#f1c232" d="m677.6772 941.51184l179.27557 0l0 94.64563l-179.27557 0z" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m677.6772 941.51184l179.27557 0l0 94.64563l-179.27557 0z" fill-rule="nonzero"></path><path fill="#000000" d="m725.6051 990.4265l0 -1.609375l5.765625 0l0 5.046875q-1.328125 1.0625 -2.75 1.59375q-1.40625 0.53125 -2.890625 0.53125q-2.0 0 -3.640625 -0.859375q-1.625 -0.859375 -2.46875 -2.484375q-0.828125 -1.625 -0.828125 -3.625q0 -1.984375 0.828125 -3.703125q0.828125 -1.71875 2.390625 -2.546875q1.5625 -0.84375 3.59375 -0.84375q1.46875 0 2.65625 0.484375q1.203125 0.46875 1.875 1.328125q0.671875 0.84375 1.03125 2.21875l-1.625 0.4375q-0.3125 -1.03125 -0.765625 -1.625q-0.453125 -0.59375 -1.296875 -0.953125q-0.84375 -0.359375 -1.875 -0.359375q-1.234375 0 -2.140625 0.375q-0.890625 0.375 -1.453125 1.0q-0.546875 0.609375 -0.84375 1.34375q-0.53125 1.25 -0.53125 2.734375q0 1.8125 0.625 3.046875q0.640625 1.21875 1.828125 1.8125q1.203125 0.59375 2.546875 0.59375q1.171875 0 2.28125 -0.453125q1.109375 -0.453125 1.6875 -0.953125l0 -2.53125l-4.0 0zm7.9332886 5.328125l0 -9.859375l1.5 0l0 1.390625q0.453125 -0.71875 1.21875 -1.15625q0.78125 -0.453125 1.765625 -0.453125q1.09375 0 1.796875 0.453125q0.703125 0.453125 0.984375 1.28125q1.171875 -1.734375 3.046875 -1.734375q1.46875 0 2.25 
0.8125q0.796875 0.8125 0.796875 2.5l0 6.765625l-1.671875 0l0 -6.203125q0 -1.0 -0.15625 -1.4375q-0.15625 -0.453125 -0.59375 -0.71875q-0.421875 -0.265625 -1.0 -0.265625q-1.03125 0 -1.71875 0.6875q-0.6875 0.6875 -0.6875 2.21875l0 5.71875l-1.671875 0l0 -6.40625q0 -1.109375 -0.40625 -1.65625q-0.40625 -0.5625 -1.34375 -0.5625q-0.703125 0 -1.3125 0.375q-0.59375 0.359375 -0.859375 1.078125q-0.265625 0.71875 -0.265625 2.0625l0 5.109375l-1.671875 0zm21.978333 -1.21875q-0.9375 0.796875 -1.796875 1.125q-0.859375 0.3125 -1.84375 0.3125q-1.609375 0 -2.484375 -0.78125q-0.875 -0.796875 -0.875 -2.03125q0 -0.734375 0.328125 -1.328125q0.328125 -0.59375 0.859375 -0.953125q0.53125 -0.359375 1.203125 -0.546875q0.5 -0.140625 1.484375 -0.25q2.03125 -0.25 2.984375 -0.578125q0 -0.34375 0 -0.4375q0 -1.015625 -0.46875 -1.4375q-0.640625 -0.5625 -1.90625 -0.5625q-1.171875 0 -1.734375 0.40625q-0.5625 0.40625 -0.828125 1.46875l-1.640625 -0.234375q0.234375 -1.046875 0.734375 -1.6875q0.515625 -0.640625 1.46875 -0.984375q0.96875 -0.359375 2.25 -0.359375q1.265625 0 2.046875 0.296875q0.78125 0.296875 1.15625 0.75q0.375 0.453125 0.515625 1.140625q0.09375 0.421875 0.09375 1.53125l0 2.234375q0 2.328125 0.09375 2.953125q0.109375 0.609375 0.4375 1.171875l-1.75 0q-0.265625 -0.515625 -0.328125 -1.21875zm-0.140625 -3.71875q-0.90625 0.359375 -2.734375 0.625q-1.03125 0.140625 -1.453125 0.328125q-0.421875 0.1875 -0.65625 0.546875q-0.234375 0.359375 -0.234375 0.796875q0 0.671875 0.5 1.125q0.515625 0.4375 1.484375 0.4375q0.96875 0 1.71875 -0.421875q0.75 -0.4375 1.109375 -1.15625q0.265625 -0.578125 0.265625 -1.671875l0 -0.609375zm4.0944824 -6.75l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm4.0979004 0l0 -13.59375l1.671875 0l0 13.59375l-1.671875 0zm15.796875 -1.21875q-0.9375 0.796875 -1.796875 1.125q-0.859375 0.3125 -1.84375 0.3125q-1.609375 0 -2.484375 -0.78125q-0.875 -0.796875 -0.875 -2.03125q0 -0.734375 0.328125 -1.328125q0.328125 -0.59375 0.859375 
-0.953125q0.53125 -0.359375 1.203125 -0.546875q0.5 -0.140625 1.484375 -0.25q2.03125 -0.25 2.984375 -0.578125q0 -0.34375 0 -0.4375q0 -1.015625 -0.46875 -1.4375q-0.640625 -0.5625 -1.90625 -0.5625q-1.171875 0 -1.734375 0.40625q-0.5625 0.40625 -0.828125 1.46875l-1.640625 -0.234375q0.234375 -1.046875 0.734375 -1.6875q0.515625 -0.640625 1.46875 -0.984375q0.96875 -0.359375 2.25 -0.359375q1.265625 0 2.046875 0.296875q0.78125 0.296875 1.15625 0.75q0.375 0.453125 0.515625 1.140625q0.09375 0.421875 0.09375 1.53125l0 2.234375q0 2.328125 0.09375 2.953125q0.109375 0.609375 0.4375 1.171875l-1.75 0q-0.265625 -0.515625 -0.328125 -1.21875zm-0.140625 -3.71875q-0.90625 0.359375 -2.734375 0.625q-1.03125 0.140625 -1.453125 0.328125q-0.421875 0.1875 -0.65625 0.546875q-0.234375 0.359375 -0.234375 0.796875q0 0.671875 0.5 1.125q0.515625 0.4375 1.484375 0.4375q0.96875 0 1.71875 -0.421875q0.75 -0.4375 1.109375 -1.15625q0.265625 -0.578125 0.265625 -1.671875l0 -0.609375zm10.531982 4.9375l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm7.5788574 -1.5l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm1.5270386 1.5l0 -13.59375l1.671875 0l0 4.875q1.171875 -1.359375 2.953125 -1.359375q1.09375 0 1.890625 0.4375q0.8125 0.421875 1.15625 1.1875q0.359375 0.765625 0.359375 2.203125l0 6.25l-1.671875 
0l0 -6.25q0 -1.25 -0.546875 -1.8125q-0.546875 -0.578125 -1.53125 -0.578125q-0.75 0 -1.40625 0.390625q-0.640625 0.375 -0.921875 1.046875q-0.28125 0.65625 -0.28125 1.8125l0 5.390625l-1.671875 0z" fill-rule="nonzero"></path><path fill="#ffd966" d="m400.60892 941.51184l179.2756 0l0 94.64563l-179.2756 0z" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m400.60892 941.51184l179.2756 0l0 94.64563l-179.2756 0z" fill-rule="nonzero"></path><path fill="#000000" d="m422.49536 995.75464l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm3.250702 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 
-0.796875q-1.0 -0.796875 -1.28125 -2.359375zm16.75 -0.234375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094482 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm6.228302 0l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0zm16.813202 -1.21875q-0.9375 0.796875 -1.796875 1.125q-0.859375 0.3125 -1.84375 0.3125q-1.609375 0 -2.484375 -0.78125q-0.875 -0.796875 -0.875 -2.03125q0 -0.734375 0.328125 -1.328125q0.328125 -0.59375 0.859375 -0.953125q0.53125 -0.359375 1.203125 -0.546875q0.5 -0.140625 1.484375 -0.25q2.03125 -0.25 2.984375 -0.578125q0 -0.34375 0 -0.4375q0 -1.015625 -0.46875 -1.4375q-0.640625 -0.5625 -1.90625 -0.5625q-1.171875 0 -1.734375 0.40625q-0.5625 0.40625 -0.828125 1.46875l-1.640625 -0.234375q0.234375 -1.046875 0.734375 -1.6875q0.515625 -0.640625 1.46875 -0.984375q0.96875 -0.359375 2.25 -0.359375q1.265625 0 
2.046875 0.296875q0.78125 0.296875 1.15625 0.75q0.375 0.453125 0.515625 1.140625q0.09375 0.421875 0.09375 1.53125l0 2.234375q0 2.328125 0.09375 2.953125q0.109375 0.609375 0.4375 1.171875l-1.75 0q-0.265625 -0.515625 -0.328125 -1.21875zm-0.140625 -3.71875q-0.90625 0.359375 -2.734375 0.625q-1.03125 0.140625 -1.453125 0.328125q-0.421875 0.1875 -0.65625 0.546875q-0.234375 0.359375 -0.234375 0.796875q0 0.671875 0.5 1.125q0.515625 0.4375 1.484375 0.4375q0.96875 0 1.71875 -0.421875q0.75 -0.4375 1.109375 -1.15625q0.265625 -0.578125 0.265625 -1.671875l0 -0.609375zm4.0788574 4.9375l0 -9.859375l1.5 0l0 1.390625q0.453125 -0.71875 1.21875 -1.15625q0.78125 -0.453125 1.765625 -0.453125q1.09375 0 1.796875 0.453125q0.703125 0.453125 0.984375 1.28125q1.171875 -1.734375 3.046875 -1.734375q1.46875 0 2.25 0.8125q0.796875 0.8125 0.796875 2.5l0 6.765625l-1.671875 0l0 -6.203125q0 -1.0 -0.15625 -1.4375q-0.15625 -0.453125 -0.59375 -0.71875q-0.421875 -0.265625 -1.0 -0.265625q-1.03125 0 -1.71875 0.6875q-0.6875 0.6875 -0.6875 2.21875l0 5.71875l-1.671875 0l0 -6.40625q0 -1.109375 -0.40625 -1.65625q-0.40625 -0.5625 -1.34375 -0.5625q-0.703125 0 -1.3125 0.375q-0.59375 0.359375 -0.859375 1.078125q-0.265625 0.71875 -0.265625 2.0625l0 5.109375l-1.671875 0zm22.290802 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm13.043396 6.109375l3.9375 -14.0625l1.34375 0l-3.9375 14.0625l-1.34375 0zm11.616577 3.546875l0 -13.640625l1.53125 0l0 1.28125q0.53125 
-0.75 1.203125 -1.125q0.6875 -0.375 1.640625 -0.375q1.265625 0 2.234375 0.65625q0.96875 0.640625 1.453125 1.828125q0.5 1.1875 0.5 2.59375q0 1.515625 -0.546875 2.734375q-0.546875 1.203125 -1.578125 1.84375q-1.03125 0.640625 -2.171875 0.640625q-0.84375 0 -1.515625 -0.34375q-0.65625 -0.359375 -1.078125 -0.890625l0 4.796875l-1.671875 0zm1.515625 -8.65625q0 1.90625 0.765625 2.8125q0.78125 0.90625 1.875 0.90625q1.109375 0 1.890625 -0.9375q0.796875 -0.9375 0.796875 -2.921875q0 -1.875 -0.78125 -2.8125q-0.765625 -0.9375 -1.84375 -0.9375q-1.0625 0 -1.890625 1.0q-0.8125 1.0 -0.8125 2.890625zm8.188232 1.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm11.828125 2.9375l-3.015625 -9.859375l1.71875 0l1.5625 5.6875l0.59375 2.125q0.03125 -0.15625 0.5 -2.03125l1.578125 -5.78125l1.71875 0l1.46875 5.71875l0.484375 1.890625l0.578125 -1.90625l1.6875 -5.703125l1.625 0l-3.078125 9.859375l-1.734375 0l-1.578125 -5.90625l-0.375 -1.671875l-2.0 7.578125l-1.734375 0zm18.035461 0l0 -1.25q-0.9375 1.46875 -2.75 
1.46875q-1.171875 0 -2.171875 -0.640625q-0.984375 -0.65625 -1.53125 -1.8125q-0.53125 -1.171875 -0.53125 -2.6875q0 -1.46875 0.484375 -2.671875q0.5 -1.203125 1.46875 -1.84375q0.984375 -0.640625 2.203125 -0.640625q0.890625 0 1.578125 0.375q0.703125 0.375 1.140625 0.984375l0 -4.875l1.65625 0l0 13.59375l-1.546875 0zm-5.28125 -4.921875q0 1.890625 0.796875 2.828125q0.8125 0.9375 1.890625 0.9375q1.09375 0 1.859375 -0.890625q0.765625 -0.890625 0.765625 -2.734375q0 -2.015625 -0.78125 -2.953125q-0.78125 -0.953125 -1.921875 -0.953125q-1.109375 0 -1.859375 0.90625q-0.75 0.90625 -0.75 2.859375z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m627.2572 874.48553l0 33.513184l-137.00787 0l0 33.510498" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m627.2572 874.48553l0 33.513123l-137.00787 0l0 30.083435" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m490.24933 938.0821l-1.1245728 -1.1245728l1.1245728 3.0897827l1.1246033 -3.0897827z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m627.2572 874.48553l0 33.513184l140.06299 0l0 33.510498" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m627.2572 874.48553l0 33.513123l140.06299 0l0 30.083435" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m767.3202 938.0821l-1.1245728 -1.1245728l1.1245728 3.0897827l1.1245728 -3.0897827z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m733.7454 1068.1392l137.00787 0l0 48.0l-137.00787 0z" fill-rule="nonzero"></path><path fill="#000000" d="m742.7142 1095.0591l5.234375 -13.59375l1.9375 0l5.5625 13.59375l-2.046875 0l-1.59375 -4.125l-5.6875 0l-1.484375 4.125l-1.921875 0zm3.921875 -5.578125l4.609375 0l-1.40625 -3.78125q-0.65625 -1.703125 -0.96875 -2.8125q-0.265625 1.3125 -0.734375 
2.59375l-1.5 4.0zm16.256042 5.578125l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm7.5788574 -1.5l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm1.5270386 1.5l0 -13.59375l1.671875 0l0 4.875q1.171875 -1.359375 2.953125 -1.359375q1.09375 0 1.890625 0.4375q0.8125 0.421875 1.15625 1.1875q0.359375 0.765625 0.359375 2.203125l0 6.25l-1.671875 0l0 -6.25q0 -1.25 -0.546875 -1.8125q-0.546875 -0.578125 -1.53125 -0.578125q-0.75 0 -1.40625 0.390625q-0.640625 0.375 -0.921875 1.046875q-0.28125 0.65625 -0.28125 1.8125l0 5.390625l-1.671875 0zm19.215271 -1.5l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm0.9020386 -3.421875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 
2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm9.297607 4.921875l0 -13.59375l1.671875 0l0 7.75l3.953125 -4.015625l2.15625 0l-3.765625 3.65625l4.140625 6.203125l-2.0625 0l-3.25 -5.03125l-1.171875 1.125l0 3.90625l-1.671875 0zm16.0625 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.110107 5.875l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m1027.8976 907.0079l229.48035 0l0 94.64569l-229.48035 0z" fill-rule="nonzero"></path><path fill="#000000" d="m1038.3976 933.92786l0 -13.59375l6.03125 0q1.8125 0 2.75 0.359375q0.953125 0.359375 1.515625 1.296875q0.5625 0.921875 0.5625 2.046875q0 1.453125 -0.9375 2.453125q-0.921875 0.984375 -2.890625 1.25q0.71875 0.34375 1.09375 0.671875q0.78125 0.734375 1.484375 1.8125l2.375 3.703125l-2.265625 0l-1.796875 -2.828125q-0.796875 -1.21875 -1.3125 -1.875q-0.5 -0.65625 -0.90625 
-0.90625q-0.40625 -0.265625 -0.8125 -0.359375q-0.3125 -0.078125 -1.015625 -0.078125l-2.078125 0l0 6.046875l-1.796875 0zm1.796875 -7.59375l3.859375 0q1.234375 0 1.921875 -0.25q0.703125 -0.265625 1.0625 -0.828125q0.375 -0.5625 0.375 -1.21875q0 -0.96875 -0.703125 -1.578125q-0.703125 -0.625 -2.21875 -0.625l-4.296875 0l0 4.5zm18.176147 4.421875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm15.500732 5.875l0 -1.25q-0.9375 1.46875 -2.75 1.46875q-1.171875 0 -2.171875 -0.640625q-0.984375 -0.65625 -1.53125 -1.8125q-0.53125 -1.171875 -0.53125 -2.6875q0 -1.46875 0.484375 -2.671875q0.5 -1.203125 1.46875 -1.84375q0.984375 -0.640625 2.203125 -0.640625q0.890625 0 1.578125 0.375q0.703125 0.375 1.140625 0.984375l0 -4.875l1.65625 0l0 13.59375l-1.546875 0zm-5.28125 -4.921875q0 1.890625 0.796875 2.828125q0.8125 0.9375 1.890625 0.9375q1.09375 0 1.859375 -0.890625q0.765625 -0.890625 0.765625 -2.734375q0 -2.015625 -0.78125 -2.953125q-0.78125 -0.953125 -1.921875 -0.953125q-1.109375 0 -1.859375 0.90625q-0.75 0.90625 -0.75 2.859375zm9.281982 -6.765625l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm4.1135254 0l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 
1.953125l0 5.15625l-1.671875 0zm12.9782715 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm15.547607 2.265625l1.640625 0.21875q-0.265625 1.6875 -1.375 2.65625q-1.109375 0.953125 -2.734375 0.953125q-2.015625 0 -3.25 -1.3125q-1.21875 -1.328125 -1.21875 -3.796875q0 -1.59375 0.515625 -2.78125q0.53125 -1.203125 1.609375 -1.796875q1.09375 -0.609375 2.359375 -0.609375q1.609375 0 2.625 0.8125q1.015625 0.8125 1.3125 2.3125l-1.625 0.25q-0.234375 -1.0 -0.828125 -1.5q-0.59375 -0.5 -1.421875 -0.5q-1.265625 0 -2.0625 0.90625q-0.78125 0.90625 -0.78125 2.859375q0 1.984375 0.765625 2.890625q0.765625 0.890625 1.984375 0.890625q0.984375 0 1.640625 -0.59375q0.65625 -0.609375 0.84375 -1.859375zm6.546875 2.109375l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm10.366577 0l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 
0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm0.9020996 -3.421875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm13.18396 4.921875l5.234375 -13.59375l1.9375 0l5.5625 13.59375l-2.046875 0l-1.59375 -4.125l-5.6875 0l-1.484375 4.125l-1.921875 0zm3.921875 -5.578125l4.609375 0l-1.40625 -3.78125q-0.65625 -1.703125 -0.96875 -2.8125q-0.265625 1.3125 -0.734375 2.59375l-1.5 4.0zm10.0217285 5.578125l0 -13.59375l5.125 0q1.359375 0 2.078125 0.125q1.0 0.171875 1.671875 0.640625q0.671875 0.46875 1.078125 1.3125q0.421875 0.84375 0.421875 1.84375q0 1.734375 -1.109375 2.9375q-1.09375 1.203125 -3.984375 1.203125l-3.484375 0l0 5.53125l-1.796875 0zm1.796875 -7.140625l3.515625 0q1.75 0 2.46875 -0.640625q0.734375 -0.65625 0.734375 -1.828125q0 -0.859375 -0.4375 -1.46875q-0.421875 -0.609375 -1.125 -0.796875q-0.453125 -0.125 -1.671875 -0.125l-3.484375 0l0 4.859375zm10.9436035 7.140625l0 -13.59375l1.8125 0l0 13.59375l-1.8125 0z" fill-rule="nonzero"></path><path fill="#000000" d="m1037.757 951.55286l1.6875 -0.140625q0.125 1.015625 0.5625 1.671875q0.4375 0.65625 1.359375 1.0625q0.9375 0.40625 2.09375 0.40625q1.03125 0 1.8125 -0.3125q0.796875 -0.3125 1.1875 -0.84375q0.390625 -0.53125 0.390625 -1.15625q0 -0.640625 -0.375 -1.109375q-0.375 -0.484375 -1.234375 -0.8125q-0.546875 -0.21875 -2.421875 -0.65625q-1.875 -0.453125 -2.625 -0.859375q-0.96875 -0.515625 -1.453125 -1.265625q-0.46875 -0.75 -0.46875 -1.6875q0 -1.03125 0.578125 -1.921875q0.59375 -0.90625 1.703125 -1.359375q1.125 -0.46875 2.5 
-0.46875q1.515625 0 2.671875 0.484375q1.15625 0.484375 1.765625 1.4375q0.625 0.9375 0.671875 2.140625l-1.71875 0.125q-0.140625 -1.28125 -0.953125 -1.9375q-0.796875 -0.671875 -2.359375 -0.671875q-1.625 0 -2.375 0.609375q-0.75 0.59375 -0.75 1.4375q0 0.734375 0.53125 1.203125q0.515625 0.46875 2.703125 0.96875q2.203125 0.5 3.015625 0.875q1.1875 0.546875 1.75 1.390625q0.578125 0.828125 0.578125 1.921875q0 1.09375 -0.625 2.0625q-0.625 0.953125 -1.796875 1.484375q-1.15625 0.53125 -2.609375 0.53125q-1.84375 0 -3.09375 -0.53125q-1.25 -0.546875 -1.96875 -1.625q-0.703125 -1.078125 -0.734375 -2.453125zm19.584229 1.203125l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094482 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm8.9626465 0l-3.75 -9.859375l1.765625 0l2.125 5.90625q0.34375 0.953125 0.625 1.984375q0.21875 -0.78125 0.625 -1.875l2.1875 -6.015625l1.71875 0l-3.734375 9.859375l-1.5625 0zm13.34375 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 
3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094482 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm18.423096 0l-3.015625 -9.859375l1.71875 0l1.5625 5.6875l0.59375 2.125q0.03125 -0.15625 0.5 -2.03125l1.578125 -5.78125l1.71875 0l1.46875 5.71875l0.484375 1.890625l0.578125 -1.90625l1.6875 -5.703125l1.625 0l-3.078125 9.859375l-1.734375 0l-1.578125 -5.90625l-0.375 -1.671875l-2.0 7.578125l-1.734375 0zm11.6604 -11.6875l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm7.7854004 -1.5l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm1.5270996 1.5l0 -13.59375l1.671875 0l0 4.875q1.171875 -1.359375 2.953125 -1.359375q1.09375 0 1.890625 0.4375q0.8125 0.421875 1.15625 1.1875q0.359375 0.765625 0.359375 2.203125l0 6.25l-1.671875 0l0 -6.25q0 -1.25 -0.546875 -1.8125q-0.546875 -0.578125 -1.53125 -0.578125q-0.75 0 -1.40625 0.390625q-0.640625 0.375 -0.921875 1.046875q-0.28125 0.65625 -0.28125 1.8125l0 5.390625l-1.671875 0z" fill-rule="nonzero"></path><path fill="#000000" d="m1037.757 973.55286l1.6875 -0.140625q0.125 1.015625 0.5625 
1.671875q0.4375 0.65625 1.359375 1.0625q0.9375 0.40625 2.09375 0.40625q1.03125 0 1.8125 -0.3125q0.796875 -0.3125 1.1875 -0.84375q0.390625 -0.53125 0.390625 -1.15625q0 -0.640625 -0.375 -1.109375q-0.375 -0.484375 -1.234375 -0.8125q-0.546875 -0.21875 -2.421875 -0.65625q-1.875 -0.453125 -2.625 -0.859375q-0.96875 -0.515625 -1.453125 -1.265625q-0.46875 -0.75 -0.46875 -1.6875q0 -1.03125 0.578125 -1.921875q0.59375 -0.90625 1.703125 -1.359375q1.125 -0.46875 2.5 -0.46875q1.515625 0 2.671875 0.484375q1.15625 0.484375 1.765625 1.4375q0.625 0.9375 0.671875 2.140625l-1.71875 0.125q-0.140625 -1.28125 -0.953125 -1.9375q-0.796875 -0.671875 -2.359375 -0.671875q-1.625 0 -2.375 0.609375q-0.75 0.59375 -0.75 1.4375q0 0.734375 0.53125 1.203125q0.515625 0.46875 2.703125 0.96875q2.203125 0.5 3.015625 0.875q1.1875 0.546875 1.75 1.390625q0.578125 0.828125 0.578125 1.921875q0 1.09375 -0.625 2.0625q-0.625 0.953125 -1.796875 1.484375q-1.15625 0.53125 -2.609375 0.53125q-1.84375 0 -3.09375 -0.53125q-1.25 -0.546875 -1.96875 -1.625q-0.703125 -1.078125 -0.734375 -2.453125zm12.4436035 0l1.6875 -0.140625q0.125 1.015625 0.5625 1.671875q0.4375 0.65625 1.359375 1.0625q0.9375 0.40625 2.09375 0.40625q1.03125 0 1.8125 -0.3125q0.796875 -0.3125 1.1875 -0.84375q0.390625 -0.53125 0.390625 -1.15625q0 -0.640625 -0.375 -1.109375q-0.375 -0.484375 -1.234375 -0.8125q-0.546875 -0.21875 -2.421875 -0.65625q-1.875 -0.453125 -2.625 -0.859375q-0.96875 -0.515625 -1.453125 -1.265625q-0.46875 -0.75 -0.46875 -1.6875q0 -1.03125 0.578125 -1.921875q0.59375 -0.90625 1.703125 -1.359375q1.125 -0.46875 2.5 -0.46875q1.515625 0 2.671875 0.484375q1.15625 0.484375 1.765625 1.4375q0.625 0.9375 0.671875 2.140625l-1.71875 0.125q-0.140625 -1.28125 -0.953125 -1.9375q-0.796875 -0.671875 -2.359375 -0.671875q-1.625 0 -2.375 0.609375q-0.75 0.59375 -0.75 1.4375q0 0.734375 0.53125 1.203125q0.515625 0.46875 2.703125 0.96875q2.203125 0.5 3.015625 0.875q1.1875 0.546875 1.75 1.390625q0.578125 0.828125 0.578125 1.921875q0 1.09375 -0.625 2.0625q-0.625 
0.953125 -1.796875 1.484375q-1.15625 0.53125 -2.609375 0.53125q-1.84375 0 -3.09375 -0.53125q-1.25 -0.546875 -1.96875 -1.625q-0.703125 -1.078125 -0.734375 -2.453125zm12.5061035 -2.25q0 -3.390625 1.8125 -5.296875q1.828125 -1.921875 4.703125 -1.921875q1.875 0 3.390625 0.90625q1.515625 0.890625 2.296875 2.5q0.796875 1.609375 0.796875 3.65625q0 2.0625 -0.84375 3.703125q-0.828125 1.625 -2.359375 2.46875q-1.53125 0.84375 -3.296875 0.84375q-1.921875 0 -3.4375 -0.921875q-1.5 -0.9375 -2.28125 -2.53125q-0.78125 -1.609375 -0.78125 -3.40625zm1.859375 0.03125q0 2.453125 1.3125 3.875q1.328125 1.40625 3.3125 1.40625q2.03125 0 3.34375 -1.421875q1.3125 -1.4375 1.3125 -4.0625q0 -1.65625 -0.5625 -2.890625q-0.546875 -1.234375 -1.640625 -1.921875q-1.078125 -0.6875 -2.421875 -0.6875q-1.90625 0 -3.28125 1.3125q-1.375 1.3125 -1.375 4.390625zm21.819702 5.09375l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm0.9020996 -3.421875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm9.297607 4.921875l0 -13.59375l1.671875 0l0 7.75l3.953125 -4.015625l2.15625 0l-3.765625 3.65625l4.140625 6.203125l-2.0625 0l-3.25 -5.03125l-1.171875 1.125l0 3.90625l-1.671875 0zm16.0625 -3.171875l1.71875 
0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.110107 5.875l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0z" fill-rule="nonzero"></path><path fill="#bf9000" d="m550.4829 1121.1864l156.3465 0l0 76.81885l-156.3465 0z" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m550.4829 1121.1864l156.3465 0l0 76.81885l-156.3465 0z" fill-rule="nonzero"></path><path fill="#000000" d="m571.6152 1166.5157l0 -13.59375l1.8125 0l0 13.59375l-1.8125 0zm11.058289 0l0 -1.25q-0.9375 1.46875 -2.75 1.46875q-1.171875 0 -2.171875 -0.640625q-0.984375 -0.65625 -1.53125 -1.8125q-0.53125 -1.171875 -0.53125 -2.6875q0 -1.46875 0.484375 -2.671875q0.5 -1.203125 1.46875 -1.84375q0.984375 -0.640625 2.203125 -0.640625q0.890625 0 1.578125 0.375q0.703125 0.375 1.140625 0.984375l0 -4.875l1.65625 0l0 13.59375l-1.546875 0zm-5.28125 -4.921875q0 1.890625 0.796875 2.828125q0.8125 0.9375 1.890625 0.9375q1.09375 0 1.859375 -0.890625q0.765625 -0.890625 0.765625 -2.734375q0 -2.015625 -0.78125 -2.953125q-0.78125 -0.953125 -1.921875 -0.953125q-1.109375 0 -1.859375 
0.90625q-0.75 0.90625 -0.75 2.859375zm16.016357 1.75l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.110107 5.875l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0zm14.031921 -1.5l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm1.5427246 -10.1875l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm4.5354004 0l0 -8.546875l-1.484375 0l0 -1.3125l1.484375 0l0 -1.046875q0 -0.984375 0.171875 -1.46875q0.234375 -0.65625 0.84375 -1.046875q0.609375 -0.40625 1.703125 -0.40625q0.703125 0 1.5625 0.15625l-0.25 1.46875q-0.515625 -0.09375 -0.984375 -0.09375q-0.765625 0 -1.078125 0.328125q-0.3125 0.3125 -0.3125 1.203125l0 0.90625l1.921875 0l0 1.3125l-1.921875 0l0 8.546875l-1.65625 0zm4.6989746 
3.796875l-0.171875 -1.5625q0.546875 0.140625 0.953125 0.140625q0.546875 0 0.875 -0.1875q0.34375 -0.1875 0.5625 -0.515625q0.15625 -0.25 0.5 -1.25q0.046875 -0.140625 0.15625 -0.40625l-3.734375 -9.875l1.796875 0l2.046875 5.71875q0.40625 1.078125 0.71875 2.28125q0.28125 -1.15625 0.6875 -2.25l2.09375 -5.75l1.671875 0l-3.75 10.03125q-0.59375 1.625 -0.9375 2.234375q-0.4375 0.828125 -1.015625 1.203125q-0.578125 0.390625 -1.375 0.390625q-0.484375 0 -1.078125 -0.203125zm21.042664 -3.796875l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm3.2507324 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm16.75 
-0.234375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094421 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m490.2467 1036.1575l0 42.51465l138.42523 0l0 42.52478" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m490.2467 1036.1575l0 42.51465l138.42523 0l0 39.097656" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m628.67194 1117.7698l-1.1246338 -1.1246338l1.1246338 3.0898438l1.1245728 -3.0898438z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m767.31494 1036.1575l0 42.51465l-138.64563 0l0 42.52478" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m767.31494 1036.1575l0 42.51465l-138.64563 0l0 39.097656" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m628.6693 1117.7698l-1.1246338 -1.1246338l1.1246338 3.0898438l1.1245728 -3.0898438z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" 
d="m623.2572 704.6588l4.0 47.905518" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m623.2572 704.6588l3.5007324 41.92633" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m625.11194 746.72253l2.0236206 4.3849487l1.2684326 -4.65979z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m628.6562 1198.0052l0 25.002075l385.45148 0l0 -553.4745l-312.66412 0" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m628.6562 1198.0052l0 25.002075l385.45148 0l0 -553.4745l-309.237 0" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m704.87067 669.53284l1.1245728 -1.1246338l-3.0897827 1.1246338l3.0897827 1.1245728z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m701.4305 651.92975l522.5573 3.0775146l0 -581.44293l-519.1407 0" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m701.4304 651.92975l522.5575 3.0775146l0 -581.44293l-515.71375 0" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m708.2742 73.56431l1.1246338 -1.124588l-3.0897827 1.124588l3.0897827 1.1245804z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m808.0315 611.3517l466.8661 0l0 43.653564l-466.8661 0z" fill-rule="nonzero"></path><path fill="#000000" d="m818.5315 638.2717l0 -13.59375l6.03125 0q1.8125 0 2.75 0.359375q0.953125 0.359375 1.515625 1.296875q0.5625 0.921875 0.5625 2.046875q0 1.453125 -0.9375 2.453125q-0.921875 0.984375 -2.890625 1.25q0.71875 0.34375 1.09375 0.671875q0.78125 0.734375 1.484375 1.8125l2.375 3.703125l-2.265625 0l-1.796875 -2.828125q-0.796875 -1.21875 -1.3125 -1.875q-0.5 -0.65625 -0.90625 -0.90625q-0.40625 -0.265625 -0.8125 -0.359375q-0.3125 -0.078125 
-1.015625 -0.078125l-2.078125 0l0 6.046875l-1.796875 0zm1.796875 -7.59375l3.859375 0q1.234375 0 1.921875 -0.25q0.703125 -0.265625 1.0625 -0.828125q0.375 -0.5625 0.375 -1.21875q0 -0.96875 -0.703125 -1.578125q-0.703125 -0.625 -2.21875 -0.625l-4.296875 0l0 4.5zm18.176086 4.421875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm15.500732 5.875l0 -1.25q-0.9375 1.46875 -2.75 1.46875q-1.171875 0 -2.171875 -0.640625q-0.984375 -0.65625 -1.53125 -1.8125q-0.53125 -1.171875 -0.53125 -2.6875q0 -1.46875 0.484375 -2.671875q0.5 -1.203125 1.46875 -1.84375q0.984375 -0.640625 2.203125 -0.640625q0.890625 0 1.578125 0.375q0.703125 0.375 1.140625 0.984375l0 -4.875l1.65625 0l0 13.59375l-1.546875 0zm-5.28125 -4.921875q0 1.890625 0.796875 2.828125q0.8125 0.9375 1.890625 0.9375q1.09375 0 1.859375 -0.890625q0.765625 -0.890625 0.765625 -2.734375q0 -2.015625 -0.78125 -2.953125q-0.78125 -0.953125 -1.921875 -0.953125q-1.109375 0 -1.859375 0.90625q-0.75 0.90625 -0.75 2.859375zm9.281921 -6.765625l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm4.1135864 0l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm12.9783325 -3.171875l1.71875 
0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm15.547546 2.265625l1.640625 0.21875q-0.265625 1.6875 -1.375 2.65625q-1.109375 0.953125 -2.734375 0.953125q-2.015625 0 -3.25 -1.3125q-1.21875 -1.328125 -1.21875 -3.796875q0 -1.59375 0.515625 -2.78125q0.53125 -1.203125 1.609375 -1.796875q1.09375 -0.609375 2.359375 -0.609375q1.609375 0 2.625 0.8125q1.015625 0.8125 1.3125 2.3125l-1.625 0.25q-0.234375 -1.0 -0.828125 -1.5q-0.59375 -0.5 -1.421875 -0.5q-1.265625 0 -2.0625 0.90625q-0.78125 0.90625 -0.78125 2.859375q0 1.984375 0.765625 2.890625q0.765625 0.890625 1.984375 0.890625q0.984375 0 1.640625 -0.59375q0.65625 -0.609375 0.84375 -1.859375zm6.546875 2.109375l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm10.366638 0l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 
-0.078125zm0.9020386 -3.421875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm13.215271 5.15625l3.9375 -14.0625l1.34375 0l-3.9375 14.0625l-1.34375 0zm8.261414 -0.234375l-3.015625 -9.859375l1.71875 0l1.5625 5.6875l0.59375 2.125q0.03125 -0.15625 0.5 -2.03125l1.578125 -5.78125l1.71875 0l1.46875 5.71875l0.484375 1.890625l0.578125 -1.90625l1.6875 -5.703125l1.625 0l-3.078125 9.859375l-1.734375 0l-1.578125 -5.90625l-0.375 -1.671875l-2.0 7.578125l-1.734375 0zm18.394836 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.078857 5.875l0 -13.59375l1.671875 0l0 13.59375l-1.671875 0zm10.613586 -3.609375l1.640625 0.21875q-0.265625 1.6875 -1.375 2.65625q-1.109375 0.953125 -2.734375 0.953125q-2.015625 0 -3.25 -1.3125q-1.21875 -1.328125 -1.21875 -3.796875q0 -1.59375 0.515625 -2.78125q0.53125 -1.203125 1.609375 -1.796875q1.09375 -0.609375 2.359375 -0.609375q1.609375 0 2.625 0.8125q1.015625 0.8125 1.3125 2.3125l-1.625 0.25q-0.234375 -1.0 
-0.828125 -1.5q-0.59375 -0.5 -1.421875 -0.5q-1.265625 0 -2.0625 0.90625q-0.78125 0.90625 -0.78125 2.859375q0 1.984375 0.765625 2.890625q0.765625 0.890625 1.984375 0.890625q0.984375 0 1.640625 -0.59375q0.65625 -0.609375 0.84375 -1.859375zm2.265625 -1.3125q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm9.281921 4.921875l0 -9.859375l1.5 0l0 1.390625q0.453125 -0.71875 1.21875 -1.15625q0.78125 -0.453125 1.765625 -0.453125q1.09375 0 1.796875 0.453125q0.703125 0.453125 0.984375 1.28125q1.171875 -1.734375 3.046875 -1.734375q1.46875 0 2.25 0.8125q0.796875 0.8125 0.796875 2.5l0 6.765625l-1.671875 0l0 -6.203125q0 -1.0 -0.15625 -1.4375q-0.15625 -0.453125 -0.59375 -0.71875q-0.421875 -0.265625 -1.0 -0.265625q-1.03125 0 -1.71875 0.6875q-0.6875 0.6875 -0.6875 2.21875l0 5.71875l-1.671875 0l0 -6.40625q0 -1.109375 -0.40625 -1.65625q-0.40625 -0.5625 -1.34375 -0.5625q-0.703125 0 -1.3125 0.375q-0.59375 0.359375 -0.859375 1.078125q-0.265625 0.71875 -0.265625 2.0625l0 5.109375l-1.671875 0zm22.290833 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 
-1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm14.293396 9.65625l0 -13.640625l1.53125 0l0 1.28125q0.53125 -0.75 1.203125 -1.125q0.6875 -0.375 1.640625 -0.375q1.265625 0 2.234375 0.65625q0.96875 0.640625 1.453125 1.828125q0.5 1.1875 0.5 2.59375q0 1.515625 -0.546875 2.734375q-0.546875 1.203125 -1.578125 1.84375q-1.03125 0.640625 -2.171875 0.640625q-0.84375 0 -1.515625 -0.34375q-0.65625 -0.359375 -1.078125 -0.890625l0 4.796875l-1.671875 0zm1.515625 -8.65625q0 1.90625 0.765625 2.8125q0.78125 0.90625 1.875 0.90625q1.109375 0 1.890625 -0.9375q0.796875 -0.9375 0.796875 -2.921875q0 -1.875 -0.78125 -2.8125q-0.765625 -0.9375 -1.84375 -0.9375q-1.0625 0 -1.890625 1.0q-0.8125 1.0 -0.8125 2.890625zm15.297607 3.65625q-0.9375 0.796875 -1.796875 1.125q-0.859375 0.3125 -1.84375 0.3125q-1.609375 0 -2.484375 -0.78125q-0.875 -0.796875 -0.875 -2.03125q0 -0.734375 0.328125 -1.328125q0.328125 -0.59375 0.859375 -0.953125q0.53125 -0.359375 1.203125 -0.546875q0.5 -0.140625 1.484375 -0.25q2.03125 -0.25 2.984375 -0.578125q0 -0.34375 0 -0.4375q0 -1.015625 -0.46875 -1.4375q-0.640625 -0.5625 -1.90625 -0.5625q-1.171875 0 -1.734375 0.40625q-0.5625 0.40625 -0.828125 1.46875l-1.640625 -0.234375q0.234375 -1.046875 0.734375 -1.6875q0.515625 -0.640625 1.46875 -0.984375q0.96875 -0.359375 2.25 -0.359375q1.265625 0 2.046875 0.296875q0.78125 0.296875 1.15625 0.75q0.375 0.453125 0.515625 1.140625q0.09375 0.421875 0.09375 1.53125l0 2.234375q0 2.328125 0.09375 2.953125q0.109375 0.609375 0.4375 1.171875l-1.75 0q-0.265625 -0.515625 -0.328125 -1.21875zm-0.140625 -3.71875q-0.90625 0.359375 -2.734375 0.625q-1.03125 0.140625 -1.453125 0.328125q-0.421875 0.1875 -0.65625 0.546875q-0.234375 0.359375 -0.234375 0.796875q0 0.671875 0.5 1.125q0.515625 0.4375 1.484375 0.4375q0.96875 0 1.71875 -0.421875q0.75 -0.4375 1.109375 -1.15625q0.265625 -0.578125 0.265625 -1.671875l0 -0.609375zm3.7819214 5.75l1.609375 0.25q0.109375 0.75 0.578125 1.09375q0.609375 
0.453125 1.6875 0.453125q1.171875 0 1.796875 -0.46875q0.62506104 -0.453125 0.85943604 -1.28125q0.125 -0.515625 0.109375 -2.15625q-1.093811 1.296875 -2.718811 1.296875q-2.03125 0 -3.15625 -1.46875q-1.109375 -1.46875 -1.109375 -3.515625q0 -1.40625 0.515625 -2.59375q0.515625 -1.203125 1.484375 -1.84375q0.96875 -0.65625 2.265625 -0.65625q1.75 0 2.875061 1.40625l0 -1.1875l1.546875 0l0 8.515625q0 2.3125 -0.46875 3.265625q-0.46875 0.96875 -1.484375 1.515625q-1.015686 0.5625 -2.500061 0.5625q-1.765625 0 -2.859375 -0.796875q-1.078125 -0.796875 -1.03125 -2.390625zm1.375 -5.921875q0 1.953125 0.765625 2.84375q0.78125 0.890625 1.9375 0.890625q1.140625 0 1.921936 -0.890625q0.78125 -0.890625 0.78125 -2.78125q0 -1.8125 -0.8125 -2.71875q-0.79693604 -0.921875 -1.921936 -0.921875q-1.109375 0 -1.890625 0.90625q-0.78125 0.890625 -0.78125 2.671875zm16.047668 1.9375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm16.12146 5.875l-3.015625 -9.859375l1.71875 0l1.5625 5.6875l0.59375 2.125q0.03125 -0.15625 0.5 -2.03125l1.578125 -5.78125l1.71875 0l1.46875 5.71875l0.484375 1.890625l0.578125 -1.90625l1.6875 -5.703125l1.625 0l-3.078125 9.859375l-1.734375 0l-1.578125 -5.90625l-0.375 -1.671875l-2.0 7.578125l-1.734375 0zm11.6604 -11.6875l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm7.7855225 -1.5l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 
-0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm1.5270996 1.5l0 -13.59375l1.671875 0l0 4.875q1.171875 -1.359375 2.953125 -1.359375q1.09375 0 1.890625 0.4375q0.8125 0.421875 1.15625 1.1875q0.359375 0.765625 0.359375 2.203125l0 6.25l-1.671875 0l0 -6.25q0 -1.25 -0.546875 -1.8125q-0.546875 -0.578125 -1.53125 -0.578125q-0.75 0 -1.40625 0.390625q-0.640625 0.375 -0.921875 1.046875q-0.28125 0.65625 -0.28125 1.8125l0 5.390625l-1.671875 0zm14.887085 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm16.75 -0.234375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 
1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm8.438232 2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm9.328125 0l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 
1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm10.015625 -8.75l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm3.5042725 -4.921875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm9.281982 4.921875l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0zm19.21521 -1.5l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 
-0.078125zm0.9020996 -3.421875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm9.297607 4.921875l0 -13.59375l1.671875 0l0 7.75l3.953125 -4.015625l2.15625 0l-3.765625 3.65625l4.140625 6.203125l-2.0625 0l-3.25 -5.03125l-1.171875 1.125l0 3.90625l-1.671875 0zm16.0625 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.110107 5.875l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m767.6221 103.74803l179.27557 0l0 43.65355l-179.27557 0z" fill-rule="nonzero"></path><path fill="#000000" d="m778.1221 130.66803l0 
-13.59375l6.03125 0q1.8125 0 2.75 0.359375q0.953125 0.359375 1.515625 1.296875q0.5625 0.921875 0.5625 2.046875q0 1.453125 -0.9375 2.453125q-0.921875 0.984375 -2.890625 1.25q0.71875 0.34375 1.09375 0.671875q0.78125 0.734375 1.484375 1.8125l2.375 3.703125l-2.265625 0l-1.796875 -2.828125q-0.796875 -1.21875 -1.3125 -1.875q-0.5 -0.65625 -0.90625 -0.90625q-0.40625 -0.265625 -0.8125 -0.359375q-0.3125 -0.078125 -1.015625 -0.078125l-2.078125 0l0 6.046875l-1.796875 0zm1.796875 -7.59375l3.859375 0q1.234375 0 1.921875 -0.25q0.703125 -0.265625 1.0625 -0.828125q0.375 -0.5625 0.375 -1.21875q0 -0.96875 -0.703125 -1.578125q-0.703125 -0.625 -2.21875 -0.625l-4.296875 0l0 4.5zm18.176025 4.421875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm8.438232 2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 
0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm9.375 -1.984375q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm15.735107 4.921875l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm3.9069824 0l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm12.6657715 -3.609375l1.640625 0.21875q-0.265625 1.6875 -1.375 2.65625q-1.109375 0.953125 -2.734375 0.953125q-2.015625 0 -3.25 -1.3125q-1.21875 -1.328125 -1.21875 -3.796875q0 -1.59375 0.515625 -2.78125q0.53125 -1.203125 1.609375 -1.796875q1.09375 -0.609375 2.359375 -0.609375q1.609375 0 2.625 0.8125q1.015625 0.8125 1.3125 
2.3125l-1.625 0.25q-0.234375 -1.0 -0.828125 -1.5q-0.59375 -0.5 -1.421875 -0.5q-1.265625 0 -2.0625 0.90625q-0.78125 0.90625 -0.78125 2.859375q0 1.984375 0.765625 2.890625q0.765625 0.890625 1.984375 0.890625q0.984375 0 1.640625 -0.59375q0.65625 -0.609375 0.84375 -1.859375zm9.640625 0.4375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm13.590271 2.015625l1.625 -0.21875q0.0625 1.546875 0.578125 2.125q0.53125 0.578125 1.4375 0.578125q0.6875 0 1.171875 -0.3125q0.5 -0.3125 0.671875 -0.84375q0.1875 -0.53125 0.1875 -1.703125l0 -9.359375l1.8125 0l0 9.265625q0 1.703125 -0.421875 2.640625q-0.40625 0.9375 -1.3125 1.4375q-0.890625 0.484375 -2.09375 0.484375q-1.796875 0 -2.75 -1.03125q-0.9375 -1.03125 -0.90625 -3.0625zm9.640625 -0.515625l1.6875 -0.140625q0.125 1.015625 0.5625 1.671875q0.4375 0.65625 1.359375 1.0625q0.9375 0.40625 2.09375 0.40625q1.03125 0 1.8125 -0.3125q0.796875 -0.3125 1.1875 -0.84375q0.390625 -0.53125 0.390625 -1.15625q0 -0.640625 -0.375 -1.109375q-0.375 -0.484375 -1.234375 -0.8125q-0.546875 -0.21875 -2.421875 -0.65625q-1.875 -0.453125 -2.625 -0.859375q-0.96875 -0.515625 -1.453125 -1.265625q-0.46875 -0.75 -0.46875 -1.6875q0 -1.03125 0.578125 -1.921875q0.59375 -0.90625 1.703125 -1.359375q1.125 -0.46875 2.5 -0.46875q1.515625 0 2.671875 0.484375q1.15625 0.484375 1.765625 1.4375q0.625 0.9375 0.671875 2.140625l-1.71875 0.125q-0.140625 -1.28125 -0.953125 -1.9375q-0.796875 -0.671875 -2.359375 -0.671875q-1.625 0 -2.375 
0.609375q-0.75 0.59375 -0.75 1.4375q0 0.734375 0.53125 1.203125q0.515625 0.46875 2.703125 0.96875q2.203125 0.5 3.015625 0.875q1.1875 0.546875 1.75 1.390625q0.578125 0.828125 0.578125 1.921875q0 1.09375 -0.625 2.0625q-0.625 0.953125 -1.796875 1.484375q-1.15625 0.53125 -2.609375 0.53125q-1.84375 0 -3.09375 -0.53125q-1.25 -0.546875 -1.96875 -1.625q-0.703125 -1.078125 -0.734375 -2.453125zm12.5061035 -2.25q0 -3.390625 1.8125 -5.296875q1.828125 -1.921875 4.703125 -1.921875q1.875 0 3.390625 0.90625q1.515625 0.890625 2.296875 2.5q0.796875 1.609375 0.796875 3.65625q0 2.0625 -0.84375 3.703125q-0.828125 1.625 -2.359375 2.46875q-1.53125 0.84375 -3.296875 0.84375q-1.921875 0 -3.4375 -0.921875q-1.5 -0.9375 -2.28125 -2.53125q-0.78125 -1.609375 -0.78125 -3.40625zm1.859375 0.03125q0 2.453125 1.3125 3.875q1.328125 1.40625 3.3125 1.40625q2.03125 0 3.34375 -1.421875q1.3125 -1.4375 1.3125 -4.0625q0 -1.65625 -0.5625 -2.890625q-0.546875 -1.234375 -1.640625 -1.921875q-1.078125 -0.6875 -2.421875 -0.6875q-1.90625 0 -3.28125 1.3125q-1.375 1.3125 -1.375 4.390625zm13.183289 6.59375l0 -13.59375l1.84375 0l7.140625 10.671875l0 -10.671875l1.71875 0l0 13.59375l-1.84375 0l-7.140625 -10.6875l0 10.6875l-1.71875 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m529.084 131.11548l-343.0866 -1.102356" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m529.084 131.11548l-337.08667 -1.0830841" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m192.00266 128.38068l-4.5433807 1.637146l4.5327606 1.6663055z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m258.7034 136.56955l156.34647 0l0 70.267715l-156.34647 0z" fill-rule="nonzero"></path><path fill="#000000" d="m269.17215 163.48955l0 -13.59375l5.125 0q1.359375 0 2.078125 0.125q1.0 0.171875 1.671875 0.640625q0.671875 0.46875 1.078125 1.3125q0.421875 0.84375 0.421875 1.84375q0 1.734375 
-1.109375 2.9375q-1.09375 1.203125 -3.984375 1.203125l-3.484375 0l0 5.53125l-1.796875 0zm1.796875 -7.140625l3.515625 0q1.75 0 2.46875 -0.640625q0.734375 -0.65625 0.734375 -1.828125q0 -0.859375 -0.4375 -1.46875q-0.421875 -0.609375 -1.125 -0.796875q-0.453125 -0.125 -1.671875 -0.125l-3.484375 0l0 4.859375zm16.865448 5.921875q-0.9375 0.796875 -1.796875 1.125q-0.859375 0.3125 -1.84375 0.3125q-1.609375 0 -2.484375 -0.78125q-0.875 -0.796875 -0.875 -2.03125q0 -0.734375 0.328125 -1.328125q0.328125 -0.59375 0.859375 -0.953125q0.53125 -0.359375 1.203125 -0.546875q0.5 -0.140625 1.484375 -0.25q2.03125 -0.25 2.984375 -0.578125q0 -0.34375 0 -0.4375q0 -1.015625 -0.46875 -1.4375q-0.640625 -0.5625 -1.90625 -0.5625q-1.171875 0 -1.734375 0.40625q-0.5625 0.40625 -0.828125 1.46875l-1.640625 -0.234375q0.234375 -1.046875 0.734375 -1.6875q0.515625 -0.640625 1.46875 -0.984375q0.96875 -0.359375 2.25 -0.359375q1.265625 0 2.046875 0.296875q0.78125 0.296875 1.15625 0.75q0.375 0.453125 0.515625 1.140625q0.09375 0.421875 0.09375 1.53125l0 2.234375q0 2.328125 0.09375 2.953125q0.109375 0.609375 0.4375 1.171875l-1.75 0q-0.265625 -0.515625 -0.328125 -1.21875zm-0.140625 -3.71875q-0.90625 0.359375 -2.734375 0.625q-1.03125 0.140625 -1.453125 0.328125q-0.421875 0.1875 -0.65625 0.546875q-0.234375 0.359375 -0.234375 0.796875q0 0.671875 0.5 1.125q0.515625 0.4375 1.484375 0.4375q0.96875 0 1.71875 -0.421875q0.75 -0.4375 1.109375 -1.15625q0.265625 -0.578125 0.265625 -1.671875l0 -0.609375zm4.0632324 4.9375l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm5.556427 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 
-1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm16.75 -0.234375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm13.012146 5.875l5.234375 -13.59375l1.9375 0l5.5625 13.59375l-2.046875 0l-1.59375 -4.125l-5.6875 0l-1.484375 4.125l-1.921875 0zm3.921875 -5.578125l4.609375 0l-1.40625 -3.78125q-0.65625 -1.703125 -0.96875 -2.8125q-0.265625 1.3125 -0.734375 2.59375l-1.5 4.0zm10.021698 5.578125l0 -13.59375l5.125 0q1.359375 0 2.078125 0.125q1.0 0.171875 1.671875 0.640625q0.671875 0.46875 1.078125 1.3125q0.421875 0.84375 0.421875 1.84375q0 1.734375 -1.109375 2.9375q-1.09375 1.203125 -3.984375 1.203125l-3.484375 0l0 5.53125l-1.796875 
0zm1.796875 -7.140625l3.515625 0q1.75 0 2.46875 -0.640625q0.734375 -0.65625 0.734375 -1.828125q0 -0.859375 -0.4375 -1.46875q-0.421875 -0.609375 -1.125 -0.796875q-0.453125 -0.125 -1.671875 -0.125l-3.484375 0l0 4.859375zm10.943573 7.140625l0 -13.59375l1.8125 0l0 13.59375l-1.8125 0zm9.835358 0l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm12.978302 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm8.438202 2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 
0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm10.0 6.71875l0 -13.640625l1.53125 0l0 1.28125q0.53125 -0.75 1.203125 -1.125q0.6875 -0.375 1.640625 -0.375q1.265625 0 2.234375 0.65625q0.96875 0.640625 1.453125 1.828125q0.5 1.1875 0.5 2.59375q0 1.515625 -0.546875 2.734375q-0.546875 1.203125 -1.578125 1.84375q-1.03125 0.640625 -2.171875 0.640625q-0.84375 0 -1.515625 -0.34375q-0.65625 -0.359375 -1.078125 -0.890625l0 4.796875l-1.671875 0zm1.515625 -8.65625q0 1.90625 0.765625 2.8125q0.78125 0.90625 1.875 0.90625q1.109375 0 1.890625 -0.9375q0.796875 -0.9375 0.796875 -2.921875q0 -1.875 -0.78125 -2.8125q-0.765625 -0.9375 -1.84375 -0.9375q-1.0625 0 -1.890625 1.0q-0.8125 1.0 -0.8125 2.890625z" fill-rule="nonzero"></path><path fill="#000000" d="m276.73465 183.88017q-0.828125 0.921875 -1.8125 1.390625q-0.96875 0.453125 -2.09375 0.453125q-2.09375 0 -3.3125 -1.40625q-1.0 -1.15625 -1.0 -2.578125q0 -1.265625 0.8125 -2.28125q0.8125 -1.015625 2.421875 -1.78125q-0.90625 -1.0625 -1.21875 -1.71875q-0.296875 -0.65625 -0.296875 -1.265625q0 -1.234375 0.953125 -2.125q0.953125 -0.90625 2.421875 -0.90625q1.390625 0 2.265625 0.859375q0.890625 0.84375 0.890625 2.046875q0 1.9375 -2.5625 3.3125l2.4375 3.09375q0.421875 -0.8125 0.640625 -1.890625l1.734375 0.375q-0.4375 1.78125 -1.203125 2.9375q0.9375 1.234375 2.125 2.078125l-1.125 1.328125q-1.0 -0.640625 -2.078125 -1.921875zm-3.40625 -7.078125q1.09375 -0.640625 1.40625 -1.125q0.328125 -0.484375 0.328125 -1.0625q0 -0.703125 -0.453125 -1.140625q-0.4375 -0.4375 -1.09375 -0.4375q-0.671875 0 -1.125 0.4375q-0.453125 0.421875 -0.453125 1.0625q0 0.3125 0.15625 0.65625q0.171875 0.34375 0.5 0.734375l0.734375 0.875zm2.359375 5.765625l-3.0625 -3.796875q-1.359375 0.8125 -1.84375 1.5q-0.46875 0.6875 -0.46875 1.375q0 0.8125 
0.65625 1.703125q0.671875 0.890625 1.875 0.890625q0.75 0 1.546875 -0.46875q0.8125 -0.46875 1.296875 -1.203125zm17.283142 2.921875l0 -1.25q-0.9375 1.46875 -2.75 1.46875q-1.171875 0 -2.171875 -0.640625q-0.984375 -0.65625 -1.53125 -1.8125q-0.53125 -1.171875 -0.53125 -2.6875q0 -1.46875 0.484375 -2.671875q0.5 -1.203125 1.46875 -1.84375q0.984375 -0.640625 2.203125 -0.640625q0.890625 0 1.578125 0.375q0.703125 0.375 1.140625 0.984375l0 -4.875l1.65625 0l0 13.59375l-1.546875 0zm-5.28125 -4.921875q0 1.890625 0.796875 2.828125q0.8125 0.9375 1.890625 0.9375q1.09375 0 1.859375 -0.890625q0.765625 -0.890625 0.765625 -2.734375q0 -2.015625 -0.78125 -2.953125q-0.78125 -0.953125 -1.921875 -0.953125q-1.109375 0 -1.859375 0.90625q-0.75 0.90625 -0.75 2.859375zm9.281952 -6.765625l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm3.4573364 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm10.0 6.71875l0 -13.640625l1.53125 0l0 1.28125q0.53125 -0.75 
1.203125 -1.125q0.6875 -0.375 1.640625 -0.375q1.265625 0 2.234375 0.65625q0.96875 0.640625 1.453125 1.828125q0.5 1.1875 0.5 2.59375q0 1.515625 -0.546875 2.734375q-0.546875 1.203125 -1.578125 1.84375q-1.03125 0.640625 -2.171875 0.640625q-0.84375 0 -1.515625 -0.34375q-0.65625 -0.359375 -1.078125 -0.890625l0 4.796875l-1.671875 0zm1.515625 -8.65625q0 1.90625 0.765625 2.8125q0.78125 0.90625 1.875 0.90625q1.109375 0 1.890625 -0.9375q0.796875 -0.9375 0.796875 -2.921875q0 -1.875 -0.78125 -2.8125q-0.765625 -0.9375 -1.84375 -0.9375q-1.0625 0 -1.890625 1.0q-0.8125 1.0 -0.8125 2.890625zm8.828827 4.875l0 -13.59375l1.671875 0l0 13.59375l-1.671875 0zm10.613586 -1.21875q-0.9375 0.796875 -1.796875 1.125q-0.859375 0.3125 -1.84375 0.3125q-1.609375 0 -2.484375 -0.78125q-0.875 -0.796875 -0.875 -2.03125q0 -0.734375 0.328125 -1.328125q0.328125 -0.59375 0.859375 -0.953125q0.53125 -0.359375 1.203125 -0.546875q0.5 -0.140625 1.484375 -0.25q2.03125 -0.25 2.984375 -0.578125q0 -0.34375 0 -0.4375q0 -1.015625 -0.46875 -1.4375q-0.640625 -0.5625 -1.90625 -0.5625q-1.171875 0 -1.734375 0.40625q-0.5625 0.40625 -0.828125 1.46875l-1.640625 -0.234375q0.234375 -1.046875 0.734375 -1.6875q0.515625 -0.640625 1.46875 -0.984375q0.96875 -0.359375 2.25 -0.359375q1.265625 0 2.046875 0.296875q0.78125 0.296875 1.15625 0.75q0.375 0.453125 0.515625 1.140625q0.09375 0.421875 0.09375 1.53125l0 2.234375q0 2.328125 0.09375 2.953125q0.109375 0.609375 0.4375 1.171875l-1.75 0q-0.265625 -0.515625 -0.328125 -1.21875zm-0.140625 -3.71875q-0.90625 0.359375 -2.734375 0.625q-1.03125 0.140625 -1.453125 0.328125q-0.421875 0.1875 -0.65625 0.546875q-0.234375 0.359375 -0.234375 0.796875q0 0.671875 0.5 1.125q0.515625 0.4375 1.484375 0.4375q0.96875 0 1.71875 -0.421875q0.75 -0.4375 1.109375 -1.15625q0.265625 -0.578125 0.265625 -1.671875l0 -0.609375zm4.000702 8.734375l-0.171875 -1.5625q0.546875 0.140625 0.953125 0.140625q0.546875 0 0.875 -0.1875q0.34375 -0.1875 0.5625 -0.515625q0.15625 -0.25 0.5 -1.25q0.046875 -0.140625 0.15625 
-0.40625l-3.734375 -9.875l1.796875 0l2.046875 5.71875q0.40625 1.078125 0.71875 2.28125q0.28125 -1.15625 0.6875 -2.25l2.09375 -5.75l1.671875 0l-3.75 10.03125q-0.59375 1.625 -0.9375 2.234375q-0.4375 0.828125 -1.015625 1.203125q-0.578125 0.390625 -1.375 0.390625q-0.484375 0 -1.078125 -0.203125zm14.589569 -0.015625l0 -13.640625l1.53125 0l0 1.28125q0.53125 -0.75 1.203125 -1.125q0.6875 -0.375 1.640625 -0.375q1.265625 0 2.234375 0.65625q0.96875 0.640625 1.453125 1.828125q0.5 1.1875 0.5 2.59375q0 1.515625 -0.546875 2.734375q-0.546875 1.203125 -1.578125 1.84375q-1.03125 0.640625 -2.171875 0.640625q-0.84375 0 -1.515625 -0.34375q-0.65625 -0.359375 -1.078125 -0.890625l0 4.796875l-1.671875 0zm1.515625 -8.65625q0 1.90625 0.765625 2.8125q0.78125 0.90625 1.875 0.90625q1.109375 0 1.890625 -0.9375q0.796875 -0.9375 0.796875 -2.921875q0 -1.875 -0.78125 -2.8125q-0.765625 -0.9375 -1.84375 -0.9375q-1.0625 0 -1.890625 1.0q-0.8125 1.0 -0.8125 2.890625zm15.297577 3.65625q-0.9375 0.796875 -1.796875 1.125q-0.859375 0.3125 -1.84375 0.3125q-1.609375 0 -2.484375 -0.78125q-0.875 -0.796875 -0.875 -2.03125q0 -0.734375 0.328125 -1.328125q0.328125 -0.59375 0.859375 -0.953125q0.53125 -0.359375 1.203125 -0.546875q0.5 -0.140625 1.484375 -0.25q2.03125 -0.25 2.984375 -0.578125q0 -0.34375 0 -0.4375q0 -1.015625 -0.46875 -1.4375q-0.640625 -0.5625 -1.90625 -0.5625q-1.171875 0 -1.734375 0.40625q-0.5625 0.40625 -0.828125 1.46875l-1.640625 -0.234375q0.234375 -1.046875 0.734375 -1.6875q0.515625 -0.640625 1.46875 -0.984375q0.96875 -0.359375 2.25 -0.359375q1.265625 0 2.046875 0.296875q0.78125 0.296875 1.15625 0.75q0.375 0.453125 0.515625 1.140625q0.09375 0.421875 0.09375 1.53125l0 2.234375q0 2.328125 0.09375 2.953125q0.109375 0.609375 0.4375 1.171875l-1.75 0q-0.265625 -0.515625 -0.328125 -1.21875zm-0.140625 -3.71875q-0.90625 0.359375 -2.734375 0.625q-1.03125 0.140625 -1.453125 0.328125q-0.421875 0.1875 -0.65625 0.546875q-0.234375 0.359375 -0.234375 0.796875q0 0.671875 0.5 1.125q0.515625 0.4375 1.484375 
0.4375q0.96875 0 1.71875 -0.421875q0.75 -0.4375 1.109375 -1.15625q0.265625 -0.578125 0.265625 -1.671875l0 -0.609375zm3.7819824 5.75l1.609375 0.25q0.109375 0.75 0.578125 1.09375q0.609375 0.453125 1.6875 0.453125q1.171875 0 1.796875 -0.46875q0.625 -0.453125 0.859375 -1.28125q0.125 -0.515625 0.109375 -2.15625q-1.09375 1.296875 -2.71875 1.296875q-2.03125 0 -3.15625 -1.46875q-1.109375 -1.46875 -1.109375 -3.515625q0 -1.40625 0.515625 -2.59375q0.515625 -1.203125 1.484375 -1.84375q0.96875 -0.65625 2.265625 -0.65625q1.75 0 2.875 1.40625l0 -1.1875l1.546875 0l0 8.515625q0 2.3125 -0.46875 3.265625q-0.46875 0.96875 -1.484375 1.515625q-1.015625 0.5625 -2.5 0.5625q-1.765625 0 -2.859375 -0.796875q-1.078125 -0.796875 -1.03125 -2.390625zm1.375 -5.921875q0 1.953125 0.765625 2.84375q0.78125 0.890625 1.9375 0.890625q1.140625 0 1.921875 -0.890625q0.78125 -0.890625 0.78125 -2.78125q0 -1.8125 -0.8125 -2.71875q-0.796875 -0.921875 -1.921875 -0.921875q-1.109375 0 -1.890625 0.90625q-0.78125 0.890625 -0.78125 2.671875zm16.047577 1.9375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875z" fill-rule="nonzero"></path><path fill="#ffffff" d="m94.25984 75.59843l0 0c0 -12.054596 10.597107 -21.826775 23.669289 -21.826775l0 0c13.072197 0 23.669289 9.772179 23.669289 21.826775l0 0c0 12.054588 -10.597092 21.826767 -23.669289 21.826767l0 0c-13.072182 0 -23.669289 -9.772179 -23.669289 -21.826767z" fill-rule="nonzero"></path><path stroke="#000000" 
stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m94.25984 75.59843l0 0c0 -12.054596 10.597107 -21.826775 23.669289 -21.826775l0 0c13.072197 0 23.669289 9.772179 23.669289 21.826775l0 0c0 12.054588 -10.597092 21.826767 -23.669289 21.826767l0 0c-13.072182 0 -23.669289 -9.772179 -23.669289 -21.826767z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m117.92913 97.42519l1.1653595 119.55906" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m117.92913 97.42519l1.1653595 119.55906" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m117.92913 128.50131l29.574806 42.48819" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m117.92913 128.50131l29.574806 42.48819" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m91.50131 170.50131l26.425194 -41.07086" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m91.50131 170.50131l26.425194 -41.07086" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m235.77428 40.0l179.27559 0l0 48.0l-179.27559 0z" fill-rule="nonzero"></path><path fill="#000000" d="m273.33563 65.59187l0 -1.609375l5.765625 0l0 5.046875q-1.328125 1.0625 -2.75 1.59375q-1.40625 0.53125 -2.890625 0.53125q-2.0 0 -3.640625 -0.859375q-1.625 -0.859375 -2.46875 -2.484375q-0.828125 -1.625 -0.828125 -3.625q0 -1.984375 0.828125 -3.703125q0.828125 -1.71875 2.390625 -2.546875q1.5625 -0.84375 3.59375 -0.84375q1.46875 0 2.65625 0.484375q1.203125 0.46875 1.875 1.328125q0.671875 0.84375 1.03125 2.21875l-1.625 0.4375q-0.3125 -1.03125 -0.765625 -1.625q-0.453125 -0.59375 -1.296875 -0.953125q-0.84375 -0.359375 -1.875 -0.359375q-1.234375 0 -2.140625 0.375q-0.890625 0.375 -1.453125 1.0q-0.546875 0.609375 -0.84375 1.34375q-0.53125 1.25 -0.53125 2.734375q0 1.8125 0.625 3.046875q0.640625 
1.21875 1.828125 1.8125q1.203125 0.59375 2.546875 0.59375q1.171875 0 2.28125 -0.453125q1.109375 -0.453125 1.6875 -0.953125l0 -2.53125l-4.0 0zm8.183289 5.328125l0 -13.59375l9.84375 0l0 1.59375l-8.046875 0l0 4.171875l7.53125 0l0 1.59375l-7.53125 0l0 4.625l8.359375 0l0 1.609375l-10.15625 0zm15.865448 0l0 -12.0l-4.46875 0l0 -1.59375l10.765625 0l0 1.59375l-4.5 0l0 12.0l-1.796875 0zm11.65741 0.234375l3.9375 -14.0625l1.34375 0l-3.9375 14.0625l-1.34375 0zm6.417694 -0.234375l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm12.978302 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm8.438202 2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 
-0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm9.375 -1.984375q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm15.735107 4.921875l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm3.906952 0l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm12.665802 -3.609375l1.640625 0.21875q-0.265625 1.6875 -1.375 2.65625q-1.109375 0.953125 -2.734375 0.953125q-2.015625 0 -3.25 -1.3125q-1.21875 -1.328125 -1.21875 -3.796875q0 -1.59375 0.515625 -2.78125q0.53125 
-1.203125 1.609375 -1.796875q1.09375 -0.609375 2.359375 -0.609375q1.609375 0 2.625 0.8125q1.015625 0.8125 1.3125 2.3125l-1.625 0.25q-0.234375 -1.0 -0.828125 -1.5q-0.59375 -0.5 -1.421875 -0.5q-1.265625 0 -2.0625 0.90625q-0.78125 0.90625 -0.78125 2.859375q0 1.984375 0.765625 2.890625q0.765625 0.890625 1.984375 0.890625q0.984375 0 1.640625 -0.59375q0.65625 -0.609375 0.84375 -1.859375zm9.640625 0.4375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m119.125984 215.50131l-38.58268 53.07086" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m119.125984 215.50131l-38.58268 53.07086" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m119.62467 215.50131l42.99212 58.992126" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m119.62467 215.50131l42.99212 58.992126" fill-rule="nonzero"></path></g></svg>
-
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<svg
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ version="1.1"
+ viewBox="0.0 0.0 1338.0 1283.0"
+ fill="none"
+ stroke="none"
+ stroke-linecap="square"
+ stroke-miterlimit="10"
+ id="svg269"
+ sodipodi:docname="Session_Establishment.svg"
+ inkscape:version="1.0.2 (e86c870879, 2021-01-15)">
+ <metadata
+ id="metadata275">
+ <rdf:RDF>
+ <cc:Work
+ rdf:about="">
+ <dc:format>image/svg+xml</dc:format>
+ <dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+ </cc:Work>
+ </rdf:RDF>
+ </metadata>
+ <defs
+ id="defs273" />
+ <sodipodi:namedview
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1"
+ objecttolerance="10"
+ gridtolerance="10"
+ guidetolerance="10"
+ inkscape:pageopacity="0"
+ inkscape:pageshadow="2"
+ inkscape:window-width="1918"
+ inkscape:window-height="1038"
+ id="namedview271"
+ showgrid="false"
+ inkscape:zoom="1.3858145"
+ inkscape:cx="1026.4779"
+ inkscape:cy="752.37863"
+ inkscape:window-x="0"
+ inkscape:window-y="20"
+ inkscape:window-maximized="1"
+ inkscape:current-layer="svg269" />
+ <clipPath
+ id="p.0">
+ <path
+ d="m0 0l1338.0 0l0 1283.0l-1338.0 0l0 -1283.0z"
+ clip-rule="nonzero"
+ id="path2" />
+ </clipPath>
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="M 0,0 H 1338 V 1283 H 0 Z"
+ fill-rule="nonzero"
+ id="path5" />
+ <path
+ fill="#d9ead3"
+ d="M 529.084,59.792652 H 708.35957 V 154.43833 H 529.084 Z"
+ fill-rule="nonzero"
+ id="path7" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="M 529.084,59.792652 H 708.35957 V 154.43833 H 529.084 Z"
+ fill-rule="nonzero"
+ id="path9" />
+ <path
+ fill="#000000"
+ d="m 573.0276,114.03548 -3.60938,-13.59375 h 1.84375 l 2.0625,8.90625 q 0.34375,1.40625 0.57813,2.78125 0.51562,-2.17187 0.60937,-2.51562 l 2.59375,-9.17188 h 2.17188 l 1.95312,6.875 q 0.73438,2.5625 1.04688,4.8125 0.26562,-1.28125 0.6875,-2.95312 l 2.125,-8.73438 h 1.8125 l -3.73438,13.59375 h -1.73437 l -2.85938,-10.35937 q -0.35937,-1.29688 -0.42187,-1.59375 -0.21875,0.9375 -0.40625,1.59375 l -2.89063,10.35937 z m 14.38989,-4.92187 q 0,-2.73438 1.53125,-4.0625 1.26563,-1.09375 3.09375,-1.09375 2.03125,0 3.3125,1.34375 1.29688,1.32812 1.29688,3.67187 0,1.90625 -0.57813,3 -0.5625,1.07813 -1.65625,1.6875 -1.07812,0.59375 -2.375,0.59375 -2.0625,0 -3.34375,-1.32812 -1.28125,-1.32813 -1.28125,-3.8125 z m 1.71875,0 q 0,1.89062 0.82813,2.82812 0.82812,0.9375 2.07812,0.9375 1.25,0 2.0625,-0.9375 0.82813,-0.95312 0.82813,-2.89062 0,-1.82813 -0.82813,-2.76563 -0.82812,-0.9375 -2.0625,-0.9375 -1.25,0 -2.07812,0.9375 -0.82813,0.9375 -0.82813,2.82813 z m 9.26636,4.92187 v -9.85937 h 1.5 v 1.5 q 0.57812,-1.04688 1.0625,-1.375 0.48437,-0.34375 1.07812,-0.34375 0.84375,0 1.71875,0.54687 l -0.57812,1.54688 q -0.60938,-0.35938 -1.23438,-0.35938 -0.54687,0 -0.98437,0.32813 -0.42188,0.32812 -0.60938,0.90625 -0.28125,0.89062 -0.28125,1.95312 v 5.15625 z m 6.2439,0 v -13.59375 h 1.67187 v 7.75 l 3.95313,-4.01562 h 2.15625 l -3.76563,3.65625 4.14063,6.20312 h -2.0625 l -3.25,-5.03125 -1.17188,1.125 v 3.90625 z m 10.85937,0 H 613.959 v -13.59375 h 1.65625 v 4.84375 q 1.0625,-1.32812 2.70312,-1.32812 0.90625,0 1.71875,0.375 0.8125,0.35937 1.32813,1.03125 0.53125,0.65625 0.82812,1.59375 0.29688,0.9375 0.29688,2 0,2.53125 -1.25,3.92187 -1.25,1.375 -3,1.375 -1.75,0 -2.73438,-1.45312 z m -0.0156,-5 q 0,1.76563 0.46875,2.5625 0.79687,1.28125 2.14062,1.28125 1.09375,0 1.89063,-0.9375 0.79687,-0.95312 0.79687,-2.84375 0,-1.92187 -0.76562,-2.84375 -0.76563,-0.92187 -1.84375,-0.92187 -1.09375,0 -1.89063,0.95312 -0.79687,0.95313 -0.79687,2.75 z m 15.59448,1.82813 1.71875,0.21875 q -0.40625,1.5 
-1.51563,2.34375 -1.09375,0.82812 -2.8125,0.82812 -2.15625,0 -3.42187,-1.32812 -1.26563,-1.32813 -1.26563,-3.73438 0,-2.48437 1.26563,-3.85937 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79687 0,0.14063 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92188,2.48438 0.82812,0.85937 2.0625,0.85937 0.90625,0 1.54687,-0.46875 0.65625,-0.48437 1.04688,-1.54687 z m -5.48438,-2.70313 h 5.5 q -0.10937,-1.23437 -0.625,-1.85937 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76562 -0.85938,2.04687 z m 9.11011,5.875 v -9.85937 h 1.5 v 1.40625 q 1.09375,-1.625 3.14063,-1.625 0.89062,0 1.64062,0.32812 0.75,0.3125 1.10938,0.84375 0.375,0.51563 0.53125,1.21875 0.0937,0.46875 0.0937,1.625 v 6.0625 h -1.67188 v -6 q 0,-1.01562 -0.20312,-1.51562 -0.1875,-0.51563 -0.6875,-0.8125 -0.5,-0.29688 -1.17188,-0.29688 -1.0625,0 -1.84375,0.67188 -0.76562,0.67187 -0.76562,2.57812 v 5.375 z m 16.81317,-3.60937 1.64063,0.21875 q -0.26563,1.6875 -1.375,2.65625 -1.10938,0.95312 -2.73438,0.95312 -2.01562,0 -3.25,-1.3125 -1.21875,-1.32812 -1.21875,-3.79687 0,-1.59375 0.51563,-2.78125 0.53125,-1.20313 1.60937,-1.79688 1.09375,-0.60937 2.35938,-0.60937 1.60937,0 2.625,0.8125 1.01562,0.8125 1.3125,2.3125 l -1.625,0.25 q -0.23438,-1 -0.82813,-1.5 -0.59375,-0.5 -1.42187,-0.5 -1.26563,0 -2.0625,0.90625 -0.78125,0.90625 -0.78125,2.85937 0,1.98438 0.76562,2.89063 0.76563,0.89062 1.98438,0.89062 0.98437,0 1.64062,-0.59375 0.65625,-0.60937 0.84375,-1.85937 z m 2.89063,3.60937 v -13.59375 h 1.67187 v 4.875 q 1.17188,-1.35937 2.95313,-1.35937 1.09375,0 1.89062,0.4375 0.8125,0.42187 1.15625,1.1875 0.35938,0.76562 0.35938,2.20312 v 6.25 h -1.67188 v -6.25 q 0,-1.25 -0.54687,-1.8125 -0.54688,-0.57812 -1.53125,-0.57812 -0.75,0 -1.40625,0.39062 -0.64063,0.375 -0.92188,1.04688 -0.28125,0.65625 -0.28125,1.8125 v 5.39062 z"
+ fill-rule="nonzero"
+ id="path11" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 186.2126,85.77165 342.2677,2.708664"
+ fill-rule="nonzero"
+ id="path13" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="m 186.2126,85.77165 336.26794,2.661186"
+ fill-rule="evenodd"
+ id="path15" />
+ <path
+ fill="#000000"
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linecap="butt"
+ d="m 522.4674,90.08451 4.55103,-1.615768 -4.52485,-1.687592 z"
+ fill-rule="evenodd"
+ id="path17" />
+ <path
+ fill="#d9ead3"
+ d="M 464.64304,281.8714 618.72181,199.39896 772.80055,281.8714 618.72181,364.34384 Z"
+ fill-rule="nonzero"
+ id="path19" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="M 464.64304,281.8714 618.72181,199.39896 772.80055,281.8714 618.72181,364.34384 Z"
+ fill-rule="nonzero"
+ id="path21" />
+ <path
+ fill="#000000"
+ d="m 550.6512,266.79138 5.23438,-13.59374 h 1.9375 l 5.5625,13.59374 h -2.04688 l -1.59375,-4.125 h -5.6875 l -1.48437,4.125 z m 3.92188,-5.57813 h 4.60937 L 557.7762,257.432 q -0.65625,-1.7031 -0.96875,-2.81248 -0.26562,1.3125 -0.73437,2.59373 z m 9.80291,5.57813 V 256.932 h 1.5 v 1.40625 q 1.09375,-1.625 3.14063,-1.625 0.89062,0 1.64062,0.32813 0.75,0.3125 1.10938,0.84375 0.375,0.51562 0.53125,1.21875 0.0937,0.46875 0.0937,1.625 v 6.0625 h -1.67188 v -6 q 0,-1.01563 -0.20312,-1.51563 -0.1875,-0.51562 -0.6875,-0.8125 -0.5,-0.29687 -1.17188,-0.29687 -1.0625,0 -1.84375,0.67187 -0.76562,0.67188 -0.76562,2.57813 v 5.375 z m 9.75073,-4.92188 q 0,-2.73437 1.53125,-4.0625 1.26563,-1.09375 3.09375,-1.09375 2.03125,0 3.3125,1.34375 1.29688,1.32813 1.29688,3.67188 0,1.90625 -0.57813,3 -0.5625,1.07812 -1.65625,1.6875 -1.07812,0.59375 -2.375,0.59375 -2.0625,0 -3.34375,-1.32813 -1.28125,-1.32812 -1.28125,-3.8125 z m 1.71875,0 q 0,1.89063 0.82813,2.82813 0.82812,0.9375 2.07812,0.9375 1.25,0 2.0625,-0.9375 0.82813,-0.95313 0.82813,-2.89063 0,-1.82812 -0.82813,-2.76562 -0.82812,-0.9375 -2.0625,-0.9375 -1.25,0 -2.07812,0.9375 -0.82813,0.9375 -0.82813,2.82812 z m 9.28199,4.92188 V 256.932 h 1.5 v 1.40625 q 1.09375,-1.625 3.14062,-1.625 0.89063,0 1.64063,0.32813 0.75,0.3125 1.10937,0.84375 0.375,0.51562 0.53125,1.21875 0.0937,0.46875 0.0937,1.625 v 6.0625 h -1.67187 v -6 q 0,-1.01563 -0.20313,-1.51563 -0.1875,-0.51562 -0.6875,-0.8125 -0.5,-0.29687 -1.17187,-0.29687 -1.0625,0 -1.84375,0.67187 -0.76563,0.67188 -0.76563,2.57813 v 5.375 z m 10.29754,3.79687 -0.17187,-1.5625 q 0.54687,0.14063 0.95312,0.14063 0.54688,0 0.875,-0.1875 0.34375,-0.1875 0.5625,-0.51563 0.15625,-0.25 0.5,-1.25 0.0469,-0.14062 0.15625,-0.40625 l -3.73437,-9.875 h 1.79687 l 2.04688,5.71875 q 0.40625,1.07813 0.71875,2.28125 0.28125,-1.15625 0.6875,-2.25 l 2.09375,-5.75 h 1.67187 l -3.75,10.03125 q -0.59375,1.625 -0.9375,2.23438 -0.4375,0.82812 -1.01562,1.20312 -0.57813,0.39063 -1.375,0.39063 -0.48438,0 
-1.07813,-0.20313 z m 9.40625,-3.79687 V 256.932 h 1.5 v 1.39063 q 0.45313,-0.71875 1.21875,-1.15625 0.78125,-0.45313 1.76563,-0.45313 1.09375,0 1.79687,0.45313 0.70313,0.45312 0.98438,1.28125 1.17187,-1.73438 3.04687,-1.73438 1.46875,0 2.25,0.8125 0.79688,0.8125 0.79688,2.5 v 6.76563 h -1.67188 v -6.20313 q 0,-1 -0.15625,-1.4375 -0.15625,-0.45312 -0.59375,-0.71875 -0.42187,-0.26562 -1,-0.26562 -1.03125,0 -1.71875,0.6875 -0.6875,0.6875 -0.6875,2.21875 v 5.71875 h -1.67187 v -6.40625 q 0,-1.10938 -0.40625,-1.65625 -0.40625,-0.5625 -1.34375,-0.5625 -0.70313,0 -1.3125,0.375 -0.59375,0.35937 -0.85938,1.07812 -0.26562,0.71875 -0.26562,2.0625 v 5.10938 z m 14.91583,-4.92188 q 0,-2.73437 1.53125,-4.0625 1.26563,-1.09375 3.09375,-1.09375 2.03125,0 3.3125,1.34375 1.29688,1.32813 1.29688,3.67188 0,1.90625 -0.57813,3 -0.5625,1.07812 -1.65625,1.6875 -1.07812,0.59375 -2.375,0.59375 -2.0625,0 -3.34375,-1.32813 -1.28125,-1.32812 -1.28125,-3.8125 z m 1.71875,0 q 0,1.89063 0.82813,2.82813 0.82812,0.9375 2.07812,0.9375 1.25,0 2.0625,-0.9375 0.82813,-0.95313 0.82813,-2.89063 0,-1.82812 -0.82813,-2.76562 -0.82812,-0.9375 -2.0625,-0.9375 -1.25,0 -2.07812,0.9375 -0.82813,0.9375 -0.82813,2.82812 z m 15.73511,4.92188 v -1.45313 q -1.14062,1.67188 -3.125,1.67188 -0.85937,0 -1.625,-0.32813 -0.75,-0.34375 -1.125,-0.84375 -0.35937,-0.5 -0.51562,-1.23437 -0.0937,-0.5 -0.0937,-1.5625 V 256.932 h 1.67187 v 5.46875 q 0,1.3125 0.0937,1.76563 0.15625,0.65625 0.67188,1.03125 0.51562,0.375 1.26562,0.375 0.75,0 1.40625,-0.375 0.65625,-0.39063 0.92188,-1.04688 0.28125,-0.67187 0.28125,-1.9375 V 256.932 h 1.67187 v 9.85938 z m 3.25067,-2.9375 1.65625,-0.26563 q 0.14063,1 0.76563,1.53125 0.64062,0.51563 1.78125,0.51563 1.15625,0 1.70312,-0.46875 0.5625,-0.46875 0.5625,-1.09375 0,-0.5625 -0.48437,-0.89063 -0.34375,-0.21875 -1.70313,-0.5625 -1.84375,-0.46875 -2.5625,-0.79687 -0.70312,-0.34375 -1.07812,-0.9375 -0.35938,-0.60938 -0.35938,-1.32813 0,-0.65625 0.29688,-1.21875 0.3125,-0.5625 0.82812,-0.9375 
0.39063,-0.28125 1.0625,-0.48437 0.67188,-0.20313 1.4375,-0.20313 1.17188,0 2.04688,0.34375 0.875,0.32813 1.28125,0.90625 0.42187,0.5625 0.57812,1.51563 l -1.625,0.21875 q -0.10937,-0.75 -0.65625,-1.17188 -0.53125,-0.4375 -1.5,-0.4375 -1.15625,0 -1.64062,0.39063 -0.48438,0.375 -0.48438,0.875 0,0.32812 0.20313,0.59375 0.20312,0.26562 0.64062,0.4375 0.25,0.0937 1.46875,0.4375 1.76563,0.46875 2.46875,0.76562 0.70313,0.29688 1.09375,0.875 0.40625,0.57813 0.40625,1.4375 0,0.82813 -0.48437,1.57813 -0.48438,0.73437 -1.40625,1.14062 -0.92188,0.39063 -2.07813,0.39063 -1.92187,0 -2.9375,-0.79688 -1,-0.79687 -1.28125,-2.35937 z"
+ fill-rule="nonzero"
+ id="path23" />
+ <path
+ fill="#000000"
+ d="m 558.36993,287.57263 q -0.9375,0.79687 -1.79688,1.125 -0.85937,0.3125 -1.84375,0.3125 -1.60937,0 -2.48437,-0.78125 -0.875,-0.79688 -0.875,-2.03125 0,-0.73438 0.32812,-1.32813 0.32813,-0.59375 0.85938,-0.95312 0.53125,-0.35938 1.20312,-0.54688 0.5,-0.14062 1.48438,-0.25 2.03125,-0.25 2.98437,-0.57812 0,-0.34375 0,-0.4375 0,-1.01563 -0.46875,-1.4375 -0.64062,-0.5625 -1.90625,-0.5625 -1.17187,0 -1.73437,0.40625 -0.5625,0.40625 -0.82813,1.46875 l -1.64062,-0.23438 q 0.23437,-1.04687 0.73437,-1.6875 0.51563,-0.64062 1.46875,-0.98437 0.96875,-0.35938 2.25,-0.35938 1.26563,0 2.04688,0.29688 0.78125,0.29687 1.15625,0.75 0.375,0.45312 0.51562,1.14062 0.0937,0.42188 0.0937,1.53125 v 2.23438 q 0,2.32812 0.0937,2.95312 0.10938,0.60938 0.4375,1.17188 h -1.75 q -0.26562,-0.51563 -0.32812,-1.21875 z m -0.14063,-3.71875 q -0.90625,0.35937 -2.73437,0.625 -1.03125,0.14062 -1.45313,0.32812 -0.42187,0.1875 -0.65625,0.54688 -0.23437,0.35937 -0.23437,0.79687 0,0.67188 0.5,1.125 0.51562,0.4375 1.48437,0.4375 0.96875,0 1.71875,-0.42187 0.75,-0.4375 1.10938,-1.15625 0.26562,-0.57813 0.26562,-1.67188 z m 10.5163,1.32812 1.64063,0.21875 q -0.26563,1.6875 -1.375,2.65625 -1.10938,0.95313 -2.73438,0.95313 -2.01562,0 -3.25,-1.3125 -1.21875,-1.32813 -1.21875,-3.79688 0,-1.59375 0.51563,-2.78125 0.53125,-1.20312 1.60937,-1.79687 1.09375,-0.60938 2.35938,-0.60938 1.60937,0 2.625,0.8125 1.01562,0.8125 1.3125,2.3125 l -1.625,0.25 q -0.23438,-1 -0.82813,-1.5 -0.59375,-0.5 -1.42187,-0.5 -1.26563,0 -2.0625,0.90625 -0.78125,0.90625 -0.78125,2.85938 0,1.98437 0.76562,2.89062 0.76563,0.89063 1.98438,0.89063 0.98437,0 1.64062,-0.59375 0.65625,-0.60938 0.84375,-1.85938 z m 9.32813,0 1.64062,0.21875 q -0.26562,1.6875 -1.375,2.65625 -1.10937,0.95313 -2.73437,0.95313 -2.01563,0 -3.25,-1.3125 -1.21875,-1.32813 -1.21875,-3.79688 0,-1.59375 0.51562,-2.78125 0.53125,-1.20312 1.60938,-1.79687 1.09375,-0.60938 2.35937,-0.60938 1.60938,0 2.625,0.8125 1.01563,0.8125 1.3125,2.3125 l -1.625,0.25 q -0.23437,-1 
-0.82812,-1.5 -0.59375,-0.5 -1.42188,-0.5 -1.26562,0 -2.0625,0.90625 -0.78125,0.90625 -0.78125,2.85938 0,1.98437 0.76563,2.89062 0.76562,0.89063 1.98437,0.89063 0.98438,0 1.64063,-0.59375 0.65625,-0.60938 0.84375,-1.85938 z m 9.64062,0.4375 1.71875,0.21875 q -0.40625,1.5 -1.51562,2.34375 -1.09375,0.82813 -2.8125,0.82813 -2.15625,0 -3.42188,-1.32813 -1.26562,-1.32812 -1.26562,-3.73437 0,-2.48438 1.26562,-3.85938 1.28125,-1.375 3.32813,-1.375 1.98437,0 3.23437,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92187,2.48437 0.82813,0.85938 2.0625,0.85938 0.90625,0 1.54688,-0.46875 0.65625,-0.48438 1.04687,-1.54688 z m -5.48437,-2.70312 h 5.5 q -0.10938,-1.23438 -0.625,-1.85938 -0.79688,-0.96875 -2.07813,-0.96875 -1.14062,0 -1.9375,0.78125 -0.78125,0.76563 -0.85937,2.04688 z m 8.43823,2.9375 1.65625,-0.26563 q 0.14062,1 0.76562,1.53125 0.64063,0.51563 1.78125,0.51563 1.15625,0 1.70313,-0.46875 0.5625,-0.46875 0.5625,-1.09375 0,-0.5625 -0.48438,-0.89063 -0.34375,-0.21875 -1.70312,-0.5625 -1.84375,-0.46875 -2.5625,-0.79687 -0.70313,-0.34375 -1.07813,-0.9375 -0.35937,-0.60938 -0.35937,-1.32813 0,-0.65625 0.29687,-1.21875 0.3125,-0.5625 0.82813,-0.9375 0.39062,-0.28125 1.0625,-0.48437 0.67187,-0.20313 1.4375,-0.20313 1.17187,0 2.04687,0.34375 0.875,0.32813 1.28125,0.90625 0.42188,0.5625 0.57813,1.51563 l -1.625,0.21875 q -0.10938,-0.75 -0.65625,-1.17188 -0.53125,-0.4375 -1.5,-0.4375 -1.15625,0 -1.64063,0.39063 -0.48437,0.375 -0.48437,0.875 0,0.32812 0.20312,0.59375 0.20313,0.26562 0.64063,0.4375 0.25,0.0937 1.46875,0.4375 1.76562,0.46875 2.46875,0.76562 0.70312,0.29688 1.09375,0.875 0.40625,0.57813 0.40625,1.4375 0,0.82813 -0.48438,1.57813 -0.48437,0.73437 -1.40625,1.14062 -0.92187,0.39063 -2.07812,0.39063 -1.92188,0 -2.9375,-0.79688 -1,-0.79687 -1.28125,-2.35937 z m 9.32812,0 1.65625,-0.26563 q 0.14063,1 0.76563,1.53125 0.64062,0.51563 1.78125,0.51563 1.15625,0 1.70312,-0.46875 0.5625,-0.46875 0.5625,-1.09375 0,-0.5625 
-0.48437,-0.89063 -0.34375,-0.21875 -1.70313,-0.5625 -1.84375,-0.46875 -2.5625,-0.79687 -0.70312,-0.34375 -1.07812,-0.9375 -0.35938,-0.60938 -0.35938,-1.32813 0,-0.65625 0.29688,-1.21875 0.3125,-0.5625 0.82812,-0.9375 0.39063,-0.28125 1.0625,-0.48437 0.67188,-0.20313 1.4375,-0.20313 1.17188,0 2.04688,0.34375 0.875,0.32813 1.28125,0.90625 0.42187,0.5625 0.57812,1.51563 l -1.625,0.21875 q -0.10937,-0.75 -0.65625,-1.17188 -0.53125,-0.4375 -1.5,-0.4375 -1.15625,0 -1.64062,0.39063 -0.48438,0.375 -0.48438,0.875 0,0.32812 0.20313,0.59375 0.20312,0.26562 0.64062,0.4375 0.25,0.0937 1.46875,0.4375 1.76563,0.46875 2.46875,0.76562 0.70313,0.29688 1.09375,0.875 0.40625,0.57813 0.40625,1.4375 0,0.82813 -0.48437,1.57813 -0.48438,0.73437 -1.40625,1.14062 -0.92188,0.39063 -2.07813,0.39063 -1.92187,0 -2.9375,-0.79688 -1,-0.79687 -1.28125,-2.35937 z m 21.93329,-0.23438 1.71875,0.21875 q -0.40625,1.5 -1.51562,2.34375 -1.09375,0.82813 -2.8125,0.82813 -2.15625,0 -3.42188,-1.32813 -1.26562,-1.32812 -1.26562,-3.73437 0,-2.48438 1.26562,-3.85938 1.28125,-1.375 3.32813,-1.375 1.98437,0 3.23437,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92187,2.48437 0.82813,0.85938 2.0625,0.85938 0.90625,0 1.54688,-0.46875 0.65625,-0.48438 1.04687,-1.54688 z m -5.48437,-2.70312 h 5.5 q -0.10938,-1.23438 -0.625,-1.85938 -0.79688,-0.96875 -2.07813,-0.96875 -1.14062,0 -1.9375,0.78125 -0.78125,0.76563 -0.85937,2.04688 z m 9.1101,5.875 V 278.932 h 1.5 v 1.40625 q 1.09375,-1.625 3.14063,-1.625 0.89062,0 1.64062,0.32813 0.75,0.3125 1.10938,0.84375 0.375,0.51562 0.53125,1.21875 0.0937,0.46875 0.0937,1.625 v 6.0625 h -1.67188 v -6 q 0,-1.01563 -0.20312,-1.51563 -0.1875,-0.51562 -0.6875,-0.8125 -0.5,-0.29687 -1.17188,-0.29687 -1.0625,0 -1.84375,0.67187 -0.76562,0.67188 -0.76562,2.57813 v 5.375 z m 16.81324,-1.21875 q -0.9375,0.79687 -1.79688,1.125 -0.85937,0.3125 -1.84375,0.3125 -1.60937,0 -2.48437,-0.78125 -0.875,-0.79688 -0.875,-2.03125 0,-0.73438 0.32812,-1.32813 
0.32813,-0.59375 0.85938,-0.95312 0.53125,-0.35938 1.20312,-0.54688 0.5,-0.14062 1.48438,-0.25 2.03125,-0.25 2.98437,-0.57812 0,-0.34375 0,-0.4375 0,-1.01563 -0.46875,-1.4375 -0.64062,-0.5625 -1.90625,-0.5625 -1.17187,0 -1.73437,0.40625 -0.5625,0.40625 -0.82813,1.46875 l -1.64062,-0.23438 q 0.23437,-1.04687 0.73437,-1.6875 0.51563,-0.64062 1.46875,-0.98437 0.96875,-0.35938 2.25,-0.35938 1.26563,0 2.04688,0.29688 0.78125,0.29687 1.15625,0.75 0.375,0.45312 0.51562,1.14062 0.0937,0.42188 0.0937,1.53125 v 2.23438 q 0,2.32812 0.0937,2.95312 0.10938,0.60938 0.4375,1.17188 h -1.75 q -0.26562,-0.51563 -0.32812,-1.21875 z m -0.14063,-3.71875 q -0.90625,0.35937 -2.73437,0.625 -1.03125,0.14062 -1.45313,0.32812 -0.42187,0.1875 -0.65625,0.54688 -0.23437,0.35937 -0.23437,0.79687 0,0.67188 0.5,1.125 0.51562,0.4375 1.48437,0.4375 0.96875,0 1.71875,-0.42187 0.75,-0.4375 1.10938,-1.15625 0.26562,-0.57813 0.26562,-1.67188 z m 5.62573,4.9375 h -1.54687 v -13.59375 h 1.65625 v 4.84375 q 1.0625,-1.32813 2.70312,-1.32813 0.90625,0 1.71875,0.375 0.8125,0.35938 1.32813,1.03125 0.53125,0.65625 0.82812,1.59375 0.29688,0.9375 0.29688,2 0,2.53125 -1.25,3.92188 -1.25,1.375 -3,1.375 -1.75,0 -2.73438,-1.45313 z m -0.0156,-5 q 0,1.76562 0.46875,2.5625 0.79687,1.28125 2.14062,1.28125 1.09375,0 1.89063,-0.9375 0.79687,-0.95313 0.79687,-2.84375 0,-1.92188 -0.76562,-2.84375 -0.76563,-0.92188 -1.84375,-0.92188 -1.09375,0 -1.89063,0.95313 -0.79687,0.95312 -0.79687,2.75 z m 8.81317,5 v -13.59375 h 1.67187 v 13.59375 z m 10.92609,-3.17188 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82813 -2.8125,0.82813 -2.15625,0 -3.42187,-1.32813 -1.26563,-1.32812 -1.26563,-3.73437 0,-2.48438 1.26563,-3.85938 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92188,2.48437 0.82812,0.85938 2.0625,0.85938 0.90625,0 1.54687,-0.46875 0.65625,-0.48438 1.04688,-1.54688 z m -5.48438,-2.70312 h 5.5 q -0.10937,-1.23438 
-0.625,-1.85938 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76563 -0.85938,2.04688 z m 15.50073,5.875 v -1.25 q -0.9375,1.46875 -2.75,1.46875 -1.17187,0 -2.17187,-0.64063 -0.98438,-0.65625 -1.53125,-1.8125 -0.53125,-1.17187 -0.53125,-2.6875 0,-1.46875 0.48437,-2.67187 0.5,-1.20313 1.46875,-1.84375 0.98438,-0.64063 2.20313,-0.64063 0.89062,0 1.57812,0.375 0.70313,0.375 1.14063,0.98438 v -4.875 h 1.65625 v 13.59375 z m -5.28125,-4.92188 q 0,1.89063 0.79688,2.82813 0.8125,0.9375 1.89062,0.9375 1.09375,0 1.85938,-0.89063 0.76562,-0.89062 0.76562,-2.73437 0,-2.01563 -0.78125,-2.95313 -0.78125,-0.95312 -1.92187,-0.95312 -1.10938,0 -1.85938,0.90625 -0.75,0.90625 -0.75,2.85937 z"
+ fill-rule="nonzero"
+ id="path25" />
+ <path
+ fill="#000000"
+ d="m 559.7137,309.182 q -0.82812,0.92188 -1.8125,1.39063 -0.96875,0.45312 -2.09375,0.45312 -2.09375,0 -3.3125,-1.40625 -1,-1.15625 -1,-2.57812 0,-1.26563 0.8125,-2.28125 0.8125,-1.01563 2.42188,-1.78125 -0.90625,-1.0625 -1.21875,-1.71875 -0.29688,-0.65625 -0.29688,-1.26563 0,-1.23437 0.95313,-2.125 0.95312,-0.90625 2.42187,-0.90625 1.39063,0 2.26563,0.85938 0.89062,0.84375 0.89062,2.04687 0,1.9375 -2.5625,3.3125 l 2.4375,3.09375 q 0.42188,-0.8125 0.64063,-1.89062 l 1.73437,0.375 q -0.4375,1.78125 -1.20312,2.9375 0.9375,1.23437 2.125,2.07812 l -1.125,1.32813 q -1,-0.64063 -2.07813,-1.92188 z m -3.40625,-7.07812 q 1.09375,-0.64063 1.40625,-1.125 0.32813,-0.48438 0.32813,-1.0625 0,-0.70313 -0.45313,-1.14063 -0.4375,-0.4375 -1.09375,-0.4375 -0.67187,0 -1.125,0.4375 -0.45312,0.42188 -0.45312,1.0625 0,0.3125 0.15625,0.65625 0.17187,0.34375 0.5,0.73438 z m 2.35938,5.76562 -3.0625,-3.79687 q -1.35938,0.8125 -1.84375,1.5 -0.46875,0.6875 -0.46875,1.375 0,0.8125 0.65625,1.70312 0.67187,0.89063 1.875,0.89063 0.75,0 1.54687,-0.46875 0.8125,-0.46875 1.29688,-1.20313 z m 17.32995,1.70313 q -0.9375,0.79687 -1.79687,1.125 -0.85938,0.3125 -1.84375,0.3125 -1.60938,0 -2.48438,-0.78125 -0.875,-0.79688 -0.875,-2.03125 0,-0.73438 0.32813,-1.32813 0.32812,-0.59375 0.85937,-0.95312 0.53125,-0.35938 1.20313,-0.54688 0.5,-0.14062 1.48437,-0.25 2.03125,-0.25 2.98438,-0.57812 0,-0.34375 0,-0.4375 0,-1.01563 -0.46875,-1.4375 -0.64063,-0.5625 -1.90625,-0.5625 -1.17188,0 -1.73438,0.40625 -0.5625,0.40625 -0.82812,1.46875 l -1.64063,-0.23438 q 0.23438,-1.04687 0.73438,-1.6875 0.51562,-0.64062 1.46875,-0.98437 0.96875,-0.35938 2.25,-0.35938 1.26562,0 2.04687,0.29688 0.78125,0.29687 1.15625,0.75 0.375,0.45312 0.51563,1.14062 0.0937,0.42188 0.0937,1.53125 v 2.23438 q 0,2.32812 0.0937,2.95312 0.10937,0.60938 0.4375,1.17188 h -1.75 q -0.26563,-0.51563 -0.32813,-1.21875 z m -0.14062,-3.71875 q -0.90625,0.35937 -2.73438,0.625 -1.03125,0.14062 -1.45312,0.32812 -0.42188,0.1875 -0.65625,0.54688 
-0.23438,0.35937 -0.23438,0.79687 0,0.67188 0.5,1.125 0.51563,0.4375 1.48438,0.4375 0.96875,0 1.71875,-0.42187 0.75,-0.4375 1.10937,-1.15625 0.26563,-0.57813 0.26563,-1.67188 z m 10.51635,1.32812 1.64063,0.21875 q -0.26563,1.6875 -1.375,2.65625 -1.10938,0.95313 -2.73438,0.95313 -2.01562,0 -3.25,-1.3125 -1.21875,-1.32813 -1.21875,-3.79688 0,-1.59375 0.51563,-2.78125 0.53125,-1.20312 1.60937,-1.79687 1.09375,-0.60938 2.35938,-0.60938 1.60937,0 2.625,0.8125 1.01562,0.8125 1.3125,2.3125 l -1.625,0.25 q -0.23438,-1 -0.82813,-1.5 -0.59375,-0.5 -1.42187,-0.5 -1.26563,0 -2.0625,0.90625 -0.78125,0.90625 -0.78125,2.85938 0,1.98437 0.76562,2.89062 0.76563,0.89063 1.98438,0.89063 0.98437,0 1.64062,-0.59375 0.65625,-0.60938 0.84375,-1.85938 z m 9.32813,0 1.64062,0.21875 q -0.26562,1.6875 -1.375,2.65625 -1.10937,0.95313 -2.73437,0.95313 -2.01563,0 -3.25,-1.3125 -1.21875,-1.32813 -1.21875,-3.79688 0,-1.59375 0.51562,-2.78125 0.53125,-1.20312 1.60938,-1.79687 1.09375,-0.60938 2.35937,-0.60938 1.60938,0 2.625,0.8125 1.01563,0.8125 1.3125,2.3125 l -1.625,0.25 q -0.23437,-1 -0.82812,-1.5 -0.59375,-0.5 -1.42188,-0.5 -1.26562,0 -2.0625,0.90625 -0.78125,0.90625 -0.78125,2.85938 0,1.98437 0.76563,2.89062 0.76562,0.89063 1.98437,0.89063 0.98438,0 1.64063,-0.59375 0.65625,-0.60938 0.84375,-1.85938 z m 9.64062,0.4375 1.71875,0.21875 q -0.40625,1.5 -1.51562,2.34375 -1.09375,0.82813 -2.8125,0.82813 -2.15625,0 -3.42188,-1.32813 -1.26562,-1.32812 -1.26562,-3.73437 0,-2.48438 1.26562,-3.85938 1.28125,-1.375 3.32813,-1.375 1.98437,0 3.23437,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92187,2.48437 0.82813,0.85938 2.0625,0.85938 0.90625,0 1.54688,-0.46875 0.65625,-0.48438 1.04687,-1.54688 z m -5.48437,-2.70312 h 5.5 q -0.10938,-1.23438 -0.625,-1.85938 -0.79688,-0.96875 -2.07813,-0.96875 -1.14062,0 -1.9375,0.78125 -0.78125,0.76563 -0.85937,2.04688 z m 8.43823,2.9375 1.65625,-0.26563 q 0.14062,1 0.76562,1.53125 0.64063,0.51563 1.78125,0.51563 1.15625,0 
1.70313,-0.46875 0.5625,-0.46875 0.5625,-1.09375 0,-0.5625 -0.48438,-0.89063 -0.34375,-0.21875 -1.70312,-0.5625 -1.84375,-0.46875 -2.5625,-0.79687 -0.70313,-0.34375 -1.07813,-0.9375 -0.35937,-0.60938 -0.35937,-1.32813 0,-0.65625 0.29687,-1.21875 0.3125,-0.5625 0.82813,-0.9375 0.39062,-0.28125 1.0625,-0.48437 0.67187,-0.20313 1.4375,-0.20313 1.17187,0 2.04687,0.34375 0.875,0.32813 1.28125,0.90625 0.42188,0.5625 0.57813,1.51563 l -1.625,0.21875 q -0.10938,-0.75 -0.65625,-1.17188 -0.53125,-0.4375 -1.5,-0.4375 -1.15625,0 -1.64063,0.39063 -0.48437,0.375 -0.48437,0.875 0,0.32812 0.20312,0.59375 0.20313,0.26562 0.64063,0.4375 0.25,0.0937 1.46875,0.4375 1.76562,0.46875 2.46875,0.76562 0.70312,0.29688 1.09375,0.875 0.40625,0.57813 0.40625,1.4375 0,0.82813 -0.48438,1.57813 -0.48437,0.73437 -1.40625,1.14062 -0.92187,0.39063 -2.07812,0.39063 -1.92188,0 -2.9375,-0.79688 -1,-0.79687 -1.28125,-2.35937 z m 9.32812,0 1.65625,-0.26563 q 0.14063,1 0.76563,1.53125 0.64062,0.51563 1.78125,0.51563 1.15625,0 1.70312,-0.46875 0.5625,-0.46875 0.5625,-1.09375 0,-0.5625 -0.48437,-0.89063 -0.34375,-0.21875 -1.70313,-0.5625 -1.84375,-0.46875 -2.5625,-0.79687 -0.70312,-0.34375 -1.07812,-0.9375 -0.35938,-0.60938 -0.35938,-1.32813 0,-0.65625 0.29688,-1.21875 0.3125,-0.5625 0.82812,-0.9375 0.39063,-0.28125 1.0625,-0.48437 0.67188,-0.20313 1.4375,-0.20313 1.17188,0 2.04688,0.34375 0.875,0.32813 1.28125,0.90625 0.42187,0.5625 0.57812,1.51563 l -1.625,0.21875 q -0.10937,-0.75 -0.65625,-1.17188 -0.53125,-0.4375 -1.5,-0.4375 -1.15625,0 -1.64062,0.39063 -0.48438,0.375 -0.48438,0.875 0,0.32812 0.20313,0.59375 0.20312,0.26562 0.64062,0.4375 0.25,0.0937 1.46875,0.4375 1.76563,0.46875 2.46875,0.76562 0.70313,0.29688 1.09375,0.875 0.40625,0.57813 0.40625,1.4375 0,0.82813 -0.48437,1.57813 -0.48438,0.73437 -1.40625,1.14062 -0.92188,0.39063 -2.07813,0.39063 -1.92187,0 -2.9375,-0.79688 -1,-0.79687 -1.28125,-2.35937 z m 10.01563,-8.75 v -1.90625 h 1.67187 v 1.90625 z m 0,11.6875 V 300.932 h 1.67187 v 9.85938 z m 
5.67609,0 h -1.54688 v -13.59375 h 1.65625 v 4.84375 q 1.0625,-1.32813 2.70313,-1.32813 0.90625,0 1.71875,0.375 0.8125,0.35938 1.32812,1.03125 0.53125,0.65625 0.82813,1.59375 0.29687,0.9375 0.29687,2 0,2.53125 -1.25,3.92188 -1.25,1.375 -3,1.375 -1.75,0 -2.73437,-1.45313 z m -0.0156,-5 q 0,1.76562 0.46875,2.5625 0.79688,1.28125 2.14063,1.28125 1.09375,0 1.89062,-0.9375 0.79688,-0.95313 0.79688,-2.84375 0,-1.92188 -0.76563,-2.84375 -0.76562,-0.92188 -1.84375,-0.92188 -1.09375,0 -1.89062,0.95313 -0.79688,0.95312 -0.79688,2.75 z m 8.81317,5 v -13.59375 h 1.67188 v 13.59375 z m 10.92609,-3.17188 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82813 -2.8125,0.82813 -2.15625,0 -3.42187,-1.32813 -1.26563,-1.32812 -1.26563,-3.73437 0,-2.48438 1.26563,-3.85938 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92188,2.48437 0.82812,0.85938 2.0625,0.85938 0.90625,0 1.54687,-0.46875 0.65625,-0.48438 1.04688,-1.54688 z m -5.48438,-2.70312 h 5.5 q -0.10937,-1.23438 -0.625,-1.85938 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76563 -0.85938,2.04688 z m 12.23511,2.53125 q 0,-0.34375 0,-0.5 0,-0.98438 0.26563,-1.70313 0.21875,-0.54687 0.67187,-1.09375 0.32813,-0.39062 1.1875,-1.15625 0.875,-0.76562 1.125,-1.21875 0.26563,-0.45312 0.26563,-1 0,-0.96875 -0.76563,-1.70312 -0.75,-0.73438 -1.85937,-0.73438 -1.0625,0 -1.78125,0.67188 -0.70313,0.65625 -0.9375,2.07812 l -1.71875,-0.20312 q 0.23437,-1.90625 1.375,-2.90625 1.15625,-1.01563 3.03125,-1.01563 2,0 3.1875,1.09375 1.1875,1.07813 1.1875,2.60938 0,0.89062 -0.42188,1.64062 -0.40625,0.75 -1.625,1.82813 -0.8125,0.73437 -1.0625,1.07812 -0.25,0.34375 -0.375,0.79688 -0.125,0.4375 -0.14062,1.4375 z m -0.0937,3.34375 v -1.90625 h 1.89063 v 1.90625 z"
+ fill-rule="nonzero"
+ id="path27" />
+ <path
+ fill="#d9ead3"
+ d="m 848.9265,239.90552 h 156.3464 v 88.59842 H 848.9265 Z"
+ fill-rule="nonzero"
+ id="path29" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="m 848.9265,239.90552 h 156.3464 v 88.59842 H 848.9265 Z"
+ fill-rule="nonzero"
+ id="path31" />
+ <path
+ fill="#000000"
+ d="m 865.75464,274.7966 v -1.60938 h 5.76562 v 5.04688 q -1.32812,1.0625 -2.75,1.59375 -1.40625,0.53125 -2.89062,0.53125 -2,0 -3.64063,-0.85938 -1.625,-0.85937 -2.46875,-2.48437 -0.82812,-1.625 -0.82812,-3.625 0,-1.98438 0.82812,-3.70313 0.82813,-1.71875 2.39063,-2.54687 1.5625,-0.84375 3.59375,-0.84375 1.46875,0 2.65625,0.48437 1.20312,0.46875 1.875,1.32813 0.67187,0.84375 1.03125,2.21875 l -1.625,0.4375 q -0.3125,-1.03125 -0.76563,-1.625 -0.45312,-0.59375 -1.29687,-0.95313 -0.84375,-0.35937 -1.875,-0.35937 -1.23438,0 -2.14063,0.375 -0.89062,0.375 -1.45312,1 -0.54688,0.60937 -0.84375,1.34375 -0.53125,1.25 -0.53125,2.73437 0,1.8125 0.625,3.04688 0.64062,1.21875 1.82812,1.8125 1.20313,0.59375 2.54688,0.59375 1.17187,0 2.28125,-0.45313 1.10937,-0.45312 1.6875,-0.95312 v -2.53125 z m 14.68329,2.15625 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82812 -2.8125,0.82812 -2.15625,0 -3.42187,-1.32812 -1.26563,-1.32813 -1.26563,-3.73438 0,-2.48437 1.26563,-3.85937 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79687 0,0.14063 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92188,2.48438 0.82812,0.85937 2.0625,0.85937 0.90625,0 1.54687,-0.46875 0.65625,-0.48437 1.04688,-1.54687 z m -5.48438,-2.70313 h 5.5 q -0.10937,-1.23437 -0.625,-1.85937 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76562 -0.85938,2.04687 z m 12.76636,4.375 0.23438,1.48438 q -0.70313,0.14062 -1.26563,0.14062 -0.90625,0 -1.40625,-0.28125 -0.5,-0.29687 -0.70312,-0.75 -0.20313,-0.46875 -0.20313,-1.98437 v -5.65625 h -1.23437 v -1.3125 h 1.23437 v -2.4375 l 1.65625,-1 v 3.4375 h 1.6875 v 1.3125 h -1.6875 v 5.75 q 0,0.71875 0.0781,0.92187 0.0937,0.20313 0.29687,0.32813 0.20313,0.125 0.57813,0.125 0.26562,0 0.73437,-0.0781 z m 6.6947,1.5 v -9.85937 h 1.5 v 1.5 q 0.57813,-1.04688 1.0625,-1.375 0.48438,-0.34375 1.07813,-0.34375 0.84375,0 1.71875,0.54687 l -0.57813,1.54688 q -0.60937,-0.35938 -1.23437,-0.35938 -0.54688,0 -0.98438,0.32813 
-0.42187,0.32812 -0.60937,0.90625 -0.28125,0.89062 -0.28125,1.95312 v 5.15625 z m 12.97834,-3.17187 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82812 -2.8125,0.82812 -2.15625,0 -3.42187,-1.32812 -1.26563,-1.32813 -1.26563,-3.73438 0,-2.48437 1.26563,-3.85937 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79687 0,0.14063 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92188,2.48438 0.82812,0.85937 2.0625,0.85937 0.90625,0 1.54687,-0.46875 0.65625,-0.48437 1.04688,-1.54687 z m -5.48438,-2.70313 h 5.5 q -0.10937,-1.23437 -0.625,-1.85937 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76562 -0.85938,2.04687 z m 8.43823,2.9375 1.65625,-0.26562 q 0.14063,1 0.76563,1.53125 0.64062,0.51562 1.78125,0.51562 1.15625,0 1.70312,-0.46875 0.5625,-0.46875 0.5625,-1.09375 0,-0.5625 -0.48437,-0.89062 -0.34375,-0.21875 -1.70313,-0.5625 -1.84375,-0.46875 -2.5625,-0.79688 -0.70312,-0.34375 -1.07812,-0.9375 -0.35938,-0.60937 -0.35938,-1.32812 0,-0.65625 0.29688,-1.21875 0.3125,-0.5625 0.82812,-0.9375 0.39063,-0.28125 1.0625,-0.48438 0.67188,-0.20312 1.4375,-0.20312 1.17188,0 2.04688,0.34375 0.875,0.32812 1.28125,0.90625 0.42187,0.5625 0.57812,1.51562 l -1.625,0.21875 q -0.10937,-0.75 -0.65625,-1.17187 -0.53125,-0.4375 -1.5,-0.4375 -1.15625,0 -1.64062,0.39062 -0.48438,0.375 -0.48438,0.875 0,0.32813 0.20313,0.59375 0.20312,0.26563 0.64062,0.4375 0.25,0.0937 1.46875,0.4375 1.76563,0.46875 2.46875,0.76563 0.70313,0.29687 1.09375,0.875 0.40625,0.57812 0.40625,1.4375 0,0.82812 -0.48437,1.57812 -0.48438,0.73438 -1.40625,1.14063 -0.92188,0.39062 -2.07813,0.39062 -1.92187,0 -2.9375,-0.79687 -1,-0.79688 -1.28125,-2.35938 z m 9.375,-1.98437 q 0,-2.73438 1.53125,-4.0625 1.26563,-1.09375 3.09375,-1.09375 2.03125,0 3.3125,1.34375 1.29688,1.32812 1.29688,3.67187 0,1.90625 -0.57813,3 -0.5625,1.07813 -1.65625,1.6875 -1.07812,0.59375 -2.375,0.59375 -2.0625,0 -3.34375,-1.32812 -1.28125,-1.32813 -1.28125,-3.8125 z m 1.71875,0 q 0,1.89062 
0.82813,2.82812 0.82812,0.9375 2.07812,0.9375 1.25,0 2.0625,-0.9375 0.82813,-0.95312 0.82813,-2.89062 0,-1.82813 -0.82813,-2.76563 -0.82812,-0.9375 -2.0625,-0.9375 -1.25,0 -2.07812,0.9375 -0.82813,0.9375 -0.82813,2.82813 z m 15.73505,4.92187 v -1.45312 q -1.14063,1.67187 -3.125,1.67187 -0.85938,0 -1.625,-0.32812 -0.75,-0.34375 -1.125,-0.84375 -0.35938,-0.5 -0.51563,-1.23438 -0.0937,-0.5 -0.0937,-1.5625 v -6.10937 h 1.67188 v 5.46875 q 0,1.3125 0.0937,1.76562 0.15625,0.65625 0.67187,1.03125 0.51563,0.375 1.26563,0.375 0.75,0 1.40625,-0.375 0.65625,-0.39062 0.92187,-1.04687 0.28125,-0.67188 0.28125,-1.9375 v -5.28125 h 1.67188 v 9.85937 z m 3.90698,0 v -9.85937 h 1.5 v 1.5 q 0.57813,-1.04688 1.0625,-1.375 0.48438,-0.34375 1.07813,-0.34375 0.84375,0 1.71875,0.54687 l -0.57813,1.54688 q -0.60937,-0.35938 -1.23437,-0.35938 -0.54688,0 -0.98438,0.32813 -0.42187,0.32812 -0.60937,0.90625 -0.28125,0.89062 -0.28125,1.95312 v 5.15625 z m 12.66583,-3.60937 1.64063,0.21875 q -0.26563,1.6875 -1.375,2.65625 -1.10938,0.95312 -2.73438,0.95312 -2.01562,0 -3.25,-1.3125 -1.21875,-1.32812 -1.21875,-3.79687 0,-1.59375 0.51563,-2.78125 0.53125,-1.20313 1.60937,-1.79688 1.09375,-0.60937 2.35938,-0.60937 1.60937,0 2.625,0.8125 1.01562,0.8125 1.3125,2.3125 l -1.625,0.25 q -0.23438,-1 -0.82813,-1.5 -0.59375,-0.5 -1.42187,-0.5 -1.26563,0 -2.0625,0.90625 -0.78125,0.90625 -0.78125,2.85937 0,1.98438 0.76562,2.89063 0.76563,0.89062 1.98438,0.89062 0.98437,0 1.64062,-0.59375 0.65625,-0.60937 0.84375,-1.85937 z m 9.64063,0.4375 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82812 -2.8125,0.82812 -2.15625,0 -3.42187,-1.32812 -1.26563,-1.32813 -1.26563,-3.73438 0,-2.48437 1.26563,-3.85937 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79687 0,0.14063 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92188,2.48438 0.82812,0.85937 2.0625,0.85937 0.90625,0 1.54687,-0.46875 0.65625,-0.48437 1.04688,-1.54687 z m -5.48438,-2.70313 h 5.5 q -0.10937,-1.23437 -0.625,-1.85937 
-0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76562 -0.85938,2.04687 z"
+ fill-rule="nonzero"
+ id="path33" />
+ <path
+ fill="#000000"
+ d="m 859.58276,302.12473 v -8.54688 h -1.48438 v -1.3125 h 1.48438 v -1.04687 q 0,-0.98438 0.17187,-1.46875 0.23438,-0.65625 0.84375,-1.04688 0.60938,-0.40625 1.70313,-0.40625 0.70312,0 1.5625,0.15625 l -0.25,1.46875 q -0.51563,-0.0937 -0.98438,-0.0937 -0.76562,0 -1.07812,0.32813 -0.3125,0.3125 -0.3125,1.20312 v 0.90625 h 1.92187 v 1.3125 h -1.92187 v 8.54688 z m 4.76141,0 v -9.85938 h 1.5 v 1.5 q 0.57813,-1.04687 1.0625,-1.375 0.48438,-0.34375 1.07813,-0.34375 0.84375,0 1.71875,0.54688 l -0.57813,1.54687 q -0.60937,-0.35937 -1.23437,-0.35937 -0.54688,0 -0.98438,0.32812 -0.42187,0.32813 -0.60937,0.90625 -0.28125,0.89063 -0.28125,1.95313 v 5.15625 z m 5.60334,-4.92188 q 0,-2.73437 1.53125,-4.0625 1.26562,-1.09375 3.09375,-1.09375 2.03125,0 3.3125,1.34375 1.29687,1.32813 1.29687,3.67188 0,1.90625 -0.57812,3 -0.5625,1.07812 -1.65625,1.6875 -1.07813,0.59375 -2.375,0.59375 -2.0625,0 -3.34375,-1.32813 -1.28125,-1.32812 -1.28125,-3.8125 z m 1.71875,0 q 0,1.89063 0.82812,2.82813 0.82813,0.9375 2.07813,0.9375 1.25,0 2.0625,-0.9375 0.82812,-0.95313 0.82812,-2.89063 0,-1.82812 -0.82812,-2.76562 -0.82813,-0.9375 -2.0625,-0.9375 -1.25,0 -2.07813,0.9375 -0.82812,0.9375 -0.82812,2.82812 z m 9.28192,4.92188 v -9.85938 h 1.5 v 1.39063 q 0.45312,-0.71875 1.21875,-1.15625 0.78125,-0.45313 1.76562,-0.45313 1.09375,0 1.79688,0.45313 0.70312,0.45312 0.98437,1.28125 1.17188,-1.73438 3.04688,-1.73438 1.46875,0 2.25,0.8125 0.79687,0.8125 0.79687,2.5 v 6.76563 h -1.67187 v -6.20313 q 0,-1 -0.15625,-1.4375 -0.15625,-0.45312 -0.59375,-0.71875 -0.42188,-0.26562 -1,-0.26562 -1.03125,0 -1.71875,0.6875 -0.6875,0.6875 -0.6875,2.21875 v 5.71875 h -1.67188 v -6.40625 q 0,-1.10938 -0.40625,-1.65625 -0.40625,-0.5625 -1.34375,-0.5625 -0.70312,0 -1.3125,0.375 -0.59375,0.35937 -0.85937,1.07812 -0.26563,0.71875 -0.26563,2.0625 v 5.10938 z m 19.44287,0 5.23437,-13.59375 h 1.9375 l 5.5625,13.59375 h -2.04687 l -1.59375,-4.125 h -5.6875 l -1.48438,4.125 z m 3.92187,-5.57813 h 4.60938 l -1.40625,-3.78125 q 
-0.65625,-1.70312 -0.96875,-2.8125 -0.26563,1.3125 -0.73438,2.59375 z m 10.02173,5.57813 v -13.59375 h 5.125 q 1.35938,0 2.07813,0.125 1,0.17187 1.67187,0.64062 0.67188,0.46875 1.07813,1.3125 0.42187,0.84375 0.42187,1.84375 0,1.73438 -1.10937,2.9375 -1.09375,1.20313 -3.98438,1.20313 h -3.48437 v 5.53125 z m 1.79688,-7.14063 h 3.51562 q 1.75,0 2.46875,-0.64062 0.73438,-0.65625 0.73438,-1.82813 0,-0.85937 -0.4375,-1.46875 -0.42188,-0.60937 -1.125,-0.79687 -0.45313,-0.125 -1.67188,-0.125 h -3.48437 z m 10.94354,7.14063 v -13.59375 h 1.8125 v 13.59375 z m 9.46039,-4.375 1.6875,-0.14063 q 0.125,1.01563 0.5625,1.67188 0.4375,0.65625 1.35937,1.0625 0.9375,0.40625 2.09375,0.40625 1.03125,0 1.8125,-0.3125 0.79688,-0.3125 1.1875,-0.84375 0.39063,-0.53125 0.39063,-1.15625 0,-0.64063 -0.375,-1.10938 -0.375,-0.48437 -1.23438,-0.8125 -0.54687,-0.21875 -2.42187,-0.65625 -1.875,-0.45312 -2.625,-0.85937 -0.96875,-0.51563 -1.45313,-1.26563 -0.46875,-0.75 -0.46875,-1.6875 0,-1.03125 0.57813,-1.92187 0.59375,-0.90625 1.70312,-1.35938 1.125,-0.46875 2.5,-0.46875 1.51563,0 2.67188,0.48438 1.15625,0.48437 1.76562,1.4375 0.625,0.9375 0.67188,2.14062 l -1.71875,0.125 q -0.14063,-1.28125 -0.95313,-1.9375 -0.79687,-0.67187 -2.35937,-0.67187 -1.625,0 -2.375,0.60937 -0.75,0.59375 -0.75,1.4375 0,0.73438 0.53125,1.20313 0.51562,0.46875 2.70312,0.96875 2.20313,0.5 3.01563,0.875 1.1875,0.54687 1.75,1.39062 0.57812,0.82813 0.57812,1.92188 0,1.09375 -0.625,2.0625 -0.625,0.95312 -1.79687,1.48437 -1.15625,0.53125 -2.60938,0.53125 -1.84375,0 -3.09375,-0.53125 -1.25,-0.54687 -1.96875,-1.625 -0.70312,-1.07812 -0.73437,-2.45312 z m 19.58416,1.20312 1.71875,0.21875 q -0.40625,1.5 -1.51562,2.34375 -1.09375,0.82813 -2.8125,0.82813 -2.15625,0 -3.42188,-1.32813 -1.26562,-1.32812 -1.26562,-3.73437 0,-2.48438 1.26562,-3.85938 1.28125,-1.375 3.32813,-1.375 1.98437,0 3.23437,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92187,2.48437 0.82813,0.85938 2.0625,0.85938 0.90625,0 
1.54688,-0.46875 0.65625,-0.48438 1.04687,-1.54688 z m -5.48437,-2.70312 h 5.5 q -0.10938,-1.23438 -0.625,-1.85938 -0.79688,-0.96875 -2.07813,-0.96875 -1.14062,0 -1.9375,0.78125 -0.78125,0.76563 -0.85937,2.04688 z m 9.09448,5.875 v -9.85938 h 1.5 v 1.5 q 0.57813,-1.04687 1.0625,-1.375 0.48438,-0.34375 1.07813,-0.34375 0.84375,0 1.71875,0.54688 l -0.57813,1.54687 q -0.60937,-0.35937 -1.23437,-0.35937 -0.54688,0 -0.98438,0.32812 -0.42187,0.32813 -0.60937,0.90625 -0.28125,0.89063 -0.28125,1.95313 v 5.15625 z m 8.96265,0 -3.75,-9.85938 h 1.76562 l 2.125,5.90625 q 0.34375,0.95313 0.625,1.98438 0.21875,-0.78125 0.625,-1.875 l 2.1875,-6.01563 h 1.71875 l -3.73437,9.85938 z m 13.34375,-3.17188 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82813 -2.8125,0.82813 -2.15625,0 -3.42187,-1.32813 -1.26563,-1.32812 -1.26563,-3.73437 0,-2.48438 1.26563,-3.85938 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.0156,0.4375 H 976.458 q 0.0937,1.625 0.92188,2.48437 0.82812,0.85938 2.0625,0.85938 0.90625,0 1.54687,-0.46875 0.65625,-0.48438 1.04688,-1.54688 z m -5.48438,-2.70312 h 5.5 q -0.10937,-1.23438 -0.625,-1.85938 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76563 -0.85938,2.04688 z m 9.09448,5.875 v -9.85938 h 1.5 v 1.5 q 0.57813,-1.04687 1.0625,-1.375 0.48438,-0.34375 1.07813,-0.34375 0.84375,0 1.71875,0.54688 l -0.57813,1.54687 q -0.60937,-0.35937 -1.23437,-0.35937 -0.54688,0 -0.98438,0.32812 -0.42187,0.32813 -0.60937,0.90625 -0.28125,0.89063 -0.28125,1.95313 v 5.15625 z"
+ fill-rule="nonzero"
+ id="path35" />
+ <path
+ fill="#d9ead3"
+ d="M 467.042,484.1076 621.12074,409.30447 775.19948,484.1076 621.12074,558.91076 Z"
+ fill-rule="nonzero"
+ id="path37" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="M 467.042,484.1076 621.12074,409.30447 775.19948,484.1076 621.12074,558.91076 Z"
+ fill-rule="nonzero"
+ id="path39" />
+ <path
+ fill="#000000"
+ d="m 553.94073,486.65262 1.6875,-0.14062 q 0.125,1.01562 0.5625,1.67187 0.4375,0.65625 1.35938,1.0625 0.9375,0.40625 2.09375,0.40625 1.03125,0 1.8125,-0.3125 0.79687,-0.3125 1.1875,-0.84375 0.39062,-0.53125 0.39062,-1.15625 0,-0.64062 -0.375,-1.10937 -0.375,-0.48438 -1.23437,-0.8125 -0.54688,-0.21875 -2.42188,-0.65625 -1.875,-0.45313 -2.625,-0.85938 -0.96875,-0.51562 -1.45312,-1.26562 -0.46875,-0.75 -0.46875,-1.6875 0,-1.03125 0.57812,-1.92188 0.59375,-0.90625 1.70313,-1.35937 1.125,-0.46875 2.5,-0.46875 1.51562,0 2.67187,0.48437 1.15625,0.48438 1.76563,1.4375 0.625,0.9375 0.67187,2.14063 l -1.71875,0.125 q -0.14062,-1.28125 -0.95312,-1.9375 -0.79688,-0.67188 -2.35938,-0.67188 -1.625,0 -2.375,0.60938 -0.75,0.59375 -0.75,1.4375 0,0.73437 0.53125,1.20312 0.51563,0.46875 2.70313,0.96875 2.20312,0.5 3.01562,0.875 1.1875,0.54688 1.75,1.39063 0.57813,0.82812 0.57813,1.92187 0,1.09375 -0.625,2.0625 -0.625,0.95313 -1.79688,1.48438 -1.15625,0.53125 -2.60937,0.53125 -1.84375,0 -3.09375,-0.53125 -1.25,-0.54688 -1.96875,-1.625 -0.70313,-1.07813 -0.73438,-2.45313 z m 19.58423,1.20313 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82812 -2.8125,0.82812 -2.15625,0 -3.42187,-1.32812 -1.26563,-1.32813 -1.26563,-3.73438 0,-2.48437 1.26563,-3.85937 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79687 0,0.14063 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92188,2.48438 0.82812,0.85937 2.0625,0.85937 0.90625,0 1.54687,-0.46875 0.65625,-0.48437 1.04688,-1.54687 z m -5.48438,-2.70313 h 5.5 q -0.10937,-1.23437 -0.625,-1.85937 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76562 -0.85938,2.04687 z m 8.43818,2.9375 1.65625,-0.26562 q 0.14062,1 0.76562,1.53125 0.64063,0.51562 1.78125,0.51562 1.15625,0 1.70313,-0.46875 0.5625,-0.46875 0.5625,-1.09375 0,-0.5625 -0.48438,-0.89062 -0.34375,-0.21875 -1.70312,-0.5625 -1.84375,-0.46875 -2.5625,-0.79688 -0.70313,-0.34375 -1.07813,-0.9375 -0.35937,-0.60937 -0.35937,-1.32812 
0,-0.65625 0.29687,-1.21875 0.3125,-0.5625 0.82813,-0.9375 0.39062,-0.28125 1.0625,-0.48438 0.67187,-0.20312 1.4375,-0.20312 1.17187,0 2.04687,0.34375 0.875,0.32812 1.28125,0.90625 0.42188,0.5625 0.57813,1.51562 l -1.625,0.21875 q -0.10938,-0.75 -0.65625,-1.17187 -0.53125,-0.4375 -1.5,-0.4375 -1.15625,0 -1.64063,0.39062 -0.48437,0.375 -0.48437,0.875 0,0.32813 0.20312,0.59375 0.20313,0.26563 0.64063,0.4375 0.25,0.0937 1.46875,0.4375 1.76562,0.46875 2.46875,0.76563 0.70312,0.29687 1.09375,0.875 0.40625,0.57812 0.40625,1.4375 0,0.82812 -0.48438,1.57812 -0.48437,0.73438 -1.40625,1.14063 -0.92187,0.39062 -2.07812,0.39062 -1.92188,0 -2.9375,-0.79687 -1,-0.79688 -1.28125,-2.35938 z m 9.32812,0 1.65625,-0.26562 q 0.14063,1 0.76563,1.53125 0.64062,0.51562 1.78125,0.51562 1.15625,0 1.70312,-0.46875 0.5625,-0.46875 0.5625,-1.09375 0,-0.5625 -0.48437,-0.89062 -0.34375,-0.21875 -1.70313,-0.5625 -1.84375,-0.46875 -2.5625,-0.79688 -0.70312,-0.34375 -1.07812,-0.9375 -0.35938,-0.60937 -0.35938,-1.32812 0,-0.65625 0.29688,-1.21875 0.3125,-0.5625 0.82812,-0.9375 0.39063,-0.28125 1.0625,-0.48438 0.67188,-0.20312 1.4375,-0.20312 1.17188,0 2.04688,0.34375 0.875,0.32812 1.28125,0.90625 0.42187,0.5625 0.57812,1.51562 l -1.625,0.21875 q -0.10937,-0.75 -0.65625,-1.17187 -0.53125,-0.4375 -1.5,-0.4375 -1.15625,0 -1.64062,0.39062 -0.48438,0.375 -0.48438,0.875 0,0.32813 0.20313,0.59375 0.20312,0.26563 0.64062,0.4375 0.25,0.0937 1.46875,0.4375 1.76563,0.46875 2.46875,0.76563 0.70313,0.29687 1.09375,0.875 0.40625,0.57812 0.40625,1.4375 0,0.82812 -0.48437,1.57812 -0.48438,0.73438 -1.40625,1.14063 -0.92188,0.39062 -2.07813,0.39062 -1.92187,0 -2.9375,-0.79687 -1,-0.79688 -1.28125,-2.35938 z m 10.01563,-8.75 v -1.90625 h 1.67187 v 1.90625 z m 0,11.6875 v -9.85937 h 1.67187 v 9.85937 z m 3.50421,-4.92187 q 0,-2.73438 1.53125,-4.0625 1.26562,-1.09375 3.09375,-1.09375 2.03125,0 3.3125,1.34375 1.29687,1.32812 1.29687,3.67187 0,1.90625 -0.57812,3 -0.5625,1.07813 -1.65625,1.6875 -1.07813,0.59375 
-2.375,0.59375 -2.0625,0 -3.34375,-1.32812 -1.28125,-1.32813 -1.28125,-3.8125 z m 1.71875,0 q 0,1.89062 0.82812,2.82812 0.82813,0.9375 2.07813,0.9375 1.25,0 2.0625,-0.9375 0.82812,-0.95312 0.82812,-2.89062 0,-1.82813 -0.82812,-2.76563 -0.82813,-0.9375 -2.0625,-0.9375 -1.25,0 -2.07813,0.9375 -0.82812,0.9375 -0.82812,2.82813 z m 9.28198,4.92187 v -9.85937 h 1.5 v 1.40625 q 1.09375,-1.625 3.14062,-1.625 0.89063,0 1.64063,0.32812 0.75,0.3125 1.10937,0.84375 0.375,0.51563 0.53125,1.21875 0.0937,0.46875 0.0937,1.625 v 6.0625 h -1.67187 v -6 q 0,-1.01562 -0.20313,-1.51562 -0.1875,-0.51563 -0.6875,-0.8125 -0.5,-0.29688 -1.17187,-0.29688 -1.0625,0 -1.84375,0.67188 -0.76563,0.67187 -0.76563,2.57812 v 5.375 z m 22.30902,-3.17187 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82812 -2.8125,0.82812 -2.15625,0 -3.42187,-1.32812 -1.26563,-1.32813 -1.26563,-3.73438 0,-2.48437 1.26563,-3.85937 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79687 0,0.14063 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92188,2.48438 0.82812,0.85937 2.0625,0.85937 0.90625,0 1.54687,-0.46875 0.65625,-0.48437 1.04688,-1.54687 z m -5.48438,-2.70313 h 5.5 q -0.10937,-1.23437 -0.625,-1.85937 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76562 -0.85938,2.04687 z m 8.00074,5.875 3.59375,-5.125 -3.32813,-4.73437 h 2.09375 l 1.51563,2.3125 q 0.42187,0.65625 0.67187,1.10937 0.42188,-0.60937 0.76563,-1.09375 l 1.65625,-2.32812 h 1.98437 l -3.39062,4.64062 3.65625,5.21875 h -2.04688 l -2.03125,-3.0625 -0.53125,-0.82812 -2.59375,3.89062 z m 10.45312,-11.6875 v -1.90625 h 1.67188 v 1.90625 z m 0,11.6875 v -9.85937 h 1.67188 v 9.85937 z m 3.45728,-2.9375 1.65625,-0.26562 q 0.14062,1 0.76562,1.53125 0.64063,0.51562 1.78125,0.51562 1.15625,0 1.70313,-0.46875 0.5625,-0.46875 0.5625,-1.09375 0,-0.5625 -0.48438,-0.89062 -0.34375,-0.21875 -1.70312,-0.5625 -1.84375,-0.46875 -2.5625,-0.79688 -0.70313,-0.34375 -1.07813,-0.9375 -0.35937,-0.60937 
-0.35937,-1.32812 0,-0.65625 0.29687,-1.21875 0.3125,-0.5625 0.82813,-0.9375 0.39062,-0.28125 1.0625,-0.48438 0.67187,-0.20312 1.4375,-0.20312 1.17187,0 2.04687,0.34375 0.875,0.32812 1.28125,0.90625 0.42188,0.5625 0.57813,1.51562 l -1.625,0.21875 q -0.10938,-0.75 -0.65625,-1.17187 -0.53125,-0.4375 -1.5,-0.4375 -1.15625,0 -1.64063,0.39062 -0.48437,0.375 -0.48437,0.875 0,0.32813 0.20312,0.59375 0.20313,0.26563 0.64063,0.4375 0.25,0.0937 1.46875,0.4375 1.76562,0.46875 2.46875,0.76563 0.70312,0.29687 1.09375,0.875 0.40625,0.57812 0.40625,1.4375 0,0.82812 -0.48438,1.57812 -0.48437,0.73438 -1.40625,1.14063 -0.92187,0.39062 -2.07812,0.39062 -1.92188,0 -2.9375,-0.79687 -1,-0.79688 -1.28125,-2.35938 z m 13.65625,1.4375 0.23437,1.48438 q -0.70312,0.14062 -1.26562,0.14062 -0.90625,0 -1.40625,-0.28125 -0.5,-0.29687 -0.70313,-0.75 -0.20312,-0.46875 -0.20312,-1.98437 v -5.65625 h -1.23438 v -1.3125 h 1.23438 v -2.4375 l 1.65625,-1 v 3.4375 h 1.6875 v 1.3125 h -1.6875 v 5.75 q 0,0.71875 0.0781,0.92187 0.0937,0.20313 0.29688,0.32813 0.20312,0.125 0.57812,0.125 0.26563,0 0.73438,-0.0781 z m 0.85522,-1.4375 1.65625,-0.26562 q 0.14063,1 0.76563,1.53125 0.64062,0.51562 1.78125,0.51562 1.15625,0 1.70312,-0.46875 0.5625,-0.46875 0.5625,-1.09375 0,-0.5625 -0.48437,-0.89062 -0.34375,-0.21875 -1.70313,-0.5625 -1.84375,-0.46875 -2.5625,-0.79688 -0.70312,-0.34375 -1.07812,-0.9375 -0.35938,-0.60937 -0.35938,-1.32812 0,-0.65625 0.29688,-1.21875 0.3125,-0.5625 0.82812,-0.9375 0.39063,-0.28125 1.0625,-0.48438 0.67188,-0.20312 1.4375,-0.20312 1.17188,0 2.04688,0.34375 0.875,0.32812 1.28125,0.90625 0.42187,0.5625 0.57812,1.51562 l -1.625,0.21875 q -0.10937,-0.75 -0.65625,-1.17187 -0.53125,-0.4375 -1.5,-0.4375 -1.15625,0 -1.64062,0.39062 -0.48438,0.375 -0.48438,0.875 0,0.32813 0.20313,0.59375 0.20312,0.26563 0.64062,0.4375 0.25,0.0937 1.46875,0.4375 1.76563,0.46875 2.46875,0.76563 0.70313,0.29687 1.09375,0.875 0.40625,0.57812 0.40625,1.4375 0,0.82812 -0.48437,1.57812 -0.48438,0.73438 
-1.40625,1.14063 -0.92188,0.39062 -2.07813,0.39062 -1.92187,0 -2.9375,-0.79687 -1,-0.79688 -1.28125,-2.35938 z m 13.125,-0.40625 q 0,-0.34375 0,-0.5 0,-0.98437 0.26563,-1.70312 0.21875,-0.54688 0.67187,-1.09375 0.32813,-0.39063 1.1875,-1.15625 0.875,-0.76563 1.125,-1.21875 0.26563,-0.45313 0.26563,-1 0,-0.96875 -0.76563,-1.70313 -0.75,-0.73437 -1.85937,-0.73437 -1.0625,0 -1.78125,0.67187 -0.70313,0.65625 -0.9375,2.07813 l -1.71875,-0.20313 q 0.23437,-1.90625 1.375,-2.90625 1.15625,-1.01562 3.03125,-1.01562 2,0 3.1875,1.09375 1.1875,1.07812 1.1875,2.60937 0,0.89063 -0.42188,1.64063 -0.40625,0.75 -1.625,1.82812 -0.8125,0.73438 -1.0625,1.07813 -0.25,0.34375 -0.375,0.79687 -0.125,0.4375 -0.14062,1.4375 z m -0.0937,3.34375 v -1.90625 h 1.89063 v 1.90625 z"
+ fill-rule="nonzero"
+ id="path41" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 618.7218,154.43832 1.19684,48"
+ fill-rule="nonzero"
+ id="path43" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="m 618.7218,154.43832 1.04724,42.00186"
+ fill-rule="evenodd"
+ id="path45" />
+ <path
+ fill="#000000"
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linecap="butt"
+ d="m 618.11786,196.48135 1.76434,4.49551 1.53809,-4.57785 z"
+ fill-rule="evenodd"
+ id="path47" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 896.65094,455.34122 2.39368,43.65353"
+ fill-rule="nonzero"
+ id="path49" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="m 896.65094,455.34122 2.06519,37.66251"
+ fill-rule="evenodd"
+ id="path51" />
+ <path
+ fill="#000000"
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linecap="butt"
+ d="m 897.06683,493.09418 1.89777,4.44086 1.40075,-4.62171 z"
+ fill-rule="evenodd"
+ id="path53" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 772.80054,281.8714 76.12598,1.66928"
+ fill-rule="nonzero"
+ id="path55" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="m 772.80054,281.8714 70.12744,1.53775"
+ fill-rule="evenodd"
+ id="path57" />
+ <path
+ fill="#000000"
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linecap="butt"
+ d="m 842.8917,285.0605 4.57324,-1.55185 -4.50079,-1.75082 z"
+ fill-rule="evenodd"
+ id="path59" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 620.52234,360.3176 1.19684,48"
+ fill-rule="nonzero"
+ id="path61" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="m 620.52234,360.3176 1.04724,42.00183"
+ fill-rule="evenodd"
+ id="path63" />
+ <path
+ fill="#000000"
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linecap="butt"
+ d="m 619.9184,402.36063 1.76434,4.49551 1.53809,-4.57785 z"
+ fill-rule="evenodd"
+ id="path65" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 585.021,367.1076 h 58.80316 v 34.4252 H 585.021 Z"
+ fill-rule="nonzero"
+ id="path67" />
+ <path
+ fill="#000000"
+ d="m 595.4741,394.02762 v -13.59375 h 1.84375 l 7.14063,10.67188 v -10.67188 h 1.71875 v 13.59375 h -1.84375 l -7.14063,-10.6875 v 10.6875 z m 12.64484,-4.92187 q 0,-2.73438 1.53125,-4.0625 1.26562,-1.09375 3.09375,-1.09375 2.03125,0 3.3125,1.34375 1.29687,1.32812 1.29687,3.67187 0,1.90625 -0.57812,3 -0.5625,1.07813 -1.65625,1.6875 -1.07813,0.59375 -2.375,0.59375 -2.0625,0 -3.34375,-1.32812 -1.28125,-1.32813 -1.28125,-3.8125 z m 1.71875,0 q 0,1.89062 0.82812,2.82812 0.82813,0.9375 2.07813,0.9375 1.25,0 2.0625,-0.9375 0.82812,-0.95312 0.82812,-2.89062 0,-1.82813 -0.82812,-2.76563 -0.82813,-0.9375 -2.0625,-0.9375 -1.25,0 -2.07813,0.9375 -0.82812,0.9375 -0.82812,2.82813 z"
+ fill-rule="nonzero"
+ id="path69" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 788.84515,248.8924 h 58.80316 v 34.4252 h -58.80316 z"
+ fill-rule="nonzero"
+ id="path71" />
+ <path
+ fill="#000000"
+ d="m 803.142,275.81238 v -5.76562 l -5.23437,-7.82813 h 2.1875 l 2.67187,4.09375 q 0.75,1.15625 1.39063,2.29688 0.60937,-1.0625 1.48437,-2.40625 l 2.625,-3.98438 h 2.10938 l -5.4375,7.82813 v 5.76562 z m 15.14667,-3.17187 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82812 -2.8125,0.82812 -2.15625,0 -3.42187,-1.32812 -1.26563,-1.32813 -1.26563,-3.73438 0,-2.48437 1.26563,-3.85937 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79687 0,0.14063 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92188,2.48438 0.82812,0.85937 2.0625,0.85937 0.90625,0 1.54687,-0.46875 0.65625,-0.48437 1.04688,-1.54687 z m -5.48438,-2.70313 h 5.5 q -0.10937,-1.23437 -0.625,-1.85937 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76562 -0.85938,2.04687 z m 8.43823,2.9375 1.65625,-0.26562 q 0.14063,1 0.76563,1.53125 0.64062,0.51562 1.78125,0.51562 1.15625,0 1.70312,-0.46875 0.5625,-0.46875 0.5625,-1.09375 0,-0.5625 -0.48437,-0.89062 -0.34375,-0.21875 -1.70313,-0.5625 -1.84375,-0.46875 -2.5625,-0.79688 -0.70312,-0.34375 -1.07812,-0.9375 -0.35938,-0.60937 -0.35938,-1.32812 0,-0.65625 0.29688,-1.21875 0.3125,-0.5625 0.82812,-0.9375 0.39063,-0.28125 1.0625,-0.48438 0.67188,-0.20312 1.4375,-0.20312 1.17188,0 2.04688,0.34375 0.875,0.32812 1.28125,0.90625 0.42187,0.5625 0.57812,1.51562 l -1.625,0.21875 q -0.10937,-0.75 -0.65625,-1.17187 -0.53125,-0.4375 -1.5,-0.4375 -1.15625,0 -1.64062,0.39062 -0.48438,0.375 -0.48438,0.875 0,0.32813 0.20313,0.59375 0.20312,0.26563 0.64062,0.4375 0.25,0.0937 1.46875,0.4375 1.76563,0.46875 2.46875,0.76563 0.70313,0.29687 1.09375,0.875 0.40625,0.57812 0.40625,1.4375 0,0.82812 -0.48437,1.57812 -0.48438,0.73438 -1.40625,1.14063 -0.92188,0.39062 -2.07813,0.39062 -1.92187,0 -2.9375,-0.79687 -1,-0.79688 -1.28125,-2.35938 z"
+ fill-rule="nonzero"
+ id="path73" />
+ <path
+ fill="#d9ead3"
+ d="m 845.084,442.14172 h 156.3464 v 88.59845 H 845.084 Z"
+ fill-rule="nonzero"
+ id="path75" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="m 845.084,442.14172 h 156.3464 v 88.59845 H 845.084 Z"
+ fill-rule="nonzero"
+ id="path77" />
+ <path
+ fill="#000000"
+ d="m 861.9121,477.0328 v -1.60938 h 5.76562 v 5.04688 q -1.32812,1.0625 -2.75,1.59375 -1.40625,0.53125 -2.89062,0.53125 -2,0 -3.64063,-0.85938 -1.625,-0.85937 -2.46875,-2.48437 -0.82812,-1.625 -0.82812,-3.625 0,-1.98438 0.82812,-3.70313 0.82813,-1.71875 2.39063,-2.54687 1.5625,-0.84375 3.59375,-0.84375 1.46875,0 2.65625,0.48437 1.20312,0.46875 1.875,1.32813 0.67187,0.84375 1.03125,2.21875 l -1.625,0.4375 q -0.3125,-1.03125 -0.76563,-1.625 -0.45312,-0.59375 -1.29687,-0.95313 -0.84375,-0.35937 -1.875,-0.35937 -1.23438,0 -2.14063,0.375 -0.89062,0.375 -1.45312,1 -0.54688,0.60937 -0.84375,1.34375 -0.53125,1.25 -0.53125,2.73437 0,1.8125 0.625,3.04688 0.64062,1.21875 1.82812,1.8125 1.20313,0.59375 2.54688,0.59375 1.17187,0 2.28125,-0.45313 1.10937,-0.45312 1.6875,-0.95312 v -2.53125 z m 14.68329,2.15625 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82812 -2.8125,0.82812 -2.15625,0 -3.42187,-1.32812 -1.26563,-1.32813 -1.26563,-3.73438 0,-2.48437 1.26563,-3.85937 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79687 0,0.14063 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92188,2.48438 0.82812,0.85937 2.0625,0.85937 0.90625,0 1.54687,-0.46875 0.65625,-0.48437 1.04688,-1.54687 z m -5.48438,-2.70313 h 5.5 q -0.10937,-1.23437 -0.625,-1.85937 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76562 -0.85938,2.04687 z m 12.76636,4.375 0.23438,1.48438 q -0.70313,0.14062 -1.26563,0.14062 -0.90625,0 -1.40625,-0.28125 -0.5,-0.29687 -0.70312,-0.75 -0.20313,-0.46875 -0.20313,-1.98437 v -5.65625 h -1.23437 v -1.3125 h 1.23437 v -2.4375 l 1.65625,-1 v 3.4375 h 1.6875 v 1.3125 h -1.6875 v 5.75 q 0,0.71875 0.0781,0.92187 0.0937,0.20313 0.29687,0.32813 0.20313,0.125 0.57813,0.125 0.26562,0 0.73437,-0.0781 z m 6.69476,1.5 v -9.85937 h 1.5 v 1.5 q 0.57813,-1.04688 1.0625,-1.375 0.48438,-0.34375 1.07813,-0.34375 0.84375,0 1.71875,0.54687 l -0.57813,1.54688 q -0.60937,-0.35938 -1.23437,-0.35938 -0.54688,0 -0.98438,0.32813 
-0.42187,0.32812 -0.60937,0.90625 -0.28125,0.89062 -0.28125,1.95312 v 5.15625 z m 12.97828,-3.17187 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82812 -2.8125,0.82812 -2.15625,0 -3.42187,-1.32812 -1.26563,-1.32813 -1.26563,-3.73438 0,-2.48437 1.26563,-3.85937 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79687 0,0.14063 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92188,2.48438 0.82812,0.85937 2.0625,0.85937 0.90625,0 1.54687,-0.46875 0.65625,-0.48437 1.04688,-1.54687 z m -5.48438,-2.70313 h 5.5 q -0.10937,-1.23437 -0.625,-1.85937 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76562 -0.85938,2.04687 z m 8.43823,2.9375 1.65625,-0.26562 q 0.14063,1 0.76563,1.53125 0.64062,0.51562 1.78125,0.51562 1.15625,0 1.70312,-0.46875 0.5625,-0.46875 0.5625,-1.09375 0,-0.5625 -0.48437,-0.89062 -0.34375,-0.21875 -1.70313,-0.5625 -1.84375,-0.46875 -2.5625,-0.79688 -0.70312,-0.34375 -1.07812,-0.9375 -0.35938,-0.60937 -0.35938,-1.32812 0,-0.65625 0.29688,-1.21875 0.3125,-0.5625 0.82812,-0.9375 0.39063,-0.28125 1.0625,-0.48438 0.67188,-0.20312 1.4375,-0.20312 1.17188,0 2.04688,0.34375 0.875,0.32812 1.28125,0.90625 0.42187,0.5625 0.57812,1.51562 l -1.625,0.21875 q -0.10937,-0.75 -0.65625,-1.17187 -0.53125,-0.4375 -1.5,-0.4375 -1.15625,0 -1.64062,0.39062 -0.48438,0.375 -0.48438,0.875 0,0.32813 0.20313,0.59375 0.20312,0.26563 0.64062,0.4375 0.25,0.0937 1.46875,0.4375 1.76563,0.46875 2.46875,0.76563 0.70313,0.29687 1.09375,0.875 0.40625,0.57812 0.40625,1.4375 0,0.82812 -0.48437,1.57812 -0.48438,0.73438 -1.40625,1.14063 -0.92188,0.39062 -2.07813,0.39062 -1.92187,0 -2.9375,-0.79687 -1,-0.79688 -1.28125,-2.35938 z m 9.375,-1.98437 q 0,-2.73438 1.53125,-4.0625 1.26563,-1.09375 3.09375,-1.09375 2.03125,0 3.3125,1.34375 1.29688,1.32812 1.29688,3.67187 0,1.90625 -0.57813,3 -0.5625,1.07813 -1.65625,1.6875 -1.07812,0.59375 -2.375,0.59375 -2.0625,0 -3.34375,-1.32812 -1.28125,-1.32813 -1.28125,-3.8125 z m 1.71875,0 q 0,1.89062 
0.82813,2.82812 0.82812,0.9375 2.07812,0.9375 1.25,0 2.0625,-0.9375 0.82813,-0.95312 0.82813,-2.89062 0,-1.82813 -0.82813,-2.76563 -0.82812,-0.9375 -2.0625,-0.9375 -1.25,0 -2.07812,0.9375 -0.82813,0.9375 -0.82813,2.82813 z m 15.73511,4.92187 v -1.45312 q -1.14063,1.67187 -3.125,1.67187 -0.85938,0 -1.625,-0.32812 -0.75,-0.34375 -1.125,-0.84375 -0.35938,-0.5 -0.51563,-1.23438 -0.0937,-0.5 -0.0937,-1.5625 v -6.10937 h 1.67188 v 5.46875 q 0,1.3125 0.0937,1.76562 0.15625,0.65625 0.67187,1.03125 0.51563,0.375 1.26563,0.375 0.75,0 1.40625,-0.375 0.65625,-0.39062 0.92187,-1.04687 0.28125,-0.67188 0.28125,-1.9375 v -5.28125 h 1.67188 v 9.85937 z m 3.90692,0 v -9.85937 h 1.5 v 1.5 q 0.57813,-1.04688 1.0625,-1.375 0.48438,-0.34375 1.07813,-0.34375 0.84375,0 1.71875,0.54687 l -0.57813,1.54688 q -0.60937,-0.35938 -1.23437,-0.35938 -0.54688,0 -0.98438,0.32813 -0.42187,0.32812 -0.60937,0.90625 -0.28125,0.89062 -0.28125,1.95312 v 5.15625 z m 12.66583,-3.60937 1.64063,0.21875 q -0.26563,1.6875 -1.375,2.65625 -1.10938,0.95312 -2.73438,0.95312 -2.01562,0 -3.25,-1.3125 -1.21875,-1.32812 -1.21875,-3.79687 0,-1.59375 0.51563,-2.78125 0.53125,-1.20313 1.60937,-1.79688 1.09375,-0.60937 2.35938,-0.60937 1.60937,0 2.625,0.8125 1.01562,0.8125 1.3125,2.3125 l -1.625,0.25 q -0.23438,-1 -0.82813,-1.5 -0.59375,-0.5 -1.42187,-0.5 -1.26563,0 -2.0625,0.90625 -0.78125,0.90625 -0.78125,2.85937 0,1.98438 0.76562,2.89063 0.76563,0.89062 1.98438,0.89062 0.98437,0 1.64062,-0.59375 0.65625,-0.60937 0.84375,-1.85937 z m 9.64063,0.4375 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82812 -2.8125,0.82812 -2.15625,0 -3.42187,-1.32812 -1.26563,-1.32813 -1.26563,-3.73438 0,-2.48437 1.26563,-3.85937 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79687 0,0.14063 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92188,2.48438 0.82812,0.85937 2.0625,0.85937 0.90625,0 1.54687,-0.46875 0.65625,-0.48437 1.04688,-1.54687 z m -5.48438,-2.70313 h 5.5 q -0.10937,-1.23437 -0.625,-1.85937 
-0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76562 -0.85938,2.04687 z"
+ fill-rule="nonzero"
+ id="path79" />
+ <path
+ fill="#000000"
+ d="m 855.74023,504.36093 v -8.54688 h -1.48438 v -1.3125 h 1.48438 v -1.04687 q 0,-0.98438 0.17187,-1.46875 0.23438,-0.65625 0.84375,-1.04688 0.60938,-0.40625 1.70313,-0.40625 0.70312,0 1.5625,0.15625 l -0.25,1.46875 q -0.51563,-0.0937 -0.98438,-0.0937 -0.76562,0 -1.07812,0.32813 -0.3125,0.3125 -0.3125,1.20312 v 0.90625 h 1.92187 v 1.3125 h -1.92187 v 8.54688 z m 4.76141,0 v -9.85938 h 1.5 v 1.5 q 0.57813,-1.04687 1.0625,-1.375 0.48438,-0.34375 1.07813,-0.34375 0.84375,0 1.71875,0.54688 l -0.57813,1.54687 q -0.60937,-0.35937 -1.23437,-0.35937 -0.54688,0 -0.98438,0.32812 -0.42187,0.32813 -0.60937,0.90625 -0.28125,0.89063 -0.28125,1.95313 v 5.15625 z m 5.60334,-4.92188 q 0,-2.73437 1.53125,-4.0625 1.26562,-1.09375 3.09375,-1.09375 2.03125,0 3.3125,1.34375 1.29687,1.32813 1.29687,3.67188 0,1.90625 -0.57812,3 -0.5625,1.07812 -1.65625,1.6875 -1.07813,0.59375 -2.375,0.59375 -2.0625,0 -3.34375,-1.32813 -1.28125,-1.32812 -1.28125,-3.8125 z m 1.71875,0 q 0,1.89063 0.82812,2.82813 0.82813,0.9375 2.07813,0.9375 1.25,0 2.0625,-0.9375 0.82812,-0.95313 0.82812,-2.89063 0,-1.82812 -0.82812,-2.76562 -0.82813,-0.9375 -2.0625,-0.9375 -1.25,0 -2.07813,0.9375 -0.82812,0.9375 -0.82812,2.82812 z m 9.28198,4.92188 v -9.85938 h 1.5 v 1.39063 q 0.45312,-0.71875 1.21875,-1.15625 0.78125,-0.45313 1.76562,-0.45313 1.09375,0 1.79688,0.45313 0.70312,0.45312 0.98437,1.28125 1.17188,-1.73438 3.04688,-1.73438 1.46875,0 2.25,0.8125 0.79687,0.8125 0.79687,2.5 v 6.76563 h -1.67187 v -6.20313 q 0,-1 -0.15625,-1.4375 -0.15625,-0.45312 -0.59375,-0.71875 -0.42188,-0.26562 -1,-0.26562 -1.03125,0 -1.71875,0.6875 -0.6875,0.6875 -0.6875,2.21875 v 5.71875 h -1.67188 v -6.40625 q 0,-1.10938 -0.40625,-1.65625 -0.40625,-0.5625 -1.34375,-0.5625 -0.70312,0 -1.3125,0.375 -0.59375,0.35937 -0.85937,1.07812 -0.26563,0.71875 -0.26563,2.0625 v 5.10938 z m 19.44281,0 5.23437,-13.59375 h 1.9375 l 5.5625,13.59375 h -2.04687 l -1.59375,-4.125 h -5.6875 l -1.48438,4.125 z m 3.92187,-5.57813 h 4.60938 l -1.40625,-3.78125 q 
-0.65625,-1.70312 -0.96875,-2.8125 -0.26563,1.3125 -0.73438,2.59375 z m 10.02173,5.57813 v -13.59375 h 5.125 q 1.35938,0 2.07813,0.125 1,0.17187 1.67187,0.64062 0.67188,0.46875 1.07813,1.3125 0.42187,0.84375 0.42187,1.84375 0,1.73438 -1.10937,2.9375 -1.09375,1.20313 -3.98438,1.20313 H 912.289 v 5.53125 z m 1.79688,-7.14063 h 3.51562 q 1.75,0 2.46875,-0.64062 0.73438,-0.65625 0.73438,-1.82813 0,-0.85937 -0.4375,-1.46875 -0.42188,-0.60937 -1.125,-0.79687 -0.45313,-0.125 -1.67188,-0.125 H 912.289 Z m 10.94354,7.14063 v -13.59375 h 1.8125 v 13.59375 z m 9.46039,-4.375 1.6875,-0.14063 q 0.125,1.01563 0.5625,1.67188 0.4375,0.65625 1.35937,1.0625 0.9375,0.40625 2.09375,0.40625 1.03125,0 1.8125,-0.3125 0.79688,-0.3125 1.1875,-0.84375 0.39063,-0.53125 0.39063,-1.15625 0,-0.64063 -0.375,-1.10938 -0.375,-0.48437 -1.23438,-0.8125 -0.54687,-0.21875 -2.42187,-0.65625 -1.875,-0.45312 -2.625,-0.85937 -0.96875,-0.51563 -1.45313,-1.26563 -0.46875,-0.75 -0.46875,-1.6875 0,-1.03125 0.57813,-1.92187 0.59375,-0.90625 1.70312,-1.35938 1.125,-0.46875 2.5,-0.46875 1.51563,0 2.67188,0.48438 1.15625,0.48437 1.76562,1.4375 0.625,0.9375 0.67188,2.14062 l -1.71875,0.125 q -0.14063,-1.28125 -0.95313,-1.9375 -0.79687,-0.67187 -2.35937,-0.67187 -1.625,0 -2.375,0.60937 -0.75,0.59375 -0.75,1.4375 0,0.73438 0.53125,1.20313 0.51562,0.46875 2.70312,0.96875 2.20313,0.5 3.01563,0.875 1.1875,0.54687 1.75,1.39062 0.57812,0.82813 0.57812,1.92188 0,1.09375 -0.625,2.0625 -0.625,0.95312 -1.79687,1.48437 -1.15625,0.53125 -2.60938,0.53125 -1.84375,0 -3.09375,-0.53125 -1.25,-0.54687 -1.96875,-1.625 -0.70312,-1.07812 -0.73437,-2.45312 z m 19.58416,1.20312 1.71875,0.21875 q -0.40625,1.5 -1.51562,2.34375 -1.09375,0.82813 -2.8125,0.82813 -2.15625,0 -3.42188,-1.32813 -1.26562,-1.32812 -1.26562,-3.73437 0,-2.48438 1.26562,-3.85938 1.28125,-1.375 3.32813,-1.375 1.98437,0 3.23437,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92187,2.48437 0.82813,0.85938 2.0625,0.85938 0.90625,0 
1.54688,-0.46875 0.65625,-0.48438 1.04687,-1.54688 z m -5.48437,-2.70312 h 5.5 q -0.10938,-1.23438 -0.625,-1.85938 -0.79688,-0.96875 -2.07813,-0.96875 -1.14062,0 -1.9375,0.78125 -0.78125,0.76563 -0.85937,2.04688 z m 9.09448,5.875 v -9.85938 h 1.5 v 1.5 q 0.57813,-1.04687 1.0625,-1.375 0.48438,-0.34375 1.07813,-0.34375 0.84375,0 1.71875,0.54688 l -0.57813,1.54687 q -0.60937,-0.35937 -1.23437,-0.35937 -0.54688,0 -0.98438,0.32812 -0.42187,0.32813 -0.60937,0.90625 -0.28125,0.89063 -0.28125,1.95313 v 5.15625 z m 8.96271,0 -3.75,-9.85938 h 1.76562 l 2.125,5.90625 q 0.34375,0.95313 0.625,1.98438 0.21875,-0.78125 0.625,-1.875 l 2.1875,-6.01563 h 1.71875 l -3.73437,9.85938 z m 13.34375,-3.17188 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82813 -2.8125,0.82813 -2.15625,0 -3.42187,-1.32813 -1.26563,-1.32812 -1.26563,-3.73437 0,-2.48438 1.26563,-3.85938 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92188,2.48437 0.82812,0.85938 2.0625,0.85938 0.90625,0 1.54687,-0.46875 0.65625,-0.48438 1.04688,-1.54688 z m -5.48438,-2.70312 h 5.5 q -0.10937,-1.23438 -0.625,-1.85938 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76563 -0.85938,2.04688 z m 9.09442,5.875 v -9.85938 h 1.5 v 1.5 q 0.57813,-1.04687 1.0625,-1.375 0.48438,-0.34375 1.07813,-0.34375 0.84375,0 1.71875,0.54688 l -0.57813,1.54687 q -0.60937,-0.35937 -1.23437,-0.35937 -0.54688,0 -0.98438,0.32812 -0.42187,0.32813 -0.60937,0.90625 -0.28125,0.89063 -0.28125,1.95313 v 5.15625 z"
+ fill-rule="nonzero"
+ id="path81" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 768.958,484.1076 76.12598,1.66931"
+ fill-rule="nonzero"
+ id="path83" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="m 768.958,484.1076 70.12744,1.53778"
+ fill-rule="evenodd"
+ id="path85" />
+ <path
+ fill="#000000"
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linecap="butt"
+ d="m 839.0492,487.2967 4.57324,-1.55185 -4.50079,-1.75082 z"
+ fill-rule="evenodd"
+ id="path87" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 785.0026,451.1286 h 58.80316 v 34.4252 H 785.0026 Z"
+ fill-rule="nonzero"
+ id="path89" />
+ <path
+ fill="#000000"
+ d="m 799.2995,478.0486 v -5.76562 l -5.23438,-7.82813 h 2.1875 l 2.67188,4.09375 q 0.75,1.15625 1.39062,2.29688 0.60938,-1.0625 1.48438,-2.40625 l 2.625,-3.98438 h 2.10937 l -5.4375,7.82813 v 5.76562 z m 15.14673,-3.17187 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82812 -2.8125,0.82812 -2.15625,0 -3.42187,-1.32812 -1.26563,-1.32813 -1.26563,-3.73438 0,-2.48437 1.26563,-3.85937 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79687 0,0.14063 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92188,2.48438 0.82812,0.85937 2.0625,0.85937 0.90625,0 1.54687,-0.46875 0.65625,-0.48437 1.04688,-1.54687 z m -5.48438,-2.70313 h 5.5 q -0.10937,-1.23437 -0.625,-1.85937 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76562 -0.85938,2.04687 z m 8.43817,2.9375 1.65625,-0.26562 q 0.14063,1 0.76563,1.53125 0.64062,0.51562 1.78125,0.51562 1.15625,0 1.70312,-0.46875 0.5625,-0.46875 0.5625,-1.09375 0,-0.5625 -0.48437,-0.89062 -0.34375,-0.21875 -1.70313,-0.5625 -1.84375,-0.46875 -2.5625,-0.79688 -0.70312,-0.34375 -1.07812,-0.9375 -0.35938,-0.60937 -0.35938,-1.32812 0,-0.65625 0.29688,-1.21875 0.3125,-0.5625 0.82812,-0.9375 0.39063,-0.28125 1.0625,-0.48438 0.67188,-0.20312 1.4375,-0.20312 1.17188,0 2.04688,0.34375 0.875,0.32812 1.28125,0.90625 0.42187,0.5625 0.57812,1.51562 l -1.625,0.21875 q -0.10937,-0.75 -0.65625,-1.17187 -0.53125,-0.4375 -1.5,-0.4375 -1.15625,0 -1.64062,0.39062 -0.48438,0.375 -0.48438,0.875 0,0.32813 0.20313,0.59375 0.20312,0.26563 0.64062,0.4375 0.25,0.0937 1.46875,0.4375 1.76563,0.46875 2.46875,0.76563 0.70313,0.29687 1.09375,0.875 0.40625,0.57812 0.40625,1.4375 0,0.82812 -0.48437,1.57812 -0.48438,0.73438 -1.40625,1.14063 -0.92188,0.39062 -2.07813,0.39062 -1.92187,0 -2.9375,-0.79687 -1,-0.79688 -1.28125,-2.35938 z"
+ fill-rule="nonzero"
+ id="path91" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 1093.5826,486.44095 3.4646,-377.88977"
+ fill-rule="nonzero"
+ id="path93" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="m 1093.5826,486.44095 3.4646,-377.88977"
+ fill-rule="nonzero"
+ id="path95" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 1005.273,284.2047 89.6063,1.63782"
+ fill-rule="nonzero"
+ id="path97" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="m 1005.273,284.2047 83.6073,1.52817"
+ fill-rule="evenodd"
+ id="path99" />
+ <path
+ fill="#000000"
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linecap="butt"
+ d="m 1088.8501,287.38434 4.5675,-1.56854 -4.5072,-1.73438 z"
+ fill-rule="evenodd"
+ id="path101" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="M 1099.9213,111.42519 708.36222,108.55905"
+ fill-rule="nonzero"
+ id="path103" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="M 1099.9213,111.42519 714.3621,108.60297"
+ fill-rule="evenodd"
+ id="path105" />
+ <path
+ fill="#000000"
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linecap="butt"
+ d="m 714.37415,106.95129 -4.55005,1.61847 4.52588,1.68491 z"
+ fill-rule="evenodd"
+ id="path107" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 1001.4304,485.62204 89.6063,1.63782"
+ fill-rule="nonzero"
+ id="path109" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="m 1001.4304,485.62204 83.6073,1.52814"
+ fill-rule="evenodd"
+ id="path111" />
+ <path
+ fill="#000000"
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linecap="butt"
+ d="m 1085.0076,488.80167 4.5675,-1.56854 -4.5071,-1.73438 z"
+ fill-rule="evenodd"
+ id="path113" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 621.1207,558.91077 0.12598,76.81891"
+ fill-rule="nonzero"
+ id="path115" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="m 621.1207,558.91077 0.11615,70.81891"
+ fill-rule="evenodd"
+ id="path117" />
+ <path
+ fill="#000000"
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linecap="butt"
+ d="m 619.58514,629.73236 1.65918,4.5354 1.64429,-4.54083 z"
+ fill-rule="evenodd"
+ id="path119" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 579.0289,573.6352 h 47.33856 v 34.42517 H 579.0289 Z"
+ fill-rule="nonzero"
+ id="path121" />
+ <path
+ fill="#000000"
+ d="m 589.482,600.5552 v -13.59375 h 1.84375 l 7.14062,10.67188 v -10.67188 h 1.71875 v 13.59375 h -1.84375 l -7.14062,-10.6875 v 10.6875 z m 12.64484,-4.92187 q 0,-2.73438 1.53125,-4.0625 1.26562,-1.09375 3.09375,-1.09375 2.03125,0 3.3125,1.34375 1.29687,1.32812 1.29687,3.67187 0,1.90625 -0.57812,3 -0.5625,1.07813 -1.65625,1.6875 -1.07813,0.59375 -2.375,0.59375 -2.0625,0 -3.34375,-1.32812 -1.28125,-1.32813 -1.28125,-3.8125 z m 1.71875,0 q 0,1.89062 0.82812,2.82812 0.82813,0.9375 2.07813,0.9375 1.25,0 2.0625,-0.9375 0.82812,-0.95312 0.82812,-2.89062 0,-1.82813 -0.82812,-2.76563 -0.82813,-0.9375 -2.0625,-0.9375 -1.25,0 -2.07813,0.9375 -0.82812,0.9375 -0.82812,2.82813 z"
+ fill-rule="nonzero"
+ id="path123" />
+ <path
+ fill="#ead1dc"
+ d="m 545.084,634.39105 h 156.34644 v 70.26776 H 545.084 Z"
+ fill-rule="nonzero"
+ id="path125" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="m 545.084,634.39105 h 156.34644 v 70.26776 H 545.084 Z"
+ fill-rule="nonzero"
+ id="path127" />
+ <path
+ fill="#000000"
+ d="m 557.92773,654.44495 -3.60938,-13.59375 h 1.84375 l 2.0625,8.90625 q 0.34375,1.40625 0.57813,2.78125 0.51562,-2.17188 0.60937,-2.51563 l 2.59375,-9.17187 h 2.17188 l 1.95312,6.875 q 0.73438,2.5625 1.04688,4.8125 0.26562,-1.28125 0.6875,-2.95313 l 2.125,-8.73437 h 1.8125 l -3.73438,13.59375 h -1.73437 l -2.85938,-10.35938 q -0.35937,-1.29687 -0.42187,-1.59375 -0.21875,0.9375 -0.40625,1.59375 l -2.89063,10.35938 z m 21.76489,-3.17188 1.71875,0.21875 q -0.40625,1.5 -1.51562,2.34375 -1.09375,0.82813 -2.8125,0.82813 -2.15625,0 -3.42188,-1.32813 -1.26562,-1.32812 -1.26562,-3.73437 0,-2.48438 1.26562,-3.85938 1.28125,-1.375 3.32813,-1.375 1.98437,0 3.23437,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92187,2.48437 0.82813,0.85938 2.0625,0.85938 0.90625,0 1.54688,-0.46875 0.65625,-0.48438 1.04687,-1.54688 z m -5.48437,-2.70312 h 5.5 q -0.10938,-1.23438 -0.625,-1.85938 -0.79688,-0.96875 -2.07813,-0.96875 -1.14062,0 -1.9375,0.78125 -0.78125,0.76563 -0.85937,2.04688 z m 9.07885,5.875 V 640.8512 h 1.67188 v 13.59375 z m 10.61359,-3.60938 1.64063,0.21875 q -0.26563,1.6875 -1.375,2.65625 -1.10938,0.95313 -2.73438,0.95313 -2.01562,0 -3.25,-1.3125 -1.21875,-1.32813 -1.21875,-3.79688 0,-1.59375 0.51563,-2.78125 0.53125,-1.20312 1.60937,-1.79687 1.09375,-0.60938 2.35938,-0.60938 1.60937,0 2.625,0.8125 1.01562,0.8125 1.3125,2.3125 l -1.625,0.25 q -0.23438,-1 -0.82813,-1.5 -0.59375,-0.5 -1.42187,-0.5 -1.26563,0 -2.0625,0.90625 -0.78125,0.90625 -0.78125,2.85938 0,1.98437 0.76562,2.89062 0.76563,0.89063 1.98438,0.89063 0.98437,0 1.64062,-0.59375 0.65625,-0.60938 0.84375,-1.85938 z m 2.26563,-1.3125 q 0,-2.73437 1.53125,-4.0625 1.26562,-1.09375 3.09375,-1.09375 2.03125,0 3.3125,1.34375 1.29687,1.32813 1.29687,3.67188 0,1.90625 -0.57812,3 -0.5625,1.07812 -1.65625,1.6875 -1.07813,0.59375 -2.375,0.59375 -2.0625,0 -3.34375,-1.32813 -1.28125,-1.32812 -1.28125,-3.8125 z m 1.71875,0 q 0,1.89063 0.82812,2.82813 0.82813,0.9375 2.07813,0.9375 1.25,0 
2.0625,-0.9375 0.82812,-0.95313 0.82812,-2.89063 0,-1.82812 -0.82812,-2.76562 -0.82813,-0.9375 -2.0625,-0.9375 -1.25,0 -2.07813,0.9375 -0.82812,0.9375 -0.82812,2.82812 z m 9.28198,4.92188 v -9.85938 h 1.5 v 1.39063 q 0.45312,-0.71875 1.21875,-1.15625 0.78125,-0.45313 1.76562,-0.45313 1.09375,0 1.79688,0.45313 0.70312,0.45312 0.98437,1.28125 1.17188,-1.73438 3.04688,-1.73438 1.46875,0 2.25,0.8125 0.79687,0.8125 0.79687,2.5 v 6.76563 h -1.67187 v -6.20313 q 0,-1 -0.15625,-1.4375 -0.15625,-0.45312 -0.59375,-0.71875 -0.42188,-0.26562 -1,-0.26562 -1.03125,0 -1.71875,0.6875 -0.6875,0.6875 -0.6875,2.21875 v 5.71875 h -1.67188 v -6.40625 q 0,-1.10938 -0.40625,-1.65625 -0.40625,-0.5625 -1.34375,-0.5625 -0.70312,0 -1.3125,0.375 -0.59375,0.35937 -0.85937,1.07812 -0.26563,0.71875 -0.26563,2.0625 v 5.10938 z m 22.29077,-3.17188 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82813 -2.8125,0.82813 -2.15625,0 -3.42187,-1.32813 -1.26563,-1.32812 -1.26563,-3.73437 0,-2.48438 1.26563,-3.85938 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92188,2.48437 0.82812,0.85938 2.0625,0.85938 0.90625,0 1.54687,-0.46875 0.65625,-0.48438 1.04688,-1.54688 z m -5.48438,-2.70312 h 5.5 q -0.10937,-1.23438 -0.625,-1.85938 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76563 -0.85938,2.04688 z m 14.2934,9.65625 v -13.64063 h 1.53125 v 1.28125 q 0.53125,-0.75 1.20312,-1.125 0.6875,-0.375 1.64063,-0.375 1.26562,0 2.23437,0.65625 0.96875,0.64063 1.45313,1.82813 0.5,1.1875 0.5,2.59375 0,1.51562 -0.54688,2.73437 -0.54687,1.20313 -1.57812,1.84375 -1.03125,0.64063 -2.17188,0.64063 -0.84375,0 -1.51562,-0.34375 -0.65625,-0.35938 -1.07813,-0.89063 v 4.79688 z m 1.51562,-8.65625 q 0,1.90625 0.76563,2.8125 0.78125,0.90625 1.875,0.90625 1.10937,0 1.89062,-0.9375 0.79688,-0.9375 0.79688,-2.92188 0,-1.875 -0.78125,-2.8125 -0.76563,-0.9375 -1.84375,-0.9375 -1.0625,0 -1.89063,1 -0.8125,1 
-0.8125,2.89063 z m 15.29761,3.65625 q -0.9375,0.79687 -1.79687,1.125 -0.85938,0.3125 -1.84375,0.3125 -1.60938,0 -2.48438,-0.78125 -0.875,-0.79688 -0.875,-2.03125 0,-0.73438 0.32813,-1.32813 0.32812,-0.59375 0.85937,-0.95312 0.53125,-0.35938 1.20313,-0.54688 0.5,-0.14062 1.48437,-0.25 2.03125,-0.25 2.98438,-0.57812 0,-0.34375 0,-0.4375 0,-1.01563 -0.46875,-1.4375 -0.64063,-0.5625 -1.90625,-0.5625 -1.17188,0 -1.73438,0.40625 -0.5625,0.40625 -0.82812,1.46875 l -1.64063,-0.23438 q 0.23438,-1.04687 0.73438,-1.6875 0.51562,-0.64062 1.46875,-0.98437 0.96875,-0.35938 2.25,-0.35938 1.26562,0 2.04687,0.29688 0.78125,0.29687 1.15625,0.75 0.375,0.45312 0.51563,1.14062 0.0937,0.42188 0.0937,1.53125 v 2.23438 q 0,2.32812 0.0937,2.95312 0.10937,0.60938 0.4375,1.17188 h -1.75 q -0.26563,-0.51563 -0.32813,-1.21875 z m -0.14062,-3.71875 q -0.90625,0.35937 -2.73438,0.625 -1.03125,0.14062 -1.45312,0.32812 -0.42188,0.1875 -0.65625,0.54688 -0.23438,0.35937 -0.23438,0.79687 0,0.67188 0.5,1.125 0.51563,0.4375 1.48438,0.4375 0.96875,0 1.71875,-0.42187 0.75,-0.4375 1.10937,-1.15625 0.26563,-0.57813 0.26563,-1.67188 z m 3.78198,5.75 1.60937,0.25 q 0.10938,0.75 0.57813,1.09375 0.60937,0.45312 1.6875,0.45312 1.17187,0 1.79687,-0.46875 0.625,-0.45312 0.85938,-1.28125 0.125,-0.51562 0.10937,-2.15625 -1.09375,1.29688 -2.71875,1.29688 -2.03125,0 -3.15625,-1.46875 -1.10937,-1.46875 -1.10937,-3.51563 0,-1.40625 0.51562,-2.59375 0.51563,-1.20312 1.48438,-1.84375 0.96875,-0.65625 2.26562,-0.65625 1.75,0 2.875,1.40625 v -1.1875 h 1.54688 v 8.51563 q 0,2.3125 -0.46875,3.26562 -0.46875,0.96875 -1.48438,1.51563 -1.01562,0.5625 -2.5,0.5625 -1.76562,0 -2.85937,-0.79688 -1.07813,-0.79687 -1.03125,-2.39062 z m 1.375,-5.92188 q 0,1.95313 0.76562,2.84375 0.78125,0.89063 1.9375,0.89063 1.14063,0 1.92188,-0.89063 0.78125,-0.89062 0.78125,-2.78125 0,-1.8125 -0.8125,-2.71875 -0.79688,-0.92187 -1.92188,-0.92187 -1.10937,0 -1.89062,0.90625 -0.78125,0.89062 -0.78125,2.67187 z m 16.04755,1.9375 1.71875,0.21875 q 
-0.40625,1.5 -1.51563,2.34375 -1.09375,0.82813 -2.8125,0.82813 -2.15625,0 -3.42187,-1.32813 -1.26563,-1.32812 -1.26563,-3.73437 0,-2.48438 1.26563,-3.85938 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92188,2.48437 0.82812,0.85938 2.0625,0.85938 0.90625,0 1.54687,-0.46875 0.65625,-0.48438 1.04688,-1.54688 z m -5.48438,-2.70312 h 5.5 q -0.10937,-1.23438 -0.625,-1.85938 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76563 -0.85938,2.04688 z"
+ fill-rule="nonzero"
+ id="path129" />
+ <path
+ fill="#000000"
+ d="m 557.1621,676.44495 -3.01563,-9.85938 h 1.71875 l 1.5625,5.6875 0.59375,2.125 q 0.0312,-0.15625 0.5,-2.03125 l 1.57813,-5.78125 h 1.71875 l 1.46875,5.71875 0.48437,1.89063 0.57813,-1.90625 1.6875,-5.70313 h 1.625 l -3.07813,9.85938 h -1.73437 l -1.57813,-5.90625 -0.375,-1.67188 -2,7.57813 z m 11.66046,-11.6875 v -1.90625 h 1.67188 v 1.90625 z m 0,11.6875 v -9.85938 h 1.67188 v 9.85938 z m 7.78546,-1.5 0.23438,1.48437 q -0.70313,0.14063 -1.26563,0.14063 -0.90625,0 -1.40625,-0.28125 -0.5,-0.29688 -0.70312,-0.75 -0.20313,-0.46875 -0.20313,-1.98438 v -5.65625 h -1.23437 v -1.3125 h 1.23437 v -2.4375 l 1.65625,-1 v 3.4375 h 1.6875 v 1.3125 h -1.6875 v 5.75 q 0,0.71875 0.0781,0.92188 0.0937,0.20312 0.29687,0.32812 0.20313,0.125 0.57813,0.125 0.26562,0 0.73437,-0.0781 z m 1.52704,1.5 V 662.8512 h 1.67188 v 4.875 q 1.17187,-1.35938 2.95312,-1.35938 1.09375,0 1.89063,0.4375 0.8125,0.42188 1.15625,1.1875 0.35937,0.76563 0.35937,2.20313 v 6.25 h -1.67187 v -6.25 q 0,-1.25 -0.54688,-1.8125 -0.54687,-0.57813 -1.53125,-0.57813 -0.75,0 -1.40625,0.39063 -0.64062,0.375 -0.92187,1.04687 -0.28125,0.65625 -0.28125,1.8125 v 5.39063 z m 19.21527,-1.5 0.23438,1.48437 q -0.70313,0.14063 -1.26563,0.14063 -0.90625,0 -1.40625,-0.28125 -0.5,-0.29688 -0.70312,-0.75 -0.20313,-0.46875 -0.20313,-1.98438 v -5.65625 h -1.23437 v -1.3125 h 1.23437 v -2.4375 l 1.65625,-1 v 3.4375 h 1.6875 v 1.3125 h -1.6875 v 5.75 q 0,0.71875 0.0781,0.92188 0.0937,0.20312 0.29687,0.32812 0.20313,0.125 0.57813,0.125 0.26562,0 0.73437,-0.0781 z m 7.96454,0.28125 q -0.9375,0.79687 -1.79687,1.125 -0.85938,0.3125 -1.84375,0.3125 -1.60938,0 -2.48438,-0.78125 -0.875,-0.79688 -0.875,-2.03125 0,-0.73438 0.32813,-1.32813 0.32812,-0.59375 0.85937,-0.95312 0.53125,-0.35938 1.20313,-0.54688 0.5,-0.14062 1.48437,-0.25 2.03125,-0.25 2.98438,-0.57812 0,-0.34375 0,-0.4375 0,-1.01563 -0.46875,-1.4375 -0.64063,-0.5625 -1.90625,-0.5625 -1.17188,0 -1.73438,0.40625 -0.5625,0.40625 -0.82812,1.46875 l -1.64063,-0.23438 q 
0.23438,-1.04687 0.73438,-1.6875 0.51562,-0.64062 1.46875,-0.98437 0.96875,-0.35938 2.25,-0.35938 1.26562,0 2.04687,0.29688 0.78125,0.29687 1.15625,0.75 0.375,0.45312 0.51563,1.14062 0.0937,0.42188 0.0937,1.53125 v 2.23438 q 0,2.32812 0.0937,2.95312 0.10937,0.60938 0.4375,1.17188 h -1.75 q -0.26563,-0.51563 -0.32813,-1.21875 z m -0.14062,-3.71875 q -0.90625,0.35937 -2.73438,0.625 -1.03125,0.14062 -1.45312,0.32812 -0.42188,0.1875 -0.65625,0.54688 -0.23438,0.35937 -0.23438,0.79687 0,0.67188 0.5,1.125 0.51563,0.4375 1.48438,0.4375 0.96875,0 1.71875,-0.42187 0.75,-0.4375 1.10937,-1.15625 0.26563,-0.57813 0.26563,-1.67188 z m 4.06323,4.9375 v -9.85938 h 1.5 v 1.5 q 0.57812,-1.04687 1.0625,-1.375 0.48437,-0.34375 1.07812,-0.34375 0.84375,0 1.71875,0.54688 l -0.57812,1.54687 q -0.60938,-0.35937 -1.23438,-0.35937 -0.54687,0 -0.98437,0.32812 -0.42188,0.32813 -0.60938,0.90625 -0.28125,0.89063 -0.28125,1.95313 v 5.15625 z m 5.93139,0.8125 1.60938,0.25 q 0.10937,0.75 0.57812,1.09375 0.60938,0.45312 1.6875,0.45312 1.17188,0 1.79688,-0.46875 0.625,-0.45312 0.85937,-1.28125 0.125,-0.51562 0.10938,-2.15625 -1.09375,1.29688 -2.71875,1.29688 -2.03125,0 -3.15625,-1.46875 -1.10938,-1.46875 -1.10938,-3.51563 0,-1.40625 0.51563,-2.59375 0.51562,-1.20312 1.48437,-1.84375 0.96875,-0.65625 2.26563,-0.65625 1.75,0 2.875,1.40625 v -1.1875 h 1.54687 v 8.51563 q 0,2.3125 -0.46875,3.26562 -0.46875,0.96875 -1.48437,1.51563 -1.01563,0.5625 -2.5,0.5625 -1.76563,0 -2.85938,-0.79688 -1.07812,-0.79687 -1.03125,-2.39062 z m 1.375,-5.92188 q 0,1.95313 0.76563,2.84375 0.78125,0.89063 1.9375,0.89063 1.14062,0 1.92187,-0.89063 0.78125,-0.89062 0.78125,-2.78125 0,-1.8125 -0.8125,-2.71875 -0.79687,-0.92187 -1.92187,-0.92187 -1.10938,0 -1.89063,0.90625 -0.78125,0.89062 -0.78125,2.67187 z m 16.04761,1.9375 1.71875,0.21875 q -0.40625,1.5 -1.51562,2.34375 -1.09375,0.82813 -2.8125,0.82813 -2.15625,0 -3.42188,-1.32813 -1.26562,-1.32812 -1.26562,-3.73437 0,-2.48438 1.26562,-3.85938 1.28125,-1.375 3.32813,-1.375 
1.98437,0 3.23437,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92187,2.48437 0.82813,0.85938 2.0625,0.85938 0.90625,0 1.54688,-0.46875 0.65625,-0.48438 1.04687,-1.54688 z m -5.48437,-2.70312 h 5.5 q -0.10938,-1.23438 -0.625,-1.85938 -0.79688,-0.96875 -2.07813,-0.96875 -1.14062,0 -1.9375,0.78125 -0.78125,0.76563 -0.85937,2.04688 z m 12.76635,4.375 0.23438,1.48437 q -0.70313,0.14063 -1.26563,0.14063 -0.90625,0 -1.40625,-0.28125 -0.5,-0.29688 -0.70312,-0.75 -0.20313,-0.46875 -0.20313,-1.98438 v -5.65625 h -1.23437 v -1.3125 h 1.23437 v -2.4375 l 1.65625,-1 v 3.4375 h 1.6875 v 1.3125 h -1.6875 v 5.75 q 0,0.71875 0.0781,0.92188 0.0937,0.20312 0.29687,0.32812 0.20313,0.125 0.57813,0.125 0.26562,0 0.73437,-0.0781 z"
+ fill-rule="nonzero"
+ id="path131" />
+ <path
+ fill="#000000"
+ d="m 554.05273,698.44495 5.23437,-13.59375 h 1.9375 l 5.5625,13.59375 h -2.04687 l -1.59375,-4.125 h -5.6875 l -1.48438,4.125 z m 3.92187,-5.57813 h 4.60938 l -1.40625,-3.78125 q -0.65625,-1.70312 -0.96875,-2.8125 -0.26563,1.3125 -0.73438,2.59375 z m 10.02173,5.57813 V 684.8512 h 5.125 q 1.35938,0 2.07813,0.125 1,0.17187 1.67187,0.64062 0.67188,0.46875 1.07813,1.3125 0.42187,0.84375 0.42187,1.84375 0,1.73438 -1.10937,2.9375 -1.09375,1.20313 -3.98438,1.20313 h -3.48437 v 5.53125 z m 1.79688,-7.14063 h 3.51562 q 1.75,0 2.46875,-0.64062 0.73438,-0.65625 0.73438,-1.82813 0,-0.85937 -0.4375,-1.46875 -0.42188,-0.60937 -1.125,-0.79687 -0.45313,-0.125 -1.67188,-0.125 h -3.48437 z m 10.94354,7.14063 V 684.8512 h 1.8125 v 13.59375 z m 8.60101,0.23437 3.9375,-14.0625 h 1.34375 l -3.9375,14.0625 z m 11.58533,-0.23437 V 684.8512 h 1.67188 v 13.59375 z m 3.55109,-4.92188 q 0,-2.73437 1.53125,-4.0625 1.26562,-1.09375 3.09375,-1.09375 2.03125,0 3.3125,1.34375 1.29687,1.32813 1.29687,3.67188 0,1.90625 -0.57812,3 -0.5625,1.07812 -1.65625,1.6875 -1.07813,0.59375 -2.375,0.59375 -2.0625,0 -3.34375,-1.32813 -1.28125,-1.32812 -1.28125,-3.8125 z m 1.71875,0 q 0,1.89063 0.82812,2.82813 0.82813,0.9375 2.07813,0.9375 1.25,0 2.0625,-0.9375 0.82812,-0.95313 0.82812,-2.89063 0,-1.82812 -0.82812,-2.76562 -0.82813,-0.9375 -2.0625,-0.9375 -1.25,0 -2.07813,0.9375 -0.82812,0.9375 -0.82812,2.82812 z m 8.9851,5.73438 1.60938,0.25 q 0.10937,0.75 0.57812,1.09375 0.60938,0.45312 1.6875,0.45312 1.17188,0 1.79688,-0.46875 0.625,-0.45312 0.85937,-1.28125 0.125,-0.51562 0.10938,-2.15625 -1.09375,1.29688 -2.71875,1.29688 -2.03125,0 -3.15625,-1.46875 -1.10938,-1.46875 -1.10938,-3.51563 0,-1.40625 0.51563,-2.59375 0.51562,-1.20312 1.48437,-1.84375 0.96875,-0.65625 2.26563,-0.65625 1.75,0 2.875,1.40625 v -1.1875 h 1.54687 v 8.51563 q 0,2.3125 -0.46875,3.26562 -0.46875,0.96875 -1.48437,1.51563 -1.01563,0.5625 -2.5,0.5625 -1.76563,0 -2.85938,-0.79688 -1.07812,-0.79687 -1.03125,-2.39062 z m 1.375,-5.92188 q 
0,1.95313 0.76563,2.84375 0.78125,0.89063 1.9375,0.89063 1.14062,0 1.92187,-0.89063 0.78125,-0.89062 0.78125,-2.78125 0,-1.8125 -0.8125,-2.71875 -0.79687,-0.92187 -1.92187,-0.92187 -1.10938,0 -1.89063,0.90625 -0.78125,0.89062 -0.78125,2.67187 z m 9.31318,-6.57812 v -1.90625 h 1.67187 v 1.90625 z m 0,11.6875 v -9.85938 h 1.67187 v 9.85938 z m 4.12921,0 v -9.85938 h 1.5 v 1.40625 q 1.09375,-1.625 3.14062,-1.625 0.89063,0 1.64063,0.32813 0.75,0.3125 1.10937,0.84375 0.375,0.51562 0.53125,1.21875 0.0937,0.46875 0.0937,1.625 v 6.0625 h -1.67187 v -6 q 0,-1.01563 -0.20313,-1.51563 -0.1875,-0.51562 -0.6875,-0.8125 -0.5,-0.29687 -1.17187,-0.29687 -1.0625,0 -1.84375,0.67187 -0.76563,0.67188 -0.76563,2.57813 v 5.375 z"
+ fill-rule="nonzero"
+ id="path133" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 603.2966,782.2992 2.39374,43.65356"
+ fill-rule="nonzero"
+ id="path135" />
+ <path
+ fill="#f1c232"
+ d="m 677.6772,786.07751 h 179.27557 v 94.64566 H 677.6772 Z"
+ fill-rule="nonzero"
+ id="path147" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="m 677.6772,786.07751 h 179.27557 v 94.64566 H 677.6772 Z"
+ fill-rule="nonzero"
+ id="path149" />
+ <path
+ fill="#000000"
+ d="m 725.6051,834.99217 v -1.60937 h 5.76562 v 5.04687 q -1.32812,1.0625 -2.75,1.59375 -1.40625,0.53125 -2.89062,0.53125 -2,0 -3.64063,-0.85937 -1.625,-0.85938 -2.46875,-2.48438 -0.82812,-1.625 -0.82812,-3.625 0,-1.98437 0.82812,-3.70312 0.82813,-1.71875 2.39063,-2.54688 1.5625,-0.84375 3.59375,-0.84375 1.46875,0 2.65625,0.48438 1.20312,0.46875 1.875,1.32812 0.67187,0.84375 1.03125,2.21875 l -1.625,0.4375 q -0.3125,-1.03125 -0.76563,-1.625 -0.45312,-0.59375 -1.29687,-0.95312 -0.84375,-0.35938 -1.875,-0.35938 -1.23438,0 -2.14063,0.375 -0.89062,0.375 -1.45312,1 -0.54688,0.60938 -0.84375,1.34375 -0.53125,1.25 -0.53125,2.73438 0,1.8125 0.625,3.04687 0.64062,1.21875 1.82812,1.8125 1.20313,0.59375 2.54688,0.59375 1.17187,0 2.28125,-0.45312 1.10937,-0.45313 1.6875,-0.95313 v -2.53125 z m 7.93329,5.32813 v -9.85938 h 1.5 v 1.39063 q 0.45312,-0.71875 1.21875,-1.15625 0.78125,-0.45313 1.76562,-0.45313 1.09375,0 1.79688,0.45313 0.70312,0.45312 0.98437,1.28125 1.17188,-1.73438 3.04688,-1.73438 1.46875,0 2.25,0.8125 0.79687,0.8125 0.79687,2.5 v 6.76563 h -1.67187 v -6.20313 q 0,-1 -0.15625,-1.4375 -0.15625,-0.45312 -0.59375,-0.71875 -0.42188,-0.26562 -1,-0.26562 -1.03125,0 -1.71875,0.6875 -0.6875,0.6875 -0.6875,2.21875 v 5.71875 h -1.67188 v -6.40625 q 0,-1.10938 -0.40625,-1.65625 -0.40625,-0.5625 -1.34375,-0.5625 -0.70312,0 -1.3125,0.375 -0.59375,0.35937 -0.85937,1.07812 -0.26563,0.71875 -0.26563,2.0625 v 5.10938 z m 21.97833,-1.21875 q -0.9375,0.79687 -1.79687,1.125 -0.85938,0.3125 -1.84375,0.3125 -1.60938,0 -2.48438,-0.78125 -0.875,-0.79688 -0.875,-2.03125 0,-0.73438 0.32813,-1.32813 0.32812,-0.59375 0.85937,-0.95312 0.53125,-0.35938 1.20313,-0.54688 0.5,-0.14062 1.48437,-0.25 2.03125,-0.25 2.98438,-0.57812 0,-0.34375 0,-0.4375 0,-1.01563 -0.46875,-1.4375 -0.64063,-0.5625 -1.90625,-0.5625 -1.17188,0 -1.73438,0.40625 -0.5625,0.40625 -0.82812,1.46875 l -1.64063,-0.23438 q 0.23438,-1.04687 0.73438,-1.6875 0.51562,-0.64062 1.46875,-0.98437 0.96875,-0.35938 2.25,-0.35938 
1.26562,0 2.04687,0.29688 0.78125,0.29687 1.15625,0.75 0.375,0.45312 0.51563,1.14062 0.0937,0.42188 0.0937,1.53125 v 2.23438 q 0,2.32812 0.0937,2.95312 0.10937,0.60938 0.4375,1.17188 h -1.75 q -0.26563,-0.51563 -0.32813,-1.21875 z m -0.14062,-3.71875 q -0.90625,0.35937 -2.73438,0.625 -1.03125,0.14062 -1.45312,0.32812 -0.42188,0.1875 -0.65625,0.54688 -0.23438,0.35937 -0.23438,0.79687 0,0.67188 0.5,1.125 0.51563,0.4375 1.48438,0.4375 0.96875,0 1.71875,-0.42187 0.75,-0.4375 1.10937,-1.15625 0.26563,-0.57813 0.26563,-1.67188 z m 4.09448,-6.75 v -1.90625 h 1.67187 v 1.90625 z m 0,11.6875 v -9.85938 h 1.67187 v 9.85938 z m 4.0979,0 v -13.59375 h 1.67187 v 13.59375 z m 15.79687,-1.21875 q -0.9375,0.79687 -1.79687,1.125 -0.85938,0.3125 -1.84375,0.3125 -1.60938,0 -2.48438,-0.78125 -0.875,-0.79688 -0.875,-2.03125 0,-0.73438 0.32813,-1.32813 0.32812,-0.59375 0.85937,-0.95312 0.53125,-0.35938 1.20313,-0.54688 0.5,-0.14062 1.48437,-0.25 2.03125,-0.25 2.98438,-0.57812 0,-0.34375 0,-0.4375 0,-1.01563 -0.46875,-1.4375 -0.64063,-0.5625 -1.90625,-0.5625 -1.17188,0 -1.73438,0.40625 -0.5625,0.40625 -0.82812,1.46875 l -1.64063,-0.23438 q 0.23438,-1.04687 0.73438,-1.6875 0.51562,-0.64062 1.46875,-0.98437 0.96875,-0.35938 2.25,-0.35938 1.26562,0 2.04687,0.29688 0.78125,0.29687 1.15625,0.75 0.375,0.45312 0.51563,1.14062 0.0937,0.42188 0.0937,1.53125 v 2.23438 q 0,2.32812 0.0937,2.95312 0.10937,0.60938 0.4375,1.17188 h -1.75 q -0.26563,-0.51563 -0.32813,-1.21875 z m -0.14062,-3.71875 q -0.90625,0.35937 -2.73438,0.625 -1.03125,0.14062 -1.45312,0.32812 -0.42188,0.1875 -0.65625,0.54688 -0.23438,0.35937 -0.23438,0.79687 0,0.67188 0.5,1.125 0.51563,0.4375 1.48438,0.4375 0.96875,0 1.71875,-0.42187 0.75,-0.4375 1.10937,-1.15625 0.26563,-0.57813 0.26563,-1.67188 z m 10.53198,4.9375 v -1.45313 q -1.14062,1.67188 -3.125,1.67188 -0.85937,0 -1.625,-0.32813 -0.75,-0.34375 -1.125,-0.84375 -0.35937,-0.5 -0.51562,-1.23437 -0.0937,-0.5 -0.0937,-1.5625 v -6.10938 h 1.67187 v 5.46875 q 0,1.3125 
0.0937,1.76563 0.15625,0.65625 0.67188,1.03125 0.51562,0.375 1.26562,0.375 0.75,0 1.40625,-0.375 0.65625,-0.39063 0.92188,-1.04688 0.28125,-0.67187 0.28125,-1.9375 v -5.28125 h 1.67187 v 9.85938 z m 7.57886,-1.5 0.23437,1.48437 q -0.70312,0.14063 -1.26562,0.14063 -0.90625,0 -1.40625,-0.28125 -0.5,-0.29688 -0.70313,-0.75 -0.20312,-0.46875 -0.20312,-1.98438 v -5.65625 h -1.23438 v -1.3125 h 1.23438 v -2.4375 l 1.65625,-1 v 3.4375 h 1.6875 v 1.3125 h -1.6875 v 5.75 q 0,0.71875 0.0781,0.92188 0.0937,0.20312 0.29688,0.32812 0.20312,0.125 0.57812,0.125 0.26563,0 0.73438,-0.0781 z m 1.52704,1.5 v -13.59375 h 1.67187 v 4.875 q 1.17188,-1.35938 2.95313,-1.35938 1.09375,0 1.89062,0.4375 0.8125,0.42188 1.15625,1.1875 0.35938,0.76563 0.35938,2.20313 v 6.25 h -1.67188 v -6.25 q 0,-1.25 -0.54687,-1.8125 -0.54688,-0.57813 -1.53125,-0.57813 -0.75,0 -1.40625,0.39063 -0.64063,0.375 -0.92188,1.04687 -0.28125,0.65625 -0.28125,1.8125 v 5.39063 z"
+ fill-rule="nonzero"
+ id="path151" />
+ <path
+ fill="#ffd966"
+ d="m 400.60892,786.07751 h 179.2756 v 94.64566 h -179.2756 z"
+ fill-rule="nonzero"
+ id="path153" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="m 400.60892,786.07751 h 179.2756 v 94.64566 h -179.2756 z"
+ fill-rule="nonzero"
+ id="path155" />
+ <path
+ fill="#000000"
+ d="m 422.49536,840.32031 v -1.45313 q -1.14063,1.67188 -3.125,1.67188 -0.85938,0 -1.625,-0.32813 -0.75,-0.34375 -1.125,-0.84375 -0.35938,-0.5 -0.51563,-1.23437 -0.0937,-0.5 -0.0937,-1.5625 v -6.10938 h 1.67188 v 5.46875 q 0,1.3125 0.0937,1.76563 0.15625,0.65625 0.67187,1.03125 0.51563,0.375 1.26563,0.375 0.75,0 1.40625,-0.375 0.65625,-0.39063 0.92187,-1.04688 0.28125,-0.67187 0.28125,-1.9375 v -5.28125 h 1.67188 v 9.85938 z m 3.2507,-2.9375 1.65625,-0.26563 q 0.14063,1 0.76563,1.53125 0.64062,0.51563 1.78125,0.51563 1.15625,0 1.70312,-0.46875 0.5625,-0.46875 0.5625,-1.09375 0,-0.5625 -0.48437,-0.89063 -0.34375,-0.21875 -1.70313,-0.5625 -1.84375,-0.46875 -2.5625,-0.79687 -0.70312,-0.34375 -1.07812,-0.9375 -0.35938,-0.60938 -0.35938,-1.32813 0,-0.65625 0.29688,-1.21875 0.3125,-0.5625 0.82812,-0.9375 0.39063,-0.28125 1.0625,-0.48437 0.67188,-0.20313 1.4375,-0.20313 1.17188,0 2.04688,0.34375 0.875,0.32813 1.28125,0.90625 0.42187,0.5625 0.57812,1.51563 l -1.625,0.21875 q -0.10937,-0.75 -0.65625,-1.17188 -0.53125,-0.4375 -1.5,-0.4375 -1.15625,0 -1.64062,0.39063 -0.48438,0.375 -0.48438,0.875 0,0.32812 0.20313,0.59375 0.20312,0.26562 0.64062,0.4375 0.25,0.0937 1.46875,0.4375 1.76563,0.46875 2.46875,0.76562 0.70313,0.29688 1.09375,0.875 0.40625,0.57813 0.40625,1.4375 0,0.82813 -0.48437,1.57813 -0.48438,0.73437 -1.40625,1.14062 -0.92188,0.39063 -2.07813,0.39063 -1.92187,0 -2.9375,-0.79688 -1,-0.79687 -1.28125,-2.35937 z m 16.75,-0.23438 1.71875,0.21875 q -0.40625,1.5 -1.51562,2.34375 -1.09375,0.82813 -2.8125,0.82813 -2.15625,0 -3.42188,-1.32813 -1.26562,-1.32812 -1.26562,-3.73437 0,-2.48438 1.26562,-3.85938 1.28125,-1.375 3.32813,-1.375 1.98437,0 3.23437,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92187,2.48437 0.82813,0.85938 2.0625,0.85938 0.90625,0 1.54688,-0.46875 0.65625,-0.48438 1.04687,-1.54688 z m -5.48437,-2.70312 h 5.5 q -0.10938,-1.23438 -0.625,-1.85938 -0.79688,-0.96875 -2.07813,-0.96875 -1.14062,0 -1.9375,0.78125 
-0.78125,0.76563 -0.85937,2.04688 z m 9.09448,5.875 v -9.85938 h 1.5 v 1.5 q 0.57812,-1.04687 1.0625,-1.375 0.48437,-0.34375 1.07812,-0.34375 0.84375,0 1.71875,0.54688 l -0.57812,1.54687 q -0.60938,-0.35937 -1.23438,-0.35937 -0.54687,0 -0.98437,0.32812 -0.42188,0.32813 -0.60938,0.90625 -0.28125,0.89063 -0.28125,1.95313 v 5.15625 z m 6.2283,0 v -9.85938 h 1.5 v 1.40625 q 1.09375,-1.625 3.14063,-1.625 0.89062,0 1.64062,0.32813 0.75,0.3125 1.10938,0.84375 0.375,0.51562 0.53125,1.21875 0.0937,0.46875 0.0937,1.625 v 6.0625 h -1.67188 v -6 q 0,-1.01563 -0.20312,-1.51563 -0.1875,-0.51562 -0.6875,-0.8125 -0.5,-0.29687 -1.17188,-0.29687 -1.0625,0 -1.84375,0.67187 -0.76562,0.67188 -0.76562,2.57813 v 5.375 z m 16.8132,-1.21875 q -0.9375,0.79687 -1.79687,1.125 -0.85938,0.3125 -1.84375,0.3125 -1.60938,0 -2.48438,-0.78125 -0.875,-0.79688 -0.875,-2.03125 0,-0.73438 0.32813,-1.32813 0.32812,-0.59375 0.85937,-0.95312 0.53125,-0.35938 1.20313,-0.54688 0.5,-0.14062 1.48437,-0.25 2.03125,-0.25 2.98438,-0.57812 0,-0.34375 0,-0.4375 0,-1.01563 -0.46875,-1.4375 -0.64063,-0.5625 -1.90625,-0.5625 -1.17188,0 -1.73438,0.40625 -0.5625,0.40625 -0.82812,1.46875 l -1.64063,-0.23438 q 0.23438,-1.04687 0.73438,-1.6875 0.51562,-0.64062 1.46875,-0.98437 0.96875,-0.35938 2.25,-0.35938 1.26562,0 2.04687,0.29688 0.78125,0.29687 1.15625,0.75 0.375,0.45312 0.51563,1.14062 0.0937,0.42188 0.0937,1.53125 v 2.23438 q 0,2.32812 0.0937,2.95312 0.10937,0.60938 0.4375,1.17188 h -1.75 q -0.26563,-0.51563 -0.32813,-1.21875 z m -0.14062,-3.71875 q -0.90625,0.35937 -2.73438,0.625 -1.03125,0.14062 -1.45312,0.32812 -0.42188,0.1875 -0.65625,0.54688 -0.23438,0.35937 -0.23438,0.79687 0,0.67188 0.5,1.125 0.51563,0.4375 1.48438,0.4375 0.96875,0 1.71875,-0.42187 0.75,-0.4375 1.10937,-1.15625 0.26563,-0.57813 0.26563,-1.67188 z m 4.07886,4.9375 v -9.85938 h 1.5 v 1.39063 q 0.45312,-0.71875 1.21875,-1.15625 0.78125,-0.45313 1.76562,-0.45313 1.09375,0 1.79688,0.45313 0.70312,0.45312 0.98437,1.28125 1.17188,-1.73438 
3.04688,-1.73438 1.46875,0 2.25,0.8125 0.79687,0.8125 0.79687,2.5 v 6.76563 h -1.67187 v -6.20313 q 0,-1 -0.15625,-1.4375 -0.15625,-0.45312 -0.59375,-0.71875 -0.42188,-0.26562 -1,-0.26562 -1.03125,0 -1.71875,0.6875 -0.6875,0.6875 -0.6875,2.21875 v 5.71875 h -1.67188 v -6.40625 q 0,-1.10938 -0.40625,-1.65625 -0.40625,-0.5625 -1.34375,-0.5625 -0.70312,0 -1.3125,0.375 -0.59375,0.35937 -0.85937,1.07812 -0.26563,0.71875 -0.26563,2.0625 v 5.10938 z m 22.2908,-3.17188 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82813 -2.8125,0.82813 -2.15625,0 -3.42187,-1.32813 -1.26563,-1.32812 -1.26563,-3.73437 0,-2.48438 1.26563,-3.85938 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92188,2.48437 0.82812,0.85938 2.0625,0.85938 0.90625,0 1.54687,-0.46875 0.65625,-0.48438 1.04688,-1.54688 z m -5.48438,-2.70312 h 5.5 q -0.10937,-1.23438 -0.625,-1.85938 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76563 -0.85938,2.04688 z m 13.0434,6.10937 3.9375,-14.0625 h 1.34375 l -3.9375,14.0625 z m 11.61658,3.54688 v -13.64063 h 1.53125 v 1.28125 q 0.53125,-0.75 1.20312,-1.125 0.6875,-0.375 1.64063,-0.375 1.26562,0 2.23437,0.65625 0.96875,0.64063 1.45313,1.82813 0.5,1.1875 0.5,2.59375 0,1.51562 -0.54688,2.73437 -0.54687,1.20313 -1.57812,1.84375 -1.03125,0.64063 -2.17188,0.64063 -0.84375,0 -1.51562,-0.34375 -0.65625,-0.35938 -1.07813,-0.89063 v 4.79688 z m 1.51562,-8.65625 q 0,1.90625 0.76563,2.8125 0.78125,0.90625 1.875,0.90625 1.10937,0 1.89062,-0.9375 0.79688,-0.9375 0.79688,-2.92188 0,-1.875 -0.78125,-2.8125 -0.76563,-0.9375 -1.84375,-0.9375 -1.0625,0 -1.89063,1 -0.8125,1 -0.8125,2.89063 z m 8.18823,1.9375 1.65625,-0.26563 q 0.14063,1 0.76563,1.53125 0.64062,0.51563 1.78125,0.51563 1.15625,0 1.70312,-0.46875 0.5625,-0.46875 0.5625,-1.09375 0,-0.5625 -0.48437,-0.89063 -0.34375,-0.21875 -1.70313,-0.5625 -1.84375,-0.46875 -2.5625,-0.79687 -0.70312,-0.34375 
-1.07812,-0.9375 -0.35938,-0.60938 -0.35938,-1.32813 0,-0.65625 0.29688,-1.21875 0.3125,-0.5625 0.82812,-0.9375 0.39063,-0.28125 1.0625,-0.48437 0.67188,-0.20313 1.4375,-0.20313 1.17188,0 2.04688,0.34375 0.875,0.32813 1.28125,0.90625 0.42187,0.5625 0.57812,1.51563 l -1.625,0.21875 q -0.10937,-0.75 -0.65625,-1.17188 -0.53125,-0.4375 -1.5,-0.4375 -1.15625,0 -1.64062,0.39063 -0.48438,0.375 -0.48438,0.875 0,0.32812 0.20313,0.59375 0.20312,0.26562 0.64062,0.4375 0.25,0.0937 1.46875,0.4375 1.76563,0.46875 2.46875,0.76562 0.70313,0.29688 1.09375,0.875 0.40625,0.57813 0.40625,1.4375 0,0.82813 -0.48437,1.57813 -0.48438,0.73437 -1.40625,1.14062 -0.92188,0.39063 -2.07813,0.39063 -1.92187,0 -2.9375,-0.79688 -1,-0.79687 -1.28125,-2.35937 z m 11.82813,2.9375 -3.01563,-9.85938 h 1.71875 l 1.5625,5.6875 0.59375,2.125 q 0.0312,-0.15625 0.5,-2.03125 l 1.57813,-5.78125 h 1.71875 l 1.46875,5.71875 0.48437,1.89063 0.57813,-1.90625 1.6875,-5.70313 h 1.625 l -3.07813,9.85938 h -1.73437 l -1.57813,-5.90625 -0.375,-1.67188 -2,7.57813 z m 18.03546,0 v -1.25 q -0.9375,1.46875 -2.75,1.46875 -1.17188,0 -2.17188,-0.64063 -0.98437,-0.65625 -1.53125,-1.8125 -0.53125,-1.17187 -0.53125,-2.6875 0,-1.46875 0.48438,-2.67187 0.5,-1.20313 1.46875,-1.84375 0.98437,-0.64063 2.20312,-0.64063 0.89063,0 1.57813,0.375 0.70312,0.375 1.14062,0.98438 v -4.875 h 1.65625 v 13.59375 z m -5.28125,-4.92188 q 0,1.89063 0.79687,2.82813 0.8125,0.9375 1.89063,0.9375 1.09375,0 1.85937,-0.89063 0.76563,-0.89062 0.76563,-2.73437 0,-2.01563 -0.78125,-2.95313 -0.78125,-0.95312 -1.92188,-0.95312 -1.10937,0 -1.85937,0.90625 -0.75,0.90625 -0.75,2.85937 z"
+ fill-rule="nonzero"
+ id="path157" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 627.2572,719.0512 v 33.51318 H 490.24933 v 33.5105"
+ fill-rule="nonzero"
+ id="path159" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="m 627.2572,719.0512 v 33.51312 H 490.24933 v 30.08344"
+ fill-rule="evenodd"
+ id="path161" />
+ <path
+ fill="#000000"
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linecap="butt"
+ d="m 490.24933,782.64777 -1.12457,-1.12457 1.12457,3.08978 1.1246,-3.08978 z"
+ fill-rule="evenodd"
+ id="path163" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 627.2572,719.0512 v 33.51318 h 140.06299 v 33.5105"
+ fill-rule="nonzero"
+ id="path165" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="m 627.2572,719.0512 v 33.51312 h 140.06299 v 30.08344"
+ fill-rule="evenodd"
+ id="path167" />
+ <path
+ fill="#000000"
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linecap="butt"
+ d="m 767.3202,782.64777 -1.12457,-1.12457 1.12457,3.08978 1.12457,-3.08978 z"
+ fill-rule="evenodd"
+ id="path169" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 733.7454,912.70487 h 137.00787 v 48 H 733.7454 Z"
+ fill-rule="nonzero"
+ id="path171" />
+ <path
+ fill="#000000"
+ d="m 742.7142,939.62477 5.23437,-13.5937 h 1.9375 l 5.5625,13.5937 h -2.04687 l -1.59375,-4.125 h -5.6875 l -1.48438,4.125 z m 3.92187,-5.5781 h 4.60938 l -1.40625,-3.7813 q -0.65625,-1.7031 -0.96875,-2.8125 -0.26563,1.3125 -0.73438,2.5938 z m 16.25605,5.5781 v -1.4531 q -1.14063,1.6719 -3.125,1.6719 -0.85938,0 -1.625,-0.3282 -0.75,-0.3437 -1.125,-0.8437 -0.35938,-0.5 -0.51563,-1.2344 -0.0937,-0.5 -0.0937,-1.5625 v -6.1094 h 1.67188 v 5.4688 q 0,1.3125 0.0937,1.7656 0.15625,0.6563 0.67187,1.0313 0.51563,0.375 1.26563,0.375 0.75,0 1.40625,-0.375 0.65625,-0.3907 0.92187,-1.0469 0.28125,-0.6719 0.28125,-1.9375 v -5.2813 h 1.67188 v 9.8594 z m 7.57885,-1.5 0.23438,1.4844 q -0.70313,0.1406 -1.26563,0.1406 -0.90625,0 -1.40625,-0.2812 -0.5,-0.2969 -0.70312,-0.75 -0.20313,-0.4688 -0.20313,-1.9844 v -5.6563 h -1.23437 v -1.3125 h 1.23437 v -2.4375 l 1.65625,-1 v 3.4375 h 1.6875 v 1.3125 h -1.6875 v 5.75 q 0,0.7188 0.0781,0.9219 0.0937,0.2031 0.29687,0.3281 0.20313,0.125 0.57813,0.125 0.26562,0 0.73437,-0.078 z m 1.52704,1.5 v -13.5937 h 1.67188 v 4.875 q 1.17187,-1.3594 2.95312,-1.3594 1.09375,0 1.89063,0.4375 0.8125,0.4219 1.15625,1.1875 0.35937,0.7656 0.35937,2.2031 v 6.25 h -1.67187 v -6.25 q 0,-1.25 -0.54688,-1.8125 -0.54687,-0.5781 -1.53125,-0.5781 -0.75,0 -1.40625,0.3906 -0.64062,0.375 -0.92187,1.0469 -0.28125,0.6562 -0.28125,1.8125 v 5.3906 z m 19.21527,-1.5 0.23438,1.4844 q -0.70313,0.1406 -1.26563,0.1406 -0.90625,0 -1.40625,-0.2812 -0.5,-0.2969 -0.70312,-0.75 -0.20313,-0.4688 -0.20313,-1.9844 v -5.6563 h -1.23437 v -1.3125 h 1.23437 v -2.4375 l 1.65625,-1 v 3.4375 h 1.6875 v 1.3125 h -1.6875 v 5.75 q 0,0.7188 0.0781,0.9219 0.0937,0.2031 0.29687,0.3281 0.20313,0.125 0.57813,0.125 0.26562,0 0.73437,-0.078 z m 0.90204,-3.4219 q 0,-2.7343 1.53125,-4.0625 1.26563,-1.0937 3.09375,-1.0937 2.03125,0 3.3125,1.3437 1.29688,1.3282 1.29688,3.6719 0,1.9063 -0.57813,3 -0.5625,1.0781 -1.65625,1.6875 -1.07812,0.5938 -2.375,0.5938 -2.0625,0 -3.34375,-1.3282 -1.28125,-1.3281 
-1.28125,-3.8125 z m 1.71875,0 q 0,1.8907 0.82813,2.8282 0.82812,0.9375 2.07812,0.9375 1.25,0 2.0625,-0.9375 0.82813,-0.9532 0.82813,-2.8907 0,-1.8281 -0.82813,-2.7656 -0.82812,-0.9375 -2.0625,-0.9375 -1.25,0 -2.07812,0.9375 -0.82813,0.9375 -0.82813,2.8281 z m 9.29761,4.9219 v -13.5937 h 1.67187 v 7.75 l 3.95313,-4.0157 h 2.15625 l -3.76563,3.6563 4.14063,6.2031 h -2.0625 l -3.25,-5.0312 -1.17188,1.125 v 3.9062 z m 16.0625,-3.1719 1.71875,0.2188 q -0.40625,1.5 -1.51563,2.3437 -1.09375,0.8282 -2.8125,0.8282 -2.15625,0 -3.42187,-1.3282 -1.26563,-1.3281 -1.26563,-3.7343 0,-2.4844 1.26563,-3.8594 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.3437 1.25,1.3438 1.25,3.7969 0,0.1406 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92188,2.4844 0.82812,0.8594 2.0625,0.8594 0.90625,0 1.54687,-0.4688 0.65625,-0.4844 1.04688,-1.5469 z m -5.48438,-2.7031 h 5.5 q -0.10937,-1.2344 -0.625,-1.8594 -0.79687,-0.9687 -2.07812,-0.9687 -1.14063,0 -1.9375,0.7812 -0.78125,0.7657 -0.85938,2.0469 z m 9.11011,5.875 v -9.8594 h 1.5 v 1.4063 q 1.09375,-1.625 3.14063,-1.625 0.89062,0 1.64062,0.3281 0.75,0.3125 1.10938,0.8438 0.375,0.5156 0.53125,1.2187 0.0937,0.4688 0.0937,1.625 v 6.0625 h -1.67188 v -6 q 0,-1.0156 -0.20312,-1.5156 -0.1875,-0.5156 -0.6875,-0.8125 -0.5,-0.2969 -1.17188,-0.2969 -1.0625,0 -1.84375,0.6719 -0.76562,0.6719 -0.76562,2.5781 v 5.375 z"
+ fill-rule="nonzero"
+ id="path173" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 1027.8976,907.0079 h 229.4804 v 94.6457 h -229.4804 z"
+ fill-rule="nonzero"
+ id="path175" />
+ <path
+ fill="#000000"
+ d="m 926.73278,824.15778 v -13.59375 h 6.0313 q 1.8125,0 2.75,0.35937 0.9531,0.35938 1.5156,1.29688 0.5625,0.92187 0.5625,2.04687 0,1.45313 -0.9375,2.45313 -0.9219,0.98437 -2.8906,1.25 0.7187,0.34375 1.0937,0.67187 0.7813,0.73438 1.4844,1.8125 l 2.375,3.70313 h -2.2656 l -1.7969,-2.82813 q -0.7969,-1.21875 -1.3125,-1.875 -0.5,-0.65625 -0.9063,-0.90625 -0.4062,-0.26562 -0.8125,-0.35937 -0.3125,-0.0781 -1.0156,-0.0781 h -2.0781 v 6.04688 z m 1.7969,-7.59375 h 3.8594 q 1.2343,0 1.9218,-0.25 0.7032,-0.26563 1.0625,-0.82813 0.375,-0.5625 0.375,-1.21875 0,-0.96875 -0.7031,-1.57812 -0.7031,-0.625 -2.2187,-0.625 h -4.2969 z m 18.1761,4.42187 1.7188,0.21875 q -0.4063,1.5 -1.5157,2.34375 -1.0937,0.82813 -2.8125,0.82813 -2.1562,0 -3.4218,-1.32813 -1.2657,-1.32812 -1.2657,-3.73437 0,-2.48438 1.2657,-3.85938 1.2812,-1.375 3.3281,-1.375 1.9844,0 3.2344,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.016,0.4375 h -7.3437 q 0.094,1.625 0.9219,2.48437 0.8281,0.85938 2.0625,0.85938 0.9062,0 1.5468,-0.46875 0.6563,-0.48438 1.0469,-1.54688 z m -5.4844,-2.70312 h 5.5 q -0.1093,-1.23438 -0.625,-1.85938 -0.7968,-0.96875 -2.0781,-0.96875 -1.1406,0 -1.9375,0.78125 -0.7812,0.76563 -0.8594,2.04688 z m 15.5008,5.875 v -1.25 q -0.9375,1.46875 -2.75,1.46875 -1.1719,0 -2.1719,-0.64063 -0.9844,-0.65625 -1.5312,-1.8125 -0.5313,-1.17187 -0.5313,-2.6875 0,-1.46875 0.4844,-2.67187 0.5,-1.20313 1.4687,-1.84375 0.9844,-0.64063 2.2032,-0.64063 0.8906,0 1.5781,0.375 0.7031,0.375 1.1406,0.98438 v -4.875 h 1.6563 v 13.59375 z m -5.2813,-4.92188 q 0,1.89063 0.7969,2.82813 0.8125,0.9375 1.8906,0.9375 1.0938,0 1.8594,-0.89063 0.7656,-0.89062 0.7656,-2.73437 0,-2.01563 -0.7812,-2.95313 -0.7813,-0.95312 -1.9219,-0.95312 -1.1094,0 -1.8594,0.90625 -0.75,0.90625 -0.75,2.85937 z m 9.282,-6.76562 v -1.90625 h 1.6719 v 1.90625 z m 0,11.6875 v -9.85938 h 1.6719 v 9.85938 z m 4.1135,0 v -9.85938 h 1.5 v 1.5 q 0.5782,-1.04687 1.0625,-1.375 0.4844,-0.34375 1.0782,-0.34375 0.8437,0 1.7187,0.54688 l -0.5781,1.54687 q 
-0.6094,-0.35937 -1.2344,-0.35937 -0.5469,0 -0.9844,0.32812 -0.4218,0.32813 -0.6093,0.90625 -0.2813,0.89063 -0.2813,1.95313 v 5.15625 z m 12.9783,-3.17188 1.7188,0.21875 q -0.4063,1.5 -1.5157,2.34375 -1.0937,0.82813 -2.8125,0.82813 -2.1562,0 -3.4218,-1.32813 -1.2657,-1.32812 -1.2657,-3.73437 0,-2.48438 1.2657,-3.85938 1.2812,-1.375 3.3281,-1.375 1.9844,0 3.2344,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.016,0.4375 h -7.3437 q 0.094,1.625 0.9219,2.48437 0.8281,0.85938 2.0625,0.85938 0.9062,0 1.5468,-0.46875 0.6563,-0.48438 1.0469,-1.54688 z m -5.4844,-2.70312 h 5.5 q -0.1093,-1.23438 -0.625,-1.85938 -0.7968,-0.96875 -2.0781,-0.96875 -1.1406,0 -1.9375,0.78125 -0.7812,0.76563 -0.8594,2.04688 z m 15.5476,2.26562 1.6407,0.21875 q -0.2657,1.6875 -1.375,2.65625 -1.1094,0.95313 -2.7344,0.95313 -2.0156,0 -3.25,-1.3125 -1.2188,-1.32813 -1.2188,-3.79688 0,-1.59375 0.5157,-2.78125 0.5312,-1.20312 1.6093,-1.79687 1.0938,-0.60938 2.3594,-0.60938 1.6094,0 2.625,0.8125 1.0156,0.8125 1.3125,2.3125 l -1.625,0.25 q -0.2344,-1 -0.8281,-1.5 -0.5938,-0.5 -1.4219,-0.5 -1.2656,0 -2.0625,0.90625 -0.7812,0.90625 -0.7812,2.85938 0,1.98437 0.7656,2.89062 0.7656,0.89063 1.9844,0.89063 0.9843,0 1.6406,-0.59375 0.6562,-0.60938 0.8437,-1.85938 z m 6.5469,2.10938 0.2344,1.48437 q -0.7031,0.14063 -1.2656,0.14063 -0.9063,0 -1.4063,-0.28125 -0.5,-0.29688 -0.7031,-0.75 -0.2031,-0.46875 -0.2031,-1.98438 v -5.65625 h -1.2344 v -1.3125 h 1.2344 v -2.4375 l 1.6562,-1 v 3.4375 h 1.6875 v 1.3125 h -1.6875 v 5.75 q 0,0.71875 0.078,0.92188 0.094,0.20312 0.2969,0.32812 0.2031,0.125 0.5781,0.125 0.2657,0 0.7344,-0.0781 z m 10.36662,0 0.2344,1.48437 q -0.7032,0.14063 -1.2657,0.14063 -0.9062,0 -1.4062,-0.28125 -0.5,-0.29688 -0.7031,-0.75 -0.2032,-0.46875 -0.2032,-1.98438 v -5.65625 h -1.2343 v -1.3125 h 1.2343 v -2.4375 l 1.6563,-1 v 3.4375 h 1.6875 v 1.3125 h -1.6875 v 5.75 q 0,0.71875 0.078,0.92188 0.094,0.20312 0.2969,0.32812 0.2031,0.125 0.5781,0.125 0.2656,0 0.7344,-0.0781 z m 0.9021,-3.42188 q 
0,-2.73437 1.5312,-4.0625 1.2657,-1.09375 3.0938,-1.09375 2.0312,0 3.3125,1.34375 1.2969,1.32813 1.2969,3.67188 0,1.90625 -0.5782,3 -0.5625,1.07812 -1.6562,1.6875 -1.0781,0.59375 -2.375,0.59375 -2.0625,0 -3.3438,-1.32813 -1.2812,-1.32812 -1.2812,-3.8125 z m 1.7187,0 q 0,1.89063 0.8282,2.82813 0.8281,0.9375 2.0781,0.9375 1.25,0 2.0625,-0.9375 0.8281,-0.95313 0.8281,-2.89063 0,-1.82812 -0.8281,-2.76562 -0.8281,-0.9375 -2.0625,-0.9375 -1.25,0 -2.0781,0.9375 -0.8282,0.9375 -0.8282,2.82812 z m 13.184,4.92188 5.2344,-13.59375 h 1.9375 l 5.5625,13.59375 h -2.0469 l -1.5937,-4.125 h -5.6875 l -1.4844,4.125 z m 3.9219,-5.57813 h 4.6094 l -1.4063,-3.78125 q -0.6562,-1.70312 -0.9687,-2.8125 -0.2657,1.3125 -0.7344,2.59375 z m 10.0217,5.57813 v -13.59375 h 5.125 q 1.3594,0 2.0781,0.125 1,0.17187 1.6719,0.64062 0.6719,0.46875 1.0781,1.3125 0.4219,0.84375 0.4219,1.84375 0,1.73438 -1.1094,2.9375 -1.0937,1.20313 -3.9843,1.20313 h -3.4844 v 5.53125 z m 1.7969,-7.14063 h 3.5156 q 1.75,0 2.4688,-0.64062 0.7343,-0.65625 0.7343,-1.82813 0,-0.85937 -0.4375,-1.46875 -0.4218,-0.60937 -1.125,-0.79687 -0.4531,-0.125 -1.6718,-0.125 h -3.4844 z m 10.9436,7.14063 v -13.59375 h 1.8125 v 13.59375 z"
+ fill-rule="nonzero"
+ id="path177" />
+ <path
+ fill="#bf9000"
+ d="M 550.4829,965.75207 H 706.8294 V 1042.571 H 550.4829 Z"
+ fill-rule="nonzero"
+ id="path183" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="M 550.4829,965.75207 H 706.8294 V 1042.571 H 550.4829 Z"
+ fill-rule="nonzero"
+ id="path185" />
+ <path
+ fill="#000000"
+ d="m 571.6152,1011.0814 v -13.59373 h 1.8125 v 13.59373 z m 11.05829,0 v -1.25 q -0.9375,1.4688 -2.75,1.4688 -1.17188,0 -2.17188,-0.6407 -0.98437,-0.6562 -1.53125,-1.8125 -0.53125,-1.1718 -0.53125,-2.6875 0,-1.4687 0.48438,-2.6718 0.5,-1.2032 1.46875,-1.8438 0.98437,-0.6406 2.20312,-0.6406 0.89063,0 1.57813,0.375 0.70312,0.375 1.14062,0.9844 v -4.87503 h 1.65625 v 13.59373 z m -5.28125,-4.9219 q 0,1.8907 0.79687,2.8282 0.8125,0.9375 1.89063,0.9375 1.09375,0 1.85937,-0.8907 0.76563,-0.8906 0.76563,-2.7343 0,-2.0157 -0.78125,-2.9532 -0.78125,-0.9531 -1.92188,-0.9531 -1.10937,0 -1.85937,0.9063 -0.75,0.9062 -0.75,2.8593 z m 16.01636,1.75 1.71875,0.2188 q -0.40625,1.5 -1.51563,2.3437 -1.09375,0.8282 -2.8125,0.8282 -2.15625,0 -3.42187,-1.3282 -1.26563,-1.3281 -1.26563,-3.7343 0,-2.4844 1.26563,-3.8594 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.3437 1.25,1.3438 1.25,3.7969 0,0.1406 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92188,2.4844 0.82812,0.8594 2.0625,0.8594 0.90625,0 1.54687,-0.4688 0.65625,-0.4844 1.04688,-1.5469 z m -5.48438,-2.7031 h 5.5 q -0.10937,-1.2344 -0.625,-1.8594 -0.79687,-0.9687 -2.07812,-0.9687 -1.14063,0 -1.9375,0.7812 -0.78125,0.7657 -0.85938,2.0469 z m 9.11011,5.875 v -9.8594 h 1.5 v 1.4063 q 1.09375,-1.625 3.14062,-1.625 0.89063,0 1.64063,0.3281 0.75,0.3125 1.10937,0.8438 0.375,0.5156 0.53125,1.2187 0.0937,0.4688 0.0937,1.625 v 6.0625 h -1.67187 v -6 q 0,-1.0156 -0.20313,-1.5156 -0.1875,-0.5156 -0.6875,-0.8125 -0.5,-0.2969 -1.17187,-0.2969 -1.0625,0 -1.84375,0.6719 -0.76563,0.6719 -0.76563,2.5781 v 5.375 z m 14.03192,-1.5 0.23437,1.4844 q -0.70312,0.1406 -1.26562,0.1406 -0.90625,0 -1.40625,-0.2812 -0.5,-0.2969 -0.70313,-0.75 -0.20312,-0.4688 -0.20312,-1.9844 v -5.6563 h -1.23438 v -1.3125 h 1.23438 v -2.43753 l 1.65625,-1 v 3.43753 h 1.6875 v 1.3125 h -1.6875 v 5.75 q 0,0.7188 0.0781,0.9219 0.0937,0.2031 0.29688,0.3281 0.20312,0.125 0.57812,0.125 0.26563,0 0.73438,-0.078 z m 1.54272,-10.18753 v -1.9062 h 1.67188 v 1.9062 z m 0,11.68753 v 
-9.8594 h 1.67188 v 9.8594 z m 4.5354,0 v -8.5469 H 615.66 v -1.3125 h 1.48437 v -1.0468 q 0,-0.98443 0.17188,-1.46883 0.23437,-0.6562 0.84375,-1.0469 0.60937,-0.4062 1.70312,-0.4062 0.70313,0 1.5625,0.1562 l -0.25,1.4688 q -0.51562,-0.094 -0.98437,-0.094 -0.76563,0 -1.07813,0.3282 -0.3125,0.3125 -0.3125,1.20313 v 0.9062 h 1.92188 v 1.3125 h -1.92188 v 8.5469 z m 4.69898,3.7969 -0.17188,-1.5625 q 0.54688,0.1406 0.95313,0.1406 0.54687,0 0.875,-0.1875 0.34375,-0.1875 0.5625,-0.5156 0.15625,-0.25 0.5,-1.25 0.0469,-0.1406 0.15625,-0.4063 l -3.73438,-9.875 h 1.79688 l 2.04687,5.7188 q 0.40625,1.0781 0.71875,2.2812 0.28125,-1.1562 0.6875,-2.25 l 2.09375,-5.75 h 1.67188 l -3.75,10.0313 q -0.59375,1.625 -0.9375,2.2344 -0.4375,0.8281 -1.01563,1.2031 -0.57812,0.3906 -1.375,0.3906 -0.48437,0 -1.07812,-0.2031 z m 21.04266,-3.7969 v -1.4531 q -1.14062,1.6719 -3.125,1.6719 -0.85937,0 -1.625,-0.3282 -0.75,-0.3437 -1.125,-0.8437 -0.35937,-0.5 -0.51562,-1.2344 -0.0937,-0.5 -0.0937,-1.5625 v -6.1094 h 1.67187 v 5.4688 q 0,1.3125 0.0937,1.7656 0.15625,0.6563 0.67188,1.0313 0.51562,0.375 1.26562,0.375 0.75,0 1.40625,-0.375 0.65625,-0.3907 0.92188,-1.0469 0.28125,-0.6719 0.28125,-1.9375 v -5.2813 h 1.67187 v 9.8594 z m 3.25073,-2.9375 1.65625,-0.2656 q 0.14063,1 0.76563,1.5312 0.64062,0.5157 1.78125,0.5157 1.15625,0 1.70312,-0.4688 0.5625,-0.4687 0.5625,-1.0937 0,-0.5625 -0.48437,-0.8907 -0.34375,-0.2187 -1.70313,-0.5625 -1.84375,-0.4687 -2.5625,-0.7968 -0.70312,-0.3438 -1.07812,-0.9375 -0.35938,-0.6094 -0.35938,-1.3282 0,-0.6562 0.29688,-1.2187 0.3125,-0.5625 0.82812,-0.9375 0.39063,-0.2813 1.0625,-0.4844 0.67188,-0.2031 1.4375,-0.2031 1.17188,0 2.04688,0.3437 0.875,0.3282 1.28125,0.9063 0.42187,0.5625 0.57812,1.5156 l -1.625,0.2188 q -0.10937,-0.75 -0.65625,-1.1719 -0.53125,-0.4375 -1.5,-0.4375 -1.15625,0 -1.64062,0.3906 -0.48438,0.375 -0.48438,0.875 0,0.3281 0.20313,0.5938 0.20312,0.2656 0.64062,0.4375 0.25,0.094 1.46875,0.4375 1.76563,0.4687 2.46875,0.7656 0.70313,0.2969 
1.09375,0.875 0.40625,0.5781 0.40625,1.4375 0,0.8281 -0.48437,1.5781 -0.48438,0.7344 -1.40625,1.1406 -0.92188,0.3907 -2.07813,0.3907 -1.92187,0 -2.9375,-0.7969 -1,-0.7969 -1.28125,-2.3594 z m 16.75,-0.2344 1.71875,0.2188 q -0.40625,1.5 -1.51562,2.3437 -1.09375,0.8282 -2.8125,0.8282 -2.15625,0 -3.42188,-1.3282 -1.26562,-1.3281 -1.26562,-3.7343 0,-2.4844 1.26562,-3.8594 1.28125,-1.375 3.32813,-1.375 1.98437,0 3.23437,1.3437 1.25,1.3438 1.25,3.7969 0,0.1406 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92187,2.4844 0.82813,0.8594 2.0625,0.8594 0.90625,0 1.54688,-0.4688 0.65625,-0.4844 1.04687,-1.5469 z m -5.48437,-2.7031 h 5.5 q -0.10938,-1.2344 -0.625,-1.8594 -0.79688,-0.9687 -2.07813,-0.9687 -1.14062,0 -1.9375,0.7812 -0.78125,0.7657 -0.85937,2.0469 z m 9.09442,5.875 v -9.8594 h 1.5 v 1.5 q 0.57813,-1.0468 1.0625,-1.375 0.48438,-0.3437 1.07813,-0.3437 0.84375,0 1.71875,0.5469 l -0.57813,1.5468 q -0.60937,-0.3593 -1.23437,-0.3593 -0.54688,0 -0.98438,0.3281 -0.42187,0.3281 -0.60937,0.9062 -0.28125,0.8907 -0.28125,1.9532 v 5.1562 z"
+ fill-rule="nonzero"
+ id="path187" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 490.2467,880.72317 v 42.5147 h 138.42523 v 42.5247"
+ fill-rule="nonzero"
+ id="path189" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="m 490.2467,880.72317 v 42.5147 h 138.42523 v 39.0976"
+ fill-rule="evenodd"
+ id="path191" />
+ <path
+ fill="#000000"
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linecap="butt"
+ d="m 628.67194,962.33547 -1.12463,-1.1246 1.12463,3.0898 1.12457,-3.0898 z"
+ fill-rule="evenodd"
+ id="path193" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 767.31494,880.72317 v 42.5147 H 628.66931 v 42.5247"
+ fill-rule="nonzero"
+ id="path195" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="m 767.31494,880.72317 v 42.5147 H 628.66931 v 39.0976"
+ fill-rule="evenodd"
+ id="path197" />
+ <path
+ fill="#000000"
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linecap="butt"
+ d="m 628.6693,962.33547 -1.12463,-1.1246 1.12463,3.0898 1.12457,-3.0898 z"
+ fill-rule="evenodd"
+ id="path199" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 623.2572,704.6588 4,47.90552"
+ fill-rule="nonzero"
+ id="path201" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 628.6562,1198.0052 v 25.0021 h 385.4515 V 669.53278 H 701.44356"
+ fill-rule="nonzero"
+ id="path207" />
+ <path
+ fill="#000000"
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linecap="butt"
+ d="m 704.87067,669.53284 1.12457,-1.12463 -3.08978,1.12463 3.08978,1.12457 z"
+ fill-rule="evenodd"
+ id="path211" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 701.4305,651.92975 522.5573,3.07751 V 73.564335 H 704.8471"
+ fill-rule="nonzero"
+ id="path213" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="m 701.4304,651.92975 522.5575,3.07751 V 73.564335 H 708.27415"
+ fill-rule="evenodd"
+ id="path215" />
+ <path
+ fill="#000000"
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linecap="butt"
+ d="m 708.2742,73.56431 1.12463,-1.124588 -3.08978,1.124588 3.08978,1.12458 z"
+ fill-rule="evenodd"
+ id="path217" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 808.0315,611.3517 h 466.8661 v 43.65356 H 808.0315 Z"
+ fill-rule="nonzero"
+ id="path219" />
+ <path
+ fill="#000000"
+ d="m 818.5315,638.2717 v -13.59375 h 6.03125 q 1.8125,0 2.75,0.35937 0.95313,0.35938 1.51563,1.29688 0.5625,0.92187 0.5625,2.04687 0,1.45313 -0.9375,2.45313 -0.92188,0.98437 -2.89063,1.25 0.71875,0.34375 1.09375,0.67187 0.78125,0.73438 1.48438,1.8125 l 2.375,3.70313 h -2.26563 l -1.79687,-2.82813 q -0.79688,-1.21875 -1.3125,-1.875 -0.5,-0.65625 -0.90625,-0.90625 -0.40625,-0.26562 -0.8125,-0.35937 -0.3125,-0.0781 -1.01563,-0.0781 h -2.07812 v 6.04688 z m 1.79688,-7.59375 h 3.85937 q 1.23438,0 1.92188,-0.25 0.70312,-0.26563 1.0625,-0.82813 0.375,-0.5625 0.375,-1.21875 0,-0.96875 -0.70313,-1.57812 -0.70312,-0.625 -2.21875,-0.625 h -4.29687 z m 18.17608,4.42187 1.71875,0.21875 q -0.40625,1.5 -1.51562,2.34375 -1.09375,0.82813 -2.8125,0.82813 -2.15625,0 -3.42188,-1.32813 -1.26562,-1.32812 -1.26562,-3.73437 0,-2.48438 1.26562,-3.85938 1.28125,-1.375 3.32813,-1.375 1.98437,0 3.23437,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92187,2.48437 0.82813,0.85938 2.0625,0.85938 0.90625,0 1.54688,-0.46875 0.65625,-0.48438 1.04687,-1.54688 z m -5.48437,-2.70312 h 5.5 q -0.10938,-1.23438 -0.625,-1.85938 -0.79688,-0.96875 -2.07813,-0.96875 -1.14062,0 -1.9375,0.78125 -0.78125,0.76563 -0.85937,2.04688 z m 15.50073,5.875 v -1.25 q -0.9375,1.46875 -2.75,1.46875 -1.17188,0 -2.17188,-0.64063 -0.98437,-0.65625 -1.53125,-1.8125 -0.53125,-1.17187 -0.53125,-2.6875 0,-1.46875 0.48438,-2.67187 0.5,-1.20313 1.46875,-1.84375 0.98437,-0.64063 2.20312,-0.64063 0.89063,0 1.57813,0.375 0.70312,0.375 1.14062,0.98438 v -4.875 h 1.65625 v 13.59375 z m -5.28125,-4.92188 q 0,1.89063 0.79687,2.82813 0.8125,0.9375 1.89063,0.9375 1.09375,0 1.85937,-0.89063 0.76563,-0.89062 0.76563,-2.73437 0,-2.01563 -0.78125,-2.95313 -0.78125,-0.95312 -1.92188,-0.95312 -1.10937,0 -1.85937,0.90625 -0.75,0.90625 -0.75,2.85937 z m 9.28192,-6.76562 v -1.90625 h 1.67187 v 1.90625 z m 0,11.6875 v -9.85938 h 1.67187 v 9.85938 z m 4.11359,0 v -9.85938 h 1.5 v 1.5 q 0.57812,-1.04687 
1.0625,-1.375 0.48437,-0.34375 1.07812,-0.34375 0.84375,0 1.71875,0.54688 l -0.57812,1.54687 q -0.60938,-0.35937 -1.23438,-0.35937 -0.54687,0 -0.98437,0.32812 -0.42188,0.32813 -0.60938,0.90625 -0.28125,0.89063 -0.28125,1.95313 v 5.15625 z m 12.97833,-3.17188 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82813 -2.8125,0.82813 -2.15625,0 -3.42187,-1.32813 -1.26563,-1.32812 -1.26563,-3.73437 0,-2.48438 1.26563,-3.85938 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92188,2.48437 0.82812,0.85938 2.0625,0.85938 0.90625,0 1.54687,-0.46875 0.65625,-0.48438 1.04688,-1.54688 z m -5.48438,-2.70312 h 5.5 q -0.10937,-1.23438 -0.625,-1.85938 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76563 -0.85938,2.04688 z m 15.54755,2.26562 1.64062,0.21875 q -0.26562,1.6875 -1.375,2.65625 -1.10937,0.95313 -2.73437,0.95313 -2.01563,0 -3.25,-1.3125 -1.21875,-1.32813 -1.21875,-3.79688 0,-1.59375 0.51562,-2.78125 0.53125,-1.20312 1.60938,-1.79687 1.09375,-0.60938 2.35937,-0.60938 1.60938,0 2.625,0.8125 1.01563,0.8125 1.3125,2.3125 l -1.625,0.25 q -0.23437,-1 -0.82812,-1.5 -0.59375,-0.5 -1.42188,-0.5 -1.26562,0 -2.0625,0.90625 -0.78125,0.90625 -0.78125,2.85938 0,1.98437 0.76563,2.89062 0.76562,0.89063 1.98437,0.89063 0.98438,0 1.64063,-0.59375 0.65625,-0.60938 0.84375,-1.85938 z m 6.54687,2.10938 0.23438,1.48437 q -0.70313,0.14063 -1.26563,0.14063 -0.90625,0 -1.40625,-0.28125 -0.5,-0.29688 -0.70312,-0.75 -0.20313,-0.46875 -0.20313,-1.98438 v -5.65625 h -1.23437 v -1.3125 h 1.23437 v -2.4375 l 1.65625,-1 v 3.4375 h 1.6875 v 1.3125 h -1.6875 v 5.75 q 0,0.71875 0.0781,0.92188 0.0937,0.20312 0.29687,0.32812 0.20313,0.125 0.57813,0.125 0.26562,0 0.73437,-0.0781 z m 10.36664,0 0.23438,1.48437 q -0.70313,0.14063 -1.26563,0.14063 -0.90625,0 -1.40625,-0.28125 -0.5,-0.29688 -0.70312,-0.75 -0.20313,-0.46875 -0.20313,-1.98438 v -5.65625 h -1.23437 v -1.3125 h 1.23437 v -2.4375 l 
1.65625,-1 v 3.4375 h 1.6875 v 1.3125 h -1.6875 v 5.75 q 0,0.71875 0.0781,0.92188 0.0937,0.20312 0.29687,0.32812 0.20313,0.125 0.57813,0.125 0.26562,0 0.73437,-0.0781 z m 0.90204,-3.42188 q 0,-2.73437 1.53125,-4.0625 1.26563,-1.09375 3.09375,-1.09375 2.03125,0 3.3125,1.34375 1.29688,1.32813 1.29688,3.67188 0,1.90625 -0.57813,3 -0.5625,1.07812 -1.65625,1.6875 -1.07812,0.59375 -2.375,0.59375 -2.0625,0 -3.34375,-1.32813 -1.28125,-1.32812 -1.28125,-3.8125 z m 1.71875,0 q 0,1.89063 0.82813,2.82813 0.82812,0.9375 2.07812,0.9375 1.25,0 2.0625,-0.9375 0.82813,-0.95313 0.82813,-2.89063 0,-1.82812 -0.82813,-2.76562 -0.82812,-0.9375 -2.0625,-0.9375 -1.25,0 -2.07812,0.9375 -0.82813,0.9375 -0.82813,2.82812 z m 13.21527,5.15625 3.9375,-14.0625 h 1.34375 l -3.9375,14.0625 z m 8.26142,-0.23437 -3.01563,-9.85938 h 1.71875 l 1.5625,5.6875 0.59375,2.125 q 0.0312,-0.15625 0.5,-2.03125 l 1.57813,-5.78125 h 1.71875 l 1.46875,5.71875 0.48437,1.89063 0.57813,-1.90625 1.6875,-5.70313 h 1.625 l -3.07813,9.85938 h -1.73437 l -1.57813,-5.90625 -0.375,-1.67188 -2,7.57813 z m 18.39483,-3.17188 1.71875,0.21875 q -0.40625,1.5 -1.51562,2.34375 -1.09375,0.82813 -2.8125,0.82813 -2.15625,0 -3.42188,-1.32813 -1.26562,-1.32812 -1.26562,-3.73437 0,-2.48438 1.26562,-3.85938 1.28125,-1.375 3.32813,-1.375 1.98437,0 3.23437,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92187,2.48437 0.82813,0.85938 2.0625,0.85938 0.90625,0 1.54688,-0.46875 0.65625,-0.48438 1.04687,-1.54688 z m -5.48437,-2.70312 h 5.5 q -0.10938,-1.23438 -0.625,-1.85938 -0.79688,-0.96875 -2.07813,-0.96875 -1.14062,0 -1.9375,0.78125 -0.78125,0.76563 -0.85937,2.04688 z m 9.07885,5.875 v -13.59375 h 1.67188 v 13.59375 z m 10.61359,-3.60938 1.64062,0.21875 q -0.26562,1.6875 -1.375,2.65625 -1.10937,0.95313 -2.73437,0.95313 -2.01563,0 -3.25,-1.3125 -1.21875,-1.32813 -1.21875,-3.79688 0,-1.59375 0.51562,-2.78125 0.53125,-1.20312 1.60938,-1.79687 1.09375,-0.60938 2.35937,-0.60938 1.60938,0 2.625,0.8125 
1.01563,0.8125 1.3125,2.3125 l -1.625,0.25 q -0.23437,-1 -0.82812,-1.5 -0.59375,-0.5 -1.42188,-0.5 -1.26562,0 -2.0625,0.90625 -0.78125,0.90625 -0.78125,2.85938 0,1.98437 0.76563,2.89062 0.76562,0.89063 1.98437,0.89063 0.98438,0 1.64063,-0.59375 0.65625,-0.60938 0.84375,-1.85938 z m 2.26562,-1.3125 q 0,-2.73437 1.53125,-4.0625 1.26563,-1.09375 3.09375,-1.09375 2.03125,0 3.3125,1.34375 1.29688,1.32813 1.29688,3.67188 0,1.90625 -0.57813,3 -0.5625,1.07812 -1.65625,1.6875 -1.07812,0.59375 -2.375,0.59375 -2.0625,0 -3.34375,-1.32813 -1.28125,-1.32812 -1.28125,-3.8125 z m 1.71875,0 q 0,1.89063 0.82813,2.82813 0.82812,0.9375 2.07812,0.9375 1.25,0 2.0625,-0.9375 0.82813,-0.95313 0.82813,-2.89063 0,-1.82812 -0.82813,-2.76562 -0.82812,-0.9375 -2.0625,-0.9375 -1.25,0 -2.07812,0.9375 -0.82813,0.9375 -0.82813,2.82812 z m 9.28193,4.92188 v -9.85938 h 1.5 v 1.39063 q 0.45312,-0.71875 1.21875,-1.15625 0.78125,-0.45313 1.76562,-0.45313 1.09375,0 1.79688,0.45313 0.70312,0.45312 0.98437,1.28125 1.17188,-1.73438 3.04688,-1.73438 1.46875,0 2.25,0.8125 0.79687,0.8125 0.79687,2.5 v 6.76563 h -1.67187 v -6.20313 q 0,-1 -0.15625,-1.4375 -0.15625,-0.45312 -0.59375,-0.71875 -0.42188,-0.26562 -1,-0.26562 -1.03125,0 -1.71875,0.6875 -0.6875,0.6875 -0.6875,2.21875 v 5.71875 h -1.67188 v -6.40625 q 0,-1.10938 -0.40625,-1.65625 -0.40625,-0.5625 -1.34375,-0.5625 -0.70312,0 -1.3125,0.375 -0.59375,0.35937 -0.85937,1.07812 -0.26563,0.71875 -0.26563,2.0625 v 5.10938 z m 22.29083,-3.17188 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82813 -2.8125,0.82813 -2.15625,0 -3.42187,-1.32813 -1.26563,-1.32812 -1.26563,-3.73437 0,-2.48438 1.26563,-3.85938 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92188,2.48437 0.82812,0.85938 2.0625,0.85938 0.90625,0 1.54687,-0.46875 0.65625,-0.48438 1.04688,-1.54688 z m -5.48438,-2.70312 h 5.5 q -0.10937,-1.23438 -0.625,-1.85938 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 
-1.9375,0.78125 -0.78125,0.76563 -0.85938,2.04688 z m 14.2934,9.65625 v -13.64063 h 1.53125 v 1.28125 q 0.53125,-0.75 1.20313,-1.125 0.6875,-0.375 1.6406,-0.375 1.2656,0 2.2344,0.65625 0.9687,0.64063 1.4531,1.82813 0.5,1.1875 0.5,2.59375 0,1.51562 -0.5469,2.73437 -0.5468,1.20313 -1.5781,1.84375 -1.0312,0.64063 -2.1719,0.64063 -0.8437,0 -1.5156,-0.34375 -0.65623,-0.35938 -1.07811,-0.89063 v 4.79688 z m 1.51562,-8.65625 q 0,1.90625 0.76563,2.8125 0.78123,0.90625 1.87503,0.90625 1.1093,0 1.8906,-0.9375 0.7969,-0.9375 0.7969,-2.92188 0,-1.875 -0.7813,-2.8125 -0.7656,-0.9375 -1.8437,-0.9375 -1.0625,0 -1.89066,1 -0.8125,1 -0.8125,2.89063 z m 15.29766,3.65625 q -0.9375,0.79687 -1.7969,1.125 -0.8594,0.3125 -1.8438,0.3125 -1.6093,0 -2.4843,-0.78125 -0.875,-0.79688 -0.875,-2.03125 0,-0.73438 0.3281,-1.32813 0.3281,-0.59375 0.8594,-0.95312 0.5312,-0.35938 1.2031,-0.54688 0.5,-0.14062 1.4844,-0.25 2.0312,-0.25 2.9843,-0.57812 0,-0.34375 0,-0.4375 0,-1.01563 -0.4687,-1.4375 -0.6406,-0.5625 -1.9063,-0.5625 -1.1718,0 -1.7343,0.40625 -0.5625,0.40625 -0.8282,1.46875 l -1.6406,-0.23438 q 0.2344,-1.04687 0.7344,-1.6875 0.5156,-0.64062 1.4687,-0.98437 0.9688,-0.35938 2.25,-0.35938 1.2657,0 2.0469,0.29688 0.7813,0.29687 1.1563,0.75 0.375,0.45312 0.5156,1.14062 0.094,0.42188 0.094,1.53125 v 2.23438 q 0,2.32812 0.094,2.95312 0.1094,0.60938 0.4375,1.17188 h -1.75 q -0.2656,-0.51563 -0.3281,-1.21875 z m -0.1407,-3.71875 q -0.9062,0.35937 -2.7343,0.625 -1.0313,0.14062 -1.4532,0.32812 -0.4218,0.1875 -0.6562,0.54688 -0.2344,0.35937 -0.2344,0.79687 0,0.67188 0.5,1.125 0.5156,0.4375 1.4844,0.4375 0.9687,0 1.7187,-0.42187 0.75,-0.4375 1.1094,-1.15625 0.2656,-0.57813 0.2656,-1.67188 z m 3.7819,5.75 1.6094,0.25 q 0.1094,0.75 0.5781,1.09375 0.6094,0.45312 1.6875,0.45312 1.1719,0 1.7969,-0.46875 0.6251,-0.45312 0.8595,-1.28125 0.125,-0.51562 0.1093,-2.15625 -1.0938,1.29688 -2.7188,1.29688 -2.0312,0 -3.1562,-1.46875 -1.1094,-1.46875 -1.1094,-3.51563 0,-1.40625 0.5156,-2.59375 0.5156,-1.20312 
1.4844,-1.84375 0.9687,-0.65625 2.2656,-0.65625 1.75,0 2.8751,1.40625 v -1.1875 h 1.5469 v 8.51563 q 0,2.3125 -0.4688,3.26562 -0.4687,0.96875 -1.4844,1.51563 -1.0157,0.5625 -2.5,0.5625 -1.7657,0 -2.8594,-0.79688 -1.0781,-0.79687 -1.0313,-2.39062 z m 1.375,-5.92188 q 0,1.95313 0.7657,2.84375 0.7812,0.89063 1.9375,0.89063 1.1406,0 1.9219,-0.89063 0.7813,-0.89062 0.7813,-2.78125 0,-1.8125 -0.8125,-2.71875 -0.797,-0.92187 -1.922,-0.92187 -1.1094,0 -1.8906,0.90625 -0.7813,0.89062 -0.7813,2.67187 z m 16.0477,1.9375 1.7188,0.21875 q -0.4063,1.5 -1.5157,2.34375 -1.0937,0.82813 -2.8125,0.82813 -2.1562,0 -3.4218,-1.32813 -1.2657,-1.32812 -1.2657,-3.73437 0,-2.48438 1.2657,-3.85938 1.2812,-1.375 3.3281,-1.375 1.9844,0 3.2344,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.016,0.4375 h -7.3437 q 0.094,1.625 0.9219,2.48437 0.8281,0.85938 2.0625,0.85938 0.9062,0 1.5468,-0.46875 0.6563,-0.48438 1.0469,-1.54688 z m -5.4844,-2.70312 h 5.5 q -0.1093,-1.23438 -0.625,-1.85938 -0.7968,-0.96875 -2.0781,-0.96875 -1.1406,0 -1.9375,0.78125 -0.7812,0.76563 -0.8594,2.04688 z m 16.1215,5.875 -3.0156,-9.85938 h 1.7187 l 1.5625,5.6875 0.5938,2.125 q 0.031,-0.15625 0.5,-2.03125 l 1.5781,-5.78125 h 1.7188 l 1.4687,5.71875 0.4844,1.89063 0.5781,-1.90625 1.6875,-5.70313 h 1.625 l -3.0781,9.85938 h -1.7344 l -1.5781,-5.90625 -0.375,-1.67188 -2,7.57813 z m 11.6604,-11.6875 v -1.90625 h 1.6719 v 1.90625 z m 0,11.6875 v -9.85938 h 1.6719 v 9.85938 z m 7.7855,-1.5 0.2344,1.48437 q -0.7031,0.14063 -1.2656,0.14063 -0.9063,0 -1.4063,-0.28125 -0.5,-0.29688 -0.7031,-0.75 -0.2031,-0.46875 -0.2031,-1.98438 v -5.65625 h -1.2344 v -1.3125 h 1.2344 v -2.4375 l 1.6562,-1 v 3.4375 h 1.6875 v 1.3125 h -1.6875 v 5.75 q 0,0.71875 0.078,0.92188 0.094,0.20312 0.2969,0.32812 0.2031,0.125 0.5781,0.125 0.2657,0 0.7344,-0.0781 z m 1.5271,1.5 v -13.59375 h 1.6719 v 4.875 q 1.1719,-1.35938 2.9531,-1.35938 1.0938,0 1.8906,0.4375 0.8125,0.42188 1.1563,1.1875 0.3594,0.76563 0.3594,2.20313 v 6.25 h -1.6719 v -6.25 q 0,-1.25 
-0.5469,-1.8125 -0.5469,-0.57813 -1.5312,-0.57813 -0.75,0 -1.4063,0.39063 -0.6406,0.375 -0.9219,1.04687 -0.2812,0.65625 -0.2812,1.8125 v 5.39063 z m 14.8871,-2.9375 1.6563,-0.26563 q 0.1406,1 0.7656,1.53125 0.6406,0.51563 1.7812,0.51563 1.1563,0 1.7032,-0.46875 0.5625,-0.46875 0.5625,-1.09375 0,-0.5625 -0.4844,-0.89063 -0.3438,-0.21875 -1.7031,-0.5625 -1.8438,-0.46875 -2.5625,-0.79687 -0.7032,-0.34375 -1.0782,-0.9375 -0.3593,-0.60938 -0.3593,-1.32813 0,-0.65625 0.2968,-1.21875 0.3125,-0.5625 0.8282,-0.9375 0.3906,-0.28125 1.0625,-0.48437 0.6718,-0.20313 1.4375,-0.20313 1.1718,0 2.0468,0.34375 0.875,0.32813 1.2813,0.90625 0.4219,0.5625 0.5781,1.51563 l -1.625,0.21875 q -0.1094,-0.75 -0.6562,-1.17188 -0.5313,-0.4375 -1.5,-0.4375 -1.1563,0 -1.6407,0.39063 -0.4843,0.375 -0.4843,0.875 0,0.32812 0.2031,0.59375 0.2031,0.26562 0.6406,0.4375 0.25,0.0937 1.4688,0.4375 1.7656,0.46875 2.4687,0.76562 0.7031,0.29688 1.0938,0.875 0.4062,0.57813 0.4062,1.4375 0,0.82813 -0.4844,1.57813 -0.4843,0.73437 -1.4062,1.14062 -0.9219,0.39063 -2.0781,0.39063 -1.9219,0 -2.9375,-0.79688 -1,-0.79687 -1.2813,-2.35937 z m 16.75,-0.23438 1.7188,0.21875 q -0.4063,1.5 -1.5157,2.34375 -1.0937,0.82813 -2.8125,0.82813 -2.1562,0 -3.4218,-1.32813 -1.2657,-1.32812 -1.2657,-3.73437 0,-2.48438 1.2657,-3.85938 1.2812,-1.375 3.3281,-1.375 1.9844,0 3.2344,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.016,0.4375 h -7.3437 q 0.094,1.625 0.9219,2.48437 0.8281,0.85938 2.0625,0.85938 0.9062,0 1.5468,-0.46875 0.6563,-0.48438 1.0469,-1.54688 z m -5.4844,-2.70312 h 5.5 q -0.1093,-1.23438 -0.625,-1.85938 -0.7968,-0.96875 -2.0781,-0.96875 -1.1406,0 -1.9375,0.78125 -0.7812,0.76563 -0.8594,2.04688 z m 8.4383,2.9375 1.6562,-0.26563 q 0.1406,1 0.7656,1.53125 0.6407,0.51563 1.7813,0.51563 1.1562,0 1.7031,-0.46875 0.5625,-0.46875 0.5625,-1.09375 0,-0.5625 -0.4844,-0.89063 -0.3437,-0.21875 -1.7031,-0.5625 -1.8437,-0.46875 -2.5625,-0.79687 -0.7031,-0.34375 -1.0781,-0.9375 -0.3594,-0.60938 -0.3594,-1.32813 0,-0.65625 
0.2969,-1.21875 0.3125,-0.5625 0.8281,-0.9375 0.3906,-0.28125 1.0625,-0.48437 0.6719,-0.20313 1.4375,-0.20313 1.1719,0 2.0469,0.34375 0.875,0.32813 1.2812,0.90625 0.4219,0.5625 0.5782,1.51563 l -1.625,0.21875 q -0.1094,-0.75 -0.6563,-1.17188 -0.5312,-0.4375 -1.5,-0.4375 -1.1562,0 -1.6406,0.39063 -0.4844,0.375 -0.4844,0.875 0,0.32812 0.2031,0.59375 0.2032,0.26562 0.6407,0.4375 0.25,0.0937 1.4687,0.4375 1.7656,0.46875 2.4688,0.76562 0.7031,0.29688 1.0937,0.875 0.4063,0.57813 0.4063,1.4375 0,0.82813 -0.4844,1.57813 -0.4844,0.73437 -1.4063,1.14062 -0.9218,0.39063 -2.0781,0.39063 -1.9219,0 -2.9375,-0.79688 -1,-0.79687 -1.2812,-2.35937 z m 9.3281,0 1.6562,-0.26563 q 0.1407,1 0.7657,1.53125 0.6406,0.51563 1.7812,0.51563 1.1563,0 1.7031,-0.46875 0.5625,-0.46875 0.5625,-1.09375 0,-0.5625 -0.4843,-0.89063 -0.3438,-0.21875 -1.7032,-0.5625 -1.8437,-0.46875 -2.5625,-0.79687 -0.7031,-0.34375 -1.0781,-0.9375 -0.3594,-0.60938 -0.3594,-1.32813 0,-0.65625 0.2969,-1.21875 0.3125,-0.5625 0.8281,-0.9375 0.3907,-0.28125 1.0625,-0.48437 0.6719,-0.20313 1.4375,-0.20313 1.1719,0 2.0469,0.34375 0.875,0.32813 1.2813,0.90625 0.4218,0.5625 0.5781,1.51563 l -1.625,0.21875 q -0.1094,-0.75 -0.6563,-1.17188 -0.5312,-0.4375 -1.5,-0.4375 -1.1562,0 -1.6406,0.39063 -0.4844,0.375 -0.4844,0.875 0,0.32812 0.2032,0.59375 0.2031,0.26562 0.6406,0.4375 0.25,0.0937 1.4687,0.4375 1.7657,0.46875 2.4688,0.76562 0.7031,0.29688 1.0937,0.875 0.4063,0.57813 0.4063,1.4375 0,0.82813 -0.4844,1.57813 -0.4844,0.73437 -1.4062,1.14062 -0.9219,0.39063 -2.0782,0.39063 -1.9218,0 -2.9375,-0.79688 -1,-0.79687 -1.2812,-2.35937 z m 10.0156,-8.75 v -1.90625 h 1.6719 v 1.90625 z m 0,11.6875 v -9.85938 h 1.6719 v 9.85938 z m 3.5043,-4.92188 q 0,-2.73437 1.5312,-4.0625 1.2657,-1.09375 3.0938,-1.09375 2.0312,0 3.3125,1.34375 1.2969,1.32813 1.2969,3.67188 0,1.90625 -0.5782,3 -0.5625,1.07812 -1.6562,1.6875 -1.0781,0.59375 -2.375,0.59375 -2.0625,0 -3.3438,-1.32813 -1.2812,-1.32812 -1.2812,-3.8125 z m 1.7187,0 q 0,1.89063 0.8282,2.82813 
0.8281,0.9375 2.0781,0.9375 1.25,0 2.0625,-0.9375 0.8281,-0.95313 0.8281,-2.89063 0,-1.82812 -0.8281,-2.76562 -0.8281,-0.9375 -2.0625,-0.9375 -1.25,0 -2.0781,0.9375 -0.8282,0.9375 -0.8282,2.82812 z m 9.282,4.92188 v -9.85938 h 1.5 v 1.40625 q 1.0938,-1.625 3.1406,-1.625 0.8907,0 1.6407,0.32813 0.75,0.3125 1.1093,0.84375 0.375,0.51562 0.5313,1.21875 0.094,0.46875 0.094,1.625 v 6.0625 h -1.6718 v -6 q 0,-1.01563 -0.2032,-1.51563 -0.1875,-0.51562 -0.6875,-0.8125 -0.5,-0.29687 -1.1718,-0.29687 -1.0625,0 -1.8438,0.67187 -0.7656,0.67188 -0.7656,2.57813 v 5.375 z m 19.2152,-1.5 0.2344,1.48437 q -0.7031,0.14063 -1.2656,0.14063 -0.9063,0 -1.4063,-0.28125 -0.5,-0.29688 -0.7031,-0.75 -0.2031,-0.46875 -0.2031,-1.98438 v -5.65625 h -1.2344 v -1.3125 h 1.2344 v -2.4375 l 1.6562,-1 v 3.4375 h 1.6875 v 1.3125 h -1.6875 v 5.75 q 0,0.71875 0.078,0.92188 0.094,0.20312 0.2968,0.32812 0.2032,0.125 0.5782,0.125 0.2656,0 0.7343,-0.0781 z m 0.9021,-3.42188 q 0,-2.73437 1.5313,-4.0625 1.2656,-1.09375 3.0937,-1.09375 2.0313,0 3.3125,1.34375 1.2969,1.32813 1.2969,3.67188 0,1.90625 -0.5781,3 -0.5625,1.07812 -1.6563,1.6875 -1.0781,0.59375 -2.375,0.59375 -2.0625,0 -3.3437,-1.32813 -1.2813,-1.32812 -1.2813,-3.8125 z m 1.7188,0 q 0,1.89063 0.8281,2.82813 0.8281,0.9375 2.0781,0.9375 1.25,0 2.0625,-0.9375 0.8282,-0.95313 0.8282,-2.89063 0,-1.82812 -0.8282,-2.76562 -0.8281,-0.9375 -2.0625,-0.9375 -1.25,0 -2.0781,0.9375 -0.8281,0.9375 -0.8281,2.82812 z m 9.2976,4.92188 v -13.59375 h 1.6719 v 7.75 l 3.9531,-4.01563 h 2.1562 l -3.7656,3.65625 4.1406,6.20313 h -2.0625 l -3.25,-5.03125 -1.1718,1.125 v 3.90625 z m 16.0625,-3.17188 1.7187,0.21875 q -0.4062,1.5 -1.5156,2.34375 -1.0937,0.82813 -2.8125,0.82813 -2.1562,0 -3.4219,-1.32813 -1.2656,-1.32812 -1.2656,-3.73437 0,-2.48438 1.2656,-3.85938 1.2813,-1.375 3.3282,-1.375 1.9843,0 3.2343,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.016,0.4375 h -7.3437 q 0.094,1.625 0.9218,2.48437 0.8282,0.85938 2.0625,0.85938 0.9063,0 1.5469,-0.46875 0.6563,-0.48438 
1.0469,-1.54688 z m -5.4844,-2.70312 h 5.5 q -0.1094,-1.23438 -0.625,-1.85938 -0.7969,-0.96875 -2.0781,-0.96875 -1.1406,0 -1.9375,0.78125 -0.7813,0.76563 -0.8594,2.04688 z m 9.1101,5.875 v -9.85938 h 1.5 v 1.40625 q 1.0938,-1.625 3.1406,-1.625 0.8907,0 1.6407,0.32813 0.75,0.3125 1.1093,0.84375 0.375,0.51562 0.5313,1.21875 0.094,0.46875 0.094,1.625 v 6.0625 h -1.6718 v -6 q 0,-1.01563 -0.2032,-1.51563 -0.1875,-0.51562 -0.6875,-0.8125 -0.5,-0.29687 -1.1718,-0.29687 -1.0625,0 -1.8438,0.67187 -0.7656,0.67188 -0.7656,2.57813 v 5.375 z"
+ fill-rule="nonzero"
+ id="path221" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 767.6221,103.74803 h 179.27557 v 43.65355 H 767.6221 Z"
+ fill-rule="nonzero"
+ id="path223" />
+ <path
+ fill="#000000"
+ d="m 778.1221,130.66803 v -13.59375 h 6.03125 q 1.8125,0 2.75,0.35937 0.95313,0.35938 1.51563,1.29688 0.5625,0.92187 0.5625,2.04687 0,1.45313 -0.9375,2.45313 -0.92188,0.98437 -2.89063,1.25 0.71875,0.34375 1.09375,0.67187 0.78125,0.73438 1.48438,1.8125 l 2.375,3.70313 h -2.26563 l -1.79687,-2.82813 q -0.79688,-1.21875 -1.3125,-1.875 -0.5,-0.65625 -0.90625,-0.90625 -0.40625,-0.26562 -0.8125,-0.35937 -0.3125,-0.0781 -1.01563,-0.0781 h -2.07812 v 6.04688 z m 1.79688,-7.59375 h 3.85937 q 1.23438,0 1.92188,-0.25 0.70312,-0.26563 1.0625,-0.82813 0.375,-0.5625 0.375,-1.21875 0,-0.96875 -0.70313,-1.57812 -0.70312,-0.625 -2.21875,-0.625 h -4.29687 z m 18.17602,4.42187 1.71875,0.21875 q -0.40625,1.5 -1.51562,2.34375 -1.09375,0.82813 -2.8125,0.82813 -2.15625,0 -3.42188,-1.32813 -1.26562,-1.32812 -1.26562,-3.73437 0,-2.48438 1.26562,-3.85938 1.28125,-1.375 3.32813,-1.375 1.98437,0 3.23437,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92187,2.48437 0.82813,0.85938 2.0625,0.85938 0.90625,0 1.54688,-0.46875 0.65625,-0.48438 1.04687,-1.54688 z m -5.48437,-2.70312 h 5.5 q -0.10938,-1.23438 -0.625,-1.85938 -0.79688,-0.96875 -2.07813,-0.96875 -1.14062,0 -1.9375,0.78125 -0.78125,0.76563 -0.85937,2.04688 z m 8.43823,2.9375 1.65625,-0.26563 q 0.14062,1 0.76562,1.53125 0.64063,0.51563 1.78125,0.51563 1.15625,0 1.70313,-0.46875 0.5625,-0.46875 0.5625,-1.09375 0,-0.5625 -0.48438,-0.89063 -0.34375,-0.21875 -1.70312,-0.5625 -1.84375,-0.46875 -2.5625,-0.79687 -0.70313,-0.34375 -1.07813,-0.9375 -0.35937,-0.60938 -0.35937,-1.32813 0,-0.65625 0.29687,-1.21875 0.3125,-0.5625 0.82813,-0.9375 0.39062,-0.28125 1.0625,-0.48437 0.67187,-0.20313 1.4375,-0.20313 1.17187,0 2.04687,0.34375 0.875,0.32813 1.28125,0.90625 0.42188,0.5625 0.57813,1.51563 l -1.625,0.21875 q -0.10938,-0.75 -0.65625,-1.17188 -0.53125,-0.4375 -1.5,-0.4375 -1.15625,0 -1.64063,0.39063 -0.48437,0.375 -0.48437,0.875 0,0.32812 0.20312,0.59375 0.20313,0.26562 0.64063,0.4375 0.25,0.0937 
1.46875,0.4375 1.76562,0.46875 2.46875,0.76562 0.70312,0.29688 1.09375,0.875 0.40625,0.57813 0.40625,1.4375 0,0.82813 -0.48438,1.57813 -0.48437,0.73437 -1.40625,1.14062 -0.92187,0.39063 -2.07812,0.39063 -1.92188,0 -2.9375,-0.79688 -1,-0.79687 -1.28125,-2.35937 z m 9.375,-1.98438 q 0,-2.73437 1.53125,-4.0625 1.26562,-1.09375 3.09375,-1.09375 2.03125,0 3.3125,1.34375 1.29687,1.32813 1.29687,3.67188 0,1.90625 -0.57812,3 -0.5625,1.07812 -1.65625,1.6875 -1.07813,0.59375 -2.375,0.59375 -2.0625,0 -3.34375,-1.32813 -1.28125,-1.32812 -1.28125,-3.8125 z m 1.71875,0 q 0,1.89063 0.82812,2.82813 0.82813,0.9375 2.07813,0.9375 1.25,0 2.0625,-0.9375 0.82812,-0.95313 0.82812,-2.89063 0,-1.82812 -0.82812,-2.76562 -0.82813,-0.9375 -2.0625,-0.9375 -1.25,0 -2.07813,0.9375 -0.82812,0.9375 -0.82812,2.82812 z m 15.7351,4.92188 v -1.45313 q -1.14062,1.67188 -3.125,1.67188 -0.85937,0 -1.625,-0.32813 -0.75,-0.34375 -1.125,-0.84375 -0.35937,-0.5 -0.51562,-1.23437 -0.0937,-0.5 -0.0937,-1.5625 v -6.10938 h 1.67187 v 5.46875 q 0,1.3125 0.0937,1.76563 0.15625,0.65625 0.67188,1.03125 0.51562,0.375 1.26562,0.375 0.75,0 1.40625,-0.375 0.65625,-0.39063 0.92188,-1.04688 0.28125,-0.67187 0.28125,-1.9375 v -5.28125 h 1.67187 v 9.85938 z m 3.90699,0 v -9.85938 h 1.5 v 1.5 q 0.57812,-1.04687 1.0625,-1.375 0.48437,-0.34375 1.07812,-0.34375 0.84375,0 1.71875,0.54688 l -0.57812,1.54687 q -0.60938,-0.35937 -1.23438,-0.35937 -0.54687,0 -0.98437,0.32812 -0.42188,0.32813 -0.60938,0.90625 -0.28125,0.89063 -0.28125,1.95313 v 5.15625 z m 12.66577,-3.60938 1.64062,0.21875 q -0.26562,1.6875 -1.375,2.65625 -1.10937,0.95313 -2.73437,0.95313 -2.01563,0 -3.25,-1.3125 -1.21875,-1.32813 -1.21875,-3.79688 0,-1.59375 0.51562,-2.78125 0.53125,-1.20312 1.60938,-1.79687 1.09375,-0.60938 2.35937,-0.60938 1.60938,0 2.625,0.8125 1.01563,0.8125 1.3125,2.3125 l -1.625,0.25 q -0.23437,-1 -0.82812,-1.5 -0.59375,-0.5 -1.42188,-0.5 -1.26562,0 -2.0625,0.90625 -0.78125,0.90625 -0.78125,2.85938 0,1.98437 0.76563,2.89062 0.76562,0.89063 
1.98437,0.89063 0.98438,0 1.64063,-0.59375 0.65625,-0.60938 0.84375,-1.85938 z m 9.64062,0.4375 1.71875,0.21875 q -0.40625,1.5 -1.51562,2.34375 -1.09375,0.82813 -2.8125,0.82813 -2.15625,0 -3.42188,-1.32813 -1.26562,-1.32812 -1.26562,-3.73437 0,-2.48438 1.26562,-3.85938 1.28125,-1.375 3.32813,-1.375 1.98437,0 3.23437,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92187,2.48437 0.82813,0.85938 2.0625,0.85938 0.90625,0 1.54688,-0.46875 0.65625,-0.48438 1.04687,-1.54688 z m -5.48437,-2.70312 h 5.5 q -0.10938,-1.23438 -0.625,-1.85938 -0.79688,-0.96875 -2.07813,-0.96875 -1.14062,0 -1.9375,0.78125 -0.78125,0.76563 -0.85937,2.04688 z m 13.59027,2.01562 1.625,-0.21875 q 0.0625,1.54688 0.57812,2.125 0.53125,0.57813 1.4375,0.57813 0.6875,0 1.17188,-0.3125 0.5,-0.3125 0.67187,-0.84375 0.1875,-0.53125 0.1875,-1.70313 v -9.35937 h 1.8125 v 9.26562 q 0,1.70313 -0.42187,2.64063 -0.40625,0.9375 -1.3125,1.4375 -0.89063,0.48437 -2.09375,0.48437 -1.79688,0 -2.75,-1.03125 -0.9375,-1.03125 -0.90625,-3.0625 z m 9.64062,-0.51562 1.6875,-0.14063 q 0.125,1.01563 0.5625,1.67188 0.4375,0.65625 1.35938,1.0625 0.9375,0.40625 2.09375,0.40625 1.03125,0 1.8125,-0.3125 0.79687,-0.3125 1.1875,-0.84375 0.39062,-0.53125 0.39062,-1.15625 0,-0.64063 -0.375,-1.10938 -0.375,-0.48437 -1.23437,-0.8125 -0.54688,-0.21875 -2.42188,-0.65625 -1.875,-0.45312 -2.625,-0.85937 -0.96875,-0.51563 -1.45312,-1.26563 -0.46875,-0.75 -0.46875,-1.6875 0,-1.03125 0.57812,-1.92187 0.59375,-0.90625 1.70313,-1.35938 1.125,-0.46875 2.5,-0.46875 1.51562,0 2.67187,0.48438 1.15625,0.48437 1.76563,1.4375 0.625,0.9375 0.67187,2.14062 l -1.71875,0.125 q -0.14062,-1.28125 -0.95312,-1.9375 -0.79688,-0.67187 -2.35938,-0.67187 -1.625,0 -2.375,0.60937 -0.75,0.59375 -0.75,1.4375 0,0.73438 0.53125,1.20313 0.51563,0.46875 2.70313,0.96875 2.20312,0.5 3.01562,0.875 1.1875,0.54687 1.75,1.39062 0.57813,0.82813 0.57813,1.92188 0,1.09375 -0.625,2.0625 -0.625,0.95312 -1.79688,1.48437 -1.15625,0.53125 
-2.60937,0.53125 -1.84375,0 -3.09375,-0.53125 -1.25,-0.54687 -1.96875,-1.625 -0.70313,-1.07812 -0.73438,-2.45312 z m 12.50611,-2.25 q 0,-3.39063 1.8125,-5.29688 1.82812,-1.92187 4.70312,-1.92187 1.875,0 3.39063,0.90625 1.51562,0.89062 2.29687,2.5 0.79688,1.60937 0.79688,3.65625 0,2.0625 -0.84375,3.70312 -0.82813,1.625 -2.35938,2.46875 -1.53125,0.84375 -3.29687,0.84375 -1.92188,0 -3.4375,-0.92187 -1.5,-0.9375 -2.28125,-2.53125 -0.78125,-1.60938 -0.78125,-3.40625 z m 1.85937,0.0312 q 0,2.45312 1.3125,3.875 1.32813,1.40625 3.3125,1.40625 2.03125,0 3.34375,-1.42188 1.3125,-1.4375 1.3125,-4.0625 0,-1.65625 -0.5625,-2.89062 -0.54687,-1.23438 -1.64062,-1.92188 -1.07813,-0.6875 -2.42188,-0.6875 -1.90625,0 -3.28125,1.3125 -1.375,1.3125 -1.375,4.39063 z m 13.18329,6.59375 v -13.59375 h 1.84375 l 7.14063,10.67187 v -10.67187 h 1.71875 v 13.59375 h -1.84375 l -7.14063,-10.6875 v 10.6875 z"
+ fill-rule="nonzero"
+ id="path225" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="M 529.084,131.11548 185.9974,130.01312"
+ fill-rule="nonzero"
+ id="path227" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="M 529.084,131.11548 191.99733,130.0324"
+ fill-rule="evenodd"
+ id="path229" />
+ <path
+ fill="#000000"
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linecap="butt"
+ d="m 192.00266,128.38068 -4.54338,1.63715 4.53276,1.6663 z"
+ fill-rule="evenodd"
+ id="path231" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 258.7034,136.56955 h 156.34647 v 70.26771 H 258.7034 Z"
+ fill-rule="nonzero"
+ id="path233" />
+ <path
+ fill="#000000"
+ d="M 269.17215,163.48955 V 149.8958 h 5.125 q 1.35937,0 2.07812,0.125 1,0.17187 1.67188,0.64062 0.67187,0.46875 1.07812,1.3125 0.42188,0.84375 0.42188,1.84375 0,1.73438 -1.10938,2.9375 -1.09375,1.20313 -3.98437,1.20313 h -3.48438 v 5.53125 z m 1.79687,-7.14063 h 3.51563 q 1.75,0 2.46875,-0.64062 0.73437,-0.65625 0.73437,-1.82813 0,-0.85937 -0.4375,-1.46875 -0.42187,-0.60937 -1.125,-0.79687 -0.45312,-0.125 -1.67187,-0.125 h -3.48438 z m 16.86545,5.92188 q -0.9375,0.79687 -1.79687,1.125 -0.85938,0.3125 -1.84375,0.3125 -1.60938,0 -2.48438,-0.78125 -0.875,-0.79688 -0.875,-2.03125 0,-0.73438 0.32813,-1.32813 0.32812,-0.59375 0.85937,-0.95312 0.53125,-0.35938 1.20313,-0.54688 0.5,-0.14062 1.48437,-0.25 2.03125,-0.25 2.98438,-0.57812 0,-0.34375 0,-0.4375 0,-1.01563 -0.46875,-1.4375 -0.64063,-0.5625 -1.90625,-0.5625 -1.17188,0 -1.73438,0.40625 -0.5625,0.40625 -0.82812,1.46875 l -1.64063,-0.23438 q 0.23438,-1.04687 0.73438,-1.6875 0.51562,-0.64062 1.46875,-0.98437 0.96875,-0.35938 2.25,-0.35938 1.26562,0 2.04687,0.29688 0.78125,0.29687 1.15625,0.75 0.375,0.45312 0.51563,1.14062 0.0937,0.42188 0.0937,1.53125 v 2.23438 q 0,2.32812 0.0937,2.95312 0.10937,0.60938 0.4375,1.17188 h -1.75 q -0.26563,-0.51563 -0.32813,-1.21875 z m -0.14062,-3.71875 q -0.90625,0.35937 -2.73438,0.625 -1.03125,0.14062 -1.45312,0.32812 -0.42188,0.1875 -0.65625,0.54688 -0.23438,0.35937 -0.23438,0.79687 0,0.67188 0.5,1.125 0.51563,0.4375 1.48438,0.4375 0.96875,0 1.71875,-0.42187 0.75,-0.4375 1.10937,-1.15625 0.26563,-0.57813 0.26563,-1.67188 z m 4.06323,4.9375 v -9.85938 h 1.5 v 1.5 q 0.57813,-1.04687 1.0625,-1.375 0.48438,-0.34375 1.07813,-0.34375 0.84375,0 1.71875,0.54688 l -0.57813,1.54687 q -0.60937,-0.35937 -1.23437,-0.35937 -0.54688,0 -0.98438,0.32812 -0.42187,0.32813 -0.60937,0.90625 -0.28125,0.89063 -0.28125,1.95313 v 5.15625 z m 5.55643,-2.9375 1.65625,-0.26563 q 0.14062,1 0.76562,1.53125 0.64063,0.51563 1.78125,0.51563 1.15625,0 1.70313,-0.46875 0.5625,-0.46875 0.5625,-1.09375 0,-0.5625 
-0.48438,-0.89063 -0.34375,-0.21875 -1.70312,-0.5625 -1.84375,-0.46875 -2.5625,-0.79687 -0.70313,-0.34375 -1.07813,-0.9375 -0.35937,-0.60938 -0.35937,-1.32813 0,-0.65625 0.29687,-1.21875 0.3125,-0.5625 0.82813,-0.9375 0.39062,-0.28125 1.0625,-0.48437 0.67187,-0.20313 1.4375,-0.20313 1.17187,0 2.04687,0.34375 0.875,0.32813 1.28125,0.90625 0.42188,0.5625 0.57813,1.51563 l -1.625,0.21875 q -0.10938,-0.75 -0.65625,-1.17188 -0.53125,-0.4375 -1.5,-0.4375 -1.15625,0 -1.64063,0.39063 -0.48437,0.375 -0.48437,0.875 0,0.32812 0.20312,0.59375 0.20313,0.26562 0.64063,0.4375 0.25,0.0937 1.46875,0.4375 1.76562,0.46875 2.46875,0.76562 0.70312,0.29688 1.09375,0.875 0.40625,0.57813 0.40625,1.4375 0,0.82813 -0.48438,1.57813 -0.48437,0.73437 -1.40625,1.14062 -0.92187,0.39063 -2.07812,0.39063 -1.92188,0 -2.9375,-0.79688 -1,-0.79687 -1.28125,-2.35937 z m 16.75,-0.23438 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82813 -2.8125,0.82813 -2.15625,0 -3.42187,-1.32813 -1.26563,-1.32812 -1.26563,-3.73437 0,-2.48438 1.26563,-3.85938 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92188,2.48437 0.82812,0.85938 2.0625,0.85938 0.90625,0 1.54687,-0.46875 0.65625,-0.48438 1.04688,-1.54688 z m -5.48438,-2.70312 h 5.5 q -0.10937,-1.23438 -0.625,-1.85938 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76563 -0.85938,2.04688 z m 13.01215,5.875 5.23437,-13.59375 h 1.9375 l 5.5625,13.59375 h -2.04687 l -1.59375,-4.125 h -5.6875 l -1.48438,4.125 z m 3.92187,-5.57813 h 4.60938 l -1.40625,-3.78125 q -0.65625,-1.70312 -0.96875,-2.8125 -0.26563,1.3125 -0.73438,2.59375 z m 10.0217,5.57813 V 149.8958 h 5.125 q 1.35938,0 2.07813,0.125 1,0.17187 1.67187,0.64062 0.67188,0.46875 1.07813,1.3125 0.42187,0.84375 0.42187,1.84375 0,1.73438 -1.10937,2.9375 -1.09375,1.20313 -3.98438,1.20313 h -3.48437 v 5.53125 z m 1.79688,-7.14063 h 3.51562 q 1.75,0 2.46875,-0.64062 0.73438,-0.65625 
0.73438,-1.82813 0,-0.85937 -0.4375,-1.46875 -0.42188,-0.60937 -1.125,-0.79687 -0.45313,-0.125 -1.67188,-0.125 h -3.48437 z m 10.94357,7.14063 V 149.8958 h 1.8125 v 13.59375 z m 9.83536,0 v -9.85938 h 1.5 v 1.5 q 0.57812,-1.04687 1.0625,-1.375 0.48437,-0.34375 1.07812,-0.34375 0.84375,0 1.71875,0.54688 l -0.57812,1.54687 q -0.60938,-0.35937 -1.23438,-0.35937 -0.54687,0 -0.98437,0.32812 -0.42188,0.32813 -0.60938,0.90625 -0.28125,0.89063 -0.28125,1.95313 v 5.15625 z m 12.9783,-3.17188 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82813 -2.8125,0.82813 -2.15625,0 -3.42187,-1.32813 -1.26563,-1.32812 -1.26563,-3.73437 0,-2.48438 1.26563,-3.85938 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79688 0,0.14062 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92188,2.48437 0.82812,0.85938 2.0625,0.85938 0.90625,0 1.54687,-0.46875 0.65625,-0.48438 1.04688,-1.54688 z m -5.48438,-2.70312 h 5.5 q -0.10937,-1.23438 -0.625,-1.85938 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76563 -0.85938,2.04688 z m 8.43821,2.9375 1.65625,-0.26563 q 0.14062,1 0.76562,1.53125 0.64063,0.51563 1.78125,0.51563 1.15625,0 1.70313,-0.46875 0.5625,-0.46875 0.5625,-1.09375 0,-0.5625 -0.48438,-0.89063 -0.34375,-0.21875 -1.70312,-0.5625 -1.84375,-0.46875 -2.5625,-0.79687 -0.70313,-0.34375 -1.07813,-0.9375 -0.35937,-0.60938 -0.35937,-1.32813 0,-0.65625 0.29687,-1.21875 0.3125,-0.5625 0.82813,-0.9375 0.39062,-0.28125 1.0625,-0.48437 0.67187,-0.20313 1.4375,-0.20313 1.17187,0 2.04687,0.34375 0.875,0.32813 1.28125,0.90625 0.42188,0.5625 0.57813,1.51563 l -1.625,0.21875 q -0.10938,-0.75 -0.65625,-1.17188 -0.53125,-0.4375 -1.5,-0.4375 -1.15625,0 -1.64063,0.39063 -0.48437,0.375 -0.48437,0.875 0,0.32812 0.20312,0.59375 0.20313,0.26562 0.64063,0.4375 0.25,0.0937 1.46875,0.4375 1.76562,0.46875 2.46875,0.76562 0.70312,0.29688 1.09375,0.875 0.40625,0.57813 0.40625,1.4375 0,0.82813 -0.48438,1.57813 -0.48437,0.73437 -1.40625,1.14062 -0.92187,0.39063 
-2.07812,0.39063 -1.92188,0 -2.9375,-0.79688 -1,-0.79687 -1.28125,-2.35937 z m 10,6.71875 v -13.64063 h 1.53125 v 1.28125 q 0.53125,-0.75 1.20312,-1.125 0.6875,-0.375 1.64063,-0.375 1.26562,0 2.23437,0.65625 0.96875,0.64063 1.45313,1.82813 0.5,1.1875 0.5,2.59375 0,1.51562 -0.54688,2.73437 -0.54687,1.20313 -1.57812,1.84375 -1.03125,0.64063 -2.17188,0.64063 -0.84375,0 -1.51562,-0.34375 -0.65625,-0.35938 -1.07813,-0.89063 v 4.79688 z m 1.51562,-8.65625 q 0,1.90625 0.76563,2.8125 0.78125,0.90625 1.875,0.90625 1.10937,0 1.89062,-0.9375 0.79688,-0.9375 0.79688,-2.92188 0,-1.875 -0.78125,-2.8125 -0.76563,-0.9375 -1.84375,-0.9375 -1.0625,0 -1.89063,1 -0.8125,1 -0.8125,2.89063 z"
+ fill-rule="nonzero"
+ id="path235" />
+ <path
+ fill="#000000"
+ d="m 276.73465,183.88017 q -0.82813,0.92187 -1.8125,1.39062 -0.96875,0.45313 -2.09375,0.45313 -2.09375,0 -3.3125,-1.40625 -1,-1.15625 -1,-2.57813 0,-1.26562 0.8125,-2.28125 0.8125,-1.01562 2.42187,-1.78125 -0.90625,-1.0625 -1.21875,-1.71875 -0.29687,-0.65625 -0.29687,-1.26562 0,-1.23438 0.95312,-2.125 0.95313,-0.90625 2.42188,-0.90625 1.39062,0 2.26562,0.85937 0.89063,0.84375 0.89063,2.04688 0,1.9375 -2.5625,3.3125 l 2.4375,3.09375 q 0.42187,-0.8125 0.64062,-1.89063 l 1.73438,0.375 q -0.4375,1.78125 -1.20313,2.9375 0.9375,1.23438 2.125,2.07813 l -1.125,1.32812 q -1,-0.64062 -2.07812,-1.92187 z m -3.40625,-7.07813 q 1.09375,-0.64062 1.40625,-1.125 0.32812,-0.48437 0.32812,-1.0625 0,-0.70312 -0.45312,-1.14062 -0.4375,-0.4375 -1.09375,-0.4375 -0.67188,0 -1.125,0.4375 -0.45313,0.42187 -0.45313,1.0625 0,0.3125 0.15625,0.65625 0.17188,0.34375 0.5,0.73437 z m 2.35937,5.76563 -3.0625,-3.79688 q -1.35937,0.8125 -1.84375,1.5 -0.46875,0.6875 -0.46875,1.375 0,0.8125 0.65625,1.70313 0.67188,0.89062 1.875,0.89062 0.75,0 1.54688,-0.46875 0.8125,-0.46875 1.29687,-1.20312 z m 17.28315,2.92187 v -1.25 q -0.9375,1.46875 -2.75,1.46875 -1.17188,0 -2.17188,-0.64062 -0.98437,-0.65625 -1.53125,-1.8125 -0.53125,-1.17188 -0.53125,-2.6875 0,-1.46875 0.48438,-2.67188 0.5,-1.20312 1.46875,-1.84375 0.98437,-0.64062 2.20312,-0.64062 0.89063,0 1.57813,0.375 0.70312,0.375 1.14062,0.98437 v -4.875 h 1.65625 v 13.59375 z m -5.28125,-4.92187 q 0,1.89062 0.79687,2.82812 0.8125,0.9375 1.89063,0.9375 1.09375,0 1.85937,-0.89062 0.76563,-0.89063 0.76563,-2.73438 0,-2.01562 -0.78125,-2.95312 -0.78125,-0.95313 -1.92188,-0.95313 -1.10937,0 -1.85937,0.90625 -0.75,0.90625 -0.75,2.85938 z m 9.28195,-6.76563 v -1.90625 h 1.67187 v 1.90625 z m 0,11.6875 v -9.85937 h 1.67187 v 9.85937 z m 3.45734,-2.9375 1.65625,-0.26562 q 0.14062,1 0.76562,1.53125 0.64063,0.51562 1.78125,0.51562 1.15625,0 1.70313,-0.46875 0.5625,-0.46875 0.5625,-1.09375 0,-0.5625 -0.48438,-0.89062 -0.34375,-0.21875 -1.70312,-0.5625 
-1.84375,-0.46875 -2.5625,-0.79688 -0.70313,-0.34375 -1.07813,-0.9375 -0.35937,-0.60937 -0.35937,-1.32812 0,-0.65625 0.29687,-1.21875 0.3125,-0.5625 0.82813,-0.9375 0.39062,-0.28125 1.0625,-0.48438 0.67187,-0.20312 1.4375,-0.20312 1.17187,0 2.04687,0.34375 0.875,0.32812 1.28125,0.90625 0.42188,0.5625 0.57813,1.51562 l -1.625,0.21875 q -0.10938,-0.75 -0.65625,-1.17187 -0.53125,-0.4375 -1.5,-0.4375 -1.15625,0 -1.64063,0.39062 -0.48437,0.375 -0.48437,0.875 0,0.32813 0.20312,0.59375 0.20313,0.26563 0.64063,0.4375 0.25,0.0937 1.46875,0.4375 1.76562,0.46875 2.46875,0.76563 0.70312,0.29687 1.09375,0.875 0.40625,0.57812 0.40625,1.4375 0,0.82812 -0.48438,1.57812 -0.48437,0.73438 -1.40625,1.14063 -0.92187,0.39062 -2.07812,0.39062 -1.92188,0 -2.9375,-0.79687 -1,-0.79688 -1.28125,-2.35938 z m 10,6.71875 v -13.64062 h 1.53125 v 1.28125 q 0.53125,-0.75 1.20312,-1.125 0.6875,-0.375 1.64063,-0.375 1.26562,0 2.23437,0.65625 0.96875,0.64062 1.45313,1.82812 0.5,1.1875 0.5,2.59375 0,1.51563 -0.54688,2.73438 -0.54687,1.20312 -1.57812,1.84375 -1.03125,0.64062 -2.17188,0.64062 -0.84375,0 -1.51562,-0.34375 -0.65625,-0.35937 -1.07813,-0.89062 v 4.79687 z m 1.51562,-8.65625 q 0,1.90625 0.76563,2.8125 0.78125,0.90625 1.875,0.90625 1.10937,0 1.89062,-0.9375 0.79688,-0.9375 0.79688,-2.92187 0,-1.875 -0.78125,-2.8125 -0.76563,-0.9375 -1.84375,-0.9375 -1.0625,0 -1.89063,1 -0.8125,1 -0.8125,2.89062 z m 8.82883,4.875 v -13.59375 h 1.67187 v 13.59375 z m 10.61358,-1.21875 q -0.9375,0.79688 -1.79687,1.125 -0.85938,0.3125 -1.84375,0.3125 -1.60938,0 -2.48438,-0.78125 -0.875,-0.79687 -0.875,-2.03125 0,-0.73437 0.32813,-1.32812 0.32812,-0.59375 0.85937,-0.95313 0.53125,-0.35937 1.20313,-0.54687 0.5,-0.14063 1.48437,-0.25 2.03125,-0.25 2.98438,-0.57813 0,-0.34375 0,-0.4375 0,-1.01562 -0.46875,-1.4375 -0.64063,-0.5625 -1.90625,-0.5625 -1.17188,0 -1.73438,0.40625 -0.5625,0.40625 -0.82812,1.46875 l -1.64063,-0.23437 q 0.23438,-1.04688 0.73438,-1.6875 0.51562,-0.64063 1.46875,-0.98438 0.96875,-0.35937 
2.25,-0.35937 1.26562,0 2.04687,0.29687 0.78125,0.29688 1.15625,0.75 0.375,0.45313 0.51563,1.14063 0.0937,0.42187 0.0937,1.53125 v 2.23437 q 0,2.32813 0.0937,2.95313 0.10937,0.60937 0.4375,1.17187 h -1.75 q -0.26563,-0.51562 -0.32813,-1.21875 z m -0.14062,-3.71875 q -0.90625,0.35938 -2.73438,0.625 -1.03125,0.14063 -1.45312,0.32813 -0.42188,0.1875 -0.65625,0.54687 -0.23438,0.35938 -0.23438,0.79688 0,0.67187 0.5,1.125 0.51563,0.4375 1.48438,0.4375 0.96875,0 1.71875,-0.42188 0.75,-0.4375 1.10937,-1.15625 0.26563,-0.57812 0.26563,-1.67187 z m 4.0007,8.73438 -0.17187,-1.5625 q 0.54687,0.14062 0.95312,0.14062 0.54688,0 0.875,-0.1875 0.34375,-0.1875 0.5625,-0.51562 0.15625,-0.25 0.5,-1.25 0.0469,-0.14063 0.15625,-0.40625 l -3.73437,-9.875 h 1.79687 l 2.04688,5.71875 q 0.40625,1.07812 0.71875,2.28125 0.28125,-1.15625 0.6875,-2.25 l 2.09375,-5.75 h 1.67187 l -3.75,10.03125 q -0.59375,1.625 -0.9375,2.23437 -0.4375,0.82813 -1.01562,1.20313 -0.57813,0.39062 -1.375,0.39062 -0.48438,0 -1.07813,-0.20312 z m 14.58957,-0.0156 V 175.6302 h 1.53125 v 1.28125 q 0.53125,-0.75 1.20312,-1.125 0.6875,-0.375 1.64063,-0.375 1.26562,0 2.23437,0.65625 0.96875,0.64062 1.45313,1.82812 0.5,1.1875 0.5,2.59375 0,1.51563 -0.54688,2.73438 -0.54687,1.20312 -1.57812,1.84375 -1.03125,0.64062 -2.17188,0.64062 -0.84375,0 -1.51562,-0.34375 -0.65625,-0.35937 -1.07813,-0.89062 v 4.79687 z m 1.51562,-8.65625 q 0,1.90625 0.76563,2.8125 0.78125,0.90625 1.875,0.90625 1.10937,0 1.89062,-0.9375 0.79688,-0.9375 0.79688,-2.92187 0,-1.875 -0.78125,-2.8125 -0.76563,-0.9375 -1.84375,-0.9375 -1.0625,0 -1.89063,1 -0.8125,1 -0.8125,2.89062 z m 15.29758,3.65625 q -0.9375,0.79688 -1.79687,1.125 -0.85938,0.3125 -1.84375,0.3125 -1.60938,0 -2.48438,-0.78125 -0.875,-0.79687 -0.875,-2.03125 0,-0.73437 0.32813,-1.32812 0.32812,-0.59375 0.85937,-0.95313 0.53125,-0.35937 1.20313,-0.54687 0.5,-0.14063 1.48437,-0.25 2.03125,-0.25 2.98438,-0.57813 0,-0.34375 0,-0.4375 0,-1.01562 -0.46875,-1.4375 -0.64063,-0.5625 -1.90625,-0.5625 
-1.17188,0 -1.73438,0.40625 -0.5625,0.40625 -0.82812,1.46875 l -1.64063,-0.23437 q 0.23438,-1.04688 0.73438,-1.6875 0.51562,-0.64063 1.46875,-0.98438 0.96875,-0.35937 2.25,-0.35937 1.26562,0 2.04687,0.29687 0.78125,0.29688 1.15625,0.75 0.375,0.45313 0.51563,1.14063 0.0937,0.42187 0.0937,1.53125 v 2.23437 q 0,2.32813 0.0937,2.95313 0.10937,0.60937 0.4375,1.17187 h -1.75 q -0.26563,-0.51562 -0.32813,-1.21875 z m -0.14062,-3.71875 q -0.90625,0.35938 -2.73438,0.625 -1.03125,0.14063 -1.45312,0.32813 -0.42188,0.1875 -0.65625,0.54687 -0.23438,0.35938 -0.23438,0.79688 0,0.67187 0.5,1.125 0.51563,0.4375 1.48438,0.4375 0.96875,0 1.71875,-0.42188 0.75,-0.4375 1.10937,-1.15625 0.26563,-0.57812 0.26563,-1.67187 z m 3.78198,5.75 1.60937,0.25 q 0.10938,0.75 0.57813,1.09375 0.60937,0.45313 1.6875,0.45313 1.17187,0 1.79687,-0.46875 0.625,-0.45313 0.85938,-1.28125 0.125,-0.51563 0.10937,-2.15625 -1.09375,1.29687 -2.71875,1.29687 -2.03125,0 -3.15625,-1.46875 -1.10937,-1.46875 -1.10937,-3.51562 0,-1.40625 0.51562,-2.59375 0.51563,-1.20313 1.48438,-1.84375 0.96875,-0.65625 2.26562,-0.65625 1.75,0 2.875,1.40625 v -1.1875 h 1.54688 v 8.51562 q 0,2.3125 -0.46875,3.26563 -0.46875,0.96875 -1.48438,1.51562 -1.01562,0.5625 -2.5,0.5625 -1.76562,0 -2.85937,-0.79687 -1.07813,-0.79688 -1.03125,-2.39063 z m 1.375,-5.92187 q 0,1.95312 0.76562,2.84375 0.78125,0.89062 1.9375,0.89062 1.14063,0 1.92188,-0.89062 0.78125,-0.89063 0.78125,-2.78125 0,-1.8125 -0.8125,-2.71875 -0.79688,-0.92188 -1.92188,-0.92188 -1.10937,0 -1.89062,0.90625 -0.78125,0.89063 -0.78125,2.67188 z m 16.04758,1.9375 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.82812 -2.8125,0.82812 -2.15625,0 -3.42187,-1.32812 -1.26563,-1.32813 -1.26563,-3.73438 0,-2.48437 1.26563,-3.85937 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.79687 0,0.14063 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92188,2.48438 0.82812,0.85937 2.0625,0.85937 0.90625,0 1.54687,-0.46875 0.65625,-0.48437 1.04688,-1.54687 z m 
-5.48438,-2.70313 h 5.5 q -0.10937,-1.23437 -0.625,-1.85937 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.76562 -0.85938,2.04687 z"
+ fill-rule="nonzero"
+ id="path237" />
+ <path
+ fill="#ffffff"
+ d="m 94.25984,75.59843 v 0 c 0,-12.054596 10.59711,-21.826775 23.66929,-21.826775 v 0 c 13.0722,0 23.66929,9.772179 23.66929,21.826775 v 0 c 0,12.054588 -10.59709,21.826767 -23.66929,21.826767 v 0 c -13.07218,0 -23.66929,-9.772179 -23.66929,-21.826767 z"
+ fill-rule="nonzero"
+ id="path239" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="m 94.25984,75.59843 v 0 c 0,-12.054596 10.59711,-21.826775 23.66929,-21.826775 v 0 c 13.0722,0 23.66929,9.772179 23.66929,21.826775 v 0 c 0,12.054588 -10.59709,21.826767 -23.66929,21.826767 v 0 c -13.07218,0 -23.66929,-9.772179 -23.66929,-21.826767 z"
+ fill-rule="nonzero"
+ id="path241" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 117.92913,97.42519 1.16536,119.55906"
+ fill-rule="nonzero"
+ id="path243" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="m 117.92913,97.42519 1.16536,119.55906"
+ fill-rule="nonzero"
+ id="path245" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 117.92913,128.50131 29.57481,42.48819"
+ fill-rule="nonzero"
+ id="path247" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="m 117.92913,128.50131 29.57481,42.48819"
+ fill-rule="nonzero"
+ id="path249" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="M 91.50131,170.50131 117.9265,129.43045"
+ fill-rule="nonzero"
+ id="path251" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="M 91.50131,170.50131 117.9265,129.43045"
+ fill-rule="nonzero"
+ id="path253" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="M 235.77428,40 H 415.04987 V 88 H 235.77428 Z"
+ fill-rule="nonzero"
+ id="path255" />
+ <path
+ fill="#000000"
+ d="m 273.33563,65.59187 v -1.609375 h 5.76562 v 5.046875 q -1.32812,1.0625 -2.75,1.59375 -1.40625,0.53125 -2.89062,0.53125 -2,0 -3.64063,-0.859375 -1.625,-0.859375 -2.46875,-2.484375 -0.82812,-1.625 -0.82812,-3.625 0,-1.984375 0.82812,-3.703125 0.82813,-1.71875 2.39063,-2.546875 1.5625,-0.84375 3.59375,-0.84375 1.46875,0 2.65625,0.484375 1.20312,0.46875 1.875,1.328125 0.67187,0.84375 1.03125,2.21875 l -1.625,0.4375 q -0.3125,-1.03125 -0.76563,-1.625 -0.45312,-0.59375 -1.29687,-0.953125 -0.84375,-0.359375 -1.875,-0.359375 -1.23438,0 -2.14063,0.375 -0.89062,0.375 -1.45312,1 -0.54688,0.609375 -0.84375,1.34375 -0.53125,1.25 -0.53125,2.734375 0,1.8125 0.625,3.046875 0.64062,1.21875 1.82812,1.8125 1.20313,0.59375 2.54688,0.59375 1.17187,0 2.28125,-0.453125 1.10937,-0.453125 1.6875,-0.953125 v -2.53125 z m 8.18329,5.328125 v -13.59375 h 9.84375 v 1.59375 h -8.04688 v 4.171875 h 7.53125 v 1.59375 h -7.53125 v 4.625 h 8.35938 v 1.609375 z m 15.86545,0 v -12 h -4.46875 v -1.59375 h 10.76562 v 1.59375 h -4.5 v 12 z m 11.65741,0.234375 3.9375,-14.0625 h 1.34375 l -3.9375,14.0625 z m 6.41769,-0.234375 V 61.06062 h 1.5 v 1.5 q 0.57813,-1.046875 1.0625,-1.375 0.48438,-0.34375 1.07813,-0.34375 0.84375,0 1.71875,0.546875 l -0.57813,1.546875 q -0.60937,-0.359375 -1.23437,-0.359375 -0.54688,0 -0.98438,0.328125 -0.42187,0.328125 -0.60937,0.90625 -0.28125,0.890625 -0.28125,1.953125 v 5.15625 z m 12.9783,-3.171875 1.71875,0.21875 q -0.40625,1.5 -1.51562,2.34375 -1.09375,0.828125 -2.8125,0.828125 -2.15625,0 -3.42188,-1.328125 -1.26562,-1.328125 -1.26562,-3.734375 0,-2.484375 1.26562,-3.859375 1.28125,-1.375 3.32813,-1.375 1.98437,0 3.23437,1.34375 1.25,1.34375 1.25,3.796875 0,0.140625 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92187,2.484375 0.82813,0.859375 2.0625,0.859375 0.90625,0 1.54688,-0.46875 0.65625,-0.484375 1.04687,-1.546875 z m -5.48437,-2.703125 h 5.5 q -0.10938,-1.234375 -0.625,-1.859375 -0.79688,-0.96875 -2.07813,-0.96875 -1.14062,0 -1.9375,0.78125 -0.78125,0.765625 
-0.85937,2.046875 z m 8.4382,2.9375 1.65625,-0.265625 q 0.14062,1 0.76562,1.53125 0.64063,0.515625 1.78125,0.515625 1.15625,0 1.70313,-0.46875 0.5625,-0.46875 0.5625,-1.09375 0,-0.5625 -0.48438,-0.890625 -0.34375,-0.21875 -1.70312,-0.5625 -1.84375,-0.46875 -2.5625,-0.796875 -0.70313,-0.34375 -1.07813,-0.9375 -0.35937,-0.609375 -0.35937,-1.328125 0,-0.65625 0.29687,-1.21875 0.3125,-0.5625 0.82813,-0.9375 0.39062,-0.28125 1.0625,-0.484375 0.67187,-0.203125 1.4375,-0.203125 1.17187,0 2.04687,0.34375 0.875,0.328125 1.28125,0.90625 0.42188,0.5625 0.57813,1.515625 l -1.625,0.21875 q -0.10938,-0.75 -0.65625,-1.171875 -0.53125,-0.4375 -1.5,-0.4375 -1.15625,0 -1.64063,0.390625 -0.48437,0.375 -0.48437,0.875 0,0.328125 0.20312,0.59375 0.20313,0.265625 0.64063,0.4375 0.25,0.09375 1.46875,0.4375 1.76562,0.46875 2.46875,0.765625 0.70312,0.296875 1.09375,0.875 0.40625,0.578125 0.40625,1.4375 0,0.828125 -0.48438,1.578125 -0.48437,0.734375 -1.40625,1.140625 -0.92187,0.390625 -2.07812,0.390625 -1.92188,0 -2.9375,-0.796875 -1,-0.796875 -1.28125,-2.359375 z m 9.375,-1.984375 q 0,-2.734375 1.53125,-4.0625 1.26562,-1.09375 3.09375,-1.09375 2.03125,0 3.3125,1.34375 1.29687,1.328125 1.29687,3.671875 0,1.90625 -0.57812,3 -0.5625,1.078125 -1.65625,1.6875 -1.07813,0.59375 -2.375,0.59375 -2.0625,0 -3.34375,-1.328125 -1.28125,-1.328125 -1.28125,-3.8125 z m 1.71875,0 q 0,1.890625 0.82812,2.828125 0.82813,0.9375 2.07813,0.9375 1.25,0 2.0625,-0.9375 0.82812,-0.953125 0.82812,-2.890625 0,-1.828125 -0.82812,-2.765625 -0.82813,-0.9375 -2.0625,-0.9375 -1.25,0 -2.07813,0.9375 -0.82812,0.9375 -0.82812,2.828125 z m 15.73511,4.921875 V 69.46687 q -1.14063,1.671875 -3.125,1.671875 -0.85938,0 -1.625,-0.328125 -0.75,-0.34375 -1.125,-0.84375 -0.35938,-0.5 -0.51563,-1.234375 -0.0937,-0.5 -0.0937,-1.5625 V 61.06062 h 1.67188 v 5.46875 q 0,1.3125 0.0937,1.765625 0.15625,0.65625 0.67187,1.03125 0.51563,0.375 1.26563,0.375 0.75,0 1.40625,-0.375 0.65625,-0.390625 0.92187,-1.046875 0.28125,-0.671875 0.28125,-1.9375 
v -5.28125 h 1.67188 v 9.859375 z m 3.90695,0 V 61.06062 h 1.5 v 1.5 q 0.57812,-1.046875 1.0625,-1.375 0.48437,-0.34375 1.07812,-0.34375 0.84375,0 1.71875,0.546875 l -0.57812,1.546875 q -0.60938,-0.359375 -1.23438,-0.359375 -0.54687,0 -0.98437,0.328125 -0.42188,0.328125 -0.60938,0.90625 -0.28125,0.890625 -0.28125,1.953125 v 5.15625 z m 12.6658,-3.609375 1.64063,0.21875 q -0.26563,1.6875 -1.375,2.65625 -1.10938,0.953125 -2.73438,0.953125 -2.01562,0 -3.25,-1.3125 -1.21875,-1.328125 -1.21875,-3.796875 0,-1.59375 0.51563,-2.78125 0.53125,-1.203125 1.60937,-1.796875 1.09375,-0.609375 2.35938,-0.609375 1.60937,0 2.625,0.8125 1.01562,0.8125 1.3125,2.3125 l -1.625,0.25 q -0.23438,-1 -0.82813,-1.5 -0.59375,-0.5 -1.42187,-0.5 -1.26563,0 -2.0625,0.90625 -0.78125,0.90625 -0.78125,2.859375 0,1.984375 0.76562,2.890625 0.76563,0.890625 1.98438,0.890625 0.98437,0 1.64062,-0.59375 0.65625,-0.609375 0.84375,-1.859375 z m 9.64063,0.4375 1.71875,0.21875 q -0.40625,1.5 -1.51563,2.34375 -1.09375,0.828125 -2.8125,0.828125 -2.15625,0 -3.42187,-1.328125 -1.26563,-1.328125 -1.26563,-3.734375 0,-2.484375 1.26563,-3.859375 1.28125,-1.375 3.32812,-1.375 1.98438,0 3.23438,1.34375 1.25,1.34375 1.25,3.796875 0,0.140625 -0.0156,0.4375 h -7.34375 q 0.0937,1.625 0.92188,2.484375 0.82812,0.859375 2.0625,0.859375 0.90625,0 1.54687,-0.46875 0.65625,-0.484375 1.04688,-1.546875 z m -5.48438,-2.703125 h 5.5 q -0.10937,-1.234375 -0.625,-1.859375 -0.79687,-0.96875 -2.07812,-0.96875 -1.14063,0 -1.9375,0.78125 -0.78125,0.765625 -0.85938,2.046875 z"
+ fill-rule="nonzero"
+ id="path257" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="M 119.12598,215.50131 80.543304,268.57217"
+ fill-rule="nonzero"
+ id="path259" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="M 119.12598,215.50131 80.543304,268.57217"
+ fill-rule="nonzero"
+ id="path261" />
+ <path
+ fill="#000000"
+ fill-opacity="0"
+ d="m 119.62467,215.50131 42.99212,58.99213"
+ fill-rule="nonzero"
+ id="path263" />
+ <path
+ stroke="#000000"
+ stroke-width="1"
+ stroke-linejoin="round"
+ stroke-linecap="butt"
+ d="m 119.62467,215.50131 42.99212,58.99213"
+ fill-rule="nonzero"
+ id="path265" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ d="M 704.87067,669.53284 H 902.36618 V 1090.4084 H 631.94626 v -48.6129"
+ id="path1231" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+ d="M 627.2572,719.0512 V 703.99292"
+ id="path1233" />
+ <text
+ xml:space="preserve"
+ style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:18.6667px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
+ x="1053.9216"
+ y="822.65356"
+ id="text1241"><tspan
+ sodipodi:role="line"
+ id="tspan1239"
+ x="1053.9216"
+ y="822.65356">server</tspan></text>
+</svg>
--- /dev/null
+<?xml version="1.0" standalone="yes"?>
+<!-- Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0 -->
+
+<svg version="1.1" viewBox="0.0 0.0 1338.0 1283.0" fill="none" stroke="none" stroke-linecap="square" stroke-miterlimit="10" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><clipPath id="p.0"><path d="m0 0l1338.0 0l0 1283.0l-1338.0 0l0 -1283.0z" clip-rule="nonzero"></path></clipPath><g clip-path="url(#p.0)"><path fill="#000000" fill-opacity="0.0" d="m0 0l1338.0 0l0 1283.0l-1338.0 0z" fill-rule="nonzero"></path><path fill="#d9ead3" d="m529.084 59.792652l179.27557 0l0 94.645676l-179.27557 0z" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m529.084 59.792652l179.27557 0l0 94.645676l-179.27557 0z" fill-rule="nonzero"></path><path fill="#000000" d="m573.0276 114.035484l-3.609375 -13.59375l1.84375 0l2.0625 8.90625q0.34375 1.40625 0.578125 2.78125q0.515625 -2.171875 0.609375 -2.515625l2.59375 -9.171875l2.171875 0l1.953125 6.875q0.734375 2.5625 1.046875 4.8125q0.265625 -1.28125 0.6875 -2.953125l2.125 -8.734375l1.8125 0l-3.734375 13.59375l-1.734375 0l-2.859375 -10.359375q-0.359375 -1.296875 -0.421875 -1.59375q-0.21875 0.9375 -0.40625 1.59375l-2.890625 10.359375l-1.828125 0zm14.389893 -4.921875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm9.266357 4.921875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 
0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm6.2438965 0l0 -13.59375l1.671875 0l0 7.75l3.953125 -4.015625l2.15625 0l-3.765625 3.65625l4.140625 6.203125l-2.0625 0l-3.25 -5.03125l-1.171875 1.125l0 3.90625l-1.671875 0zm10.859375 0l-1.546875 0l0 -13.59375l1.65625 0l0 4.84375q1.0625 -1.328125 2.703125 -1.328125q0.90625 0 1.71875 0.375q0.8125 0.359375 1.328125 1.03125q0.53125 0.65625 0.828125 1.59375q0.296875 0.9375 0.296875 2.0q0 2.53125 -1.25 3.921875q-1.25 1.375 -3.0 1.375q-1.75 0 -2.734375 -1.453125l0 1.234375zm-0.015625 -5.0q0 1.765625 0.46875 2.5625q0.796875 1.28125 2.140625 1.28125q1.09375 0 1.890625 -0.9375q0.796875 -0.953125 0.796875 -2.84375q0 -1.921875 -0.765625 -2.84375q-0.765625 -0.921875 -1.84375 -0.921875q-1.09375 0 -1.890625 0.953125q-0.796875 0.953125 -0.796875 2.75zm15.594482 1.828125l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.110107 5.875l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0zm16.813171 -3.609375l1.640625 0.21875q-0.265625 1.6875 -1.375 2.65625q-1.109375 0.953125 -2.734375 0.953125q-2.015625 0 -3.25 -1.3125q-1.21875 -1.328125 
-1.21875 -3.796875q0 -1.59375 0.515625 -2.78125q0.53125 -1.203125 1.609375 -1.796875q1.09375 -0.609375 2.359375 -0.609375q1.609375 0 2.625 0.8125q1.015625 0.8125 1.3125 2.3125l-1.625 0.25q-0.234375 -1.0 -0.828125 -1.5q-0.59375 -0.5 -1.421875 -0.5q-1.265625 0 -2.0625 0.90625q-0.78125 0.90625 -0.78125 2.859375q0 1.984375 0.765625 2.890625q0.765625 0.890625 1.984375 0.890625q0.984375 0 1.640625 -0.59375q0.65625 -0.609375 0.84375 -1.859375zm2.890625 3.609375l0 -13.59375l1.671875 0l0 4.875q1.171875 -1.359375 2.953125 -1.359375q1.09375 0 1.890625 0.4375q0.8125 0.421875 1.15625 1.1875q0.359375 0.765625 0.359375 2.203125l0 6.25l-1.671875 0l0 -6.25q0 -1.25 -0.546875 -1.8125q-0.546875 -0.578125 -1.53125 -0.578125q-0.75 0 -1.40625 0.390625q-0.640625 0.375 -0.921875 1.046875q-0.28125 0.65625 -0.28125 1.8125l0 5.390625l-1.671875 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m186.2126 85.77165l342.2677 2.708664" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m186.2126 85.77165l336.26794 2.6611862" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m522.4674 90.08451l4.5510254 -1.6157684l-4.5248413 -1.6875916z" fill-rule="evenodd"></path><path fill="#d9ead3" d="m464.64304 281.8714l154.07877 -82.47244l154.07874 82.47244l-154.07874 82.47244z" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m464.64304 281.8714l154.07877 -82.47244l154.07874 82.47244l-154.07874 82.47244z" fill-rule="nonzero"></path><path fill="#000000" d="m550.6512 266.79138l5.234375 -13.593735l1.9375 0l5.5625 13.593735l-2.046875 0l-1.59375 -4.125l-5.6875 0l-1.484375 4.125l-1.921875 0zm3.921875 -5.578125l4.609375 0l-1.40625 -3.78125q-0.65625 -1.7031097 -0.96875 -2.8124847q-0.265625 1.3125 -0.734375 2.5937347l-1.5 4.0zm9.8029175 5.578125l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 
-1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0zm9.750732 -4.921875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm9.281982 4.921875l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0zm10.297546 3.796875l-0.171875 -1.5625q0.546875 0.140625 0.953125 0.140625q0.546875 0 0.875 -0.1875q0.34375 -0.1875 0.5625 -0.515625q0.15625 -0.25 0.5 -1.25q0.046875 -0.140625 0.15625 -0.40625l-3.734375 -9.875l1.796875 0l2.046875 5.71875q0.40625 1.078125 0.71875 2.28125q0.28125 -1.15625 0.6875 -2.25l2.09375 -5.75l1.671875 0l-3.75 10.03125q-0.59375 1.625 -0.9375 2.234375q-0.4375 0.828125 -1.015625 1.203125q-0.578125 0.390625 -1.375 0.390625q-0.484375 0 -1.078125 -0.203125zm9.40625 -3.796875l0 -9.859375l1.5 0l0 1.390625q0.453125 -0.71875 1.21875 -1.15625q0.78125 -0.453125 1.765625 -0.453125q1.09375 0 1.796875 0.453125q0.703125 0.453125 0.984375 1.28125q1.171875 -1.734375 3.046875 
-1.734375q1.46875 0 2.25 0.8125q0.796875 0.8125 0.796875 2.5l0 6.765625l-1.671875 0l0 -6.203125q0 -1.0 -0.15625 -1.4375q-0.15625 -0.453125 -0.59375 -0.71875q-0.421875 -0.265625 -1.0 -0.265625q-1.03125 0 -1.71875 0.6875q-0.6875 0.6875 -0.6875 2.21875l0 5.71875l-1.671875 0l0 -6.40625q0 -1.109375 -0.40625 -1.65625q-0.40625 -0.5625 -1.34375 -0.5625q-0.703125 0 -1.3125 0.375q-0.59375 0.359375 -0.859375 1.078125q-0.265625 0.71875 -0.265625 2.0625l0 5.109375l-1.671875 0zm14.9158325 -4.921875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm15.735107 4.921875l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm3.2506714 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 
0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375z" fill-rule="nonzero"></path><path fill="#000000" d="m558.36993 287.57263q-0.9375 0.796875 -1.796875 1.125q-0.859375 0.3125 -1.84375 0.3125q-1.609375 0 -2.484375 -0.78125q-0.875 -0.796875 -0.875 -2.03125q0 -0.734375 0.328125 -1.328125q0.328125 -0.59375 0.859375 -0.953125q0.53125 -0.359375 1.203125 -0.546875q0.5 -0.140625 1.484375 -0.25q2.03125 -0.25 2.984375 -0.578125q0 -0.34375 0 -0.4375q0 -1.015625 -0.46875 -1.4375q-0.640625 -0.5625 -1.90625 -0.5625q-1.171875 0 -1.734375 0.40625q-0.5625 0.40625 -0.828125 1.46875l-1.640625 -0.234375q0.234375 -1.046875 0.734375 -1.6875q0.515625 -0.640625 1.46875 -0.984375q0.96875 -0.359375 2.25 -0.359375q1.265625 0 2.046875 0.296875q0.78125 0.296875 1.15625 0.75q0.375 0.453125 0.515625 1.140625q0.09375 0.421875 0.09375 1.53125l0 2.234375q0 2.328125 0.09375 2.953125q0.109375 0.609375 0.4375 1.171875l-1.75 0q-0.265625 -0.515625 -0.328125 -1.21875zm-0.140625 -3.71875q-0.90625 0.359375 -2.734375 0.625q-1.03125 0.140625 -1.453125 0.328125q-0.421875 0.1875 -0.65625 0.546875q-0.234375 0.359375 -0.234375 0.796875q0 0.671875 0.5 1.125q0.515625 0.4375 1.484375 0.4375q0.96875 0 1.71875 -0.421875q0.75 -0.4375 1.109375 -1.15625q0.265625 -0.578125 0.265625 -1.671875l0 -0.609375zm10.516296 1.328125l1.640625 0.21875q-0.265625 1.6875 -1.375 2.65625q-1.109375 0.953125 -2.734375 0.953125q-2.015625 0 -3.25 -1.3125q-1.21875 -1.328125 -1.21875 -3.796875q0 -1.59375 0.515625 -2.78125q0.53125 -1.203125 1.609375 
-1.796875q1.09375 -0.609375 2.359375 -0.609375q1.609375 0 2.625 0.8125q1.015625 0.8125 1.3125 2.3125l-1.625 0.25q-0.234375 -1.0 -0.828125 -1.5q-0.59375 -0.5 -1.421875 -0.5q-1.265625 0 -2.0625 0.90625q-0.78125 0.90625 -0.78125 2.859375q0 1.984375 0.765625 2.890625q0.765625 0.890625 1.984375 0.890625q0.984375 0 1.640625 -0.59375q0.65625 -0.609375 0.84375 -1.859375zm9.328125 0l1.640625 0.21875q-0.265625 1.6875 -1.375 2.65625q-1.109375 0.953125 -2.734375 0.953125q-2.015625 0 -3.25 -1.3125q-1.21875 -1.328125 -1.21875 -3.796875q0 -1.59375 0.515625 -2.78125q0.53125 -1.203125 1.609375 -1.796875q1.09375 -0.609375 2.359375 -0.609375q1.609375 0 2.625 0.8125q1.015625 0.8125 1.3125 2.3125l-1.625 0.25q-0.234375 -1.0 -0.828125 -1.5q-0.59375 -0.5 -1.421875 -0.5q-1.265625 0 -2.0625 0.90625q-0.78125 0.90625 -0.78125 2.859375q0 1.984375 0.765625 2.890625q0.765625 0.890625 1.984375 0.890625q0.984375 0 1.640625 -0.59375q0.65625 -0.609375 0.84375 -1.859375zm9.640625 0.4375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm8.438232 2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 
-0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm9.328125 0l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm21.933289 -0.234375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 
-0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.110107 5.875l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0zm16.813232 -1.21875q-0.9375 0.796875 -1.796875 1.125q-0.859375 0.3125 -1.84375 0.3125q-1.609375 0 -2.484375 -0.78125q-0.875 -0.796875 -0.875 -2.03125q0 -0.734375 0.328125 -1.328125q0.328125 -0.59375 0.859375 -0.953125q0.53125 -0.359375 1.203125 -0.546875q0.5 -0.140625 1.484375 -0.25q2.03125 -0.25 2.984375 -0.578125q0 -0.34375 0 -0.4375q0 -1.015625 -0.46875 -1.4375q-0.640625 -0.5625 -1.90625 -0.5625q-1.171875 0 -1.734375 0.40625q-0.5625 0.40625 -0.828125 1.46875l-1.640625 -0.234375q0.234375 -1.046875 0.734375 -1.6875q0.515625 -0.640625 1.46875 -0.984375q0.96875 -0.359375 2.25 -0.359375q1.265625 0 2.046875 0.296875q0.78125 0.296875 1.15625 0.75q0.375 0.453125 0.515625 1.140625q0.09375 0.421875 0.09375 1.53125l0 2.234375q0 2.328125 0.09375 2.953125q0.109375 0.609375 0.4375 1.171875l-1.75 0q-0.265625 -0.515625 -0.328125 -1.21875zm-0.140625 -3.71875q-0.90625 0.359375 -2.734375 0.625q-1.03125 0.140625 -1.453125 0.328125q-0.421875 0.1875 -0.65625 0.546875q-0.234375 0.359375 -0.234375 0.796875q0 0.671875 0.5 1.125q0.515625 0.4375 1.484375 0.4375q0.96875 0 1.71875 -0.421875q0.75 -0.4375 1.109375 -1.15625q0.265625 -0.578125 0.265625 -1.671875l0 -0.609375zm5.6257324 4.9375l-1.546875 0l0 -13.59375l1.65625 0l0 4.84375q1.0625 -1.328125 2.703125 
-1.328125q0.90625 0 1.71875 0.375q0.8125 0.359375 1.328125 1.03125q0.53125 0.65625 0.828125 1.59375q0.296875 0.9375 0.296875 2.0q0 2.53125 -1.25 3.921875q-1.25 1.375 -3.0 1.375q-1.75 0 -2.734375 -1.453125l0 1.234375zm-0.015625 -5.0q0 1.765625 0.46875 2.5625q0.796875 1.28125 2.140625 1.28125q1.09375 0 1.890625 -0.9375q0.796875 -0.953125 0.796875 -2.84375q0 -1.921875 -0.765625 -2.84375q-0.765625 -0.921875 -1.84375 -0.921875q-1.09375 0 -1.890625 0.953125q-0.796875 0.953125 -0.796875 2.75zm8.813171 5.0l0 -13.59375l1.671875 0l0 13.59375l-1.671875 0zm10.926086 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm15.500732 5.875l0 -1.25q-0.9375 1.46875 -2.75 1.46875q-1.171875 0 -2.171875 -0.640625q-0.984375 -0.65625 -1.53125 -1.8125q-0.53125 -1.171875 -0.53125 -2.6875q0 -1.46875 0.484375 -2.671875q0.5 -1.203125 1.46875 -1.84375q0.984375 -0.640625 2.203125 -0.640625q0.890625 0 1.578125 0.375q0.703125 0.375 1.140625 0.984375l0 -4.875l1.65625 0l0 13.59375l-1.546875 0zm-5.28125 -4.921875q0 1.890625 0.796875 2.828125q0.8125 0.9375 1.890625 0.9375q1.09375 0 1.859375 -0.890625q0.765625 -0.890625 0.765625 -2.734375q0 -2.015625 -0.78125 -2.953125q-0.78125 -0.953125 -1.921875 -0.953125q-1.109375 0 -1.859375 0.90625q-0.75 0.90625 -0.75 2.859375z" fill-rule="nonzero"></path><path fill="#000000" d="m559.7137 309.182q-0.828125 0.921875 -1.8125 1.390625q-0.96875 0.453125 -2.09375 0.453125q-2.09375 0 -3.3125 -1.40625q-1.0 -1.15625 
-1.0 -2.578125q0 -1.265625 0.8125 -2.28125q0.8125 -1.015625 2.421875 -1.78125q-0.90625 -1.0625 -1.21875 -1.71875q-0.296875 -0.65625 -0.296875 -1.265625q0 -1.234375 0.953125 -2.125q0.953125 -0.90625 2.421875 -0.90625q1.390625 0 2.265625 0.859375q0.890625 0.84375 0.890625 2.046875q0 1.9375 -2.5625 3.3125l2.4375 3.09375q0.421875 -0.8125 0.640625 -1.890625l1.734375 0.375q-0.4375 1.78125 -1.203125 2.9375q0.9375 1.234375 2.125 2.078125l-1.125 1.328125q-1.0 -0.640625 -2.078125 -1.921875zm-3.40625 -7.078125q1.09375 -0.640625 1.40625 -1.125q0.328125 -0.484375 0.328125 -1.0625q0 -0.703125 -0.453125 -1.140625q-0.4375 -0.4375 -1.09375 -0.4375q-0.671875 0 -1.125 0.4375q-0.453125 0.421875 -0.453125 1.0625q0 0.3125 0.15625 0.65625q0.171875 0.34375 0.5 0.734375l0.734375 0.875zm2.359375 5.765625l-3.0625 -3.796875q-1.359375 0.8125 -1.84375 1.5q-0.46875 0.6875 -0.46875 1.375q0 0.8125 0.65625 1.703125q0.671875 0.890625 1.875 0.890625q0.75 0 1.546875 -0.46875q0.8125 -0.46875 1.296875 -1.203125zm17.329956 1.703125q-0.9375 0.796875 -1.796875 1.125q-0.859375 0.3125 -1.84375 0.3125q-1.609375 0 -2.484375 -0.78125q-0.875 -0.796875 -0.875 -2.03125q0 -0.734375 0.328125 -1.328125q0.328125 -0.59375 0.859375 -0.953125q0.53125 -0.359375 1.203125 -0.546875q0.5 -0.140625 1.484375 -0.25q2.03125 -0.25 2.984375 -0.578125q0 -0.34375 0 -0.4375q0 -1.015625 -0.46875 -1.4375q-0.640625 -0.5625 -1.90625 -0.5625q-1.171875 0 -1.734375 0.40625q-0.5625 0.40625 -0.828125 1.46875l-1.640625 -0.234375q0.234375 -1.046875 0.734375 -1.6875q0.515625 -0.640625 1.46875 -0.984375q0.96875 -0.359375 2.25 -0.359375q1.265625 0 2.046875 0.296875q0.78125 0.296875 1.15625 0.75q0.375 0.453125 0.515625 1.140625q0.09375 0.421875 0.09375 1.53125l0 2.234375q0 2.328125 0.09375 2.953125q0.109375 0.609375 0.4375 1.171875l-1.75 0q-0.265625 -0.515625 -0.328125 -1.21875zm-0.140625 -3.71875q-0.90625 0.359375 -2.734375 0.625q-1.03125 0.140625 -1.453125 0.328125q-0.421875 0.1875 -0.65625 0.546875q-0.234375 0.359375 -0.234375 0.796875q0 0.671875 
0.5 1.125q0.515625 0.4375 1.484375 0.4375q0.96875 0 1.71875 -0.421875q0.75 -0.4375 1.109375 -1.15625q0.265625 -0.578125 0.265625 -1.671875l0 -0.609375zm10.516357 1.328125l1.640625 0.21875q-0.265625 1.6875 -1.375 2.65625q-1.109375 0.953125 -2.734375 0.953125q-2.015625 0 -3.25 -1.3125q-1.21875 -1.328125 -1.21875 -3.796875q0 -1.59375 0.515625 -2.78125q0.53125 -1.203125 1.609375 -1.796875q1.09375 -0.609375 2.359375 -0.609375q1.609375 0 2.625 0.8125q1.015625 0.8125 1.3125 2.3125l-1.625 0.25q-0.234375 -1.0 -0.828125 -1.5q-0.59375 -0.5 -1.421875 -0.5q-1.265625 0 -2.0625 0.90625q-0.78125 0.90625 -0.78125 2.859375q0 1.984375 0.765625 2.890625q0.765625 0.890625 1.984375 0.890625q0.984375 0 1.640625 -0.59375q0.65625 -0.609375 0.84375 -1.859375zm9.328125 0l1.640625 0.21875q-0.265625 1.6875 -1.375 2.65625q-1.109375 0.953125 -2.734375 0.953125q-2.015625 0 -3.25 -1.3125q-1.21875 -1.328125 -1.21875 -3.796875q0 -1.59375 0.515625 -2.78125q0.53125 -1.203125 1.609375 -1.796875q1.09375 -0.609375 2.359375 -0.609375q1.609375 0 2.625 0.8125q1.015625 0.8125 1.3125 2.3125l-1.625 0.25q-0.234375 -1.0 -0.828125 -1.5q-0.59375 -0.5 -1.421875 -0.5q-1.265625 0 -2.0625 0.90625q-0.78125 0.90625 -0.78125 2.859375q0 1.984375 0.765625 2.890625q0.765625 0.890625 1.984375 0.890625q0.984375 0 1.640625 -0.59375q0.65625 -0.609375 0.84375 -1.859375zm9.640625 0.4375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm8.438232 2.9375l1.65625 -0.265625q0.140625 1.0 
0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm9.328125 0l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 
0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm10.015625 -8.75l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm5.6760864 0l-1.546875 0l0 -13.59375l1.65625 0l0 4.84375q1.0625 -1.328125 2.703125 -1.328125q0.90625 0 1.71875 0.375q0.8125 0.359375 1.328125 1.03125q0.53125 0.65625 0.828125 1.59375q0.296875 0.9375 0.296875 2.0q0 2.53125 -1.25 3.921875q-1.25 1.375 -3.0 1.375q-1.75 0 -2.734375 -1.453125l0 1.234375zm-0.015625 -5.0q0 1.765625 0.46875 2.5625q0.796875 1.28125 2.140625 1.28125q1.09375 0 1.890625 -0.9375q0.796875 -0.953125 0.796875 -2.84375q0 -1.921875 -0.765625 -2.84375q-0.765625 -0.921875 -1.84375 -0.921875q-1.09375 0 -1.890625 0.953125q-0.796875 0.953125 -0.796875 2.75zm8.813171 5.0l0 -13.59375l1.671875 0l0 13.59375l-1.671875 0zm10.926086 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm12.235107 2.53125q0 -0.34375 0 -0.5q0 -0.984375 0.265625 -1.703125q0.21875 -0.546875 0.671875 -1.09375q0.328125 -0.390625 1.1875 -1.15625q0.875 -0.765625 1.125 -1.21875q0.265625 -0.453125 0.265625 -1.0q0 -0.96875 -0.765625 -1.703125q-0.75 -0.734375 -1.859375 -0.734375q-1.0625 0 -1.78125 0.671875q-0.703125 0.65625 -0.9375 2.078125l-1.71875 -0.203125q0.234375 -1.90625 1.375 -2.90625q1.15625 -1.015625 3.03125 -1.015625q2.0 0 3.1875 1.09375q1.1875 1.078125 1.1875 2.609375q0 0.890625 -0.421875 1.640625q-0.40625 0.75 -1.625 1.828125q-0.8125 
0.734375 -1.0625 1.078125q-0.25 0.34375 -0.375 0.796875q-0.125 0.4375 -0.140625 1.4375l-1.609375 0zm-0.09375 3.34375l0 -1.90625l1.890625 0l0 1.90625l-1.890625 0z" fill-rule="nonzero"></path><path fill="#d9ead3" d="m848.9265 239.90552l156.34644 0l0 88.59842l-156.34644 0z" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m848.9265 239.90552l156.34644 0l0 88.59842l-156.34644 0z" fill-rule="nonzero"></path><path fill="#000000" d="m865.75464 274.7966l0 -1.609375l5.765625 0l0 5.046875q-1.328125 1.0625 -2.75 1.59375q-1.40625 0.53125 -2.890625 0.53125q-2.0 0 -3.640625 -0.859375q-1.625 -0.859375 -2.46875 -2.484375q-0.828125 -1.625 -0.828125 -3.625q0 -1.984375 0.828125 -3.703125q0.828125 -1.71875 2.390625 -2.546875q1.5625 -0.84375 3.59375 -0.84375q1.46875 0 2.65625 0.484375q1.203125 0.46875 1.875 1.328125q0.671875 0.84375 1.03125 2.21875l-1.625 0.4375q-0.3125 -1.03125 -0.765625 -1.625q-0.453125 -0.59375 -1.296875 -0.953125q-0.84375 -0.359375 -1.875 -0.359375q-1.234375 0 -2.140625 0.375q-0.890625 0.375 -1.453125 1.0q-0.546875 0.609375 -0.84375 1.34375q-0.53125 1.25 -0.53125 2.734375q0 1.8125 0.625 3.046875q0.640625 1.21875 1.828125 1.8125q1.203125 0.59375 2.546875 0.59375q1.171875 0 2.28125 -0.453125q1.109375 -0.453125 1.6875 -0.953125l0 -2.53125l-4.0 0zm14.683289 2.15625l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm12.766357 4.375l0.234375 
1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm6.694702 1.5l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm12.9783325 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm8.438232 2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 
0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm9.375 -1.984375q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm15.735046 4.921875l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm3.9069824 0l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm12.6658325 -3.609375l1.640625 0.21875q-0.265625 1.6875 -1.375 2.65625q-1.109375 0.953125 -2.734375 0.953125q-2.015625 0 -3.25 -1.3125q-1.21875 -1.328125 -1.21875 -3.796875q0 -1.59375 0.515625 -2.78125q0.53125 -1.203125 1.609375 -1.796875q1.09375 -0.609375 2.359375 
-0.609375q1.609375 0 2.625 0.8125q1.015625 0.8125 1.3125 2.3125l-1.625 0.25q-0.234375 -1.0 -0.828125 -1.5q-0.59375 -0.5 -1.421875 -0.5q-1.265625 0 -2.0625 0.90625q-0.78125 0.90625 -0.78125 2.859375q0 1.984375 0.765625 2.890625q0.765625 0.890625 1.984375 0.890625q0.984375 0 1.640625 -0.59375q0.65625 -0.609375 0.84375 -1.859375zm9.640625 0.4375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875z" fill-rule="nonzero"></path><path fill="#000000" d="m859.58276 302.12473l0 -8.546875l-1.484375 0l0 -1.3125l1.484375 0l0 -1.046875q0 -0.984375 0.171875 -1.46875q0.234375 -0.65625 0.84375 -1.046875q0.609375 -0.40625 1.703125 -0.40625q0.703125 0 1.5625 0.15625l-0.25 1.46875q-0.515625 -0.09375 -0.984375 -0.09375q-0.765625 0 -1.078125 0.328125q-0.3125 0.3125 -0.3125 1.203125l0 0.90625l1.921875 0l0 1.3125l-1.921875 0l0 8.546875l-1.65625 0zm4.7614136 0l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm5.6033325 -4.921875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 
-3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm9.281921 4.921875l0 -9.859375l1.5 0l0 1.390625q0.453125 -0.71875 1.21875 -1.15625q0.78125 -0.453125 1.765625 -0.453125q1.09375 0 1.796875 0.453125q0.703125 0.453125 0.984375 1.28125q1.171875 -1.734375 3.046875 -1.734375q1.46875 0 2.25 0.8125q0.796875 0.8125 0.796875 2.5l0 6.765625l-1.671875 0l0 -6.203125q0 -1.0 -0.15625 -1.4375q-0.15625 -0.453125 -0.59375 -0.71875q-0.421875 -0.265625 -1.0 -0.265625q-1.03125 0 -1.71875 0.6875q-0.6875 0.6875 -0.6875 2.21875l0 5.71875l-1.671875 0l0 -6.40625q0 -1.109375 -0.40625 -1.65625q-0.40625 -0.5625 -1.34375 -0.5625q-0.703125 0 -1.3125 0.375q-0.59375 0.359375 -0.859375 1.078125q-0.265625 0.71875 -0.265625 2.0625l0 5.109375l-1.671875 0zm19.442871 0l5.234375 -13.59375l1.9375 0l5.5625 13.59375l-2.046875 0l-1.59375 -4.125l-5.6875 0l-1.484375 4.125l-1.921875 0zm3.921875 -5.578125l4.609375 0l-1.40625 -3.78125q-0.65625 -1.703125 -0.96875 -2.8125q-0.265625 1.3125 -0.734375 2.59375l-1.5 4.0zm10.0217285 5.578125l0 -13.59375l5.125 0q1.359375 0 2.078125 0.125q1.0 0.171875 1.671875 0.640625q0.671875 0.46875 1.078125 1.3125q0.421875 0.84375 0.421875 1.84375q0 1.734375 -1.109375 2.9375q-1.09375 1.203125 -3.984375 1.203125l-3.484375 0l0 5.53125l-1.796875 0zm1.796875 -7.140625l3.515625 0q1.75 0 2.46875 -0.640625q0.734375 -0.65625 0.734375 -1.828125q0 -0.859375 -0.4375 -1.46875q-0.421875 -0.609375 -1.125 -0.796875q-0.453125 -0.125 -1.671875 -0.125l-3.484375 0l0 4.859375zm10.9435425 7.140625l0 -13.59375l1.8125 0l0 13.59375l-1.8125 0zm9.460388 -4.375l1.6875 -0.140625q0.125 1.015625 0.5625 1.671875q0.4375 0.65625 1.359375 1.0625q0.9375 0.40625 2.09375 0.40625q1.03125 0 1.8125 -0.3125q0.796875 -0.3125 1.1875 -0.84375q0.390625 -0.53125 0.390625 -1.15625q0 -0.640625 -0.375 -1.109375q-0.375 
-0.484375 -1.234375 -0.8125q-0.546875 -0.21875 -2.421875 -0.65625q-1.875 -0.453125 -2.625 -0.859375q-0.96875 -0.515625 -1.453125 -1.265625q-0.46875 -0.75 -0.46875 -1.6875q0 -1.03125 0.578125 -1.921875q0.59375 -0.90625 1.703125 -1.359375q1.125 -0.46875 2.5 -0.46875q1.515625 0 2.671875 0.484375q1.15625 0.484375 1.765625 1.4375q0.625 0.9375 0.671875 2.140625l-1.71875 0.125q-0.140625 -1.28125 -0.953125 -1.9375q-0.796875 -0.671875 -2.359375 -0.671875q-1.625 0 -2.375 0.609375q-0.75 0.59375 -0.75 1.4375q0 0.734375 0.53125 1.203125q0.515625 0.46875 2.703125 0.96875q2.203125 0.5 3.015625 0.875q1.1875 0.546875 1.75 1.390625q0.578125 0.828125 0.578125 1.921875q0 1.09375 -0.625 2.0625q-0.625 0.953125 -1.796875 1.484375q-1.15625 0.53125 -2.609375 0.53125q-1.84375 0 -3.09375 -0.53125q-1.25 -0.546875 -1.96875 -1.625q-0.703125 -1.078125 -0.734375 -2.453125zm19.584167 1.203125l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094482 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm8.9626465 0l-3.75 -9.859375l1.765625 0l2.125 5.90625q0.34375 0.953125 0.625 1.984375q0.21875 -0.78125 0.625 -1.875l2.1875 -6.015625l1.71875 0l-3.734375 9.859375l-1.5625 0zm13.34375 
-3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094482 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0z" fill-rule="nonzero"></path><path fill="#d9ead3" d="m467.042 484.1076l154.07874 -74.80313l154.07874 74.80313l-154.07874 74.80316z" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m467.042 484.1076l154.07874 -74.80313l154.07874 74.80313l-154.07874 74.80316z" fill-rule="nonzero"></path><path fill="#000000" d="m553.94073 486.65262l1.6875 -0.140625q0.125 1.015625 0.5625 1.671875q0.4375 0.65625 1.359375 1.0625q0.9375 0.40625 2.09375 0.40625q1.03125 0 1.8125 -0.3125q0.796875 -0.3125 1.1875 -0.84375q0.390625 -0.53125 0.390625 -1.15625q0 -0.640625 -0.375 -1.109375q-0.375 -0.484375 -1.234375 -0.8125q-0.546875 -0.21875 -2.421875 -0.65625q-1.875 -0.453125 -2.625 -0.859375q-0.96875 -0.515625 -1.453125 -1.265625q-0.46875 -0.75 -0.46875 -1.6875q0 -1.03125 0.578125 -1.921875q0.59375 -0.90625 1.703125 -1.359375q1.125 -0.46875 2.5 -0.46875q1.515625 0 2.671875 0.484375q1.15625 0.484375 1.765625 1.4375q0.625 0.9375 0.671875 2.140625l-1.71875 0.125q-0.140625 -1.28125 -0.953125 -1.9375q-0.796875 
-0.671875 -2.359375 -0.671875q-1.625 0 -2.375 0.609375q-0.75 0.59375 -0.75 1.4375q0 0.734375 0.53125 1.203125q0.515625 0.46875 2.703125 0.96875q2.203125 0.5 3.015625 0.875q1.1875 0.546875 1.75 1.390625q0.578125 0.828125 0.578125 1.921875q0 1.09375 -0.625 2.0625q-0.625 0.953125 -1.796875 1.484375q-1.15625 0.53125 -2.609375 0.53125q-1.84375 0 -3.09375 -0.53125q-1.25 -0.546875 -1.96875 -1.625q-0.703125 -1.078125 -0.734375 -2.453125zm19.584229 1.203125l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm8.438171 2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 
0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm9.328125 0l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm10.015625 -8.75l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm3.5042114 -4.921875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm9.281982 4.921875l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 
0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0zm22.309021 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm8.000732 5.875l3.59375 -5.125l-3.328125 -4.734375l2.09375 0l1.515625 2.3125q0.421875 0.65625 0.671875 1.109375q0.421875 -0.609375 0.765625 -1.09375l1.65625 -2.328125l1.984375 0l-3.390625 4.640625l3.65625 5.21875l-2.046875 0l-2.03125 -3.0625l-0.53125 -0.828125l-2.59375 3.890625l-2.015625 0zm10.453125 -11.6875l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm3.4572754 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 
0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm13.65625 1.4375l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm0.8552246 -1.4375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm13.125 -0.40625q0 -0.34375 0 -0.5q0 -0.984375 0.265625 -1.703125q0.21875 -0.546875 0.671875 -1.09375q0.328125 -0.390625 1.1875 
-1.15625q0.875 -0.765625 1.125 -1.21875q0.265625 -0.453125 0.265625 -1.0q0 -0.96875 -0.765625 -1.703125q-0.75 -0.734375 -1.859375 -0.734375q-1.0625 0 -1.78125 0.671875q-0.703125 0.65625 -0.9375 2.078125l-1.71875 -0.203125q0.234375 -1.90625 1.375 -2.90625q1.15625 -1.015625 3.03125 -1.015625q2.0 0 3.1875 1.09375q1.1875 1.078125 1.1875 2.609375q0 0.890625 -0.421875 1.640625q-0.40625 0.75 -1.625 1.828125q-0.8125 0.734375 -1.0625 1.078125q-0.25 0.34375 -0.375 0.796875q-0.125 0.4375 -0.140625 1.4375l-1.609375 0zm-0.09375 3.34375l0 -1.90625l1.890625 0l0 1.90625l-1.890625 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m618.7218 154.43832l1.1968384 48.0" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m618.7218 154.43832l1.0472412 42.00186" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m618.11786 196.48135l1.7643433 4.495514l1.5380859 -4.5778503z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m896.65094 455.34122l2.3936768 43.653534" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m896.65094 455.34122l2.0651855 37.662506" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m897.06683 493.09418l1.8977661 4.440857l1.4007568 -4.621704z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m772.80054 281.8714l76.12598 1.669281" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m772.80054 281.8714l70.12744 1.5377502" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m842.8917 285.0605l4.573242 -1.5518494l-4.5007935 -1.750824z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m620.52234 360.3176l1.1968384 48.0" 
fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m620.52234 360.3176l1.0472412 42.00183" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m619.9184 402.36063l1.7643433 4.495514l1.5380859 -4.5778503z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m585.021 367.1076l58.80316 0l0 34.4252l-58.80316 0z" fill-rule="nonzero"></path><path fill="#000000" d="m595.4741 394.02762l0 -13.59375l1.84375 0l7.140625 10.671875l0 -10.671875l1.71875 0l0 13.59375l-1.84375 0l-7.140625 -10.6875l0 10.6875l-1.71875 0zm12.644836 -4.921875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m788.84515 248.8924l58.80316 0l0 34.4252l-58.80316 0z" fill-rule="nonzero"></path><path fill="#000000" d="m803.142 275.81238l0 -5.765625l-5.234375 -7.828125l2.1875 0l2.671875 4.09375q0.75 1.15625 1.390625 2.296875q0.609375 -1.0625 1.484375 -2.40625l2.625 -3.984375l2.109375 0l-5.4375 7.828125l0 5.765625l-1.796875 0zm15.1466675 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 
0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm8.438232 2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375z" fill-rule="nonzero"></path><path fill="#d9ead3" d="m845.084 442.14172l156.34644 0l0 88.59845l-156.34644 0z" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m845.084 442.14172l156.34644 0l0 88.59845l-156.34644 0z" fill-rule="nonzero"></path><path fill="#000000" d="m861.9121 477.0328l0 -1.609375l5.765625 0l0 5.046875q-1.328125 1.0625 -2.75 1.59375q-1.40625 0.53125 -2.890625 0.53125q-2.0 0 -3.640625 -0.859375q-1.625 -0.859375 -2.46875 -2.484375q-0.828125 -1.625 -0.828125 -3.625q0 -1.984375 0.828125 -3.703125q0.828125 -1.71875 2.390625 -2.546875q1.5625 -0.84375 3.59375 -0.84375q1.46875 0 2.65625 0.484375q1.203125 
0.46875 1.875 1.328125q0.671875 0.84375 1.03125 2.21875l-1.625 0.4375q-0.3125 -1.03125 -0.765625 -1.625q-0.453125 -0.59375 -1.296875 -0.953125q-0.84375 -0.359375 -1.875 -0.359375q-1.234375 0 -2.140625 0.375q-0.890625 0.375 -1.453125 1.0q-0.546875 0.609375 -0.84375 1.34375q-0.53125 1.25 -0.53125 2.734375q0 1.8125 0.625 3.046875q0.640625 1.21875 1.828125 1.8125q1.203125 0.59375 2.546875 0.59375q1.171875 0 2.28125 -0.453125q1.109375 -0.453125 1.6875 -0.953125l0 -2.53125l-4.0 0zm14.683289 2.15625l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm12.766357 4.375l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm6.694763 1.5l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm12.9782715 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 
-3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm8.438232 2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm9.375 -1.984375q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 
-0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm15.735107 4.921875l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm3.9069214 0l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm12.6658325 -3.609375l1.640625 0.21875q-0.265625 1.6875 -1.375 2.65625q-1.109375 0.953125 -2.734375 0.953125q-2.015625 0 -3.25 -1.3125q-1.21875 -1.328125 -1.21875 -3.796875q0 -1.59375 0.515625 -2.78125q0.53125 -1.203125 1.609375 -1.796875q1.09375 -0.609375 2.359375 -0.609375q1.609375 0 2.625 0.8125q1.015625 0.8125 1.3125 2.3125l-1.625 0.25q-0.234375 -1.0 -0.828125 -1.5q-0.59375 -0.5 -1.421875 -0.5q-1.265625 0 -2.0625 0.90625q-0.78125 0.90625 -0.78125 2.859375q0 1.984375 0.765625 2.890625q0.765625 0.890625 1.984375 0.890625q0.984375 0 1.640625 -0.59375q0.65625 -0.609375 0.84375 -1.859375zm9.640625 0.4375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 
-0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875z" fill-rule="nonzero"></path><path fill="#000000" d="m855.74023 504.36093l0 -8.546875l-1.484375 0l0 -1.3125l1.484375 0l0 -1.046875q0 -0.984375 0.171875 -1.46875q0.234375 -0.65625 0.84375 -1.046875q0.609375 -0.40625 1.703125 -0.40625q0.703125 0 1.5625 0.15625l-0.25 1.46875q-0.515625 -0.09375 -0.984375 -0.09375q-0.765625 0 -1.078125 0.328125q-0.3125 0.3125 -0.3125 1.203125l0 0.90625l1.921875 0l0 1.3125l-1.921875 0l0 8.546875l-1.65625 0zm4.7614136 0l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm5.6033325 -4.921875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm9.281982 4.921875l0 -9.859375l1.5 0l0 1.390625q0.453125 -0.71875 1.21875 -1.15625q0.78125 -0.453125 1.765625 -0.453125q1.09375 0 1.796875 0.453125q0.703125 0.453125 0.984375 1.28125q1.171875 -1.734375 3.046875 -1.734375q1.46875 0 2.25 0.8125q0.796875 0.8125 0.796875 2.5l0 6.765625l-1.671875 0l0 -6.203125q0 -1.0 -0.15625 -1.4375q-0.15625 -0.453125 -0.59375 -0.71875q-0.421875 -0.265625 -1.0 -0.265625q-1.03125 0 -1.71875 0.6875q-0.6875 0.6875 -0.6875 2.21875l0 5.71875l-1.671875 0l0 -6.40625q0 -1.109375 -0.40625 -1.65625q-0.40625 -0.5625 -1.34375 -0.5625q-0.703125 0 -1.3125 0.375q-0.59375 0.359375 
-0.859375 1.078125q-0.265625 0.71875 -0.265625 2.0625l0 5.109375l-1.671875 0zm19.44281 0l5.234375 -13.59375l1.9375 0l5.5625 13.59375l-2.046875 0l-1.59375 -4.125l-5.6875 0l-1.484375 4.125l-1.921875 0zm3.921875 -5.578125l4.609375 0l-1.40625 -3.78125q-0.65625 -1.703125 -0.96875 -2.8125q-0.265625 1.3125 -0.734375 2.59375l-1.5 4.0zm10.0217285 5.578125l0 -13.59375l5.125 0q1.359375 0 2.078125 0.125q1.0 0.171875 1.671875 0.640625q0.671875 0.46875 1.078125 1.3125q0.421875 0.84375 0.421875 1.84375q0 1.734375 -1.109375 2.9375q-1.09375 1.203125 -3.984375 1.203125l-3.484375 0l0 5.53125l-1.796875 0zm1.796875 -7.140625l3.515625 0q1.75 0 2.46875 -0.640625q0.734375 -0.65625 0.734375 -1.828125q0 -0.859375 -0.4375 -1.46875q-0.421875 -0.609375 -1.125 -0.796875q-0.453125 -0.125 -1.671875 -0.125l-3.484375 0l0 4.859375zm10.9435425 7.140625l0 -13.59375l1.8125 0l0 13.59375l-1.8125 0zm9.460388 -4.375l1.6875 -0.140625q0.125 1.015625 0.5625 1.671875q0.4375 0.65625 1.359375 1.0625q0.9375 0.40625 2.09375 0.40625q1.03125 0 1.8125 -0.3125q0.796875 -0.3125 1.1875 -0.84375q0.390625 -0.53125 0.390625 -1.15625q0 -0.640625 -0.375 -1.109375q-0.375 -0.484375 -1.234375 -0.8125q-0.546875 -0.21875 -2.421875 -0.65625q-1.875 -0.453125 -2.625 -0.859375q-0.96875 -0.515625 -1.453125 -1.265625q-0.46875 -0.75 -0.46875 -1.6875q0 -1.03125 0.578125 -1.921875q0.59375 -0.90625 1.703125 -1.359375q1.125 -0.46875 2.5 -0.46875q1.515625 0 2.671875 0.484375q1.15625 0.484375 1.765625 1.4375q0.625 0.9375 0.671875 2.140625l-1.71875 0.125q-0.140625 -1.28125 -0.953125 -1.9375q-0.796875 -0.671875 -2.359375 -0.671875q-1.625 0 -2.375 0.609375q-0.75 0.59375 -0.75 1.4375q0 0.734375 0.53125 1.203125q0.515625 0.46875 2.703125 0.96875q2.203125 0.5 3.015625 0.875q1.1875 0.546875 1.75 1.390625q0.578125 0.828125 0.578125 1.921875q0 1.09375 -0.625 2.0625q-0.625 0.953125 -1.796875 1.484375q-1.15625 0.53125 -2.609375 0.53125q-1.84375 0 -3.09375 -0.53125q-1.25 -0.546875 -1.96875 -1.625q-0.703125 -1.078125 -0.734375 -2.453125zm19.584167 
1.203125l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094482 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm8.9627075 0l-3.75 -9.859375l1.765625 0l2.125 5.90625q0.34375 0.953125 0.625 1.984375q0.21875 -0.78125 0.625 -1.875l2.1875 -6.015625l1.71875 0l-3.734375 9.859375l-1.5625 0zm13.34375 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094421 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 
0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m768.958 484.1076l76.12598 1.6693115" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m768.958 484.1076l70.12744 1.5377808" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m839.0492 487.2967l4.573242 -1.5518494l-4.5007935 -1.750824z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m785.0026 451.1286l58.80316 0l0 34.4252l-58.80316 0z" fill-rule="nonzero"></path><path fill="#000000" d="m799.2995 478.0486l0 -5.765625l-5.234375 -7.828125l2.1875 0l2.671875 4.09375q0.75 1.15625 1.390625 2.296875q0.609375 -1.0625 1.484375 -2.40625l2.625 -3.984375l2.109375 0l-5.4375 7.828125l0 5.765625l-1.796875 0zm15.1467285 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm8.438171 2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 
2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m1093.5826 486.44095l3.4645996 -377.88977" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m1093.5826 486.44095l3.4645996 -377.88977" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m1005.27295 284.2047l89.60632 1.6378174" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m1005.27295 284.2047l83.6073 1.5281677" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m1088.8501 287.38434l4.567505 -1.5685425l-4.507202 -1.734375z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m1099.9213 111.42519l-391.55908 -2.8661423" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m1099.9213 111.42519l-385.5592 -2.8222198" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m714.37415 106.95129l-4.550049 1.6184692l4.525879 1.684906z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m1001.4304 485.62204l89.60632 1.6378174" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m1001.4304 
485.62204l83.6073 1.5281372" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m1085.0076 488.80167l4.567505 -1.5685425l-4.50708 -1.734375z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m621.1207 558.91077l0.12597656 76.81891" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m621.1207 558.91077l0.1161499 70.81891" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m619.58514 629.73236l1.6591797 4.5354004l1.6442871 -4.5408325z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m579.0289 573.6352l47.338562 0l0 34.42517l-47.338562 0z" fill-rule="nonzero"></path><path fill="#000000" d="m589.482 600.5552l0 -13.59375l1.84375 0l7.140625 10.671875l0 -10.671875l1.71875 0l0 13.59375l-1.84375 0l-7.140625 -10.6875l0 10.6875l-1.71875 0zm12.644836 -4.921875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125z" fill-rule="nonzero"></path><path fill="#ead1dc" d="m545.084 634.39105l156.34644 0l0 70.26776l-156.34644 0z" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m545.084 634.39105l156.34644 0l0 70.26776l-156.34644 0z" fill-rule="nonzero"></path><path fill="#000000" d="m557.92773 654.44495l-3.609375 -13.59375l1.84375 0l2.0625 8.90625q0.34375 1.40625 0.578125 2.78125q0.515625 -2.171875 0.609375 
-2.515625l2.59375 -9.171875l2.171875 0l1.953125 6.875q0.734375 2.5625 1.046875 4.8125q0.265625 -1.28125 0.6875 -2.953125l2.125 -8.734375l1.8125 0l-3.734375 13.59375l-1.734375 0l-2.859375 -10.359375q-0.359375 -1.296875 -0.421875 -1.59375q-0.21875 0.9375 -0.40625 1.59375l-2.890625 10.359375l-1.828125 0zm21.764893 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.078857 5.875l0 -13.59375l1.671875 0l0 13.59375l-1.671875 0zm10.613586 -3.609375l1.640625 0.21875q-0.265625 1.6875 -1.375 2.65625q-1.109375 0.953125 -2.734375 0.953125q-2.015625 0 -3.25 -1.3125q-1.21875 -1.328125 -1.21875 -3.796875q0 -1.59375 0.515625 -2.78125q0.53125 -1.203125 1.609375 -1.796875q1.09375 -0.609375 2.359375 -0.609375q1.609375 0 2.625 0.8125q1.015625 0.8125 1.3125 2.3125l-1.625 0.25q-0.234375 -1.0 -0.828125 -1.5q-0.59375 -0.5 -1.421875 -0.5q-1.265625 0 -2.0625 0.90625q-0.78125 0.90625 -0.78125 2.859375q0 1.984375 0.765625 2.890625q0.765625 0.890625 1.984375 0.890625q0.984375 0 1.640625 -0.59375q0.65625 -0.609375 0.84375 -1.859375zm2.265625 -1.3125q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 
0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm9.281982 4.921875l0 -9.859375l1.5 0l0 1.390625q0.453125 -0.71875 1.21875 -1.15625q0.78125 -0.453125 1.765625 -0.453125q1.09375 0 1.796875 0.453125q0.703125 0.453125 0.984375 1.28125q1.171875 -1.734375 3.046875 -1.734375q1.46875 0 2.25 0.8125q0.796875 0.8125 0.796875 2.5l0 6.765625l-1.671875 0l0 -6.203125q0 -1.0 -0.15625 -1.4375q-0.15625 -0.453125 -0.59375 -0.71875q-0.421875 -0.265625 -1.0 -0.265625q-1.03125 0 -1.71875 0.6875q-0.6875 0.6875 -0.6875 2.21875l0 5.71875l-1.671875 0l0 -6.40625q0 -1.109375 -0.40625 -1.65625q-0.40625 -0.5625 -1.34375 -0.5625q-0.703125 0 -1.3125 0.375q-0.59375 0.359375 -0.859375 1.078125q-0.265625 0.71875 -0.265625 2.0625l0 5.109375l-1.671875 0zm22.290771 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm14.293396 9.65625l0 -13.640625l1.53125 0l0 1.28125q0.53125 -0.75 1.203125 -1.125q0.6875 -0.375 1.640625 -0.375q1.265625 0 2.234375 0.65625q0.96875 0.640625 1.453125 1.828125q0.5 1.1875 0.5 2.59375q0 1.515625 -0.546875 2.734375q-0.546875 1.203125 -1.578125 1.84375q-1.03125 0.640625 -2.171875 0.640625q-0.84375 0 -1.515625 -0.34375q-0.65625 -0.359375 -1.078125 -0.890625l0 4.796875l-1.671875 0zm1.515625 -8.65625q0 1.90625 0.765625 2.8125q0.78125 0.90625 1.875 0.90625q1.109375 0 1.890625 -0.9375q0.796875 -0.9375 0.796875 -2.921875q0 -1.875 
-0.78125 -2.8125q-0.765625 -0.9375 -1.84375 -0.9375q-1.0625 0 -1.890625 1.0q-0.8125 1.0 -0.8125 2.890625zm15.297607 3.65625q-0.9375 0.796875 -1.796875 1.125q-0.859375 0.3125 -1.84375 0.3125q-1.609375 0 -2.484375 -0.78125q-0.875 -0.796875 -0.875 -2.03125q0 -0.734375 0.328125 -1.328125q0.328125 -0.59375 0.859375 -0.953125q0.53125 -0.359375 1.203125 -0.546875q0.5 -0.140625 1.484375 -0.25q2.03125 -0.25 2.984375 -0.578125q0 -0.34375 0 -0.4375q0 -1.015625 -0.46875 -1.4375q-0.640625 -0.5625 -1.90625 -0.5625q-1.171875 0 -1.734375 0.40625q-0.5625 0.40625 -0.828125 1.46875l-1.640625 -0.234375q0.234375 -1.046875 0.734375 -1.6875q0.515625 -0.640625 1.46875 -0.984375q0.96875 -0.359375 2.25 -0.359375q1.265625 0 2.046875 0.296875q0.78125 0.296875 1.15625 0.75q0.375 0.453125 0.515625 1.140625q0.09375 0.421875 0.09375 1.53125l0 2.234375q0 2.328125 0.09375 2.953125q0.109375 0.609375 0.4375 1.171875l-1.75 0q-0.265625 -0.515625 -0.328125 -1.21875zm-0.140625 -3.71875q-0.90625 0.359375 -2.734375 0.625q-1.03125 0.140625 -1.453125 0.328125q-0.421875 0.1875 -0.65625 0.546875q-0.234375 0.359375 -0.234375 0.796875q0 0.671875 0.5 1.125q0.515625 0.4375 1.484375 0.4375q0.96875 0 1.71875 -0.421875q0.75 -0.4375 1.109375 -1.15625q0.265625 -0.578125 0.265625 -1.671875l0 -0.609375zm3.7819824 5.75l1.609375 0.25q0.109375 0.75 0.578125 1.09375q0.609375 0.453125 1.6875 0.453125q1.171875 0 1.796875 -0.46875q0.625 -0.453125 0.859375 -1.28125q0.125 -0.515625 0.109375 -2.15625q-1.09375 1.296875 -2.71875 1.296875q-2.03125 0 -3.15625 -1.46875q-1.109375 -1.46875 -1.109375 -3.515625q0 -1.40625 0.515625 -2.59375q0.515625 -1.203125 1.484375 -1.84375q0.96875 -0.65625 2.265625 -0.65625q1.75 0 2.875 1.40625l0 -1.1875l1.546875 0l0 8.515625q0 2.3125 -0.46875 3.265625q-0.46875 0.96875 -1.484375 1.515625q-1.015625 0.5625 -2.5 0.5625q-1.765625 0 -2.859375 -0.796875q-1.078125 -0.796875 -1.03125 -2.390625zm1.375 -5.921875q0 1.953125 0.765625 2.84375q0.78125 0.890625 1.9375 0.890625q1.140625 0 1.921875 -0.890625q0.78125 
-0.890625 0.78125 -2.78125q0 -1.8125 -0.8125 -2.71875q-0.796875 -0.921875 -1.921875 -0.921875q-1.109375 0 -1.890625 0.90625q-0.78125 0.890625 -0.78125 2.671875zm16.047546 1.9375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875z" fill-rule="nonzero"></path><path fill="#000000" d="m557.1621 676.44495l-3.015625 -9.859375l1.71875 0l1.5625 5.6875l0.59375 2.125q0.03125 -0.15625 0.5 -2.03125l1.578125 -5.78125l1.71875 0l1.46875 5.71875l0.484375 1.890625l0.578125 -1.90625l1.6875 -5.703125l1.625 0l-3.078125 9.859375l-1.734375 0l-1.578125 -5.90625l-0.375 -1.671875l-2.0 7.578125l-1.734375 0zm11.660461 -11.6875l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm7.7854614 -1.5l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm1.5270386 1.5l0 -13.59375l1.671875 0l0 4.875q1.171875 -1.359375 2.953125 -1.359375q1.09375 0 1.890625 0.4375q0.8125 0.421875 1.15625 1.1875q0.359375 0.765625 0.359375 2.203125l0 6.25l-1.671875 0l0 -6.25q0 -1.25 -0.546875 -1.8125q-0.546875 -0.578125 -1.53125 -0.578125q-0.75 0 -1.40625 0.390625q-0.640625 0.375 -0.921875 
1.046875q-0.28125 0.65625 -0.28125 1.8125l0 5.390625l-1.671875 0zm19.215271 -1.5l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm7.9645386 0.28125q-0.9375 0.796875 -1.796875 1.125q-0.859375 0.3125 -1.84375 0.3125q-1.609375 0 -2.484375 -0.78125q-0.875 -0.796875 -0.875 -2.03125q0 -0.734375 0.328125 -1.328125q0.328125 -0.59375 0.859375 -0.953125q0.53125 -0.359375 1.203125 -0.546875q0.5 -0.140625 1.484375 -0.25q2.03125 -0.25 2.984375 -0.578125q0 -0.34375 0 -0.4375q0 -1.015625 -0.46875 -1.4375q-0.640625 -0.5625 -1.90625 -0.5625q-1.171875 0 -1.734375 0.40625q-0.5625 0.40625 -0.828125 1.46875l-1.640625 -0.234375q0.234375 -1.046875 0.734375 -1.6875q0.515625 -0.640625 1.46875 -0.984375q0.96875 -0.359375 2.25 -0.359375q1.265625 0 2.046875 0.296875q0.78125 0.296875 1.15625 0.75q0.375 0.453125 0.515625 1.140625q0.09375 0.421875 0.09375 1.53125l0 2.234375q0 2.328125 0.09375 2.953125q0.109375 0.609375 0.4375 1.171875l-1.75 0q-0.265625 -0.515625 -0.328125 -1.21875zm-0.140625 -3.71875q-0.90625 0.359375 -2.734375 0.625q-1.03125 0.140625 -1.453125 0.328125q-0.421875 0.1875 -0.65625 0.546875q-0.234375 0.359375 -0.234375 0.796875q0 0.671875 0.5 1.125q0.515625 0.4375 1.484375 0.4375q0.96875 0 1.71875 -0.421875q0.75 -0.4375 1.109375 -1.15625q0.265625 -0.578125 0.265625 -1.671875l0 -0.609375zm4.0632324 4.9375l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm5.9313965 0.8125l1.609375 0.25q0.109375 
0.75 0.578125 1.09375q0.609375 0.453125 1.6875 0.453125q1.171875 0 1.796875 -0.46875q0.625 -0.453125 0.859375 -1.28125q0.125 -0.515625 0.109375 -2.15625q-1.09375 1.296875 -2.71875 1.296875q-2.03125 0 -3.15625 -1.46875q-1.109375 -1.46875 -1.109375 -3.515625q0 -1.40625 0.515625 -2.59375q0.515625 -1.203125 1.484375 -1.84375q0.96875 -0.65625 2.265625 -0.65625q1.75 0 2.875 1.40625l0 -1.1875l1.546875 0l0 8.515625q0 2.3125 -0.46875 3.265625q-0.46875 0.96875 -1.484375 1.515625q-1.015625 0.5625 -2.5 0.5625q-1.765625 0 -2.859375 -0.796875q-1.078125 -0.796875 -1.03125 -2.390625zm1.375 -5.921875q0 1.953125 0.765625 2.84375q0.78125 0.890625 1.9375 0.890625q1.140625 0 1.921875 -0.890625q0.78125 -0.890625 0.78125 -2.78125q0 -1.8125 -0.8125 -2.71875q-0.796875 -0.921875 -1.921875 -0.921875q-1.109375 0 -1.890625 0.90625q-0.78125 0.890625 -0.78125 2.671875zm16.047607 1.9375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm12.766357 4.375l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125z" fill-rule="nonzero"></path><path fill="#000000" d="m554.05273 698.44495l5.234375 -13.59375l1.9375 0l5.5625 
13.59375l-2.046875 0l-1.59375 -4.125l-5.6875 0l-1.484375 4.125l-1.921875 0zm3.921875 -5.578125l4.609375 0l-1.40625 -3.78125q-0.65625 -1.703125 -0.96875 -2.8125q-0.265625 1.3125 -0.734375 2.59375l-1.5 4.0zm10.0217285 5.578125l0 -13.59375l5.125 0q1.359375 0 2.078125 0.125q1.0 0.171875 1.671875 0.640625q0.671875 0.46875 1.078125 1.3125q0.421875 0.84375 0.421875 1.84375q0 1.734375 -1.109375 2.9375q-1.09375 1.203125 -3.984375 1.203125l-3.484375 0l0 5.53125l-1.796875 0zm1.796875 -7.140625l3.515625 0q1.75 0 2.46875 -0.640625q0.734375 -0.65625 0.734375 -1.828125q0 -0.859375 -0.4375 -1.46875q-0.421875 -0.609375 -1.125 -0.796875q-0.453125 -0.125 -1.671875 -0.125l-3.484375 0l0 4.859375zm10.9435425 7.140625l0 -13.59375l1.8125 0l0 13.59375l-1.8125 0zm8.601013 0.234375l3.9375 -14.0625l1.34375 0l-3.9375 14.0625l-1.34375 0zm11.585327 -0.234375l0 -13.59375l1.671875 0l0 13.59375l-1.671875 0zm3.5510864 -4.921875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm8.985107 5.734375l1.609375 0.25q0.109375 0.75 0.578125 1.09375q0.609375 0.453125 1.6875 0.453125q1.171875 0 1.796875 -0.46875q0.625 -0.453125 0.859375 -1.28125q0.125 -0.515625 0.109375 -2.15625q-1.09375 1.296875 -2.71875 1.296875q-2.03125 0 -3.15625 -1.46875q-1.109375 -1.46875 -1.109375 -3.515625q0 -1.40625 0.515625 -2.59375q0.515625 -1.203125 1.484375 -1.84375q0.96875 -0.65625 2.265625 -0.65625q1.75 0 2.875 1.40625l0 -1.1875l1.546875 0l0 8.515625q0 2.3125 -0.46875 3.265625q-0.46875 0.96875 -1.484375 1.515625q-1.015625 0.5625 -2.5 0.5625q-1.765625 
0 -2.859375 -0.796875q-1.078125 -0.796875 -1.03125 -2.390625zm1.375 -5.921875q0 1.953125 0.765625 2.84375q0.78125 0.890625 1.9375 0.890625q1.140625 0 1.921875 -0.890625q0.78125 -0.890625 0.78125 -2.78125q0 -1.8125 -0.8125 -2.71875q-0.796875 -0.921875 -1.921875 -0.921875q-1.109375 0 -1.890625 0.90625q-0.78125 0.890625 -0.78125 2.671875zm9.313171 -6.578125l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm4.1292114 0l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m603.2966 782.2992l2.3937378 43.653564" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m603.2966 782.2992l2.0652466 37.662598" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m603.7125 820.0522l1.8977051 4.440857l1.4008179 -4.621704z" fill-rule="evenodd"></path><path fill="#bf9000" d="m512.5171 813.52496l114.74011 -60.960632l114.74017 60.960632l-114.74017 60.96057z" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m512.5171 813.52496l114.74011 -60.960632l114.74017 60.960632l-114.74017 60.96057z" fill-rule="nonzero"></path><path fill="#000000" d="m605.663 816.06995l1.6875 -0.140625q0.125 1.015625 0.5625 1.671875q0.4375 0.65625 1.359375 1.0625q0.9375 0.40625 2.09375 0.40625q1.03125 0 1.8125 -0.3125q0.796875 -0.3125 1.1875 -0.84375q0.390625 -0.53125 0.390625 -1.15625q0 -0.640625 -0.375 -1.109375q-0.375 -0.484375 -1.234375 -0.8125q-0.546875 -0.21875 
-2.421875 -0.65625q-1.875 -0.453125 -2.625 -0.859375q-0.96875 -0.515625 -1.453125 -1.265625q-0.46875 -0.75 -0.46875 -1.6875q0 -1.03125 0.578125 -1.921875q0.59375 -0.90625 1.703125 -1.359375q1.125 -0.46875 2.5 -0.46875q1.515625 0 2.671875 0.484375q1.15625 0.484375 1.765625 1.4375q0.625 0.9375 0.671875 2.140625l-1.71875 0.125q-0.140625 -1.28125 -0.953125 -1.9375q-0.796875 -0.671875 -2.359375 -0.671875q-1.625 0 -2.375 0.609375q-0.75 0.59375 -0.75 1.4375q0 0.734375 0.53125 1.203125q0.515625 0.46875 2.703125 0.96875q2.203125 0.5 3.015625 0.875q1.1875 0.546875 1.75 1.390625q0.578125 0.828125 0.578125 1.921875q0 1.09375 -0.625 2.0625q-0.625 0.953125 -1.796875 1.484375q-1.15625 0.53125 -2.609375 0.53125q-1.84375 0 -3.09375 -0.53125q-1.25 -0.546875 -1.96875 -1.625q-0.703125 -1.078125 -0.734375 -2.453125zm12.4436035 0l1.6875 -0.140625q0.125 1.015625 0.5625 1.671875q0.4375 0.65625 1.359375 1.0625q0.9375 0.40625 2.09375 0.40625q1.03125 0 1.8125 -0.3125q0.796875 -0.3125 1.1875 -0.84375q0.390625 -0.53125 0.390625 -1.15625q0 -0.640625 -0.375 -1.109375q-0.375 -0.484375 -1.234375 -0.8125q-0.546875 -0.21875 -2.421875 -0.65625q-1.875 -0.453125 -2.625 -0.859375q-0.96875 -0.515625 -1.453125 -1.265625q-0.46875 -0.75 -0.46875 -1.6875q0 -1.03125 0.578125 -1.921875q0.59375 -0.90625 1.703125 -1.359375q1.125 -0.46875 2.5 -0.46875q1.515625 0 2.671875 0.484375q1.15625 0.484375 1.765625 1.4375q0.625 0.9375 0.671875 2.140625l-1.71875 0.125q-0.140625 -1.28125 -0.953125 -1.9375q-0.796875 -0.671875 -2.359375 -0.671875q-1.625 0 -2.375 0.609375q-0.75 0.59375 -0.75 1.4375q0 0.734375 0.53125 1.203125q0.515625 0.46875 2.703125 0.96875q2.203125 0.5 3.015625 0.875q1.1875 0.546875 1.75 1.390625q0.578125 0.828125 0.578125 1.921875q0 1.09375 -0.625 2.0625q-0.625 0.953125 -1.796875 1.484375q-1.15625 0.53125 -2.609375 0.53125q-1.84375 0 -3.09375 -0.53125q-1.25 -0.546875 -1.96875 -1.625q-0.703125 -1.078125 -0.734375 -2.453125zm12.5060425 -2.25q0 -3.390625 1.8125 -5.296875q1.828125 -1.921875 4.703125 
-1.921875q1.875 0 3.390625 0.90625q1.515625 0.890625 2.296875 2.5q0.796875 1.609375 0.796875 3.65625q0 2.0625 -0.84375 3.703125q-0.828125 1.625 -2.359375 2.46875q-1.53125 0.84375 -3.296875 0.84375q-1.921875 0 -3.4375 -0.921875q-1.5 -0.9375 -2.28125 -2.53125q-0.78125 -1.609375 -0.78125 -3.40625zm1.859375 0.03125q0 2.453125 1.3125 3.875q1.328125 1.40625 3.3125 1.40625q2.03125 0 3.34375 -1.421875q1.3125 -1.4375 1.3125 -4.0625q0 -1.65625 -0.5625 -2.890625q-0.546875 -1.234375 -1.640625 -1.921875q-1.078125 -0.6875 -2.421875 -0.6875q-1.90625 0 -3.28125 1.3125q-1.375 1.3125 -1.375 4.390625z" fill-rule="nonzero"></path><path fill="#f1c232" d="m677.6772 941.51184l179.27557 0l0 94.64563l-179.27557 0z" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m677.6772 941.51184l179.27557 0l0 94.64563l-179.27557 0z" fill-rule="nonzero"></path><path fill="#000000" d="m725.6051 990.4265l0 -1.609375l5.765625 0l0 5.046875q-1.328125 1.0625 -2.75 1.59375q-1.40625 0.53125 -2.890625 0.53125q-2.0 0 -3.640625 -0.859375q-1.625 -0.859375 -2.46875 -2.484375q-0.828125 -1.625 -0.828125 -3.625q0 -1.984375 0.828125 -3.703125q0.828125 -1.71875 2.390625 -2.546875q1.5625 -0.84375 3.59375 -0.84375q1.46875 0 2.65625 0.484375q1.203125 0.46875 1.875 1.328125q0.671875 0.84375 1.03125 2.21875l-1.625 0.4375q-0.3125 -1.03125 -0.765625 -1.625q-0.453125 -0.59375 -1.296875 -0.953125q-0.84375 -0.359375 -1.875 -0.359375q-1.234375 0 -2.140625 0.375q-0.890625 0.375 -1.453125 1.0q-0.546875 0.609375 -0.84375 1.34375q-0.53125 1.25 -0.53125 2.734375q0 1.8125 0.625 3.046875q0.640625 1.21875 1.828125 1.8125q1.203125 0.59375 2.546875 0.59375q1.171875 0 2.28125 -0.453125q1.109375 -0.453125 1.6875 -0.953125l0 -2.53125l-4.0 0zm7.9332886 5.328125l0 -9.859375l1.5 0l0 1.390625q0.453125 -0.71875 1.21875 -1.15625q0.78125 -0.453125 1.765625 -0.453125q1.09375 0 1.796875 0.453125q0.703125 0.453125 0.984375 1.28125q1.171875 -1.734375 3.046875 -1.734375q1.46875 0 2.25 
0.8125q0.796875 0.8125 0.796875 2.5l0 6.765625l-1.671875 0l0 -6.203125q0 -1.0 -0.15625 -1.4375q-0.15625 -0.453125 -0.59375 -0.71875q-0.421875 -0.265625 -1.0 -0.265625q-1.03125 0 -1.71875 0.6875q-0.6875 0.6875 -0.6875 2.21875l0 5.71875l-1.671875 0l0 -6.40625q0 -1.109375 -0.40625 -1.65625q-0.40625 -0.5625 -1.34375 -0.5625q-0.703125 0 -1.3125 0.375q-0.59375 0.359375 -0.859375 1.078125q-0.265625 0.71875 -0.265625 2.0625l0 5.109375l-1.671875 0zm21.978333 -1.21875q-0.9375 0.796875 -1.796875 1.125q-0.859375 0.3125 -1.84375 0.3125q-1.609375 0 -2.484375 -0.78125q-0.875 -0.796875 -0.875 -2.03125q0 -0.734375 0.328125 -1.328125q0.328125 -0.59375 0.859375 -0.953125q0.53125 -0.359375 1.203125 -0.546875q0.5 -0.140625 1.484375 -0.25q2.03125 -0.25 2.984375 -0.578125q0 -0.34375 0 -0.4375q0 -1.015625 -0.46875 -1.4375q-0.640625 -0.5625 -1.90625 -0.5625q-1.171875 0 -1.734375 0.40625q-0.5625 0.40625 -0.828125 1.46875l-1.640625 -0.234375q0.234375 -1.046875 0.734375 -1.6875q0.515625 -0.640625 1.46875 -0.984375q0.96875 -0.359375 2.25 -0.359375q1.265625 0 2.046875 0.296875q0.78125 0.296875 1.15625 0.75q0.375 0.453125 0.515625 1.140625q0.09375 0.421875 0.09375 1.53125l0 2.234375q0 2.328125 0.09375 2.953125q0.109375 0.609375 0.4375 1.171875l-1.75 0q-0.265625 -0.515625 -0.328125 -1.21875zm-0.140625 -3.71875q-0.90625 0.359375 -2.734375 0.625q-1.03125 0.140625 -1.453125 0.328125q-0.421875 0.1875 -0.65625 0.546875q-0.234375 0.359375 -0.234375 0.796875q0 0.671875 0.5 1.125q0.515625 0.4375 1.484375 0.4375q0.96875 0 1.71875 -0.421875q0.75 -0.4375 1.109375 -1.15625q0.265625 -0.578125 0.265625 -1.671875l0 -0.609375zm4.0944824 -6.75l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm4.0979004 0l0 -13.59375l1.671875 0l0 13.59375l-1.671875 0zm15.796875 -1.21875q-0.9375 0.796875 -1.796875 1.125q-0.859375 0.3125 -1.84375 0.3125q-1.609375 0 -2.484375 -0.78125q-0.875 -0.796875 -0.875 -2.03125q0 -0.734375 0.328125 -1.328125q0.328125 -0.59375 0.859375 
-0.953125q0.53125 -0.359375 1.203125 -0.546875q0.5 -0.140625 1.484375 -0.25q2.03125 -0.25 2.984375 -0.578125q0 -0.34375 0 -0.4375q0 -1.015625 -0.46875 -1.4375q-0.640625 -0.5625 -1.90625 -0.5625q-1.171875 0 -1.734375 0.40625q-0.5625 0.40625 -0.828125 1.46875l-1.640625 -0.234375q0.234375 -1.046875 0.734375 -1.6875q0.515625 -0.640625 1.46875 -0.984375q0.96875 -0.359375 2.25 -0.359375q1.265625 0 2.046875 0.296875q0.78125 0.296875 1.15625 0.75q0.375 0.453125 0.515625 1.140625q0.09375 0.421875 0.09375 1.53125l0 2.234375q0 2.328125 0.09375 2.953125q0.109375 0.609375 0.4375 1.171875l-1.75 0q-0.265625 -0.515625 -0.328125 -1.21875zm-0.140625 -3.71875q-0.90625 0.359375 -2.734375 0.625q-1.03125 0.140625 -1.453125 0.328125q-0.421875 0.1875 -0.65625 0.546875q-0.234375 0.359375 -0.234375 0.796875q0 0.671875 0.5 1.125q0.515625 0.4375 1.484375 0.4375q0.96875 0 1.71875 -0.421875q0.75 -0.4375 1.109375 -1.15625q0.265625 -0.578125 0.265625 -1.671875l0 -0.609375zm10.531982 4.9375l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm7.5788574 -1.5l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm1.5270386 1.5l0 -13.59375l1.671875 0l0 4.875q1.171875 -1.359375 2.953125 -1.359375q1.09375 0 1.890625 0.4375q0.8125 0.421875 1.15625 1.1875q0.359375 0.765625 0.359375 2.203125l0 6.25l-1.671875 
0l0 -6.25q0 -1.25 -0.546875 -1.8125q-0.546875 -0.578125 -1.53125 -0.578125q-0.75 0 -1.40625 0.390625q-0.640625 0.375 -0.921875 1.046875q-0.28125 0.65625 -0.28125 1.8125l0 5.390625l-1.671875 0z" fill-rule="nonzero"></path><path fill="#ffd966" d="m400.60892 941.51184l179.2756 0l0 94.64563l-179.2756 0z" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m400.60892 941.51184l179.2756 0l0 94.64563l-179.2756 0z" fill-rule="nonzero"></path><path fill="#000000" d="m422.49536 995.75464l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm3.250702 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 
-0.796875q-1.0 -0.796875 -1.28125 -2.359375zm16.75 -0.234375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094482 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm6.228302 0l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0zm16.813202 -1.21875q-0.9375 0.796875 -1.796875 1.125q-0.859375 0.3125 -1.84375 0.3125q-1.609375 0 -2.484375 -0.78125q-0.875 -0.796875 -0.875 -2.03125q0 -0.734375 0.328125 -1.328125q0.328125 -0.59375 0.859375 -0.953125q0.53125 -0.359375 1.203125 -0.546875q0.5 -0.140625 1.484375 -0.25q2.03125 -0.25 2.984375 -0.578125q0 -0.34375 0 -0.4375q0 -1.015625 -0.46875 -1.4375q-0.640625 -0.5625 -1.90625 -0.5625q-1.171875 0 -1.734375 0.40625q-0.5625 0.40625 -0.828125 1.46875l-1.640625 -0.234375q0.234375 -1.046875 0.734375 -1.6875q0.515625 -0.640625 1.46875 -0.984375q0.96875 -0.359375 2.25 -0.359375q1.265625 0 
2.046875 0.296875q0.78125 0.296875 1.15625 0.75q0.375 0.453125 0.515625 1.140625q0.09375 0.421875 0.09375 1.53125l0 2.234375q0 2.328125 0.09375 2.953125q0.109375 0.609375 0.4375 1.171875l-1.75 0q-0.265625 -0.515625 -0.328125 -1.21875zm-0.140625 -3.71875q-0.90625 0.359375 -2.734375 0.625q-1.03125 0.140625 -1.453125 0.328125q-0.421875 0.1875 -0.65625 0.546875q-0.234375 0.359375 -0.234375 0.796875q0 0.671875 0.5 1.125q0.515625 0.4375 1.484375 0.4375q0.96875 0 1.71875 -0.421875q0.75 -0.4375 1.109375 -1.15625q0.265625 -0.578125 0.265625 -1.671875l0 -0.609375zm4.0788574 4.9375l0 -9.859375l1.5 0l0 1.390625q0.453125 -0.71875 1.21875 -1.15625q0.78125 -0.453125 1.765625 -0.453125q1.09375 0 1.796875 0.453125q0.703125 0.453125 0.984375 1.28125q1.171875 -1.734375 3.046875 -1.734375q1.46875 0 2.25 0.8125q0.796875 0.8125 0.796875 2.5l0 6.765625l-1.671875 0l0 -6.203125q0 -1.0 -0.15625 -1.4375q-0.15625 -0.453125 -0.59375 -0.71875q-0.421875 -0.265625 -1.0 -0.265625q-1.03125 0 -1.71875 0.6875q-0.6875 0.6875 -0.6875 2.21875l0 5.71875l-1.671875 0l0 -6.40625q0 -1.109375 -0.40625 -1.65625q-0.40625 -0.5625 -1.34375 -0.5625q-0.703125 0 -1.3125 0.375q-0.59375 0.359375 -0.859375 1.078125q-0.265625 0.71875 -0.265625 2.0625l0 5.109375l-1.671875 0zm22.290802 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm13.043396 6.109375l3.9375 -14.0625l1.34375 0l-3.9375 14.0625l-1.34375 0zm11.616577 3.546875l0 -13.640625l1.53125 0l0 1.28125q0.53125 
-0.75 1.203125 -1.125q0.6875 -0.375 1.640625 -0.375q1.265625 0 2.234375 0.65625q0.96875 0.640625 1.453125 1.828125q0.5 1.1875 0.5 2.59375q0 1.515625 -0.546875 2.734375q-0.546875 1.203125 -1.578125 1.84375q-1.03125 0.640625 -2.171875 0.640625q-0.84375 0 -1.515625 -0.34375q-0.65625 -0.359375 -1.078125 -0.890625l0 4.796875l-1.671875 0zm1.515625 -8.65625q0 1.90625 0.765625 2.8125q0.78125 0.90625 1.875 0.90625q1.109375 0 1.890625 -0.9375q0.796875 -0.9375 0.796875 -2.921875q0 -1.875 -0.78125 -2.8125q-0.765625 -0.9375 -1.84375 -0.9375q-1.0625 0 -1.890625 1.0q-0.8125 1.0 -0.8125 2.890625zm8.188232 1.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm11.828125 2.9375l-3.015625 -9.859375l1.71875 0l1.5625 5.6875l0.59375 2.125q0.03125 -0.15625 0.5 -2.03125l1.578125 -5.78125l1.71875 0l1.46875 5.71875l0.484375 1.890625l0.578125 -1.90625l1.6875 -5.703125l1.625 0l-3.078125 9.859375l-1.734375 0l-1.578125 -5.90625l-0.375 -1.671875l-2.0 7.578125l-1.734375 0zm18.035461 0l0 -1.25q-0.9375 1.46875 -2.75 
1.46875q-1.171875 0 -2.171875 -0.640625q-0.984375 -0.65625 -1.53125 -1.8125q-0.53125 -1.171875 -0.53125 -2.6875q0 -1.46875 0.484375 -2.671875q0.5 -1.203125 1.46875 -1.84375q0.984375 -0.640625 2.203125 -0.640625q0.890625 0 1.578125 0.375q0.703125 0.375 1.140625 0.984375l0 -4.875l1.65625 0l0 13.59375l-1.546875 0zm-5.28125 -4.921875q0 1.890625 0.796875 2.828125q0.8125 0.9375 1.890625 0.9375q1.09375 0 1.859375 -0.890625q0.765625 -0.890625 0.765625 -2.734375q0 -2.015625 -0.78125 -2.953125q-0.78125 -0.953125 -1.921875 -0.953125q-1.109375 0 -1.859375 0.90625q-0.75 0.90625 -0.75 2.859375z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m627.2572 874.48553l0 33.513184l-137.00787 0l0 33.510498" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m627.2572 874.48553l0 33.513123l-137.00787 0l0 30.083435" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m490.24933 938.0821l-1.1245728 -1.1245728l1.1245728 3.0897827l1.1246033 -3.0897827z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m627.2572 874.48553l0 33.513184l140.06299 0l0 33.510498" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m627.2572 874.48553l0 33.513123l140.06299 0l0 30.083435" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m767.3202 938.0821l-1.1245728 -1.1245728l1.1245728 3.0897827l1.1245728 -3.0897827z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m733.7454 1068.1392l137.00787 0l0 48.0l-137.00787 0z" fill-rule="nonzero"></path><path fill="#000000" d="m742.7142 1095.0591l5.234375 -13.59375l1.9375 0l5.5625 13.59375l-2.046875 0l-1.59375 -4.125l-5.6875 0l-1.484375 4.125l-1.921875 0zm3.921875 -5.578125l4.609375 0l-1.40625 -3.78125q-0.65625 -1.703125 -0.96875 -2.8125q-0.265625 1.3125 -0.734375 
2.59375l-1.5 4.0zm16.256042 5.578125l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm7.5788574 -1.5l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm1.5270386 1.5l0 -13.59375l1.671875 0l0 4.875q1.171875 -1.359375 2.953125 -1.359375q1.09375 0 1.890625 0.4375q0.8125 0.421875 1.15625 1.1875q0.359375 0.765625 0.359375 2.203125l0 6.25l-1.671875 0l0 -6.25q0 -1.25 -0.546875 -1.8125q-0.546875 -0.578125 -1.53125 -0.578125q-0.75 0 -1.40625 0.390625q-0.640625 0.375 -0.921875 1.046875q-0.28125 0.65625 -0.28125 1.8125l0 5.390625l-1.671875 0zm19.215271 -1.5l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm0.9020386 -3.421875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 
2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm9.297607 4.921875l0 -13.59375l1.671875 0l0 7.75l3.953125 -4.015625l2.15625 0l-3.765625 3.65625l4.140625 6.203125l-2.0625 0l-3.25 -5.03125l-1.171875 1.125l0 3.90625l-1.671875 0zm16.0625 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.110107 5.875l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m1027.8976 907.0079l229.48035 0l0 94.64569l-229.48035 0z" fill-rule="nonzero"></path><path fill="#000000" d="m1038.3976 933.92786l0 -13.59375l6.03125 0q1.8125 0 2.75 0.359375q0.953125 0.359375 1.515625 1.296875q0.5625 0.921875 0.5625 2.046875q0 1.453125 -0.9375 2.453125q-0.921875 0.984375 -2.890625 1.25q0.71875 0.34375 1.09375 0.671875q0.78125 0.734375 1.484375 1.8125l2.375 3.703125l-2.265625 0l-1.796875 -2.828125q-0.796875 -1.21875 -1.3125 -1.875q-0.5 -0.65625 -0.90625 
-0.90625q-0.40625 -0.265625 -0.8125 -0.359375q-0.3125 -0.078125 -1.015625 -0.078125l-2.078125 0l0 6.046875l-1.796875 0zm1.796875 -7.59375l3.859375 0q1.234375 0 1.921875 -0.25q0.703125 -0.265625 1.0625 -0.828125q0.375 -0.5625 0.375 -1.21875q0 -0.96875 -0.703125 -1.578125q-0.703125 -0.625 -2.21875 -0.625l-4.296875 0l0 4.5zm18.176147 4.421875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm15.500732 5.875l0 -1.25q-0.9375 1.46875 -2.75 1.46875q-1.171875 0 -2.171875 -0.640625q-0.984375 -0.65625 -1.53125 -1.8125q-0.53125 -1.171875 -0.53125 -2.6875q0 -1.46875 0.484375 -2.671875q0.5 -1.203125 1.46875 -1.84375q0.984375 -0.640625 2.203125 -0.640625q0.890625 0 1.578125 0.375q0.703125 0.375 1.140625 0.984375l0 -4.875l1.65625 0l0 13.59375l-1.546875 0zm-5.28125 -4.921875q0 1.890625 0.796875 2.828125q0.8125 0.9375 1.890625 0.9375q1.09375 0 1.859375 -0.890625q0.765625 -0.890625 0.765625 -2.734375q0 -2.015625 -0.78125 -2.953125q-0.78125 -0.953125 -1.921875 -0.953125q-1.109375 0 -1.859375 0.90625q-0.75 0.90625 -0.75 2.859375zm9.281982 -6.765625l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm4.1135254 0l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 
1.953125l0 5.15625l-1.671875 0zm12.9782715 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm15.547607 2.265625l1.640625 0.21875q-0.265625 1.6875 -1.375 2.65625q-1.109375 0.953125 -2.734375 0.953125q-2.015625 0 -3.25 -1.3125q-1.21875 -1.328125 -1.21875 -3.796875q0 -1.59375 0.515625 -2.78125q0.53125 -1.203125 1.609375 -1.796875q1.09375 -0.609375 2.359375 -0.609375q1.609375 0 2.625 0.8125q1.015625 0.8125 1.3125 2.3125l-1.625 0.25q-0.234375 -1.0 -0.828125 -1.5q-0.59375 -0.5 -1.421875 -0.5q-1.265625 0 -2.0625 0.90625q-0.78125 0.90625 -0.78125 2.859375q0 1.984375 0.765625 2.890625q0.765625 0.890625 1.984375 0.890625q0.984375 0 1.640625 -0.59375q0.65625 -0.609375 0.84375 -1.859375zm6.546875 2.109375l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm10.366577 0l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 
0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm0.9020996 -3.421875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm13.18396 4.921875l5.234375 -13.59375l1.9375 0l5.5625 13.59375l-2.046875 0l-1.59375 -4.125l-5.6875 0l-1.484375 4.125l-1.921875 0zm3.921875 -5.578125l4.609375 0l-1.40625 -3.78125q-0.65625 -1.703125 -0.96875 -2.8125q-0.265625 1.3125 -0.734375 2.59375l-1.5 4.0zm10.0217285 5.578125l0 -13.59375l5.125 0q1.359375 0 2.078125 0.125q1.0 0.171875 1.671875 0.640625q0.671875 0.46875 1.078125 1.3125q0.421875 0.84375 0.421875 1.84375q0 1.734375 -1.109375 2.9375q-1.09375 1.203125 -3.984375 1.203125l-3.484375 0l0 5.53125l-1.796875 0zm1.796875 -7.140625l3.515625 0q1.75 0 2.46875 -0.640625q0.734375 -0.65625 0.734375 -1.828125q0 -0.859375 -0.4375 -1.46875q-0.421875 -0.609375 -1.125 -0.796875q-0.453125 -0.125 -1.671875 -0.125l-3.484375 0l0 4.859375zm10.9436035 7.140625l0 -13.59375l1.8125 0l0 13.59375l-1.8125 0z" fill-rule="nonzero"></path><path fill="#000000" d="m1037.757 951.55286l1.6875 -0.140625q0.125 1.015625 0.5625 1.671875q0.4375 0.65625 1.359375 1.0625q0.9375 0.40625 2.09375 0.40625q1.03125 0 1.8125 -0.3125q0.796875 -0.3125 1.1875 -0.84375q0.390625 -0.53125 0.390625 -1.15625q0 -0.640625 -0.375 -1.109375q-0.375 -0.484375 -1.234375 -0.8125q-0.546875 -0.21875 -2.421875 -0.65625q-1.875 -0.453125 -2.625 -0.859375q-0.96875 -0.515625 -1.453125 -1.265625q-0.46875 -0.75 -0.46875 -1.6875q0 -1.03125 0.578125 -1.921875q0.59375 -0.90625 1.703125 -1.359375q1.125 -0.46875 2.5 
-0.46875q1.515625 0 2.671875 0.484375q1.15625 0.484375 1.765625 1.4375q0.625 0.9375 0.671875 2.140625l-1.71875 0.125q-0.140625 -1.28125 -0.953125 -1.9375q-0.796875 -0.671875 -2.359375 -0.671875q-1.625 0 -2.375 0.609375q-0.75 0.59375 -0.75 1.4375q0 0.734375 0.53125 1.203125q0.515625 0.46875 2.703125 0.96875q2.203125 0.5 3.015625 0.875q1.1875 0.546875 1.75 1.390625q0.578125 0.828125 0.578125 1.921875q0 1.09375 -0.625 2.0625q-0.625 0.953125 -1.796875 1.484375q-1.15625 0.53125 -2.609375 0.53125q-1.84375 0 -3.09375 -0.53125q-1.25 -0.546875 -1.96875 -1.625q-0.703125 -1.078125 -0.734375 -2.453125zm19.584229 1.203125l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094482 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm8.9626465 0l-3.75 -9.859375l1.765625 0l2.125 5.90625q0.34375 0.953125 0.625 1.984375q0.21875 -0.78125 0.625 -1.875l2.1875 -6.015625l1.71875 0l-3.734375 9.859375l-1.5625 0zm13.34375 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 
3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094482 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm18.423096 0l-3.015625 -9.859375l1.71875 0l1.5625 5.6875l0.59375 2.125q0.03125 -0.15625 0.5 -2.03125l1.578125 -5.78125l1.71875 0l1.46875 5.71875l0.484375 1.890625l0.578125 -1.90625l1.6875 -5.703125l1.625 0l-3.078125 9.859375l-1.734375 0l-1.578125 -5.90625l-0.375 -1.671875l-2.0 7.578125l-1.734375 0zm11.6604 -11.6875l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm7.7854004 -1.5l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm1.5270996 1.5l0 -13.59375l1.671875 0l0 4.875q1.171875 -1.359375 2.953125 -1.359375q1.09375 0 1.890625 0.4375q0.8125 0.421875 1.15625 1.1875q0.359375 0.765625 0.359375 2.203125l0 6.25l-1.671875 0l0 -6.25q0 -1.25 -0.546875 -1.8125q-0.546875 -0.578125 -1.53125 -0.578125q-0.75 0 -1.40625 0.390625q-0.640625 0.375 -0.921875 1.046875q-0.28125 0.65625 -0.28125 1.8125l0 5.390625l-1.671875 0z" fill-rule="nonzero"></path><path fill="#000000" d="m1037.757 973.55286l1.6875 -0.140625q0.125 1.015625 0.5625 
1.671875q0.4375 0.65625 1.359375 1.0625q0.9375 0.40625 2.09375 0.40625q1.03125 0 1.8125 -0.3125q0.796875 -0.3125 1.1875 -0.84375q0.390625 -0.53125 0.390625 -1.15625q0 -0.640625 -0.375 -1.109375q-0.375 -0.484375 -1.234375 -0.8125q-0.546875 -0.21875 -2.421875 -0.65625q-1.875 -0.453125 -2.625 -0.859375q-0.96875 -0.515625 -1.453125 -1.265625q-0.46875 -0.75 -0.46875 -1.6875q0 -1.03125 0.578125 -1.921875q0.59375 -0.90625 1.703125 -1.359375q1.125 -0.46875 2.5 -0.46875q1.515625 0 2.671875 0.484375q1.15625 0.484375 1.765625 1.4375q0.625 0.9375 0.671875 2.140625l-1.71875 0.125q-0.140625 -1.28125 -0.953125 -1.9375q-0.796875 -0.671875 -2.359375 -0.671875q-1.625 0 -2.375 0.609375q-0.75 0.59375 -0.75 1.4375q0 0.734375 0.53125 1.203125q0.515625 0.46875 2.703125 0.96875q2.203125 0.5 3.015625 0.875q1.1875 0.546875 1.75 1.390625q0.578125 0.828125 0.578125 1.921875q0 1.09375 -0.625 2.0625q-0.625 0.953125 -1.796875 1.484375q-1.15625 0.53125 -2.609375 0.53125q-1.84375 0 -3.09375 -0.53125q-1.25 -0.546875 -1.96875 -1.625q-0.703125 -1.078125 -0.734375 -2.453125zm12.4436035 0l1.6875 -0.140625q0.125 1.015625 0.5625 1.671875q0.4375 0.65625 1.359375 1.0625q0.9375 0.40625 2.09375 0.40625q1.03125 0 1.8125 -0.3125q0.796875 -0.3125 1.1875 -0.84375q0.390625 -0.53125 0.390625 -1.15625q0 -0.640625 -0.375 -1.109375q-0.375 -0.484375 -1.234375 -0.8125q-0.546875 -0.21875 -2.421875 -0.65625q-1.875 -0.453125 -2.625 -0.859375q-0.96875 -0.515625 -1.453125 -1.265625q-0.46875 -0.75 -0.46875 -1.6875q0 -1.03125 0.578125 -1.921875q0.59375 -0.90625 1.703125 -1.359375q1.125 -0.46875 2.5 -0.46875q1.515625 0 2.671875 0.484375q1.15625 0.484375 1.765625 1.4375q0.625 0.9375 0.671875 2.140625l-1.71875 0.125q-0.140625 -1.28125 -0.953125 -1.9375q-0.796875 -0.671875 -2.359375 -0.671875q-1.625 0 -2.375 0.609375q-0.75 0.59375 -0.75 1.4375q0 0.734375 0.53125 1.203125q0.515625 0.46875 2.703125 0.96875q2.203125 0.5 3.015625 0.875q1.1875 0.546875 1.75 1.390625q0.578125 0.828125 0.578125 1.921875q0 1.09375 -0.625 2.0625q-0.625 
0.953125 -1.796875 1.484375q-1.15625 0.53125 -2.609375 0.53125q-1.84375 0 -3.09375 -0.53125q-1.25 -0.546875 -1.96875 -1.625q-0.703125 -1.078125 -0.734375 -2.453125zm12.5061035 -2.25q0 -3.390625 1.8125 -5.296875q1.828125 -1.921875 4.703125 -1.921875q1.875 0 3.390625 0.90625q1.515625 0.890625 2.296875 2.5q0.796875 1.609375 0.796875 3.65625q0 2.0625 -0.84375 3.703125q-0.828125 1.625 -2.359375 2.46875q-1.53125 0.84375 -3.296875 0.84375q-1.921875 0 -3.4375 -0.921875q-1.5 -0.9375 -2.28125 -2.53125q-0.78125 -1.609375 -0.78125 -3.40625zm1.859375 0.03125q0 2.453125 1.3125 3.875q1.328125 1.40625 3.3125 1.40625q2.03125 0 3.34375 -1.421875q1.3125 -1.4375 1.3125 -4.0625q0 -1.65625 -0.5625 -2.890625q-0.546875 -1.234375 -1.640625 -1.921875q-1.078125 -0.6875 -2.421875 -0.6875q-1.90625 0 -3.28125 1.3125q-1.375 1.3125 -1.375 4.390625zm21.819702 5.09375l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm0.9020996 -3.421875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm9.297607 4.921875l0 -13.59375l1.671875 0l0 7.75l3.953125 -4.015625l2.15625 0l-3.765625 3.65625l4.140625 6.203125l-2.0625 0l-3.25 -5.03125l-1.171875 1.125l0 3.90625l-1.671875 0zm16.0625 -3.171875l1.71875 
0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.110107 5.875l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0z" fill-rule="nonzero"></path><path fill="#bf9000" d="m550.4829 1121.1864l156.3465 0l0 76.81885l-156.3465 0z" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m550.4829 1121.1864l156.3465 0l0 76.81885l-156.3465 0z" fill-rule="nonzero"></path><path fill="#000000" d="m571.6152 1166.5157l0 -13.59375l1.8125 0l0 13.59375l-1.8125 0zm11.058289 0l0 -1.25q-0.9375 1.46875 -2.75 1.46875q-1.171875 0 -2.171875 -0.640625q-0.984375 -0.65625 -1.53125 -1.8125q-0.53125 -1.171875 -0.53125 -2.6875q0 -1.46875 0.484375 -2.671875q0.5 -1.203125 1.46875 -1.84375q0.984375 -0.640625 2.203125 -0.640625q0.890625 0 1.578125 0.375q0.703125 0.375 1.140625 0.984375l0 -4.875l1.65625 0l0 13.59375l-1.546875 0zm-5.28125 -4.921875q0 1.890625 0.796875 2.828125q0.8125 0.9375 1.890625 0.9375q1.09375 0 1.859375 -0.890625q0.765625 -0.890625 0.765625 -2.734375q0 -2.015625 -0.78125 -2.953125q-0.78125 -0.953125 -1.921875 -0.953125q-1.109375 0 -1.859375 
0.90625q-0.75 0.90625 -0.75 2.859375zm16.016357 1.75l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.110107 5.875l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0zm14.031921 -1.5l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm1.5427246 -10.1875l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm4.5354004 0l0 -8.546875l-1.484375 0l0 -1.3125l1.484375 0l0 -1.046875q0 -0.984375 0.171875 -1.46875q0.234375 -0.65625 0.84375 -1.046875q0.609375 -0.40625 1.703125 -0.40625q0.703125 0 1.5625 0.15625l-0.25 1.46875q-0.515625 -0.09375 -0.984375 -0.09375q-0.765625 0 -1.078125 0.328125q-0.3125 0.3125 -0.3125 1.203125l0 0.90625l1.921875 0l0 1.3125l-1.921875 0l0 8.546875l-1.65625 0zm4.6989746 
3.796875l-0.171875 -1.5625q0.546875 0.140625 0.953125 0.140625q0.546875 0 0.875 -0.1875q0.34375 -0.1875 0.5625 -0.515625q0.15625 -0.25 0.5 -1.25q0.046875 -0.140625 0.15625 -0.40625l-3.734375 -9.875l1.796875 0l2.046875 5.71875q0.40625 1.078125 0.71875 2.28125q0.28125 -1.15625 0.6875 -2.25l2.09375 -5.75l1.671875 0l-3.75 10.03125q-0.59375 1.625 -0.9375 2.234375q-0.4375 0.828125 -1.015625 1.203125q-0.578125 0.390625 -1.375 0.390625q-0.484375 0 -1.078125 -0.203125zm21.042664 -3.796875l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm3.2507324 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm16.75 
-0.234375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094421 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m490.2467 1036.1575l0 42.51465l138.42523 0l0 42.52478" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m490.2467 1036.1575l0 42.51465l138.42523 0l0 39.097656" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m628.67194 1117.7698l-1.1246338 -1.1246338l1.1246338 3.0898438l1.1245728 -3.0898438z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m767.31494 1036.1575l0 42.51465l-138.64563 0l0 42.52478" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m767.31494 1036.1575l0 42.51465l-138.64563 0l0 39.097656" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m628.6693 1117.7698l-1.1246338 -1.1246338l1.1246338 3.0898438l1.1245728 -3.0898438z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" 
d="m623.2572 704.6588l4.0 47.905518" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m623.2572 704.6588l3.5007324 41.92633" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m625.11194 746.72253l2.0236206 4.3849487l1.2684326 -4.65979z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m628.6562 1198.0052l0 25.002075l385.45148 0l0 -553.4745l-312.66412 0" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m628.6562 1198.0052l0 25.002075l385.45148 0l0 -553.4745l-309.237 0" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m704.87067 669.53284l1.1245728 -1.1246338l-3.0897827 1.1246338l3.0897827 1.1245728z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m701.4305 651.92975l522.5573 3.0775146l0 -581.44293l-519.1407 0" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m701.4304 651.92975l522.5575 3.0775146l0 -581.44293l-515.71375 0" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m708.2742 73.56431l1.1246338 -1.124588l-3.0897827 1.124588l3.0897827 1.1245804z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m808.0315 611.3517l466.8661 0l0 43.653564l-466.8661 0z" fill-rule="nonzero"></path><path fill="#000000" d="m818.5315 638.2717l0 -13.59375l6.03125 0q1.8125 0 2.75 0.359375q0.953125 0.359375 1.515625 1.296875q0.5625 0.921875 0.5625 2.046875q0 1.453125 -0.9375 2.453125q-0.921875 0.984375 -2.890625 1.25q0.71875 0.34375 1.09375 0.671875q0.78125 0.734375 1.484375 1.8125l2.375 3.703125l-2.265625 0l-1.796875 -2.828125q-0.796875 -1.21875 -1.3125 -1.875q-0.5 -0.65625 -0.90625 -0.90625q-0.40625 -0.265625 -0.8125 -0.359375q-0.3125 -0.078125 
-1.015625 -0.078125l-2.078125 0l0 6.046875l-1.796875 0zm1.796875 -7.59375l3.859375 0q1.234375 0 1.921875 -0.25q0.703125 -0.265625 1.0625 -0.828125q0.375 -0.5625 0.375 -1.21875q0 -0.96875 -0.703125 -1.578125q-0.703125 -0.625 -2.21875 -0.625l-4.296875 0l0 4.5zm18.176086 4.421875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm15.500732 5.875l0 -1.25q-0.9375 1.46875 -2.75 1.46875q-1.171875 0 -2.171875 -0.640625q-0.984375 -0.65625 -1.53125 -1.8125q-0.53125 -1.171875 -0.53125 -2.6875q0 -1.46875 0.484375 -2.671875q0.5 -1.203125 1.46875 -1.84375q0.984375 -0.640625 2.203125 -0.640625q0.890625 0 1.578125 0.375q0.703125 0.375 1.140625 0.984375l0 -4.875l1.65625 0l0 13.59375l-1.546875 0zm-5.28125 -4.921875q0 1.890625 0.796875 2.828125q0.8125 0.9375 1.890625 0.9375q1.09375 0 1.859375 -0.890625q0.765625 -0.890625 0.765625 -2.734375q0 -2.015625 -0.78125 -2.953125q-0.78125 -0.953125 -1.921875 -0.953125q-1.109375 0 -1.859375 0.90625q-0.75 0.90625 -0.75 2.859375zm9.281921 -6.765625l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm4.1135864 0l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm12.9783325 -3.171875l1.71875 
0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm15.547546 2.265625l1.640625 0.21875q-0.265625 1.6875 -1.375 2.65625q-1.109375 0.953125 -2.734375 0.953125q-2.015625 0 -3.25 -1.3125q-1.21875 -1.328125 -1.21875 -3.796875q0 -1.59375 0.515625 -2.78125q0.53125 -1.203125 1.609375 -1.796875q1.09375 -0.609375 2.359375 -0.609375q1.609375 0 2.625 0.8125q1.015625 0.8125 1.3125 2.3125l-1.625 0.25q-0.234375 -1.0 -0.828125 -1.5q-0.59375 -0.5 -1.421875 -0.5q-1.265625 0 -2.0625 0.90625q-0.78125 0.90625 -0.78125 2.859375q0 1.984375 0.765625 2.890625q0.765625 0.890625 1.984375 0.890625q0.984375 0 1.640625 -0.59375q0.65625 -0.609375 0.84375 -1.859375zm6.546875 2.109375l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm10.366638 0l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 
-0.078125zm0.9020386 -3.421875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm13.215271 5.15625l3.9375 -14.0625l1.34375 0l-3.9375 14.0625l-1.34375 0zm8.261414 -0.234375l-3.015625 -9.859375l1.71875 0l1.5625 5.6875l0.59375 2.125q0.03125 -0.15625 0.5 -2.03125l1.578125 -5.78125l1.71875 0l1.46875 5.71875l0.484375 1.890625l0.578125 -1.90625l1.6875 -5.703125l1.625 0l-3.078125 9.859375l-1.734375 0l-1.578125 -5.90625l-0.375 -1.671875l-2.0 7.578125l-1.734375 0zm18.394836 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.078857 5.875l0 -13.59375l1.671875 0l0 13.59375l-1.671875 0zm10.613586 -3.609375l1.640625 0.21875q-0.265625 1.6875 -1.375 2.65625q-1.109375 0.953125 -2.734375 0.953125q-2.015625 0 -3.25 -1.3125q-1.21875 -1.328125 -1.21875 -3.796875q0 -1.59375 0.515625 -2.78125q0.53125 -1.203125 1.609375 -1.796875q1.09375 -0.609375 2.359375 -0.609375q1.609375 0 2.625 0.8125q1.015625 0.8125 1.3125 2.3125l-1.625 0.25q-0.234375 -1.0 
-0.828125 -1.5q-0.59375 -0.5 -1.421875 -0.5q-1.265625 0 -2.0625 0.90625q-0.78125 0.90625 -0.78125 2.859375q0 1.984375 0.765625 2.890625q0.765625 0.890625 1.984375 0.890625q0.984375 0 1.640625 -0.59375q0.65625 -0.609375 0.84375 -1.859375zm2.265625 -1.3125q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm9.281921 4.921875l0 -9.859375l1.5 0l0 1.390625q0.453125 -0.71875 1.21875 -1.15625q0.78125 -0.453125 1.765625 -0.453125q1.09375 0 1.796875 0.453125q0.703125 0.453125 0.984375 1.28125q1.171875 -1.734375 3.046875 -1.734375q1.46875 0 2.25 0.8125q0.796875 0.8125 0.796875 2.5l0 6.765625l-1.671875 0l0 -6.203125q0 -1.0 -0.15625 -1.4375q-0.15625 -0.453125 -0.59375 -0.71875q-0.421875 -0.265625 -1.0 -0.265625q-1.03125 0 -1.71875 0.6875q-0.6875 0.6875 -0.6875 2.21875l0 5.71875l-1.671875 0l0 -6.40625q0 -1.109375 -0.40625 -1.65625q-0.40625 -0.5625 -1.34375 -0.5625q-0.703125 0 -1.3125 0.375q-0.59375 0.359375 -0.859375 1.078125q-0.265625 0.71875 -0.265625 2.0625l0 5.109375l-1.671875 0zm22.290833 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 
-1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm14.293396 9.65625l0 -13.640625l1.53125 0l0 1.28125q0.53125 -0.75 1.203125 -1.125q0.6875 -0.375 1.640625 -0.375q1.265625 0 2.234375 0.65625q0.96875 0.640625 1.453125 1.828125q0.5 1.1875 0.5 2.59375q0 1.515625 -0.546875 2.734375q-0.546875 1.203125 -1.578125 1.84375q-1.03125 0.640625 -2.171875 0.640625q-0.84375 0 -1.515625 -0.34375q-0.65625 -0.359375 -1.078125 -0.890625l0 4.796875l-1.671875 0zm1.515625 -8.65625q0 1.90625 0.765625 2.8125q0.78125 0.90625 1.875 0.90625q1.109375 0 1.890625 -0.9375q0.796875 -0.9375 0.796875 -2.921875q0 -1.875 -0.78125 -2.8125q-0.765625 -0.9375 -1.84375 -0.9375q-1.0625 0 -1.890625 1.0q-0.8125 1.0 -0.8125 2.890625zm15.297607 3.65625q-0.9375 0.796875 -1.796875 1.125q-0.859375 0.3125 -1.84375 0.3125q-1.609375 0 -2.484375 -0.78125q-0.875 -0.796875 -0.875 -2.03125q0 -0.734375 0.328125 -1.328125q0.328125 -0.59375 0.859375 -0.953125q0.53125 -0.359375 1.203125 -0.546875q0.5 -0.140625 1.484375 -0.25q2.03125 -0.25 2.984375 -0.578125q0 -0.34375 0 -0.4375q0 -1.015625 -0.46875 -1.4375q-0.640625 -0.5625 -1.90625 -0.5625q-1.171875 0 -1.734375 0.40625q-0.5625 0.40625 -0.828125 1.46875l-1.640625 -0.234375q0.234375 -1.046875 0.734375 -1.6875q0.515625 -0.640625 1.46875 -0.984375q0.96875 -0.359375 2.25 -0.359375q1.265625 0 2.046875 0.296875q0.78125 0.296875 1.15625 0.75q0.375 0.453125 0.515625 1.140625q0.09375 0.421875 0.09375 1.53125l0 2.234375q0 2.328125 0.09375 2.953125q0.109375 0.609375 0.4375 1.171875l-1.75 0q-0.265625 -0.515625 -0.328125 -1.21875zm-0.140625 -3.71875q-0.90625 0.359375 -2.734375 0.625q-1.03125 0.140625 -1.453125 0.328125q-0.421875 0.1875 -0.65625 0.546875q-0.234375 0.359375 -0.234375 0.796875q0 0.671875 0.5 1.125q0.515625 0.4375 1.484375 0.4375q0.96875 0 1.71875 -0.421875q0.75 -0.4375 1.109375 -1.15625q0.265625 -0.578125 0.265625 -1.671875l0 -0.609375zm3.7819214 5.75l1.609375 0.25q0.109375 0.75 0.578125 1.09375q0.609375 
0.453125 1.6875 0.453125q1.171875 0 1.796875 -0.46875q0.62506104 -0.453125 0.85943604 -1.28125q0.125 -0.515625 0.109375 -2.15625q-1.093811 1.296875 -2.718811 1.296875q-2.03125 0 -3.15625 -1.46875q-1.109375 -1.46875 -1.109375 -3.515625q0 -1.40625 0.515625 -2.59375q0.515625 -1.203125 1.484375 -1.84375q0.96875 -0.65625 2.265625 -0.65625q1.75 0 2.875061 1.40625l0 -1.1875l1.546875 0l0 8.515625q0 2.3125 -0.46875 3.265625q-0.46875 0.96875 -1.484375 1.515625q-1.015686 0.5625 -2.500061 0.5625q-1.765625 0 -2.859375 -0.796875q-1.078125 -0.796875 -1.03125 -2.390625zm1.375 -5.921875q0 1.953125 0.765625 2.84375q0.78125 0.890625 1.9375 0.890625q1.140625 0 1.921936 -0.890625q0.78125 -0.890625 0.78125 -2.78125q0 -1.8125 -0.8125 -2.71875q-0.79693604 -0.921875 -1.921936 -0.921875q-1.109375 0 -1.890625 0.90625q-0.78125 0.890625 -0.78125 2.671875zm16.047668 1.9375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm16.12146 5.875l-3.015625 -9.859375l1.71875 0l1.5625 5.6875l0.59375 2.125q0.03125 -0.15625 0.5 -2.03125l1.578125 -5.78125l1.71875 0l1.46875 5.71875l0.484375 1.890625l0.578125 -1.90625l1.6875 -5.703125l1.625 0l-3.078125 9.859375l-1.734375 0l-1.578125 -5.90625l-0.375 -1.671875l-2.0 7.578125l-1.734375 0zm11.6604 -11.6875l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm7.7855225 -1.5l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 
-0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm1.5270996 1.5l0 -13.59375l1.671875 0l0 4.875q1.171875 -1.359375 2.953125 -1.359375q1.09375 0 1.890625 0.4375q0.8125 0.421875 1.15625 1.1875q0.359375 0.765625 0.359375 2.203125l0 6.25l-1.671875 0l0 -6.25q0 -1.25 -0.546875 -1.8125q-0.546875 -0.578125 -1.53125 -0.578125q-0.75 0 -1.40625 0.390625q-0.640625 0.375 -0.921875 1.046875q-0.28125 0.65625 -0.28125 1.8125l0 5.390625l-1.671875 0zm14.887085 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm16.75 -0.234375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 
1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm8.438232 2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm9.328125 0l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 
1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm10.015625 -8.75l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm3.5042725 -4.921875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm9.281982 4.921875l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0zm19.21521 -1.5l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 
-0.078125zm0.9020996 -3.421875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm9.297607 4.921875l0 -13.59375l1.671875 0l0 7.75l3.953125 -4.015625l2.15625 0l-3.765625 3.65625l4.140625 6.203125l-2.0625 0l-3.25 -5.03125l-1.171875 1.125l0 3.90625l-1.671875 0zm16.0625 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.110107 5.875l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m767.6221 103.74803l179.27557 0l0 43.65355l-179.27557 0z" fill-rule="nonzero"></path><path fill="#000000" d="m778.1221 130.66803l0 
-13.59375l6.03125 0q1.8125 0 2.75 0.359375q0.953125 0.359375 1.515625 1.296875q0.5625 0.921875 0.5625 2.046875q0 1.453125 -0.9375 2.453125q-0.921875 0.984375 -2.890625 1.25q0.71875 0.34375 1.09375 0.671875q0.78125 0.734375 1.484375 1.8125l2.375 3.703125l-2.265625 0l-1.796875 -2.828125q-0.796875 -1.21875 -1.3125 -1.875q-0.5 -0.65625 -0.90625 -0.90625q-0.40625 -0.265625 -0.8125 -0.359375q-0.3125 -0.078125 -1.015625 -0.078125l-2.078125 0l0 6.046875l-1.796875 0zm1.796875 -7.59375l3.859375 0q1.234375 0 1.921875 -0.25q0.703125 -0.265625 1.0625 -0.828125q0.375 -0.5625 0.375 -1.21875q0 -0.96875 -0.703125 -1.578125q-0.703125 -0.625 -2.21875 -0.625l-4.296875 0l0 4.5zm18.176025 4.421875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm8.438232 2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 
0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm9.375 -1.984375q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm15.735107 4.921875l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm3.9069824 0l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm12.6657715 -3.609375l1.640625 0.21875q-0.265625 1.6875 -1.375 2.65625q-1.109375 0.953125 -2.734375 0.953125q-2.015625 0 -3.25 -1.3125q-1.21875 -1.328125 -1.21875 -3.796875q0 -1.59375 0.515625 -2.78125q0.53125 -1.203125 1.609375 -1.796875q1.09375 -0.609375 2.359375 -0.609375q1.609375 0 2.625 0.8125q1.015625 0.8125 1.3125 
2.3125l-1.625 0.25q-0.234375 -1.0 -0.828125 -1.5q-0.59375 -0.5 -1.421875 -0.5q-1.265625 0 -2.0625 0.90625q-0.78125 0.90625 -0.78125 2.859375q0 1.984375 0.765625 2.890625q0.765625 0.890625 1.984375 0.890625q0.984375 0 1.640625 -0.59375q0.65625 -0.609375 0.84375 -1.859375zm9.640625 0.4375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm13.590271 2.015625l1.625 -0.21875q0.0625 1.546875 0.578125 2.125q0.53125 0.578125 1.4375 0.578125q0.6875 0 1.171875 -0.3125q0.5 -0.3125 0.671875 -0.84375q0.1875 -0.53125 0.1875 -1.703125l0 -9.359375l1.8125 0l0 9.265625q0 1.703125 -0.421875 2.640625q-0.40625 0.9375 -1.3125 1.4375q-0.890625 0.484375 -2.09375 0.484375q-1.796875 0 -2.75 -1.03125q-0.9375 -1.03125 -0.90625 -3.0625zm9.640625 -0.515625l1.6875 -0.140625q0.125 1.015625 0.5625 1.671875q0.4375 0.65625 1.359375 1.0625q0.9375 0.40625 2.09375 0.40625q1.03125 0 1.8125 -0.3125q0.796875 -0.3125 1.1875 -0.84375q0.390625 -0.53125 0.390625 -1.15625q0 -0.640625 -0.375 -1.109375q-0.375 -0.484375 -1.234375 -0.8125q-0.546875 -0.21875 -2.421875 -0.65625q-1.875 -0.453125 -2.625 -0.859375q-0.96875 -0.515625 -1.453125 -1.265625q-0.46875 -0.75 -0.46875 -1.6875q0 -1.03125 0.578125 -1.921875q0.59375 -0.90625 1.703125 -1.359375q1.125 -0.46875 2.5 -0.46875q1.515625 0 2.671875 0.484375q1.15625 0.484375 1.765625 1.4375q0.625 0.9375 0.671875 2.140625l-1.71875 0.125q-0.140625 -1.28125 -0.953125 -1.9375q-0.796875 -0.671875 -2.359375 -0.671875q-1.625 0 -2.375 
0.609375q-0.75 0.59375 -0.75 1.4375q0 0.734375 0.53125 1.203125q0.515625 0.46875 2.703125 0.96875q2.203125 0.5 3.015625 0.875q1.1875 0.546875 1.75 1.390625q0.578125 0.828125 0.578125 1.921875q0 1.09375 -0.625 2.0625q-0.625 0.953125 -1.796875 1.484375q-1.15625 0.53125 -2.609375 0.53125q-1.84375 0 -3.09375 -0.53125q-1.25 -0.546875 -1.96875 -1.625q-0.703125 -1.078125 -0.734375 -2.453125zm12.5061035 -2.25q0 -3.390625 1.8125 -5.296875q1.828125 -1.921875 4.703125 -1.921875q1.875 0 3.390625 0.90625q1.515625 0.890625 2.296875 2.5q0.796875 1.609375 0.796875 3.65625q0 2.0625 -0.84375 3.703125q-0.828125 1.625 -2.359375 2.46875q-1.53125 0.84375 -3.296875 0.84375q-1.921875 0 -3.4375 -0.921875q-1.5 -0.9375 -2.28125 -2.53125q-0.78125 -1.609375 -0.78125 -3.40625zm1.859375 0.03125q0 2.453125 1.3125 3.875q1.328125 1.40625 3.3125 1.40625q2.03125 0 3.34375 -1.421875q1.3125 -1.4375 1.3125 -4.0625q0 -1.65625 -0.5625 -2.890625q-0.546875 -1.234375 -1.640625 -1.921875q-1.078125 -0.6875 -2.421875 -0.6875q-1.90625 0 -3.28125 1.3125q-1.375 1.3125 -1.375 4.390625zm13.183289 6.59375l0 -13.59375l1.84375 0l7.140625 10.671875l0 -10.671875l1.71875 0l0 13.59375l-1.84375 0l-7.140625 -10.6875l0 10.6875l-1.71875 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m529.084 131.11548l-343.0866 -1.102356" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m529.084 131.11548l-337.08667 -1.0830841" fill-rule="evenodd"></path><path fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m192.00266 128.38068l-4.5433807 1.637146l4.5327606 1.6663055z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m258.7034 136.56955l156.34647 0l0 70.267715l-156.34647 0z" fill-rule="nonzero"></path><path fill="#000000" d="m269.17215 163.48955l0 -13.59375l5.125 0q1.359375 0 2.078125 0.125q1.0 0.171875 1.671875 0.640625q0.671875 0.46875 1.078125 1.3125q0.421875 0.84375 0.421875 1.84375q0 1.734375 
-1.109375 2.9375q-1.09375 1.203125 -3.984375 1.203125l-3.484375 0l0 5.53125l-1.796875 0zm1.796875 -7.140625l3.515625 0q1.75 0 2.46875 -0.640625q0.734375 -0.65625 0.734375 -1.828125q0 -0.859375 -0.4375 -1.46875q-0.421875 -0.609375 -1.125 -0.796875q-0.453125 -0.125 -1.671875 -0.125l-3.484375 0l0 4.859375zm16.865448 5.921875q-0.9375 0.796875 -1.796875 1.125q-0.859375 0.3125 -1.84375 0.3125q-1.609375 0 -2.484375 -0.78125q-0.875 -0.796875 -0.875 -2.03125q0 -0.734375 0.328125 -1.328125q0.328125 -0.59375 0.859375 -0.953125q0.53125 -0.359375 1.203125 -0.546875q0.5 -0.140625 1.484375 -0.25q2.03125 -0.25 2.984375 -0.578125q0 -0.34375 0 -0.4375q0 -1.015625 -0.46875 -1.4375q-0.640625 -0.5625 -1.90625 -0.5625q-1.171875 0 -1.734375 0.40625q-0.5625 0.40625 -0.828125 1.46875l-1.640625 -0.234375q0.234375 -1.046875 0.734375 -1.6875q0.515625 -0.640625 1.46875 -0.984375q0.96875 -0.359375 2.25 -0.359375q1.265625 0 2.046875 0.296875q0.78125 0.296875 1.15625 0.75q0.375 0.453125 0.515625 1.140625q0.09375 0.421875 0.09375 1.53125l0 2.234375q0 2.328125 0.09375 2.953125q0.109375 0.609375 0.4375 1.171875l-1.75 0q-0.265625 -0.515625 -0.328125 -1.21875zm-0.140625 -3.71875q-0.90625 0.359375 -2.734375 0.625q-1.03125 0.140625 -1.453125 0.328125q-0.421875 0.1875 -0.65625 0.546875q-0.234375 0.359375 -0.234375 0.796875q0 0.671875 0.5 1.125q0.515625 0.4375 1.484375 0.4375q0.96875 0 1.71875 -0.421875q0.75 -0.4375 1.109375 -1.15625q0.265625 -0.578125 0.265625 -1.671875l0 -0.609375zm4.0632324 4.9375l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm5.556427 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 
-1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm16.75 -0.234375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm13.012146 5.875l5.234375 -13.59375l1.9375 0l5.5625 13.59375l-2.046875 0l-1.59375 -4.125l-5.6875 0l-1.484375 4.125l-1.921875 0zm3.921875 -5.578125l4.609375 0l-1.40625 -3.78125q-0.65625 -1.703125 -0.96875 -2.8125q-0.265625 1.3125 -0.734375 2.59375l-1.5 4.0zm10.021698 5.578125l0 -13.59375l5.125 0q1.359375 0 2.078125 0.125q1.0 0.171875 1.671875 0.640625q0.671875 0.46875 1.078125 1.3125q0.421875 0.84375 0.421875 1.84375q0 1.734375 -1.109375 2.9375q-1.09375 1.203125 -3.984375 1.203125l-3.484375 0l0 5.53125l-1.796875 
0zm1.796875 -7.140625l3.515625 0q1.75 0 2.46875 -0.640625q0.734375 -0.65625 0.734375 -1.828125q0 -0.859375 -0.4375 -1.46875q-0.421875 -0.609375 -1.125 -0.796875q-0.453125 -0.125 -1.671875 -0.125l-3.484375 0l0 4.859375zm10.943573 7.140625l0 -13.59375l1.8125 0l0 13.59375l-1.8125 0zm9.835358 0l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm12.978302 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm8.438202 2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 
0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm10.0 6.71875l0 -13.640625l1.53125 0l0 1.28125q0.53125 -0.75 1.203125 -1.125q0.6875 -0.375 1.640625 -0.375q1.265625 0 2.234375 0.65625q0.96875 0.640625 1.453125 1.828125q0.5 1.1875 0.5 2.59375q0 1.515625 -0.546875 2.734375q-0.546875 1.203125 -1.578125 1.84375q-1.03125 0.640625 -2.171875 0.640625q-0.84375 0 -1.515625 -0.34375q-0.65625 -0.359375 -1.078125 -0.890625l0 4.796875l-1.671875 0zm1.515625 -8.65625q0 1.90625 0.765625 2.8125q0.78125 0.90625 1.875 0.90625q1.109375 0 1.890625 -0.9375q0.796875 -0.9375 0.796875 -2.921875q0 -1.875 -0.78125 -2.8125q-0.765625 -0.9375 -1.84375 -0.9375q-1.0625 0 -1.890625 1.0q-0.8125 1.0 -0.8125 2.890625z" fill-rule="nonzero"></path><path fill="#000000" d="m276.73465 183.88017q-0.828125 0.921875 -1.8125 1.390625q-0.96875 0.453125 -2.09375 0.453125q-2.09375 0 -3.3125 -1.40625q-1.0 -1.15625 -1.0 -2.578125q0 -1.265625 0.8125 -2.28125q0.8125 -1.015625 2.421875 -1.78125q-0.90625 -1.0625 -1.21875 -1.71875q-0.296875 -0.65625 -0.296875 -1.265625q0 -1.234375 0.953125 -2.125q0.953125 -0.90625 2.421875 -0.90625q1.390625 0 2.265625 0.859375q0.890625 0.84375 0.890625 2.046875q0 1.9375 -2.5625 3.3125l2.4375 3.09375q0.421875 -0.8125 0.640625 -1.890625l1.734375 0.375q-0.4375 1.78125 -1.203125 2.9375q0.9375 1.234375 2.125 2.078125l-1.125 1.328125q-1.0 -0.640625 -2.078125 -1.921875zm-3.40625 -7.078125q1.09375 -0.640625 1.40625 -1.125q0.328125 -0.484375 0.328125 -1.0625q0 -0.703125 -0.453125 -1.140625q-0.4375 -0.4375 -1.09375 -0.4375q-0.671875 0 -1.125 0.4375q-0.453125 0.421875 -0.453125 1.0625q0 0.3125 0.15625 0.65625q0.171875 0.34375 0.5 0.734375l0.734375 0.875zm2.359375 5.765625l-3.0625 -3.796875q-1.359375 0.8125 -1.84375 1.5q-0.46875 0.6875 -0.46875 1.375q0 0.8125 
0.65625 1.703125q0.671875 0.890625 1.875 0.890625q0.75 0 1.546875 -0.46875q0.8125 -0.46875 1.296875 -1.203125zm17.283142 2.921875l0 -1.25q-0.9375 1.46875 -2.75 1.46875q-1.171875 0 -2.171875 -0.640625q-0.984375 -0.65625 -1.53125 -1.8125q-0.53125 -1.171875 -0.53125 -2.6875q0 -1.46875 0.484375 -2.671875q0.5 -1.203125 1.46875 -1.84375q0.984375 -0.640625 2.203125 -0.640625q0.890625 0 1.578125 0.375q0.703125 0.375 1.140625 0.984375l0 -4.875l1.65625 0l0 13.59375l-1.546875 0zm-5.28125 -4.921875q0 1.890625 0.796875 2.828125q0.8125 0.9375 1.890625 0.9375q1.09375 0 1.859375 -0.890625q0.765625 -0.890625 0.765625 -2.734375q0 -2.015625 -0.78125 -2.953125q-0.78125 -0.953125 -1.921875 -0.953125q-1.109375 0 -1.859375 0.90625q-0.75 0.90625 -0.75 2.859375zm9.281952 -6.765625l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm3.4573364 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm10.0 6.71875l0 -13.640625l1.53125 0l0 1.28125q0.53125 -0.75 
1.203125 -1.125q0.6875 -0.375 1.640625 -0.375q1.265625 0 2.234375 0.65625q0.96875 0.640625 1.453125 1.828125q0.5 1.1875 0.5 2.59375q0 1.515625 -0.546875 2.734375q-0.546875 1.203125 -1.578125 1.84375q-1.03125 0.640625 -2.171875 0.640625q-0.84375 0 -1.515625 -0.34375q-0.65625 -0.359375 -1.078125 -0.890625l0 4.796875l-1.671875 0zm1.515625 -8.65625q0 1.90625 0.765625 2.8125q0.78125 0.90625 1.875 0.90625q1.109375 0 1.890625 -0.9375q0.796875 -0.9375 0.796875 -2.921875q0 -1.875 -0.78125 -2.8125q-0.765625 -0.9375 -1.84375 -0.9375q-1.0625 0 -1.890625 1.0q-0.8125 1.0 -0.8125 2.890625zm8.828827 4.875l0 -13.59375l1.671875 0l0 13.59375l-1.671875 0zm10.613586 -1.21875q-0.9375 0.796875 -1.796875 1.125q-0.859375 0.3125 -1.84375 0.3125q-1.609375 0 -2.484375 -0.78125q-0.875 -0.796875 -0.875 -2.03125q0 -0.734375 0.328125 -1.328125q0.328125 -0.59375 0.859375 -0.953125q0.53125 -0.359375 1.203125 -0.546875q0.5 -0.140625 1.484375 -0.25q2.03125 -0.25 2.984375 -0.578125q0 -0.34375 0 -0.4375q0 -1.015625 -0.46875 -1.4375q-0.640625 -0.5625 -1.90625 -0.5625q-1.171875 0 -1.734375 0.40625q-0.5625 0.40625 -0.828125 1.46875l-1.640625 -0.234375q0.234375 -1.046875 0.734375 -1.6875q0.515625 -0.640625 1.46875 -0.984375q0.96875 -0.359375 2.25 -0.359375q1.265625 0 2.046875 0.296875q0.78125 0.296875 1.15625 0.75q0.375 0.453125 0.515625 1.140625q0.09375 0.421875 0.09375 1.53125l0 2.234375q0 2.328125 0.09375 2.953125q0.109375 0.609375 0.4375 1.171875l-1.75 0q-0.265625 -0.515625 -0.328125 -1.21875zm-0.140625 -3.71875q-0.90625 0.359375 -2.734375 0.625q-1.03125 0.140625 -1.453125 0.328125q-0.421875 0.1875 -0.65625 0.546875q-0.234375 0.359375 -0.234375 0.796875q0 0.671875 0.5 1.125q0.515625 0.4375 1.484375 0.4375q0.96875 0 1.71875 -0.421875q0.75 -0.4375 1.109375 -1.15625q0.265625 -0.578125 0.265625 -1.671875l0 -0.609375zm4.000702 8.734375l-0.171875 -1.5625q0.546875 0.140625 0.953125 0.140625q0.546875 0 0.875 -0.1875q0.34375 -0.1875 0.5625 -0.515625q0.15625 -0.25 0.5 -1.25q0.046875 -0.140625 0.15625 
-0.40625l-3.734375 -9.875l1.796875 0l2.046875 5.71875q0.40625 1.078125 0.71875 2.28125q0.28125 -1.15625 0.6875 -2.25l2.09375 -5.75l1.671875 0l-3.75 10.03125q-0.59375 1.625 -0.9375 2.234375q-0.4375 0.828125 -1.015625 1.203125q-0.578125 0.390625 -1.375 0.390625q-0.484375 0 -1.078125 -0.203125zm14.589569 -0.015625l0 -13.640625l1.53125 0l0 1.28125q0.53125 -0.75 1.203125 -1.125q0.6875 -0.375 1.640625 -0.375q1.265625 0 2.234375 0.65625q0.96875 0.640625 1.453125 1.828125q0.5 1.1875 0.5 2.59375q0 1.515625 -0.546875 2.734375q-0.546875 1.203125 -1.578125 1.84375q-1.03125 0.640625 -2.171875 0.640625q-0.84375 0 -1.515625 -0.34375q-0.65625 -0.359375 -1.078125 -0.890625l0 4.796875l-1.671875 0zm1.515625 -8.65625q0 1.90625 0.765625 2.8125q0.78125 0.90625 1.875 0.90625q1.109375 0 1.890625 -0.9375q0.796875 -0.9375 0.796875 -2.921875q0 -1.875 -0.78125 -2.8125q-0.765625 -0.9375 -1.84375 -0.9375q-1.0625 0 -1.890625 1.0q-0.8125 1.0 -0.8125 2.890625zm15.297577 3.65625q-0.9375 0.796875 -1.796875 1.125q-0.859375 0.3125 -1.84375 0.3125q-1.609375 0 -2.484375 -0.78125q-0.875 -0.796875 -0.875 -2.03125q0 -0.734375 0.328125 -1.328125q0.328125 -0.59375 0.859375 -0.953125q0.53125 -0.359375 1.203125 -0.546875q0.5 -0.140625 1.484375 -0.25q2.03125 -0.25 2.984375 -0.578125q0 -0.34375 0 -0.4375q0 -1.015625 -0.46875 -1.4375q-0.640625 -0.5625 -1.90625 -0.5625q-1.171875 0 -1.734375 0.40625q-0.5625 0.40625 -0.828125 1.46875l-1.640625 -0.234375q0.234375 -1.046875 0.734375 -1.6875q0.515625 -0.640625 1.46875 -0.984375q0.96875 -0.359375 2.25 -0.359375q1.265625 0 2.046875 0.296875q0.78125 0.296875 1.15625 0.75q0.375 0.453125 0.515625 1.140625q0.09375 0.421875 0.09375 1.53125l0 2.234375q0 2.328125 0.09375 2.953125q0.109375 0.609375 0.4375 1.171875l-1.75 0q-0.265625 -0.515625 -0.328125 -1.21875zm-0.140625 -3.71875q-0.90625 0.359375 -2.734375 0.625q-1.03125 0.140625 -1.453125 0.328125q-0.421875 0.1875 -0.65625 0.546875q-0.234375 0.359375 -0.234375 0.796875q0 0.671875 0.5 1.125q0.515625 0.4375 1.484375 
0.4375q0.96875 0 1.71875 -0.421875q0.75 -0.4375 1.109375 -1.15625q0.265625 -0.578125 0.265625 -1.671875l0 -0.609375zm3.7819824 5.75l1.609375 0.25q0.109375 0.75 0.578125 1.09375q0.609375 0.453125 1.6875 0.453125q1.171875 0 1.796875 -0.46875q0.625 -0.453125 0.859375 -1.28125q0.125 -0.515625 0.109375 -2.15625q-1.09375 1.296875 -2.71875 1.296875q-2.03125 0 -3.15625 -1.46875q-1.109375 -1.46875 -1.109375 -3.515625q0 -1.40625 0.515625 -2.59375q0.515625 -1.203125 1.484375 -1.84375q0.96875 -0.65625 2.265625 -0.65625q1.75 0 2.875 1.40625l0 -1.1875l1.546875 0l0 8.515625q0 2.3125 -0.46875 3.265625q-0.46875 0.96875 -1.484375 1.515625q-1.015625 0.5625 -2.5 0.5625q-1.765625 0 -2.859375 -0.796875q-1.078125 -0.796875 -1.03125 -2.390625zm1.375 -5.921875q0 1.953125 0.765625 2.84375q0.78125 0.890625 1.9375 0.890625q1.140625 0 1.921875 -0.890625q0.78125 -0.890625 0.78125 -2.78125q0 -1.8125 -0.8125 -2.71875q-0.796875 -0.921875 -1.921875 -0.921875q-1.109375 0 -1.890625 0.90625q-0.78125 0.890625 -0.78125 2.671875zm16.047577 1.9375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875z" fill-rule="nonzero"></path><path fill="#ffffff" d="m94.25984 75.59843l0 0c0 -12.054596 10.597107 -21.826775 23.669289 -21.826775l0 0c13.072197 0 23.669289 9.772179 23.669289 21.826775l0 0c0 12.054588 -10.597092 21.826767 -23.669289 21.826767l0 0c-13.072182 0 -23.669289 -9.772179 -23.669289 -21.826767z" fill-rule="nonzero"></path><path stroke="#000000" 
stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m94.25984 75.59843l0 0c0 -12.054596 10.597107 -21.826775 23.669289 -21.826775l0 0c13.072197 0 23.669289 9.772179 23.669289 21.826775l0 0c0 12.054588 -10.597092 21.826767 -23.669289 21.826767l0 0c-13.072182 0 -23.669289 -9.772179 -23.669289 -21.826767z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m117.92913 97.42519l1.1653595 119.55906" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m117.92913 97.42519l1.1653595 119.55906" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m117.92913 128.50131l29.574806 42.48819" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m117.92913 128.50131l29.574806 42.48819" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m91.50131 170.50131l26.425194 -41.07086" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m91.50131 170.50131l26.425194 -41.07086" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m235.77428 40.0l179.27559 0l0 48.0l-179.27559 0z" fill-rule="nonzero"></path><path fill="#000000" d="m273.33563 65.59187l0 -1.609375l5.765625 0l0 5.046875q-1.328125 1.0625 -2.75 1.59375q-1.40625 0.53125 -2.890625 0.53125q-2.0 0 -3.640625 -0.859375q-1.625 -0.859375 -2.46875 -2.484375q-0.828125 -1.625 -0.828125 -3.625q0 -1.984375 0.828125 -3.703125q0.828125 -1.71875 2.390625 -2.546875q1.5625 -0.84375 3.59375 -0.84375q1.46875 0 2.65625 0.484375q1.203125 0.46875 1.875 1.328125q0.671875 0.84375 1.03125 2.21875l-1.625 0.4375q-0.3125 -1.03125 -0.765625 -1.625q-0.453125 -0.59375 -1.296875 -0.953125q-0.84375 -0.359375 -1.875 -0.359375q-1.234375 0 -2.140625 0.375q-0.890625 0.375 -1.453125 1.0q-0.546875 0.609375 -0.84375 1.34375q-0.53125 1.25 -0.53125 2.734375q0 1.8125 0.625 3.046875q0.640625 
1.21875 1.828125 1.8125q1.203125 0.59375 2.546875 0.59375q1.171875 0 2.28125 -0.453125q1.109375 -0.453125 1.6875 -0.953125l0 -2.53125l-4.0 0zm8.183289 5.328125l0 -13.59375l9.84375 0l0 1.59375l-8.046875 0l0 4.171875l7.53125 0l0 1.59375l-7.53125 0l0 4.625l8.359375 0l0 1.609375l-10.15625 0zm15.865448 0l0 -12.0l-4.46875 0l0 -1.59375l10.765625 0l0 1.59375l-4.5 0l0 12.0l-1.796875 0zm11.65741 0.234375l3.9375 -14.0625l1.34375 0l-3.9375 14.0625l-1.34375 0zm6.417694 -0.234375l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm12.978302 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm8.438202 2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 
-0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm9.375 -1.984375q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm15.735107 4.921875l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm3.906952 0l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm12.665802 -3.609375l1.640625 0.21875q-0.265625 1.6875 -1.375 2.65625q-1.109375 0.953125 -2.734375 0.953125q-2.015625 0 -3.25 -1.3125q-1.21875 -1.328125 -1.21875 -3.796875q0 -1.59375 0.515625 -2.78125q0.53125 
-1.203125 1.609375 -1.796875q1.09375 -0.609375 2.359375 -0.609375q1.609375 0 2.625 0.8125q1.015625 0.8125 1.3125 2.3125l-1.625 0.25q-0.234375 -1.0 -0.828125 -1.5q-0.59375 -0.5 -1.421875 -0.5q-1.265625 0 -2.0625 0.90625q-0.78125 0.90625 -0.78125 2.859375q0 1.984375 0.765625 2.890625q0.765625 0.890625 1.984375 0.890625q0.984375 0 1.640625 -0.59375q0.65625 -0.609375 0.84375 -1.859375zm9.640625 0.4375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m119.125984 215.50131l-38.58268 53.07086" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m119.125984 215.50131l-38.58268 53.07086" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m119.62467 215.50131l42.99212 58.992126" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m119.62467 215.50131l42.99212 58.992126" fill-rule="nonzero"></path></g></svg>
+
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-This page documents setting up and running the "Arvados on Kubernetes":/install/arvados-on-kubernetes.html @Helm@ chart on @Google Kubernetes Engine@ (GKE).
+This page documents setting up and running the "Arvados on Kubernetes":{{ site.baseurl }}/install/arvados-on-kubernetes.html @Helm@ chart on @Google Kubernetes Engine@ (GKE).
h2. Prerequisites
This can be done via the "cloud console":https://console.cloud.google.com/kubernetes/ or via the command line:
<pre>
-$ gcloud container clusters create <CLUSTERNAME> --zone us-central1-a --machine-type n1-standard-2 --cluster-version 1.15
+$ gcloud container clusters create <CLUSTERNAME> --zone us-central1-a --machine-type n1-standard-2
</pre>
It takes a few minutes for the cluster to be initialized.
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-This page documents setting up and running the "Arvados on Kubernetes":/install/arvados-on-kubernetes.html @Helm@ chart on @Minikube@.
+This page documents setting up and running the "Arvados on Kubernetes":{{ site.baseurl }}/install/arvados-on-kubernetes.html @Helm@ chart on @Minikube@.
h2. Prerequisites
* Minikube or Google Kubernetes Engine (Kubernetes 1.10+ with at least 3 nodes, 2+ cores per node)
* @kubectl@ and @Helm 3@ installed locally, and able to connect to your Kubernetes cluster
-Please refer to "Arvados on Minikube":/install/arvados-on-kubernetes-minikube.html or "Arvados on GKE":/install/arvados-on-kubernetes-GKE.html for detailed installation instructions.
+Please refer to "Arvados on Minikube":{{ site.baseurl }}/install/arvados-on-kubernetes-minikube.html or "Arvados on GKE":{{ site.baseurl }}/install/arvados-on-kubernetes-GKE.html for detailed installation instructions.
h2. Quick start
<pre>
-$ git clone https://github.com/arvados/arvados.git
-$ cd arvados/tools/arvbox/bin
-$ ./arvbox start localdemo
+$ curl -O https://git.arvados.org/arvados.git/blob_plain/refs/heads/main:/tools/arvbox/bin/arvbox
+$ chmod +x arvbox
+$ ./arvbox start localdemo latest
$ ./arvbox adduser demouser demo@example.com
</pre>
# If you are not using an IAM role for authentication,
# specify access credentials here instead.
- AccessKey: <span class="userinput">""</span>
- SecretKey: <span class="userinput">""</span>
+ AccessKeyID: <span class="userinput">""</span>
+ SecretAccessKey: <span class="userinput">""</span>
# Storage provider region. For Google Cloud Storage, use ""
# or omit.
- Region: <span class="userinput">us-east-1a</span>
+ Region: <span class="userinput">us-east-1</span>
# Storage provider endpoint. For Amazon S3, use "" or
# omit. For Google Cloud Storage, use
# Use the AWS S3 v2 Go driver instead of the goamz driver.
UseAWSS3v2Driver: false
+ # By default keepstore stores data using the MD5 checksum
+ # (32 hexadecimal characters) as the object name, e.g.,
+ # "0123456abc...". Setting PrefixLength to 3 changes this
+ # naming scheme to "012/0123456abc...". This can improve
+ # performance, depending on the S3 service being used. For
+ # example, PrefixLength 3 is recommended to avoid AWS
+ # limitations on the number of read/write operations per
+ # second per prefix (see
+ # https://aws.amazon.com/premiumsupport/knowledge-center/s3-request-limit-avoid-throttling/).
+ #
+ # Note that changing PrefixLength on an existing volume is
+ # not currently supported. Once you have started using a
+ # bucket as an Arvados volume, you should not change its
+ # configured PrefixLength, or configure another volume using
+ # the same bucket and a different PrefixLength.
+ PrefixLength: 0
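The prefixed naming scheme described in the comment above can be sketched in a few lines. This is illustrative only, not keepstore's actual code: the object name is the block's MD5 hash, and a non-zero @PrefixLength@ shards it under a short leading prefix.

```python
import hashlib

# Illustrative sketch of the object-naming scheme described above:
# the object name is the block's MD5 hash (32 hex characters), and a
# non-zero PrefixLength shards it under a short leading prefix.
def object_name(data: bytes, prefix_length: int = 0) -> str:
    md5 = hashlib.md5(data).hexdigest()
    if prefix_length > 0:
        return md5[:prefix_length] + "/" + md5
    return md5

print(object_name(b"hello"))     # 5d41402abc4b2a76b9719d911017c592
print(object_name(b"hello", 3))  # 5d4/5d41402abc4b2a76b9719d911017c592
```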
+
# Requested page size for "list bucket contents" requests.
IndexPageSize: 1000
--- /dev/null
+---
+layout: default
+navsection: installguide
+title: Configure container shell access
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+Arvados can be configured to permit shell access to running containers. This can be handy for debugging, but it could affect the reproducibility of workflows. This feature can be enabled for admin users, or for all users. By default, it is entirely disabled.
+
+The relevant configuration section is
+
+<notextile>
+<pre><code> Containers:
+ ShellAccess:
+ # An admin user can use "arvados-client shell" to start an
+ # interactive shell (with any user ID) in any running
+ # container.
+ Admin: false
+
+ # Any user can use "arvados-client shell" to start an
+ # interactive shell (with any user ID) in any running
+ # container that they started, provided it isn't also
+ # associated with a different user's container request.
+ #
+ # Interactive sessions make it easy to alter the container's
+ # runtime environment in ways that aren't recorded or
+ # reproducible. Consider the implications for automatic
+ # container reuse before enabling and using this feature. In
+ # particular, note that starting an interactive session does
+ # not disqualify a container from being reused by a different
+ # user/workflow in the future.
+ User: false
+</code></pre>
+</notextile>
+
+To enable this feature, a firewall change may also be required. This feature requires the opening of TCP connections from @arvados-controller@ to the range specified in the @net.ipv4.ip_local_port_range@ sysctl on compute nodes. If that range is unknown or hard to determine, it is sufficient to allow TCP connections from @arvados-controller@ to ports 1024-65535 on compute nodes, while allowing traffic that is part of existing TCP connections.
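The sysctl value is a pair of numbers, the low and high ends of the ephemeral port range. A minimal sketch of splitting it into the two bounds (the sample value below is a common Linux default; on a real compute node, read @/proc/sys/net/ipv4/ip_local_port_range@):

```python
# Parse the "low high" pair reported by the net.ipv4.ip_local_port_range
# sysctl. The sample value is a common Linux default; the real value comes
# from /proc/sys/net/ipv4/ip_local_port_range on each compute node.
def parse_port_range(text: str) -> tuple:
    low, high = (int(x) for x in text.split())
    return low, high

print(parse_port_range("32768 60999"))  # (32768, 60999)
```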
+
+After changing the configuration, @arvados-controller@ must be restarted for the change to take effect. When the feature is enabled, shell access becomes available for any containers that are already running; when it is disabled, access is removed immediately for running containers, as well as for any containers started subsequently. Restarting @arvados-controller@ will kill any active connections.
+
+Usage instructions for this feature are available in the "User guide":{{site.baseurl}}/user/debugging/container-shell-access.html.
{% endcomment %}
{% include 'notebox_begin_warning' %}
-arvados-dispatch-cloud is only relevant for cloud installations. Skip this section if you are installing an on premises cluster that will spool jobs to Slurm.
+@arvados-dispatch-cloud@ is only relevant for cloud installations. Skip this section if you are installing an on premises cluster that will spool jobs to Slurm or LSF.
{% include 'notebox_end' %}
# "Introduction":#introduction
# "Create an SSH keypair":#sshkeypair
# "The build script":#building
+# "Singularity mksquashfs configuration":#singularity_mksquashfs_configuration
# "Build an AWS image":#aws
# "Build an Azure image":#azure
</code></pre>
</notextile>
+{% assign show_docker_warning = true %}
+
+{% include 'singularity_mksquashfs_configuration' %}
+
+The desired amount of memory to make available for @mksquashfs@ can be configured in an argument to the build script, described in the next section. It defaults to @256M@.
+
h2(#building). The build script
The necessary files are located in the @arvados/tools/compute-images@ directory in the source tree. A build script is provided to generate the image. The @--help@ argument lists all available options:
--azure-sku (default: unset, required if building for Azure, e.g. 16.04-LTS)
Azure SKU image to use
--ssh_user (default: packer)
- The user packer will use lo log into the image
- --domain (default: arvadosapi.com)
- The domain part of the FQDN for the cluster
- --resolver (default: 8.8.8.8)
+ The user packer will use to log into the image
+ --resolver (default: host's network provided)
The dns resolver for the machine
--reposuffix (default: unset)
Set this to "-dev" to track the unstable/dev Arvados repositories
--public-key-file (required)
Path to the public key file that a-d-c will use to log into the compute node
+ --mksquashfs-mem (default: 256M)
+ Only relevant when using Singularity. This is the amount of memory mksquashfs is allowed to use.
--debug
Output debug information (default: false)
</code></pre></notextile>
{% endcomment %}
{% include 'notebox_begin_warning' %}
-arvados-dispatch-cloud is only relevant for cloud installations. Skip this section if you are installing an on premises cluster that will spool jobs to Slurm.
+@arvados-dispatch-cloud@ is only relevant for cloud installations. Skip this section if you are installing an on premises cluster that will spool jobs to Slurm or LSF.
{% include 'notebox_end' %}
# "Introduction":#introduction
The cloud dispatch service can run on any node that can connect to the Arvados API service, the cloud provider's API, and the SSH service on cloud VMs. It is not resource-intensive, so you can run it on the API server node.
+More detail about the internal operation of the dispatcher can be found in the "architecture section":{{site.baseurl}}/architecture/dispatchcloud.html.
+
h2(#update-config). Update config.yml
h3. Configure CloudVMs
--- /dev/null
+---
+layout: default
+navsection: installguide
+title: Install the LSF dispatcher
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+{% include 'notebox_begin_warning' %}
+@arvados-dispatch-lsf@ is only relevant for on premises clusters that will spool jobs to LSF. Skip this section if you use Slurm or if you are installing a cloud cluster.
+{% include 'notebox_end' %}
+
+h2(#overview). Overview
+
+Containers can be dispatched to an LSF cluster. The dispatcher sends work to the cluster using LSF's @bsub@ command, so it works in a variety of LSF configurations.
+
+In order to run containers, you must choose a user that has permission to set up FUSE mounts and run Singularity/Docker containers on each compute node. This install guide refers to this user as the @crunch@ user. We recommend you create this user on each compute node with the same UID and GID, and add it to the @fuse@ and @docker@ system groups to grant it the necessary permissions. However, you can run the dispatcher under any account with sufficient permissions across the cluster.
+
+Set up all of your compute nodes with "Docker":../crunch2/install-compute-node-docker.html or "Singularity":../crunch2/install-compute-node-singularity.html.
+
+*Current limitations*:
+* Arvados container priority is not propagated to LSF job priority. This can cause inefficient use of compute resources, and even deadlock if there are fewer compute nodes than concurrent Arvados workflows.
+* Combining LSF with Docker may not work, depending on LSF configuration and user/group IDs (if LSF only sets up the configured user's primary group ID when executing the crunch-run process on a compute node, it may not have permission to connect to the Docker daemon).
+
+h2(#update-config). Update config.yml
+
+@arvados-dispatch-lsf@ reads the common configuration file at @/etc/arvados/config.yml@.
+
+Add a DispatchLSF entry to the Services section, using the hostname where @arvados-dispatch-lsf@ will run, and an available port:
+
+<notextile>
+<pre> Services:
+ DispatchLSF:
+ InternalURLs:
+ "http://<code class="userinput">hostname.zzzzz.arvadosapi.com:9007</code>": {}</pre>
+</notextile>
+
+Review the following configuration parameters and adjust as needed.
+
+h3(#BsubSudoUser). Containers.LSF.BsubSudoUser
+
+arvados-dispatch-lsf uses @sudo@ to execute @bsub@, for example @sudo -E -u crunch bsub [...]@. This means the @crunch@ account must exist on the hosts where LSF jobs run ("execution hosts"), as well as on the host where you are installing the Arvados LSF dispatcher (the "submission host"). To use a user account other than @crunch@, configure @BsubSudoUser@:
+
+<notextile>
+<pre> Containers:
+ LSF:
+ <code class="userinput">BsubSudoUser: <b>lsfuser</b>
+</code></pre>
+</notextile>
+
+Alternatively, you can arrange for the arvados-dispatch-lsf process to run as an unprivileged user that has a corresponding account on all compute nodes, and disable the use of @sudo@ by specifying an empty string:
+
+<notextile>
+<pre> Containers:
+ LSF:
+ # Don't use sudo
+ <code class="userinput">BsubSudoUser: <b>""</b>
+</code></pre>
+</notextile>
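The way @BsubSudoUser@ wraps the submit command, in both the sudo and no-sudo cases, can be pictured with a small sketch (a hypothetical helper, not the dispatcher's actual code):

```python
# Hypothetical sketch of how a dispatcher wraps its submit command with
# sudo when BsubSudoUser is set, and skips sudo when it is empty.
def wrap_bsub(bsub_args, sudo_user):
    if sudo_user:
        return ["sudo", "-E", "-u", sudo_user] + bsub_args
    return list(bsub_args)

print(wrap_bsub(["bsub", "-J", "job1"], "crunch"))
# ['sudo', '-E', '-u', 'crunch', 'bsub', '-J', 'job1']
print(wrap_bsub(["bsub", "-J", "job1"], ""))
# ['bsub', '-J', 'job1']
```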
+
+
+h3(#SbatchArguments). Containers.LSF.BsubArgumentsList
+
+When arvados-dispatch-lsf invokes @bsub@, you can add arguments to the command by specifying @BsubArgumentsList@. You can use this to send the jobs to specific cluster partitions or add resource requests. Set @BsubArgumentsList@ to an array of strings. For example:
+
+<notextile>
+<pre> Containers:
+ LSF:
+ <code class="userinput">BsubArgumentsList: <b>["-C", "0", "-o", "/tmp/crunch-run.%J.out", "-e", "/tmp/crunch-run.%J.err"]</b></code>
+</pre>
+</notextile>
+
+Note that the default value for @BsubArgumentsList@ uses the @-o@ and @-e@ arguments to write stdout/stderr data to files in @/tmp@ on the compute nodes, which is helpful for troubleshooting installation/configuration problems. Ensure you have something in place to delete old files from @/tmp@, or adjust these arguments accordingly.
+
+
+h3(#PollPeriod). Containers.PollInterval
+
+arvados-dispatch-lsf polls the API server periodically for new containers to run. The @PollInterval@ option controls how often this poll happens. Set this to a number suffixed with one of the time units @s@, @m@, or @h@. For example:
+
+<notextile>
+<pre> Containers:
+ <code class="userinput">PollInterval: <b>10s</b>
+</code></pre>
+</notextile>
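Duration strings of this form combine integer values with unit suffixes; a sketch of how such a value decomposes into seconds (an illustrative parser, not the dispatcher's own code):

```python
import re

# Illustrative parser for duration strings such as "10s", "5m", or "1h30m".
# Not the dispatcher's own code; it just shows how the units combine.
_UNITS = {"s": 1, "m": 60, "h": 3600}

def duration_seconds(value: str) -> int:
    return sum(int(n) * _UNITS[u] for n, u in re.findall(r"(\d+)([smh])", value))

print(duration_seconds("10s"))    # 10
print(duration_seconds("1h30m"))  # 5400
```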
+
+
+h3(#ReserveExtraRAM). Containers.ReserveExtraRAM: Extra RAM for jobs
+
+Extra RAM to reserve (in bytes) on each LSF job submitted by Arvados, which is added to the amount specified in the container's @runtime_constraints@. If not provided, the default value is zero.
+
+Supports suffixes @KB@, @KiB@, @MB@, @MiB@, @GB@, @GiB@, @TB@, @TiB@, @PB@, @PiB@, @EB@, @EiB@ (where @KB@ is 10[^3^], @KiB@ is 2[^10^], @MB@ is 10[^6^], @MiB@ is 2[^20^] and so forth).
+
+<notextile>
+<pre> Containers:
+ <code class="userinput">ReserveExtraRAM: <b>256MiB</b></code>
+</pre>
+</notextile>
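The decimal (@KB@) versus binary (@KiB@) suffix distinction above can be made concrete with a short sketch (an illustrative converter, not Arvados code):

```python
# Illustrative converter for size strings such as "256MiB" or "1GB",
# following the decimal (KB = 10^3) vs binary (KiB = 2^10) rule above.
# Binary suffixes are registered first so "MiB" is not mistaken for "B".
_SUFFIXES = {}
for i, prefix in enumerate("KMGTPE", start=1):
    _SUFFIXES[prefix + "iB"] = 1024 ** i
    _SUFFIXES[prefix + "B"] = 1000 ** i

def to_bytes(size: str) -> int:
    for suffix, multiplier in _SUFFIXES.items():
        if size.endswith(suffix):
            return int(size[:-len(suffix)]) * multiplier
    return int(size)

print(to_bytes("256MiB"))  # 268435456
print(to_bytes("1GB"))     # 1000000000
```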
+
+
+h3(#CrunchRunCommand-network). Containers.CrunchRunArgumentsList: Using host networking for containers
+
+Older Linux kernels (prior to 3.18) have bugs in network namespace handling which can lead to compute node lockups. This is indicated by blocked kernel tasks in "Workqueue: netns cleanup_net". If you are experiencing this problem, as a workaround you can disable use of network namespaces by Docker across the cluster. Be aware this reduces container isolation, which may be a security risk.
+
+<notextile>
+<pre> Containers:
+ <code class="userinput">CrunchRunArgumentsList:
+ - <b>"-container-enable-networking=always"</b>
+ - <b>"-container-network-mode=host"</b></code>
+</pre>
+</notextile>
+
+{% assign arvados_component = 'arvados-dispatch-lsf' %}
+
+{% include 'install_packages' %}
+
+{% include 'start_service' %}
+
+{% include 'restart_api' %}
{% endcomment %}
{% include 'notebox_begin_warning' %}
-crunch-dispatch-slurm is only relevant for on premises clusters that will spool jobs to Slurm. Skip this section if you are installing a cloud cluster.
+@crunch-dispatch-slurm@ is only relevant for on premises clusters that will spool jobs to Slurm. Skip this section if you use LSF or if you are installing a cloud cluster.
{% include 'notebox_end' %}
-Containers can be dispatched to a Slurm cluster. The dispatcher sends work to the cluster using Slurm's @sbatch@ command, so it works in a variety of SLURM configurations.
+Containers can be dispatched to a Slurm cluster. The dispatcher sends work to the cluster using Slurm's @sbatch@ command, so it works in a variety of Slurm configurations.
In order to run containers, you must run the dispatcher as a user that has permission to set up FUSE mounts and run Docker containers on each compute node. This install guide refers to this user as the @crunch@ user. We recommend you create this user on each compute node with the same UID and GID, and add it to the @fuse@ and @docker@ system groups to grant it the necessary permissions. However, you can run the dispatcher under any account with sufficient permissions across the cluster.
Whenever you change this file, you will need to update the copy _on every compute node_ as well as the controller node, and then run @sudo scontrol reconfigure@.
-*@ControlMachine@* should be a DNS name that resolves to the Slurm controller (dispatch/API server). This must resolve correctly on all Slurm worker nodes as well as the controller itself. In general SLURM is very sensitive about all of the nodes being able to communicate with the controller _and one another_, all using the same DNS names.
+*@ControlMachine@* should be a DNS name that resolves to the Slurm controller (dispatch/API server). This must resolve correctly on all Slurm worker nodes as well as the controller itself. In general Slurm is very sensitive about all of the nodes being able to communicate with the controller _and one another_, all using the same DNS names.
*@SelectType=select/linear@* is needed on cloud-based installations that update node sizes dynamically, but it can only schedule one container at a time on each node. On a static or homogeneous cluster, use @SelectType=select/cons_res@ with @SelectTypeParameters=CR_CPU_Memory@ instead to enable node sharing.
* In @application.yml@: <code>assign_node_hostname: worker1-%<slot_number>04d</code>
* In @slurm.conf@: <code>NodeName=worker1-[0000-0255]</code>
-If your worker hostnames are already assigned by other means, and the full set of names is known in advance, have your worker node bootstrapping script (see "Installing a compute node":install-compute-node.html) send its current hostname, rather than expect Arvados to assign one.
+If your worker hostnames are already assigned by other means, and the full set of names is known in advance, have your worker node bootstrapping script send its current hostname, rather than expect Arvados to assign one.
* In @application.yml@: <code>assign_node_hostname: false</code>
* In @slurm.conf@: <code>NodeName=alice,bob,clay,darlene</code>
{% endcomment %}
{% include 'notebox_begin_warning' %}
-crunch-dispatch-slurm is only relevant for on premises clusters that will spool jobs to Slurm. Skip this section if you are installing a cloud cluster.
+@crunch-dispatch-slurm@ is only relevant for on premises clusters that will spool jobs to Slurm. Skip this section if you use LSF or if you are installing a cloud cluster.
{% include 'notebox_end' %}
# "Introduction":#introduction
h2(#introduction). Introduction
-This assumes you already have a Slurm cluster, and have "set up all of your compute nodes":install-compute-node.html. Slurm packages are available for CentOS, Debian and Ubuntu. Please see your distribution package repositories. For information on installing Slurm from source, see "this install guide":https://slurm.schedmd.com/quickstart_admin.html
+This assumes you already have a Slurm cluster, and have set up all of your compute nodes with "Docker":../crunch2/install-compute-node-docker.html or "Singularity":../crunch2/install-compute-node-singularity.html. Slurm packages are available for CentOS, Debian and Ubuntu. Please see your distribution package repositories. For information on installing Slurm from source, see "this install guide":https://slurm.schedmd.com/quickstart_admin.html
The Arvados Slurm dispatcher can run on any node that can submit requests to both the Arvados API server and the Slurm controller (via @sbatch@). It is not resource-intensive, so you can run it on the API server node.
h3(#ReserveExtraRAM). Containers.ReserveExtraRAM: Extra RAM for jobs
-Extra RAM to reserve (in bytes) on each Slurm job submitted by Arvados, which is added to the amount specified in the container's @runtime_constraints@. If not provided, the default value is zero. Helpful when using @-cgroup-parent-subsystem@, where @crunch-run@ and @arv-mount@ share the control group memory limit with the user process. In this situation, at least 256MiB is recommended to accomodate each container's @crunch-run@ and @arv-mount@ processes.
+Extra RAM to reserve (in bytes) on each Slurm job submitted by Arvados, which is added to the amount specified in the container's @runtime_constraints@. If not provided, the default value is zero. Helpful when using @-cgroup-parent-subsystem@, where @crunch-run@ and @arv-mount@ share the control group memory limit with the user process. In this situation, at least 256MiB is recommended to accommodate each container's @crunch-run@ and @arv-mount@ processes.
Supports suffixes @KB@, @KiB@, @MB@, @MiB@, @GB@, @GiB@, @TB@, @TiB@, @PB@, @PiB@, @EB@, @EiB@ (where @KB@ is 10[^3^], @KiB@ is 2[^10^], @MB@ is 10[^6^], @MiB@ is 2[^20^] and so forth).
h3(#PrioritySpread). Containers.Slurm.PrioritySpread
crunch-dispatch-slurm adjusts the "nice" values of its Slurm jobs to ensure containers are prioritized correctly relative to one another. This option tunes the adjustment mechanism.
-* If non-Arvados jobs run on your Slurm cluster, and your Arvados containers are waiting too long in the Slurm queue because their "nice" values are too high for them to compete with other SLURM jobs, you should use a smaller PrioritySpread value.
+* If non-Arvados jobs run on your Slurm cluster, and your Arvados containers are waiting too long in the Slurm queue because their "nice" values are too high for them to compete with other Slurm jobs, you should use a smaller PrioritySpread value.
* If you have an older Slurm system that limits nice values to 10000, a smaller @PrioritySpread@ can help avoid reaching that limit.
* In other cases, a larger value is beneficial because it reduces the total number of adjustments made by executing @scontrol@.
Some versions of Docker (at least 1.9), when run under systemd, require the cgroup parent to be specified as a systemd slice. This causes an error when specifying a cgroup parent created outside systemd, such as those created by Slurm.
-You can work around this issue by disabling the Docker daemon's systemd integration. This makes it more difficult to manage Docker services with systemd, but Crunch does not require that functionality, and it will be able to use Slurm's cgroups as container parents. To do this, "configure the Docker daemon on all compute nodes":install-compute-node.html#configure_docker_daemon to run with the option @--exec-opt native.cgroupdriver=cgroupfs@.
+You can work around this issue by disabling the Docker daemon's systemd integration. This makes it more difficult to manage Docker services with systemd, but Crunch does not require that functionality, and it will be able to use Slurm's cgroups as container parents. To do this, configure the Docker daemon on all compute nodes to run with the option @--exec-opt native.cgroupdriver=cgroupfs@.
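One common way to set this option persistently is via the Docker daemon configuration file; a sketch, assuming a systemd-managed Docker that reads @/etc/docker/daemon.json@:

```json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
```

Restart the Docker daemon after editing the file for the option to take effect.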
{% include 'notebox_end' %}
{% endcomment %}
{% include 'notebox_begin_warning' %}
-crunch-dispatch-slurm is only relevant for on premises clusters that will spool jobs to Slurm. Skip this section if you are installing a cloud cluster.
+@crunch-dispatch-slurm@ is only relevant for on-premises clusters that will spool jobs to Slurm. Skip this section if you use LSF or if you are installing a cloud cluster.
{% include 'notebox_end' %}
h2. Test compute node setup
h2. Test the dispatcher
+Make sure all of your compute nodes are set up with "Docker":../crunch2/install-compute-node-docker.html or "Singularity":../crunch2/install-compute-node-singularity.html.
+
On the dispatch node, start monitoring the crunch-dispatch-slurm logs:
<notextile>
---
layout: default
navsection: installguide
-title: Set up a Slurm compute node
+title: Set up a compute node with Docker
...
{% comment %}
Copyright (C) The Arvados Authors. All rights reserved.
{% endcomment %}
{% include 'notebox_begin_warning' %}
-crunch-dispatch-slurm is only relevant for on premises clusters that will spool jobs to Slurm. Skip this section if you are installing a cloud cluster.
+This page describes the requirements for a compute node in a Slurm or LSF cluster that will run containers dispatched by @crunch-dispatch-slurm@ or @arvados-dispatch-lsf@. If you are installing a cloud cluster, refer to "Build a cloud compute node image":{{ site.baseurl }}/install/crunch2-cloud/install-compute-node.html.
+{% include 'notebox_end' %}
+
+{% include 'notebox_begin_warning' %}
+These instructions apply when @Containers.RuntimeEngine@ is set to @docker@. Refer to "Set up a compute node with Singularity":install-compute-node-singularity.html when running @singularity@.
{% include 'notebox_end' %}
# "Introduction":#introduction
# "Set up Docker":#docker
# "Update fuse.conf":#fuse
# "Update docker-cleaner.json":#docker-cleaner
-# "Configure Linux cgroups accounting":#cgroups
-# "Install Docker":#install_docker
-# "Configure the Docker daemon":#configure_docker_daemon
# "Install python-arvados-fuse and crunch-run and arvados-docker-cleaner":#install-packages
h2(#introduction). Introduction
-This page describes how to configure a compute node so that it can be used to run containers dispatched by Arvados, with Slurm on a static cluster. These steps must be performed on every compute node.
+This page describes how to configure a compute node so that it can be used to run containers dispatched by Arvados on a static cluster. These steps must be performed on every compute node.
h2(#docker). Set up Docker
--- /dev/null
+---
+layout: default
+navsection: installguide
+title: Set up a compute node with Singularity
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+{% include 'notebox_begin_warning' %}
+This page describes the requirements for a compute node in a Slurm or LSF cluster that will run containers dispatched by @crunch-dispatch-slurm@ or @arvados-dispatch-lsf@. If you are installing a cloud cluster, refer to "Build a cloud compute node image":{{ site.baseurl }}/install/crunch2-cloud/install-compute-node.html.
+{% include 'notebox_end' %}
+
+{% include 'notebox_begin_warning' %}
+These instructions apply when @Containers.RuntimeEngine@ is set to @singularity@. Refer to "Set up a compute node with Docker":install-compute-node-docker.html when running @docker@.
+{% include 'notebox_end' %}
+
+# "Introduction":#introduction
+# "Install python-arvados-fuse and crunch-run and squashfs-tools":#install-packages
+# "Set up Singularity":#singularity
+# "Singularity mksquashfs configuration":#singularity_mksquashfs_configuration
+
+h2(#introduction). Introduction
+
+Please refer to the "Singularity":{{site.baseurl}}/architecture/singularity.html documentation in the Architecture section.
+
+This page describes how to configure a compute node so that it can be used to run containers dispatched by Arvados on a static cluster. These steps must be performed on every compute node.
+
+{% assign arvados_component = 'python-arvados-fuse crunch-run squashfs-tools' %}
+
+{% include 'install_packages' %}
+
+h2(#singularity). Set up Singularity
+
+Follow the "Singularity installation instructions":https://sylabs.io/guides/3.7/user-guide/quick_start.html. Make sure @singularity@ and @mksquashfs@ are working:
+
+<notextile>
+<pre><code>$ <span class="userinput">singularity version</span>
+3.7.4
+$ <span class="userinput">mksquashfs -version</span>
+mksquashfs version 4.3-git (2014/06/09)
+[...]
+</code></pre>
+</notextile>
+
+Then update @Containers.RuntimeEngine@ in your cluster configuration:
+
+<notextile>
+<pre><code> # Container runtime: "docker" (default) or "singularity"
+ RuntimeEngine: singularity
+</code></pre>
+</notextile>
+
+{% include 'singularity_mksquashfs_configuration' %}
+
+h2(#singularity_loop_device_errors). Singularity loop device errors
+
+With singularity v3.9.1 and earlier, containers may fail intermittently at startup with an error message similar to the following in the container log's @stderr.txt@ (line breaks added):
+
+<notextile>
+<pre><code>FATAL: container creation failed:
+ mount /proc/self/fd/3->/usr/local/var/singularity/mnt/session/rootfs error:
+ while mounting image /proc/self/fd/3:
+ failed to find loop device:
+ could not attach image file to loop device:
+ failed to set loop flags on loop device:
+ resource temporarily unavailable
+</code></pre>
+</notextile>
+
+This problem is addressed in singularity v3.9.2. For details, please see "Arvados issue #18489":https://dev.arvados.org/issues/18489 and "singularity PR #458":https://github.com/sylabs/singularity/pull/458.
{% endcomment %}
{% include 'notebox_begin' %}
-This section is about installing an Arvados cluster. If you are just looking to install Arvados client tools and libraries, "go to the SDK section.":{{site.baseurl}}/sdk
+This section is about installing an Arvados cluster. If you are just looking to install Arvados client tools and libraries, "go to the SDK section.":{{site.baseurl}}/sdk/
{% include 'notebox_end' %}
Arvados components run on GNU/Linux systems and support AWS, GCP and Azure cloud platforms as well as on-premises installs. Arvados supports Debian and derivatives such as Ubuntu, as well as Red Hat and derivatives such as CentOS. "Arvados is Free Software":{{site.baseurl}}/user/copying/copying.html and self-installed clusters are not limited in any way. Commercial support and development are also available from "Curii Corporation.":mailto:info@curii.com
h3. Test configuration
-notextile. <pre><code>$ <span class="userinput">sudo -u git -i bash -c 'cd /var/www/arvados-api/current && bundle exec script/arvados-git-sync.rb production'</span></code></pre>
+notextile. <pre><code>$ <span class="userinput">sudo -u git -i bash -c 'cd /var/www/arvados-api/current && bin/bundle exec script/arvados-git-sync.rb production'</span></code></pre>
h3. Enable the synchronization script
Create @/etc/cron.d/arvados-git-sync@ with the following content:
<notextile>
-<pre><code><span class="userinput">*/5 * * * * git cd /var/www/arvados-api/current && bundle exec script/arvados-git-sync.rb production</span>
+<pre><code><span class="userinput">*/5 * * * * git cd /var/www/arvados-api/current && bin/bundle exec script/arvados-git-sync.rb production</span>
</code></pre>
</notextile>
Keep-balance deletes unreferenced and overreplicated blocks from Keep servers, makes additional copies of underreplicated blocks, and moves blocks into optimal locations as needed (e.g., after adding new servers). See "Balancing Keep servers":{{site.baseurl}}/admin/keep-balance.html for usage details.
-Keep-balance can be installed anywhere with network access to Keep services. Typically it runs on the same host as keepproxy.
+Keep-balance can be installed anywhere with network access to Keep services, arvados-controller, and PostgreSQL. Typically it runs on the same host as keepproxy.
*A cluster should have only one instance of keep-balance running at a time.*
{% include 'notebox_begin' %}
-If you are installing keep-balance on an existing system with valuable data, you can run keep-balance in "dry run" mode first and review its logs as a precaution. To do this, edit your keep-balance startup script to use the flags @-commit-pulls=false -commit-trash=false@.
+If you are installing keep-balance on an existing system with valuable data, you can run keep-balance in "dry run" mode first and review its logs as a precaution. To do this, edit your keep-balance startup script to use the flags @-commit-pulls=false -commit-trash=false -commit-confirmed-fields=false@.
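For example, with a systemd-managed keep-balance service the flags could be added via a drop-in override. This is a sketch; the drop-in filename and binary path are assumptions, not taken from this guide:

```ini
# /etc/systemd/system/keep-balance.service.d/dry-run.conf (hypothetical path)
[Service]
# Clear the packaged ExecStart, then re-run keep-balance in "dry run" mode
ExecStart=
ExecStart=/usr/bin/keep-balance -commit-pulls=false -commit-trash=false -commit-confirmed-fields=false
```

Run @systemctl daemon-reload@ and restart the service for the override to take effect.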
{% include 'notebox_end' %}
h2(#introduction). Introduction
-The Keep-web server provides read/write access to files stored in Keep using WebDAV and S3 protocols. This makes it easy to access files in Keep from a browser, or mount Keep as a network folder using WebDAV support in various operating systems. It serves public data to unauthenticated clients, and serves private data to clients that supply Arvados API tokens. It can be installed anywhere with access to Keep services, typically behind a web proxy that provides TLS support. See the "godoc page":http://godoc.org/github.com/curoverse/arvados/services/keep-web for more detail.
+The Keep-web server provides read/write access to files stored in Keep using WebDAV and S3 protocols. This makes it easy to access files in Keep from a browser, or mount Keep as a network folder using WebDAV support in various operating systems. It serves public data to unauthenticated clients, and serves private data to clients that supply Arvados API tokens. It can be installed anywhere with access to Keep services, typically behind a web proxy that provides TLS support. See the "godoc page":https://pkg.go.dev/git.arvados.org/arvados.git/services/keep-web for more detail.
h2(#dns). Configure DNS
{% include 'notebox_begin' %}
Whether you choose to serve collections from their own subdomain or from a single domain, it's important to keep in mind that they should be served from the same _site_ as Workbench for the inline previews to work.
-Please check "keep-web's URL pattern guide":/api/keep-web-urls.html#same-site to learn more.
+Please check "keep-web's URL pattern guide":../api/keep-web-urls.html#same-site to learn more.
{% include 'notebox_end' %}
h2. Set InternalURLs
h2(#update-config). Configure anonymous user token
-{% assign railscmd = "bundle exec ./script/get_anonymous_user_token.rb --get" %}
+{% assign railscmd = "bin/bundle exec ./script/get_anonymous_user_token.rb --get" %}
{% assign railsout = "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz" %}
If you intend to use Keep-web to serve public data to anonymous clients, configure it with an anonymous token.
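For reference, the anonymous token ultimately lives in the cluster configuration. The following is a sketch, assuming the @Users.AnonymousUserToken@ key (the token value is a placeholder):

```yaml
Clusters:
  ClusterID:
    Users:
      # Placeholder value: use a long random string
      AnonymousUserToken: "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz"
```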
h3. Update nginx configuration
-Put a reverse proxy with SSL support in front of keep-web. Keep-web itself runs on the port 25107 (or whatever is specified in @Services.Keepproxy.InternalURL@) the reverse proxy runs on port 443 and forwards requests to Keepproxy.
+Put a reverse proxy with SSL support in front of keep-web. Keep-web itself runs on port 9002 (or whatever is specified in @Services.WebDAV.InternalURL@) while the reverse proxy runs on port 443 and forwards requests to Keep-web.
Use a text editor to create a new file @/etc/nginx/conf.d/keep-web.conf@ with the following configuration. Options that need attention are marked in <span class="userinput">red</span>.
h2(#update-nginx). Update Nginx configuration
-Put a reverse proxy with SSL support in front of Keepproxy. Keepproxy itself runs on the port 25107 (or whatever is specified in @Services.Keepproxy.InternalURL@) the reverse proxy runs on port 443 and forwards requests to Keepproxy.
+Put a reverse proxy with SSL support in front of Keepproxy. Keepproxy itself runs on port 25107 (or whatever is specified in @Services.Keepproxy.InternalURL@) while the reverse proxy runs on port 443 and forwards requests to Keepproxy.
Use a text editor to create a new file @/etc/nginx/conf.d/keepproxy.conf@ with the following configuration. Options that need attention are marked in <span class="userinput">red</span>.
table(table table-bordered table-condensed).
|_. Distribution|_. State|_. Last supported version|
|CentOS 7|Supported|Latest|
+|Debian 11 ("bullseye")|Supported|Latest|
|Debian 10 ("buster")|Supported|Latest|
+|Ubuntu 20.04 ("focal")|Supported|Latest|
|Ubuntu 18.04 ("bionic")|Supported|Latest|
-|Ubuntu 16.04 ("xenial")|Supported|Latest|
-|Debian 9 ("stretch")|EOL|Latest 2.1.X release|
+|Ubuntu 16.04 ("xenial")|EOL|2.1.2|
+|Debian 9 ("stretch")|EOL|2.1.2|
|Debian 8 ("jessie")|EOL|1.4.3|
|Ubuntu 14.04 ("trusty")|EOL|1.4.3|
|Ubuntu 12.04 ("precise")|EOL|8ed7b6dd5d4df93a3f37096afe6d6f81c2a7ef6e (2017-05-03)|
table(table table-bordered table-condensed).
|\3=. *Core*|
-|"Postgres database":install-postgresql.html |Stores data for the API server.|Required.|
-|"API server":install-api-server.html |Core Arvados logic for managing users, groups, collections, containers, and enforcing permissions.|Required.|
+|"PostgreSQL database":install-postgresql.html |Stores data for the API server.|Required.|
+|"API server + Controller":install-api-server.html |Core Arvados logic for managing users, groups, collections, containers, and enforcing permissions.|Required.|
|\3=. *Keep (storage)*|
|"Keepstore":install-keepstore.html |Stores content-addressed blocks in a variety of backends (local filesystem, cloud object storage).|Required.|
|"Keepproxy":install-keepproxy.html |Gateway service to access keep servers from external networks.|Required to be able to use arv-put, arv-get, or arv-mount outside the private Arvados network.|
|"Git server":install-arv-git-httpd.html |Arvados-hosted git repositories, with Arvados-token based authentication.|Optional, but required by Workflow Composer.|
|\3=. *Crunch (running containers)*|
|"arvados-dispatch-cloud":crunch2-cloud/install-dispatch-cloud.html |Allocate and free cloud VM instances on demand based on workload.|Optional, not needed for a static Slurm cluster such as on-premises HPC.|
-|"crunch-dispatch-slurm":crunch2-slurm/install-dispatch.html |Run analysis workflows using Docker containers distributed across a Slurm cluster.|Optional, not needed for a Cloud installation, or if you wish to use Arvados for data management only.|
+|"crunch-dispatch-slurm":crunch2-slurm/install-dispatch.html |Run analysis workflows using Docker or Singularity containers distributed across a Slurm cluster.|Optional, not needed for a Cloud installation, or if you wish to use Arvados for data management only.|
+|"crunch-dispatch-lsf":crunch2-lsf/install-dispatch.html |Run analysis workflows using Docker or Singularity containers distributed across an LSF cluster.|Optional, not needed for a Cloud installation, or if you wish to use Arvados for data management only.|
h2(#identity). Identity provider
* LDAP login to authenticate users by username/password using the LDAP protocol, supported by many services such as OpenLDAP and Active Directory.
* PAM login to authenticate users by username/password according to the PAM configuration on the controller node.
+h2(#postgresql). PostgreSQL
+
+Arvados works well with a standalone PostgreSQL installation. When deploying on AWS, Aurora RDS also works but Aurora Serverless is not recommended.
+
h2(#storage). Storage backend
Choose which backend you will use for storing and retrieving content-addressed Keep blocks.
Choose which backend you will use to schedule computation.
* On AWS EC2 and Azure, you probably want to use @arvados-dispatch-cloud@ to manage the full lifecycle of cloud compute nodes: starting up nodes sized to the container request, executing containers on those nodes, and shutting nodes down when no longer needed.
-* For on-premise HPC clusters using "slurm":https://slurm.schedmd.com/ use @crunch-dispatch-slurm@ to execute containers with slurm job submissions.
+* For on-premises HPC clusters using "slurm":https://slurm.schedmd.com/ use @crunch-dispatch-slurm@ to execute containers with slurm job submissions.
+* For on-premises HPC clusters using "LSF":https://www.ibm.com/products/hpc-workload-management/ use @crunch-dispatch-lsf@ to execute containers with LSF job submissions.
* For single node demos, use @crunch-dispatch-local@ to execute containers directly.
h2(#machines). Hardware (or virtual machines)
<div class="offset1">
table(table table-bordered table-condensed).
|_. Function|_. Number of nodes|_. Recommended specs|
-|Postgres database, Arvados API server, Arvados controller, Git, Websockets, Container dispatcher|1|16+ GiB RAM, 4+ cores, fast disk for database|
+|PostgreSQL database, Arvados API server, Arvados controller, Git, Websockets, Container dispatcher|1|16+ GiB RAM, 4+ cores, fast disk for database|
|Workbench, Keepproxy, Keep-web, Keep-balance|1|8 GiB RAM, 2+ cores|
|Keepstore servers ^1^|2+|4 GiB RAM|
|Compute worker nodes ^1^|0+ |Depends on workload; scaled dynamically in the cloud|
</div>
^1^ Should be scaled up as needed
-^2^ Refers to shell nodes managed by Arvados, that provide ssh access for users to interact with Arvados at the command line. Optional.
+^2^ Refers to shell nodes managed by Arvados that provide ssh access for users to interact with Arvados at the command line. Optional.
{% include 'notebox_begin' %}
For a small demo installation, it is possible to run all the Arvados services on a single node. Special considerations for single-node installs will be noted in boxes like this.
h2(#dnstls). DNS entries and TLS certificates
-The following services are normally public-facing and require DNS entries and corresponding TLS certificates. Get certificates from your preferred TLS certificate provider. We recommend using "Let's Encrypt":https://letsencrypt.org/. You can run several services on same node, but each distinct hostname requires its own TLS certificate.
+The following services are normally public-facing and require DNS entries and corresponding TLS certificates. Get certificates from your preferred TLS certificate provider. We recommend using "Let's Encrypt":https://letsencrypt.org/. You can run several services on the same node, but each distinct DNS name requires a valid, matching TLS certificate.
-This guide uses the following hostname conventions. A later part of this guide will describe how to set up Nginx virtual hosts.
+This guide uses the following DNS name conventions. A later part of this guide will describe how to set up Nginx virtual hosts.
+It is possible to use custom DNS names for the Arvados services.
<div class="offset1">
table(table table-bordered table-condensed).
-|_. Function|_. Hostname|
+|_. Function|_. DNS name|
|Arvados API|@ClusterID.example.com@|
|Arvados Git server|git.@ClusterID.example.com@|
+|Arvados Webshell|webshell.@ClusterID.example.com@|
|Arvados Websockets endpoint|ws.@ClusterID.example.com@|
|Arvados Workbench|workbench.@ClusterID.example.com@|
|Arvados Workbench 2|workbench2.@ClusterID.example.com@|
|Arvados Keepproxy server|keep.@ClusterID.example.com@|
|Arvados Keep-web server|download.@ClusterID.example.com@
_and_
-*.collections.@ClusterID.example.com@ or
-*<notextile>--</notextile>collections.@ClusterID.example.com@ or
+*.collections.@ClusterID.example.com@ _or_
+*<notextile>--</notextile>collections.@ClusterID.example.com@ _or_
collections.@ClusterID.example.com@ (see the "keep-web install docs":install-keep-web.html)|
</div>
+Setting up Arvados is easiest when wildcard TLS and wildcard DNS are available. It is also possible to set up Arvados without them, but not having a wildcard for @keep-web@ (i.e. not having *.collections.@ClusterID.example.com@) comes with a tradeoff: it disables some features that allow users to view Arvados-hosted data in their browsers. More information on this tradeoff, caused by the CORS rules applied by modern browsers, is available in the "keep-web URL pattern guide":../api/keep-web-urls.html.
+
+The table below lists the required TLS certificates and DNS names in each scenario.
+
+<div class="offset1">
+table(table table-bordered table-condensed).
+||_. Wildcard TLS and DNS available|_. Wildcard TLS available|_. Other|
+|TLS|*.@ClusterID.example.com@
+@ClusterID.example.com@
+*.collections.@ClusterID.example.com@|*.@ClusterID.example.com@
+@ClusterID.example.com@|@ClusterID.example.com@
+git.@ClusterID.example.com@
+webshell.@ClusterID.example.com@
+ws.@ClusterID.example.com@
+workbench.@ClusterID.example.com@
+workbench2.@ClusterID.example.com@
+keep.@ClusterID.example.com@
+download.@ClusterID.example.com@
+collections.@ClusterID.example.com@|
+|DNS|@ClusterID.example.com@
+git.@ClusterID.example.com@
+webshell.@ClusterID.example.com@
+ws.@ClusterID.example.com@
+workbench.@ClusterID.example.com@
+workbench2.@ClusterID.example.com@
+keep.@ClusterID.example.com@
+download.@ClusterID.example.com@
+*.collections.@ClusterID.example.com@|@ClusterID.example.com@
+git.@ClusterID.example.com@
+webshell.@ClusterID.example.com@
+ws.@ClusterID.example.com@
+workbench.@ClusterID.example.com@
+workbench2.@ClusterID.example.com@
+keep.@ClusterID.example.com@
+download.@ClusterID.example.com@
+collections.@ClusterID.example.com@|@ClusterID.example.com@
+git.@ClusterID.example.com@
+webshell.@ClusterID.example.com@
+ws.@ClusterID.example.com@
+workbench.@ClusterID.example.com@
+workbench2.@ClusterID.example.com@
+keep.@ClusterID.example.com@
+download.@ClusterID.example.com@
+collections.@ClusterID.example.com@|
+</div>
+
{% include 'notebox_begin' %}
It is also possible to create your own certificate authority, issue server certificates, and install a custom root certificate in the browser. This is out of scope for this guide.
{% include 'notebox_end' %}
Arvados requires at least version *9.4* of PostgreSQL.
+* "AWS":#aws
* "CentOS 7":#centos7
* "Debian or Ubuntu":#debian
+h3(#aws). AWS
+
+When deploying on AWS, Arvados can use an Aurora RDS PostgreSQL database. Aurora Serverless is not recommended.
+
h3(#centos7). CentOS 7
{% assign rh_version = "7" %}
{% include 'note_python_sc' %}
# Install PostgreSQL
<notextile><pre># <span class="userinput">apt-get --no-install-recommends install postgresql postgresql-contrib</span></pre></notextile>
# Configure the database to launch at boot and start now
- <notextile><pre># <span class="userinput">systemctl enable --now postgresql</span></pre></notextile>
+<notextile><pre># <span class="userinput">systemctl enable --now postgresql</span></pre></notextile>
# "Install git and curl":#install-packages
# "Update Git Config":#config-git
# "Create record for VM":#vm-record
-# "Create scoped token":#scoped-token
# "Install arvados-login-sync":#arvados-login-sync
# "Confirm working installation":#confirm-working
h2. Vocabulary configuration (optional)
-Workbench2 can load a vocabulary file which lists available metadata properties for groups and collections. To configure the property vocabulary definition, please visit the "Workbench2 Vocabulary Format":{{site.baseurl}}/admin/workbench2-vocabulary.html page in the Admin section.
+Workbench2 can load a vocabulary file which lists available metadata properties for groups and collections. To configure the property vocabulary definition, please visit the "Metadata Vocabulary Format":{{site.baseurl}}/admin/metadata-vocabulary.html page in the Admin section.
{% assign arvados_component = 'arvados-workbench2' %}
name=Arvados
baseurl=http://rpm.arvados.org/CentOS/$releasever/os/$basearch/
gpgcheck=1
-gpgkey=http://rpm.arvados.org/CentOS/RPM-GPG-KEY-curoverse
+gpgkey=http://rpm.arvados.org/CentOS/RPM-GPG-KEY-arvados
</code></pre>
</notextile>
-{% include 'install_redhat_key' %}
+{% include 'gpg_key_fingerprint' %}
h3(#debian). Debian and Ubuntu
{% include 'install_debian_key' %}
+{% include 'gpg_key_fingerprint' %}
+
As root, add the Arvados package repository to your sources. This command depends on your OS vendor and version:
table(table table-bordered table-condensed).
|_. OS version|_. Command|
+|Debian 11 ("bullseye")|<notextile><code><span class="userinput">echo "deb http://apt.arvados.org/bullseye bullseye main" | tee /etc/apt/sources.list.d/arvados.list</span></code></notextile>|
|Debian 10 ("buster")|<notextile><code><span class="userinput">echo "deb http://apt.arvados.org/buster buster main" | tee /etc/apt/sources.list.d/arvados.list</span></code></notextile>|
|Ubuntu 20.04 ("focal")[1]|<notextile><code><span class="userinput">echo "deb http://apt.arvados.org/focal focal main" | tee /etc/apt/sources.list.d/arvados.list</span></code></notextile>|
|Ubuntu 18.04 ("bionic")[1]|<notextile><code><span class="userinput">echo "deb http://apt.arvados.org/bionic bionic main" | tee /etc/apt/sources.list.d/arvados.list</span></code></notextile>|
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-# "Install Saltstack":#saltstack
-# "Install dependencies":#dependencies
-# "Install Arvados using Saltstack":#saltstack
-# "DNS configuration":#final_steps
+# "Introduction":#introduction
+# "Hosts preparation":#hosts_preparation
+## "Create a compute image":#create_a_compute_image
+# "Multi host install using the provision.sh script":#multi_host
+# "Choose the desired configuration":#choose_configuration
+## "Multiple hosts / multiple hostnames":#multi_host_multi_hostnames
+## "Further customization of the installation (modifying the salt pillars and states)":#further_customization
+# "Installation order":#installation_order
+# "Run the provision.sh script":#run_provision_script
# "Initial user and login":#initial_user
+# "Test the installed cluster running a simple workflow":#test_install
-h2(#saltstack). Install Saltstack
-If you already have a Saltstack environment you can skip this section.
-The simplest way to get Salt up and running on a node is to use the bootstrap script they provide:
+h2(#introduction). Introduction
+Arvados components can be installed in a distributed infrastructure, whether on premises with physical or virtual hosts or in a cloud environment.
+
+As infrastructures vary a great deal from site to site, these instructions should be considered more as guidelines than as fixed steps to follow.
+
+We provide an "installer script":salt.html that can help you deploy the different Arvados components. At the time of writing, the provided examples are suitable to install Arvados on AWS.
+
+h2(#hosts_preparation). Hosts preparation
+
+In order to run Arvados on a multi-host installation, there are a few requirements that your infrastructure has to fulfill.
+
+These instructions explain how to set up a multi-host environment that is suitable for production use of Arvados.
+
+We suggest distributing the Arvados components in the following way, creating at least 6 hosts:
+
+# Database server:
+## postgresql server
+# API node:
+## arvados api server
+## arvados controller
+## arvados websocket
+## arvados cloud dispatcher
+# WORKBENCH node:
+## arvados workbench
+## arvados workbench2
+## arvados webshell
+# KEEPPROXY node:
+## arvados keepproxy
+## arvados keepweb
+# KEEPSTOREs (at least 2)
+## arvados keepstore
+# SHELL node (optional):
+## arvados shell
+
+Note that these hosts can be virtual machines in your infrastructure and they don't need to be physical machines.
+
+Again, if your infrastructure differs from the setup proposed above (e.g., using RDS or an existing database server), remember that you will need to edit the configuration files for the scripts so they work with your infrastructure.
+
+h2(#multi_host). Multi host install using the provision.sh script
+
+{% include 'branchname' %}
+
+This is a package-based installation method. Start with the @provision.sh@ script which is available by cloning the @{{ branchname }}@ branch from "https://git.arvados.org/arvados.git":https://git.arvados.org/arvados.git . The @provision.sh@ script and its supporting files can be found in the "arvados/tools/salt-install":https://git.arvados.org/arvados.git/tree/refs/heads/{{ branchname }}:/tools/salt-install directory in the Arvados git repository.
+
+This procedure will install all the main Arvados components to get you up and running in a multi-host environment.
+
+The @provision.sh@ script will help you deploy Arvados by preparing your environment to run the installer, then running it. The actual installer is located at "arvados-formula":https://git.arvados.org/arvados-formula.git/tree/refs/heads/{{ branchname }} and is cloned while the @provision.sh@ script runs. The installer is built using "Saltstack":https://saltproject.io/, and @provision.sh@ performs the install using masterless mode.
+
+After setting up a few variables in a config file (next step), you'll be ready to run it and get Arvados deployed.
+
+h3(#create_a_compute_image). Create a compute image
+
+In a multi-host installation, containers are dispatched to Docker daemons running on the <i>compute instances</i>, which need some special setup. We provide a "compute image builder script":https://github.com/arvados/arvados/tree/main/tools/compute-images that you can use to build a template image following "these instructions":https://doc.arvados.org/main/install/crunch2-cloud/install-compute-node.html . Once you have that image created, you can use the image ID in the Arvados configuration in the next steps.
+
+h2(#choose_configuration). Choose the desired configuration
+
+For documentation's sake, we will use the cluster name <i>arva2</i> and the domain <i>arv.local</i>. If you don't change them as required in the next steps, installation won't proceed.
+
+We aim to provide example multi-host installation configurations for different infrastructure providers. Currently only an AWS example is available, but it can be adapted to almost any provider with minor changes.
+
+You need to copy one of the example configuration files and its accompanying directory, and edit them to suit your needs.
+
+h3(#multi_host_multi_hostnames). Multiple hosts / multiple hostnames
<notextile>
-<pre><code>curl -L https://bootstrap.saltstack.com -o /tmp/bootstrap_salt.sh
-sudo sh /tmp/bootstrap_salt.sh -XUdfP -x python3
+<pre><code>cp local.params.example.multiple_hosts local.params
+cp -r config_examples/multi_host/aws local_config_dir
</code></pre>
</notextile>
-For more information check "Saltstack's documentation":https://docs.saltstack.com/en/latest/topics/installation/index.html
+Edit the variables in the <i>local.params</i> file. Pay attention to the <b>*_INT_IP</b>, <b>*_TOKEN</b> and <b>*KEY</b> variables. These variables are used to search and replace any matching __VARIABLE__ placeholders in the <i>pillars/*</i> files.
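The substitution mechanism can be sketched in shell. The file and variable names below are illustrative, not the ones @provision.sh@ actually uses:

```shell
# Illustrative only: provision.sh replaces each __VARIABLE__ token in the
# pillar files with the matching value taken from local.params.
CLUSTER=arva2
DOMAIN=arv.local
printf 'ClusterID: __CLUSTER__\nDomain: __DOMAIN__\n' > /tmp/pillar.sls
sed -i "s/__CLUSTER__/${CLUSTER}/g; s/__DOMAIN__/${DOMAIN}/g" /tmp/pillar.sls
cat /tmp/pillar.sls
```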
-h2(#dependencies). Install dependencies
+The <i>multi_host</i> example includes Let's Encrypt salt code to automatically request and install the certificates for the public-facing hosts (API/controller, Workbench, Keepproxy/Keepweb) using AWS' Route53.
-Arvados depends in a few applications and packages (postgresql, nginx+passenger, ruby) that can also be installed using their respective Saltstack formulas.
+{% include 'install_custom_certificates' %}
-The formulas we use are:
+h3(#further_customization). Further customization of the installation (modifying the salt pillars and states)
-* "postgres":https://github.com/saltstack-formulas/postgres-formula.git
-* "nginx":https://github.com/saltstack-formulas/nginx-formula.git
-* "docker":https://github.com/saltstack-formulas/docker-formula.git
-* "locale":https://github.com/saltstack-formulas/locale-formula.git
+You will need further customization to suit your environment, which can be done by editing the Saltstack pillars and states files. Pay particular attention to the <i>pillars/arvados.sls</i> file, where you will need to provide some information that describes your environment.
-There are example Salt pillar files for each of those formulas in the "arvados-formula's test/salt/pillar/examples":https://github.com/saltstack-formulas/arvados-formula/tree/master/test/salt/pillar/examples directory. As they are, they allow you to get all the main Arvados components up and running.
+Any extra <i>state</i> file you add under <i>local_config_dir/states</i> will be added to the salt run and applied to the hosts.
-h2(#saltstack). Install Arvados using Saltstack
+h2(#installation_order). Installation order
-This is a package-based installation method. The Salt scripts are available from the "tools/salt-install":https://github.com/arvados/arvados/tree/master/tools/salt-install directory in the Arvados git repository.
+A few Arvados nodes need to be installed in a certain order. The required order is:
-The Arvados formula we maintain is located in the Saltstack's community repository of formulas:
+#. Database
+#. API server
+#. The other nodes can be installed in any order after the two above
-* "arvados-formula":https://github.com/saltstack-formulas/arvados-formula.git
+h2(#run_provision_script). Run the provision.sh script
-The @development@ version lives in our own repository
+When you have finished customizing the configuration, you are ready to copy the files to the hosts and run the @provision.sh@ script. The script allows you to specify the <i>role(s)</i> a node will have, and it will install only the Arvados components required for those roles. The general format of the command is:
-* "arvados-formula development":https://github.com/arvados/arvados-formula.git
+<notextile>
+<pre><code>scp -r provision.sh local* user@host:
+ssh user@host sudo ./provision.sh --roles comma,separated,list,of,roles,to,apply
+</code></pre>
+</notextile>
-This last one might break from time to time, as we try and add new features. Use with caution.
+and wait for it to finish.
-As much as possible, we try to keep it up to date, with example pillars to help you deploy Arvados.
+If everything goes OK, you'll get some final lines stating something like:
-For those familiar with Saltstack, the process to get it deployed is similar to any other formula:
+<notextile>
+<pre><code>arvados: Succeeded: 109 (changed=9)
+arvados: Failed: 0
+</code></pre>
+</notextile>
-1. Fork/copy the formula to your Salt master host.
-2. Edit the Arvados, nginx, postgres, locale and docker pillars to match your desired configuration.
-3. Run a @state.apply@ to get it deployed.
+The distribution of roles described above can be applied by running these commands:
-h2(#final_steps). DNS configuration
+#. Database
+<notextile>
+<pre><code>scp -r provision.sh local* user@host:
+ssh user@host sudo ./provision.sh --config local.params --roles database
+</code></pre>
+</notextile>
-After the setup is done, you need to set up your DNS to be able to access the cluster's nodes.
+#. API
+<notextile>
+<pre><code>scp -r provision.sh local* user@host:
+ssh user@host sudo ./provision.sh --config local.params --roles api,controller,websocket,dispatcher
+</code></pre>
+</notextile>
-The simplest way to do this is to add entries in the @/etc/hosts@ file of every host:
+#. Keepstore/s
+<notextile>
+<pre><code>scp -r provision.sh local* user@host:
+ssh user@host sudo ./provision.sh --config local.params --roles keepstore
+</code></pre>
+</notextile>
+#. Workbench
<notextile>
-<pre><code>export CLUSTER="arva2"
-export DOMAIN="arv.local"
-
-echo A.B.C.a api ${CLUSTER}.${DOMAIN} api.${CLUSTER}.${DOMAIN} >> /etc/hosts
-echo A.B.C.b keep keep.${CLUSTER}.${DOMAIN} >> /etc/hosts
-echo A.B.C.c keep0 keep0.${CLUSTER}.${DOMAIN} >> /etc/hosts
-echo A.B.C.d collections collections.${CLUSTER}.${DOMAIN} >> /etc/hosts
-echo A.B.C.e download download.${CLUSTER}.${DOMAIN} >> /etc/hosts
-echo A.B.C.f ws ws.${CLUSTER}.${DOMAIN} >> /etc/hosts
-echo A.B.C.g workbench workbench.${CLUSTER}.${DOMAIN} >> /etc/hosts
-echo A.B.C.h workbench2 workbench2.${CLUSTER}.${DOMAIN}" >> /etc/hosts
+<pre><code>scp -r provision.sh local* user@host:
+ssh user@host sudo ./provision.sh --config local.params --roles workbench,workbench2,webshell
</code></pre>
</notextile>
-Replacing in each case de @A.B.C.x@ IP with the corresponding IP of the node.
+#. Keepproxy / Keepweb
+<notextile>
+<pre><code>scp -r provision.sh local* user@host:
+ssh user@host sudo ./provision.sh --config local.params --roles keepproxy,keepweb
+</code></pre>
+</notextile>
-If your infrastructure uses another DNS service setup, add the corresponding entries accordingly.
+#. Shell (here we copy the CLI test workflow too)
+<notextile>
+<pre><code>scp -r provision.sh local* tests user@host:
+ssh user@host sudo ./provision.sh --config local.params --roles shell
+</code></pre>
+</notextile>
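The sequence of per-role commands above can also be driven from a small script. The following sketch only prints the commands in the required order (the host names are placeholders; adapt them and remove the file redirection to turn it into a real driver):

```shell
# Dry run: print the provision commands in the required installation order.
# Host names are placeholders; adapt them to your own machines.
set -e
PLAN=/tmp/provision_plan.txt
: > "$PLAN"
while read -r host roles; do
  echo "scp -r provision.sh local* user@${host}:" >> "$PLAN"
  echo "ssh user@${host} sudo ./provision.sh --config local.params --roles ${roles}" >> "$PLAN"
done <<'EOF'
db.example.com database
api.example.com api,controller,websocket,dispatcher
keep0.example.com keepstore
workbench.example.com workbench,workbench2,webshell
keepproxy.example.com keepproxy,keepweb
shell.example.com shell
EOF
cat "$PLAN"
```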
h2(#initial_user). Initial user and login
-At this point you should be able to log into the Arvados cluster.
-
-If you did not change the defaults, the initial URL will be:
+At this point you should be able to log into the Arvados cluster. The initial URL will be:
* https://workbench.arva2.arv.local
By default, the provision script creates an initial user for testing purposes. This user is configured as administrator of the newly created cluster.
-Assuming you didn't change the defaults, the initial credentials are:
+Assuming you didn't change these values in the @local.params@ file, the initial credentials are:
* User: 'admin'
* Password: 'password'
* Email: 'admin@arva2.arv.local'
+
+h2(#test_install). Test the installed cluster running a simple workflow
+
+If you followed the instructions above, the @provision.sh@ script saves a simple example test workflow in the @/tmp/cluster_tests@ directory on the @shell@ node. If you want to run it, just ssh to the node, change to that directory and run:
+
+<notextile>
+<pre><code>cd /tmp/cluster_tests
+sudo ./run-test.sh
+</code></pre>
+</notextile>
+
+It will create a test user (by default, the same one as the admin user), upload a small workflow and run it. If everything goes OK, the output should be similar to this (some output was shortened for clarity):
+
+<notextile>
+<pre><code>Creating Arvados Standard Docker Images project
+Arvados project uuid is 'arva2-j7d0g-0prd8cjlk6kfl7y'
+{
+ ...
+ "uuid":"arva2-o0j2j-n4zu4cak5iifq2a",
+ "owner_uuid":"arva2-tpzed-000000000000000",
+ ...
+}
+Uploading arvados/jobs' docker image to the project
+2.1.1: Pulling from arvados/jobs
+8559a31e96f4: Pulling fs layer
+...
+Status: Downloaded newer image for arvados/jobs:2.1.1
+docker.io/arvados/jobs:2.1.1
+2020-11-23 21:43:39 arvados.arv_put[32678] INFO: Creating new cache file at /home/vagrant/.cache/arvados/arv-put/c59256eda1829281424c80f588c7cc4d
+2020-11-23 21:43:46 arvados.arv_put[32678] INFO: Collection saved as 'Docker image arvados jobs:2.1.1 sha256:0dd50'
+arva2-4zz18-1u5pvbld7cvxuy2
+Creating initial user ('admin')
+Setting up user ('admin')
+{
+ "items":[
+ {
+ ...
+ "owner_uuid":"arva2-tpzed-000000000000000",
+ ...
+ "uuid":"arva2-o0j2j-1ownrdne0ok9iox"
+ },
+ {
+ ...
+ "owner_uuid":"arva2-tpzed-000000000000000",
+ ...
+ "uuid":"arva2-o0j2j-1zbeyhcwxc1tvb7"
+ },
+ {
+ ...
+ "email":"admin@arva2.arv.local",
+ ...
+ "owner_uuid":"arva2-tpzed-000000000000000",
+ ...
+ "username":"admin",
+ "uuid":"arva2-tpzed-3wrm93zmzpshrq2",
+ ...
+ }
+ ],
+ "kind":"arvados#HashList"
+}
+Activating user 'admin'
+{
+ ...
+ "email":"admin@arva2.arv.local",
+ ...
+ "username":"admin",
+ "uuid":"arva2-tpzed-3wrm93zmzpshrq2",
+ ...
+}
+Running test CWL workflow
+INFO /usr/bin/cwl-runner 2.1.1, arvados-python-client 2.1.1, cwltool 3.0.20200807132242
+INFO Resolved 'hasher-workflow.cwl' to 'file:///tmp/cluster_tests/hasher-workflow.cwl'
+...
+INFO Using cluster arva2 (https://arva2.arv.local:8443/)
+INFO Upload local files: "test.txt"
+INFO Uploaded to ea34d971b71d5536b4f6b7d6c69dc7f6+50 (arva2-4zz18-c8uvwqdry4r8jao)
+INFO Using collection cache size 256 MiB
+INFO [container hasher-workflow.cwl] submitted container_request arva2-xvhdp-v1bkywd58gyocwm
+INFO [container hasher-workflow.cwl] arva2-xvhdp-v1bkywd58gyocwm is Final
+INFO Overall process status is success
+INFO Final output collection d6c69a88147dde9d52a418d50ef788df+123
+{
+ "hasher_out": {
+ "basename": "hasher3.md5sum.txt",
+ "class": "File",
+ "location": "keep:d6c69a88147dde9d52a418d50ef788df+123/hasher3.md5sum.txt",
+ "size": 95
+ }
+}
+INFO Final process status is success
+</code></pre>
+</notextile>
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-# "Install Saltstack":#saltstack
# "Single host install using the provision.sh script":#single_host
-# "Final steps":#final_steps
-## "DNS configuration":#dns_configuration
-## "Install root certificate":#ca_root_certificate
+# "Choose the desired configuration":#choose_configuration
+## "Single host / single hostname":#single_host_single_hostnames
+## "Single host / multiple hostnames (Alternative configuration)":#single_host_multiple_hostnames
+## "Further customization of the installation (modifying the salt pillars and states)":#further_customization
+# "Run the provision.sh script":#run_provision_script
+# "Final configuration steps":#final_steps
+## "Install the CA root certificate (required in both alternatives)":#ca_root_certificate
+## "DNS configuration (single host / multiple hostnames)":#single_host_multiple_hostnames_dns_configuration
# "Initial user and login":#initial_user
# "Test the installed cluster running a simple workflow":#test_install
-h2(#saltstack). Install Saltstack
+h2(#single_host). Single host install using the provision.sh script
+
+<b>NOTE: The single host installation is not recommended for production use.</b>
+
+{% include 'branchname' %}
+
+This is a package-based installation method. Start with the @provision.sh@ script which is available by cloning the @{{ branchname }}@ branch from "https://git.arvados.org/arvados.git":https://git.arvados.org/arvados.git . The @provision.sh@ script and its supporting files can be found in the "arvados/tools/salt-install":https://git.arvados.org/arvados.git/tree/refs/heads/{{ branchname }}:/tools/salt-install directory in the Arvados git repository.
+
+This procedure will install all the main Arvados components to get you up and running on a single host. The whole installation procedure takes somewhere between 15 and 60 minutes, depending on the host's resources and network bandwidth. As a reference, on a virtual machine with 1 core and 1 GB RAM, it takes ~25 minutes to do the initial install.
+
+The @provision.sh@ script will help you deploy Arvados by preparing your environment to be able to run the installer, then running it. The actual installer is located at "arvados-formula":https://git.arvados.org/arvados-formula.git/tree/refs/heads/{{ branchname }} and will be cloned during the running of the @provision.sh@ script. The installer is built using "Saltstack":https://saltproject.io/ and @provision.sh@ performs the install using master-less mode.
+
+After setting up a few variables in a config file (next step), you'll be ready to run it and get Arvados deployed.
-If you already have a Saltstack environment you can skip this section.
+h2(#choose_configuration). Choose the desired configuration
-The simplest way to get Salt up and running on a node is to use the bootstrap script they provide:
+For documentation's sake, we will use the cluster name <i>arva2</i> and the domain <i>arv.local</i>. If you don't change them as required in the next steps, the installation won't work.
+Arvados' single host installation can be done in two fashions:
+
+* Using a single hostname, assigning <i>a different port (other than 443) for each user-facing service</i>: This choice is easier to set up, but users will need to know the port of each service they want to connect to.
+* Using multiple hostnames on the same IP: this setup involves a few extra steps, but each service will have a meaningful hostname, making the services easier to access later.
+
+Once you decide which of these choices you prefer, copy one of the two example configuration files and its matching configuration directory, and edit them to suit your needs.
+
+h3(#single_host_single_hostnames). Single host / single hostname
<notextile>
-<pre><code>curl -L https://bootstrap.saltstack.com -o /tmp/bootstrap_salt.sh
-sudo sh /tmp/bootstrap_salt.sh -XUdfP -x python3
+<pre><code>cp local.params.example.single_host_single_hostname local.params
+cp -r config_examples/single_host/single_hostname local_config_dir
</code></pre>
</notextile>
-For more information check "Saltstack's documentation":https://docs.saltstack.com/en/latest/topics/installation/index.html
+Edit the variables in the <i>local.params</i> file. Pay attention to the <b>*_PORT, *_TOKEN</b> and <b>*KEY</b> variables.
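Any sufficiently long random string works for the token and key values. A sketch of one way to generate such a value (illustrative only; @EXAMPLE_TOKEN@ is a placeholder, not an actual variable name from @local.params@):

```shell
# Generate a 32-character random alphanumeric string, suitable for the
# *_TOKEN and *KEY values. (Illustrative only; replace EXAMPLE_TOKEN with
# the actual variable name from local.params.)
TOKEN=$(tr -dc 'a-zA-Z0-9' </dev/urandom | head -c 32)
echo "EXAMPLE_TOKEN=${TOKEN}"
```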
-h2(#single_host). Single host install using the provision.sh script
+The <i>single_host</i> examples use self-signed SSL certificates, which are deployed using the same mechanism used to deploy custom certificates.
-This is a package-based installation method. The Salt scripts are available from the "tools/salt-install":https://github.com/arvados/arvados/tree/master/tools/salt-install directory in the Arvados git repository.
+{% include 'install_custom_certificates' %}
-Use the @provision.sh@ script to deploy Arvados, which is implemented with the @arvados-formula@ in a Saltstack master-less setup:
+If you want to use valid certificates provided by Let's Encrypt, please set the variable <i>USE_LETSENCRYPT=yes</i> and make sure that all the FQDNs that you will use for the public-facing applications (API/controller, Workbench, Keepproxy/Keepweb) are reachable.
-* edit the variables at the very beginning of the file,
-* run the script as root
-* wait for it to finish
+h3(#single_host_multiple_hostnames). Single host / multiple hostnames (Alternative configuration)
+<notextile>
+<pre><code>cp local.params.example.single_host_multiple_hostnames local.params
+cp -r config_examples/single_host/multiple_hostnames local_config_dir
+</code></pre>
+</notextile>
-This will install all the main Arvados components to get you up and running. The whole installation procedure takes somewhere between 15 to 60 minutes, depending on the host and your network bandwidth. On a virtual machine with 1 core and 1 GB RAM, it takes ~25 minutes to do the initial install.
+Edit the variables in the <i>local.params</i> file.
-If everything goes OK, you'll get some final lines stating something like:
+h3(#further_customization). Further customization of the installation (modifying the salt pillars and states)
+
+If you want or need further customization, you can edit the Saltstack pillars and states files. Pay particular attention to the <i>pillars/arvados.sls</i> file. Any extra <i>state</i> file you add under <i>local_config_dir/states</i> will be added to the salt run and applied to the host.
+
+h2(#run_provision_script). Run the provision.sh script
+
+When you have finished customizing the configuration, you are ready to copy the files to the host (if needed) and run the @provision.sh@ script:
<notextile>
-<pre><code>arvados: Succeeded: 109 (changed=9)
-arvados: Failed: 0
+<pre><code>scp -r provision.sh local* tests user@host:
+ssh user@host sudo ./provision.sh
</code></pre>
</notextile>
-h2(#final_steps). Final configuration steps
+or, if you saved @local.params@ in another directory or under some other name:
-h3(#dns_configuration). DNS configuration
+<notextile>
+<pre><code>scp -r provision.sh local* tests user@host:
+ssh user@host sudo ./provision.sh -c /path/to/your/local.params.file
+</code></pre>
+</notextile>
-After the setup is done, you need to set up your DNS to be able to access the cluster.
+and wait for it to finish.
-The simplest way to do this is to edit your @/etc/hosts@ file (as root):
+If everything goes OK, you'll get some final lines stating something like:
<notextile>
-<pre><code>export CLUSTER="arva2"
-export DOMAIN="arv.local"
-export HOST_IP="127.0.0.2" # This is valid either if installing in your computer directly
- # or in a Vagrant VM. If you're installing it on a remote host
- # just change the IP to match that of the host.
-echo "${HOST_IP} api keep keep0 collections download ws workbench workbench2 ${CLUSTER}.${DOMAIN} api.${CLUSTER}.${DOMAIN} keep.${CLUSTER}.${DOMAIN} keep0.${CLUSTER}.${DOMAIN} collections.${CLUSTER}.${DOMAIN} download.${CLUSTER}.${DOMAIN} ws.${CLUSTER}.${DOMAIN} workbench.${CLUSTER}.${DOMAIN} workbench2.${CLUSTER}.${DOMAIN}" >> /etc/hosts
+<pre><code>arvados: Succeeded: 109 (changed=9)
+arvados: Failed: 0
</code></pre>
</notextile>
-h3(#ca_root_certificate). Install root certificate
+h2(#final_steps). Final configuration steps
+
+Once the deployment has finished successfully, you'll need to perform a few extra steps on your local browser/host to access the cluster.
+
+h3(#ca_root_certificate). Install the CA root certificate (required in both alternatives)
Arvados uses SSL to encrypt communications. Its UI uses AJAX which will silently fail if the certificate is not valid or signed by an unknown Certification Authority.
</code></pre>
</notextile>
-h2(#initial_user). Initial user and login
+h3(#single_host_multiple_hostnames_dns_configuration). DNS configuration (single host / multiple hostnames)
+
+When using multiple hostnames, after the setup is done, you need to set up your DNS to be able to access the cluster.
-At this point you should be able to log into the Arvados cluster.
+If you don't have access to the domain's DNS to add the required entries, the simplest way to do it is to edit your @/etc/hosts@ file (as root):
+
+<notextile>
+<pre><code>export CLUSTER="arva2"
+export DOMAIN="arv.local"
+export HOST_IP="127.0.0.2" # This is valid either if installing in your computer directly
+ # or in a Vagrant VM. If you're installing it on a remote host
+ # just change the IP to match that of the host.
+echo "${HOST_IP} api keep keep0 collections download ws workbench workbench2 ${CLUSTER}.${DOMAIN} api.${CLUSTER}.${DOMAIN} keep.${CLUSTER}.${DOMAIN} keep0.${CLUSTER}.${DOMAIN} collections.${CLUSTER}.${DOMAIN} download.${CLUSTER}.${DOMAIN} ws.${CLUSTER}.${DOMAIN} workbench.${CLUSTER}.${DOMAIN} workbench2.${CLUSTER}.${DOMAIN}" >> /etc/hosts
+</code></pre>
+</notextile>
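Typing that long @/etc/hosts@ line by hand is error-prone. An equivalent sketch that builds the same line from a list of service names (it writes to a scratch file here; on the real host you would append to @/etc/hosts@ as root):

```shell
# Build the /etc/hosts line from a list instead of typing it by hand.
# This writes to a scratch file; on the real host, append to /etc/hosts.
set -e
CLUSTER="arva2"
DOMAIN="arv.local"
HOST_IP="127.0.0.2"

line="${HOST_IP} ${CLUSTER}.${DOMAIN}"
for name in api keep keep0 collections download ws workbench workbench2; do
  line="${line} ${name} ${name}.${CLUSTER}.${DOMAIN}"
done

: > /tmp/hosts.demo
echo "${line}" >> /tmp/hosts.demo
cat /tmp/hosts.demo
```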
+
+h2(#initial_user). Initial user and login
-If you changed nothing in the @provision.sh@ script, the initial URL will be:
+At this point you should be able to log into the Arvados cluster. The initial URL will be:
* https://workbench.arva2.arv.local
By default, the provision script creates an initial user for testing purposes. This user is configured as administrator of the newly created cluster.
-Assuming you didn't change these values in the @provision.sh@ script, the initial credentials are:
+Assuming you didn't change these values in the @local.params@ file, the initial credentials are:
* User: 'admin'
* Password: 'password'
h2(#test_install). Test the installed cluster running a simple workflow
-The @provision.sh@ script saves a simple example test workflow in the @/tmp/cluster_tests@. If you want to run it, just change to that directory and run:
+The @provision.sh@ script saves a simple example test workflow in the @/tmp/cluster_tests@ directory on the node. If you want to run it, just ssh to the node, change to that directory and run:
<notextile>
<pre><code>cd /tmp/cluster_tests
-./run-test.sh
+sudo ./run-test.sh
</code></pre>
</notextile>
-It will create a test user, upload a small workflow and run it. If everything goes OK, the output should similar to this (some output was shortened for clarity):
+It will create a test user (by default, the same one as the admin user), upload a small workflow and run it. If everything goes OK, the output should be similar to this (some output was shortened for clarity):
<notextile>
<pre><code>Creating Arvados Standard Docker Images project
h2(#vagrant). Vagrant
-This is a package-based installation method. The Salt scripts are available from the "tools/salt-install":https://github.com/arvados/arvados/tree/master/tools/salt-install directory in the Arvados git repository.
+This is a package-based installation method. The Salt scripts are available from the "tools/salt-install":https://github.com/arvados/arvados/tree/main/tools/salt-install directory in the Arvados git repository.
A @Vagrantfile@ is provided to install Arvados in a virtual machine on your computer using "Vagrant":https://www.vagrantup.com/.
---
layout: default
navsection: installguide
-title: Salt prerequisites
+title: Planning and prerequisites
...
{% comment %}
Copyright (C) The Arvados Authors. All rights reserved.
{% endcomment %}
# "Introduction":#introduction
-# "Choose an installation method":#installmethod
+# "Provisioning Arvados with Saltstack":#provisioning_arvados
+# "The provisioning tool files and directories":#provisioning_tool_files and directories
+# "Choose an Arvados installation configuration":#choose_configuration
+## "Further customization of the installation (modifying the salt pillars and states)":#further_customization
+# "Dump the configuration files created with the provision script":#dump_provision_config
+# "Add the Arvados formula to your Saltstack infrastructure":#add_formula_to_saltstack
h2(#introduction). Introduction
-To ease the installation of the various Arvados components, we have developed a "Saltstack":https://www.saltstack.com/ 's "arvados-formula":https://github.com/saltstack-formulas/arvados-formula which can help you get an Arvados cluster up and running.
+{% include 'branchname' %}
-Saltstack is a Python-based, open-source software for event-driven IT automation, remote task execution, and configuration management. It can be used in a master/minion setup or master-less.
+To ease the installation of the various Arvados components, we have developed a "Saltstack":https://www.saltstack.com/ "arvados-formula":https://git.arvados.org/arvados-formula.git which can help you get an Arvados cluster up and running.
-This is a package-based installation method. The Salt scripts are available from the "tools/salt-install":https://github.com/arvados/arvados/tree/master/tools/salt-install directory in the Arvados git repository.
+Saltstack is Python-based, open-source software for event-driven IT automation, remote task execution, and configuration management. It can be used in a _master/minion_ setup (where a master node orchestrates and coordinates the configuration of the nodes in an infrastructure) or <i>master-less</i>, where Saltstack is run locally on a node, with no communication with a master node.
-h2(#installmethod). Choose an installation method
+Similar to other configuration management tools like Puppet, Ansible or Chef, Saltstack uses files named <i>states</i> to describe the tasks that will be performed on a node to take it to a desired state, and <i>pillars</i> to configure variables passed to the states, adding flexibility to the tool.
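For illustration, a minimal (hypothetical) state file declaring that a message-of-the-day file must exist could look like this:

```yaml
# custom_motd.sls -- a minimal, hypothetical Salt state for illustration.
# It declares a desired state: the file /etc/motd must exist with this content.
arvados_motd:
  file.managed:
    - name: /etc/motd
    - contents: This node is managed by Saltstack and runs Arvados components.
```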
-The salt formulas can be used in different ways. Choose one of these three options to install Arvados:
+You don't need to be running a Saltstack infrastructure to install Arvados: we wrote a provisioning script that will take care of setting up Saltstack on the node(s) where you want to install Arvados and running a <i>master-less installer</i>. Once Arvados is installed, you can either uninstall Saltstack and its files, or keep them to modify and maintain your Arvados installation in the future.
-* "Use Vagrant to install Arvados in a virtual machine":salt-vagrant.html
-* "Arvados on a single host":salt-single-host.html
-* "Arvados across multiple hosts":salt-multi-host.html
+This is a package-based installation method.
+
+
+
+h2(#provisioning_arvados). Provisioning Arvados with Saltstack
+
+The "tools/salt-install":https://git.arvados.org/arvados.git/tree/{{ branchname }}:/tools/salt-install directory in the Arvados git repository contains a script that you can run in the node/s where you want to install Arvados' components (the @provision.sh@ script) and a few configuration examples for different setups, that you can use to customize your installation.
+
+The @provision.sh@ script will help you deploy Arvados by preparing your environment to be able to run the installer, then running it. The actual installer is located at "arvados-formula":https://git.arvados.org/arvados-formula.git/tree/refs/heads/{{ branchname }} and will be cloned during the running of the @provision.sh@ script. The installer is built using "Saltstack":https://saltproject.io/ and @provision.sh@ performs the install using master-less mode.
+
+After setting up a few variables in a config file and copying a directory from the examples (see below), you'll be ready to run it and get Arvados deployed.
+
+
+
+h2(#provisioning_tool_files_and_directories). The provisioning tool files and directories
+
+The "tools/salt-install":https://git.arvados.org/arvados.git/tree/{{ branchname }}:/tools/salt-install directory contains the following elements:
+
+* The @provision.sh@ script itself. You don't need to modify it.
+* A few @local.params.*@ example files. You will need to copy one of these files to a file named @local.params@, which is the main configuration file for the @provision.sh@ script.
+* A few @config_examples/*@ directories, with pillars and states templates. You need to copy one of these to a @local_config_dir@ directory, which will be used by the @provision.sh@ script to set up your nodes.
+* A @tests@ directory, with a simple workflow and Arvados CLI commands you can run to test that your cluster can run a CWL workflow, upload files and create a user.
+
+Once you decide on the Arvados architecture you want to apply, you need to copy one of the example configuration files and its matching configuration directory, and edit them to suit your needs.
+
+For example, for a multiple hosts / multiple hostnames setup on AWS, you need to do this:
+<notextile>
+<pre><code>cp local.params.example.multiple_hosts local.params
+cp -r config_examples/multi_host/aws local_config_dir
+</code></pre>
+</notextile>
+
+These local files will be preserved if you upgrade the repository.
+
+
+
+h2(#choose_configuration). Choose an Arvados installation configuration
+
+The configuration examples provided with this installer are suitable for installing Arvados with the following distributions of hosts/roles:
+
+* All roles on a single host, which can be done in two fashions:
+** Using a single hostname, assigning <i>a different port (other than 443) for each user-facing service</i>: This choice is easier to set up, but users will need to know the port of each service they want to connect to. See "Single host install using the provision.sh script":salt-single-host.html for more details.
+** Using multiple hostnames on the same IP: this setup involves a few extra steps, but each service will have a meaningful hostname, making the services easier to access later. See "Single host install using the provision.sh script":salt-single-host.html for more details.
+* Roles distributed over multiple AWS instances, using multiple hostnames. This example can be adapted for on-premises use too. See "Multiple hosts installation":salt-multi-host.html for more details.
+
+Once you decide which of these choices you prefer, copy one of the example configuration files and its matching configuration directory, and edit them to suit your needs.
+
+For example, if you decide to install Arvados on a single host using multiple hostnames:
+<notextile>
+<pre><code>cp local.params.example.single_host_multiple_hostnames local.params
+cp -r config_examples/single_host/multiple_hostnames local_config_dir
+</code></pre>
+</notextile>
+
+Edit the variables in the <i>local.params</i> file.
+
+
+
+h3(#further_customization). Further customization of the installation (modifying the salt pillars and states)
+
+If you want or need further customization, you can edit the Saltstack pillars and states files. Pay particular attention to the <i>pillars/arvados.sls</i> one. Any extra <i>state</i> file you add under <i>local_config_dir/states</i> will be added to the salt run and applied to the host.
+
+
+
+h2(#dump_provision_config). Dump the configuration files created with the provision script
+
+As mentioned above, the @provision.sh@ script helps you create a set of configuration files to be used by the Saltstack @arvados-formula@ and other helper formulas.
+
+You may want to inspect these files before deploying them, or use them within your existing Saltstack environment. To get a rendered version of these files, the @provision.sh@ script has an option, @--dump-config@, which takes a directory as a mandatory parameter. When this option is used, the script will create the specified directory and write the pillars, states and tests files there so you can inspect them.
+
+For example:
+<notextile>
+<pre><code>./provision.sh --dump-config ./config_dump --roles workbench
+</code></pre>
+</notextile>
+
+will dump the configuration files used to install a workbench node under the @config_dump@ directory.
+
+These files are also suitable to be used in your existing Saltstack environment (see below).
+
+
+
+h2(#add_formula_to_saltstack). Add the Arvados formula to your Saltstack infrastructure
+
+If you already have a Saltstack environment you can add the arvados-formula to your Saltstack master and apply the corresponding states and pillars to the nodes on your infrastructure that will be used to run Arvados.
+
+The @--dump-config@ option described above writes @pillars/top.sls@ and @salt/top.sls@ files that you can use as a guide to configure your infrastructure.
Check the LDAP section in the "default config file":{{site.baseurl}}/admin/config.html for more details and configuration options.
-h2(#pam). PAM (experimental)
+h2(#pam). PAM
With this configuration, authentication is done according to the Linux PAM ("Pluggable Authentication Modules") configuration on your controller host.
Check the "default config file":{{site.baseurl}}/admin/config.html for more PAM configuration options.
-The default PAM configuration on most Linux systems uses the local password database in @/etc/shadow@ for all logins. In this case, in order to log in to Arvados, users must have a UNIX account and password on the controller host itself. This can be convenient for a single-user or test cluster. User accounts can have @/dev/false@ as the shell in order to allow the user to log into Arvados but not log into a shell on the controller host.
+The default PAM configuration on most Linux systems uses the local user/password database in @/etc/passwd@ and @/etc/shadow@ for all logins. In this case, in order to log in to Arvados, users must have a UNIX account and password on the controller host itself. This can be convenient for a single-user or test cluster. Configuring a user account with a shell of @/bin/false@ will enable the user to log in to Arvados but not to a shell session on the controller host.
-PAM can also be configured to use different backends like LDAP. In a production environment, PAM configuration should use the service name ("arvados" by default) to set a separate policy for Arvados logins: generally, Arvados users should not have shell accounts on the controller node.
+PAM can also be configured to use other authentication systems such as NIS or Kerberos. In a production environment, PAM configuration should use the service name ("arvados" by default) and set a separate policy for Arvados logins. In this case, Arvados users should not have shell accounts on the controller node.
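As an illustration only (a hypothetical sketch, not a drop-in file; module names and defaults vary by distribution), a dedicated policy under the @arvados@ service name might look like:

```
# /etc/pam.d/arvados -- hypothetical example of a separate Arvados policy.
# Authenticate against the local password database; no session or shell
# policy is granted here, so these users need no interactive shell access.
auth     required   pam_unix.so
account  required   pam_unix.so
```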
For information about configuring PAM, refer to the "PAM System Administrator's Guide":http://www.linux-pam.org/Linux-PAM-html/Linux-PAM_SAG.html.
<notextile>{% code example_sdk_go as go %}</notextile>
-A few more usage examples can be found in the "services/keepproxy":https://dev.arvados.org/projects/arvados/repository/revisions/master/show/services/keepproxy and "sdk/go/keepclient":https://dev.arvados.org/projects/arvados/repository/revisions/master/show/sdk/go/keepclient directories in the arvados source tree.
+A few more usage examples can be found in the "services/keepproxy":https://dev.arvados.org/projects/arvados/repository/revisions/main/show/services/keepproxy and "sdk/go/keepclient":https://dev.arvados.org/projects/arvados/repository/revisions/main/show/sdk/go/keepclient directories in the arvados source tree.
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-This section documents language bindings for the "Arvados API":{{site.baseurl}}/api and Keep that are available for various programming languages. Not all features are available in every SDK. The most complete SDK is the Python SDK. Note that this section only gives a high level overview of each SDK. Consult the "Arvados API":{{site.baseurl}}/api section for detailed documentation about Arvados API calls available on each resource.
+This section documents language bindings for the "Arvados API":{{site.baseurl}}/api/index.html and Keep that are available for various programming languages. Not all features are available in every SDK. The most complete SDK is the Python SDK. Note that this section only gives a high level overview of each SDK. Consult the "Arvados API":{{site.baseurl}}/api/index.html section for detailed documentation about Arvados API calls available on each resource.
* "Python SDK":{{site.baseurl}}/sdk/python/sdk-python.html (also includes essential command line tools such as "arv-put" and "arv-get")
* "Command line SDK":{{site.baseurl}}/sdk/cli/install.html ("arv")
* "R SDK":{{site.baseurl}}/sdk/R/index.html
* "Ruby SDK":{{site.baseurl}}/sdk/ruby/index.html
* "Java SDK v2":{{site.baseurl}}/sdk/java-v2/index.html
-* "Java SDK v1":{{site.baseurl}}/sdk/java/index.html
* "Perl SDK":{{site.baseurl}}/sdk/perl/index.html
Many Arvados Workbench pages, under the *Advanced* tab, provide examples of API and SDK use for accessing the current resource.
}
dependencies {
- api 'org.arvados:arvados-java-sdk:0.1.0'
+ api 'org.arvados:arvados-java-sdk:0.1.1'
}
</pre>
$ <code class="userinput">git clone https://github.com/arvados/arvados.git</code>
$ <code class="userinput">cd arvados/sdk/java-v2</code>
$ <code class="userinput">gradle test</code>
-$ <code class="userinput">gradle jar</code>
+$ <code class="userinput">gradle jar -Pversion=0.1.1</code>
</pre>
-This will build the SDK and run all unit tests, then generate an Arvados Java sdk jar file in build/libs/arvados-java-2.0.0.jar
+This will build the SDK and run all unit tests, then generate an Arvados Java sdk jar file in build/libs/arvados-java-0.1.1.jar
</notextile>
+++ /dev/null
----
-layout: default
-navsection: sdk
-navmenu: Java
-title: "Examples"
-...
-{% comment %}
-Copyright (C) The Arvados Authors. All rights reserved.
-
-SPDX-License-Identifier: CC-BY-SA-3.0
-{% endcomment %}
-
-h2. Initialize SDK
-
-{% codeblock as java %}
-import org.arvados.sdk.Arvados;
-{% endcodeblock %}
-
-{% codeblock as java %}
- String apiName = "arvados";
- String apiVersion = "v1";
-
- Arvados arv = new Arvados(apiName, apiVersion);
-{% endcodeblock %}
-
-h2. create
-
-{% codeblock as java %}
- Map<String, String> collection = new HashMap<String, String>();
- collection.put("name", "create example");
-
- Map<String, Object> params = new HashMap<String, Object>();
- params.put("collection", collection);
- Map response = arv.call("collections", "create", params);
-{% endcodeblock %}
-
-h2. delete
-
-{% codeblock as java %}
- Map<String, Object> params = new HashMap<String, Object>();
- params.put("uuid", uuid);
- Map response = arv.call("collections", "delete", params);
-{% endcodeblock %}
-
-h2. get
-
-{% codeblock as java %}
- params = new HashMap<String, Object>();
- params.put("uuid", userUuid);
- Map response = arv.call("users", "get", params);
-{% endcodeblock %}
-
-h2. list
-
-{% codeblock as java %}
- Map<String, Object> params = new HashMap<String, Object>();
- Map response = arv.call("users", "list", params);
-
- // get uuid of the first user from the response
- List items = (List)response.get("items");
-
- Map firstUser = (Map)items.get(0);
- String userUuid = (String)firstUser.get("uuid");
-{% endcodeblock %}
-
-h2. update
-
-{% codeblock as java %}
- Map<String, String> collection = new HashMap<String, String>();
- collection.put("name", "update example");
-
- Map<String, Object> params = new HashMap<String, Object>();
- params.put("uuid", uuid);
- params.put("collection", collection);
- Map response = arv.call("collections", "update", params);
-{% endcodeblock %}
-
-h2. Get current user
-
-{% codeblock as java %}
- Map<String, Object> params = new HashMap<String, Object>();
- Map response = arv.call("users", "current", params);
-{% endcodeblock %}
+++ /dev/null
----
-layout: default
-navsection: sdk
-navmenu: Java SDK v1
-title: "Installation"
-...
-{% comment %}
-Copyright (C) The Arvados Authors. All rights reserved.
-
-SPDX-License-Identifier: CC-BY-SA-3.0
-{% endcomment %}
-
-The Java SDK v1 provides a low level API to call Arvados from Java.
-
-This is a legacy SDK. It is no longer used or maintained regularly. The "Arvados Java SDK v2":../java-v2/index.html should be used.
-
-h3. Introdution
-
-* The Java SDK requires Java 6 or later
-
-* The Java SDK is implemented as a maven project. Hence, you would need a working
-maven environment to be able to build the source code. If you do not have maven setup,
-you may find the "Maven in 5 Minutes":http://maven.apache.org/guides/getting-started/maven-in-five-minutes.html link useful.
-
-* In this document $ARVADOS_HOME is used to refer to the directory where
-arvados code is cloned in your system. For ex: $ARVADOS_HOME = $HOME/arvados
-
-
-h3. Setting up the environment
-
-* The SDK requires a running Arvados API server. The following information
- about the API server needs to be passed to the SDK using environment
- variables or during the construction of the Arvados instance.
-
-<notextile>
-<pre>
-ARVADOS_API_TOKEN: API client token to be used to authorize with API server.
-
-ARVADOS_API_HOST: Host name of the API server.
-
-ARVADOS_API_HOST_INSECURE: Set this to true if you are using self-signed
- certificates and would like to bypass certificate validations.
-</pre>
-</notextile>
-
-* Please see "api-tokens":{{site.baseurl}}/user/reference/api-tokens.html for full details.
-
-
-h3. Building the Arvados SDK
-
-<notextile>
-<pre>
-$ <code class="userinput">cd $ARVADOS_HOME/sdk/java</code>
-
-$ <code class="userinput">mvn -Dmaven.test.skip=true clean package</code>
- This will generate arvados sdk jar file in the target directory
-</pre>
-</notextile>
-
-
-h3. Implementing your code to use SDK
-
-* The following two sample programs serve as sample implementations using the SDK.
-<code class="userinput">$ARVADOS_HOME/sdk/java/ArvadosSDKJavaExample.java</code> is a simple program
- that makes a few calls to API server.
-<code class="userinput">$ARVADOS_HOME/sdk/java/ArvadosSDKJavaExampleWithPrompt.java</code> can be
- used to make calls to API server interactively.
-
-Please use these implementations to see how you would use the SDK from your java program.
-
-Also, refer to <code class="userinput">$ARVADOS_HOME/arvados/sdk/java/src/test/java/org/arvados/sdk/java/ArvadosTest.java</code>
-for more sample API invocation examples.
-
-Below are the steps to compile and run these java program.
-
-* These programs create an instance of Arvados SDK class and use it to
-make various <code class="userinput">call</code> requests.
-
-* To compile the examples
-<notextile>
-<pre>
-$ <code class="userinput">javac -cp $ARVADOS_HOME/sdk/java/target/arvados-sdk-1.1-jar-with-dependencies.jar \
-ArvadosSDKJavaExample*.java</code>
-This results in the generation of the ArvadosSDKJavaExample*.class files
-in the same directory as the java files
-</pre>
-</notextile>
-
-* To run the samples
-<notextile>
-<pre>
-$ <code class="userinput">java -cp .:$ARVADOS_HOME/sdk/java/target/arvados-sdk-1.1-jar-with-dependencies.jar \
-ArvadosSDKJavaExample</code>
-$ <code class="userinput">java -cp .:$ARVADOS_HOME/sdk/java/target/arvados-sdk-1.1-jar-with-dependencies.jar \
-ArvadosSDKJavaExampleWithPrompt</code>
-</pre>
-</notextile>
-
-
-h3. Viewing and Managing SDK logging
-
-* SDK uses log4j logging
-
-* The default location of the log file is
- <code class="userinput">$ARVADOS_HOME/sdk/java/log/arvados_sdk_java.log</code>
-
-* Update <code class="userinput">log4j.properties</code> file to change name and location of the log file.
-
-<notextile>
-<pre>
-$ <code class="userinput">nano $ARVADOS_HOME/sdk/java/src/main/resources/log4j.properties</code>
-and modify the <code class="userinput">log4j.appender.fileAppender.File</code> property as needed.
-
-Rebuild the SDK:
-$ <code class="userinput">mvn -Dmaven.test.skip=true clean package</code>
-</pre>
-</notextile>
-
-
-h3. Using the SDK in eclipse
-
-* To develop in eclipse, you can use the provided <code class="userinput">eclipse project</code>
-
-* Install "m2eclipse":https://www.eclipse.org/m2e/ plugin in your eclipse
-
-* Set <code class="userinput">M2_REPO</code> classpath variable in eclipse to point to your local repository.
-The local repository is usually located in your home directory at <code class="userinput">$HOME/.m2/repository</code>.
-
-<notextile>
-<pre>
-In Eclipse IDE:
-Window -> Preferences -> Java -> Build Path -> Classpath Variables
- Click on the "New..." button and add a new
- M2_REPO variable and set it to your local Maven repository
-</pre>
-</notextile>
-
-
-* Open the SDK project in eclipse
-<notextile>
-<pre>
-In Eclipse IDE:
-File -> Import -> Existing Projects into Workspace -> Next -> Browse
- and select $ARVADOS_HOME/sdk/java
-</pre>
-</notextile>
{% codeblock as python %}
import arvados.collection
-source_collection = "x1u39-4zz18-krzg64ufvehgitl"
-target_project = "x1u39-j7d0g-67q94einb8ptznm"
+source_collection = "zzzzz-4zz18-zzzzzzzzzzzzzzz"
+target_project = "zzzzz-j7d0g-zzzzzzzzzzzzzzz"
target_name = "Files copied from source_collection"
files_to_copy = ["folder1/sample1/sample1_R1.fastq",
"folder1/sample2/sample2_R1.fastq"]
{% codeblock as python %}
import arvados.collection
-source_collection = "x1u39-4zz18-krzg64ufvehgitl"
-target_collection = "x1u39-4zz18-67q94einb8ptznm"
+source_collection = "zzzzz-4zz18-zzzzzzzzzzzzzzz"
+target_collection = "zzzzz-4zz18-aaaaaaaaaaaaaaa"
files_to_copy = ["folder1/sample1/sample1_R1.fastq",
"folder1/sample2/sample2_R1.fastq"]
target.save()
{% endcodeblock %}
+h2. Delete a file from an existing collection
+
+{% codeblock as python %}
+import arvados
+
+c = arvados.collection.Collection("zzzzz-4zz18-zzzzzzzzzzzzzzz")
+c.remove("file2.txt")
+c.save()
+{% endcodeblock %}
+
h2. Listing records with paging
Use the @arvados.util.keyset_list_all@ helper method to iterate over all the records matching an optional filter. This method handles paging internally and returns results incrementally using a Python iterator. The first parameter of the method is the @list@ method of an Arvados resource (@collections@, @container_requests@, etc).
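The paging strategy behind @keyset_list_all@ can be illustrated without a live cluster. The sketch below reimplements the core idea (order on a unique key, then filter past the last value seen rather than using offsets) against a hypothetical in-memory @fake_list@ function; it is not the SDK's actual implementation:

```python
def keyset_list_all(list_func, page_size=2):
    """Yield every record from list_func, paging on the unique 'uuid' key."""
    last = None
    while True:
        filters = []
        if last is not None:
            # Resume just past the last record seen, instead of using offsets.
            filters.append(["uuid", ">", last])
        items = list_func(filters=filters, limit=page_size, order="uuid")
        if not items:
            return
        for item in items:
            yield item
        last = items[-1]["uuid"]

# Hypothetical stand-in for an Arvados resource's list() method.
RECORDS = [{"uuid": "zzzzz-4zz18-%015d" % i} for i in range(5)]

def fake_list(filters=[], limit=0, order=None):
    rows = sorted(RECORDS, key=lambda r: r["uuid"])
    for field, op, value in filters:
        if field == "uuid" and op == ">":
            rows = [r for r in rows if r["uuid"] > value]
    return rows[:limit]

print(len(list(keyset_list_all(fake_list))))  # -> 5
```

The real helper additionally handles retries and orders on a timestamp plus @uuid@; with the SDK installed, the equivalent call is @arvados.util.keyset_list_all(api.collections().list, filters=...)@.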
To use the Python SDK elsewhere, you can install from PyPI or a distribution package.
-As of Arvados 2.1, the Python SDK requires Python 3.5+. The last version to support Python 2.7 is Arvados 2.0.4.
+As of Arvados 2.2, the Python SDK requires Python 3.6+. The last version to support Python 2.7 is Arvados 2.0.4.
h2. Option 1: Install from a distribution package
If you installed from a distribution package (option 1): the package includes a virtualenv, which means the correct Python environment needs to be loaded before the Arvados SDK can be imported. This can be done by activating the virtualenv first:
<notextile>
-<pre>~$ <code class="userinput">source /usr/share/python2.7/dist/python-arvados-python-client/bin/activate</code>
+<pre>~$ <code class="userinput">source /usr/share/python3/dist/python3-arvados-python-client/bin/activate</code>
(python-arvados-python-client) ~$ <code class="userinput">python</code>
Python 3.7.3 (default, Jul 25 2020, 13:03:44)
[GCC 8.3.0] on linux
Or alternatively, by using the Python executable from the virtualenv directly:
<notextile>
-<pre>~$ <code class="userinput">/usr/share/python2.7/dist/python-arvados-python-client/bin/python</code>
+<pre>~$ <code class="userinput">/usr/share/python3/dist/python3-arvados-python-client/bin/python</code>
Python 3.7.3 (default, Jul 25 2020, 13:03:44)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
--- /dev/null
+---
+layout: default
+navsection: userguide
+title: Analyzing workflow cost (cloud only)
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+{% include 'notebox_begin' %}
+
+This is only applicable when Arvados runs in a cloud environment and @arvados-dispatch-cloud@ is used to dispatch @crunch@ jobs. The per node-hour price for each defined InstanceType must be supplied in "config.yml":{{site.baseurl}}/admin/config.html.
+
+{% include 'notebox_end' %}
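The price comes from the @InstanceTypes@ section of the cluster configuration. A sketch of one entry follows; the instance name and dollar value here are illustrative, not recommendations:

```yaml
Clusters:
  zzzzz:
    InstanceTypes:
      m5large:
        ProviderType: m5.large
        VCPUs: 2
        RAM: 8GiB
        IncludedScratch: 32GB
        Price: 0.096    # dollars per node-hour
```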
+
+The @arvados-client@ program can be used to analyze the cost of a workflow. It can be installed from packages (@apt install arvados-client@ or @yum install arvados-client@). The @arvados-client costanalyzer@ command analyzes the cost accounting information associated with Arvados container requests.
+
+h2(#syntax). Syntax
+
+The @arvados-client costanalyzer@ tool has a number of command line arguments:
+
+<notextile>
+<pre><code>~$ <span class="userinput">arvados-client costanalyzer -h</span>
+Usage:
+ ./arvados-client costanalyzer [options ...] [UUID ...]
+
+ This program analyzes the cost of Arvados container requests and calculates
+ the total cost across all requests. At least one UUID or a timestamp range
+ must be specified.
+
+ When the '-output' option is specified, a set of CSV files with cost details
+ will be written to the provided directory. Each file is a CSV report that lists
+ all the containers used to fulfill the container request, together with the
+ machine type and cost of each container.
+
+ When supplied with the UUID of a container request, it will calculate the
+ cost of that container request and all its children.
+
+ When supplied with the UUID of a collection, it will see if there is a
+ container_request UUID in the properties of the collection, and if so, it
+ will calculate the cost of that container request and all its children.
+
+ When supplied with a project UUID or when supplied with multiple container
+ request or collection UUIDs, it will calculate the total cost for all
+ supplied UUIDs.
+
+ When supplied with a 'begin' and 'end' timestamp (format:
+ 2006-01-02T15:04:05), it will calculate the cost for all top-level container
+ requests whose containers finished during the specified interval.
+
+ The total cost calculation takes container reuse into account: if a container
+ was reused between several container requests, its cost will only be counted
+ once.
+
+ Caveats:
+
+ - This program uses the cost data from config.yml at the time of the
+ execution of the container, stored in the 'node.json' file in its log
+ collection. If the cost data was not correctly configured at the time the
+ container was executed, the output from this program will be incorrect.
+
+ - If a container was run on a preemptible ("spot") instance, the cost data
+ reported by this program may be wildly inaccurate, because it does not have
+  access to the spot pricing in effect for the node when the container ran. The
+ UUID report file that is generated when the '-output' option is specified has
+ a column that indicates the preemptible state of the instance that ran the
+ container.
+
+ - This program does not take into account overhead costs like the time spent
+ starting and stopping compute nodes that run containers, the cost of the
+ permanent cloud nodes that provide the Arvados services, the cost of data
+ stored in Arvados, etc.
+
+ - When provided with a project UUID, subprojects will not be considered.
+
+ In order to get the data for the UUIDs supplied, the ARVADOS_API_HOST and
+ ARVADOS_API_TOKEN environment variables must be set.
+
+ This program prints the total dollar amount from the aggregate cost
+ accounting across all provided UUIDs on stdout.
+
+Options:
+ -begin begin
+ timestamp begin for date range operation (format: 2006-01-02T15:04:05)
+ -cache
+ create and use a local disk cache of Arvados objects (default true)
+ -end end
+ timestamp end for date range operation (format: 2006-01-02T15:04:05)
+ -log-level level
+ logging level (debug, info, ...) (default "info")
+ -output directory
+ output directory for the CSV reports
+</code></pre>
+</notextile>
--- /dev/null
+---
+layout: default
+navsection: userguide
+title: Analyzing workflow performance
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+The @crunchstat-summary@ tool can be used to analyze workflow and container performance. It can be installed from packages (@apt install python3-crunchstat-summary@ or @yum install rh-python36-python-crunchstat-summary@). @crunchstat-summary@ analyzes the crunchstat lines from the logs of a container or workflow and generates a report in text or html format.
+
+h2(#syntax). Syntax
+
+The @crunchstat-summary@ tool has a number of command line arguments:
+
+<notextile>
+<pre><code>~$ <span class="userinput">crunchstat-summary -h</span>
+usage: crunchstat-summary [-h]
+ [--job UUID | --container UUID | --pipeline-instance UUID | --log-file LOG_FILE]
+ [--skip-child-jobs] [--format {html,text}]
+ [--threads THREADS] [--verbose]
+
+Summarize resource usage of an Arvados Crunch job
+
+optional arguments:
+ -h, --help show this help message and exit
+ --job UUID, --container-request UUID
+ Look up the specified job or container request and
+ read its log data from Keep (or from the Arvados event
+ log, if the job is still running)
+  --container UUID     [Deprecated] Look up the specified container, find its
+ container request and read its log data from Keep (or
+ from the Arvados event log, if the job is still
+ running)
+ --pipeline-instance UUID
+ [Deprecated] Summarize each component of the given
+ pipeline instance (historical pre-1.4)
+ --log-file LOG_FILE Read log data from a regular file
+ --skip-child-jobs Do not include stats from child jobs/containers
+ --format {html,text} Report format
+ --threads THREADS Maximum worker threads to run
+ --verbose, -v Log more information (once for progress, twice for
+ debug)
+</code></pre>
+</notextile>
+
+h2(#examples). Examples
+
+@crunchstat-summary@ prints to stdout. The html report, in particular, should be redirected to a file and then loaded in a browser.
+
+An example text report for a single workflow step:
+
+<notextile>
+<pre><code>~$ <span class="userinput">crunchstat-summary --container-request pirca-xvhdp-rs0ef250emtmbj8 --format text</span>
+category metric task_max task_max_rate job_total
+blkio:0:0 read 63067755822 53687091.20 63067755822
+blkio:0:0 write 64484253320 16376234.80 64484253320
+cpu cpus 16 - -
+cpu sys 2147.29 0.60 2147.29
+cpu user 549046.22 15.99 549046.22
+cpu user+sys 551193.51 16.00 551193.51
+fuseop:create count 1 0.10 1
+fuseop:create time 0.01 0.00 0.01
+fuseop:destroy count 0 0 0
+fuseop:destroy time 0 0 0.00
+fuseop:flush count 12 0.70 12
+fuseop:flush time 0.00 0.00 0.00
+fuseop:forget count 0 0 0
+fuseop:forget time 0 0 0.00
+fuseop:getattr count 40 2.70 40
+fuseop:getattr time 0.00 0.00 0.00
+fuseop:lookup count 36 2.90 36
+fuseop:lookup time 0.67 0.07 0.67
+fuseop:mkdir count 0 0 0
+fuseop:mkdir time 0 0 0.00
+fuseop:on_event count 0 0 0
+fuseop:on_event time 0 0 0.00
+fuseop:open count 9 0.30 9
+fuseop:open time 0.00 0.00 0.00
+fuseop:opendir count 0 0 0
+fuseop:opendir time 0 0 0.00
+fuseop:read count 481185 409.60 481185
+fuseop:read time 370.11 2.14 370.11
+fuseop:readdir count 0 0 0
+fuseop:readdir time 0 0 0.00
+fuseop:release count 7 0.30 7
+fuseop:release time 0.00 0.00 0.00
+fuseop:rename count 0 0 0
+fuseop:rename time 0 0 0.00
+fuseop:rmdir count 0 0 0
+fuseop:rmdir time 0 0 0.00
+fuseop:setattr count 0 0 0
+fuseop:setattr time 0 0 0.00
+fuseop:statfs count 0 0 0
+fuseop:statfs time 0 0 0.00
+fuseop:unlink count 0 0 0
+fuseop:unlink time 0 0 0.00
+fuseop:write count 5414406 1123.00 5414406
+fuseop:write time 475.04 0.11 475.04
+fuseops read 481185 409.60 481185
+fuseops write 5414406 1123.00 5414406
+keepcache hit 961402 819.20 961402
+keepcache miss 946 0.90 946
+keepcalls get 962348 820.00 962348
+keepcalls put 961 0.30 961
+mem cache 22748987392 - -
+mem pgmajfault 0 - 0
+mem rss 27185491968 - -
+net:docker0 rx 0 - 0
+net:docker0 tx 0 - 0
+net:docker0 tx+rx 0 - 0
+net:ens5 rx 1100398604 - 1100398604
+net:ens5 tx 1445464 - 1445464
+net:ens5 tx+rx 1101844068 - 1101844068
+net:keep0 rx 63086467386 53687091.20 63086467386
+net:keep0 tx 64482237590 20131128.60 64482237590
+net:keep0 tx+rx 127568704976 53687091.20 127568704976
+statfs available 398721179648 - 398721179648
+statfs total 400289181696 - 400289181696
+statfs used 1568198656 0 1568002048
+time elapsed 34820 - 34820
+# Number of tasks: 1
+# Max CPU time spent by a single task: 551193.51s
+# Max CPU usage in a single interval: 1599.52%
+# Overall CPU usage: 1582.98%
+# Max memory used by a single task: 27.19GB
+# Max network traffic in a single task: 127.57GB
+# Max network speed in a single interval: 53.69MB/s
+# Keep cache miss rate 0.10%
+# Keep cache utilization 99.97%
+# Temp disk utilization 0.39%
+#!! bwamem-samtools-view max RSS was 25927 MiB -- try reducing runtime_constraints to "ram":27541477785
+#!! bwamem-samtools-view max temp disk utilization was 0% of 381746 MiB -- consider reducing "tmpdirMin" and/or "outdirMin"
+</code></pre>
+</notextile>
+
+When @crunchstat-summary@ is given the UUID of a container or container request for a top-level workflow runner container, it will generate a report for the whole workflow. If the workflow is big, it can take a long time to generate the report.
+
+The equivalent html report can be generated as follows:
+
+<notextile>
+<pre><code>~$ <span class="userinput">crunchstat-summary --container-request pirca-xvhdp-rs0ef250emtmbj8 --format html > report.html</span>
+</code></pre>
+</notextile>
+
+When loaded in a browser:
+
+!(full-width)images/crunchstat-summary-html.png!
{% codeblock as yaml %}
hints:
arv:RunInSingleContainer: {}
+
arv:RuntimeConstraints:
keep_cache: 123456
outputDirType: keep_output_dir
+
arv:PartitionRequirement:
partition: dev_partition
+
arv:APIRequirement: {}
- cwltool:LoadListingRequirement:
- loadListing: shallow_listing
+
arv:IntermediateOutput:
outputTTL: 3600
- arv:ReuseRequirement:
- enableReuse: false
+
cwltool:Secrets:
secrets: [input1, input2]
- cwltool:TimeLimit:
- timelimit: 14400
+
arv:WorkflowRunnerResources:
ramMin: 2048
coresMin: 2
keep_cache: 512
+
arv:ClusterTarget:
cluster_id: clsr1
project_uuid: clsr1-j7d0g-qxc4jcji7n4lafx
+
+ arv:OutputStorageClass:
+ intermediateStorageClass: fast_storage
+ finalStorageClass: robust_storage
+
+ arv:ProcessProperties:
+ processProperties:
+ property1: value1
+ property2: $(inputs.value2)
{% endcodeblock %}
h2(#RunInSingleContainer). arv:RunInSingleContainer
table(table table-bordered table-condensed).
|_. Field |_. Type |_. Description |
|outputTTL|int|If the value is greater than zero, consider intermediate output collections to be temporary and should be automatically trashed. Temporary collections will be trashed @outputTTL@ seconds after creation. A value of zero means intermediate output should be retained indefinitely (this is the default behavior).
-Note: arvados-cwl-runner currently does not take workflow dependencies into account when setting the TTL on an intermediate output collection. If the TTL is too short, it is possible for a collection to be trashed before downstream steps that consume it are started. The recommended minimum value for TTL is the expected duration of the entire the workflow.|
+Note: arvados-cwl-runner currently does not take workflow dependencies into account when setting the TTL on an intermediate output collection. If the TTL is too short, it is possible for a collection to be trashed before downstream steps that consume it are started. The recommended minimum value for TTL is the expected duration of the entire workflow.|
h2. cwltool:Secrets
|cluster_id|string|The five-character alphanumeric cluster id (uuid prefix) where a container or subworkflow will execute. May be an expression.|
|project_uuid|string|The uuid of the project which will own container request and output of the container. May be an expression.|
+h2(#OutputStorageClass). arv:OutputStorageClass
+
+Specify the "storage class":{{site.baseurl}}/user/topics/storage-classes.html to use for intermediate and final outputs.
+
+table(table table-bordered table-condensed).
+|_. Field |_. Type |_. Description |
+|intermediateStorageClass|string or array of strings|The storage class for output of intermediate steps. For example, faster "hot" storage.|
+|finalStorageClass|string or array of strings|The storage class for the final output.|
+
+h2(#ProcessProperties). arv:ProcessProperties
+
+Specify extra "properties":{{site.baseurl}}/api/methods.html#subpropertyfilters that will be set on container requests created by the workflow. May be set on a Workflow or a CommandLineTool. Setting custom properties on a container request simplifies queries to find the workflow run later on.
+
+table(table table-bordered table-condensed).
+|_. Field |_. Type |_. Description |
+|processProperties|key-value map, or list of objects with the fields {propertyName, propertyValue}|The properties that will be set on the container request. May include expressions that reference @$(inputs)@ of the current workflow or tool.|
+
h2. arv:dockerCollectionPDH
This is an optional extension field appearing on the standard @DockerRequirement@. It specifies the portable data hash of the Arvados collection containing the Docker image. If present, it takes precedence over @dockerPull@ or @dockerImageId@.
The following extensions are deprecated because equivalent features are part of the CWL v1.1 standard.
+{% codeblock as yaml %}
+hints:
+ cwltool:LoadListingRequirement:
+ loadListing: shallow_listing
+ arv:ReuseRequirement:
+ enableReuse: false
+ cwltool:TimeLimit:
+ timelimit: 14400
+{% endcodeblock %}
+
h2. cwltool:LoadListingRequirement
For CWL v1.1 scripts, this is deprecated in favor of "loadListing":https://www.commonwl.org/v1.1/CommandLineTool.html#CommandInputParameter or "LoadListingRequirement":https://www.commonwl.org/v1.1/CommandLineTool.html#LoadListingRequirement
|==--no-wait==| Submit workflow runner and exit.|
|==--log-timestamps==| Prefix logging lines with timestamp|
|==--no-log-timestamps==| No timestamp on logging lines|
-|==--api== {containers}|Select work submission API. Only supports 'containers'|
|==--compute-checksum==| Compute checksum of contents while collecting outputs|
|==--submit-runner-ram== SUBMIT_RUNNER_RAM|RAM (in MiB) required for the workflow runner (default 1024)|
|==--submit-runner-image== SUBMIT_RUNNER_IMAGE|Docker image for workflow runner|
|==--always-submit-runner==|When invoked with --submit --wait, always submit a runner to manage the workflow, even when only running a single CommandLineTool|
-|==--submit-request-uuid== UUID|Update and commit to supplied container request instead of creating a new one (containers API only).|
-|==--submit-runner-cluster== CLUSTER_ID|Submit workflow runner to a remote cluster (containers API only)|
+|==--submit-request-uuid== UUID|Update and commit to supplied container request instead of creating a new one.|
+|==--submit-runner-cluster== CLUSTER_ID|Submit workflow runner to a remote cluster|
|==--name== NAME|Name to use for workflow execution instance.|
|==--on-error== {stop,continue}|Desired workflow behavior when a step fails. One of 'stop' (do not submit any more steps) or 'continue' (may submit other steps that are not downstream from the error). Default is 'continue'.|
|==--enable-dev==|Enable loading and running development versions of CWL spec.|
-|==--storage-classes== STORAGE_CLASSES|Specify comma separated list of storage classes to be used when saving workflow output to Keep.|
+|==--storage-classes== STORAGE_CLASSES|Specify comma separated list of storage classes to be used when saving the final workflow output to Keep.|
+|==--intermediate-storage-classes== STORAGE_CLASSES|Specify comma separated list of storage classes to be used when saving intermediate workflow output to Keep.|
|==--intermediate-output-ttl== N|If N > 0, intermediate output collections will be trashed N seconds after creation. Default is 0 (don't trash).|
-|==--priority== PRIORITY|Workflow priority (range 1..1000, higher has precedence over lower, containers api only)|
+|==--priority== PRIORITY|Workflow priority (range 1..1000, higher has precedence over lower)|
|==--thread-count== THREAD_COUNT|Number of threads to use for container submit and output collection.|
|==--http-timeout== HTTP_TIMEOUT|API request timeout in seconds. Default is 300 seconds (5 minutes).|
|==--trash-intermediate==|Immediately trash intermediate outputs on workflow success.|
h2(#get-files). Get the tutorial files
-The tutorial files are located in the documentation section of the Arvados source repository, which can be found on "git.arvados.org":https://git.arvados.org/arvados.git/tree/HEAD:/doc/user/cwl/bwa-mem or "github":https://github.com/arvados/arvados/tree/master/doc/user/cwl/bwa-mem
+The tutorial files are located in the documentation section of the Arvados source repository, which can be found on "git.arvados.org":https://git.arvados.org/arvados.git/tree/HEAD:/doc/user/cwl/bwa-mem or "github":https://github.com/arvados/arvados/tree/main/doc/user/cwl/bwa-mem
<notextile>
<pre><code>~$ <span class="userinput">git clone https://git.arvados.org/arvados.git</span>
h2. Get the example files
-The tutorial files are located in the "documentation section of the Arvados source repository:":https://github.com/arvados/arvados/tree/master/doc/user/cwl/federated or "see below":#fed-example
+The tutorial files are located in the "documentation section of the Arvados source repository:":https://github.com/arvados/arvados/tree/main/doc/user/cwl/federated or "see below":#fed-example
<notextile>
<pre><code>~$ <span class="userinput">git clone https://github.com/arvados/arvados</span>
--- /dev/null
+---
+layout: default
+navsection: userguide
+title: Debugging workflows - shell access
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+{% include 'notebox_begin' %}
+
+To use this feature, your Arvados installation must be configured to allow container shell access. See "the install guide":{{site.baseurl}}/install/container-shell-access.html for more information.
+
+{% include 'notebox_end' %}
+
+The @arvados-client@ program can be used to connect to a container in a running workflow. It can be installed from packages (@apt install arvados-client@ or @yum install arvados-client@). The @arvados-client shell@ command provides an ssh connection into a running container.
+
+h2(#syntax). Syntax
+
+The @arvados-client shell@ tool has the following syntax:
+
+<notextile>
+<pre><code>~$ <span class="userinput">arvados-client shell -h</span>
+arvados-client shell: open an interactive shell on a running container.
+
+Usage: arvados-client shell [options] [username@]container-uuid [ssh-options] [remote-command [args...]]
+
+Options:
+ -detach-keys string
+ set detach key sequence, as in docker-attach(1) (default "ctrl-],ctrl-]")
+
+</code></pre>
+</notextile>
+
+The @arvados-client shell@ command calls the ssh binary on your system to make the connection. Everything after _[username@]container-uuid_ is passed through to your OpenSSH client.
+
+h2(#Examples). Examples
+
+Connect to a running container, using the container request UUID:
+
+<notextile>
+<pre><code>~$ <span class="userinput">arvados-client shell ce8i5-xvhdp-e6wnujfslyyqn4b</span>
+root@0f13dcd755fa:~#
+</code></pre>
+</notextile>
+
+The container UUID also works:
+
+<notextile>
+<pre><code>~$ <span class="userinput">arvados-client shell ce8i5-dz642-h1cl0sa62d4i430</span>
+root@0f13dcd755fa:~#
+</code></pre>
+</notextile>
+
+SSH port forwarding is supported:
+
+<notextile>
+<pre><code>~$ <span class="userinput">arvados-client shell ce8i5-dz642-h1cl0sa62d4i430 -L8888:localhost:80</span>
+root@0f13dcd755fa:~# nc -l -p 80
+</code></pre>
+</notextile>
+
+And then, connecting to port 8888 locally:
+
+<notextile>
+<pre><code>~$ <span class="userinput">echo hello | nc localhost 8888</span>
+</code></pre>
+</notextile>
+
+Which appears on the other end:
+
+<notextile>
+<pre><code>~$ <span class="userinput">arvados-client shell ce8i5-dz642-h1cl0sa62d4i430 -L8888:localhost:80</span>
+root@0f13dcd755fa:~# nc -l -p 80
+hello
+</code></pre>
+</notextile>
h2. arv-copy
-@arv-copy@ allows users to copy collections and workflows from one cluster to another. By default, @arv-copy@ will recursively go through the workflow and copy all dependencies associated with the object.
+@arv-copy@ allows users to copy collections, workflow definitions, and projects from one cluster to another.
+
+For projects, @arv-copy@ will copy all the collections and workflow definitions owned by the project, and recursively copy subprojects.
+
+For workflow definitions, @arv-copy@ will recursively go through the workflow and copy all associated dependencies (input collections and Docker images).
For example, let's copy from the <a href="https://playground.arvados.org/">Arvados playground</a>, also known as *pirca*, to *dstcl*. The names *pirca* and *dstcl* are interchangeable with any cluster ID. You can find the cluster ID in the prefix of the uuid of the object you want to copy. For example, in *zzzzz*-4zz18-tci4vn4fa95w0zx, the cluster ID is *zzzzz*.
-In order to communicate with both clusters, you must create custom configuration files for each cluster. In the Arvados Workbench, click on the dropdown menu icon <span class="fa fa-lg fa-user"></span> <span class="caret"></span> in the upper right corner of the top navigation menu to access the user settings menu, and click on the menu item *Current token*. Copy the @ARVADOS_API_HOST@ and @ARVADOS_API_TOKEN@ in both of your clusters. Then, create two configuration files in @~/.config/arvados@, one for each cluster. The names of the files must have the format of *ClusterID.conf*. Navigate to the *Current token* page on each of *pirca* and *dstcl* to get the @ARVADOS_API_HOST@ and @ARVADOS_API_TOKEN@.
+In order to communicate with both clusters, you must create custom configuration files for each cluster. The "Getting an API token":{{site.baseurl}}/user/reference/api-tokens.html page describes how to get a token and create a configuration file. However, instead of a single "settings.conf" file in @~/.config/arvados@, you need two configuration files, one for each cluster, with filenames in the format *ClusterID.conf*.
-!{display: block;margin-left: 25px;margin-right: auto;}{{ site.baseurl }}/images/api-token-host.png!
+In this example, navigate to the *Current token* page on each of *pirca* and *dstcl* to get the @ARVADOS_API_HOST@ and @ARVADOS_API_TOKEN@.
The config file consists of two lines, one for ARVADOS_API_HOST and one for ARVADOS_API_TOKEN:
The output of arv-copy displays the uuid of the collection generated in the destination cluster. By default, the output is placed in your home project in the destination cluster. If you want to place your collection in an existing project, you can specify that project with the @--project-uuid@ option followed by the project uuid.
-For example, this will copy the collection to project dstcl-j7d0g-a894213ukjhal12 in the destination cluster.
+For example, this will copy the collection to project @dstcl-j7d0g-a894213ukjhal12@ in the destination cluster.
<notextile> <pre><code>~$ <span class="userinput">arv-copy --src pirca --dst dstcl --project-uuid dstcl-j7d0g-a894213ukjhal12 jutro-4zz18-tv416l321i4r01e</span>
</code></pre>
</notextile>
+Additionally, if you need to specify which storage classes the copied data should be saved under on the destination cluster, you can use the @--storage-classes LIST@ argument, where @LIST@ is a comma-separated list of storage class names.
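+
+For example (assuming a storage class named @hot@ is configured on the destination cluster), the copied data can be assigned to it like this:
+
+<notextile>
+<pre><code>~$ <span class="userinput">arv-copy --src pirca --dst dstcl --storage-classes hot jutro-4zz18-tv416l321i4r01e</span>
+</code></pre>
+</notextile>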
+
h3. How to copy a workflow
We will use the uuid @jutro-7fd4e-mkmmq53m1ze6apx@ as an example workflow.
The name, description, and workflow definition from the original workflow will be used for the destination copy. In addition, any *collections* and *docker images* referenced in the source workflow definition will also be copied to the destination.
If you would like to copy the object without dependencies, you can use the @--no-recursive@ flag.
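+
+For example, to copy only the workflow definition above without its dependencies (using @--src@ and @--dst@ as in the earlier collection example):
+
+<notextile>
+<pre><code>~$ <span class="userinput">arv-copy --src jutro --dst pirca --no-recursive jutro-7fd4e-mkmmq53m1ze6apx</span>
+</code></pre>
+</notextile>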
+
+h3. How to copy a project
+
+We will use the uuid @jutro-j7d0g-xj19djofle3aryq@ as an example project.
+
+<notextile>
+<pre><code>~$ <span class="userinput">arv-copy --project-uuid pirca-j7d0g-lr8sq3tx3ovn68k jutro-j7d0g-xj19djofle3aryq</span>
+2021-09-08 21:29:32 arvados.arv-copy[6377] INFO:
+2021-09-08 21:29:32 arvados.arv-copy[6377] INFO: Success: created copy with uuid pirca-j7d0g-ig9gvu5piznducp
+</code></pre>
+</notextile>
+
+The name and description of the original project will be used for the destination copy. If a project with the same name already exists, collections and workflow definitions will be copied into the existing project.
+
+If you would like to copy the project but not its subprojects, you can use the @--no-recursive@ flag.
---
layout: default
navsection: userguide
-title: "Working with Docker images"
+title: "Working with container images"
...
{% comment %}
Copyright (C) The Arvados Authors. All rights reserved.
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-This page describes how to set up the runtime environment (e.g., the programs, libraries, and other dependencies needed to run a job) that a workflow step will be run in using "Docker.":https://www.docker.com/ Docker is a tool for building and running containers that isolate applications from other applications running on the same node. For detailed information about Docker, see the "Docker User Guide.":https://docs.docker.com/userguide/
+This page describes how to set up the runtime environment (e.g., the programs, libraries, and other dependencies needed to run a job) that a workflow step will be run in using "Docker":https://www.docker.com/ or "Singularity":https://sylabs.io/singularity/. Docker and Singularity are tools for building and running containers that isolate applications from other applications running on the same node. For detailed information, see the "Docker User Guide":https://docs.docker.com/userguide/ and the "Introduction to Singularity":https://sylabs.io/guides/3.5/user-guide/introduction.html.
+
+Note that Arvados always works with Docker images, even when it is configured to use Singularity to run containers. There are some differences between the two runtimes that can affect your containers. See the "Singularity architecture":{{site.baseurl}}/architecture/singularity.html page for details.
This page describes:
{% include 'tutorial_expectations_workstation' %}
-You also need ensure that "Docker is installed,":https://docs.docker.com/installation/ the Docker daemon is running, and you have permission to access Docker. You can test this by running @docker version@. If you receive a permission denied error, your user account may need to be added to the @docker@ group. If you have root access, you can add yourself to the @docker@ group using @$ sudo addgroup $USER docker@ then log out and log back in again; otherwise consult your local sysadmin.
+You also need to ensure that "Docker is installed,":https://docs.docker.com/installation/ the Docker daemon is running, and you have permission to access Docker. You can test this by running @docker version@. If you receive a permission denied error, your user account may need to be added to the @docker@ group. If you have root access, you can add yourself to the @docker@ group using @$ sudo addgroup $USER docker@ then log out and log back in again; otherwise consult your local sysadmin.
h2(#create). Create a custom image using a Dockerfile
Users can be identified by their email address or username: the tool will check whether every user exists on the system, and report back any that are not found. Groups, on the other hand, are identified by their name.
-Permission level can be one of the following: @can_read@, @can_write@ or @can_manage@, giving the group member read, read/write or managing privileges on the group. For backwards compatibility purposes, if any record omits the third (permission) field, it will default to @can_write@ permission. You can read more about permissions on the "group management admin guide":/admin/group-management.html.
+Permission level can be one of the following: @can_read@, @can_write@ or @can_manage@, giving the group member read, read/write or managing privileges on the group. For backwards compatibility purposes, if any record omits the third (permission) field, it will default to @can_write@ permission. You can read more about permissions on the "group management admin guide":{{ site.baseurl }}/admin/group-management.html.
This tool is designed to be run periodically, reading a file created by a remote auth system (e.g., LDAP) dump script and applying what's included in the file as the source of truth.
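+
+As an illustration, assuming a CSV-formatted input file with fields for user identifier, group name, and (optionally) permission level, records might look like this; the second record omits the permission field, so it defaults to @can_write@:
+
+<notextile>
+<pre><code>user1@example.com,Researchers,can_read
+user2@example.com,Researchers
+</code></pre>
+</notextile>
+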
table(table table-bordered table-condensed).
|_. Option |_. Description |
-|==--help==| This list of options|
-|==--parent-group-uuid==| UUID of group to own all the externally synchronized groups|
-|==--user-id== | Identifier to use in looking up user. One of 'email' or 'username' (Default: 'email')|
-|==--verbose==| Log informational messages (Default: False)|
-|==--version==| Print version and exit|
+|==--help==|This list of options|
+|==--case-insensitive==|Uses case-insensitive username matching|
+|==--parent-group-uuid==|UUID of group to own all the externally synchronized groups|
+|==--user-id==|Identifier to use in looking up user. One of 'email' or 'username' (Default: 'email')|
+|==--verbose==|Log informational messages (Default: False)|
+|==--version==|Print version and exit|
h2. Examples
One is by "configuring (system-wide) the collection's idle time":{{site.baseurl}}/admin/collection-versioning.html. This idle time is checked against the @modified_at@ attribute so that the version is saved when one or more of the previously enumerated attributes get updated and the @modified_at@ is at least at the configured idle time in the past. This way, a frequently updated collection won't create lots of version records that may not be useful.
-The other way to trigger a version save, is by setting @preserve_version@ to @true@ on the current version collection record: this ensures that the current state will be preserved as a version the next time it gets updated.
+The other way to trigger a version save is by setting @preserve_version@ to @true@ on the current version collection record: this ensures that the current state will be preserved as a version the next time the record gets updated. This applies both when creating a new collection and when updating a preexisting one. If @preserve_version = true@ is set on a collection's create call, the new record's state will be preserved as a snapshot on the next update.
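+
+As a sketch (using the @arv@ command line tool and a hypothetical collection uuid), @preserve_version@ can be set on an existing collection like this:
+
+<notextile>
+<pre><code>~$ <span class="userinput">arv collection update --uuid zzzzz-4zz18-xxxxxxxxxxxxxxx --collection '{"preserve_version":true}'</span>
+</code></pre>
+</notextile>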
h3. Collection's past versions behavior & limitations
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-Storage classes (alternately known as "storage tiers") allow you to control which volumes should be used to store particular collection data blocks. This can be used to implement data storage policies such as moving data to archival storage.
+Storage classes (sometimes known as "storage tiers") allow you to control which back-end storage volumes should be used to store the data blocks of a particular collection. This can be used to implement data storage policies such as assigning data collections to "fast", "robust" or "archival" storage.
-Names of storage classes are internal to the cluster and decided by the administrator. Aside from "default", Arvados currently does not define any standard storage class names.
+Names of storage classes are internal to the cluster and decided by the administrator. Aside from "default", Arvados currently does not define any standard storage class names. Consult your cluster administrator for guidance on what storage classes are available to use on your specific Arvados instance.
+
+Note that changing the storage class of an existing collection does not take effect immediately: the blocks are asynchronously copied to the new storage class and removed from the old one. The collection field @storage_classes_confirmed@ is updated to reflect when data blocks have been successfully copied.
h3. arv-put
-You may specify the desired storage class for a collection uploaded using @arv-put@:
+You may specify one or more desired storage classes for a collection uploaded using @arv-put@:
<pre>
-$ arv-put --storage-classes=hot myfile.txt
+$ arv-put --storage-classes=hot,archival myfile.txt
</pre>
-h3. arvados-cwl-runner
+h3. arv-mount
-You may also specify the desired storage class for the final output collection produced by @arvados-cwl-runner@:
+You can ask @arv-mount@ to use specific storage classes when creating new collections:
<pre>
-$ arvados-cwl-runner --storage-classes=hot myworkflow.cwl myinput.yml
+$ arv-mount --storage-classes=transient --mount-tmp=scratch keep
</pre>
-(Note: intermediate collections produced by a workflow run will have "default" storage class.)
+h3. arvados-cwl-runner
+
+You may specify the desired storage class for the intermediate and final output collections produced by @arvados-cwl-runner@ on the command line or using the "arv:OutputStorageClass hint":{{site.baseurl}}/user/cwl/cwl-extensions.html#OutputStorageClass .
+
+<pre>
+$ arvados-cwl-runner --intermediate-storage-classes=hot_storage --storage-classes=robust_storage myworkflow.cwl myinput.yml
+</pre>
h3. arv command line
h3. Storage class notes
-Collection blocks will be in the "default" storage class if not otherwise specified.
-
-Currently, a collection may only have one desired storage class.
+Collection blocks will be in the cluster's configured default storage class(es) if not otherwise specified.
Any user with write access to a collection may set any storage class on that collection.
-
-Names of storage classes are internal to the cluster and decided by the administrator. Aside from "default", Arvados currently does not define any standard storage class names.
require (
github.com/AdRoll/goamz v0.0.0-20170825154802-2731d20f46f4
github.com/Azure/azure-sdk-for-go v45.1.0+incompatible
- github.com/Azure/go-autorest v14.2.0+incompatible
- github.com/Azure/go-autorest/autorest v0.11.3
- github.com/Azure/go-autorest/autorest/azure/auth v0.5.1
+ github.com/Azure/go-autorest/autorest v0.11.22
+ github.com/Azure/go-autorest/autorest/adal v0.9.17 // indirect
+ github.com/Azure/go-autorest/autorest/azure/auth v0.5.9
+ github.com/Azure/go-autorest/autorest/azure/cli v0.4.4 // indirect
github.com/Azure/go-autorest/autorest/to v0.4.0
github.com/Azure/go-autorest/autorest/validation v0.3.0 // indirect
- github.com/Microsoft/go-winio v0.4.5 // indirect
github.com/alcortesm/tgz v0.0.0-20161220082320-9c5fe88206d7 // indirect
github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239 // indirect
github.com/arvados/cgofuse v1.2.0-arvados1
github.com/aws/aws-sdk-go v1.25.30
github.com/aws/aws-sdk-go-v2 v0.23.0
- github.com/bgentry/speakeasy v0.1.0 // indirect
github.com/bradleypeabody/godap v0.0.0-20170216002349-c249933bc092
+ github.com/containerd/containerd v1.5.8 // indirect
github.com/coreos/go-oidc v2.1.0+incompatible
- github.com/coreos/go-systemd v0.0.0-20180108085132-cc4f39464dc7
+ github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e
github.com/creack/pty v1.1.7
- github.com/dnaeon/go-vcr v1.0.1 // indirect
- github.com/docker/distribution v2.6.0-rc.1.0.20180105232752-277ed486c948+incompatible // indirect
- github.com/docker/docker v1.4.2-0.20180109013817-94b8a116fbf1
+ github.com/docker/docker v17.12.0-ce-rc1.0.20210128214336-420b1d36250f+incompatible
github.com/docker/go-connections v0.3.0 // indirect
- github.com/docker/go-units v0.3.3-0.20171221200356-d59758554a3d // indirect
github.com/dustin/go-humanize v1.0.0
github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568 // indirect
github.com/fsnotify/fsnotify v1.4.9
github.com/gliderlabs/ssh v0.2.2 // indirect
github.com/go-asn1-ber/asn1-ber v1.4.1 // indirect
github.com/go-ldap/ldap v3.0.3+incompatible
- github.com/gogo/protobuf v1.1.1
+ github.com/gogo/protobuf v1.3.2
+ github.com/golang-jwt/jwt/v4 v4.1.0 // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510
- github.com/gorilla/context v1.1.1 // indirect
- github.com/gorilla/mux v1.6.1-0.20180107155708-5bbbb5b2b572
+ github.com/gorilla/mux v1.7.2
github.com/hashicorp/golang-lru v0.5.1
- github.com/imdario/mergo v0.3.8-0.20190415133143-5ef87b449ca7
+ github.com/imdario/mergo v0.3.12
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 // indirect
github.com/jmcvetta/randutil v0.0.0-20150817122601-2bb1b664bcff
github.com/jmoiron/sqlx v1.2.0
github.com/johannesboyne/gofakes3 v0.0.0-20200716060623-6b2b4cb092cc
github.com/julienschmidt/httprouter v1.2.0
- github.com/karalabe/xgo v0.0.0-20191115072854-c5ccff8648a7 // indirect
github.com/kevinburke/ssh_config v0.0.0-20171013211458-802051befeb5 // indirect
- github.com/lib/pq v1.3.0
- github.com/marstr/guid v1.1.1-0.20170427235115-8bdf7d1a087c // indirect
+ github.com/lib/pq v1.10.2
+ github.com/morikuni/aec v1.0.0 // indirect
github.com/msteinert/pam v0.0.0-20190215180659-f29b9f28d6f9
- github.com/opencontainers/go-digest v1.0.0-rc1 // indirect
- github.com/opencontainers/image-spec v1.0.1-0.20171125024018-577479e4dc27 // indirect
github.com/pelletier/go-buffruneio v0.2.0 // indirect
github.com/pquerna/cachecontrol v0.0.0-20180517163645-1555304b9b35 // indirect
- github.com/prometheus/client_golang v1.2.1
- github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4
- github.com/prometheus/common v0.7.0
+ github.com/prometheus/client_golang v1.7.1
+ github.com/prometheus/client_model v0.2.0
+ github.com/prometheus/common v0.10.0
github.com/satori/go.uuid v1.2.1-0.20180103174451-36e9d2ebbde5 // indirect
github.com/sergi/go-diff v1.0.0 // indirect
- github.com/sirupsen/logrus v1.4.2
+ github.com/sirupsen/logrus v1.8.1
github.com/src-d/gcfg v1.3.0 // indirect
github.com/xanzy/ssh-agent v0.1.0 // indirect
- golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9
- golang.org/x/net v0.0.0-20200202094626-16171245cfb2
- golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45
- golang.org/x/sys v0.0.0-20191105231009-c1f44814a5cd
- google.golang.org/api v0.13.0
+ golang.org/x/crypto v0.0.0-20211117183948-ae814b36b871
+ golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2
+ golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d
+ golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e
+ golang.org/x/tools v0.1.7 // indirect
+ google.golang.org/api v0.20.0
gopkg.in/asn1-ber.v1 v1.0.0-20181015200546-f715ec2f112d // indirect
- gopkg.in/check.v1 v1.0.0-20161208181325-20d25e280405
- gopkg.in/square/go-jose.v2 v2.3.1
+ gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15
+ gopkg.in/square/go-jose.v2 v2.5.1
gopkg.in/src-d/go-billy.v4 v4.0.1
gopkg.in/src-d/go-git-fixtures.v3 v3.5.0 // indirect
gopkg.in/src-d/go-git.v4 v4.0.0
gopkg.in/warnings.v0 v0.1.2 // indirect
- gopkg.in/yaml.v2 v2.2.4 // indirect
rsc.io/getopt v0.0.0-20170811000552-20be20937449
)
replace github.com/AdRoll/goamz => github.com/arvados/goamz v0.0.0-20190905141525-1bba09f407ef
+
+replace gopkg.in/yaml.v2 => github.com/arvados/yaml v0.0.0-20210427145106-92a1cab0904b
+bazil.org/fuse v0.0.0-20160811212531-371fbbdaa898/go.mod h1:Xbm+BRKSBEpa4q4hTSxohYNQpsxXPbPry4JJWOB3LB8=
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
-cloud.google.com/go v0.38.0 h1:ROfEUZz+Gh5pa62DJWXSaonyu3StP6EA6lPEXPI6mCo=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
-github.com/Azure/azure-sdk-for-go v0.2.0-beta h1:wYBqYNMWr0WL2lcEZi+dlK9n+N0wJ0Pjs4BKeOnDjfQ=
-github.com/Azure/azure-sdk-for-go v19.1.0+incompatible h1:ysqLW+tqZjJWOTE74heH/pDRbr4vlN3yV+dqQYgpyxw=
-github.com/Azure/azure-sdk-for-go v19.1.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
-github.com/Azure/azure-sdk-for-go v20.2.0+incompatible h1:La3ODnagAOf5ZFUepTfVftvNTdxkq06DNpgi1l0yaM0=
-github.com/Azure/azure-sdk-for-go v20.2.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
+cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
+cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
+cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
+cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
+cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To=
+cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4=
+cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M=
+cloud.google.com/go v0.54.0 h1:3ithwDMr7/3vpAMXiH+ZQnYbuIsh+OPhUPMFC9enmn0=
+cloud.google.com/go v0.54.0/go.mod h1:1rq2OEkV3YMf6n/9ZvGWI3GWw0VoqH/1x2nd8Is/bPc=
+cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
+cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
+cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
+cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
+cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
+cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
+cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
+cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA=
+cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
+cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos=
+cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
+dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
+github.com/Azure/azure-sdk-for-go v16.2.1+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/azure-sdk-for-go v45.1.0+incompatible h1:kxtaPD8n2z5Za+9e3sKsYG2IX6PG2R6VXtgS7gAbh3A=
github.com/Azure/azure-sdk-for-go v45.1.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
-github.com/Azure/go-autorest v1.1.1 h1:4G9tVCqooRY3vDTB2bA1Z01PlSALtnUbji0AfzthUSs=
-github.com/Azure/go-autorest v10.15.2+incompatible h1:oZpnRzZie83xGV5txbT1aa/7zpCPvURGhV6ThJij2bs=
-github.com/Azure/go-autorest v10.15.2+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
+github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78 h1:w+iIsaOQNcT7OZ575w+acHgRric5iCyQh+xv+KJ4HB8=
+github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
+github.com/Azure/go-autorest v10.8.1+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest v14.2.0+incompatible h1:V5VMDjClD3GiElqLWO7mz2MxNAK/vTfRHdAubSIPRgs=
github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
-github.com/Azure/go-autorest/autorest v0.11.0/go.mod h1:JFgpikqFJ/MleTTxwepExTKnFUKKszPS8UavbQYUMuw=
-github.com/Azure/go-autorest/autorest v0.11.3 h1:fyYnmYujkIXUgv88D9/Wo2ybE4Zwd/TmQd5sSI5u2Ws=
-github.com/Azure/go-autorest/autorest v0.11.3/go.mod h1:JFgpikqFJ/MleTTxwepExTKnFUKKszPS8UavbQYUMuw=
+github.com/Azure/go-autorest/autorest v0.11.1/go.mod h1:JFgpikqFJ/MleTTxwepExTKnFUKKszPS8UavbQYUMuw=
+github.com/Azure/go-autorest/autorest v0.11.19/go.mod h1:dSiJPy22c3u0OtOKDNttNgqpNFY/GeWa7GH/Pz56QRA=
+github.com/Azure/go-autorest/autorest v0.11.22 h1:bXiQwDjrRmBQOE67bwlvUKAC1EU1yZTPQ38c+bstZws=
+github.com/Azure/go-autorest/autorest v0.11.22/go.mod h1:BAWYUWGPEtKPzjVkp0Q6an0MJcJDsoh5Z1BFAEFs4Xs=
github.com/Azure/go-autorest/autorest/adal v0.9.0/go.mod h1:/c022QCutn2P7uY+/oQWWNcK9YU+MH96NgK+jErpbcg=
-github.com/Azure/go-autorest/autorest/adal v0.9.2 h1:Aze/GQeAN1RRbGmnUJvUj+tFGBzFdIg3293/A9rbxC4=
-github.com/Azure/go-autorest/autorest/adal v0.9.2/go.mod h1:/3SMAM86bP6wC9Ev35peQDUeqFZBMH07vvUOmg4z/fE=
-github.com/Azure/go-autorest/autorest/azure/auth v0.5.1 h1:bvUhZciHydpBxBmCheUgxxbSwJy7xcfjkUsjUcqSojc=
-github.com/Azure/go-autorest/autorest/azure/auth v0.5.1/go.mod h1:ea90/jvmnAwDrSooLH4sRIehEPtG/EPUXavDh31MnA4=
-github.com/Azure/go-autorest/autorest/azure/cli v0.4.0 h1:Ml+UCrnlKD+cJmSzrZ/RDcDw86NjkRUpnFh7V5JUhzU=
-github.com/Azure/go-autorest/autorest/azure/cli v0.4.0/go.mod h1:JljT387FplPzBA31vUcvsetLKF3pec5bdAxjVU4kI2s=
+github.com/Azure/go-autorest/autorest/adal v0.9.5/go.mod h1:B7KF7jKIeC9Mct5spmyCB/A8CG/sEz1vwIRGv/bbw7A=
+github.com/Azure/go-autorest/autorest/adal v0.9.13/go.mod h1:W/MM4U6nLxnIskrw4UwWzlHfGjwUS50aOsc/I3yuU8M=
+github.com/Azure/go-autorest/autorest/adal v0.9.14/go.mod h1:W/MM4U6nLxnIskrw4UwWzlHfGjwUS50aOsc/I3yuU8M=
+github.com/Azure/go-autorest/autorest/adal v0.9.17 h1:esOPl2dhcz9P3jqBSJ8tPGEj2EqzPPT6zfyuloiogKY=
+github.com/Azure/go-autorest/autorest/adal v0.9.17/go.mod h1:XVVeme+LZwABT8K5Lc3hA4nAe8LDBVle26gTrguhhPQ=
+github.com/Azure/go-autorest/autorest/azure/auth v0.5.9 h1:Y2CgdzitFDsdMwYMzf9LIZWrrTFysqbRc7b94XVVJ78=
+github.com/Azure/go-autorest/autorest/azure/auth v0.5.9/go.mod h1:hg3/1yw0Bq87O3KvvnJoAh34/0zbP7SFizX/qN5JvjU=
+github.com/Azure/go-autorest/autorest/azure/cli v0.4.2/go.mod h1:7qkJkT+j6b+hIpzMOwPChJhTqS8VbsqqgULzMNRugoM=
+github.com/Azure/go-autorest/autorest/azure/cli v0.4.4 h1:iuooz5cZL6VRcO7DVSFYxRcouqn6bFVE/e77Wts50Zk=
+github.com/Azure/go-autorest/autorest/azure/cli v0.4.4/go.mod h1:yAQ2b6eP/CmLPnmLvxtT1ALIY3OR1oFcCqVBi8vHiTc=
github.com/Azure/go-autorest/autorest/date v0.3.0 h1:7gUk1U5M/CQbp9WoqinNzJar+8KY+LPI6wiWrP/myHw=
github.com/Azure/go-autorest/autorest/date v0.3.0/go.mod h1:BI0uouVdmngYNUzGWeSYnokU+TrmwEsOqdt8Y6sso74=
github.com/Azure/go-autorest/autorest/mocks v0.4.0/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
+github.com/Azure/go-autorest/autorest/mocks v0.4.1 h1:K0laFcLE6VLTOwNgSxaGbUcLPuGXlNkbVvq4cW4nIHk=
github.com/Azure/go-autorest/autorest/mocks v0.4.1/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
github.com/Azure/go-autorest/autorest/to v0.4.0 h1:oXVqrxakqqV1UZdSazDOPOLvOIz+XA683u8EctwboHk=
github.com/Azure/go-autorest/autorest/to v0.4.0/go.mod h1:fE8iZBn7LQR7zH/9XU2NcPR4o9jEImooCeWJcYV/zLE=
github.com/Azure/go-autorest/autorest/validation v0.3.0 h1:3I9AAI63HfcLtphd9g39ruUwRI+Ca+z/f36KHPFRUss=
github.com/Azure/go-autorest/autorest/validation v0.3.0/go.mod h1:yhLgjC0Wda5DYXl6JAsWyUe4KVNffhoDhG0zVzUMo3E=
-github.com/Azure/go-autorest/logger v0.2.0 h1:e4RVHVZKC5p6UANLJHkM4OfR1UKZPj8Wt8Pcx+3oqrE=
github.com/Azure/go-autorest/logger v0.2.0/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
+github.com/Azure/go-autorest/logger v0.2.1 h1:IG7i4p/mDa2Ce4TRyAO8IHnVhAVF3RFU+ZtXWSmf4Tg=
+github.com/Azure/go-autorest/logger v0.2.1/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
github.com/Azure/go-autorest/tracing v0.6.0 h1:TYi4+3m5t6K48TGI9AUdb+IzbnSxvnvUMfuitfgcfuo=
github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
-github.com/Microsoft/go-winio v0.4.5 h1:U2XsGR5dBg1yzwSEJoP2dE2/aAXpmad+CNG2hE9Pd5k=
-github.com/Microsoft/go-winio v0.4.5/go.mod h1:VhR8bwka0BXejwEJY73c50VrPtXAaKcyvVC4A4RozmA=
+github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
+github.com/Microsoft/go-winio v0.4.11/go.mod h1:VhR8bwka0BXejwEJY73c50VrPtXAaKcyvVC4A4RozmA=
+github.com/Microsoft/go-winio v0.4.14/go.mod h1:qXqCSQ3Xa7+6tgxaGTIe4Kpcdsi+P8jBhyzoq1bpyYA=
+github.com/Microsoft/go-winio v0.4.15-0.20190919025122-fc70bd9a86b5/go.mod h1:tTuCMEN+UleMWgg9dVx4Hu52b1bJo+59jBh3ajtinzw=
+github.com/Microsoft/go-winio v0.4.16-0.20201130162521-d1ffc52c7331/go.mod h1:XB6nPKklQyQ7GC9LdcBEcBl8PF76WugXOPRXwdLnMv0=
+github.com/Microsoft/go-winio v0.4.16/go.mod h1:XB6nPKklQyQ7GC9LdcBEcBl8PF76WugXOPRXwdLnMv0=
+github.com/Microsoft/go-winio v0.4.17-0.20210211115548-6eac466e5fa3/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
+github.com/Microsoft/go-winio v0.4.17-0.20210324224401-5516f17a5958/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
+github.com/Microsoft/go-winio v0.4.17 h1:iT12IBVClFevaf8PuVyi3UmZOVh4OqnaLxDTW2O6j3w=
+github.com/Microsoft/go-winio v0.4.17/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
+github.com/Microsoft/hcsshim v0.8.6/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
+github.com/Microsoft/hcsshim v0.8.7-0.20190325164909-8abdbb8205e4/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
+github.com/Microsoft/hcsshim v0.8.7/go.mod h1:OHd7sQqRFrYd3RmSgbgji+ctCwkbq2wbEYNSzOYtcBQ=
+github.com/Microsoft/hcsshim v0.8.9/go.mod h1:5692vkUqntj1idxauYlpoINNKeqCiG6Sg38RRsjT5y8=
+github.com/Microsoft/hcsshim v0.8.14/go.mod h1:NtVKoYxQuTLx6gEq0L96c9Ju4JbRJ4nY2ow3VK6a9Lg=
+github.com/Microsoft/hcsshim v0.8.15/go.mod h1:x38A4YbHbdxJtc0sF6oIz+RG0npwSCAvn69iY6URG00=
+github.com/Microsoft/hcsshim v0.8.16/go.mod h1:o5/SZqmR7x9JNKsW3pu+nqHm0MF8vbA+VxGOoXdC600=
+github.com/Microsoft/hcsshim v0.8.23/go.mod h1:4zegtUJth7lAvFyc6cH2gGQ5B3OFQim01nnU2M8jKDg=
+github.com/Microsoft/hcsshim/test v0.0.0-20201218223536-d3e5debf77da/go.mod h1:5hlzMzRKMLyo42nCZ9oml8AdTlq/0cvIaBv6tK1RehU=
+github.com/Microsoft/hcsshim/test v0.0.0-20210227013316-43a75bb4edd3/go.mod h1:mw7qgWloBUl75W/gVH3cQszUg1+gUITj7D6NY7ywVnY=
+github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
+github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
+github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
+github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
+github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d/go.mod h1:HI8ITrYtUY+O+ZhtlqUnD8+KwNPOyugEhfP9fdUIaEQ=
github.com/alcortesm/tgz v0.0.0-20161220082320-9c5fe88206d7 h1:uSoVVbwJiQipAclBbw+8quDsfcvFjOpI5iCf4p/cqCs=
github.com/alcortesm/tgz v0.0.0-20161220082320-9c5fe88206d7/go.mod h1:6zEj6s6u/ghQa61ZWa/C2Aw3RkjiTBOix7dkqa1VLIs=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
+github.com/alexflint/go-filemutex v0.0.0-20171022225611-72bdc8eae2ae/go.mod h1:CgnQgUtFrFz9mxFNtED3jI5tLDjKlOM+oUF/sTk6ps0=
github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239 h1:kFOfPq6dUM1hTo4JG6LR5AXSUEsOjtdm0kw0FtQtMJA=
github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239/go.mod h1:2FmKhYUyUczH0OGQWaF5ceTx0UBShxjsH6f8oGKYe2c=
+github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
github.com/arvados/cgofuse v1.2.0-arvados1 h1:4Q4vRJ4hbTCcI4gGEaa6hqwj3rqlUuzeFQkfoEA2HqE=
github.com/arvados/cgofuse v1.2.0-arvados1/go.mod h1:79WFV98hrkRHK9XPhh2IGGOwpFSjocsWubgxAs2KhRc=
github.com/arvados/goamz v0.0.0-20190905141525-1bba09f407ef h1:cl7DIRbiAYNqaVxg3CZY8qfZoBOKrj06H/x9SPGaxas=
github.com/arvados/goamz v0.0.0-20190905141525-1bba09f407ef/go.mod h1:rCtgyMmBGEbjTm37fCuBYbNL0IhztiALzo3OB9HyiOM=
+github.com/arvados/yaml v0.0.0-20210427145106-92a1cab0904b h1:hK0t0aJTTXI64lpXln2A1SripqOym+GVNTnwsLes39Y=
+github.com/arvados/yaml v0.0.0-20210427145106-92a1cab0904b/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
+github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
+github.com/aws/aws-sdk-go v1.15.11/go.mod h1:mFuSZ37Z9YOHbQEwBWztmVzqXrEkub65tZoCYDt7FT0=
github.com/aws/aws-sdk-go v1.17.4/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aws/aws-sdk-go v1.25.30 h1:I9qj6zW3mMfsg91e+GMSN/INcaX9tTFvr/l/BAHKaIY=
github.com/aws/aws-sdk-go v1.25.30/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aws/aws-sdk-go-v2 v0.23.0 h1:+E1q1LLSfHSDn/DzOtdJOX+pLZE2HiNV2yO5AjZINwM=
github.com/aws/aws-sdk-go-v2 v0.23.0/go.mod h1:2LhT7UgHOXK3UXONKI5OMgIyoQL6zTAw/jwIeX6yqzw=
+github.com/beorn7/perks v0.0.0-20160804104726-4c0e84591b9a/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bgentry/speakeasy v0.1.0 h1:ByYyxL9InA1OWqxJqqp2A5pYHUrCiAL6K3J+LKSsQkY=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
+github.com/bitly/go-simplejson v0.5.0/go.mod h1:cXHtHw4XUPsvGaxgjIAn8PhEWG9NfngEKAMDJEczWVA=
+github.com/bits-and-blooms/bitset v1.2.0/go.mod h1:gIdJ4wp64HaoK2YrL1Q5/N7Y16edYb8uY+O0FJTyyDA=
+github.com/blang/semver v3.1.0+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
+github.com/blang/semver v3.5.1+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
+github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869/go.mod h1:Ekp36dRnpXw/yCqJaO+ZrUyxD+3VXMFFr56k5XYrpB4=
github.com/boltdb/bolt v1.3.1/go.mod h1:clJnj/oiGkjum5o1McbSZDSLxVThjynRyGBgiAx27Ps=
github.com/bradleypeabody/godap v0.0.0-20170216002349-c249933bc092 h1:0Di2onNnlN5PAyWPbqlPyN45eOQ+QW/J9eqLynt4IV4=
github.com/bradleypeabody/godap v0.0.0-20170216002349-c249933bc092/go.mod h1:8IzBjZCRSnsvM6MJMG8HNNtnzMl48H22rbJL2kRUJ0Y=
-github.com/cespare/xxhash/v2 v2.1.0 h1:yTUvW7Vhb89inJ+8irsUqiWjh8iT6sQPZiQzI6ReGkA=
-github.com/cespare/xxhash/v2 v2.1.0/go.mod h1:dgIUBU3pDso/gPgZ1osOZ0iQf77oPR28Tjxl5dIMyVM=
+github.com/bshuster-repo/logrus-logstash-hook v0.4.1/go.mod h1:zsTqEiSzDgAa/8GZR7E1qaXrhYNDKBYy5/dWPTIflbk=
+github.com/buger/jsonparser v0.0.0-20180808090653-f4dd9f5a6b44/go.mod h1:bbYlZJ7hK1yFx9hf58LP0zeX7UjIGs20ufpu3evjr+s=
+github.com/bugsnag/bugsnag-go v0.0.0-20141110184014-b1d153021fcd/go.mod h1:2oa8nejYd4cQ/b0hMIopN0lCRxU0bueqREvZLWFrtK8=
+github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b/go.mod h1:obH5gd0BsqsP2LwDJ9aOkm/6J86V6lyAXCoQWGw3K50=
+github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0/go.mod h1:D/8v3kj0zr8ZAKg1AQ6crr+5VwKN5eIywRkfhyM/+dE=
+github.com/cenkalti/backoff/v4 v4.1.1/go.mod h1:scbssz8iZGpm3xbr14ovlUdkxfGXNInqkPWOWmG2CLw=
+github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
+github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
+github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
+github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+qY=
+github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
+github.com/checkpoint-restore/go-criu/v4 v4.1.0/go.mod h1:xUQBLp4RLc5zJtWY++yjOoMoB5lihDt7fai+75m+rGw=
+github.com/checkpoint-restore/go-criu/v5 v5.0.0/go.mod h1:cfwC0EG7HMUenopBsUf9d89JlCLQIfgVcNsNN0t6T2M=
+github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
+github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
+github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
+github.com/cilium/ebpf v0.0.0-20200110133405-4032b1d8aae3/go.mod h1:MA5e5Lr8slmEg9bt0VpxxWqJlO4iwu3FBdHUzV7wQVg=
+github.com/cilium/ebpf v0.0.0-20200702112145-1c8d4c9ef775/go.mod h1:7cR51M8ViRLIdUjrmSXlK9pkrsDlLHbO8jiB8X8JnOc=
+github.com/cilium/ebpf v0.2.0/go.mod h1:To2CFviqOWL/M0gIMsvSMlqe7em/l1ALkX1PyjrX2Qs=
+github.com/cilium/ebpf v0.4.0/go.mod h1:4tRaxcgiL706VnOzHOdBlY8IEAIdxINsQBcU4xJJXRs=
+github.com/cilium/ebpf v0.6.2/go.mod h1:4tRaxcgiL706VnOzHOdBlY8IEAIdxINsQBcU4xJJXRs=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
+github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
+github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:zn76sxSg3SzpJ0PPJaLDCu+Bu0Lg3sKTORVIj19EIF8=
+github.com/containerd/aufs v0.0.0-20200908144142-dab0cbea06f4/go.mod h1:nukgQABAEopAHvB6j7cnP5zJ+/3aVcE7hCYqvIwAHyE=
+github.com/containerd/aufs v0.0.0-20201003224125-76a6863f2989/go.mod h1:AkGGQs9NM2vtYHaUen+NljV0/baGCAPELGm2q9ZXpWU=
+github.com/containerd/aufs v0.0.0-20210316121734-20793ff83c97/go.mod h1:kL5kd6KM5TzQjR79jljyi4olc1Vrx6XBlcyj3gNv2PU=
+github.com/containerd/aufs v1.0.0/go.mod h1:kL5kd6KM5TzQjR79jljyi4olc1Vrx6XBlcyj3gNv2PU=
+github.com/containerd/btrfs v0.0.0-20201111183144-404b9149801e/go.mod h1:jg2QkJcsabfHugurUvvPhS3E08Oxiuh5W/g1ybB4e0E=
+github.com/containerd/btrfs v0.0.0-20210316141732-918d888fb676/go.mod h1:zMcX3qkXTAi9GI50+0HOeuV8LU2ryCE/V2vG/ZBiTss=
+github.com/containerd/btrfs v1.0.0/go.mod h1:zMcX3qkXTAi9GI50+0HOeuV8LU2ryCE/V2vG/ZBiTss=
+github.com/containerd/cgroups v0.0.0-20190717030353-c4b9ac5c7601/go.mod h1:X9rLEHIqSf/wfK8NsPqxJmeZgW4pcfzdXITDrUSJ6uI=
+github.com/containerd/cgroups v0.0.0-20190919134610-bf292b21730f/go.mod h1:OApqhQ4XNSNC13gXIwDjhOQxjWa/NxkwZXJ1EvqT0ko=
+github.com/containerd/cgroups v0.0.0-20200531161412-0dbf7f05ba59/go.mod h1:pA0z1pT8KYB3TCXK/ocprsh7MAkoW8bZVzPdih9snmM=
+github.com/containerd/cgroups v0.0.0-20200710171044-318312a37340/go.mod h1:s5q4SojHctfxANBDvMeIaIovkq29IP48TKAxnhYRxvo=
+github.com/containerd/cgroups v0.0.0-20200824123100-0b889c03f102/go.mod h1:s5q4SojHctfxANBDvMeIaIovkq29IP48TKAxnhYRxvo=
+github.com/containerd/cgroups v0.0.0-20210114181951-8a68de567b68/go.mod h1:ZJeTFisyysqgcCdecO57Dj79RfL0LNeGiFUqLYQRYLE=
+github.com/containerd/cgroups v1.0.1/go.mod h1:0SJrPIenamHDcZhEcJMNBB85rHcUsw4f25ZfBiPYRkU=
+github.com/containerd/console v0.0.0-20180822173158-c12b1e7919c1/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
+github.com/containerd/console v0.0.0-20181022165439-0650fd9eeb50/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
+github.com/containerd/console v0.0.0-20191206165004-02ecf6a7291e/go.mod h1:8Pf4gM6VEbTNRIT26AyyU7hxdQU3MvAvxVI0sc00XBE=
+github.com/containerd/console v1.0.1/go.mod h1:XUsP6YE/mKtz6bxc+I8UiKKTP04qjQL4qcS3XoQ5xkw=
+github.com/containerd/console v1.0.2/go.mod h1:ytZPjGgY2oeTkAONYafi2kSj0aYggsf8acV1PGKCbzQ=
+github.com/containerd/containerd v1.2.10/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.3.0-beta.2.0.20190828155532-0293cbd26c69/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.3.0/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.3.1-0.20191213020239-082f7e3aed57/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.3.2/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.4.0-beta.2.0.20200729163537-40b22ef07410/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.4.1/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.4.3/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.4.9/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.5.0-beta.1/go.mod h1:5HfvG1V2FsKesEGQ17k5/T7V960Tmcumvqn8Mc+pCYQ=
+github.com/containerd/containerd v1.5.0-beta.3/go.mod h1:/wr9AVtEM7x9c+n0+stptlo/uBBoBORwEx6ardVcmKU=
+github.com/containerd/containerd v1.5.0-beta.4/go.mod h1:GmdgZd2zA2GYIBZ0w09ZvgqEq8EfBp/m3lcVZIvPHhI=
+github.com/containerd/containerd v1.5.0-rc.0/go.mod h1:V/IXoMqNGgBlabz3tHD2TWDoTJseu1FGOKuoA4nNb2s=
+github.com/containerd/containerd v1.5.8 h1:NmkCC1/QxyZFBny8JogwLpOy2f+VEbO/f6bV2Mqtwuw=
+github.com/containerd/containerd v1.5.8/go.mod h1:YdFSv5bTFLpG2HIYmfqDpSYYTDX+mc5qtSuYx1YUb/s=
+github.com/containerd/continuity v0.0.0-20190426062206-aaeac12a7ffc/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
+github.com/containerd/continuity v0.0.0-20190815185530-f2a389ac0a02/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
+github.com/containerd/continuity v0.0.0-20191127005431-f65d91d395eb/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
+github.com/containerd/continuity v0.0.0-20200710164510-efbc4488d8fe/go.mod h1:cECdGN1O8G9bgKTlLhuPJimka6Xb/Gg7vYzCTNVxhvo=
+github.com/containerd/continuity v0.0.0-20201208142359-180525291bb7/go.mod h1:kR3BEg7bDFaEddKm54WSmrol1fKWDU1nKYkgrcgZT7Y=
+github.com/containerd/continuity v0.0.0-20210208174643-50096c924a4e/go.mod h1:EXlVlkqNba9rJe3j7w3Xa924itAMLgZH4UD/Q4PExuQ=
+github.com/containerd/continuity v0.1.0/go.mod h1:ICJu0PwR54nI0yPEnJ6jcS+J7CZAUXrLh8lPo2knzsM=
+github.com/containerd/fifo v0.0.0-20180307165137-3d5202aec260/go.mod h1:ODA38xgv3Kuk8dQz2ZQXpnv/UZZUHUCL7pnLehbXgQI=
+github.com/containerd/fifo v0.0.0-20190226154929-a9fb20d87448/go.mod h1:ODA38xgv3Kuk8dQz2ZQXpnv/UZZUHUCL7pnLehbXgQI=
+github.com/containerd/fifo v0.0.0-20200410184934-f15a3290365b/go.mod h1:jPQ2IAeZRCYxpS/Cm1495vGFww6ecHmMk1YJH2Q5ln0=
+github.com/containerd/fifo v0.0.0-20201026212402-0724c46b320c/go.mod h1:jPQ2IAeZRCYxpS/Cm1495vGFww6ecHmMk1YJH2Q5ln0=
+github.com/containerd/fifo v0.0.0-20210316144830-115abcc95a1d/go.mod h1:ocF/ME1SX5b1AOlWi9r677YJmCPSwwWnQ9O123vzpE4=
+github.com/containerd/fifo v1.0.0/go.mod h1:ocF/ME1SX5b1AOlWi9r677YJmCPSwwWnQ9O123vzpE4=
+github.com/containerd/go-cni v1.0.1/go.mod h1:+vUpYxKvAF72G9i1WoDOiPGRtQpqsNW/ZHtSlv++smU=
+github.com/containerd/go-cni v1.0.2/go.mod h1:nrNABBHzu0ZwCug9Ije8hL2xBCYh/pjfMb1aZGrrohk=
+github.com/containerd/go-runc v0.0.0-20180907222934-5a6d9f37cfa3/go.mod h1:IV7qH3hrUgRmyYrtgEeGWJfWbgcHL9CSRruz2Vqcph0=
+github.com/containerd/go-runc v0.0.0-20190911050354-e029b79d8cda/go.mod h1:IV7qH3hrUgRmyYrtgEeGWJfWbgcHL9CSRruz2Vqcph0=
+github.com/containerd/go-runc v0.0.0-20200220073739-7016d3ce2328/go.mod h1:PpyHrqVs8FTi9vpyHwPwiNEGaACDxT/N/pLcvMSRA9g=
+github.com/containerd/go-runc v0.0.0-20201020171139-16b287bc67d0/go.mod h1:cNU0ZbCgCQVZK4lgG3P+9tn9/PaJNmoDXPpoJhDR+Ok=
+github.com/containerd/go-runc v1.0.0/go.mod h1:cNU0ZbCgCQVZK4lgG3P+9tn9/PaJNmoDXPpoJhDR+Ok=
+github.com/containerd/imgcrypt v1.0.1/go.mod h1:mdd8cEPW7TPgNG4FpuP3sGBiQ7Yi/zak9TYCG3juvb0=
+github.com/containerd/imgcrypt v1.0.4-0.20210301171431-0ae5c75f59ba/go.mod h1:6TNsg0ctmizkrOgXRNQjAPFWpMYRWuiB6dSF4Pfa5SA=
+github.com/containerd/imgcrypt v1.1.1-0.20210312161619-7ed62a527887/go.mod h1:5AZJNI6sLHJljKuI9IHnw1pWqo/F0nGDOuR9zgTs7ow=
+github.com/containerd/imgcrypt v1.1.1/go.mod h1:xpLnwiQmEUJPvQoAapeb2SNCxz7Xr6PJrXQb0Dpc4ms=
+github.com/containerd/nri v0.0.0-20201007170849-eb1350a75164/go.mod h1:+2wGSDGFYfE5+So4M5syatU0N0f0LbWpuqyMi4/BE8c=
+github.com/containerd/nri v0.0.0-20210316161719-dbaa18c31c14/go.mod h1:lmxnXF6oMkbqs39FiCt1s0R2HSMhcLel9vNL3m4AaeY=
+github.com/containerd/nri v0.1.0/go.mod h1:lmxnXF6oMkbqs39FiCt1s0R2HSMhcLel9vNL3m4AaeY=
+github.com/containerd/ttrpc v0.0.0-20190828154514-0e0f228740de/go.mod h1:PvCDdDGpgqzQIzDW1TphrGLssLDZp2GuS+X5DkEJB8o=
+github.com/containerd/ttrpc v0.0.0-20190828172938-92c8520ef9f8/go.mod h1:PvCDdDGpgqzQIzDW1TphrGLssLDZp2GuS+X5DkEJB8o=
+github.com/containerd/ttrpc v0.0.0-20191028202541-4f1b8fe65a5c/go.mod h1:LPm1u0xBw8r8NOKoOdNMeVHSawSsltak+Ihv+etqsE8=
+github.com/containerd/ttrpc v1.0.1/go.mod h1:UAxOpgT9ziI0gJrmKvgcZivgxOp8iFPSk8httJEt98Y=
+github.com/containerd/ttrpc v1.0.2/go.mod h1:UAxOpgT9ziI0gJrmKvgcZivgxOp8iFPSk8httJEt98Y=
+github.com/containerd/ttrpc v1.1.0/go.mod h1:XX4ZTnoOId4HklF4edwc4DcqskFZuvXB1Evzy5KFQpQ=
+github.com/containerd/typeurl v0.0.0-20180627222232-a93fcdb778cd/go.mod h1:Cm3kwCdlkCfMSHURc+r6fwoGH6/F1hH3S4sg0rLFWPc=
+github.com/containerd/typeurl v0.0.0-20190911142611-5eb25027c9fd/go.mod h1:GeKYzf2pQcqv7tJ0AoCuuhtnqhva5LNU3U+OyKxxJpk=
+github.com/containerd/typeurl v1.0.1/go.mod h1:TB1hUtrpaiO88KEK56ijojHS1+NeF0izUACaJW2mdXg=
+github.com/containerd/typeurl v1.0.2/go.mod h1:9trJWW2sRlGub4wZJRTW83VtbOLS6hwcDZXTn6oPz9s=
+github.com/containerd/zfs v0.0.0-20200918131355-0a33824f23a2/go.mod h1:8IgZOBdv8fAgXddBT4dBXJPtxyRsejFIpXoklgxgEjw=
+github.com/containerd/zfs v0.0.0-20210301145711-11e8f1707f62/go.mod h1:A9zfAbMlQwE+/is6hi0Xw8ktpL+6glmqZYtevJgaB8Y=
+github.com/containerd/zfs v0.0.0-20210315114300-dde8f0fda960/go.mod h1:m+m51S1DvAP6r3FcmYCp54bQ34pyOwTieQDNRIRHsFY=
+github.com/containerd/zfs v0.0.0-20210324211415-d5c4544f0433/go.mod h1:m+m51S1DvAP6r3FcmYCp54bQ34pyOwTieQDNRIRHsFY=
+github.com/containerd/zfs v1.0.0/go.mod h1:m+m51S1DvAP6r3FcmYCp54bQ34pyOwTieQDNRIRHsFY=
+github.com/containernetworking/cni v0.7.1/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
+github.com/containernetworking/cni v0.8.0/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
+github.com/containernetworking/cni v0.8.1/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
+github.com/containernetworking/plugins v0.8.6/go.mod h1:qnw5mN19D8fIwkqW7oHHYDHVlzhJpcY6TQxn/fUyDDM=
+github.com/containernetworking/plugins v0.9.1/go.mod h1:xP/idU2ldlzN6m4p5LmGiwRDjeJr6FLK6vuiUwoH7P8=
+github.com/containers/ocicrypt v1.0.1/go.mod h1:MeJDzk1RJHv89LjsH0Sp5KTY3ZYkjXO/C+bKAeWFIrc=
+github.com/containers/ocicrypt v1.1.0/go.mod h1:b8AOe0YR67uU8OqfVNcznfFpAzu3rdgUV4GP9qXPfu4=
+github.com/containers/ocicrypt v1.1.1/go.mod h1:Dm55fwWm1YZAjYRaJ94z2mfZikIyIN4B0oB3dj3jFxY=
+github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
+github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
+github.com/coreos/go-iptables v0.4.5/go.mod h1:/mVI274lEDI2ns62jHCDnCyBF9Iwsmekav8Dbxlm1MU=
+github.com/coreos/go-iptables v0.5.0/go.mod h1:/mVI274lEDI2ns62jHCDnCyBF9Iwsmekav8Dbxlm1MU=
github.com/coreos/go-oidc v2.1.0+incompatible h1:sdJrfw8akMnCuUlaZU3tE/uYXFgfqom8DBE9so9EBsM=
github.com/coreos/go-oidc v2.1.0+incompatible/go.mod h1:CgnwVTmzoESiwO9qyAFEMiHoZ1nMCKZlZ9V6mm3/LKc=
-github.com/coreos/go-systemd v0.0.0-20180108085132-cc4f39464dc7 h1:e3u8KWFMR3irlDo1Z/tL8Hsz1MJmCLkSoX5AZRMKZkg=
-github.com/coreos/go-systemd v0.0.0-20180108085132-cc4f39464dc7/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
+github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
+github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
+github.com/coreos/go-systemd v0.0.0-20161114122254-48702e0da86b/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
+github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
+github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e h1:Wf6HqHfScWJN9/ZjdUKyjop4mf3Qdd+1TvvltAvM3m8=
+github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
+github.com/coreos/go-systemd/v22 v22.0.0/go.mod h1:xO0FLkIi5MaZafQlIrOotqXZ90ih+1atmu1JpKERPPk=
+github.com/coreos/go-systemd/v22 v22.1.0/go.mod h1:xO0FLkIi5MaZafQlIrOotqXZ90ih+1atmu1JpKERPPk=
+github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
+github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
+github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
+github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
+github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/creack/pty v1.1.7 h1:6pwm8kMQKCmgUg0ZHTm5+/YvRK0s3THD/28+T6/kk4A=
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
-github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8=
+github.com/cyphar/filepath-securejoin v0.2.2/go.mod h1:FpkQEhXnPnOthhzymB7CGsFk2G9VLXONKD9G7QGMM+4=
+github.com/d2g/dhcp4 v0.0.0-20170904100407-a1d1b6c41b1c/go.mod h1:Ct2BUK8SB0YC1SMSibvLzxjeJLnrYEVLULFNiHY9YfQ=
+github.com/d2g/dhcp4client v1.0.0/go.mod h1:j0hNfjhrt2SxUOw55nL0ATM/z4Yt3t2Kd1mW34z5W5s=
+github.com/d2g/dhcp4server v0.0.0-20181031114812-7d4a0a7f59a5/go.mod h1:Eo87+Kg/IX2hfWJfwxMzLyuSZyxSoAug2nGa1G2QAi8=
+github.com/d2g/hardwareaddr v0.0.0-20190221164911-e7d9fbe030e4/go.mod h1:bMl4RjIciD2oAxI7DmWRx6gbeqrkoLqv3MV0vzNad+I=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
-github.com/dgrijalva/jwt-go v3.1.0+incompatible h1:FFziAwDQQ2dz1XClWMkwvukur3evtZx7x/wMHKM1i20=
-github.com/dgrijalva/jwt-go v3.1.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
-github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM=
+github.com/denverdino/aliyungo v0.0.0-20190125010748-a747050bb1ba/go.mod h1:dV8lFg6daOBZbT6/BDGIz6Y3WFGn8juu6G+CQ6LHtl0=
+github.com/dgrijalva/jwt-go v0.0.0-20170104182250-a601269ab70c/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
-github.com/dimchansky/utfbom v1.0.0 h1:fGC2kkf4qOoKqZ4q7iIh+Vef4ubC1c38UDsEyZynZPc=
-github.com/dimchansky/utfbom v1.0.0/go.mod h1:rO41eb7gLfo8SF1jd9F8HplJm1Fewwi4mQvIirEdv+8=
-github.com/dimchansky/utfbom v1.1.0 h1:FcM3g+nofKgUteL8dm/UpdRXNC9KmADgTpLKsu0TRo4=
+github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/dimchansky/utfbom v1.1.0/go.mod h1:rO41eb7gLfo8SF1jd9F8HplJm1Fewwi4mQvIirEdv+8=
+github.com/dimchansky/utfbom v1.1.1 h1:vV6w1AhK4VMnhBno/TPVCoK9U/LP0PkLCS9tbxHdi/U=
+github.com/dimchansky/utfbom v1.1.1/go.mod h1:SxdoEBH5qIqFocHMyGOXVAybYJdr71b1Q/j0mACtrfE=
github.com/dnaeon/go-vcr v1.0.1 h1:r8L/HqC0Hje5AXMu1ooW8oyQyOFv4GxqpL0nRP7SLLY=
github.com/dnaeon/go-vcr v1.0.1/go.mod h1:aBB1+wY4s93YsC3HHjMBMrwTj2R9FHDzUr9KyGc8n1E=
-github.com/docker/distribution v2.6.0-rc.1.0.20180105232752-277ed486c948+incompatible h1:PVtvnmmxSMUcT5AY6vG7sCCzRg3eyoW6vQvXtITC60c=
-github.com/docker/distribution v2.6.0-rc.1.0.20180105232752-277ed486c948+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
-github.com/docker/docker v1.4.2-0.20180109013817-94b8a116fbf1 h1:0NaIDWeMBQIQACbThhJaL8lts6EMPSTCMLeDstJ6gU8=
-github.com/docker/docker v1.4.2-0.20180109013817-94b8a116fbf1/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
+github.com/docker/distribution v0.0.0-20190905152932-14b96e55d84c/go.mod h1:0+TTO4EOBfRPhZXAeF1Vu+W3hHZ8eLp8PgKVZlcvtFY=
+github.com/docker/distribution v2.7.1-0.20190205005809-0d3efadf0154+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
+github.com/docker/distribution v2.7.1+incompatible h1:a5mlkVzth6W5A4fOsS3D2EO5BUmsJpcB+cRlLU7cSug=
+github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
+github.com/docker/docker v17.12.0-ce-rc1.0.20210128214336-420b1d36250f+incompatible h1:nhVo1udYfMj0Jsw0lnqrTjjf33aLpdgW9Wve9fHVzhQ=
+github.com/docker/docker v17.12.0-ce-rc1.0.20210128214336-420b1d36250f+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.3.0 h1:3lOnM9cSzgGwx8VfK/NGOW5fLQ0GjIlCkaktF+n1M6o=
github.com/docker/go-connections v0.3.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
-github.com/docker/go-units v0.3.3-0.20171221200356-d59758554a3d h1:dVaNRYvaGV23AdNdsm+4y1mPN0tj3/1v6taqKMmM6Ko=
-github.com/docker/go-units v0.3.3-0.20171221200356-d59758554a3d/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
+github.com/docker/go-events v0.0.0-20170721190031-9461782956ad/go.mod h1:Uw6UezgYA44ePAFQYUehOuCzmy5zmg/+nl2ZfMWGkpA=
+github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c/go.mod h1:Uw6UezgYA44ePAFQYUehOuCzmy5zmg/+nl2ZfMWGkpA=
+github.com/docker/go-metrics v0.0.0-20180209012529-399ea8c73916/go.mod h1:/u0gXw0Gay3ceNrsHubL3BtdOL2fHf93USgMTe0W5dI=
+github.com/docker/go-metrics v0.0.1/go.mod h1:cG1hvH2utMXtqgqqYE9plW6lDxS3/5ayHzueweSI3Vw=
+github.com/docker/go-units v0.4.0 h1:3uh0PgVws3nIA0Q+MwDC8yjEPf9zjRfZZWXZYDct3Tw=
+github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
+github.com/docker/libtrust v0.0.0-20150114040149-fa567046d9b1/go.mod h1:cyGadeNEkKy96OOhEzfZl+yxihPEzKnqJwvfuSUqbZE=
+github.com/docker/spdystream v0.0.0-20160310174837-449fdfce4d96/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM=
+github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
+github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dustin/go-humanize v1.0.0 h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4zYo=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
+github.com/elazarl/goproxy v0.0.0-20180725130230-947c36da3153/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc=
+github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
+github.com/emicklei/go-restful v2.9.5+incompatible/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
+github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
+github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
+github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
+github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
+github.com/evanphx/json-patch v4.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
+github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568 h1:BHsljHzVlRcyQhjrss6TZTdY2VfCqZPbv5k3iBFa2ZQ=
github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc=
+github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
+github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM0I9ntUbOk+k=
+github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
+github.com/fullsailor/pkcs7 v0.0.0-20190404230743-d7302db945fa/go.mod h1:KnogPXtdwXqoenmZCw6S+25EAm2MkxbG0deNDu4cbSA=
+github.com/garyburd/redigo v0.0.0-20150301180006-535138d7bcd7/go.mod h1:NR3MbYisc3/PwhQ00EMzDiPmrwpPxAn5GI05/YaO1SY=
+github.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/gliderlabs/ssh v0.2.2 h1:6zsha5zo/TWhRhwqCD3+EarCAgZ2yN28ipRnGPnwkI0=
github.com/gliderlabs/ssh v0.2.2/go.mod h1:U7qILu1NlMHj9FlMhZLlkCdDnU1DBEAqr0aevW3Awn0=
github.com/go-asn1-ber/asn1-ber v1.4.1 h1:qP/QDxOtmMoJVgXHCXNzDpA0+wkgYB2x5QoLMVOciyw=
github.com/go-asn1-ber/asn1-ber v1.4.1/go.mod h1:hEBeB/ic+5LoWskz+yKT7vGhhPYkProFKoKdwZRWMe0=
+github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
+github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
+github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
+github.com/go-ini/ini v1.25.4/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-ldap/ldap v3.0.3+incompatible h1:HTeSZO8hWMS1Rgb2Ziku6b8a7qRIZZMHjsvuZyatzwk=
github.com/go-ldap/ldap v3.0.3+incompatible/go.mod h1:qfd9rJvER9Q0/D/Sqn1DfHRoBp40uXYvFoEVrNEPqRc=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
+github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=
+github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
+github.com/go-openapi/jsonpointer v0.19.2/go.mod h1:3akKfEdA7DF1sugOqz1dVQHBcuDBPKZGEoHC/NkiQRg=
+github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
+github.com/go-openapi/jsonreference v0.19.2/go.mod h1:jMjeRr2HHw6nAVajTXJ4eiUwohSTlpa0o73RUL1owJc=
+github.com/go-openapi/jsonreference v0.19.3/go.mod h1:rjx6GuL8TTa9VaixXglHmQmIL98+wF9xc8zWvFonSJ8=
+github.com/go-openapi/spec v0.19.3/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8Lj9mJglo=
+github.com/go-openapi/swag v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
+github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-sql-driver/mysql v1.4.0/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
+github.com/go-sql-driver/mysql v1.5.0 h1:ozyZYNQW3x3HtqT1jira07DN2PArx2v7/mN66gGcHOs=
github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
-github.com/gogo/protobuf v1.1.1 h1:72R+M5VuhED/KujmZVcIquuo8mBgX4oVda//DQb3PXo=
+github.com/godbus/dbus v0.0.0-20151105175453-c7fdd8b5cd55/go.mod h1:/YcGZj5zSblfDWMMoOzV4fas9FZnQYTkDnsGvmh2Grw=
+github.com/godbus/dbus v0.0.0-20180201030542-885f9cc04c9c/go.mod h1:/YcGZj5zSblfDWMMoOzV4fas9FZnQYTkDnsGvmh2Grw=
+github.com/godbus/dbus v0.0.0-20190422162347-ade71ed3457e/go.mod h1:bBOAhwG1umN6/6ZUMtDFBMQR8jRg9O75tm9K00oMsK4=
+github.com/godbus/dbus/v5 v5.0.3/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
+github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
+github.com/gogo/googleapis v1.2.0/go.mod h1:Njal3psf3qN6dwBtQfUmBZh2ybovJ0tlu3o/AC7HYjU=
+github.com/gogo/googleapis v1.4.0/go.mod h1:5YRNX2z1oM5gXdAkurHa942MDgEJyk02w4OecKY87+c=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
-github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58=
+github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
+github.com/gogo/protobuf v1.2.2-0.20190723190241-65acae22fc9d/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
+github.com/gogo/protobuf v1.3.0/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
+github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
+github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
+github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
+github.com/golang-jwt/jwt/v4 v4.0.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
+github.com/golang-jwt/jwt/v4 v4.1.0 h1:XUgk2Ex5veyVFVeLm0xhusUTQybEbexJXrvPNOKkSY0=
+github.com/golang-jwt/jwt/v4 v4.1.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
+github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
+github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
+github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
+github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
+github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e h1:1r7pUrabqp18hOBcwBwiTsbnFeTZHV9eER/QT5JVZxY=
+github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
+github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
+github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
+github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
-github.com/golang/protobuf v1.3.1 h1:YF8+flBXS5eO826T4nzqPrxfhQThhXl0YzfuUPu4SBg=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
-github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
+github.com/golang/protobuf v1.3.4/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
+github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk=
+github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
+github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
+github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
+github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
+github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
+github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
+github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
+github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
+github.com/golang/protobuf v1.5.0 h1:LUVKkCeviFUMKqHa4tXIIij/lbhnMbP7Fn5wKdKkRh4=
+github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
+github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
-github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
+github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU=
+github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
+github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
+github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
+github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
+github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
+github.com/google/pprof v0.0.0-20200229191704-1ebb73c60ed3/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
+github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 h1:El6M4kTTCOh6aBiKaUGG7oYTSPP8MxqL4YI3kZKwcP4=
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510/go.mod h1:pupxD2MaaD3pAXIBCelhxNneeOaAeabZDe5s4K6zSpQ=
+github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/google/uuid v1.2.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5 h1:sjZBwGj9Jlw33ImPtvFviGYvseOtDM7hkSKB7+Tv3SM=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
-github.com/gorilla/context v1.1.1 h1:AWwleXJkX/nhcU9bZSnZoi3h/qGYqQAGhq6zZe/aQW8=
-github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg=
-github.com/gorilla/mux v1.6.1-0.20180107155708-5bbbb5b2b572 h1:eWMpQtfzS3D63EI50baSfP/zjyqFM9tDfvVyAlCIMic=
-github.com/gorilla/mux v1.6.1-0.20180107155708-5bbbb5b2b572/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
+github.com/googleapis/gnostic v0.4.1/go.mod h1:LRhVm6pbyptWbWbuZ38d1eyptfvIytN3ir6b65WBswg=
+github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
+github.com/gorilla/handlers v0.0.0-20150720190736-60c7bfde3e33/go.mod h1:Qkdc/uu4tH4g6mTK6auzZ766c4CA0Ng8+o/OAirnOIQ=
+github.com/gorilla/mux v1.7.2 h1:zoNxOV7WjqXptQOVngLmcSQgXmgk4NMz1HibBchjl/I=
+github.com/gorilla/mux v1.7.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
+github.com/gorilla/websocket v0.0.0-20170926233335-4201258b820c/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
+github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
+github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
+github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
+github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
+github.com/grpc-ecosystem/go-grpc-middleware v1.0.1-0.20190118093823-f849b5445de4/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
+github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
+github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
+github.com/grpc-ecosystem/grpc-gateway v1.9.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
+github.com/hashicorp/errwrap v0.0.0-20141028054710-7554cd9344ce/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
+github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
+github.com/hashicorp/go-multierror v0.0.0-20161216184304-ed905158d874/go.mod h1:JMRHfdO9jKNzS/+BTlxCjKNQHg/jZAft8U7LloJvN7I=
+github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1 h1:0hERBMJE1eitiLkihrMvRVBYAkpHzc/J3QdDN+dAcgU=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
-github.com/imdario/mergo v0.3.8-0.20190415133143-5ef87b449ca7 h1:kUGMXUVH7IU1rKA3TZu9ROUE61dVv2SSgSsdeYKm0mg=
-github.com/imdario/mergo v0.3.8-0.20190415133143-5ef87b449ca7/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
+github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
+github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
+github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
+github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
+github.com/imdario/mergo v0.3.8/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
+github.com/imdario/mergo v0.3.10/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
+github.com/imdario/mergo v0.3.11/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
+github.com/imdario/mergo v0.3.12 h1:b6R2BslTbIEToALKP7LxUvijTsNI9TAe80pLWN2g/HU=
+github.com/imdario/mergo v0.3.12/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
+github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
+github.com/j-keck/arping v0.0.0-20160618110441-2cf9dc699c56/go.mod h1:ymszkNOg6tORTn+6F6j+Jc8TOr5osrynvN6ivFWZ2GA=
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 h1:BQSFePA1RWJOlocH6Fxy8MmwDt+yVQYULKfN0RoTN8A=
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99/go.mod h1:1lJo3i6rXxKeerYnT8Nvf0QmHCRC1n8sfWVwXF2Frvo=
github.com/jmcvetta/randutil v0.0.0-20150817122601-2bb1b664bcff h1:6NvhExg4omUC9NfA+l4Oq3ibNNeJUdiAF3iBVB0PlDk=
github.com/jmcvetta/randutil v0.0.0-20150817122601-2bb1b664bcff/go.mod h1:ddfPX8Z28YMjiqoaJhNBzWHapTHXejnB5cDCUWDwriw=
+github.com/jmespath/go-jmespath v0.0.0-20160202185014-0b12d6b521d8/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
+github.com/jmespath/go-jmespath v0.0.0-20160803190731-bd40a432e4c7/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af h1:pmfjZENx5imkbgOkpRUYLnmbU7UEFbjtDA2hxJ1ichM=
github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/jmoiron/sqlx v1.2.0 h1:41Ip0zITnmWNR/vHV+S4m+VoUivnWY5E4OJfLZjCJMA=
github.com/jmoiron/sqlx v1.2.0/go.mod h1:1FEQNm3xlJgrMD+FBdI9+xvCksHtbpVBBw5dYhBSsks=
github.com/johannesboyne/gofakes3 v0.0.0-20200716060623-6b2b4cb092cc h1:JJPhSHowepOF2+ElJVyb9jgt5ZyBkPMkPuhS0uODSFs=
github.com/johannesboyne/gofakes3 v0.0.0-20200716060623-6b2b4cb092cc/go.mod h1:fNiSoOiEI5KlkWXn26OwKnNe58ilTIkpBlgOrt7Olu8=
+github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
+github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
+github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
+github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0 h1:TDTW5Yz1mjftljbcKqRcrYhd4XeOoI98t+9HbQbYf7g=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
-github.com/karalabe/xgo v0.0.0-20191115072854-c5ccff8648a7 h1:AYzjK/SHz6m6mg5iuFwkrAhCc14jvCpW9d6frC9iDPE=
-github.com/karalabe/xgo v0.0.0-20191115072854-c5ccff8648a7/go.mod h1:iYGcTYIPUvEWhFo6aKUuLchs+AV4ssYdyuBbQJZGcBk=
github.com/kevinburke/ssh_config v0.0.0-20171013211458-802051befeb5 h1:xXn0nBttYwok7DhU4RxqaADEpQn7fEMt5kKc3yoj/n0=
github.com/kevinburke/ssh_config v0.0.0-20171013211458-802051befeb5/go.mod h1:CT57kijsi8u/K/BOFA39wgDQJ9CxiF4nAY/ojJ6r6mM=
-github.com/konsorten/go-windows-terminal-sequences v1.0.1 h1:mweAR1A6xJ3oS2pRaGiHgQ4OO8tzTaLawm8vnODuwDk=
+github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
+github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
+github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
+github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
+github.com/klauspost/compress v1.11.3/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
+github.com/klauspost/compress v1.11.13/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
+github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
+github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
+github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
+github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
+github.com/kr/pretty v0.2.1 h1:Fmg33tUaq4/8ym9TJN1x7sLJnHVwhP33CNkpYV/7rwI=
+github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
+github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
+github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA=
+github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
+github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
-github.com/lib/pq v1.3.0 h1:/qkRGz8zljWiDcFvgpwUpwIAPu3r07TDvs3Rws+o/pU=
-github.com/lib/pq v1.3.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
-github.com/marstr/guid v1.1.1-0.20170427235115-8bdf7d1a087c h1:ouxemItv3B/Zh008HJkEXDYCN3BIRyNHxtUN7ThJ5Js=
-github.com/marstr/guid v1.1.1-0.20170427235115-8bdf7d1a087c/go.mod h1:74gB1z2wpxxInTG6yaqA7KrtM0NZ+RbrcqDvYHefzho=
+github.com/lib/pq v1.10.2 h1:AqzbZs4ZoCBp+GtejcpCpcxM3zlSMx29dXbUSeVtJb8=
+github.com/lib/pq v1.10.2/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
+github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
+github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
+github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
+github.com/mailru/easyjson v0.7.0/go.mod h1:KAzv3t3aY1NaHWoQz1+4F1ccyAH66Jk7yos7ldAVICs=
+github.com/marstr/guid v1.1.0/go.mod h1:74gB1z2wpxxInTG6yaqA7KrtM0NZ+RbrcqDvYHefzho=
+github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
+github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
+github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
+github.com/mattn/go-shellwords v1.0.3/go.mod h1:3xCvwCdWdlDJUrvuMn7Wuy9eWs4pE8vqg+NOMyg4B2o=
+github.com/mattn/go-sqlite3 v1.9.0 h1:pDRiWfl+++eC2FEFRy6jXmQlvp4Yh3z1MJKg4UeYM/4=
github.com/mattn/go-sqlite3 v1.9.0/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
-github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
-github.com/mitchellh/go-homedir v0.0.0-20161203194507-b8bc1bf76747 h1:eQox4Rh4ewJF+mqYPxCkmBAirRnPaHEB26UkNuPyjlk=
-github.com/mitchellh/go-homedir v0.0.0-20161203194507-b8bc1bf76747/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
+github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 h1:I0XW9+e1XWDxdcEniV4rQAIOPUGDq67JSCiRCgGCZLI=
+github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
+github.com/miekg/pkcs11 v1.0.3/go.mod h1:XsNlhZGX73bx86s2hdc/FuaLm2CPZJemRLMA+WTFxgs=
+github.com/mistifyio/go-zfs v2.1.2-0.20190413222219-f784269be439+incompatible/go.mod h1:8AuVvqP/mXw1px98n46wfvcGfQ4ci2FwoAjKYxuo3Z4=
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
+github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
+github.com/mitchellh/osext v0.0.0-20151018003038-5e2d6d41470f/go.mod h1:OkQIRizQZAeMln+1tSwduZz7+Af5oFlKirV/MSYes2A=
+github.com/moby/locker v1.0.1/go.mod h1:S7SDdo5zpBK84bzzVlKr2V0hz+7x9hWbYC/kq7oQppc=
+github.com/moby/sys/mountinfo v0.4.0/go.mod h1:rEr8tzG/lsIZHBtN/JjGG+LMYx9eXgW2JI+6q0qou+A=
+github.com/moby/sys/mountinfo v0.4.1/go.mod h1:rEr8tzG/lsIZHBtN/JjGG+LMYx9eXgW2JI+6q0qou+A=
+github.com/moby/sys/symlink v0.1.0/go.mod h1:GGDODQmbFOjFsXvfLVn3+ZRxkch54RkSiGqsZeMYowQ=
+github.com/moby/term v0.0.0-20200312100748-672ec06f55cd/go.mod h1:DdlQx2hp0Ss5/fLikoLlEeIYiATotOjgB//nb973jeo=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
+github.com/morikuni/aec v1.0.0 h1:nP9CBfwrvYnBRgY6qfDQkygYDmYwOilePFkwzv4dU8A=
+github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
+github.com/mrunalp/fileutils v0.5.0/go.mod h1:M1WthSahJixYnrXQl/DFQuteStB1weuxD2QJNHXfbSQ=
github.com/msteinert/pam v0.0.0-20190215180659-f29b9f28d6f9 h1:ZivaaKmjs9q90zi6I4gTLW6tbVGtlBjellr3hMYaly0=
github.com/msteinert/pam v0.0.0-20190215180659-f29b9f28d6f9/go.mod h1:np1wUFZ6tyoke22qDJZY40URn9Ae51gX7ljIWXN5TJs=
+github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
+github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
-github.com/opencontainers/go-digest v1.0.0-rc1 h1:WzifXhOVOEOuFYOJAW6aQqW0TooG2iki3E3Ii+WN7gQ=
+github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
+github.com/ncw/swift v1.0.47/go.mod h1:23YIA4yWVnGwv2dQlN4bB7egfYX6YLn0Yo/S6zZO/ZM=
+github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
+github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
+github.com/olekukonko/tablewriter v0.0.0-20170122224234-a0225b3f23b5/go.mod h1:vsDQFd/mU46D+Z4whnwzcISnGGzXWMclvtLoiIKAKIo=
+github.com/onsi/ginkgo v0.0.0-20151202141238-7f8ab55aaf3b/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
+github.com/onsi/ginkgo v0.0.0-20170829012221-11459a886d9c/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
+github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
+github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
+github.com/onsi/ginkgo v1.10.3/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
+github.com/onsi/ginkgo v1.11.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
+github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
+github.com/onsi/gomega v0.0.0-20151007035656-2152b45fa28a/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
+github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
+github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
+github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
+github.com/onsi/gomega v1.10.3/go.mod h1:V9xEwhxec5O8UDM77eCW8vLymOMltsqPVYWrpDsH8xc=
+github.com/opencontainers/go-digest v0.0.0-20170106003457-a6d0ee40d420/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
+github.com/opencontainers/go-digest v0.0.0-20180430190053-c9281466c8b2/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
github.com/opencontainers/go-digest v1.0.0-rc1/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
-github.com/opencontainers/image-spec v1.0.1-0.20171125024018-577479e4dc27 h1:8Q+VFspwMHwvVvpSS8xpuFQR7RpGX8G8ECXwgc/05sg=
-github.com/opencontainers/image-spec v1.0.1-0.20171125024018-577479e4dc27/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
+github.com/opencontainers/go-digest v1.0.0-rc1.0.20180430190053-c9281466c8b2/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
+github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
+github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
+github.com/opencontainers/image-spec v1.0.0/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
+github.com/opencontainers/image-spec v1.0.1 h1:JMemWkRwHx4Zj+fVxWoMCFm/8sYGGrUVojFA6h/TRcI=
+github.com/opencontainers/image-spec v1.0.1/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
+github.com/opencontainers/runc v0.0.0-20190115041553-12f6a991201f/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
+github.com/opencontainers/runc v0.1.1/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
+github.com/opencontainers/runc v1.0.0-rc8.0.20190926000215-3e425f80a8c9/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
+github.com/opencontainers/runc v1.0.0-rc9/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
+github.com/opencontainers/runc v1.0.0-rc93/go.mod h1:3NOsor4w32B2tC0Zbl8Knk4Wg84SM2ImC1fxBuqJ/H0=
+github.com/opencontainers/runc v1.0.2/go.mod h1:aTaHFFwQXuA71CiyxOdFFIorAoemI04suvGRQFzWTD0=
+github.com/opencontainers/runtime-spec v0.1.2-0.20190507144316-5b71a03e2700/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-spec v1.0.1/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-spec v1.0.2-0.20190207185410-29686dbc5559/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-spec v1.0.2/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-spec v1.0.3-0.20200929063507-e6143ca7d51d/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-tools v0.0.0-20181011054405-1d69bd0f9c39/go.mod h1:r3f7wjNzSs2extwzU3Y+6pKfobzPh+kKFJ3ofN+3nfs=
+github.com/opencontainers/selinux v1.6.0/go.mod h1:VVGKuOLlE7v4PJyT6h7mNWvq1rzqiriPsEqVhc+svHE=
+github.com/opencontainers/selinux v1.8.0/go.mod h1:RScLhm78qiWa2gbVCcGkC7tCGdgk3ogry1nUQF8Evvo=
+github.com/opencontainers/selinux v1.8.2/go.mod h1:MUIHuUEvKB1wtJjQdOyYRgOnLD2xAPP8dBsCoU0KuF8=
github.com/pelletier/go-buffruneio v0.2.0 h1:U4t4R6YkofJ5xHm3dJzuRpPZ0mr5MMCoAWooScCR7aA=
github.com/pelletier/go-buffruneio v0.2.0/go.mod h1:JkE26KsDizTr40EUHkXVtNPvgGtbSNq5BcowyYOWdKo=
+github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
+github.com/pelletier/go-toml v1.8.1/go.mod h1:T2/BmBdy8dvIRq1a/8aqjN41wvWlN4lrapLU/GW4pbc=
+github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
-github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
+github.com/pkg/errors v0.8.1-0.20171018195549-f15c970de5b7/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
+github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
+github.com/pquerna/cachecontrol v0.0.0-20171018203845-0dec1b30a021/go.mod h1:prYjPmNq4d1NPVmpShWobRqXY3q7Vp+80DqgxxUrUIA=
github.com/pquerna/cachecontrol v0.0.0-20180517163645-1555304b9b35 h1:J9b7z+QKAmPf4YLrFg6oQUotqHQeUNWwkvo7jZp1GLU=
github.com/pquerna/cachecontrol v0.0.0-20180517163645-1555304b9b35/go.mod h1:prYjPmNq4d1NPVmpShWobRqXY3q7Vp+80DqgxxUrUIA=
+github.com/prometheus/client_golang v0.0.0-20180209125602-c332b6f63c06/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
+github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
-github.com/prometheus/client_golang v1.2.1 h1:JnMpQc6ppsNgw9QPAGF6Dod479itz7lvlsMzzNayLOI=
-github.com/prometheus/client_golang v1.2.1/go.mod h1:XMU6Z2MjaRKVu/dC1qupJI9SiNkDYzz3xecMgSW/F+U=
+github.com/prometheus/client_golang v1.1.0/go.mod h1:I1FGZT9+L76gKKOs5djB6ezCbFQP1xR9D75/vuwEF3g=
+github.com/prometheus/client_golang v1.7.1 h1:NTGy1Ja9pByO+xAeH/qiWnLrKtr3hJPNjaVUwnjpdpA=
+github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
+github.com/prometheus/client_model v0.0.0-20171117100541-99fa1f4be8e5/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
-github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4 h1:gQz4mCbXsO+nc9n1hCxHcGA3Zx3Eo+UHZoInFGUIXNM=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
+github.com/prometheus/client_model v0.2.0 h1:uq5h0d+GuxiXLJLNABMgp2qUWDPiLvgCzz2dUR+/W/M=
+github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
+github.com/prometheus/common v0.0.0-20180110214958-89604d197083/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
+github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
+github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
-github.com/prometheus/common v0.7.0 h1:L+1lyG48J1zAQXA3RBX/nG/B3gjlHq0zTt2tlbJLyCY=
-github.com/prometheus/common v0.7.0/go.mod h1:DjGbpBbp5NYNiECxcL/VnbXCCaQpKd3tt26CguLLsqA=
+github.com/prometheus/common v0.6.0/go.mod h1:eBmuwkDJBwy6iBfxCBob6t6dR6ENT/y+J+Zk0j9GMYc=
+github.com/prometheus/common v0.10.0 h1:RyRA7RzGXQZiW+tGMr7sxa85G1z0yOpM1qq5c8lNawc=
+github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
+github.com/prometheus/procfs v0.0.0-20180125133057-cb4147076ac7/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
+github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
+github.com/prometheus/procfs v0.0.0-20190522114515-bc1a522cf7b1/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
-github.com/prometheus/procfs v0.0.5 h1:3+auTFlqw+ZaQYJARz6ArODtkaIwtvBTx3N2NehQlL8=
+github.com/prometheus/procfs v0.0.3/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
github.com/prometheus/procfs v0.0.5/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
+github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A=
+github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
+github.com/prometheus/procfs v0.2.0/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
+github.com/prometheus/procfs v0.6.0 h1:mxy4L2jP6qMonqmq+aTtOx1ifVWUgG/TAmntgbh3xv4=
+github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
+github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
+github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
+github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
+github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryszard/goskiplist v0.0.0-20150312221310-2dfbae5fcf46 h1:GHRpF1pTW19a8tTFrMLUcfWwyC0pnifVo2ClaLq+hP8=
github.com/ryszard/goskiplist v0.0.0-20150312221310-2dfbae5fcf46/go.mod h1:uAQ5PCi+MFsC7HjREoAz1BU+Mq60+05gifQSsHSDG/8=
+github.com/safchain/ethtool v0.0.0-20190326074333-42ed695e3de8/go.mod h1:Z0q5wiBQGYcxhMZ6gUqHn6pYNLypFAvaL3UvgZLR0U4=
+github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/satori/go.uuid v1.2.1-0.20180103174451-36e9d2ebbde5 h1:Jw7W4WMfQDxsXvfeFSaS2cHlY7bAF4MGrgnbd0+Uo78=
github.com/satori/go.uuid v1.2.1-0.20180103174451-36e9d2ebbde5/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
+github.com/seccomp/libseccomp-golang v0.9.1/go.mod h1:GbW5+tmTXfcxTToHLXlScSlAvWlF4P2Ca7zGrPiEpWo=
github.com/sergi/go-diff v1.0.0 h1:Kpca3qRNrduNnOQeazBd0ysaKrUJiIuISHxogkT9RPQ=
github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
+github.com/shabbyrobe/gocovmerge v0.0.0-20180507124511-f6ea450bfb63 h1:J6qvD6rbmOil46orKqJaRPG+zTpoGlBTUdyv8ki63L0=
github.com/shabbyrobe/gocovmerge v0.0.0-20180507124511-f6ea450bfb63/go.mod h1:n+VKSARF5y/tS9XFSP7vWDfS+GUC5vs/YT7M5XDTUEM=
+github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
+github.com/sirupsen/logrus v1.0.4-0.20170822132746-89742aefa4b2/go.mod h1:pMByvHTf9Beacp5x1UXfOR9xyW/9antXMhjMPG0dEzc=
+github.com/sirupsen/logrus v1.0.6/go.mod h1:pMByvHTf9Beacp5x1UXfOR9xyW/9antXMhjMPG0dEzc=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
-github.com/sirupsen/logrus v1.4.2 h1:SPIRibHv4MatM3XXNO2BJeFLZwZ2LvZgfQ5+UNI2im4=
+github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
+github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
+github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
+github.com/sirupsen/logrus v1.8.1 h1:dJKuHgqk1NNQlqoA6BTlM1Wf9DOH3NBjQyu0h9+AZZE=
+github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
+github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
+github.com/smartystreets/goconvey v0.0.0-20190330032615-68dc04aab96a/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
+github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
+github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
+github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.1/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
+github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
+github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
+github.com/spf13/cobra v0.0.2-0.20171109065643-2da4a54c5cee/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
+github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
+github.com/spf13/cobra v1.0.0/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE=
+github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
+github.com/spf13/pflag v0.0.0-20170130214245-9ff6c6923cff/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
+github.com/spf13/pflag v1.0.1-0.20171106142849-4c012f6dcd95/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
+github.com/spf13/pflag v1.0.1/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
+github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
+github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
+github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
github.com/src-d/gcfg v1.3.0 h1:2BEDr8r0I0b8h/fOqwtxCEiq2HJu8n2JGZJQFGXWLjg=
github.com/src-d/gcfg v1.3.0/go.mod h1:p/UMsR43ujA89BJY9duynAwIpvqEujIH/jFlfL7jWoI=
+github.com/stefanberger/go-pkcs11uri v0.0.0-20201008174630-78d3cae3a980/go.mod h1:AO3tvPzVZ/ayst6UlUKUv6rcPQInYe3IknH3jYhAKu8=
+github.com/stretchr/objx v0.0.0-20180129172003-8a3f7159479f/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
+github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
+github.com/stretchr/testify v0.0.0-20180303142811-b89eecf5ca5d/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
-github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
+github.com/stretchr/testify v1.6.1 h1:hDPOHmpOpP40lSULcqw7IrRb/u7w6RpDC9399XyoNd0=
+github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
+github.com/syndtr/gocapability v0.0.0-20170704070218-db04d3cc01c8/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
+github.com/syndtr/gocapability v0.0.0-20180916011248-d98352740cb2/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
+github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
+github.com/tchap/go-patricia v2.2.6+incompatible/go.mod h1:bmLyhP68RS6kStMGxByiQ23RP/odRBOTVjwp2cDyi6I=
+github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
+github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
+github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
+github.com/urfave/cli v0.0.0-20171014202726-7bc6a0acffa5/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
+github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
+github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
+github.com/urfave/cli v1.22.2/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
+github.com/vishvananda/netlink v0.0.0-20181108222139-023a6dafdcdf/go.mod h1:+SR5DhBJrl6ZM7CoCKvpw5BKroDKQ+PJqOg65H/2ktk=
+github.com/vishvananda/netlink v1.1.0/go.mod h1:cTgwzPIzzgDAYoQrMm0EdrjRUBkTqKYppBueQtXaqoE=
+github.com/vishvananda/netlink v1.1.1-0.20201029203352-d40f9887b852/go.mod h1:twkDnbuQxJYemMlGd4JFIcuhgX83tXhKS2B/PRMpOho=
+github.com/vishvananda/netns v0.0.0-20180720170159-13995c7128cc/go.mod h1:ZjcWmFBXmLKZu9Nxj3WKYEafiSqer2rnvPr0en9UNpI=
+github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df/go.mod h1:JP3t17pCcGlemwknint6hfoeCVQrEMVwxRLRjXpq+BU=
+github.com/vishvananda/netns v0.0.0-20200728191858-db3c7e526aae/go.mod h1:DD4vA1DwXk04H54A1oHXtwZmA0grkVMdPxx/VGLCah0=
+github.com/willf/bitset v1.1.11-0.20200630133818-d5bec3311243/go.mod h1:RjeCKbqT1RxIR/KWY6phxZiaY1IyutSBfGjNPySAYV4=
+github.com/willf/bitset v1.1.11/go.mod h1:83CECat5yLh5zVOf4P1ErAgKA5UDvKtgyUABdr3+MjI=
github.com/xanzy/ssh-agent v0.1.0 h1:lOhdXLxtmYjaHc76ZtNmJWPg948y/RnT+3N3cvKWFzY=
github.com/xanzy/ssh-agent v0.1.0/go.mod h1:0NyE30eGUDliuLEHJgYte/zncp2zdTStcOnWhgSqHD8=
-go.opencensus.io v0.21.0 h1:mU6zScU4U1YAFPHEHYk+3JC4SY7JxgkqS10ZOSyksNg=
+github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
+github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ=
+github.com/xeipuuv/gojsonschema v0.0.0-20180618132009-1d523034197f/go.mod h1:5yf86TLmAcydyeJq5YvxkGPE2fm/u4myDekKRoLuqhs=
+github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
+github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
+github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+github.com/yuin/goldmark v1.4.0/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
+github.com/yvasiyarov/go-metrics v0.0.0-20140926110328-57bccd1ccd43/go.mod h1:aX5oPXxHm3bOH+xeAttToC8pqch2ScQN/JoXYupl6xs=
+github.com/yvasiyarov/gorelic v0.0.0-20141212073537-a9bba5b9ab50/go.mod h1:NUSPSUX/bi6SeDMUh6brw0nXpxHnc96TguQh0+r/ssA=
+github.com/yvasiyarov/newrelic_platform_go v0.0.0-20140908184405-b21fdbd4370f/go.mod h1:GlGEuHIJweS1mbCqG+7vt2nvWLzLLnRHbXz5JKd/Qbg=
+go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
+go.etcd.io/bbolt v1.3.3/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
+go.etcd.io/bbolt v1.3.5/go.mod h1:G5EMThwa9y8QZGBClrRx5EY+Yw9kAhnjy3bSjsnlVTQ=
+go.etcd.io/etcd v0.5.0-alpha.5.0.20200910180754-dd1b699fc489/go.mod h1:yVHk9ub3CSBatqGNg7GRmsnfLWtoW60w4eDYfh7vHDg=
+go.mozilla.org/pkcs7 v0.0.0-20200128120323-432b2356ecb1/go.mod h1:SNgMg+EgDFwmvSmLRTNKC5fegJjB7v23qTQ0XLGUNHk=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
+go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
+go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
+go.opencensus.io v0.22.3 h1:8sGtKOrtQqkN1bp2AtX+misvLIlOmsEsNd+9NIcPEm8=
+go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
+go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
+go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
+go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
+go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
+golang.org/x/crypto v0.0.0-20171113213409-9f005a07e0d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
-golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2 h1:VklqNMn3ovrHsnt90PveolxSbWFaJdECFbxSq0Mqo2M=
+golang.org/x/crypto v0.0.0-20181009213950-7c1a557ab941/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
-golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550 h1:ObdrDkeb4kJdCP557AjRjq69pTHfNouLtWZG7j9rPN8=
+golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
-golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9 h1:psW17arqaxU48Z5kZ0CQnkZWQJsqcURM6tKiBApRjXI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/crypto v0.0.0-20200728195943-123391ffb6de/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
+golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
+golang.org/x/crypto v0.0.0-20211117183948-ae814b36b871 h1:/pEO3GD/ABYAjuakUS6xSEmmlyVS4kxBNkeA9tLJiTI=
+golang.org/x/crypto v0.0.0-20211117183948-ae814b36b871/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
+golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
+golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
+golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
+golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY=
+golang.org/x/exp v0.0.0-20191129062945-2f5052295587/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
+golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
+golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
+golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM=
+golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
+golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
+golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs=
+golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
+golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
+golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
+golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
+golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
+golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
+golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
+golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
+golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20181011144130-49bb7cea24b1/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190310074541-c10a0554eabf/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
-golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c h1:uOCk1iQW6Vc18bnC13MfzScl+wdKBmM9Y9kU7Z83/lw=
+golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
-golang.org/x/net v0.0.0-20190613194153-d28f0bde5980 h1:dfGZHvZk057jK2MCeWus/TowKpJ8y4AmooUzdBSR9GU=
+golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
+golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
-golang.org/x/net v0.0.0-20190620200207-3b0461eec859 h1:R/3boaszxrf1GEUWTVDzSKVwLmSJpwZ1yqXm8j0v2QI=
+golang.org/x/net v0.0.0-20190619014844-b5b0513f8c1b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
-golang.org/x/net v0.0.0-20200202094626-16171245cfb2 h1:CCH4IOTTfewWjGOlSp+zGcjutRKlBEZQ6wTn8ozI/nI=
+golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20191004110552-13f9640d40b9/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200222125558-5a598a2470a0/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
+golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
+golang.org/x/net v0.0.0-20201006153459-a7d1128ccaa0/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
+golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
+golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
+golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
+golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
+golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
+golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2 h1:CIJ76btIcR3eFI5EgSo6k1qKw9KJexJuRLI9G7Hp5wE=
+golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
-golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45 h1:SVwTIAaPC2U/AvvLNZ2a7OVsmBpC8L5BlwK1whH3hm0=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
+golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
+golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d h1:TzXSXBo42m9gQenoE3b9BGiEpg5IG2JkU5FkPIawgtw=
+golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190310054646-10058d7d4faa/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190514135907-3a4b5fb9f71f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190522044717-8097e1b27ff5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190602015325-4c4f7f33c9ed/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190606203320-7fc4e5ec1444/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190801041406-cbf593c0f2f3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190812073006-9eafafc0a87e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20191010194322-b09406accb47/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20191105231009-c1f44814a5cd h1:3x5uuvBgE6oaXJjCOvpCC1IpgJogqQ+PqGGU3ZxAgII=
-golang.org/x/sys v0.0.0-20191105231009-c1f44814a5cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191022100944-742c48ecaeb7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191115151921-52ab43148777/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191210023423-ac6580df4449/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200120151820-655fe14d7479/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200124204421-9fbb57f87de9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200217220822-9197077df867/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200622214017-ed371f2e16b4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200728102440-3e129f6d46b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200817155316-9781c653f443/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200909081042-eff7692f9009/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200916030750-2334cc1a136f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200922070232-aee5d888a860/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201112073958-5cba982894dd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201117170446-d9b008d0a637/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201202213521-69691e467435/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210324051608-47abb6519492/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210426230700-d19ff857e887/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e h1:WUoyKPm6nCo1BnNUvPGnFG3T5DUVem42yDJZZ4CNxMA=
+golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1 h1:v+OssWQX+hTHEmOBgwxdZxK4zHq3yOs8F9J7mk0PY8E=
+golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
+golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
-golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
+golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/text v0.3.6 h1:aRYxNxv6iGQlyVaZmk6ZgYEDa+Jg18DxebPSrd6bg1M=
+golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e h1:EHBhcS0mlXEAVwNyO2dLfjToGsyY4j24pTs2ScHnX7s=
+golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190308174544-00c44ba9c14f/go.mod h1:25r3+/G6/xytQM8iWZKq3Hn0kr0rgFKPUNVEL/dr3z4=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
-golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c h1:97SnQk1GYRXJgvwZ8fadnxDOWfKvkNQHH3CtZntPSrM=
+golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
+golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
+golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190614205625-5aca471b1d59/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190624222133-a101b041ded4/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191130070609-6e064ea0cf2d/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191216173652-a0e659d51361/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200117161641-43d50277825c/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200122220014-bf1340f18c4a/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200204074204-1cc6d1ef6c74/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200207183749-b753a1ba74fa/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200212150539-ea181f53ac56/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200224181240-023911ca70b2/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200304193943-95d2e580d8eb/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
+golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
+golang.org/x/tools v0.1.7 h1:6j8CgantCy3yc8JGBqkDLMKWqZ0RDU2g1HVgacojGWQ=
+golang.org/x/tools v0.1.7/go.mod h1:LGqMHiF4EqQNHR1JncWGqT5BVaXmza+X+BDGol+dOxo=
+golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
+golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+google.golang.org/api v0.0.0-20160322025152-9bf6e6e569ff/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
-google.golang.org/api v0.13.0 h1:Q3Ui3V3/CVinFWFiW39Iw0kMuVrRzYX0wN6OPFp0lTA=
+google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
+google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
+google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.13.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
+google.golang.org/api v0.14.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
+google.golang.org/api v0.15.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
+google.golang.org/api v0.17.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
+google.golang.org/api v0.18.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
+google.golang.org/api v0.20.0 h1:jz2KixHX7EcCPiQrySzPdnYT7DbINAypCqKZ1Z7GM40=
+google.golang.org/api v0.20.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
-google.golang.org/appengine v1.5.0 h1:KxkO13IPW4Lslp2bz+KHP2E3gtFlrIGNThxkZQ3g+4c=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
+google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
+google.golang.org/appengine v1.6.5 h1:tycE03LOZYQNhDpS27tcQdAzLCVMaj7QT2SXxebnpCM=
+google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
+google.golang.org/cloud v0.0.0-20151119220103-975617b05ea8/go.mod h1:0H1ncTHf11KCFhTc/+EFRbzSCOZx+VUbRMk55Yv5MYk=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
-google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873 h1:nfPFGzJkUDX6uBmpN/pSw7MbOAWegH5QDQuoXFHedLg=
+google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190522204451-c2c4e71fbf69/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s=
+google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
+google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
+google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
+google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20191216164720-4f79533eabd1/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20191230161307-f3c370f40bfb/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20200115191322-ca5a22157cba/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20200117163144-32f20d992d24/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20200122232147-0452cf42e150/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20200204135345-fa8e72b47b90/go.mod h1:GmwEX6Z4W5gMy59cAlVYjN9JhxgbQH6Gn+gFDQe2lzA=
+google.golang.org/genproto v0.0.0-20200212174721-66ed5ce911ce/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200305110556-506484158171/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
+google.golang.org/genproto v0.0.0-20201110150050-8816d57aaa9a h1:pOwg4OoaRYScjmR4LlLgdtnyoHYTSAVhhqe5uPdpII8=
+google.golang.org/genproto v0.0.0-20201110150050-8816d57aaa9a/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/grpc v0.0.0-20160317175043-d3ddb4469d5a/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
-google.golang.org/grpc v1.20.1 h1:Hz2g2wirWK7H0qIIhGIqRGTuMwTE8HEKFnDZZ7lm9NU=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
+google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
+google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
+google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
+google.golang.org/grpc v1.23.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
+google.golang.org/grpc v1.24.0/go.mod h1:XDChyiUovWa60DnaeDeZmSW86xtLtjtZbwvSiRnRtcA=
+google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
+google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
+google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
+google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
+google.golang.org/grpc v1.30.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
+google.golang.org/grpc v1.33.2 h1:EQyQC3sa8M+p6Ulc8yy9SWSS2GVwyRc83gAbG8lrl4o=
+google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
+google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
+google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
+google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
+google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
+google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
+google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
+google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
+google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
+google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
+google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
+google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
+google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
+google.golang.org/protobuf v1.27.1 h1:SnqbnDw1V7RiZcXPx5MEeqPv2s79L9i7BJUlG/+RurQ=
+google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
+gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/asn1-ber.v1 v1.0.0-20181015200546-f715ec2f112d h1:TxyelI5cVkbREznMhfzycHdkp5cLA7DpE+GKjSslYhM=
gopkg.in/asn1-ber.v1 v1.0.0-20181015200546-f715ec2f112d/go.mod h1:cuepJuh7vyXfUyUwEgHQXw849cJrilpS5NeIjOWESAw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
-gopkg.in/check.v1 v1.0.0-20161208181325-20d25e280405 h1:829vOVxxusYHC+IqBtkX5mbKtsY9fheQiQn0MZRVLfQ=
-gopkg.in/check.v1 v1.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20141024133853-64131543e789/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
+gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/cheggaaa/pb.v1 v1.0.25/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw=
+gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
+gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
+gopkg.in/gemnasium/logrus-airbrake-hook.v2 v2.1.2/go.mod h1:Xk6kEKp8OKb+X14hQBKWaSkCsqBpgog8nAV2xsGOxlo=
+gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/mgo.v2 v2.0.0-20180705113604-9856a29383ce/go.mod h1:yeKp02qBN3iKW1OzL3MGk2IdtZzaj7SFntXj72NppTA=
-gopkg.in/square/go-jose.v2 v2.3.1 h1:SK5KegNXmKmqE342YYN2qPHEnUYeoMiXXl1poUlI+o4=
+gopkg.in/natefinch/lumberjack.v2 v2.0.0/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k=
+gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
+gopkg.in/square/go-jose.v2 v2.2.2/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/square/go-jose.v2 v2.3.1/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
+gopkg.in/square/go-jose.v2 v2.5.1 h1:7odma5RETjNHWJnR32wx8t+Io4djHE1PqxCFx3iiZ2w=
+gopkg.in/square/go-jose.v2 v2.5.1/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/src-d/go-billy.v4 v4.0.1 h1:iMxwQPj2cuKRyaIZ985zxClkcdTtT5VpXYf4PTJc0Ek=
gopkg.in/src-d/go-billy.v4 v4.0.1/go.mod h1:ZHSF0JP+7oD97194otDUCD7Ofbk63+xFcfWP5bT6h+Q=
gopkg.in/src-d/go-git-fixtures.v3 v3.5.0 h1:ivZFOIltbce2Mo8IjzUHAFoq/IylO9WHhNOAJK+LsJg=
gopkg.in/src-d/go-git-fixtures.v3 v3.5.0/go.mod h1:dLBcvytrw/TYZsNTWCnkNF2DSIlzWYqTe3rJR56Ac7g=
gopkg.in/src-d/go-git.v4 v4.0.0 h1:9ZRNKHuhaTaJRGcGaH6Qg7uUORO2X0MNB5WL/CDdqto=
gopkg.in/src-d/go-git.v4 v4.0.0/go.mod h1:CzbUWqMn4pvmvndg3gnh5iZFmSsbhyhUWdI0IQ60AQo=
+gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/warnings.v0 v0.1.2 h1:wFXVbFY8DY5/xOe1ECiWdKCzZlxgshcYVNkBHstARME=
gopkg.in/warnings.v0 v0.1.2/go.mod h1:jksf8JmL6Qr/oQM2OXTHunEvvTAsrWBLb6OOjuVWRNI=
-gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
-gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
-gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
-gopkg.in/yaml.v2 v2.2.4 h1:/eiJrUcujPVeJ3xlSWaiNi3uSVmDGBK1pDHUHAnao1I=
-gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
+gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+gotest.tools v2.2.0+incompatible h1:VsBPFP1AI068pPrMxtb/S8Zkgf9xEmTLJjfM+P5UIEo=
+gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
+gotest.tools/v3 v3.0.2/go.mod h1:3SzNCllyD9/Y+b5r9JIKQ474KzkZyqLqEfYqMsX94Bk=
+gotest.tools/v3 v3.0.3/go.mod h1:Z7Lb0S5l+klDB31fvDQX8ss/FlKDxtlFlw3Oa8Ymbl8=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
+honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
+k8s.io/api v0.20.1/go.mod h1:KqwcCVogGxQY3nBlRpwt+wpAMF/KjaCc7RpywacvqUo=
+k8s.io/api v0.20.4/go.mod h1:++lNL1AJMkDymriNniQsWRkMDzRaX2Y/POTUi8yvqYQ=
+k8s.io/api v0.20.6/go.mod h1:X9e8Qag6JV/bL5G6bU8sdVRltWKmdHsFUGS3eVndqE8=
+k8s.io/apimachinery v0.20.1/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU=
+k8s.io/apimachinery v0.20.4/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU=
+k8s.io/apimachinery v0.20.6/go.mod h1:ejZXtW1Ra6V1O5H8xPBGz+T3+4gfkTCeExAHKU57MAc=
+k8s.io/apiserver v0.20.1/go.mod h1:ro5QHeQkgMS7ZGpvf4tSMx6bBOgPfE+f52KwvXfScaU=
+k8s.io/apiserver v0.20.4/go.mod h1:Mc80thBKOyy7tbvFtB4kJv1kbdD0eIH8k8vianJcbFM=
+k8s.io/apiserver v0.20.6/go.mod h1:QIJXNt6i6JB+0YQRNcS0hdRHJlMhflFmsBDeSgT1r8Q=
+k8s.io/client-go v0.20.1/go.mod h1:/zcHdt1TeWSd5HoUe6elJmHSQ6uLLgp4bIJHVEuy+/Y=
+k8s.io/client-go v0.20.4/go.mod h1:LiMv25ND1gLUdBeYxBIwKpkSC5IsozMMmOOeSJboP+k=
+k8s.io/client-go v0.20.6/go.mod h1:nNQMnOvEUEsOzRRFIIkdmYOjAZrC8bgq0ExboWSU1I0=
+k8s.io/component-base v0.20.1/go.mod h1:guxkoJnNoh8LNrbtiQOlyp2Y2XFCZQmrcg2n/DeYNLk=
+k8s.io/component-base v0.20.4/go.mod h1:t4p9EdiagbVCJKrQ1RsA5/V4rFQNDfRlevJajlGwgjI=
+k8s.io/component-base v0.20.6/go.mod h1:6f1MPBAeI+mvuts3sIdtpjljHWBQ2cIy38oBIWMYnrM=
+k8s.io/cri-api v0.17.3/go.mod h1:X1sbHmuXhwaHs9xxYffLqJogVsnI+f6cPRcgPel7ywM=
+k8s.io/cri-api v0.20.1/go.mod h1:2JRbKt+BFLTjtrILYVqQK5jqhI+XNdF6UiGMgczeBCI=
+k8s.io/cri-api v0.20.4/go.mod h1:2JRbKt+BFLTjtrILYVqQK5jqhI+XNdF6UiGMgczeBCI=
+k8s.io/cri-api v0.20.6/go.mod h1:ew44AjNXwyn1s0U4xCKGodU7J1HzBeZ1MpGrpa5r8Yc=
+k8s.io/gengo v0.0.0-20200413195148-3a45101e95ac/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
+k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=
+k8s.io/klog/v2 v2.4.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
+k8s.io/kube-openapi v0.0.0-20201113171705-d219536bb9fd/go.mod h1:WOJ3KddDSol4tAGcJo0Tvi+dK12EcqSLqcWsryKMpfM=
+k8s.io/kubernetes v1.13.0/go.mod h1:ocZa8+6APFNC2tX1DZASIbocyYT5jHzqFVsY5aoB7Jk=
+k8s.io/utils v0.0.0-20201110183641-67b214c5f920/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
+rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/getopt v0.0.0-20170811000552-20be20937449 h1:UukjJOsjQH0DIuyyrcod6CXHS6cdaMMuJmrt+SN1j4A=
rsc.io/getopt v0.0.0-20170811000552-20be20937449/go.mod h1:dhCdeqAxkyt5u3/sKRkUXuHaMXUu1Pt13GTQAM2xnig=
+rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
+rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
+sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.14/go.mod h1:LEScyzhFmoF5pso/YSeBstl57mOzx9xlU9n85RGrDQg=
+sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.15/go.mod h1:LEScyzhFmoF5pso/YSeBstl57mOzx9xlU9n85RGrDQg=
+sigs.k8s.io/structured-merge-diff/v4 v4.0.2/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
+sigs.k8s.io/structured-merge-diff/v4 v4.0.3/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
+sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=
+sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=
} else {
san += fmt.Sprintf(",DNS:%s", super.ListenHost)
}
- if hostname, err := os.Hostname(); err != nil {
+ hostname, err := os.Hostname()
+ if err != nil {
return fmt.Errorf("hostname: %w", err)
- } else {
- san += ",DNS:" + hostname
}
+ san += ",DNS:" + hostname
// Generate root key
- err := super.RunProgram(ctx, super.tempdir, runOptions{}, "openssl", "genrsa", "-out", "rootCA.key", "4096")
+ err = super.RunProgram(ctx, super.tempdir, runOptions{}, "openssl", "genrsa", "-out", "rootCA.key", "4096")
if err != nil {
return err
}
}
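The hunk above collects the local hostname into the certificate's subjectAltName list before generating the root CA key. As a stand-alone sketch of that accumulation (stdlib only, with made-up host names standing in for ListenHost and the os.Hostname() result):

```go
package main

import (
	"fmt"
	"strings"
)

// buildSAN joins host names into an OpenSSL subjectAltName value,
// prefixing each with "DNS:" as the bootstrap code above does.
func buildSAN(names []string) string {
	parts := make([]string, 0, len(names))
	for _, name := range names {
		parts = append(parts, "DNS:"+name)
	}
	return strings.Join(parts, ",")
}

func main() {
	// Hypothetical inputs; the real code uses ListenHost and os.Hostname().
	fmt.Println(buildSAN([]string{"localhost", "myhost.example"}))
	// prints: DNS:localhost,DNS:myhost.example
}
```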
var errNeedConfigReload = errors.New("config changed, restart needed")
+var errParseFlags = errors.New("error parsing command line arguments")
type bootCommand struct{}
err := bcmd.run(ctx, prog, args, stdin, stdout, stderr)
if err == errNeedConfigReload {
continue
+ } else if err == errParseFlags {
+ return 2
} else if err != nil {
logger.WithError(err).Info("exiting")
return 1
}
flags := flag.NewFlagSet(prog, flag.ContinueOnError)
- flags.SetOutput(stderr)
loader := config.NewLoader(stdin, super.logger)
loader.SetupFlags(flags)
versionFlag := flags.Bool("version", false, "Write version information to stdout and exit 0")
flags.StringVar(&super.ClusterType, "type", "production", "cluster `type`: development, test, or production")
flags.StringVar(&super.ListenHost, "listen-host", "localhost", "host name or interface address for service listeners")
flags.StringVar(&super.ControllerAddr, "controller-address", ":0", "desired controller address, `host:port` or `:port`")
+ flags.BoolVar(&super.NoWorkbench1, "no-workbench1", false, "do not run workbench1")
flags.BoolVar(&super.OwnTemporaryDatabase, "own-temporary-database", false, "bring up a postgres server and create a temporary database")
timeout := flags.Duration("timeout", 0, "maximum time to wait for cluster to be ready")
shutdown := flags.Bool("shutdown", false, "shut down when the cluster becomes ready")
- err := flags.Parse(args)
- if err == flag.ErrHelp {
- return nil
- } else if err != nil {
- return err
+	if ok, code := cmd.ParseFlags(flags, prog, args, "", stderr); !ok {
+		if code == 0 {
+			return nil
+		}
+		return errParseFlags
} else if *versionFlag {
cmd.Version.RunCommand(prog, args, stdin, stdout, stderr)
return nil
{"WORKBENCH1", super.cluster.Services.Workbench1},
{"WS", super.cluster.Services.Websocket},
} {
- host, port, err := internalPort(cmpt.svc)
- if err != nil {
+ var host, port string
+ if len(cmpt.svc.InternalURLs) == 0 {
+ // We won't run this service, but we need an
+ // upstream port to write in our templated
+ // nginx config. Choose a port that will
+ // return 502 Bad Gateway.
+ port = "9"
+ } else if host, port, err = internalPort(cmpt.svc); err != nil {
return fmt.Errorf("%s internal port: %w (%v)", cmpt.varname, err, cmpt.svc)
- }
- if ok, err := addrIsLocal(net.JoinHostPort(host, port)); !ok || err != nil {
- return fmt.Errorf("urlIsLocal() failed for host %q port %q: %v", host, port, err)
+ } else if ok, err := addrIsLocal(net.JoinHostPort(host, port)); !ok || err != nil {
+ return fmt.Errorf("%s addrIsLocal() failed for host %q port %q: %v", cmpt.varname, host, port, err)
}
vars[cmpt.varname+"PORT"] = port
if err != nil {
return fmt.Errorf("%s external port: %w (%v)", cmpt.varname, err, cmpt.svc)
}
- if ok, err := addrIsLocal(net.JoinHostPort(super.ListenHost, port)); !ok || err != nil {
- return fmt.Errorf("urlIsLocal() failed for host %q port %q: %v", super.ListenHost, port, err)
+ listenAddr := net.JoinHostPort(super.ListenHost, port)
+ if ok, err := addrIsLocal(listenAddr); !ok || err != nil {
+ return fmt.Errorf("%s addrIsLocal(%q) failed: %w", cmpt.varname, listenAddr, err)
}
vars[cmpt.varname+"SSLPORT"] = port
}
if err != nil {
return err
}
- for _, version := range []string{"1.16.6", "1.17.3", "2.0.2"} {
+ for _, version := range []string{"2.2.19"} {
if !strings.Contains(buf.String(), "("+version+")") {
- err = super.RunProgram(ctx, runner.src, runOptions{}, "gem", "install", "--user", "--conservative", "--no-document", "bundler:1.16.6", "bundler:1.17.3", "bundler:2.0.2")
+ err = super.RunProgram(ctx, runner.src, runOptions{}, "gem", "install", "--user", "--conservative", "--no-document", "bundler:2.2.19")
if err != nil {
return err
}
"passenger", "start",
"--address", host,
"--port", port,
- "--log-file", "/dev/stderr",
"--log-level", loglevel,
"--no-friendly-error-pages",
"--disable-anonymous-telemetry",
ClusterType string // e.g., production
ListenHost string // e.g., localhost
ControllerAddr string // e.g., 127.0.0.1:8000
+ NoWorkbench1 bool
OwnTemporaryDatabase bool
Stderr io.Writer
runGoProgram{src: "services/arv-git-httpd", svc: super.cluster.Services.GitHTTP},
runGoProgram{src: "services/health", svc: super.cluster.Services.Health},
runGoProgram{src: "services/keepproxy", svc: super.cluster.Services.Keepproxy, depends: []supervisedTask{runPassenger{src: "services/api"}}},
- runGoProgram{src: "services/keepstore", svc: super.cluster.Services.Keepstore},
+ runServiceCommand{name: "keepstore", svc: super.cluster.Services.Keepstore},
runGoProgram{src: "services/keep-web", svc: super.cluster.Services.WebDAV},
runServiceCommand{name: "ws", svc: super.cluster.Services.Websocket, depends: []supervisedTask{seedDatabase{}}},
installPassenger{src: "services/api"},
runPassenger{src: "services/api", varlibdir: "railsapi", svc: super.cluster.Services.RailsAPI, depends: []supervisedTask{createCertificates{}, seedDatabase{}, installPassenger{src: "services/api"}}},
- installPassenger{src: "apps/workbench", depends: []supervisedTask{seedDatabase{}}}, // dependency ensures workbench doesn't delay api install/startup
- runPassenger{src: "apps/workbench", varlibdir: "workbench1", svc: super.cluster.Services.Workbench1, depends: []supervisedTask{installPassenger{src: "apps/workbench"}}},
seedDatabase{},
}
+ if !super.NoWorkbench1 {
+ tasks = append(tasks,
+ installPassenger{src: "apps/workbench", depends: []supervisedTask{seedDatabase{}}}, // dependency ensures workbench doesn't delay api install/startup
+ runPassenger{src: "apps/workbench", varlibdir: "workbench1", svc: super.cluster.Services.Workbench1, depends: []supervisedTask{installPassenger{src: "apps/workbench"}}},
+ )
+ }
if super.ClusterType != "test" {
tasks = append(tasks,
runServiceCommand{name: "dispatch-cloud", svc: super.cluster.Services.DispatchCloud},
svc.ExternalURL = arvados.URL{Scheme: "wss", Host: fmt.Sprintf("%s:%s", super.ListenHost, nextPort(super.ListenHost)), Path: "/websocket"}
}
}
+ if super.NoWorkbench1 && svc == &cluster.Services.Workbench1 {
+ // When workbench1 is disabled, it gets an
+ // ExternalURL (so we have a valid listening
+ // port to write in our Nginx config) but no
+ // InternalURLs (so health checker doesn't
+ // complain).
+ continue
+ }
if len(svc.InternalURLs) == 0 {
svc.InternalURLs = map[arvados.URL]arvados.ServiceInstance{
{Scheme: "http", Host: fmt.Sprintf("%s:%s", super.ListenHost, nextPort(super.ListenHost)), Path: "/"}: {},
AccessViaHosts: map[arvados.URL]arvados.VolumeAccess{
url: {},
},
+ StorageClasses: map[string]bool{
+ "default": true,
+ "foo": true,
+ "bar": true,
+ },
}
}
+ cluster.StorageClasses = map[string]arvados.StorageClassConfig{
+ "default": {Default: true},
+ "foo": {},
+ "bar": {},
+ }
}
if super.OwnTemporaryDatabase {
cluster.PostgreSQL.Connection = arvados.PostgreSQLConnection{
Note: This subcommand uses the "arvados" Python module. If that is
not installed, try:
* "pip install arvados" (either as root or in a virtualenv), or
-* "sudo apt-get install python-arvados-python-client", or
+* "sudo apt-get install python3-arvados-python-client", or
* see https://doc.arvados.org/install for more details.
`
"os"
"git.arvados.org/arvados.git/lib/cloud"
+ "git.arvados.org/arvados.git/lib/cmd"
"git.arvados.org/arvados.git/lib/config"
"git.arvados.org/arvados.git/lib/dispatchcloud"
"git.arvados.org/arvados.git/sdk/go/arvados"
destroyExisting := flags.Bool("destroy-existing", false, "Destroy any existing instances tagged with our InstanceSetID, instead of erroring out")
shellCommand := flags.String("command", "", "Run an interactive shell command on the test instance when it boots")
pauseBeforeDestroy := flags.Bool("pause-before-destroy", false, "Prompt and wait before destroying the test instance")
- err = flags.Parse(args)
- if err == flag.ErrHelp {
- err = nil
- return 0
- } else if err != nil {
- return 2
- }
-
- if len(flags.Args()) != 0 {
- flags.Usage()
- return 2
+ if ok, code := cmd.ParseFlags(flags, prog, args, "", stderr); !ok {
+ return code
}
logger := ctxlog.New(stderr, "text", "info")
defer func() {
"fmt"
"math/big"
"sync"
+ "sync/atomic"
+ "time"
"git.arvados.org/arvados.git/lib/cloud"
"git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/aws/aws-sdk-go/aws"
+ "github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds"
"github.com/aws/aws-sdk-go/aws/ec2metadata"
+ "github.com/aws/aws-sdk-go/aws/request"
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/ec2"
"github.com/sirupsen/logrus"
// Driver is the ec2 implementation of the cloud.Driver interface.
var Driver = cloud.DriverFunc(newEC2InstanceSet)
+const (
+ throttleDelayMin = time.Second
+ throttleDelayMax = time.Minute
+)
+
type ec2InstanceSetConfig struct {
AccessKeyID string
SecretAccessKey string
}
type ec2InstanceSet struct {
- ec2config ec2InstanceSetConfig
- instanceSetID cloud.InstanceSetID
- logger logrus.FieldLogger
- client ec2Interface
- keysMtx sync.Mutex
- keys map[string]string
+ ec2config ec2InstanceSetConfig
+ instanceSetID cloud.InstanceSetID
+ logger logrus.FieldLogger
+ client ec2Interface
+ keysMtx sync.Mutex
+ keys map[string]string
+ throttleDelayCreate atomic.Value
+ throttleDelayInstances atomic.Value
}
func newEC2InstanceSet(config json.RawMessage, instanceSetID cloud.InstanceSetID, _ cloud.SharedResourceTags, logger logrus.FieldLogger) (prv cloud.InstanceSet, err error) {
}
rsv, err := instanceSet.client.RunInstances(&rii)
-
+ err = wrapError(err, &instanceSet.throttleDelayCreate)
if err != nil {
return nil, err
}
dii := &ec2.DescribeInstancesInput{Filters: filters}
for {
dio, err := instanceSet.client.DescribeInstances(dii)
+ err = wrapError(err, &instanceSet.throttleDelayInstances)
if err != nil {
return nil, err
}
func (inst *ec2Instance) VerifyHostKey(ssh.PublicKey, *ssh.Client) error {
return cloud.ErrNotImplemented
}
+
+type rateLimitError struct {
+ error
+ earliestRetry time.Time
+}
+
+func (err rateLimitError) EarliestRetry() time.Time {
+ return err.earliestRetry
+}
+
+var isCodeCapacity = map[string]bool{
+ "InsufficientInstanceCapacity": true,
+ "VcpuLimitExceeded": true,
+ "MaxSpotInstanceCountExceeded": true,
+}
+
+// isErrorCapacity returns whether the error's AWS error code indicates
+// an instance capacity or quota problem. Returns false if err is nil.
+func isErrorCapacity(err error) bool {
+	if aerr, ok := err.(awserr.Error); ok && aerr != nil {
+		return isCodeCapacity[aerr.Code()]
+	}
+	return false
+}
+
+type ec2QuotaError struct {
+ error
+}
+
+func (er *ec2QuotaError) IsQuotaError() bool {
+ return true
+}
+
+func wrapError(err error, throttleValue *atomic.Value) error {
+ if request.IsErrorThrottle(err) {
+ // Back off exponentially until an upstream call
+ // either succeeds or returns a non-throttle error.
+ d, _ := throttleValue.Load().(time.Duration)
+ d = d*3/2 + time.Second
+ if d < throttleDelayMin {
+ d = throttleDelayMin
+ } else if d > throttleDelayMax {
+ d = throttleDelayMax
+ }
+ throttleValue.Store(d)
+ return rateLimitError{error: err, earliestRetry: time.Now().Add(d)}
+	} else if isErrorCapacity(err) {
+		return &ec2QuotaError{err}
+	}
+	// Success, or a non-throttle error: reset the backoff delay.
+	throttleValue.Store(time.Duration(0))
+	return err
+}
import (
"encoding/json"
"flag"
+ "sync/atomic"
"testing"
"git.arvados.org/arvados.git/lib/cloud"
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/config"
"github.com/aws/aws-sdk-go/aws"
+ "github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/service/ec2"
"github.com/sirupsen/logrus"
check "gopkg.in/check.v1"
c.Check(i.Destroy(), check.IsNil)
}
}
+
+func (*EC2InstanceSetSuite) TestWrapError(c *check.C) {
+ retryError := awserr.New("Throttling", "", nil)
+ wrapped := wrapError(retryError, &atomic.Value{})
+ _, ok := wrapped.(cloud.RateLimitError)
+ c.Check(ok, check.Equals, true)
+
+ quotaError := awserr.New("InsufficientInstanceCapacity", "", nil)
+	wrapped = wrapError(quotaError, &atomic.Value{})
+ _, ok = wrapped.(cloud.QuotaError)
+ c.Check(ok, check.Equals, true)
+}
"runtime"
"sort"
"strings"
+
+ "github.com/sirupsen/logrus"
)
type Handler interface {
copy(newargs[flagargs+1:], args[flagargs+1:])
return newargs
}
+
+type NoPrefixFormatter struct{}
+
+func (NoPrefixFormatter) Format(entry *logrus.Entry) ([]byte, error) {
+ return []byte(entry.Message + "\n"), nil
+}
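NoPrefixFormatter emits each log entry as its bare message plus a newline, dropping the level/timestamp prefix a default formatter would add. The idea can be sketched without pulling in logrus, using a hypothetical Entry type in place of *logrus.Entry:

```go
package main

import "fmt"

// Entry stands in for logrus.Entry; only the Message field matters here.
type Entry struct {
	Message string
}

// NoPrefixFormatter emits just the message followed by a newline.
type NoPrefixFormatter struct{}

func (NoPrefixFormatter) Format(entry *Entry) ([]byte, error) {
	return []byte(entry.Message + "\n"), nil
}

func main() {
	b, _ := NoPrefixFormatter{}.Format(&Entry{Message: "boot: all services ready"})
	fmt.Print(string(b))
	// prints: boot: all services ready
}
```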
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: Apache-2.0
+
+package cmd
+
+import (
+ "flag"
+ "fmt"
+ "io"
+)
+
+// ParseFlags calls f.Parse(args) and prints appropriate error/help
+// messages to stderr.
+//
+// The positional argument is "" if no positional arguments are
+// accepted, otherwise a string to print with the usage message,
+// "Usage: {prog} [options] {positional}".
+//
+// The first return value, ok, is true if the program should continue
+// running normally, or false if it should exit now.
+//
+// If ok is false, the second return value is an appropriate exit
+// code: 0 if "-help" was given, 2 if there was a usage error.
+func ParseFlags(f FlagSet, prog string, args []string, positional string, stderr io.Writer) (ok bool, exitCode int) {
+ f.Init(prog, flag.ContinueOnError)
+ f.SetOutput(io.Discard)
+ err := f.Parse(args)
+ switch err {
+ case nil:
+ if f.NArg() > 0 && positional == "" {
+ fmt.Fprintf(stderr, "unrecognized command line arguments: %v (try -help)\n", f.Args())
+ return false, 2
+ }
+ return true, 0
+ case flag.ErrHelp:
+ if f, ok := f.(*flag.FlagSet); ok && f.Usage != nil {
+ f.SetOutput(stderr)
+ f.Usage()
+ } else {
+ fmt.Fprintf(stderr, "Usage: %s [options] %s\n", prog, positional)
+ f.SetOutput(stderr)
+ f.PrintDefaults()
+ }
+ return false, 0
+ default:
+ fmt.Fprintf(stderr, "error parsing command line arguments: %s (try -help)\n", err)
+ return false, 2
+ }
+}
"os"
"os/exec"
- "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/lib/cmd"
"git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/ghodss/yaml"
"github.com/sirupsen/logrus"
}
flags := flag.NewFlagSet("", flag.ContinueOnError)
- flags.SetOutput(stderr)
loader.SetupFlags(flags)
- err = flags.Parse(args)
- if err == flag.ErrHelp {
- err = nil
- return 0
- } else if err != nil {
- return 2
+ if ok, code := cmd.ParseFlags(flags, prog, args, "", stderr); !ok {
+ return code
}
-
- if len(flags.Args()) != 0 {
- flags.Usage()
- return 2
- }
-
cfg, err := loader.Load()
if err != nil {
return 1
Logger: logger,
}
- flags := flag.NewFlagSet("", flag.ContinueOnError)
- flags.SetOutput(stderr)
+ flags := flag.NewFlagSet(prog, flag.ContinueOnError)
loader.SetupFlags(flags)
strict := flags.Bool("strict", true, "Strict validation of configuration file (warnings result in non-zero exit code)")
-
- err = flags.Parse(args)
- if err == flag.ErrHelp {
- err = nil
- return 0
- } else if err != nil {
- return 2
- }
-
- if len(flags.Args()) != 0 {
- flags.Usage()
- return 2
+ if ok, code := cmd.ParseFlags(flags, prog, args, "", stderr); !ok {
+ return code
}
// Load the config twice -- once without loading deprecated
if err != nil {
return 1
}
+ // Reset() to avoid printing the same warnings twice when they
+ // are logged by both without-legacy and with-legacy loads.
+ logbuf.Reset()
loader.SkipDeprecated = false
loader.SkipLegacy = false
withDepr, err := loader.Load()
if err != nil {
return 1
}
- problems := false
- if warnAboutProblems(logger, withDepr) {
- problems = true
- }
cmd := exec.Command("diff", "-u", "--label", "without-deprecated-configs", "--label", "relying-on-deprecated-configs", "/dev/fd/3", "/dev/fd/4")
for _, obj := range []interface{}{withoutDepr, withDepr} {
y, _ := yaml.Marshal(obj)
return 1
}
}
-
- if problems {
- return 1
- }
return 0
}
-func warnAboutProblems(logger logrus.FieldLogger, cfg *arvados.Config) bool {
- warned := false
- for id, cc := range cfg.Clusters {
- if cc.SystemRootToken == "" {
- logger.Warnf("Clusters.%s.SystemRootToken is empty; see https://doc.arvados.org/master/install/install-keepstore.html", id)
- warned = true
- }
- if cc.ManagementToken == "" {
- logger.Warnf("Clusters.%s.ManagementToken is empty; see https://doc.arvados.org/admin/management-token.html", id)
- warned = true
- }
- }
- return warned
-}
-
var DumpDefaultsCommand defaultsCommand
type defaultsCommand struct{}
var stderr bytes.Buffer
code := DumpCommand.RunCommand("arvados config-dump", []string{"-badarg"}, bytes.NewBuffer(nil), bytes.NewBuffer(nil), &stderr)
c.Check(code, check.Equals, 2)
- c.Check(stderr.String(), check.Matches, `(?ms)flag provided but not defined: -badarg\nUsage:\n.*`)
+ c.Check(stderr.String(), check.Equals, "error parsing command line arguments: flag provided but not defined: -badarg (try -help)\n")
}
func (s *CommandSuite) TestDump_EmptyInput(c *check.C) {
c.Check(stderr.String(), check.Matches, `(?ms).*unexpected object in config entry: Clusters.z1234.PostgreSQL.ConnectionPool"\n.*`)
}
+func (s *CommandSuite) TestCheck_DuplicateWarnings(c *check.C) {
+ var stdout, stderr bytes.Buffer
+ in := `
+Clusters:
+ z1234: {}
+`
+ code := CheckCommand.RunCommand("arvados config-check", []string{"-config", "-"}, bytes.NewBufferString(in), &stdout, &stderr)
+ c.Check(code, check.Equals, 1)
+ c.Check(stderr.String(), check.Matches, `(?ms).*SystemRootToken.*`)
+ c.Check(stderr.String(), check.Not(check.Matches), `(?ms).*SystemRootToken.*SystemRootToken.*`)
+}
+
func (s *CommandSuite) TestDump_Formatting(c *check.C) {
var stdout, stderr bytes.Buffer
in := `
c.Check(stdout.String(), check.Matches, `(?ms).*\n *ManagementToken: secret\n.*`)
c.Check(stdout.String(), check.Not(check.Matches), `(?ms).*UnknownKey.*`)
}
+
+func (s *CommandSuite) TestDump_KeyOrder(c *check.C) {
+ in := `
+Clusters:
+ z1234:
+ Login:
+ Test:
+ Users:
+ a: {}
+ d: {}
+ c: {}
+ b: {}
+ e: {}
+`
+ for trial := 0; trial < 20; trial++ {
+ var stdout, stderr bytes.Buffer
+ code := DumpCommand.RunCommand("arvados config-dump", []string{"-config", "-"}, bytes.NewBufferString(in), &stdout, &stderr)
+ c.Assert(code, check.Equals, 0)
+ if !c.Check(stdout.String(), check.Matches, `(?ms).*a:.*b:.*c:.*d:.*e:.*`) {
+ c.Logf("config-dump did not use lexical key order on trial %d", trial)
+ c.Log("stdout:\n", stdout.String())
+ c.Log("stderr:\n", stderr.String())
+ c.FailNow()
+ }
+ }
+}
+
+func (s *CommandSuite) TestCheck_KeyOrder(c *check.C) {
+ in := `
+Clusters:
+ z1234:
+ ManagementToken: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ SystemRootToken: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ Collections:
+ BlobSigningKey: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ InstanceTypes:
+ a32a: {}
+ a48a: {}
+ a4a: {}
+`
+ for trial := 0; trial < 20; trial++ {
+ var stdout, stderr bytes.Buffer
+ code := CheckCommand.RunCommand("arvados config-check", []string{"-config=-", "-strict=true"}, bytes.NewBufferString(in), &stdout, &stderr)
+ if !c.Check(code, check.Equals, 0) || stdout.String() != "" || stderr.String() != "" {
+ c.Logf("config-check returned error or non-empty output on trial %d", trial)
+ c.Log("stdout:\n", stdout.String())
+ c.Log("stderr:\n", stderr.String())
+ c.FailNow()
+ }
+ }
+}
# In each of the service sections below, the keys under
# InternalURLs are the endpoints where the service should be
- # listening, and reachable from other hosts in the cluster.
- SAMPLE:
- InternalURLs:
- "http://host1.example:12345": {}
- "http://host2.example:12345":
- # Rendezvous is normally empty/omitted. When changing the
- # URL of a Keepstore service, Rendezvous should be set to
- # the old URL (with trailing slash omitted) to preserve
- # rendezvous ordering.
- Rendezvous: ""
- SAMPLE:
- Rendezvous: ""
- ExternalURL: "-"
+ # listening, and reachable from other hosts in the
+ # cluster. Example:
+ #
+ # InternalURLs:
+ # "http://host1.example:12345": {}
+ # "http://host2.example:12345": {}
RailsAPI:
- InternalURLs: {}
- ExternalURL: "-"
+ InternalURLs: {SAMPLE: {}}
+ ExternalURL: ""
Controller:
- InternalURLs: {}
+ InternalURLs: {SAMPLE: {}}
ExternalURL: ""
Websocket:
- InternalURLs: {}
+ InternalURLs: {SAMPLE: {}}
ExternalURL: ""
Keepbalance:
- InternalURLs: {}
- ExternalURL: "-"
+ InternalURLs: {SAMPLE: {}}
+ ExternalURL: ""
GitHTTP:
- InternalURLs: {}
+ InternalURLs: {SAMPLE: {}}
ExternalURL: ""
GitSSH:
- InternalURLs: {}
+ InternalURLs: {SAMPLE: {}}
ExternalURL: ""
DispatchCloud:
- InternalURLs: {}
- ExternalURL: "-"
- SSO:
- InternalURLs: {}
+ InternalURLs: {SAMPLE: {}}
+ ExternalURL: ""
+ DispatchLSF:
+ InternalURLs: {SAMPLE: {}}
ExternalURL: ""
Keepproxy:
- InternalURLs: {}
+ InternalURLs: {SAMPLE: {}}
ExternalURL: ""
WebDAV:
- InternalURLs: {}
+ InternalURLs: {SAMPLE: {}}
# Base URL for Workbench inline preview. If blank, use
# WebDAVDownload instead, and disable inline preview.
# If both are empty, downloading collections from workbench
ExternalURL: ""
WebDAVDownload:
- InternalURLs: {}
+ InternalURLs: {SAMPLE: {}}
# Base URL for download links. If blank, serve links to WebDAV
# with disposition=attachment query param. Unlike preview links,
# browsers do not render attachments, so there is no risk of XSS.
ExternalURL: ""
Keepstore:
- InternalURLs: {}
- ExternalURL: "-"
+ InternalURLs:
+ SAMPLE:
+ # Rendezvous is normally empty/omitted. When changing the
+ # URL of a Keepstore service, Rendezvous should be set to
+ # the old URL (with trailing slash omitted) to preserve
+ # rendezvous ordering.
+ Rendezvous: ""
+ ExternalURL: ""
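The Rendezvous comment above refers to Keep's rendezvous ("highest random weight") hashing, which gives each block a stable ordering of keepstore servers. A minimal illustrative sketch of the idea (not Keep's actual implementation; the hash choice and identifiers here are assumptions):

```go
package main

import (
	"crypto/md5"
	"fmt"
	"sort"
)

// rendezvousOrder sorts service identifiers by the hash of
// blockHash+id, so each block gets its own deterministic server
// ordering. Changing a service's identifier changes where its blocks
// "land", which is why a renamed Keepstore URL keeps its old URL in
// the Rendezvous field.
func rendezvousOrder(blockHash string, ids []string) []string {
	out := append([]string(nil), ids...)
	sort.Slice(out, func(i, j int) bool {
		hi := md5.Sum([]byte(blockHash + out[i]))
		hj := md5.Sum([]byte(blockHash + out[j]))
		return fmt.Sprintf("%x", hi) < fmt.Sprintf("%x", hj)
	})
	return out
}

func main() {
	fmt.Println(rendezvousOrder("d41d8cd98f00b204e9800998ecf8427e",
		[]string{"http://host1.example:25107", "http://host2.example:25107"}))
}
```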
Composer:
- InternalURLs: {}
+ InternalURLs: {SAMPLE: {}}
ExternalURL: ""
WebShell:
- InternalURLs: {}
+ InternalURLs: {SAMPLE: {}}
# ShellInABox service endpoint URL for a given VM. If empty, do not
# offer web shell logins.
#
# https://*.webshell.uuid_prefix.arvadosapi.com
ExternalURL: ""
Workbench1:
- InternalURLs: {}
+ InternalURLs: {SAMPLE: {}}
ExternalURL: ""
Workbench2:
- InternalURLs: {}
+ InternalURLs: {SAMPLE: {}}
ExternalURL: ""
Health:
- InternalURLs: {}
- ExternalURL: "-"
+ InternalURLs: {SAMPLE: {}}
+ ExternalURL: ""
PostgreSQL:
# max concurrent connections per arvados server daemon
dbname: ""
SAMPLE: ""
API:
+ # Maximum lifetime for client tokens created by regular users. This
+ # value is also used as the default expiration policy when no
+ # expiration date is specified.
+ # The default value zero means token expirations are not clamped
+ # and no default expiration is set.
+ MaxTokenLifetime: 0s
+
# Maximum size (in bytes) allowed for a single API request. This
# limit is published in the discovery document for use by clients.
# Note: You must separately configure the upstream web server or
# Timeout on requests to internal Keep services.
KeepServiceRequestTimeout: 15s
+ # Vocabulary file path, local to the node running the controller.
+ # This JSON file should contain the description of what's allowed
+ # as objects' metadata. Its format is described at:
+ # https://doc.arvados.org/admin/metadata-vocabulary.html
+ VocabularyPath: ""
+
Users:
# Config parameters to automatically set up new users. If enabled,
# these users will be able to self-activate. Enable this if you want
# user agreements. Should only be enabled for development.
NewUsersAreActive: false
+ # Newly activated users (whether set up by an admin or via
+ # AutoSetupNewUsers) immediately become visible to other active
+ # users.
+ #
+ # On a multi-tenant cluster, where the intent is for users to be
+ # invisible to one another unless they have been added to the
+ # same group(s) via the Workbench admin interface, change this to
+ # false.
+ ActivatedUsersAreVisibleToOthers: true
+
# The e-mail address of the user you would like to become marked as an admin
# user on their first login.
AutoAdminUserWithEmail: ""
AdminNotifierEmailFrom: arvados@example.com
EmailSubjectPrefix: "[ARVADOS] "
UserNotifierEmailFrom: arvados@example.com
+ UserNotifierEmailBcc: {}
NewUserNotificationRecipients: {}
NewInactiveUserNotificationRecipients: {}
Thanks,
Your Arvados administrator.
+ # If RoleGroupsVisibleToAll is true, all role groups are visible
+ # to all active users.
+ #
+ # If false, users must be granted permission to role groups in
+ # order to see them. This is more appropriate for a multi-tenant
+ # cluster.
+ RoleGroupsVisibleToAll: true
+
AuditLogs:
# Time to keep audit logs, in seconds. (An audit log is a row added
# to the "logs" table in the PostgreSQL database each time an
#
# BalancePeriod determines the interval between start times of
# successive scan/balance operations. If a scan/balance operation
- # takes longer than RunPeriod, the next one will follow it
+ # takes longer than BalancePeriod, the next one will follow it
# immediately.
#
# If SIGUSR1 is received during an idle period between operations,
# long-running balancing operation.
BalanceTimeout: 6h
+ # Maximum number of replication_confirmed /
+ # storage_classes_confirmed updates to write to the database
+ # after a rebalancing run. When many updates are needed, this
+ # spreads them over a few runs rather than applying them all at
+ # once.
+ BalanceUpdateLimit: 100000
+
# Default lifetime for ephemeral collections: 2 weeks. This must not
# be less than BlobSigningTTL.
DefaultTrashLifetime: 336h
# is older than the amount of seconds defined on PreserveVersionIfIdle,
# a snapshot of the collection's previous state is created and linked to
# the current collection.
- CollectionVersioning: false
+ CollectionVersioning: true
# 0s = auto-create a new version on every update.
# -1s = never auto-create new versions.
# > 0s = auto-create a new version when older than the specified number of seconds.
- PreserveVersionIfIdle: -1s
+ PreserveVersionIfIdle: 10s
# If non-empty, allow project and collection names to contain
# the "/" character (slash/stroke/solidus), and replace "/" with
# WebDAV would have to expose XSS vulnerabilities in order to
# handle the redirect (see discussion on Services.WebDAV).
#
- # This setting has no effect in the recommended configuration,
- # where the WebDAV is configured to have a separate domain for
- # every collection; in this case XSS protection is provided by
- # browsers' same-origin policy.
+ # This setting has no effect in the recommended configuration, where the
+ # WebDAV service is configured to have a separate domain for every
+ # collection and XSS protection is provided by browsers' same-origin
+ # policy.
#
# The default setting (false) is appropriate for a multi-user site.
TrustAllContent: false
# Cache parameters for WebDAV content serving:
- # * TTL: Maximum time to cache manifests and permission checks.
- # * UUIDTTL: Maximum time to cache collection state.
- # * MaxBlockEntries: Maximum number of block cache entries.
- # * MaxCollectionEntries: Maximum number of collection cache entries.
- # * MaxCollectionBytes: Approximate memory limit for collection cache.
- # * MaxPermissionEntries: Maximum number of permission cache entries.
- # * MaxUUIDEntries: Maximum number of UUID cache entries.
WebDAVCache:
+ # Time to cache manifests, permission checks, and sessions.
TTL: 300s
+
+ # Time to cache collection state.
UUIDTTL: 5s
- MaxBlockEntries: 4
+
+ # Block cache entries. Each block consumes up to 64 MiB RAM.
+ MaxBlockEntries: 20
+
+ # Collection cache entries.
MaxCollectionEntries: 1000
- MaxCollectionBytes: 100000000
- MaxPermissionEntries: 1000
- MaxUUIDEntries: 1000
+
+ # Approximate memory limit (in bytes) for collection cache.
+ MaxCollectionBytes: 100000000
+
+ # UUID cache entries.
+ MaxUUIDEntries: 1000
+
+ # Persistent sessions.
+ MaxSessions: 100
+
+ # Selectively set permissions for regular users and admins to
+ # download or upload data files via Workbench, WebDAV, and the
+ # S3 API.
+ WebDAVPermission:
+ User:
+ Download: true
+ Upload: true
+ Admin:
+ Download: true
+ Upload: true
+
+ # Selectively set permissions for regular users and admins to be
+ # able to download or upload blocks using arv-put and
+ # arv-get from outside the cluster.
+ KeepproxyPermission:
+ User:
+ Download: true
+ Upload: true
+ Admin:
+ Download: true
+ Upload: true
+
+ # Post upload / download events to the API server logs table, so
+ # that they can be included in the arv-user-activity report.
+ # You can disable this if you find that it is creating excess
+ # load on the API server and you don't need it.
+ WebDAVLogEvents: true
Login:
- # One of the following mechanisms (SSO, Google, PAM, LDAP, or
+ # One of the following mechanisms (Google, PAM, LDAP, or
# LoginCluster) should be enabled; see
# https://doc.arvados.org/install/setup-login.html
# ID > Web application) and add your controller's /login URL
# (e.g., "https://zzzzz.example.com/login") as an authorized
# redirect URL.
- #
- # Incompatible with ForceLegacyAPI14. ProviderAppID must be
- # blank.
ClientID: ""
ClientSecret: ""
AuthenticationRequestParameters:
SAMPLE: ""
+ # Accept an OIDC access token as an API token if the OIDC
+ # provider's UserInfo endpoint accepts it.
+ #
+ # AcceptAccessTokenScope should also be used when enabling
+ # this feature.
+ AcceptAccessToken: false
+
+ # Before accepting an OIDC access token as an API token, first
+ # check that it is a JWT whose "scope" value includes this
+ # value. Example: "https://zzzzz.example.com/" (your Arvados
+ # API endpoint).
+ #
+ # If this value is empty and AcceptAccessToken is true, all
+ # access tokens will be accepted regardless of scope,
+ # including non-JWT tokens. This is not recommended.
+ AcceptAccessTokenScope: ""
+
PAM:
- # (Experimental) Use PAM to authenticate users.
+ # Use PAM to authenticate users.
Enable: false
# PAM service name. PAM will apply the policy in the
# originally supplied by the user will be used.
UsernameAttribute: uid
- SSO:
- # Authenticate with a separate SSO server. (Deprecated)
- Enable: false
-
- # ProviderAppID and ProviderAppSecret are generated during SSO
- # setup; see
- # https://doc.arvados.org/v2.0/install/install-sso.html#update-config
- ProviderAppID: ""
- ProviderAppSecret: ""
-
Test:
# Authenticate users listed here in the config file. This
# feature is intended to be used in test environments, and
# Default value zero means tokens don't have expiration.
TokenLifetime: 0s
+ # If true (default), tokens issued through login are allowed to
+ # create new tokens.
+ # If false, tokens issued through login are not allowed to view or
+ # create other tokens. New tokens can only be created by going
+ # through login again.
+ IssueTrustedTokens: true
+
# When the token is returned to a client, the token itself may
- # be restricted from manipulating other tokens based on whether
+ # be restricted from viewing/creating other tokens based on whether
# the client is "trusted" or not. The local Workbench1 and
# Workbench2 are trusted by default, but if this is a
# LoginCluster, you probably want to include the other Workbench
UsePreemptibleInstances: false
# PEM encoded SSH key (RSA, DSA, or ECDSA) used by the
- # (experimental) cloud dispatcher for executing containers on
- # worker VMs. Begins with "-----BEGIN RSA PRIVATE KEY-----\n"
+ # cloud dispatcher for executing containers on worker VMs.
+ # Begins with "-----BEGIN RSA PRIVATE KEY-----\n"
# and ends with "\n-----END RSA PRIVATE KEY-----\n".
DispatchPrivateKey: ""
# Minimum time between two attempts to run the same container
MinRetryPeriod: 0s
+ # Container runtime: "docker" (default) or "singularity"
+ RuntimeEngine: docker
+
+ # When running a container, run a dedicated keepstore process,
+ # using the specified number of 64 MiB memory buffers per
+ # allocated CPU core (VCPUs in the container's runtime
+ # constraints). The dedicated keepstore handles I/O for
+ # collections mounted in the container, as well as saving
+ # container logs.
+ #
+ # A zero value disables this feature.
+ #
+ # In order for this feature to be activated, no volume may use
+ # AccessViaHosts, and each volume must have Replication higher
+ # than Collections.DefaultReplication. If these requirements are
+ # not satisfied, the feature is disabled automatically
+ # regardless of the value given here.
+ #
+ # Note that when this configuration is enabled, the entire
+ # cluster configuration file, including the system root token,
+ # is copied to the worker node and held in memory for the
+ # duration of the container.
+ LocalKeepBlobBuffersPerVCPU: 1
+
+ # When running a dedicated keepstore process for a container
+ # (see LocalKeepBlobBuffersPerVCPU), write keepstore log
+ # messages to keepstore.txt in the container's log collection.
+ #
+ # These log messages can reveal some volume configuration
+ # details, error messages from the cloud storage provider, etc.,
+ # which are not otherwise visible to users.
+ #
+ # Accepted values:
+ # * "none" -- no keepstore.txt file
+ # * "all" -- all logs, including request and response lines
+ # * "errors" -- all logs except "response" logs with 2xx
+ # response codes and "request" logs
+ LocalKeepLogsToContainerLog: none
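The three modes above amount to a simple filter over keepstore log entries. A hedged sketch of the selection rule (the field names here are illustrative, not the actual log schema):

```go
package main

import "fmt"

// includeInContainerLog reports whether a keepstore log entry should
// be written to keepstore.txt, following the "none"/"all"/"errors"
// modes described above. msgType and status are illustrative fields.
func includeInContainerLog(mode, msgType string, status int) bool {
	switch mode {
	case "all":
		return true
	case "errors":
		if msgType == "request" {
			return false // "errors" drops request lines...
		}
		if msgType == "response" && status >= 200 && status < 300 {
			return false // ...and responses with 2xx codes
		}
		return true
	default: // "none" or unrecognized
		return false
	}
}

func main() {
	fmt.Println(includeInContainerLog("errors", "response", 500)) // kept
	fmt.Println(includeInContainerLog("errors", "response", 200)) // dropped
}
```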
+
Logging:
# When you run the db:delete_old_container_logs task, it will find
# containers that have been finished for at least this many seconds,
# (See http://ruby-doc.org/core-2.2.2/Kernel.html#method-i-format for more.)
AssignNodeHostname: "compute%<slot_number>d"
+ LSF:
+ # Arguments to bsub when submitting Arvados containers as LSF jobs.
+ #
+ # Template variables starting with % will be substituted as follows:
+ #
+ # %U uuid
+ # %C number of VCPUs
+ # %M memory in MB
+ # %T tmp in MB
+ #
+ # Use %% to express a literal %. The %%J in the default will be changed
+ # to %J, which is interpreted by bsub itself.
+ #
+ # Note that the default arguments cause LSF to write two files
+ # in /tmp on the compute node each time an Arvados container
+ # runs. Ensure you have something in place to delete old files
+ # from /tmp, or adjust the "-o" and "-e" arguments accordingly.
+ BsubArgumentsList: ["-o", "/tmp/crunch-run.%%J.out", "-e", "/tmp/crunch-run.%%J.err", "-J", "%U", "-n", "%C", "-D", "%MMB", "-R", "rusage[mem=%MMB:tmp=%TMB] span[hosts=1]", "-R", "select[mem>=%MMB]", "-R", "select[tmp>=%TMB]", "-R", "select[ncpus>=%C]"]
+
+ # Use sudo to switch to this user account when submitting LSF
+ # jobs.
+ #
+ # This account must exist on the hosts where LSF jobs run
+ # ("execution hosts"), as well as on the host where the
+ # Arvados LSF dispatcher runs ("submission host").
+ BsubSudoUser: "crunch"
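The %-substitution described for BsubArgumentsList can be sketched as follows (an illustrative expansion, not the dispatcher's actual code; the sample values are made up):

```go
package main

import (
	"fmt"
	"strings"
)

// expandBsubArg substitutes the template variables documented above:
// %U (uuid), %C (VCPUs), %M (memory MB), %T (tmp MB), and %% for a
// literal %. Listing "%%" first gives it priority, so "%%J" becomes
// "%J" for bsub itself to interpret.
func expandBsubArg(arg, uuid string, vcpus, memMB, tmpMB int) string {
	r := strings.NewReplacer(
		"%%", "%",
		"%U", uuid,
		"%C", fmt.Sprintf("%d", vcpus),
		"%M", fmt.Sprintf("%d", memMB),
		"%T", fmt.Sprintf("%d", tmpMB),
	)
	return r.Replace(arg)
}

func main() {
	fmt.Println(expandBsubArg("-o /tmp/crunch-run.%%J.out", "zzzzz-dz642-x", 4, 8192, 16384))
	// -o /tmp/crunch-run.%J.out
	fmt.Println(expandBsubArg("rusage[mem=%MMB:tmp=%TMB]", "zzzzz-dz642-x", 4, 8192, 16384))
	// rusage[mem=8192MB:tmp=16384MB]
}
```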
+
JobsAPI:
# Enable the legacy 'jobs' API (crunch v1). This value must be a string.
#
GitInternalDir: /var/lib/arvados/internal.git
CloudVMs:
- # Enable the cloud scheduler (experimental).
+ # Enable the cloud scheduler.
Enable: false
# Name/number of port where workers' SSH services listen.
# Shell command to execute on each worker to determine whether
# the worker is booted and ready to run containers. It should
# exit zero if the worker is ready.
- BootProbeCommand: "docker ps -q"
+ BootProbeCommand: "systemctl is-system-running"
# Minimum interval between consecutive probes to a single
# worker.
# Maximum create/destroy-instance operations per second (0 =
# unlimited).
- MaxCloudOpsPerSecond: 0
+ MaxCloudOpsPerSecond: 10
- # Maximum concurrent node creation operations (0 = unlimited). This is
- # recommended by Azure in certain scenarios (see
- # https://docs.microsoft.com/en-us/azure/virtual-machines/linux/capture-image)
- # and can be used with other cloud providers too, if desired.
- MaxConcurrentInstanceCreateOps: 0
+ # Maximum concurrent instance creation operations (0 = unlimited).
+ #
+ # MaxConcurrentInstanceCreateOps limits the number of instance creation
+ # requests that can be in flight at any one time, whereas
+ # MaxCloudOpsPerSecond limits the number of create/destroy operations
+ # that can be started per second.
+ #
+ # Because the API for instance creation on Azure is synchronous, it is
+ # recommended to increase MaxConcurrentInstanceCreateOps when running
+ # on Azure. When using managed images, a value of 20 would be
+ # appropriate. When using Azure Shared Image Galleries, it could be set
+ # higher. For more information, see
+ # https://docs.microsoft.com/en-us/azure/virtual-machines/linux/capture-image
+ #
+ # MaxConcurrentInstanceCreateOps can be increased for other cloud
+ # providers too, if desired.
+ MaxConcurrentInstanceCreateOps: 1
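The distinction drawn above, rate limiting (operations started per second) versus concurrency limiting (operations in flight at once), can be sketched with a ticker and a counting semaphore (an illustrative sketch, not dispatchcloud's implementation):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// runCreates starts n simulated instance-create calls, holding each
// one until both limits allow it: the ticker plays the role of
// MaxCloudOpsPerSecond, the buffered channel that of
// MaxConcurrentInstanceCreateOps.
func runCreates(n, opsPerSecond, maxConcurrent int) int {
	rate := time.NewTicker(time.Second / time.Duration(opsPerSecond))
	defer rate.Stop()
	sem := make(chan struct{}, maxConcurrent) // counting semaphore
	var wg sync.WaitGroup
	started := 0
	for i := 0; i < n; i++ {
		<-rate.C          // rate limit: wait for the next token
		sem <- struct{}{} // concurrency limit: acquire a slot
		started++
		wg.Add(1)
		go func() {
			defer wg.Done()
			defer func() { <-sem }()          // release the slot
			time.Sleep(10 * time.Millisecond) // simulated cloud API call
		}()
	}
	wg.Wait()
	return started
}

func main() {
	fmt.Println(runCreates(5, 100, 1)) // 5
}
```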
# Interval between cloud provider syncs/updates ("list all
# instances").
Price: 0.1
Preemptible: false
+ StorageClasses:
+
+ # If you use multiple storage classes, specify them here, using
+ # the storage class name as the key (in place of "SAMPLE" in
+ # this sample entry).
+ #
+ # Further info/examples:
+ # https://doc.arvados.org/admin/storage-classes.html
+ SAMPLE:
+
+ # Priority determines the order volumes should be searched
+ # when reading data, in cases where a keepstore server has
+ # access to multiple volumes with different storage classes.
+ Priority: 0
+
+ # Default determines which storage class(es) should be used
+ # when a user/client writes data or saves a new collection
+ # without specifying storage classes.
+ #
+ # If any StorageClasses are configured, at least one of them
+ # must have Default: true.
+ Default: true
+
Volumes:
SAMPLE:
# AccessViaHosts specifies which keepstore processes can read
ReadOnly: false
Replication: 1
StorageClasses:
- default: true
+ # If you have configured storage classes (see StorageClasses
+ # section above), add an entry here for each storage class
+ # satisfied by this volume.
SAMPLE: true
- Driver: s3
+ Driver: S3
DriverParameters:
# for s3 driver -- see
# https://doc.arvados.org/install/configure-s3-object-storage.html
IAMRole: aaaaa
- AccessKey: aaaaa
- SecretKey: aaaaa
+ AccessKeyID: aaaaa
+ SecretAccessKey: aaaaa
Endpoint: ""
- Region: us-east-1a
+ Region: us-east-1
Bucket: aaaaa
LocationConstraint: false
V2Signature: false
ConnectTimeout: 1m
ReadTimeout: 10m
RaceWindow: 24h
+ PrefixLength: 0
# Use aws-s3-go (v2) instead of goamz
UseAWSS3v2Driver: false
DefaultOpenIdPrefix: "https://www.google.com/accounts/o8/id"
# Workbench2 configs
- VocabularyURL: ""
FileViewersConfigURL: ""
# Idle time after which the user's session will be auto closed.
<img src="/arvados-logo-big.png" style="width: 20%; float: right; padding: 1em;" />
<h2>Please log in.</h2>
- <p>The "Log in" button below will show you a sign-in
- page. After you log in, you will be redirected back to
- Arvados Workbench.</p>
-
<p>If you have never used Arvados Workbench before, logging in
for the first time will automatically create a new
account.</p>
- <i>Arvados Workbench uses your name and email address only for
+ <i>Arvados Workbench uses your information only for
identification, and does not retrieve any other personal
information.</i>
# this blank.
SSHHelpHostSuffix: ""
- # Bypass new (Arvados 1.5) API implementations, and hand off
- # requests directly to Rails instead. This can provide a temporary
- # workaround for clients that are incompatible with the new API
- # implementation. Note that it also disables some new federation
- # features and will be removed in a future release.
- ForceLegacyAPI14: false
-
# (Experimental) Restart services automatically when config file
# changes are detected. Only supported by `arvados-server boot` in
# dev/test mode.
package config
import (
+ "encoding/json"
"fmt"
"io/ioutil"
"net/url"
*dst = *n
}
- // Provider* moved to SSO.Provider*
- if dst, n := &cluster.Login.SSO.ProviderAppID, dcluster.Login.ProviderAppID; n != nil && *n != *dst {
- *dst = *n
- if *n != "" {
- // In old config, non-empty ID meant enable
- cluster.Login.SSO.Enable = true
+ cfg.Clusters[id] = cluster
+ }
+ return nil
+}
+
+func (ldr *Loader) applyDeprecatedVolumeDriverParameters(cfg *arvados.Config) error {
+ for clusterID, cluster := range cfg.Clusters {
+ for volID, vol := range cluster.Volumes {
+ if vol.Driver == "S3" {
+ var params struct {
+ AccessKey string `json:",omitempty"`
+ SecretKey string `json:",omitempty"`
+ AccessKeyID string
+ SecretAccessKey string
+ }
+ err := json.Unmarshal(vol.DriverParameters, &params)
+ if err != nil {
+ return fmt.Errorf("error loading %s.Volumes.%s.DriverParameters: %w", clusterID, volID, err)
+ }
+ if params.AccessKey != "" || params.SecretKey != "" {
+ if params.AccessKeyID != "" || params.SecretAccessKey != "" {
+ return fmt.Errorf("cannot use old keys (AccessKey/SecretKey) and new keys (AccessKeyID/SecretAccessKey) at the same time in %s.Volumes.%s.DriverParameters -- you must remove the old config keys", clusterID, volID)
+ }
+ var allparams map[string]interface{}
+ err = json.Unmarshal(vol.DriverParameters, &allparams)
+ if err != nil {
+ return fmt.Errorf("error loading %s.Volumes.%s.DriverParameters: %w", clusterID, volID, err)
+ }
+ for k := range allparams {
+ if lk := strings.ToLower(k); lk == "accesskey" || lk == "secretkey" {
+ delete(allparams, k)
+ }
+ }
+ ldr.Logger.Warnf("using your old config keys %s.Volumes.%s.DriverParameters.AccessKey/SecretKey -- but you should rename them to AccessKeyID/SecretAccessKey", clusterID, volID)
+ allparams["AccessKeyID"] = params.AccessKey
+ allparams["SecretAccessKey"] = params.SecretKey
+ vol.DriverParameters, err = json.Marshal(allparams)
+ if err != nil {
+ return err
+ }
+ cluster.Volumes[volID] = vol
+ }
}
}
- if dst, n := &cluster.Login.SSO.ProviderAppSecret, dcluster.Login.ProviderAppSecret; n != nil && *n != *dst {
- *dst = *n
- }
-
- cfg.Clusters[id] = cluster
}
return nil
}
UUIDTTL *arvados.Duration
MaxCollectionEntries *int
MaxCollectionBytes *int64
- MaxPermissionEntries *int
MaxUUIDEntries *int
}
if oc.Cache.MaxCollectionBytes != nil {
cluster.Collections.WebDAVCache.MaxCollectionBytes = *oc.Cache.MaxCollectionBytes
}
- if oc.Cache.MaxPermissionEntries != nil {
- cluster.Collections.WebDAVCache.MaxPermissionEntries = *oc.Cache.MaxPermissionEntries
- }
if oc.Cache.MaxUUIDEntries != nil {
cluster.Collections.WebDAVCache.MaxUUIDEntries = *oc.Cache.MaxUUIDEntries
}
StorageClasses: array2boolmap(oldvol.StorageClasses),
}
params = arvados.S3VolumeDriverParameters{
- AccessKey: string(bytes.TrimSpace(accesskeydata)),
- SecretKey: string(bytes.TrimSpace(secretkeydata)),
+ AccessKeyID: string(bytes.TrimSpace(accesskeydata)),
+ SecretAccessKey: string(bytes.TrimSpace(secretkeydata)),
Endpoint: oldvol.Endpoint,
Region: oldvol.Region,
Bucket: oldvol.Bucket,
Driver: "S3",
Replication: 4,
}, &arvados.S3VolumeDriverParameters{
- AccessKey: "accesskeydata",
- SecretKey: "secretkeydata",
+ AccessKeyID: "accesskeydata",
+ SecretAccessKey: "secretkeydata",
Endpoint: "https://storage.googleapis.com",
Region: "us-east-1z",
Bucket: "testbucket",
ldr := testLoader(c, "Clusters: {zzzzz: {}}", nil)
ldr.SetupFlags(flags)
args := ldr.MungeLegacyConfigArgs(ldr.Logger, []string{"-config", tmpfile.Name()}, mungeFlag)
- flags.Parse(args)
+ err = flags.Parse(args)
+ c.Assert(err, check.IsNil)
+ c.Assert(flags.NArg(), check.Equals, 0)
cfg, err := ldr.Load()
if err != nil {
return nil, err
return cluster, nil
}
+func (s *LoadSuite) TestLegacyVolumeDriverParameters(c *check.C) {
+ logs := checkEquivalent(c, `
+Clusters:
+ z1111:
+ Volumes:
+ z1111-nyw5e-aaaaaaaaaaaaaaa:
+ Driver: S3
+ DriverParameters:
+ AccessKey: exampleaccesskey
+ SecretKey: examplesecretkey
+ Region: foobar
+ ReadTimeout: 1200s
+`, `
+Clusters:
+ z1111:
+ Volumes:
+ z1111-nyw5e-aaaaaaaaaaaaaaa:
+ Driver: S3
+ DriverParameters:
+ AccessKeyID: exampleaccesskey
+ SecretAccessKey: examplesecretkey
+ Region: foobar
+ ReadTimeout: 1200s
+`)
+ c.Check(logs, check.Matches, `(?ms).*deprecated or unknown config entry: .*AccessKey.*`)
+ c.Check(logs, check.Matches, `(?ms).*deprecated or unknown config entry: .*SecretKey.*`)
+ c.Check(logs, check.Matches, `(?ms).*using your old config keys z1111\.Volumes\.z1111-nyw5e-aaaaaaaaaaaaaaa\.DriverParameters\.AccessKey/SecretKey -- but you should rename them to AccessKeyID/SecretAccessKey.*`)
+
+ _, err := testLoader(c, `
+Clusters:
+ z1111:
+ Volumes:
+ z1111-nyw5e-aaaaaaaaaaaaaaa:
+ Driver: S3
+ DriverParameters:
+ AccessKey: exampleaccesskey
+ SecretKey: examplesecretkey
+ AccessKeyID: exampleaccesskey
+`, nil).Load()
+ c.Check(err, check.ErrorMatches, `(?ms).*cannot use .*SecretKey.*and.*SecretAccessKey.*in z1111.Volumes.z1111-nyw5e-aaaaaaaaaaaaaaa.DriverParameters.*`)
+}
+
func (s *LoadSuite) TestDeprecatedNodeProfilesToServices(c *check.C) {
hostname, err := os.Hostname()
c.Assert(err, check.IsNil)
"UUIDTTL": "1s",
"MaxCollectionEntries": 42,
"MaxCollectionBytes": 1234567890,
- "MaxPermissionEntries": 100,
"MaxUUIDEntries": 100
},
"ManagementToken": "xyzzy"
c.Check(cluster.Collections.WebDAVCache.UUIDTTL, check.Equals, arvados.Duration(time.Second))
c.Check(cluster.Collections.WebDAVCache.MaxCollectionEntries, check.Equals, 42)
c.Check(cluster.Collections.WebDAVCache.MaxCollectionBytes, check.Equals, int64(1234567890))
- c.Check(cluster.Collections.WebDAVCache.MaxPermissionEntries, check.Equals, 100)
c.Check(cluster.Collections.WebDAVCache.MaxUUIDEntries, check.Equals, 100)
c.Check(cluster.Services.WebDAVDownload.ExternalURL, check.Equals, arvados.URL{Host: "download.example.com", Path: "/"})
"API.MaxKeepBlobBuffers": false,
"API.MaxRequestAmplification": false,
"API.MaxRequestSize": true,
+ "API.MaxTokenLifetime": false,
"API.RequestTimeout": true,
"API.SendTimeout": true,
+ "API.VocabularyPath": false,
"API.WebsocketClientEventQueue": false,
"API.WebsocketServerEventQueue": false,
"AuditLogs": false,
"Collections.BalanceCollectionBuffers": false,
"Collections.BalancePeriod": false,
"Collections.BalanceTimeout": false,
+ "Collections.BalanceUpdateLimit": false,
"Collections.BlobDeleteConcurrency": false,
"Collections.BlobMissingReport": false,
"Collections.BlobReplicateConcurrency": false,
"Collections.BlobTrashCheckInterval": false,
"Collections.BlobTrashConcurrency": false,
"Collections.BlobTrashLifetime": false,
- "Collections.CollectionVersioning": false,
+ "Collections.CollectionVersioning": true,
"Collections.DefaultReplication": true,
"Collections.DefaultTrashLifetime": true,
"Collections.ForwardSlashNameSubstitution": true,
"Collections.PreserveVersionIfIdle": true,
"Collections.S3FolderObjects": true,
"Collections.TrashSweepInterval": false,
- "Collections.TrustAllContent": false,
+ "Collections.TrustAllContent": true,
"Collections.WebDAVCache": false,
+ "Collections.KeepproxyPermission": false,
+ "Collections.WebDAVPermission": false,
+ "Collections.WebDAVLogEvents": false,
"Containers": true,
"Containers.CloudVMs": false,
"Containers.CrunchRunArgumentsList": false,
"Containers.JobsAPI": true,
"Containers.JobsAPI.Enable": true,
"Containers.JobsAPI.GitInternalDir": false,
+ "Containers.LocalKeepBlobBuffersPerVCPU": false,
+ "Containers.LocalKeepLogsToContainerLog": false,
"Containers.Logging": false,
"Containers.LogReuseDecisions": false,
+ "Containers.LSF": false,
"Containers.MaxComputeVMs": false,
"Containers.MaxDispatchAttempts": false,
"Containers.MaxRetryAttempts": true,
"Containers.MinRetryPeriod": true,
"Containers.ReserveExtraRAM": true,
+ "Containers.RuntimeEngine": true,
"Containers.ShellAccess": true,
"Containers.ShellAccess.Admin": true,
"Containers.ShellAccess.User": true,
"Containers.SupportedDockerImageFormats": true,
"Containers.SupportedDockerImageFormats.*": true,
"Containers.UsePreemptibleInstances": true,
- "ForceLegacyAPI14": false,
"Git": false,
"InstanceTypes": true,
"InstanceTypes.*": true,
"Login.LDAP.UsernameAttribute": false,
"Login.LoginCluster": true,
"Login.OpenIDConnect": true,
+ "Login.OpenIDConnect.AcceptAccessToken": false,
+ "Login.OpenIDConnect.AcceptAccessTokenScope": false,
"Login.OpenIDConnect.AuthenticationRequestParameters": false,
"Login.OpenIDConnect.ClientID": false,
"Login.OpenIDConnect.ClientSecret": false,
"Login.PAM.Enable": true,
"Login.PAM.Service": false,
"Login.RemoteTokenRefresh": true,
- "Login.SSO": true,
- "Login.SSO.Enable": true,
- "Login.SSO.ProviderAppID": false,
- "Login.SSO.ProviderAppSecret": false,
"Login.Test": true,
"Login.Test.Enable": true,
"Login.Test.Users": false,
"Login.TokenLifetime": false,
+ "Login.IssueTrustedTokens": false,
"Login.TrustedClients": false,
"Mail": true,
"Mail.EmailFrom": false,
"Services.*": true,
"Services.*.ExternalURL": true,
"Services.*.InternalURLs": false,
+ "StorageClasses": true,
+ "StorageClasses.*": true,
+ "StorageClasses.*.Default": true,
+ "StorageClasses.*.Priority": true,
"SystemLogs": false,
"SystemRootToken": false,
"TLS": false,
"Users": true,
+ "Users.ActivatedUsersAreVisibleToOthers": false,
"Users.AdminNotifierEmailFrom": false,
"Users.AnonymousUserToken": true,
"Users.AutoAdminFirstUser": false,
"Users.NewUsersAreActive": false,
"Users.PreferDomainForUsername": false,
"Users.UserNotifierEmailFrom": false,
+ "Users.UserNotifierEmailBcc": false,
"Users.UserProfileNotificationAddress": false,
"Users.UserSetupMailText": false,
+ "Users.RoleGroupsVisibleToAll": false,
"Volumes": true,
"Volumes.*": true,
"Volumes.*.*": false,
"Volumes.*.ReadOnly": true,
"Volumes.*.Replication": true,
"Volumes.*.StorageClasses": true,
- "Volumes.*.StorageClasses.*": false,
+ "Volumes.*.StorageClasses.*": true,
"Workbench": true,
"Workbench.ActivationContactLink": false,
"Workbench.APIClientConnectTimeout": true,
"Workbench.UserProfileFormFields.*.*": true,
"Workbench.UserProfileFormFields.*.*.*": true,
"Workbench.UserProfileFormMessage": true,
- "Workbench.VocabularyURL": true,
"Workbench.WelcomePageHTML": true,
}
type ExportSuite struct{}
func (s *ExportSuite) TestExport(c *check.C) {
- confdata := strings.Replace(string(DefaultYAML), "SAMPLE", "testkey", -1)
+ confdata := strings.Replace(string(DefaultYAML), "SAMPLE", "12345", -1)
cfg, err := testLoader(c, confdata, nil).Load()
c.Assert(err, check.IsNil)
cluster, err := cfg.GetCluster("xxxxx")
//
// SPDX-License-Identifier: AGPL-3.0
+//go:build ignore
// +build ignore
package main
# In each of the service sections below, the keys under
# InternalURLs are the endpoints where the service should be
- # listening, and reachable from other hosts in the cluster.
- SAMPLE:
- InternalURLs:
- "http://host1.example:12345": {}
- "http://host2.example:12345":
- # Rendezvous is normally empty/omitted. When changing the
- # URL of a Keepstore service, Rendezvous should be set to
- # the old URL (with trailing slash omitted) to preserve
- # rendezvous ordering.
- Rendezvous: ""
- SAMPLE:
- Rendezvous: ""
- ExternalURL: "-"
+ # listening, and reachable from other hosts in the
+ # cluster. Example:
+ #
+ # InternalURLs:
+ # "http://host1.example:12345": {}
+ # "http://host2.example:12345": {}
RailsAPI:
- InternalURLs: {}
- ExternalURL: "-"
+ InternalURLs: {SAMPLE: {}}
+ ExternalURL: ""
Controller:
- InternalURLs: {}
+ InternalURLs: {SAMPLE: {}}
ExternalURL: ""
Websocket:
- InternalURLs: {}
+ InternalURLs: {SAMPLE: {}}
ExternalURL: ""
Keepbalance:
- InternalURLs: {}
- ExternalURL: "-"
+ InternalURLs: {SAMPLE: {}}
+ ExternalURL: ""
GitHTTP:
- InternalURLs: {}
+ InternalURLs: {SAMPLE: {}}
ExternalURL: ""
GitSSH:
- InternalURLs: {}
+ InternalURLs: {SAMPLE: {}}
ExternalURL: ""
DispatchCloud:
- InternalURLs: {}
- ExternalURL: "-"
- SSO:
- InternalURLs: {}
+ InternalURLs: {SAMPLE: {}}
+ ExternalURL: ""
+ DispatchLSF:
+ InternalURLs: {SAMPLE: {}}
ExternalURL: ""
Keepproxy:
- InternalURLs: {}
+ InternalURLs: {SAMPLE: {}}
ExternalURL: ""
WebDAV:
- InternalURLs: {}
+ InternalURLs: {SAMPLE: {}}
# Base URL for Workbench inline preview. If blank, use
# WebDAVDownload instead, and disable inline preview.
# If both are empty, downloading collections from workbench
ExternalURL: ""
WebDAVDownload:
- InternalURLs: {}
+ InternalURLs: {SAMPLE: {}}
# Base URL for download links. If blank, serve links to WebDAV
# with disposition=attachment query param. Unlike preview links,
# browsers do not render attachments, so there is no risk of XSS.
ExternalURL: ""
Keepstore:
- InternalURLs: {}
- ExternalURL: "-"
+ InternalURLs:
+ SAMPLE:
+ # Rendezvous is normally empty/omitted. When changing the
+ # URL of a Keepstore service, Rendezvous should be set to
+ # the old URL (with trailing slash omitted) to preserve
+ # rendezvous ordering.
+ Rendezvous: ""
+ ExternalURL: ""
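The Rendezvous comment above refers to rendezvous ordering of Keepstore services. As a rough illustration of the idea (the real implementation lives in the Arvados keep client; the hash choice and service IDs here are illustrative only), each service is ranked per block by a hash of the block locator concatenated with a stable service identifier, so the ordering survives as long as identifiers don't change:

```go
package main

import (
	"crypto/md5"
	"fmt"
	"sort"
)

// rendezvousOrder returns the given service IDs sorted into a
// rendezvous order for one block: each service is ranked by a hash
// of the block locator plus the service ID. The ordering is stable
// as long as the service IDs themselves don't change, which is why
// changing a service URL requires preserving the old identifier.
func rendezvousOrder(blockMD5 string, serviceIDs []string) []string {
	order := append([]string(nil), serviceIDs...)
	rank := func(id string) string {
		return fmt.Sprintf("%x", md5.Sum([]byte(blockMD5+id)))
	}
	sort.Slice(order, func(i, j int) bool {
		return rank(order[i]) < rank(order[j])
	})
	return order
}

func main() {
	ids := []string{"svc-a", "svc-b", "svc-c"}
	fmt.Println(rendezvousOrder("d41d8cd98f00b204e9800998ecf8427e", ids))
}
```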
Composer:
- InternalURLs: {}
+ InternalURLs: {SAMPLE: {}}
ExternalURL: ""
WebShell:
- InternalURLs: {}
+ InternalURLs: {SAMPLE: {}}
# ShellInABox service endpoint URL for a given VM. If empty, do not
# offer web shell logins.
#
# https://*.webshell.uuid_prefix.arvadosapi.com
ExternalURL: ""
Workbench1:
- InternalURLs: {}
+ InternalURLs: {SAMPLE: {}}
ExternalURL: ""
Workbench2:
- InternalURLs: {}
+ InternalURLs: {SAMPLE: {}}
ExternalURL: ""
Health:
- InternalURLs: {}
- ExternalURL: "-"
+ InternalURLs: {SAMPLE: {}}
+ ExternalURL: ""
PostgreSQL:
# max concurrent connections per arvados server daemon
dbname: ""
SAMPLE: ""
API:
+ # Maximum lifetime for client tokens created by regular users. This
+ # value is also used as the default expiration policy when no
+ # expiration date is specified.
+ # Default value zero means token expirations don't get clamped and no
+ # default expiration is set.
+ MaxTokenLifetime: 0s
+
# Maximum size (in bytes) allowed for a single API request. This
# limit is published in the discovery document for use by clients.
# Note: You must separately configure the upstream web server or
# Timeout on requests to internal Keep services.
KeepServiceRequestTimeout: 15s
+ # Vocabulary file path, local to the node running the controller.
+ # This JSON file should describe what is allowed as object
+ # metadata. Its format is described at:
+ # https://doc.arvados.org/admin/metadata-vocabulary.html
+ VocabularyPath: ""
+
Users:
# Config parameters to automatically setup new users. If enabled,
# this users will be able to self-activate. Enable this if you want
# user agreements. Should only be enabled for development.
NewUsersAreActive: false
+ # Newly activated users (whether set up by an admin or via
+ # AutoSetupNewUsers) immediately become visible to other active
+ # users.
+ #
+ # On a multi-tenant cluster, where the intent is for users to be
+ # invisible to one another unless they have been added to the
+ # same group(s) via Workbench admin interface, change this to
+ # false.
+ ActivatedUsersAreVisibleToOthers: true
+
# The e-mail address of the user you would like to become marked as an admin
# user on their first login.
AutoAdminUserWithEmail: ""
AdminNotifierEmailFrom: arvados@example.com
EmailSubjectPrefix: "[ARVADOS] "
UserNotifierEmailFrom: arvados@example.com
+ UserNotifierEmailBcc: {}
NewUserNotificationRecipients: {}
NewInactiveUserNotificationRecipients: {}
Thanks,
Your Arvados administrator.
+ # If RoleGroupsVisibleToAll is true, all role groups are visible
+ # to all active users.
+ #
+ # If false, users must be granted permission to role groups in
+ # order to see them. This is more appropriate for a multi-tenant
+ # cluster.
+ RoleGroupsVisibleToAll: true
+
AuditLogs:
# Time to keep audit logs, in seconds. (An audit log is a row added
# to the "logs" table in the PostgreSQL database each time an
#
# BalancePeriod determines the interval between start times of
# successive scan/balance operations. If a scan/balance operation
- # takes longer than RunPeriod, the next one will follow it
+ # takes longer than BalancePeriod, the next one will follow it
# immediately.
#
# If SIGUSR1 is received during an idle period between operations,
# long-running balancing operation.
BalanceTimeout: 6h
+ # Maximum number of replication_confirmed /
+ # storage_classes_confirmed updates to write to the database
+ # after a rebalancing run. When many updates are needed, this
+ # spreads them over a few runs rather than applying them all at
+ # once.
+ BalanceUpdateLimit: 100000
+
# Default lifetime for ephemeral collections: 2 weeks. This must not
# be less than BlobSigningTTL.
DefaultTrashLifetime: 336h
# is older than the amount of seconds defined on PreserveVersionIfIdle,
# a snapshot of the collection's previous state is created and linked to
# the current collection.
- CollectionVersioning: false
+ CollectionVersioning: true
# 0s = auto-create a new version on every update.
# -1s = never auto-create new versions.
# > 0s = auto-create a new version when older than the specified number of seconds.
- PreserveVersionIfIdle: -1s
+ PreserveVersionIfIdle: 10s
# If non-empty, allow project and collection names to contain
# the "/" character (slash/stroke/solidus), and replace "/" with
# WebDAV would have to expose XSS vulnerabilities in order to
# handle the redirect (see discussion on Services.WebDAV).
#
- # This setting has no effect in the recommended configuration,
- # where the WebDAV is configured to have a separate domain for
- # every collection; in this case XSS protection is provided by
- # browsers' same-origin policy.
+ # This setting has no effect in the recommended configuration, where the
+ # WebDAV service is configured to have a separate domain for every
+ # collection and XSS protection is provided by browsers' same-origin
+ # policy.
#
# The default setting (false) is appropriate for a multi-user site.
TrustAllContent: false
# Cache parameters for WebDAV content serving:
- # * TTL: Maximum time to cache manifests and permission checks.
- # * UUIDTTL: Maximum time to cache collection state.
- # * MaxBlockEntries: Maximum number of block cache entries.
- # * MaxCollectionEntries: Maximum number of collection cache entries.
- # * MaxCollectionBytes: Approximate memory limit for collection cache.
- # * MaxPermissionEntries: Maximum number of permission cache entries.
- # * MaxUUIDEntries: Maximum number of UUID cache entries.
WebDAVCache:
+ # Time to cache manifests, permission checks, and sessions.
TTL: 300s
+
+ # Time to cache collection state.
UUIDTTL: 5s
- MaxBlockEntries: 4
+
+ # Block cache entries. Each block consumes up to 64 MiB RAM.
+ MaxBlockEntries: 20
+
+ # Collection cache entries.
MaxCollectionEntries: 1000
- MaxCollectionBytes: 100000000
- MaxPermissionEntries: 1000
- MaxUUIDEntries: 1000
+
+ # Approximate memory limit (in bytes) for collection cache.
+ MaxCollectionBytes: 100000000
+
+ # UUID cache entries.
+ MaxUUIDEntries: 1000
+
+ # Persistent sessions.
+ MaxSessions: 100
+
+ # Selectively set permissions for regular users and admins to
+ # download or upload data files using the upload/download
+ # features of Workbench, WebDAV, and the S3 API.
+ WebDAVPermission:
+ User:
+ Download: true
+ Upload: true
+ Admin:
+ Download: true
+ Upload: true
+
+ # Selectively set permissions for regular users and admins to
+ # download or upload blocks using arv-put and arv-get from
+ # outside the cluster.
+ KeepproxyPermission:
+ User:
+ Download: true
+ Upload: true
+ Admin:
+ Download: true
+ Upload: true
+
+ # Post upload / download events to the API server logs table, so
+ # that they can be included in the arv-user-activity report.
+ # You can disable this if you find that it is creating excess
+ # load on the API server and you don't need it.
+ WebDAVLogEvents: true
Login:
- # One of the following mechanisms (SSO, Google, PAM, LDAP, or
+ # One of the following mechanisms (Google, PAM, LDAP, or
# LoginCluster) should be enabled; see
# https://doc.arvados.org/install/setup-login.html
# ID > Web application) and add your controller's /login URL
# (e.g., "https://zzzzz.example.com/login") as an authorized
# redirect URL.
- #
- # Incompatible with ForceLegacyAPI14. ProviderAppID must be
- # blank.
ClientID: ""
ClientSecret: ""
AuthenticationRequestParameters:
SAMPLE: ""
+ # Accept an OIDC access token as an API token if the OIDC
+ # provider's UserInfo endpoint accepts it.
+ #
+ # AcceptAccessTokenScope should also be used when enabling
+ # this feature.
+ AcceptAccessToken: false
+
+ # Before accepting an OIDC access token as an API token, first
+ # check that it is a JWT whose "scope" value includes this
+ # value. Example: "https://zzzzz.example.com/" (your Arvados
+ # API endpoint).
+ #
+ # If this value is empty and AcceptAccessToken is true, all
+ # access tokens will be accepted regardless of scope,
+ # including non-JWT tokens. This is not recommended.
+ AcceptAccessTokenScope: ""
+
PAM:
- # (Experimental) Use PAM to authenticate users.
+ # Use PAM to authenticate users.
Enable: false
# PAM service name. PAM will apply the policy in the
# originally supplied by the user will be used.
UsernameAttribute: uid
- SSO:
- # Authenticate with a separate SSO server. (Deprecated)
- Enable: false
-
- # ProviderAppID and ProviderAppSecret are generated during SSO
- # setup; see
- # https://doc.arvados.org/v2.0/install/install-sso.html#update-config
- ProviderAppID: ""
- ProviderAppSecret: ""
-
Test:
# Authenticate users listed here in the config file. This
# feature is intended to be used in test environments, and
# Default value zero means tokens don't have expiration.
TokenLifetime: 0s
+ # If true (the default), tokens issued through login are allowed to
+ # create new tokens.
+ # If false, tokens issued through login are not allowed to view or
+ # create other tokens. New tokens can only be created by going
+ # through login again.
+ IssueTrustedTokens: true
+
# When the token is returned to a client, the token itself may
- # be restricted from manipulating other tokens based on whether
+ # be restricted from viewing/creating other tokens based on whether
# the client is "trusted" or not. The local Workbench1 and
# Workbench2 are trusted by default, but if this is a
# LoginCluster, you probably want to include the other Workbench
UsePreemptibleInstances: false
# PEM encoded SSH key (RSA, DSA, or ECDSA) used by the
- # (experimental) cloud dispatcher for executing containers on
- # worker VMs. Begins with "-----BEGIN RSA PRIVATE KEY-----\n"
+ # cloud dispatcher for executing containers on worker VMs.
+ # Begins with "-----BEGIN RSA PRIVATE KEY-----\n"
# and ends with "\n-----END RSA PRIVATE KEY-----\n".
DispatchPrivateKey: ""
# Minimum time between two attempts to run the same container
MinRetryPeriod: 0s
+ # Container runtime: "docker" (default) or "singularity"
+ RuntimeEngine: docker
+
+ # When running a container, run a dedicated keepstore process,
+ # using the specified number of 64 MiB memory buffers per
+ # allocated CPU core (VCPUs in the container's runtime
+ # constraints). The dedicated keepstore handles I/O for
+ # collections mounted in the container, as well as saving
+ # container logs.
+ #
+ # A zero value disables this feature.
+ #
+ # In order for this feature to be activated, no volume may use
+ # AccessViaHosts, and each volume must have Replication higher
+ # than Collections.DefaultReplication. If these requirements are
+ # not satisfied, the feature is disabled automatically
+ # regardless of the value given here.
+ #
+ # Note that when this configuration is enabled, the entire
+ # cluster configuration file, including the system root token,
+ # is copied to the worker node and held in memory for the
+ # duration of the container.
+ LocalKeepBlobBuffersPerVCPU: 1
+
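The memory footprint implied by LocalKeepBlobBuffersPerVCPU is simple arithmetic: buffers per VCPU, times the VCPUs in the container's runtime constraints, times 64 MiB per buffer. A sketch:

```go
package main

import "fmt"

// keepstoreRAM estimates the memory reserved for block buffers by a
// container's dedicated keepstore process: buffersPerVCPU 64 MiB
// buffers for each allocated VCPU. Zero buffersPerVCPU means the
// feature is disabled, so no memory is reserved.
func keepstoreRAM(buffersPerVCPU, vcpus int) int64 {
	const blockSize = 64 << 20 // 64 MiB per buffer
	return int64(buffersPerVCPU) * int64(vcpus) * blockSize
}

func main() {
	// With the default of 1 buffer per VCPU, a 4-VCPU container
	// reserves 256 MiB for its keepstore buffers.
	fmt.Println(keepstoreRAM(1, 4)>>20, "MiB")
}
```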
+ # When running a dedicated keepstore process for a container
+ # (see LocalKeepBlobBuffersPerVCPU), write keepstore log
+ # messages to keepstore.txt in the container's log collection.
+ #
+ # These log messages can reveal some volume configuration
+ # details, error messages from the cloud storage provider, etc.,
+ # which are not otherwise visible to users.
+ #
+ # Accepted values:
+ # * "none" -- no keepstore.txt file
+ # * "all" -- all logs, including request and response lines
+ # * "errors" -- all logs except "response" logs with 2xx
+ # response codes and "request" logs
+ LocalKeepLogsToContainerLog: none
+
Logging:
# When you run the db:delete_old_container_logs task, it will find
# containers that have been finished for at least this many seconds,
# (See http://ruby-doc.org/core-2.2.2/Kernel.html#method-i-format for more.)
AssignNodeHostname: "compute%<slot_number>d"
+ LSF:
+ # Arguments to bsub when submitting Arvados containers as LSF jobs.
+ #
+ # Template variables starting with % will be substituted as follows:
+ #
+ # %U uuid
+ # %C number of VCPUs
+ # %M memory in MB
+ # %T tmp in MB
+ #
+ # Use %% to express a literal %. The %%J in the default will be changed
+ # to %J, which is interpreted by bsub itself.
+ #
+ # Note that the default arguments cause LSF to write two files
+ # in /tmp on the compute node each time an Arvados container
+ # runs. Ensure you have something in place to delete old files
+ # from /tmp, or adjust the "-o" and "-e" arguments accordingly.
+ BsubArgumentsList: ["-o", "/tmp/crunch-run.%%J.out", "-e", "/tmp/crunch-run.%%J.err", "-J", "%U", "-n", "%C", "-D", "%MMB", "-R", "rusage[mem=%MMB:tmp=%TMB] span[hosts=1]", "-R", "select[mem>=%MMB]", "-R", "select[tmp>=%TMB]", "-R", "select[ncpus>=%C]"]
+
+ # Use sudo to switch to this user account when submitting LSF
+ # jobs.
+ #
+ # This account must exist on the hosts where LSF jobs run
+ # ("execution hosts"), as well as on the host where the
+ # Arvados LSF dispatcher runs ("submission host").
+ BsubSudoUser: "crunch"
+
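The template substitution rules documented above can be sketched as follows. `expandBsubArg` is a hypothetical helper written to illustrate the rules, not the dispatcher's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// expandBsubArg substitutes the documented template variables in a
// single bsub argument: %U (uuid), %C (VCPUs), %M (memory MB),
// %T (tmp MB), and %% for a literal %. Listing "%%" first makes it
// win over "%U" etc. when both match at the same position.
func expandBsubArg(arg, uuid string, vcpus, memMB, tmpMB int) string {
	r := strings.NewReplacer(
		"%%", "%",
		"%U", uuid,
		"%C", fmt.Sprintf("%d", vcpus),
		"%M", fmt.Sprintf("%d", memMB),
		"%T", fmt.Sprintf("%d", tmpMB),
	)
	return r.Replace(arg)
}

func main() {
	// %%J passes through as %J, for bsub itself to interpret.
	fmt.Println(expandBsubArg("/tmp/crunch-run.%%J.out", "zzzzz-dz642-0123456789abcde", 2, 1024, 2048))
	fmt.Println(expandBsubArg("rusage[mem=%MMB:tmp=%TMB]", "zzzzz-dz642-0123456789abcde", 2, 1024, 2048))
}
```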
JobsAPI:
# Enable the legacy 'jobs' API (crunch v1). This value must be a string.
#
GitInternalDir: /var/lib/arvados/internal.git
CloudVMs:
- # Enable the cloud scheduler (experimental).
+ # Enable the cloud scheduler.
Enable: false
# Name/number of port where workers' SSH services listen.
# Shell command to execute on each worker to determine whether
# the worker is booted and ready to run containers. It should
# exit zero if the worker is ready.
- BootProbeCommand: "docker ps -q"
+ BootProbeCommand: "systemctl is-system-running"
# Minimum interval between consecutive probes to a single
# worker.
# Maximum create/destroy-instance operations per second (0 =
# unlimited).
- MaxCloudOpsPerSecond: 0
+ MaxCloudOpsPerSecond: 10
- # Maximum concurrent node creation operations (0 = unlimited). This is
- # recommended by Azure in certain scenarios (see
- # https://docs.microsoft.com/en-us/azure/virtual-machines/linux/capture-image)
- # and can be used with other cloud providers too, if desired.
- MaxConcurrentInstanceCreateOps: 0
+ # Maximum concurrent instance creation operations (0 = unlimited).
+ #
+ # MaxConcurrentInstanceCreateOps limits the number of instance creation
+ # requests that can be in flight at any one time, whereas
+ # MaxCloudOpsPerSecond limits the number of create/destroy operations
+ # that can be started per second.
+ #
+ # Because the API for instance creation on Azure is synchronous, it is
+ # recommended to increase MaxConcurrentInstanceCreateOps when running
+ # on Azure. When using managed images, a value of 20 would be
+ # appropriate. When using Azure Shared Image Galleries, it could be set
+ # higher. For more information, see
+ # https://docs.microsoft.com/en-us/azure/virtual-machines/linux/capture-image
+ #
+ # MaxConcurrentInstanceCreateOps can be increased for other cloud
+ # providers too, if desired.
+ MaxConcurrentInstanceCreateOps: 1
# Interval between cloud provider syncs/updates ("list all
# instances").
Price: 0.1
Preemptible: false
+ StorageClasses:
+
+ # If you use multiple storage classes, specify them here, using
+ # the storage class name as the key (in place of "SAMPLE" in
+ # this sample entry).
+ #
+ # Further info/examples:
+ # https://doc.arvados.org/admin/storage-classes.html
+ SAMPLE:
+
+ # Priority determines the order volumes should be searched
+ # when reading data, in cases where a keepstore server has
+ # access to multiple volumes with different storage classes.
+ Priority: 0
+
+ # Default determines which storage class(es) should be used
+ # when a user/client writes data or saves a new collection
+ # without specifying storage classes.
+ #
+ # If any StorageClasses are configured, at least one of them
+ # must have Default: true.
+ Default: true
+
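Selecting the default class(es) at write time amounts to collecting every entry with `Default: true`. A sketch using a simplified stand-in for the configuration struct:

```go
package main

import (
	"fmt"
	"sort"
)

// storageClass is a simplified stand-in for the Arvados
// StorageClassConfig entry shown above.
type storageClass struct {
	Priority int
	Default  bool
}

// defaultClasses returns the storage classes used when a client
// writes data without naming any: every class with Default: true.
func defaultClasses(classes map[string]storageClass) []string {
	var out []string
	for name, sc := range classes {
		if sc.Default {
			out = append(out, name)
		}
	}
	sort.Strings(out) // deterministic order for display
	return out
}

func main() {
	fmt.Println(defaultClasses(map[string]storageClass{
		"default": {Priority: 0, Default: true},
		"archive": {Priority: 10},
	})) // [default]
}
```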
Volumes:
SAMPLE:
# AccessViaHosts specifies which keepstore processes can read
ReadOnly: false
Replication: 1
StorageClasses:
- default: true
+ # If you have configured storage classes (see StorageClasses
+ # section above), add an entry here for each storage class
+ # satisfied by this volume.
SAMPLE: true
- Driver: s3
+ Driver: S3
DriverParameters:
# for s3 driver -- see
# https://doc.arvados.org/install/configure-s3-object-storage.html
IAMRole: aaaaa
- AccessKey: aaaaa
- SecretKey: aaaaa
+ AccessKeyID: aaaaa
+ SecretAccessKey: aaaaa
Endpoint: ""
- Region: us-east-1a
+ Region: us-east-1
Bucket: aaaaa
LocationConstraint: false
V2Signature: false
ConnectTimeout: 1m
ReadTimeout: 10m
RaceWindow: 24h
+ PrefixLength: 0
# Use aws-s3-go (v2) instead of goamz
UseAWSS3v2Driver: false
DefaultOpenIdPrefix: "https://www.google.com/accounts/o8/id"
# Workbench2 configs
- VocabularyURL: ""
FileViewersConfigURL: ""
# Idle time after which the user's session will be auto closed.
<img src="/arvados-logo-big.png" style="width: 20%; float: right; padding: 1em;" />
<h2>Please log in.</h2>
- <p>The "Log in" button below will show you a sign-in
- page. After you log in, you will be redirected back to
- Arvados Workbench.</p>
-
<p>If you have never used Arvados Workbench before, logging in
for the first time will automatically create a new
account.</p>
- <i>Arvados Workbench uses your name and email address only for
+ <i>Arvados Workbench uses your information only for
identification, and does not retrieve any other personal
information.</i>
# this blank.
SSHHelpHostSuffix: ""
- # Bypass new (Arvados 1.5) API implementations, and hand off
- # requests directly to Rails instead. This can provide a temporary
- # workaround for clients that are incompatible with the new API
- # implementation. Note that it also disables some new federation
- # features and will be removed in a future release.
- ForceLegacyAPI14: false
-
# (Experimental) Restart services automatically when config file
# changes are detected. Only supported by ` + "`" + `arvados-server boot` + "`" + ` in
# dev/test mode.
ldr.configdata = buf
}
+ // FIXME: We should reject YAML if the same key is used twice
+ // in a map/object, like {foo: bar, foo: baz}. Maybe we'll get
+ // this fixed for free when we upgrade ghodss/yaml to a version
+ // that uses go-yaml v3.
+
// Load the config into a dummy map to get the cluster ID
// keys, discarding the values; then set up defaults for each
// cluster ID; then load the real config on top of the
return nil, fmt.Errorf("transcoding config data: %s", err)
}
+ var loadFuncs []func(*arvados.Config) error
if !ldr.SkipDeprecated {
- err = ldr.applyDeprecatedConfig(&cfg)
- if err != nil {
- return nil, err
- }
+ loadFuncs = append(loadFuncs,
+ ldr.applyDeprecatedConfig,
+ ldr.applyDeprecatedVolumeDriverParameters,
+ )
}
if !ldr.SkipLegacy {
// legacy file is required when either:
// * a non-default location was specified
// * no primary config was loaded, and this is the
// legacy config file for the current component
- for _, err := range []error{
- ldr.loadOldEnvironmentVariables(&cfg),
- ldr.loadOldKeepstoreConfig(&cfg),
- ldr.loadOldKeepWebConfig(&cfg),
- ldr.loadOldCrunchDispatchSlurmConfig(&cfg),
- ldr.loadOldWebsocketConfig(&cfg),
- ldr.loadOldKeepproxyConfig(&cfg),
- ldr.loadOldGitHttpdConfig(&cfg),
- ldr.loadOldKeepBalanceConfig(&cfg),
- } {
- if err != nil {
- return nil, err
- }
+ loadFuncs = append(loadFuncs,
+ ldr.loadOldEnvironmentVariables,
+ ldr.loadOldKeepstoreConfig,
+ ldr.loadOldKeepWebConfig,
+ ldr.loadOldCrunchDispatchSlurmConfig,
+ ldr.loadOldWebsocketConfig,
+ ldr.loadOldKeepproxyConfig,
+ ldr.loadOldGitHttpdConfig,
+ ldr.loadOldKeepBalanceConfig,
+ )
+ }
+ loadFuncs = append(loadFuncs, ldr.setImplicitStorageClasses)
+ for _, f := range loadFuncs {
+ err = f(&cfg)
+ if err != nil {
+ return nil, err
}
}
// Check for known mistakes
for id, cc := range cfg.Clusters {
+ for remote := range cc.RemoteClusters {
+ if remote == "*" || remote == "SAMPLE" {
+ continue
+ }
+ err = ldr.checkClusterID(fmt.Sprintf("Clusters.%s.RemoteClusters.%s", id, remote), remote, true)
+ if err != nil {
+ return nil, err
+ }
+ }
for _, err = range []error{
+ ldr.checkClusterID(fmt.Sprintf("Clusters.%s", id), id, false),
+ ldr.checkClusterID(fmt.Sprintf("Clusters.%s.Login.LoginCluster", id), cc.Login.LoginCluster, true),
ldr.checkToken(fmt.Sprintf("Clusters.%s.ManagementToken", id), cc.ManagementToken),
ldr.checkToken(fmt.Sprintf("Clusters.%s.SystemRootToken", id), cc.SystemRootToken),
ldr.checkToken(fmt.Sprintf("Clusters.%s.Collections.BlobSigningKey", id), cc.Collections.BlobSigningKey),
checkKeyConflict(fmt.Sprintf("Clusters.%s.PostgreSQL.Connection", id), cc.PostgreSQL.Connection),
+ ldr.checkEnum("Containers.LocalKeepLogsToContainerLog", cc.Containers.LocalKeepLogsToContainerLog, "none", "all", "errors"),
ldr.checkEmptyKeepstores(cc),
ldr.checkUnlistedKeepstores(cc),
+ ldr.checkStorageClasses(cc),
+ // TODO: check non-empty Rendezvous on
+ // services other than Keepstore
} {
if err != nil {
return nil, err
return &cfg, nil
}
+var acceptableClusterIDRe = regexp.MustCompile(`^[a-z0-9]{5}$`)
+
+func (ldr *Loader) checkClusterID(label, clusterID string, emptyStringOk bool) error {
+ if emptyStringOk && clusterID == "" {
+ return nil
+ } else if !acceptableClusterIDRe.MatchString(clusterID) {
+ return fmt.Errorf("%s: cluster ID should be 5 alphanumeric characters", label)
+ }
+ return nil
+}
+
var acceptableTokenRe = regexp.MustCompile(`^[a-zA-Z0-9]+$`)
var acceptableTokenLength = 32
func (ldr *Loader) checkToken(label, token string) error {
if token == "" {
- ldr.Logger.Warnf("%s: secret token is not set (use %d+ random characters from a-z, A-Z, 0-9)", label, acceptableTokenLength)
+ if ldr.Logger != nil {
+ ldr.Logger.Warnf("%s: secret token is not set (use %d+ random characters from a-z, A-Z, 0-9)", label, acceptableTokenLength)
+ }
} else if !acceptableTokenRe.MatchString(token) {
return fmt.Errorf("%s: unacceptable characters in token (only a-z, A-Z, 0-9 are acceptable)", label)
} else if len(token) < acceptableTokenLength {
- ldr.Logger.Warnf("%s: token is too short (should be at least %d characters)", label, acceptableTokenLength)
+ if ldr.Logger != nil {
+ ldr.Logger.Warnf("%s: token is too short (should be at least %d characters)", label, acceptableTokenLength)
+ }
+ }
+ return nil
+}
+
+func (ldr *Loader) checkEnum(label, value string, accepted ...string) error {
+ for _, s := range accepted {
+ if s == value {
+ return nil
+ }
+ }
+ return fmt.Errorf("%s: unacceptable value %q: must be one of %q", label, value, accepted)
+}
+
+func (ldr *Loader) setImplicitStorageClasses(cfg *arvados.Config) error {
+cluster:
+ for id, cc := range cfg.Clusters {
+ if len(cc.StorageClasses) > 0 {
+ continue cluster
+ }
+ for _, vol := range cc.Volumes {
+ if len(vol.StorageClasses) > 0 {
+ continue cluster
+ }
+ }
+ // No explicit StorageClasses config info at all; fill
+ // in implicit defaults.
+ for id, vol := range cc.Volumes {
+ vol.StorageClasses = map[string]bool{"default": true}
+ cc.Volumes[id] = vol
+ }
+ cc.StorageClasses = map[string]arvados.StorageClassConfig{"default": {Default: true}}
+ cfg.Clusters[id] = cc
+ }
+ return nil
+}
+
+func (ldr *Loader) checkStorageClasses(cc arvados.Cluster) error {
+ classOnVolume := map[string]bool{}
+ for volid, vol := range cc.Volumes {
+ if len(vol.StorageClasses) == 0 {
+ return fmt.Errorf("%s: volume has no StorageClasses listed", volid)
+ }
+ for classid := range vol.StorageClasses {
+ if _, ok := cc.StorageClasses[classid]; !ok {
+ return fmt.Errorf("%s: volume refers to storage class %q that is not defined in StorageClasses", volid, classid)
+ }
+ classOnVolume[classid] = true
+ }
+ }
+ haveDefault := false
+ for classid, sc := range cc.StorageClasses {
+ if !classOnVolume[classid] && len(cc.Volumes) > 0 {
+ ldr.Logger.Warnf("there are no volumes providing storage class %q", classid)
+ }
+ if sc.Default {
+ haveDefault = true
+ }
+ }
+ if !haveDefault {
+ return fmt.Errorf("there is no default storage class (at least one entry in StorageClasses must have Default: true)")
}
return nil
}
if ldr.Logger == nil {
return
}
- allowed := map[string]interface{}{}
- for k, v := range expected {
- allowed[strings.ToLower(k)] = v
- }
for k, vsupp := range supplied {
if k == "SAMPLE" {
// entry will be dropped in removeSampleKeys anyway
continue
}
- vexp, ok := allowed[strings.ToLower(k)]
+ vexp, ok := expected[k]
if expected["SAMPLE"] != nil {
+ // use the SAMPLE entry's keys as the
+ // "expected" map when checking vsupp
+ // recursively.
vexp = expected["SAMPLE"]
} else if !ok {
- ldr.Logger.Warnf("deprecated or unknown config entry: %s%s", prefix, k)
+ // check for a case-insensitive match
+ hint := ""
+ for ek := range expected {
+ if strings.EqualFold(k, ek) {
+ hint = " (perhaps you meant " + ek + "?)"
+ // If we don't delete this, it
+ // will end up getting merged,
+ // unpredictably
+ // merging/overriding the
+ // default.
+ delete(supplied, k)
+ break
+ }
+ }
+ ldr.Logger.Warnf("deprecated or unknown config entry: %s%s%s", prefix, k, hint)
continue
}
if vsupp, ok := vsupp.(map[string]interface{}); !ok {
var _ = check.Suite(&LoadSuite{})
+var emptyConfigYAML = `Clusters: {"z1111": {}}`
+
// Return a new Loader that reads cluster config from configdata
// (instead of the usual default /etc/arvados/config.yml), and logs to
// logdst or (if that's nil) c.Log.
}
func (s *LoadSuite) TestNoConfigs(c *check.C) {
- cfg, err := testLoader(c, `Clusters: {"z1111": {}}`, nil).Load()
+ cfg, err := testLoader(c, emptyConfigYAML, nil).Load()
c.Assert(err, check.IsNil)
c.Assert(cfg.Clusters, check.HasLen, 1)
cc, err := cfg.GetCluster("z1111")
f, err = ioutil.TempFile("", "")
c.Check(err, check.IsNil)
defer os.Remove(f.Name())
- io.WriteString(f, "Clusters: {aaaaa: {}}\n")
+ io.WriteString(f, emptyConfigYAML)
newfile := f.Name()
for _, trial := range []struct {
SystemRootToken: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
Collections:
BlobSigningKey: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
- postgresql: {}
- BadKey: {}
- Containers: {}
+ PostgreSQL: {}
+ BadKey1: {}
+ Containers:
+ RunTimeEngine: abc
RemoteClusters:
z2222:
Host: z2222.arvadosapi.com
Proxy: true
- BadKey: badValue
+ BadKey2: badValue
+ Services:
+ KeepStore:
+ InternalURLs:
+ "http://host.example:12345": {}
+ Keepstore:
+ InternalURLs:
+ "http://host.example:12345":
+ RendezVous: x
+ ServiceS:
+ Keepstore:
+ InternalURLs:
+ "http://host.example:12345": {}
+ Volumes:
+ zzzzz-nyw5e-aaaaaaaaaaaaaaa: {}
`, &logbuf).Load()
c.Assert(err, check.IsNil)
+ c.Log(logbuf.String())
logs := strings.Split(strings.TrimSuffix(logbuf.String(), "\n"), "\n")
for _, log := range logs {
- c.Check(log, check.Matches, `.*deprecated or unknown config entry:.*BadKey.*`)
+ c.Check(log, check.Matches, `.*deprecated or unknown config entry:.*(RunTimeEngine.*RuntimeEngine|BadKey1|BadKey2|KeepStore|ServiceS|RendezVous).*`)
}
- c.Check(logs, check.HasLen, 2)
+ c.Check(logs, check.HasLen, 6)
}
func (s *LoadSuite) checkSAMPLEKeys(c *check.C, path string, x interface{}) {
_, err := testLoader(c, `
Clusters:
zzzzz:
- postgresql:
- connection:
+ PostgreSQL:
+ Connection:
DBName: dbname
Host: host
`, nil).Load()
c.Check(err, check.ErrorMatches, `Clusters.zzzzz.PostgreSQL.Connection: multiple entries for "(dbname|host)".*`)
}
+func (s *LoadSuite) TestBadClusterIDs(c *check.C) {
+ for _, data := range []string{`
+Clusters:
+ 123456:
+ ManagementToken: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ SystemRootToken: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ Collections:
+ BlobSigningKey: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+`, `
+Clusters:
+ 12345:
+ ManagementToken: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ SystemRootToken: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ Collections:
+ BlobSigningKey: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ RemoteClusters:
+ Zzzzz:
+ Host: Zzzzz.arvadosapi.com
+ Proxy: true
+`, `
+Clusters:
+ abcde:
+ ManagementToken: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ SystemRootToken: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ Collections:
+ BlobSigningKey: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ Login:
+ LoginCluster: zz-zz
+`,
+ } {
+ c.Log(data)
+ v, err := testLoader(c, data, nil).Load()
+ if v != nil {
+ c.Logf("%#v", v.Clusters)
+ }
+ c.Check(err, check.ErrorMatches, `.*cluster ID should be 5 alphanumeric characters.*`)
+ }
+}
+
func (s *LoadSuite) TestBadType(c *check.C) {
for _, data := range []string{`
Clusters:
`)
}
-func checkEquivalent(c *check.C, goty, expectedy string) {
- gotldr := testLoader(c, goty, nil)
+func checkEquivalent(c *check.C, goty, expectedy string) string {
+ var logbuf bytes.Buffer
+ gotldr := testLoader(c, goty, &logbuf)
expectedldr := testLoader(c, expectedy, nil)
checkEquivalentLoaders(c, gotldr, expectedldr)
+ return logbuf.String()
}
func checkEqualYAML(c *check.C, got, expected interface{}) {
c.Errorf("Should have produced an error")
}
- var logbuf bytes.Buffer
- loader := testLoader(c, string(DefaultYAML), &logbuf)
+ loader := testLoader(c, string(DefaultYAML), nil)
cfg, err := loader.Load()
c.Assert(err, check.IsNil)
if err := checkListKeys("", cfg); err != nil {
c.Error(err)
}
}
+
+func (s *LoadSuite) TestImplicitStorageClasses(c *check.C) {
+ // If StorageClasses and Volumes.*.StorageClasses are all
+ // empty, there is a default storage class named "default".
+ ldr := testLoader(c, `{"Clusters":{"z1111":{}}}`, nil)
+ cfg, err := ldr.Load()
+ c.Assert(err, check.IsNil)
+ cc, err := cfg.GetCluster("z1111")
+ c.Assert(err, check.IsNil)
+ c.Check(cc.StorageClasses, check.HasLen, 1)
+ c.Check(cc.StorageClasses["default"].Default, check.Equals, true)
+ c.Check(cc.StorageClasses["default"].Priority, check.Equals, 0)
+
+ // The implicit "default" storage class is used by all
+ // volumes.
+ ldr = testLoader(c, `
+Clusters:
+ z1111:
+ Volumes:
+ z: {}`, nil)
+ cfg, err = ldr.Load()
+ c.Assert(err, check.IsNil)
+ cc, err = cfg.GetCluster("z1111")
+ c.Assert(err, check.IsNil)
+ c.Check(cc.StorageClasses, check.HasLen, 1)
+ c.Check(cc.StorageClasses["default"].Default, check.Equals, true)
+ c.Check(cc.StorageClasses["default"].Priority, check.Equals, 0)
+ c.Check(cc.Volumes["z"].StorageClasses["default"], check.Equals, true)
+
+ // The "default" storage class isn't implicit if any classes
+ // are configured explicitly.
+ ldr = testLoader(c, `
+Clusters:
+ z1111:
+ StorageClasses:
+ local:
+ Default: true
+ Priority: 111
+ Volumes:
+ z:
+ StorageClasses:
+ local: true`, nil)
+ cfg, err = ldr.Load()
+ c.Assert(err, check.IsNil)
+ cc, err = cfg.GetCluster("z1111")
+ c.Assert(err, check.IsNil)
+ c.Check(cc.StorageClasses, check.HasLen, 1)
+ c.Check(cc.StorageClasses["local"].Default, check.Equals, true)
+ c.Check(cc.StorageClasses["local"].Priority, check.Equals, 111)
+
+ // It is an error for a volume to refer to a storage class
+ // that isn't listed in StorageClasses.
+ ldr = testLoader(c, `
+Clusters:
+ z1111:
+ StorageClasses:
+ local:
+ Default: true
+ Priority: 111
+ Volumes:
+ z:
+ StorageClasses:
+ nx: true`, nil)
+ _, err = ldr.Load()
+ c.Assert(err, check.ErrorMatches, `z: volume refers to storage class "nx" that is not defined.*`)
+
+ // It is an error for a volume to refer to a storage class
+ // that isn't listed in StorageClasses ... even if it's
+ // "default", which would exist implicitly if it weren't
+ // referenced explicitly by a volume.
+ ldr = testLoader(c, `
+Clusters:
+ z1111:
+ Volumes:
+ z:
+ StorageClasses:
+ default: true`, nil)
+ _, err = ldr.Load()
+ c.Assert(err, check.ErrorMatches, `z: volume refers to storage class "default" that is not defined.*`)
+
+ // If the "default" storage class is configured explicitly, it
+ // is not used implicitly by any volumes, even if it's the
+ // only storage class.
+ var logbuf bytes.Buffer
+ ldr = testLoader(c, `
+Clusters:
+ z1111:
+ StorageClasses:
+ default:
+ Default: true
+ Priority: 111
+ Volumes:
+ z: {}`, &logbuf)
+ _, err = ldr.Load()
+ c.Assert(err, check.ErrorMatches, `z: volume has no StorageClasses listed`)
+
+ // If StorageClasses are configured explicitly, there must be
+ // at least one with Default: true. (Calling one "default" is
+ // not sufficient.)
+ ldr = testLoader(c, `
+Clusters:
+ z1111:
+ StorageClasses:
+ default:
+ Priority: 111
+ Volumes:
+ z:
+ StorageClasses:
+ default: true`, nil)
+ _, err = ldr.Load()
+ c.Assert(err, check.ErrorMatches, `there is no default storage class.*`)
+}
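
The behaviors exercised by these test cases correspond to a cluster config shaped like the following (a sketch assembled from the YAML literals above; `z1111`, `local`, and `z` are the example names the tests use):

```yaml
Clusters:
  z1111:
    StorageClasses:
      # If this section is omitted entirely, a "default" class with
      # Default: true and Priority: 0 is created implicitly and
      # assigned to every volume.
      local:
        Default: true   # at least one explicitly configured class must set this
        Priority: 111
    Volumes:
      z:
        StorageClasses:
          local: true   # must name a class defined under StorageClasses above
```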
"context"
"encoding/json"
"fmt"
+ "net"
"net/http"
"net/http/httptest"
"os"
s.fakeProvider.ValidClientSecret = "test#client/secret"
cluster := &arvados.Cluster{
- ClusterID: "zhome",
- PostgreSQL: integrationTestCluster().PostgreSQL,
- ForceLegacyAPI14: forceLegacyAPI14,
- SystemRootToken: arvadostest.SystemRootToken,
+ ClusterID: "zhome",
+ PostgreSQL: integrationTestCluster().PostgreSQL,
+ SystemRootToken: arvadostest.SystemRootToken,
}
cluster.TLS.Insecure = true
cluster.API.MaxItemsPerResponse = 1000
cluster.Login.OpenIDConnect.ClientSecret = s.fakeProvider.ValidClientSecret
cluster.Login.OpenIDConnect.EmailClaim = "email"
cluster.Login.OpenIDConnect.EmailVerifiedClaim = "email_verified"
+ cluster.Login.OpenIDConnect.AcceptAccessToken = true
+ cluster.Login.OpenIDConnect.AcceptAccessTokenScope = ""
- s.testHandler = &Handler{Cluster: cluster}
+ s.testHandler = &Handler{Cluster: cluster, BackgroundContext: ctxlog.Context(context.Background(), s.log)}
s.testServer = newServerFromIntegrationTestEnv(c)
- s.testServer.Server.Handler = httpserver.HandlerWithContext(
- ctxlog.Context(context.Background(), s.log),
- httpserver.AddRequestIDs(httpserver.LogRequests(s.testHandler)))
+ s.testServer.Server.BaseContext = func(net.Listener) context.Context {
+ return ctxlog.Context(context.Background(), s.log)
+ }
+ s.testServer.Server.Handler = httpserver.AddRequestIDs(httpserver.LogRequests(s.testHandler))
c.Assert(s.testServer.Start(), check.IsNil)
}
// Command starts a controller service. See cmd/arvados-server/cmd.go
var Command cmd.Handler = service.Command(arvados.ServiceNameController, newHandler)
-func newHandler(_ context.Context, cluster *arvados.Cluster, _ string, _ *prometheus.Registry) service.Handler {
- return &Handler{Cluster: cluster}
+func newHandler(ctx context.Context, cluster *arvados.Cluster, _ string, _ *prometheus.Registry) service.Handler {
+ return &Handler{Cluster: cluster, BackgroundContext: ctx}
}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package dblock
+
+import (
+ "context"
+ "database/sql"
+ "sync"
+ "time"
+
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "github.com/jmoiron/sqlx"
+)
+
+var (
+ TrashSweep = &DBLocker{key: 10001}
+ retryDelay = 5 * time.Second
+)
+
+// DBLocker uses pg_advisory_lock to maintain a cluster-wide lock for
+// a long-running task like "do X every N seconds".
+type DBLocker struct {
+ key int
+ mtx sync.Mutex
+ ctx context.Context
+ getdb func(context.Context) (*sqlx.DB, error)
+ conn *sql.Conn // != nil if advisory lock has been acquired
+}
+
+// Lock acquires the advisory lock, waiting/reconnecting if needed.
+func (dbl *DBLocker) Lock(ctx context.Context, getdb func(context.Context) (*sqlx.DB, error)) {
+ logger := ctxlog.FromContext(ctx)
+ for ; ; time.Sleep(retryDelay) {
+ dbl.mtx.Lock()
+ if dbl.conn != nil {
+			// Another goroutine already holds, or
+			// is waiting for, this lock. Wait for
+			// it to be released.
+ dbl.mtx.Unlock()
+ continue
+ }
+ db, err := getdb(ctx)
+ if err != nil {
+			logger.WithError(err).Info("error getting database pool")
+ dbl.mtx.Unlock()
+ continue
+ }
+ conn, err := db.Conn(ctx)
+ if err != nil {
+ logger.WithError(err).Info("error getting database connection")
+ dbl.mtx.Unlock()
+ continue
+ }
+ var locked bool
+ err = conn.QueryRowContext(ctx, `SELECT pg_try_advisory_lock($1)`, dbl.key).Scan(&locked)
+ if err != nil {
+ logger.WithError(err).Infof("error getting pg_try_advisory_lock %d", dbl.key)
+ conn.Close()
+ dbl.mtx.Unlock()
+ continue
+ }
+ if !locked {
+ conn.Close()
+ dbl.mtx.Unlock()
+ continue
+ }
+ logger.Debugf("acquired pg_advisory_lock %d", dbl.key)
+ dbl.ctx, dbl.getdb, dbl.conn = ctx, getdb, conn
+ dbl.mtx.Unlock()
+ return
+ }
+}
+
+// Check confirms that the lock is still active (i.e., the session is
+// still alive), and re-acquires if needed. Panics if Lock is not
+// acquired first.
+func (dbl *DBLocker) Check() {
+ dbl.mtx.Lock()
+ err := dbl.conn.PingContext(dbl.ctx)
+ if err == nil {
+ ctxlog.FromContext(dbl.ctx).Debugf("pg_advisory_lock %d connection still alive", dbl.key)
+ dbl.mtx.Unlock()
+ return
+ }
+ ctxlog.FromContext(dbl.ctx).WithError(err).Info("database connection ping failed")
+ dbl.conn.Close()
+ dbl.conn = nil
+ ctx, getdb := dbl.ctx, dbl.getdb
+ dbl.mtx.Unlock()
+ dbl.Lock(ctx, getdb)
+}
+
+func (dbl *DBLocker) Unlock() {
+ dbl.mtx.Lock()
+ defer dbl.mtx.Unlock()
+ if dbl.conn != nil {
+ _, err := dbl.conn.ExecContext(context.Background(), `SELECT pg_advisory_unlock($1)`, dbl.key)
+ if err != nil {
+ ctxlog.FromContext(dbl.ctx).WithError(err).Infof("error releasing pg_advisory_lock %d", dbl.key)
+ } else {
+ ctxlog.FromContext(dbl.ctx).Debugf("released pg_advisory_lock %d", dbl.key)
+ }
+ dbl.conn.Close()
+ dbl.conn = nil
+ }
+}
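
DBLocker's correctness hinges on `pg_try_advisory_lock` granting a given key to at most one database session at a time. A toy in-memory stand-in (not part of this patch; `advisoryTable` is invented purely for illustration) shows the semantics the retry loop above depends on:

```go
package main

import (
	"fmt"
	"sync"
)

// advisoryTable mimics Postgres advisory-lock semantics in memory:
// tryLock succeeds only while no other holder has the same key.
type advisoryTable struct {
	mu   sync.Mutex
	held map[int]bool
}

func (t *advisoryTable) tryLock(key int) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.held[key] {
		return false // another "session" holds it; caller must retry
	}
	t.held[key] = true
	return true
}

func (t *advisoryTable) unlock(key int) {
	t.mu.Lock()
	defer t.mu.Unlock()
	delete(t.held, key)
}

func main() {
	table := &advisoryTable{held: map[int]bool{}}
	const key = 10001 // same key value as dblock.TrashSweep

	fmt.Println(table.tryLock(key)) // first caller wins: true
	fmt.Println(table.tryLock(key)) // second caller must retry, as in Lock's loop: false
	table.unlock(key)
	fmt.Println(table.tryLock(key)) // available again after release: true
}
```

In the real implementation the `held` map is the Postgres server's session-scoped lock table, which is why a dropped connection (detected by `Check`'s ping) releases the lock and forces re-acquisition.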
+++ /dev/null
-// Copyright (C) The Arvados Authors. All rights reserved.
-//
-// SPDX-License-Identifier: AGPL-3.0
-
-package controller
-
-import (
- "bufio"
- "bytes"
- "context"
- "crypto/md5"
- "encoding/json"
- "fmt"
- "io"
- "io/ioutil"
- "net/http"
- "strings"
- "sync"
-
- "git.arvados.org/arvados.git/sdk/go/arvados"
- "git.arvados.org/arvados.git/sdk/go/httpserver"
- "git.arvados.org/arvados.git/sdk/go/keepclient"
-)
-
-func rewriteSignatures(clusterID string, expectHash string,
- resp *http.Response, requestError error) (newResponse *http.Response, err error) {
-
- if requestError != nil {
- return resp, requestError
- }
-
- if resp.StatusCode != http.StatusOK {
- return resp, nil
- }
-
- originalBody := resp.Body
- defer originalBody.Close()
-
- var col arvados.Collection
- err = json.NewDecoder(resp.Body).Decode(&col)
- if err != nil {
- return nil, err
- }
-
- // rewriting signatures will make manifest text 5-10% bigger so calculate
- // capacity accordingly
- updatedManifest := bytes.NewBuffer(make([]byte, 0, int(float64(len(col.ManifestText))*1.1)))
-
- hasher := md5.New()
- mw := io.MultiWriter(hasher, updatedManifest)
- sz := 0
-
- scanner := bufio.NewScanner(strings.NewReader(col.ManifestText))
- scanner.Buffer(make([]byte, 1048576), len(col.ManifestText))
- for scanner.Scan() {
- line := scanner.Text()
- tokens := strings.Split(line, " ")
- if len(tokens) < 3 {
- return nil, fmt.Errorf("Invalid stream (<3 tokens): %q", line)
- }
-
- n, err := mw.Write([]byte(tokens[0]))
- if err != nil {
- return nil, fmt.Errorf("Error updating manifest: %v", err)
- }
- sz += n
- for _, token := range tokens[1:] {
- n, err = mw.Write([]byte(" "))
- if err != nil {
- return nil, fmt.Errorf("Error updating manifest: %v", err)
- }
- sz += n
-
- m := keepclient.SignedLocatorRe.FindStringSubmatch(token)
- if m != nil {
- // Rewrite the block signature to be a remote signature
- _, err = fmt.Fprintf(updatedManifest, "%s%s%s+R%s-%s%s", m[1], m[2], m[3], clusterID, m[5][2:], m[8])
- if err != nil {
- return nil, fmt.Errorf("Error updating manifest: %v", err)
- }
-
- // for hash checking, ignore signatures
- n, err = fmt.Fprintf(hasher, "%s%s", m[1], m[2])
- if err != nil {
- return nil, fmt.Errorf("Error updating manifest: %v", err)
- }
- sz += n
- } else {
- n, err = mw.Write([]byte(token))
- if err != nil {
- return nil, fmt.Errorf("Error updating manifest: %v", err)
- }
- sz += n
- }
- }
- n, err = mw.Write([]byte("\n"))
- if err != nil {
- return nil, fmt.Errorf("Error updating manifest: %v", err)
- }
- sz += n
- }
-
- // Check that expected hash is consistent with
- // portable_data_hash field of the returned record
- if expectHash == "" {
- expectHash = col.PortableDataHash
- } else if expectHash != col.PortableDataHash {
- return nil, fmt.Errorf("portable_data_hash %q on returned record did not match expected hash %q ", expectHash, col.PortableDataHash)
- }
-
- // Certify that the computed hash of the manifest_text matches our expectation
- sum := hasher.Sum(nil)
- computedHash := fmt.Sprintf("%x+%v", sum, sz)
- if computedHash != expectHash {
- return nil, fmt.Errorf("Computed manifest_text hash %q did not match expected hash %q", computedHash, expectHash)
- }
-
- col.ManifestText = updatedManifest.String()
-
- newbody, err := json.Marshal(col)
- if err != nil {
- return nil, err
- }
-
- buf := bytes.NewBuffer(newbody)
- resp.Body = ioutil.NopCloser(buf)
- resp.ContentLength = int64(buf.Len())
- resp.Header.Set("Content-Length", fmt.Sprintf("%v", buf.Len()))
-
- return resp, nil
-}
-
-func filterLocalClusterResponse(resp *http.Response, requestError error) (newResponse *http.Response, err error) {
- if requestError != nil {
- return resp, requestError
- }
-
- if resp.StatusCode == http.StatusNotFound {
- // Suppress returning this result, because we want to
- // search the federation.
- return nil, nil
- }
- return resp, nil
-}
-
-type searchRemoteClusterForPDH struct {
- pdh string
- remoteID string
- mtx *sync.Mutex
- sentResponse *bool
- sharedContext *context.Context
- cancelFunc func()
- errors *[]string
- statusCode *int
-}
-
-func fetchRemoteCollectionByUUID(
- h *genericFederatedRequestHandler,
- effectiveMethod string,
- clusterID *string,
- uuid string,
- remainder string,
- w http.ResponseWriter,
- req *http.Request) bool {
-
- if effectiveMethod != "GET" {
- // Only handle GET requests right now
- return false
- }
-
- if uuid != "" {
- // Collection UUID GET request
- *clusterID = uuid[0:5]
- if *clusterID != "" && *clusterID != h.handler.Cluster.ClusterID {
- // request for remote collection by uuid
- resp, err := h.handler.remoteClusterRequest(*clusterID, req)
- newResponse, err := rewriteSignatures(*clusterID, "", resp, err)
- h.handler.proxy.ForwardResponse(w, newResponse, err)
- return true
- }
- }
-
- return false
-}
-
-func fetchRemoteCollectionByPDH(
- h *genericFederatedRequestHandler,
- effectiveMethod string,
- clusterID *string,
- uuid string,
- remainder string,
- w http.ResponseWriter,
- req *http.Request) bool {
-
- if effectiveMethod != "GET" {
- // Only handle GET requests right now
- return false
- }
-
- m := collectionsByPDHRe.FindStringSubmatch(req.URL.Path)
- if len(m) != 2 {
- return false
- }
-
- // Request for collection by PDH. Search the federation.
-
- // First, query the local cluster.
- resp, err := h.handler.localClusterRequest(req)
- newResp, err := filterLocalClusterResponse(resp, err)
- if newResp != nil || err != nil {
- h.handler.proxy.ForwardResponse(w, newResp, err)
- return true
- }
-
- // Create a goroutine for each cluster in the
- // RemoteClusters map. The first valid result gets
- // returned to the client. When that happens, all
- // other outstanding requests are cancelled
- sharedContext, cancelFunc := context.WithCancel(req.Context())
- defer cancelFunc()
-
- req = req.WithContext(sharedContext)
- wg := sync.WaitGroup{}
- pdh := m[1]
- success := make(chan *http.Response)
- errorChan := make(chan error, len(h.handler.Cluster.RemoteClusters))
-
- acquire, release := semaphore(h.handler.Cluster.API.MaxRequestAmplification)
-
- for remoteID := range h.handler.Cluster.RemoteClusters {
- if remoteID == h.handler.Cluster.ClusterID {
- // No need to query local cluster again
- continue
- }
- if remoteID == "*" {
- // This isn't a real remote cluster: it just sets defaults for unlisted remotes.
- continue
- }
-
- wg.Add(1)
- go func(remote string) {
- defer wg.Done()
- acquire()
- defer release()
- select {
- case <-sharedContext.Done():
- return
- default:
- }
-
- resp, err := h.handler.remoteClusterRequest(remote, req)
- wasSuccess := false
- defer func() {
- if resp != nil && !wasSuccess {
- resp.Body.Close()
- }
- }()
- if err != nil {
- errorChan <- err
- return
- }
- if resp.StatusCode != http.StatusOK {
- errorChan <- HTTPError{resp.Status, resp.StatusCode}
- return
- }
- select {
- case <-sharedContext.Done():
- return
- default:
- }
-
- newResponse, err := rewriteSignatures(remote, pdh, resp, nil)
- if err != nil {
- errorChan <- err
- return
- }
- select {
- case <-sharedContext.Done():
- case success <- newResponse:
- wasSuccess = true
- }
- }(remoteID)
- }
- go func() {
- wg.Wait()
- cancelFunc()
- }()
-
- errorCode := http.StatusNotFound
-
- for {
- select {
- case newResp = <-success:
- h.handler.proxy.ForwardResponse(w, newResp, nil)
- return true
- case <-sharedContext.Done():
- var errors []string
- for len(errorChan) > 0 {
- err := <-errorChan
- if httperr, ok := err.(HTTPError); !ok || httperr.Code != http.StatusNotFound {
- errorCode = http.StatusBadGateway
- }
- errors = append(errors, err.Error())
- }
- httpserver.Errors(w, errors, errorCode)
- return true
- }
- }
-
- // shouldn't ever get here
- return true
-}
+++ /dev/null
-// Copyright (C) The Arvados Authors. All rights reserved.
-//
-// SPDX-License-Identifier: AGPL-3.0
-
-package controller
-
-import (
- "bytes"
- "encoding/json"
- "fmt"
- "io/ioutil"
- "net/http"
- "strings"
-
- "git.arvados.org/arvados.git/sdk/go/auth"
- "git.arvados.org/arvados.git/sdk/go/httpserver"
-)
-
-func remoteContainerRequestCreate(
- h *genericFederatedRequestHandler,
- effectiveMethod string,
- clusterID *string,
- uuid string,
- remainder string,
- w http.ResponseWriter,
- req *http.Request) bool {
-
- if effectiveMethod != "POST" || uuid != "" || remainder != "" {
- return false
- }
-
- // First make sure supplied token is valid.
- creds := auth.NewCredentials()
- creds.LoadTokensFromHTTPRequest(req)
-
- currentUser, ok, err := h.handler.validateAPItoken(req, creds.Tokens[0])
- if err != nil {
- httpserver.Error(w, err.Error(), http.StatusInternalServerError)
- return true
- } else if !ok {
- httpserver.Error(w, "invalid API token", http.StatusForbidden)
- return true
- }
-
- if *clusterID == "" || *clusterID == h.handler.Cluster.ClusterID {
- // Submitting container request to local cluster. No
- // need to set a runtime_token (rails api will create
- // one when the container runs) or do a remote cluster
- // request.
- return false
- }
-
- if req.Header.Get("Content-Type") != "application/json" {
- httpserver.Error(w, "Expected Content-Type: application/json, got "+req.Header.Get("Content-Type"), http.StatusBadRequest)
- return true
- }
-
- originalBody := req.Body
- defer originalBody.Close()
- var request map[string]interface{}
- err = json.NewDecoder(req.Body).Decode(&request)
- if err != nil {
- httpserver.Error(w, err.Error(), http.StatusBadRequest)
- return true
- }
-
- crString, ok := request["container_request"].(string)
- if ok {
- var crJSON map[string]interface{}
- err := json.Unmarshal([]byte(crString), &crJSON)
- if err != nil {
- httpserver.Error(w, err.Error(), http.StatusBadRequest)
- return true
- }
-
- request["container_request"] = crJSON
- }
-
- containerRequest, ok := request["container_request"].(map[string]interface{})
- if !ok {
- // Use toplevel object as the container_request object
- containerRequest = request
- }
-
- // If runtime_token is not set, create a new token
- if _, ok := containerRequest["runtime_token"]; !ok {
- if len(currentUser.Authorization.Scopes) != 1 || currentUser.Authorization.Scopes[0] != "all" {
- httpserver.Error(w, "Token scope is not [all]", http.StatusForbidden)
- return true
- }
-
- if strings.HasPrefix(currentUser.Authorization.UUID, h.handler.Cluster.ClusterID) {
- // Local user, submitting to a remote cluster.
- // Create a new time-limited token.
- newtok, err := h.handler.createAPItoken(req, currentUser.UUID, nil)
- if err != nil {
- httpserver.Error(w, err.Error(), http.StatusForbidden)
- return true
- }
- containerRequest["runtime_token"] = newtok.TokenV2()
- } else {
- // Remote user. Container request will use the
- // current token, minus the trailing portion
- // (optional container uuid).
- sp := strings.Split(creds.Tokens[0], "/")
- if len(sp) >= 3 {
- containerRequest["runtime_token"] = strings.Join(sp[0:3], "/")
- } else {
- containerRequest["runtime_token"] = creds.Tokens[0]
- }
- }
- }
-
- newbody, err := json.Marshal(request)
- buf := bytes.NewBuffer(newbody)
- req.Body = ioutil.NopCloser(buf)
- req.ContentLength = int64(buf.Len())
- req.Header.Set("Content-Length", fmt.Sprintf("%v", buf.Len()))
-
- resp, err := h.handler.remoteClusterRequest(*clusterID, req)
- h.handler.proxy.ForwardResponse(w, resp, err)
- return true
-}
wfHandler := &genericFederatedRequestHandler{next, h, wfRe, nil}
containersHandler := &genericFederatedRequestHandler{next, h, containersRe, nil}
- containerRequestsHandler := &genericFederatedRequestHandler{next, h, containerRequestsRe,
- []federatedRequestDelegate{remoteContainerRequestCreate}}
- collectionsRequestsHandler := &genericFederatedRequestHandler{next, h, collectionsRe,
- []federatedRequestDelegate{fetchRemoteCollectionByUUID, fetchRemoteCollectionByPDH}}
linksRequestsHandler := &genericFederatedRequestHandler{next, h, linksRe, nil}
mux.Handle("/arvados/v1/workflows", wfHandler)
mux.Handle("/arvados/v1/workflows/", wfHandler)
mux.Handle("/arvados/v1/containers", containersHandler)
mux.Handle("/arvados/v1/containers/", containersHandler)
- mux.Handle("/arvados/v1/container_requests", containerRequestsHandler)
- mux.Handle("/arvados/v1/container_requests/", containerRequestsHandler)
- mux.Handle("/arvados/v1/collections", collectionsRequestsHandler)
- mux.Handle("/arvados/v1/collections/", collectionsRequestsHandler)
mux.Handle("/arvados/v1/links", linksRequestsHandler)
mux.Handle("/arvados/v1/links/", linksRequestsHandler)
mux.Handle("/", next)
mux.ServeHTTP(w, req)
})
-
- return mux
}
type CurrentUser struct {
}
return &arvados.APIClientAuthorization{
- UUID: uuid,
- APIToken: token,
- ExpiresAt: "",
- Scopes: scopes}, nil
+ UUID: uuid,
+ APIToken: token,
+ Scopes: scopes}, nil
}
// Extract the auth token supplied in req, and replace it with a
return updatedReq, nil
}
- ctxlog.FromContext(req.Context()).Infof("saltAuthToken: cluster %s token %s remote %s", h.Cluster.ClusterID, creds.Tokens[0], remote)
+ ctxlog.FromContext(req.Context()).Debugf("saltAuthToken: cluster %s token %s remote %s", h.Cluster.ClusterID, creds.Tokens[0], remote)
token, err := auth.SaltToken(creds.Tokens[0], remote)
- if err == auth.ErrObsoleteToken {
+ if err == auth.ErrObsoleteToken || err == auth.ErrTokenFormat {
// If the token exists in our own database for our own
// user, salt it for the remote. Otherwise, assume it
// was issued by the remote, and pass it through
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/auth"
"git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/health"
)
type Conn struct {
remotes map[string]backend
}
-func New(cluster *arvados.Cluster) *Conn {
+func New(cluster *arvados.Cluster, healthFuncs *map[string]health.Func) *Conn {
local := localdb.NewConn(cluster)
remotes := map[string]backend{}
for id, remote := range cluster.RemoteClusters {
if !remote.Proxy || id == cluster.ClusterID {
continue
}
- conn := rpc.NewConn(id, &url.URL{Scheme: remote.Scheme, Host: remote.Host}, remote.Insecure, saltedTokenProvider(local, id))
+ conn := rpc.NewConn(id, &url.URL{Scheme: remote.Scheme, Host: remote.Host}, remote.Insecure, saltedTokenProvider(cluster, local, id))
// Older versions of controller rely on the Via header
// to detect loops.
conn.SendHeader = http.Header{"Via": {"HTTP/1.1 arvados-controller"}}
remotes[id] = conn
}
+ if healthFuncs != nil {
+ hf := map[string]health.Func{"vocabulary": local.LastVocabularyError}
+ *healthFuncs = hf
+ }
+
return &Conn{
cluster: cluster,
local: local,
// tokens from an incoming request context, determines whether they
// should (and can) be salted for the given remoteID, and returns the
// resulting tokens.
-func saltedTokenProvider(local backend, remoteID string) rpc.TokenProvider {
+func saltedTokenProvider(cluster *arvados.Cluster, local backend, remoteID string) rpc.TokenProvider {
return func(ctx context.Context) ([]string, error) {
var tokens []string
incoming, ok := auth.FromContext(ctx)
return nil, errors.New("no token provided")
}
for _, token := range incoming.Tokens {
+ if strings.HasPrefix(token, "v2/"+cluster.ClusterID+"-") && remoteID == cluster.Login.LoginCluster {
+			// Don't forward a locally issued
+			// token: the login cluster would call
+			// back to us and then reject our
+			// response because the user UUID
+			// prefix (i.e., the LoginCluster
+			// prefix) won't match the token UUID
+			// prefix (i.e., our prefix).
+ return nil, httpErrorf(http.StatusUnauthorized, "cannot use a locally issued token to forward a request to our login cluster (%s)", remoteID)
+ }
salted, err := auth.SaltToken(token, remoteID)
switch err {
case nil:
tokens = append(tokens, salted)
case auth.ErrSalted:
tokens = append(tokens, token)
+ case auth.ErrTokenFormat:
+ // pass through unmodified (assume it's an OIDC access token)
+ tokens = append(tokens, token)
case auth.ErrObsoleteToken:
ctx := auth.NewContext(ctx, &auth.Credentials{Tokens: []string{token}})
aca, err := local.APIClientAuthorizationCurrent(ctx, arvados.GetOptions{})
return json.RawMessage(buf.Bytes()), err
}
+func (conn *Conn) VocabularyGet(ctx context.Context) (arvados.Vocabulary, error) {
+ return conn.chooseBackend(conn.cluster.ClusterID).VocabularyGet(ctx)
+}
+
func (conn *Conn) Login(ctx context.Context, options arvados.LoginOptions) (arvados.LoginResponse, error) {
if id := conn.cluster.Login.LoginCluster; id != "" && id != conn.cluster.ClusterID {
// defer entire login procedure to designated cluster
if err != nil {
return err
}
- // options.UUID is either hash+size or
- // hash+size+hints; only hash+size need to
- // match the computed PDH.
- if pdh := arvados.PortableDataHash(c.ManifestText); pdh != options.UUID && !strings.HasPrefix(options.UUID, pdh+"+") {
- err = httpErrorf(http.StatusBadGateway, "bad portable data hash %q received from remote %q (expected %q)", pdh, remoteID, options.UUID)
- ctxlog.FromContext(ctx).Warn(err)
- return err
+ haveManifest := true
+ if options.Select != nil {
+ haveManifest = false
+ for _, s := range options.Select {
+ if s == "manifest_text" {
+ haveManifest = true
+ break
+ }
+ }
+ }
+ if haveManifest {
+ pdh := arvados.PortableDataHash(c.ManifestText)
+ // options.UUID is either hash+size or
+ // hash+size+hints; only hash+size need to
+ // match the computed PDH.
+ if pdh != options.UUID && !strings.HasPrefix(options.UUID, pdh+"+") {
+ err = httpErrorf(http.StatusBadGateway, "bad portable data hash %q received from remote %q (expected %q)", pdh, remoteID, options.UUID)
+ ctxlog.FromContext(ctx).Warn(err)
+ return err
+ }
}
if remoteID != "" {
c.ManifestText = rewriteManifest(c.ManifestText, remoteID)
return conn.chooseBackend(options.UUID).ContainerRequestDelete(ctx, options)
}
+func (conn *Conn) GroupCreate(ctx context.Context, options arvados.CreateOptions) (arvados.Group, error) {
+ return conn.chooseBackend(options.ClusterID).GroupCreate(ctx, options)
+}
+
+func (conn *Conn) GroupUpdate(ctx context.Context, options arvados.UpdateOptions) (arvados.Group, error) {
+ return conn.chooseBackend(options.UUID).GroupUpdate(ctx, options)
+}
+
+func (conn *Conn) GroupGet(ctx context.Context, options arvados.GetOptions) (arvados.Group, error) {
+ return conn.chooseBackend(options.UUID).GroupGet(ctx, options)
+}
+
+func (conn *Conn) GroupList(ctx context.Context, options arvados.ListOptions) (arvados.GroupList, error) {
+ return conn.generated_GroupList(ctx, options)
+}
+
+var userUuidRe = regexp.MustCompile(`^[0-9a-z]{5}-tpzed-[0-9a-z]{15}$`)
+
+func (conn *Conn) GroupContents(ctx context.Context, options arvados.GroupContentsOptions) (arvados.ObjectList, error) {
+ if options.ClusterID != "" {
+ // explicitly selected cluster
+ return conn.chooseBackend(options.ClusterID).GroupContents(ctx, options)
+ } else if userUuidRe.MatchString(options.UUID) {
+ // user, get the things they own on the local cluster
+ return conn.local.GroupContents(ctx, options)
+ } else {
+		// a group UUID, which may require a federated request
+ return conn.chooseBackend(options.UUID).GroupContents(ctx, options)
+ }
+}
+
+func (conn *Conn) GroupShared(ctx context.Context, options arvados.ListOptions) (arvados.GroupList, error) {
+ return conn.chooseBackend(options.ClusterID).GroupShared(ctx, options)
+}
+
+func (conn *Conn) GroupDelete(ctx context.Context, options arvados.DeleteOptions) (arvados.Group, error) {
+ return conn.chooseBackend(options.UUID).GroupDelete(ctx, options)
+}
+
+func (conn *Conn) GroupTrash(ctx context.Context, options arvados.DeleteOptions) (arvados.Group, error) {
+ return conn.chooseBackend(options.UUID).GroupTrash(ctx, options)
+}
+
+func (conn *Conn) GroupUntrash(ctx context.Context, options arvados.UntrashOptions) (arvados.Group, error) {
+ return conn.chooseBackend(options.UUID).GroupUntrash(ctx, options)
+}
+
+func (conn *Conn) LinkCreate(ctx context.Context, options arvados.CreateOptions) (arvados.Link, error) {
+ return conn.chooseBackend(options.ClusterID).LinkCreate(ctx, options)
+}
+
+func (conn *Conn) LinkUpdate(ctx context.Context, options arvados.UpdateOptions) (arvados.Link, error) {
+ return conn.chooseBackend(options.UUID).LinkUpdate(ctx, options)
+}
+
+func (conn *Conn) LinkGet(ctx context.Context, options arvados.GetOptions) (arvados.Link, error) {
+ return conn.chooseBackend(options.UUID).LinkGet(ctx, options)
+}
+
+func (conn *Conn) LinkList(ctx context.Context, options arvados.ListOptions) (arvados.LinkList, error) {
+ return conn.generated_LinkList(ctx, options)
+}
+
+func (conn *Conn) LinkDelete(ctx context.Context, options arvados.DeleteOptions) (arvados.Link, error) {
+ return conn.chooseBackend(options.UUID).LinkDelete(ctx, options)
+}
+
func (conn *Conn) SpecimenList(ctx context.Context, options arvados.ListOptions) (arvados.SpecimenList, error) {
return conn.generated_SpecimenList(ctx, options)
}
return conn.chooseBackend(options.UUID).SpecimenDelete(ctx, options)
}
+func (conn *Conn) SysTrashSweep(ctx context.Context, options struct{}) (struct{}, error) {
+ return conn.local.SysTrashSweep(ctx, options)
+}
+
var userAttrsCachedFromLoginCluster = map[string]bool{
"created_at": true,
"email": true,
"modified_at": true,
"prefs": true,
"username": true,
+ "kind": true,
"etag": false,
"full_name": false,
return resp, err
}
-func (conn *Conn) UserUpdateUUID(ctx context.Context, options arvados.UpdateUUIDOptions) (arvados.User, error) {
- return conn.local.UserUpdateUUID(ctx, options)
-}
-
func (conn *Conn) UserMerge(ctx context.Context, options arvados.UserMergeOptions) (arvados.User, error) {
return conn.local.UserMerge(ctx, options)
}
return conn.chooseBackend(options.UUID).APIClientAuthorizationCurrent(ctx, options)
}
+func (conn *Conn) APIClientAuthorizationCreate(ctx context.Context, options arvados.CreateOptions) (arvados.APIClientAuthorization, error) {
+ if conn.cluster.Login.LoginCluster != "" {
+ return conn.chooseBackend(conn.cluster.Login.LoginCluster).APIClientAuthorizationCreate(ctx, options)
+ }
+ ownerUUID, ok := options.Attrs["owner_uuid"].(string)
+ if ok && ownerUUID != "" {
+ return conn.chooseBackend(ownerUUID).APIClientAuthorizationCreate(ctx, options)
+ }
+ return conn.local.APIClientAuthorizationCreate(ctx, options)
+}
+
+func (conn *Conn) APIClientAuthorizationUpdate(ctx context.Context, options arvados.UpdateOptions) (arvados.APIClientAuthorization, error) {
+ return conn.chooseBackend(options.UUID).APIClientAuthorizationUpdate(ctx, options)
+}
+
+func (conn *Conn) APIClientAuthorizationDelete(ctx context.Context, options arvados.DeleteOptions) (arvados.APIClientAuthorization, error) {
+ return conn.chooseBackend(options.UUID).APIClientAuthorizationDelete(ctx, options)
+}
+
+func (conn *Conn) APIClientAuthorizationList(ctx context.Context, options arvados.ListOptions) (arvados.APIClientAuthorizationList, error) {
+ return conn.local.APIClientAuthorizationList(ctx, options)
+}
+
+func (conn *Conn) APIClientAuthorizationGet(ctx context.Context, options arvados.GetOptions) (arvados.APIClientAuthorization, error) {
+ return conn.chooseBackend(options.UUID).APIClientAuthorizationGet(ctx, options)
+}
+
type backend interface {
arvados.API
BaseURL() url.URL
"os"
"testing"
+ "git.arvados.org/arvados.git/lib/config"
"git.arvados.org/arvados.git/lib/controller/router"
"git.arvados.org/arvados.git/lib/controller/rpc"
+ "git.arvados.org/arvados.git/lib/ctrlctx"
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/arvadostest"
"git.arvados.org/arvados.git/sdk/go/auth"
"git.arvados.org/arvados.git/sdk/go/ctxlog"
"git.arvados.org/arvados.git/sdk/go/httpserver"
+ "github.com/jmoiron/sqlx"
check "gopkg.in/check.v1"
)
// FederationSuite does some generic setup/teardown. Don't add Test*
// methods to FederationSuite itself.
type FederationSuite struct {
- cluster *arvados.Cluster
- ctx context.Context
- fed *Conn
+ integrationTestCluster *arvados.Cluster
+ cluster *arvados.Cluster
+ ctx context.Context
+ tx *sqlx.Tx
+ fed *Conn
+}
+
+func (s *FederationSuite) SetUpSuite(c *check.C) {
+ cfg, err := config.NewLoader(nil, ctxlog.TestLogger(c)).Load()
+ c.Assert(err, check.IsNil)
+ s.integrationTestCluster, err = cfg.GetCluster("")
+ c.Assert(err, check.IsNil)
}
func (s *FederationSuite) SetUpTest(c *check.C) {
Host: os.Getenv("ARVADOS_API_HOST"),
},
},
+ PostgreSQL: s.integrationTestCluster.PostgreSQL,
}
arvadostest.SetServiceURL(&s.cluster.Services.RailsAPI, "https://"+os.Getenv("ARVADOS_TEST_API_HOST"))
s.cluster.TLS.Insecure = true
s.cluster.API.MaxItemsPerResponse = 3
+ tx, err := arvadostest.DB(c, s.cluster).Beginx()
+ c.Assert(err, check.IsNil)
+ s.tx = tx
+
ctx := context.Background()
ctx = ctxlog.Context(ctx, ctxlog.TestLogger(c))
ctx = auth.NewContext(ctx, &auth.Credentials{Tokens: []string{arvadostest.ActiveTokenV2}})
+ ctx = ctrlctx.NewWithTransaction(ctx, s.tx)
s.ctx = ctx
- s.fed = New(s.cluster)
+ s.fed = New(s.cluster, nil)
+}
+
+func (s *FederationSuite) TearDownTest(c *check.C) {
+ s.tx.Rollback()
}
func (s *FederationSuite) addDirectRemote(c *check.C, id string, backend backend) {
func (s *FederationSuite) addHTTPRemote(c *check.C, id string, backend backend) {
srv := httpserver.Server{Addr: ":"}
- srv.Handler = router.New(backend, nil)
+ srv.Handler = router.New(backend, router.Config{})
c.Check(srv.Start(), check.IsNil)
s.cluster.RemoteClusters[id] = arvados.RemoteCluster{
Scheme: "http",
Host: srv.Addr,
Proxy: true,
}
- s.fed.remotes[id] = rpc.NewConn(id, &url.URL{Scheme: "http", Host: srv.Addr}, true, saltedTokenProvider(s.fed.local, id))
+ s.fed.remotes[id] = rpc.NewConn(id, &url.URL{Scheme: "http", Host: srv.Addr}, true, saltedTokenProvider(s.cluster, s.fed.local, id))
}
//
// SPDX-License-Identifier: AGPL-3.0
+//go:build ignore
// +build ignore
package main
defer out.Close()
out.Write(regexp.MustCompile(`(?ms)^.*package .*?import.*?\n\)\n`).Find(buf))
io.WriteString(out, "//\n// -- this file is auto-generated -- do not edit -- edit list.go and run \"go generate\" instead --\n//\n\n")
- for _, t := range []string{"Container", "ContainerRequest", "Specimen", "User"} {
+ for _, t := range []string{"Container", "ContainerRequest", "Group", "Specimen", "User", "Link"} {
_, err := out.Write(bytes.ReplaceAll(orig, []byte("Collection"), []byte(t)))
if err != nil {
panic(err)
return merged, err
}
+func (conn *Conn) generated_GroupList(ctx context.Context, options arvados.ListOptions) (arvados.GroupList, error) {
+ var mtx sync.Mutex
+ var merged arvados.GroupList
+ var needSort atomic.Value
+ needSort.Store(false)
+ err := conn.splitListRequest(ctx, options, func(ctx context.Context, _ string, backend arvados.API, options arvados.ListOptions) ([]string, error) {
+ options.ForwardedFor = conn.cluster.ClusterID + "-" + options.ForwardedFor
+ cl, err := backend.GroupList(ctx, options)
+ if err != nil {
+ return nil, err
+ }
+ mtx.Lock()
+ defer mtx.Unlock()
+ if len(merged.Items) == 0 {
+ merged = cl
+ } else if len(cl.Items) > 0 {
+ merged.Items = append(merged.Items, cl.Items...)
+ needSort.Store(true)
+ }
+ uuids := make([]string, 0, len(cl.Items))
+ for _, item := range cl.Items {
+ uuids = append(uuids, item.UUID)
+ }
+ return uuids, nil
+ })
+ if needSort.Load().(bool) {
+ // Apply the default/implied order, "modified_at desc"
+ sort.Slice(merged.Items, func(i, j int) bool {
+ mi, mj := merged.Items[i].ModifiedAt, merged.Items[j].ModifiedAt
+ return mj.Before(mi)
+ })
+ }
+ if merged.Items == nil {
+ // Return empty results as [], not null
+ // (https://github.com/golang/go/issues/27589 might be
+ // a better solution in the future)
+ merged.Items = []arvados.Group{}
+ }
+ return merged, err
+}
+
func (conn *Conn) generated_SpecimenList(ctx context.Context, options arvados.ListOptions) (arvados.SpecimenList, error) {
var mtx sync.Mutex
var merged arvados.SpecimenList
}
return merged, err
}
+
+func (conn *Conn) generated_LinkList(ctx context.Context, options arvados.ListOptions) (arvados.LinkList, error) {
+ var mtx sync.Mutex
+ var merged arvados.LinkList
+ var needSort atomic.Value
+ needSort.Store(false)
+ err := conn.splitListRequest(ctx, options, func(ctx context.Context, _ string, backend arvados.API, options arvados.ListOptions) ([]string, error) {
+ options.ForwardedFor = conn.cluster.ClusterID + "-" + options.ForwardedFor
+ cl, err := backend.LinkList(ctx, options)
+ if err != nil {
+ return nil, err
+ }
+ mtx.Lock()
+ defer mtx.Unlock()
+ if len(merged.Items) == 0 {
+ merged = cl
+ } else if len(cl.Items) > 0 {
+ merged.Items = append(merged.Items, cl.Items...)
+ needSort.Store(true)
+ }
+ uuids := make([]string, 0, len(cl.Items))
+ for _, item := range cl.Items {
+ uuids = append(uuids, item.UUID)
+ }
+ return uuids, nil
+ })
+ if needSort.Load().(bool) {
+ // Apply the default/implied order, "modified_at desc"
+ sort.Slice(merged.Items, func(i, j int) bool {
+ mi, mj := merged.Items[i].ModifiedAt, merged.Items[j].ModifiedAt
+ return mj.Before(mi)
+ })
+ }
+ if merged.Items == nil {
+ // Return empty results as [], not null
+ // (https://github.com/golang/go/issues/27589 might be
+ // a better solution in the future)
+ merged.Items = []arvados.Link{}
+ }
+ return merged, err
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package federation
+
+import (
+ "errors"
+
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ check "gopkg.in/check.v1"
+)
+
+var _ = check.Suite(&GroupSuite{})
+
+type GroupSuite struct {
+ FederationSuite
+}
+
+func makeConn() (*Conn, *arvadostest.APIStub, *arvadostest.APIStub) {
+ localAPIstub := &arvadostest.APIStub{Error: errors.New("No result")}
+ remoteAPIstub := &arvadostest.APIStub{Error: errors.New("No result")}
+ return &Conn{&arvados.Cluster{ClusterID: "local"}, localAPIstub, map[string]backend{"zzzzz": remoteAPIstub}}, localAPIstub, remoteAPIstub
+}
+
+func (s *GroupSuite) TestGroupContents(c *check.C) {
+ conn, localAPIstub, remoteAPIstub := makeConn()
+ conn.GroupContents(s.ctx, arvados.GroupContentsOptions{UUID: "local-tpzed-xurymjxw79nv3jz"})
+ c.Check(len(localAPIstub.Calls(nil)), check.Equals, 1)
+ c.Check(len(remoteAPIstub.Calls(nil)), check.Equals, 0)
+
+ conn, localAPIstub, remoteAPIstub = makeConn()
+ conn.GroupContents(s.ctx, arvados.GroupContentsOptions{UUID: "zzzzz-tpzed-xurymjxw79nv3jz"})
+ c.Check(len(localAPIstub.Calls(nil)), check.Equals, 1)
+ c.Check(len(remoteAPIstub.Calls(nil)), check.Equals, 0)
+
+ conn, localAPIstub, remoteAPIstub = makeConn()
+ conn.GroupContents(s.ctx, arvados.GroupContentsOptions{UUID: "local-j7d0g-xurymjxw79nv3jz"})
+ c.Check(len(localAPIstub.Calls(nil)), check.Equals, 1)
+ c.Check(len(remoteAPIstub.Calls(nil)), check.Equals, 0)
+
+ conn, localAPIstub, remoteAPIstub = makeConn()
+ conn.GroupContents(s.ctx, arvados.GroupContentsOptions{UUID: "zzzzz-j7d0g-xurymjxw79nv3jz"})
+ c.Check(len(localAPIstub.Calls(nil)), check.Equals, 0)
+ c.Check(len(remoteAPIstub.Calls(nil)), check.Equals, 1)
+
+ conn, localAPIstub, remoteAPIstub = makeConn()
+ conn.GroupContents(s.ctx, arvados.GroupContentsOptions{UUID: "zzzzz-tpzed-xurymjxw79nv3jz", ClusterID: "zzzzz"})
+ c.Check(len(localAPIstub.Calls(nil)), check.Equals, 0)
+ c.Check(len(remoteAPIstub.Calls(nil)), check.Equals, 1)
+}
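A minimal sketch of the routing these checks expect, assuming the usual Arvados UUID layout (5-character cluster prefix, then a type infix such as `j7d0g` for groups or `tpzed` for users): group lookups follow the UUID's cluster prefix, while user lookups stay on the local cluster unless the client passes an explicit ClusterID. `routeCluster` is a hypothetical helper for illustration, not the real `chooseBackend`.

```go
package main

import "fmt"

// routeCluster returns the cluster ID that should serve a request for
// the given UUID. An explicit cluster selection wins; otherwise group
// ("j7d0g") UUIDs route by their 5-character prefix, and user
// ("tpzed") UUIDs are served locally even with a remote prefix.
func routeCluster(localID, uuid, explicitClusterID string) string {
	if explicitClusterID != "" {
		return explicitClusterID
	}
	if len(uuid) >= 11 && uuid[6:11] == "j7d0g" {
		return uuid[:5] // cluster prefix, e.g. "zzzzz"
	}
	return localID
}

func main() {
	fmt.Println(routeCluster("local", "zzzzz-tpzed-xurymjxw79nv3jz", ""))      // local
	fmt.Println(routeCluster("local", "zzzzz-j7d0g-xurymjxw79nv3jz", ""))      // zzzzz
	fmt.Println(routeCluster("local", "zzzzz-tpzed-xurymjxw79nv3jz", "zzzzz")) // zzzzz
}
```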
_, err := fn(ctx, conn.cluster.ClusterID, conn.local, opts)
return err
}
+ if opts.ClusterID != "" {
+ // Client explicitly selected cluster
+ _, err := fn(ctx, conn.cluster.ClusterID, conn.chooseBackend(opts.ClusterID), opts)
+ return err
+ }
cannotSplit := false
var matchAllFilters map[string]bool
if opts.Count != "none" {
return httpErrorf(http.StatusBadRequest, "cannot execute federated list query unless count==\"none\"")
}
- if opts.Limit >= 0 || opts.Offset != 0 || len(opts.Order) > 0 {
- return httpErrorf(http.StatusBadRequest, "cannot execute federated list query with limit, offset, or order parameter")
+ if (opts.Limit >= 0 && opts.Limit < int64(nUUIDs)) || opts.Offset != 0 || len(opts.Order) > 0 {
+ return httpErrorf(http.StatusBadRequest, "cannot execute federated list query with limit (%d) < nUUIDs (%d), offset (%d) > 0, or order (%v) parameter", opts.Limit, nUUIDs, opts.Offset, opts.Order)
}
if max := conn.cluster.API.MaxItemsPerResponse; nUUIDs > max {
return httpErrorf(http.StatusBadRequest, "cannot execute federated list query because number of UUIDs (%d) exceeds page size limit %d", nUUIDs, max)
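A standalone sketch of the validation applied before a federated list query is fanned out by UUID, mirroring the checks above: count must be "none", any non-negative limit must cover all requested UUIDs, offset/order are unsupported, and the UUID count is capped. `validateSplitList` is illustrative, with `maxItems` standing in for `API.MaxItemsPerResponse`.

```go
package main

import "fmt"

// validateSplitList reports whether a "list by UUID" query can be
// split across federated backends.
func validateSplitList(count string, limit, offset int64, order []string, nUUIDs, maxItems int) error {
	if count != "none" {
		return fmt.Errorf("cannot execute federated list query unless count==\"none\"")
	}
	if (limit >= 0 && limit < int64(nUUIDs)) || offset != 0 || len(order) > 0 {
		return fmt.Errorf("cannot execute federated list query with limit (%d) < nUUIDs (%d), offset (%d) > 0, or order (%v) parameter", limit, nUUIDs, offset, order)
	}
	if nUUIDs > maxItems {
		return fmt.Errorf("cannot execute federated list query because number of UUIDs (%d) exceeds page size limit %d", nUUIDs, maxItems)
	}
	return nil
}

func main() {
	// limit -1 means "no limit", so 3 UUIDs are fine.
	fmt.Println(validateSplitList("none", -1, 0, nil, 3, 1000)) // <nil>
	// limit 2 < 3 UUIDs: rejected.
	fmt.Println(validateSplitList("none", 2, 0, nil, 3, 1000) != nil) // true
}
```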
func (s *CollectionListSuite) TestCollectionListMultiSiteWithCount(c *check.C) {
for _, count := range []string{"", "exact"} {
+ s.SetUpTest(c) // Reset backends / call counters
s.test(c, listTrial{
count: count,
limit: -1,
func (s *CollectionListSuite) TestCollectionListMultiSiteWithLimit(c *check.C) {
for _, limit := range []int64{0, 1, 2} {
+ s.SetUpTest(c) // Reset backends / call counters
s.test(c, listTrial{
count: "none",
limit: limit,
filters: []arvados.Filter{
- {"uuid", "in", []string{s.uuids[0][0], s.uuids[1][0]}},
+ {"uuid", "in", []string{s.uuids[0][0], s.uuids[1][0], s.uuids[2][0]}},
{"uuid", "is_a", "teapot"},
},
expectCalls: []int{0, 0, 0},
}
}
+func (s *CollectionListSuite) TestCollectionListMultiSiteWithHighLimit(c *check.C) {
+ uuids := []string{s.uuids[0][0], s.uuids[1][0], s.uuids[2][0]}
+ for _, limit := range []int64{3, 4, 1234567890} {
+ s.SetUpTest(c) // Reset backends / call counters
+ s.test(c, listTrial{
+ count: "none",
+ limit: limit,
+ filters: []arvados.Filter{
+ {"uuid", "in", uuids},
+ },
+ expectUUIDs: uuids,
+ expectCalls: []int{1, 1, 1},
+ })
+ }
+}
+
func (s *CollectionListSuite) TestCollectionListMultiSiteWithOffset(c *check.C) {
s.test(c, listTrial{
count: "none",
s.cluster.Login.LoginCluster = "zhome"
// s.fed is already set by SetUpTest, but we need to
// reinitialize with the above config changes.
- s.fed = New(s.cluster)
+ s.fed = New(s.cluster, nil)
returnTo := "https://app.example.com/foo?bar"
for _, trial := range []struct {
{token: "v2/zhome-aaaaa-aaaaaaaaaaaaaaa/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", returnTo: returnTo, target: "http://" + s.cluster.RemoteClusters["zhome"].Host + "/logout?" + url.Values{"return_to": {returnTo}}.Encode()},
} {
c.Logf("trial %#v", trial)
- ctx := context.Background()
+ ctx := s.ctx
if trial.token != "" {
ctx = auth.NewContext(ctx, &auth.Credentials{Tokens: []string{trial.token}})
}
func (s *UserSuite) TestLoginClusterUserList(c *check.C) {
s.cluster.ClusterID = "local"
s.cluster.Login.LoginCluster = "zzzzz"
- s.fed = New(s.cluster)
+ s.fed = New(s.cluster, nil)
s.addDirectRemote(c, "zzzzz", rpc.NewConn("zzzzz", &url.URL{Scheme: "https", Host: os.Getenv("ARVADOS_API_HOST")}, true, rpc.PassthroughTokenProvider))
for _, updateFail := range []bool{false, true} {
func (s *UserSuite) TestLoginClusterUserGet(c *check.C) {
s.cluster.ClusterID = "local"
s.cluster.Login.LoginCluster = "zzzzz"
- s.fed = New(s.cluster)
+ s.fed = New(s.cluster, nil)
s.addDirectRemote(c, "zzzzz", rpc.NewConn("zzzzz", &url.URL{Scheme: "https", Host: os.Getenv("ARVADOS_API_HOST")}, true, rpc.PassthroughTokenProvider))
opts := arvados.GetOptions{UUID: "zzzzz-tpzed-xurymjxw79nv3jz", Select: []string{"uuid", "email"}}
func (s *UserSuite) TestLoginClusterUserListBypassFederation(c *check.C) {
s.cluster.ClusterID = "local"
s.cluster.Login.LoginCluster = "zzzzz"
- s.fed = New(s.cluster)
+ s.fed = New(s.cluster, nil)
s.addDirectRemote(c, "zzzzz", rpc.NewConn("zzzzz", &url.URL{Scheme: "https", Host: os.Getenv("ARVADOS_API_HOST")},
true, rpc.PassthroughTokenProvider))
"fmt"
"io"
"io/ioutil"
+ "net"
"net/http"
"net/http/httptest"
"net/url"
c.Assert(s.remoteMock.Start(), check.IsNil)
cluster := &arvados.Cluster{
- ClusterID: "zhome",
- PostgreSQL: integrationTestCluster().PostgreSQL,
- ForceLegacyAPI14: forceLegacyAPI14,
+ ClusterID: "zhome",
+ PostgreSQL: integrationTestCluster().PostgreSQL,
}
cluster.TLS.Insecure = true
cluster.API.MaxItemsPerResponse = 1000
cluster.API.MaxRequestAmplification = 4
cluster.API.RequestTimeout = arvados.Duration(5 * time.Minute)
+ cluster.Collections.BlobSigning = true
+ cluster.Collections.BlobSigningKey = arvadostest.BlobSigningKey
+ cluster.Collections.BlobSigningTTL = arvados.Duration(time.Hour * 24 * 14)
arvadostest.SetServiceURL(&cluster.Services.RailsAPI, "http://localhost:1/")
arvadostest.SetServiceURL(&cluster.Services.Controller, "http://localhost:/")
- s.testHandler = &Handler{Cluster: cluster}
+ s.testHandler = &Handler{Cluster: cluster, BackgroundContext: ctxlog.Context(context.Background(), s.log)}
s.testServer = newServerFromIntegrationTestEnv(c)
- s.testServer.Server.Handler = httpserver.HandlerWithContext(
- ctxlog.Context(context.Background(), s.log),
- httpserver.AddRequestIDs(httpserver.LogRequests(s.testHandler)))
+ s.testServer.Server.BaseContext = func(net.Listener) context.Context {
+ return ctxlog.Context(context.Background(), s.log)
+ }
+ s.testServer.Server.Handler = httpserver.AddRequestIDs(httpserver.LogRequests(s.testHandler))
cluster.RemoteClusters = map[string]arvados.RemoteCluster{
"zzzzz": {
arvadostest.SetServiceURL(&s.testHandler.Cluster.Services.RailsAPI, "https://"+os.Getenv("ARVADOS_TEST_API_HOST"))
s.testHandler.Cluster.ClusterID = "zzzzz"
s.testHandler.Cluster.SystemRootToken = arvadostest.SystemRootToken
+ s.testHandler.Cluster.API.MaxTokenLifetime = arvados.Duration(time.Hour)
resp := s.testRequest(req).Result()
c.Check(resp.StatusCode, check.Equals, http.StatusOK)
// Runtime token must match zzzzz cluster
c.Check(cr.RuntimeToken, check.Matches, "v2/zzzzz-gj3su-.*")
+
	// RuntimeToken must be different from the original token used to make the request.
c.Check(cr.RuntimeToken, check.Not(check.Equals), arvadostest.ActiveTokenV2)
+
+ // Runtime token should not have an expiration based on API.MaxTokenLifetime
+ req2 := httptest.NewRequest("GET", "/arvados/v1/api_client_authorizations/current", nil)
+ req2.Header.Set("Authorization", "Bearer "+cr.RuntimeToken)
+ req2.Header.Set("Content-type", "application/json")
+ resp = s.testRequest(req2).Result()
+ c.Check(resp.StatusCode, check.Equals, http.StatusOK)
+ var aca arvados.APIClientAuthorization
+ c.Check(json.NewDecoder(resp.Body).Decode(&aca), check.IsNil)
+ c.Check(aca.ExpiresAt, check.NotNil) // Time.Now()+BlobSigningTTL
+ t := aca.ExpiresAt
+ c.Check(t.After(time.Now().Add(s.testHandler.Cluster.API.MaxTokenLifetime.Duration())), check.Equals, true)
+ c.Check(t.Before(time.Now().Add(s.testHandler.Cluster.Collections.BlobSigningTTL.Duration())), check.Equals, true)
}
func (s *FederationSuite) TestCreateRemoteContainerRequestCheckSetRuntimeToken(c *check.C) {
"errors"
"fmt"
"net/http"
+ "net/http/httptest"
"net/url"
"strings"
"sync"
)
type Handler struct {
- Cluster *arvados.Cluster
+ Cluster *arvados.Cluster
+ BackgroundContext context.Context
setupOnce sync.Once
+ federation *federation.Conn
handlerStack http.Handler
proxy *proxy
secureClient *http.Client
return err
}
_, _, err = railsproxy.FindRailsAPI(h.Cluster)
- return err
+ if err != nil {
+ return err
+ }
+ if h.Cluster.API.VocabularyPath != "" {
+ req, err := http.NewRequest("GET", "/arvados/v1/vocabulary", nil)
+ if err != nil {
+ return err
+ }
+ var resp httptest.ResponseRecorder
+ h.handlerStack.ServeHTTP(&resp, req)
+ if resp.Result().StatusCode != http.StatusOK {
+			return fmt.Errorf("%s", resp.Result().Status)
+ }
+ }
+ return nil
}
func (h *Handler) Done() <-chan struct{} {
func (h *Handler) setup() {
mux := http.NewServeMux()
+ healthFuncs := make(map[string]health.Func)
+
+ oidcAuthorizer := localdb.OIDCAccessTokenAuthorizer(h.Cluster, h.db)
+ h.federation = federation.New(h.Cluster, &healthFuncs)
+ rtr := router.New(h.federation, router.Config{
+ MaxRequestSize: h.Cluster.API.MaxRequestSize,
+ WrapCalls: api.ComposeWrappers(ctrlctx.WrapCallsInTransactions(h.db), oidcAuthorizer.WrapCalls),
+ })
+
+ healthRoutes := health.Routes{"ping": func() error { _, err := h.db(context.TODO()); return err }}
+ for name, f := range healthFuncs {
+ healthRoutes[name] = f
+ }
mux.Handle("/_health/", &health.Handler{
Token: h.Cluster.ManagementToken,
Prefix: "/_health/",
- Routes: health.Routes{"ping": func() error { _, err := h.db(context.TODO()); return err }},
+ Routes: healthRoutes,
})
-
- oidcAuthorizer := localdb.OIDCAccessTokenAuthorizer(h.Cluster, h.db)
- rtr := router.New(federation.New(h.Cluster), api.ComposeWrappers(ctrlctx.WrapCallsInTransactions(h.db), oidcAuthorizer.WrapCalls))
mux.Handle("/arvados/v1/config", rtr)
- mux.Handle("/"+arvados.EndpointUserAuthenticate.Path, rtr)
-
- if !h.Cluster.ForceLegacyAPI14 {
- mux.Handle("/arvados/v1/collections", rtr)
- mux.Handle("/arvados/v1/collections/", rtr)
- mux.Handle("/arvados/v1/users", rtr)
- mux.Handle("/arvados/v1/users/", rtr)
- mux.Handle("/arvados/v1/connect/", rtr)
- mux.Handle("/arvados/v1/container_requests", rtr)
- mux.Handle("/arvados/v1/container_requests/", rtr)
- mux.Handle("/login", rtr)
- mux.Handle("/logout", rtr)
- }
+ mux.Handle("/arvados/v1/vocabulary", rtr)
+ mux.Handle("/"+arvados.EndpointUserAuthenticate.Path, rtr) // must come before .../users/
+ mux.Handle("/arvados/v1/collections", rtr)
+ mux.Handle("/arvados/v1/collections/", rtr)
+ mux.Handle("/arvados/v1/users", rtr)
+ mux.Handle("/arvados/v1/users/", rtr)
+ mux.Handle("/arvados/v1/connect/", rtr)
+ mux.Handle("/arvados/v1/container_requests", rtr)
+ mux.Handle("/arvados/v1/container_requests/", rtr)
+ mux.Handle("/arvados/v1/groups", rtr)
+ mux.Handle("/arvados/v1/groups/", rtr)
+ mux.Handle("/arvados/v1/links", rtr)
+ mux.Handle("/arvados/v1/links/", rtr)
+ mux.Handle("/login", rtr)
+ mux.Handle("/logout", rtr)
+ mux.Handle("/arvados/v1/api_client_authorizations", rtr)
+ mux.Handle("/arvados/v1/api_client_authorizations/", rtr)
hs := http.NotFoundHandler()
hs = prepend(hs, h.proxyRailsAPI)
h.proxy = &proxy{
Name: "arvados-controller",
}
+
+ go h.trashSweepWorker()
}
var errDBConnection = errors.New("database connection error")
check "gopkg.in/check.v1"
)
-var forceLegacyAPI14 bool
-
// Gocheck boilerplate
func Test(t *testing.T) {
- for _, forceLegacyAPI14 = range []bool{false, true} {
- check.TestingT(t)
- }
+ check.TestingT(t)
}
var _ = check.Suite(&HandlerSuite{})
type HandlerSuite struct {
cluster *arvados.Cluster
- handler http.Handler
+ handler *Handler
ctx context.Context
cancel context.CancelFunc
}
s.ctx, s.cancel = context.WithCancel(context.Background())
s.ctx = ctxlog.Context(s.ctx, ctxlog.New(os.Stderr, "json", "debug"))
s.cluster = &arvados.Cluster{
- ClusterID: "zzzzz",
- PostgreSQL: integrationTestCluster().PostgreSQL,
- ForceLegacyAPI14: forceLegacyAPI14,
+ ClusterID: "zzzzz",
+ PostgreSQL: integrationTestCluster().PostgreSQL,
}
s.cluster.API.RequestTimeout = arvados.Duration(5 * time.Minute)
s.cluster.TLS.Insecure = true
arvadostest.SetServiceURL(&s.cluster.Services.RailsAPI, "https://"+os.Getenv("ARVADOS_TEST_API_HOST"))
arvadostest.SetServiceURL(&s.cluster.Services.Controller, "http://localhost:/")
- s.handler = newHandler(s.ctx, s.cluster, "", prometheus.NewRegistry())
+ s.handler = newHandler(s.ctx, s.cluster, "", prometheus.NewRegistry()).(*Handler)
}
func (s *HandlerSuite) TearDownTest(c *check.C) {
}
}
+func (s *HandlerSuite) TestVocabularyExport(c *check.C) {
+ voc := `{
+ "strict_tags": false,
+ "tags": {
+ "IDTAGIMPORTANCE": {
+ "strict": false,
+ "labels": [{"label": "Importance"}],
+ "values": {
+ "HIGH": {
+ "labels": [{"label": "High"}]
+ },
+ "LOW": {
+ "labels": [{"label": "Low"}]
+ }
+ }
+ }
+ }
+ }`
+ f, err := os.CreateTemp("", "test-vocabulary-*.json")
+ c.Assert(err, check.IsNil)
+ defer os.Remove(f.Name())
+ _, err = f.WriteString(voc)
+ c.Assert(err, check.IsNil)
+ f.Close()
+ s.cluster.API.VocabularyPath = f.Name()
+ for _, method := range []string{"GET", "OPTIONS"} {
+ c.Log(c.TestName()+" ", method)
+ req := httptest.NewRequest(method, "/arvados/v1/vocabulary", nil)
+ resp := httptest.NewRecorder()
+ s.handler.ServeHTTP(resp, req)
+ c.Log(resp.Body.String())
+ if !c.Check(resp.Code, check.Equals, http.StatusOK) {
+ continue
+ }
+ c.Check(resp.Header().Get("Access-Control-Allow-Origin"), check.Equals, `*`)
+ c.Check(resp.Header().Get("Access-Control-Allow-Methods"), check.Matches, `.*\bGET\b.*`)
+ c.Check(resp.Header().Get("Access-Control-Allow-Headers"), check.Matches, `.+`)
+ if method == "OPTIONS" {
+ c.Check(resp.Body.String(), check.HasLen, 0)
+ continue
+ }
+ var expectedVoc, receivedVoc *arvados.Vocabulary
+ err := json.Unmarshal([]byte(voc), &expectedVoc)
+ c.Check(err, check.IsNil)
+ err = json.Unmarshal(resp.Body.Bytes(), &receivedVoc)
+ c.Check(err, check.IsNil)
+ c.Check(receivedVoc, check.DeepEquals, expectedVoc)
+ }
+}
+
+func (s *HandlerSuite) TestVocabularyFailedCheckStatus(c *check.C) {
+ voc := `{
+ "strict_tags": false,
+ "tags": {
+ "IDTAGIMPORTANCE": {
+ "strict": true,
+ "labels": [{"label": "Importance"}],
+ "values": {
+ "HIGH": {
+ "labels": [{"label": "High"}]
+ },
+ "LOW": {
+ "labels": [{"label": "Low"}]
+ }
+ }
+ }
+ }
+ }`
+ f, err := os.CreateTemp("", "test-vocabulary-*.json")
+ c.Assert(err, check.IsNil)
+ defer os.Remove(f.Name())
+ _, err = f.WriteString(voc)
+ c.Assert(err, check.IsNil)
+ f.Close()
+ s.cluster.API.VocabularyPath = f.Name()
+
+ req := httptest.NewRequest("POST", "/arvados/v1/collections",
+ strings.NewReader(`{
+ "collection": {
+ "properties": {
+ "IDTAGIMPORTANCE": "Critical"
+ }
+ }
+ }`))
+ req.Header.Set("Authorization", "Bearer "+arvadostest.ActiveToken)
+ req.Header.Set("Content-type", "application/json")
+
+ resp := httptest.NewRecorder()
+ s.handler.ServeHTTP(resp, req)
+ c.Log(resp.Body.String())
+ c.Assert(resp.Code, check.Equals, http.StatusBadRequest)
+ var jresp httpserver.ErrorResponse
+ err = json.Unmarshal(resp.Body.Bytes(), &jresp)
+ c.Check(err, check.IsNil)
+ c.Assert(len(jresp.Errors), check.Equals, 1)
+ c.Check(jresp.Errors[0], check.Matches, `.*tag value.*is not valid for key.*`)
+}
+
func (s *HandlerSuite) TestProxyDiscoveryDoc(c *check.C) {
req := httptest.NewRequest("GET", "/discovery/v1/apis/arvados/v1/rest", nil)
resp := httptest.NewRecorder()
c.Check(jresp["errors"], check.FitsTypeOf, []interface{}{})
}
-func (s *HandlerSuite) TestProxyRedirect(c *check.C) {
- s.cluster.Login.SSO.Enable = true
- s.cluster.Login.SSO.ProviderAppID = "test"
- s.cluster.Login.SSO.ProviderAppSecret = "test"
- req := httptest.NewRequest("GET", "https://0.0.0.0:1/login?return_to=foo", nil)
- resp := httptest.NewRecorder()
- s.handler.ServeHTTP(resp, req)
- if !c.Check(resp.Code, check.Equals, http.StatusFound) {
- c.Log(resp.Body.String())
- }
- // Old "proxy entire request" code path returns an absolute
- // URL. New lib/controller/federation code path returns a
- // relative URL.
- c.Check(resp.Header().Get("Location"), check.Matches, `(https://0.0.0.0:1)?/auth/joshid\?return_to=%2Cfoo&?`)
-}
-
-func (s *HandlerSuite) TestLogoutSSO(c *check.C) {
- s.cluster.Login.SSO.Enable = true
- s.cluster.Login.SSO.ProviderAppID = "test"
- req := httptest.NewRequest("GET", "https://0.0.0.0:1/logout?return_to=https://example.com/foo", nil)
- resp := httptest.NewRecorder()
- s.handler.ServeHTTP(resp, req)
- if !c.Check(resp.Code, check.Equals, http.StatusFound) {
- c.Log(resp.Body.String())
- }
- c.Check(resp.Header().Get("Location"), check.Equals, "http://localhost:3002/users/sign_out?"+url.Values{"redirect_uri": {"https://example.com/foo"}}.Encode())
-}
-
func (s *HandlerSuite) TestLogoutGoogle(c *check.C) {
- if s.cluster.ForceLegacyAPI14 {
- // Google login N/A
- return
- }
s.cluster.Login.Google.Enable = true
s.cluster.Login.Google.ClientID = "test"
req := httptest.NewRequest("GET", "https://0.0.0.0:1/logout?return_to=https://example.com/foo", nil)
func (s *HandlerSuite) TestValidateV1APIToken(c *check.C) {
req := httptest.NewRequest("GET", "/arvados/v1/users/current", nil)
- user, ok, err := s.handler.(*Handler).validateAPItoken(req, arvadostest.ActiveToken)
+ user, ok, err := s.handler.validateAPItoken(req, arvadostest.ActiveToken)
c.Assert(err, check.IsNil)
c.Check(ok, check.Equals, true)
c.Check(user.Authorization.UUID, check.Equals, arvadostest.ActiveTokenUUID)
func (s *HandlerSuite) TestValidateV2APIToken(c *check.C) {
req := httptest.NewRequest("GET", "/arvados/v1/users/current", nil)
- user, ok, err := s.handler.(*Handler).validateAPItoken(req, arvadostest.ActiveTokenV2)
+ user, ok, err := s.handler.validateAPItoken(req, arvadostest.ActiveTokenV2)
c.Assert(err, check.IsNil)
c.Check(ok, check.Equals, true)
c.Check(user.Authorization.UUID, check.Equals, arvadostest.ActiveTokenUUID)
func (s *HandlerSuite) TestCreateAPIToken(c *check.C) {
req := httptest.NewRequest("GET", "/arvados/v1/users/current", nil)
- auth, err := s.handler.(*Handler).createAPItoken(req, arvadostest.ActiveUserUUID, nil)
+ auth, err := s.handler.createAPItoken(req, arvadostest.ActiveUserUUID, nil)
c.Assert(err, check.IsNil)
c.Check(auth.Scopes, check.DeepEquals, []string{"all"})
- user, ok, err := s.handler.(*Handler).validateAPItoken(req, auth.TokenV2())
+ user, ok, err := s.handler.validateAPItoken(req, auth.TokenV2())
c.Assert(err, check.IsNil)
c.Check(ok, check.Equals, true)
c.Check(user.Authorization.UUID, check.Equals, auth.UUID)
resp := httptest.NewRecorder()
s.handler.ServeHTTP(resp, req)
c.Assert(resp.Code, check.Equals, http.StatusOK,
- check.Commentf("Wasn't able to get data from the controller at %q", url))
+ check.Commentf("Wasn't able to get data from the controller at %q: %q", url, resp.Body.String()))
err = json.Unmarshal(resp.Body.Bytes(), &proxied)
c.Check(err, check.Equals, nil)
json.Unmarshal(resp.Body.Bytes(), &ksList)
c.Assert(len(ksList.Items), check.Not(check.Equals), 0)
ksUUID := ksList.Items[0].UUID
+ // Create a new token for the test user so that we're not comparing
+ // the ones from the fixtures.
+ req = httptest.NewRequest("POST", "/arvados/v1/api_client_authorizations",
+ strings.NewReader(`{
+ "api_client_authorization": {
+ "owner_uuid": "`+arvadostest.AdminUserUUID+`",
+ "created_by_ip_address": "::1",
+ "last_used_by_ip_address": "::1",
+ "default_owner_uuid": "`+arvadostest.AdminUserUUID+`"
+ }
+ }`))
+ req.Header.Set("Authorization", "Bearer "+arvadostest.SystemRootToken)
+ req.Header.Set("Content-type", "application/json")
+ resp = httptest.NewRecorder()
+ s.handler.ServeHTTP(resp, req)
+ c.Assert(resp.Code, check.Equals, http.StatusOK,
+ check.Commentf("%s", resp.Body.String()))
+ var auth arvados.APIClientAuthorization
+ json.Unmarshal(resp.Body.Bytes(), &auth)
+ c.Assert(auth.UUID, check.Not(check.Equals), "")
testCases := map[string]map[string]bool{
"api_clients/" + arvadostest.TrustedWorkbenchAPIClientUUID: nil,
- "api_client_authorizations/" + arvadostest.AdminTokenUUID: nil,
+ "api_client_authorizations/" + auth.UUID: {"href": true, "modified_by_client_uuid": true, "modified_by_user_uuid": true},
"authorized_keys/" + arvadostest.AdminAuthorizedKeysUUID: nil,
"collections/" + arvadostest.CollectionWithUniqueWordsUUID: {"href": true},
"containers/" + arvadostest.RunningContainerUUID: nil,
"workflows/" + arvadostest.WorkflowWithDefinitionYAMLUUID: nil,
}
for url, skippedFields := range testCases {
- s.CheckObjectType(c, "/arvados/v1/"+url, arvadostest.AdminToken, skippedFields)
+ c.Logf("Testing %q", url)
+ s.CheckObjectType(c, "/arvados/v1/"+url, auth.TokenV2(), skippedFields)
+ }
+}
+
+func (s *HandlerSuite) TestRedactRailsAPIHostFromErrors(c *check.C) {
+ req := httptest.NewRequest("GET", "https://0.0.0.0:1/arvados/v1/collections/zzzzz-4zz18-abcdefghijklmno", nil)
+ req.Header.Set("Authorization", "Bearer "+arvadostest.ActiveToken)
+ resp := httptest.NewRecorder()
+ s.handler.ServeHTTP(resp, req)
+ c.Check(resp.Code, check.Equals, http.StatusNotFound)
+ var jresp struct {
+ Errors []string
+ }
+ c.Log(resp.Body.String())
+ c.Assert(json.NewDecoder(resp.Body).Decode(&jresp), check.IsNil)
+ c.Assert(jresp.Errors, check.HasLen, 1)
+ c.Check(jresp.Errors[0], check.Matches, `.*//railsapi\.internal/arvados/v1/collections/.*: 404 Not Found.*`)
+ c.Check(jresp.Errors[0], check.Not(check.Matches), `(?ms).*127.0.0.1.*`)
+}
+
+func (s *HandlerSuite) TestTrashSweep(c *check.C) {
+ s.cluster.SystemRootToken = arvadostest.SystemRootToken
+ s.cluster.Collections.TrashSweepInterval = arvados.Duration(time.Second / 10)
+ s.handler.CheckHealth()
+ ctx := auth.NewContext(s.ctx, &auth.Credentials{Tokens: []string{arvadostest.ActiveTokenV2}})
+ coll, err := s.handler.federation.CollectionCreate(ctx, arvados.CreateOptions{Attrs: map[string]interface{}{"name": "test trash sweep"}, EnsureUniqueName: true})
+ c.Assert(err, check.IsNil)
+ defer s.handler.federation.CollectionDelete(ctx, arvados.DeleteOptions{UUID: coll.UUID})
+ db, err := s.handler.db(s.ctx)
+ c.Assert(err, check.IsNil)
+ _, err = db.ExecContext(s.ctx, `update collections set trash_at = $1, delete_at = $2 where uuid = $3`, time.Now().UTC().Add(time.Second/10), time.Now().UTC().Add(time.Hour), coll.UUID)
+ c.Assert(err, check.IsNil)
+ deadline := time.Now().Add(5 * time.Second)
+ for {
+ if time.Now().After(deadline) {
+ c.Log("timed out")
+ c.FailNow()
+ }
+ updated, err := s.handler.federation.CollectionGet(ctx, arvados.GetOptions{UUID: coll.UUID, IncludeTrash: true})
+ c.Assert(err, check.IsNil)
+ if updated.IsTrashed {
+ break
+ }
+ time.Sleep(time.Second / 10)
}
}
"path/filepath"
"strconv"
"strings"
+ "sync"
"git.arvados.org/arvados.git/lib/boot"
"git.arvados.org/arvados.git/lib/config"
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/arvadostest"
"git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
check "gopkg.in/check.v1"
)
}
func (s *IntegrationSuite) SetUpSuite(c *check.C) {
- if forceLegacyAPI14 {
- c.Skip("heavy integration tests don't run with forceLegacyAPI14")
- return
- }
-
cwd, _ := os.Getwd()
s.oidcprovider = arvadostest.NewOIDCProvider(c)
ClientSecret: ` + s.oidcprovider.ValidClientSecret + `
EmailClaim: email
EmailVerifiedClaim: email_verified
+ AcceptAccessToken: true
+ AcceptAccessTokenScope: ""
`
} else {
yaml += `
tc := boot.NewTestCluster(
filepath.Join(cwd, "..", ".."),
id, cfg, "127.0.0."+id[3:], c.Log)
+ tc.Super.NoWorkbench1 = true
+ tc.Start()
s.testClusters[id] = tc
- s.testClusters[id].Start()
}
for _, tc := range s.testClusters {
ok := tc.WaitReady()
}
}
+func (s *IntegrationSuite) TestDefaultStorageClassesOnCollections(c *check.C) {
+ conn := s.testClusters["z1111"].Conn()
+ rootctx, _, _ := s.testClusters["z1111"].RootClients()
+ userctx, _, kc, _ := s.testClusters["z1111"].UserClients(rootctx, c, conn, s.oidcprovider.AuthEmail, true)
+ c.Assert(len(kc.DefaultStorageClasses) > 0, check.Equals, true)
+ coll, err := conn.CollectionCreate(userctx, arvados.CreateOptions{})
+ c.Assert(err, check.IsNil)
+ c.Assert(coll.StorageClassesDesired, check.DeepEquals, kc.DefaultStorageClasses)
+}
+
func (s *IntegrationSuite) TestGetCollectionByPDH(c *check.C) {
conn1 := s.testClusters["z1111"].Conn()
rootctx1, _, _ := s.testClusters["z1111"].RootClients()
c.Check(coll.PortableDataHash, check.Equals, pdh)
}
+// Tests bug #18004
+func (s *IntegrationSuite) TestRemoteUserAndTokenCacheRace(c *check.C) {
+ conn1 := s.testClusters["z1111"].Conn()
+ rootctx1, _, _ := s.testClusters["z1111"].RootClients()
+ rootctx2, _, _ := s.testClusters["z2222"].RootClients()
+ conn2 := s.testClusters["z2222"].Conn()
+ userctx1, _, _, _ := s.testClusters["z1111"].UserClients(rootctx1, c, conn1, "user2@example.com", true)
+
+ var wg1, wg2 sync.WaitGroup
+ creqs := 100
+
+ // Make concurrent requests to z2222 with a local token to make sure more
+ // than one worker is listening.
+ wg1.Add(1)
+ for i := 0; i < creqs; i++ {
+ wg2.Add(1)
+ go func() {
+ defer wg2.Done()
+ wg1.Wait()
+ _, err := conn2.UserGetCurrent(rootctx2, arvados.GetOptions{})
+ c.Check(err, check.IsNil, check.Commentf("warm up phase failed"))
+ }()
+ }
+ wg1.Done()
+ wg2.Wait()
+
+	// Real test pass -- use a remote token different from the one used in the
+	// warm-up phase.
+ wg1.Add(1)
+ for i := 0; i < creqs; i++ {
+ wg2.Add(1)
+ go func() {
+ defer wg2.Done()
+ wg1.Wait()
+ // Retrieve the remote collection from cluster z2222.
+ _, err := conn2.UserGetCurrent(userctx1, arvados.GetOptions{})
+ c.Check(err, check.IsNil, check.Commentf("testing phase failed"))
+ }()
+ }
+ wg1.Done()
+ wg2.Wait()
+}
+
func (s *IntegrationSuite) TestS3WithFederatedToken(c *check.C) {
if _, err := exec.LookPath("s3cmd"); err != nil {
c.Skip("s3cmd not in PATH")
}
}
+func (s *IntegrationSuite) TestRequestIDHeader(c *check.C) {
+ conn1 := s.testClusters["z1111"].Conn()
+ rootctx1, _, _ := s.testClusters["z1111"].RootClients()
+ userctx1, ac1, _, _ := s.testClusters["z1111"].UserClients(rootctx1, c, conn1, "user@example.com", true)
+
+ coll, err := conn1.CollectionCreate(userctx1, arvados.CreateOptions{})
+ c.Check(err, check.IsNil)
+ specimen, err := conn1.SpecimenCreate(userctx1, arvados.CreateOptions{})
+ c.Check(err, check.IsNil)
+
+ tests := []struct {
+ path string
+ reqIdProvided bool
+ notFoundRequest bool
+ }{
+ {"/arvados/v1/collections", false, false},
+ {"/arvados/v1/collections", true, false},
+		{"/arvados/v1/nonexistent", false, true},
+		{"/arvados/v1/nonexistent", true, true},
+ {"/arvados/v1/collections/" + coll.UUID, false, false},
+ {"/arvados/v1/collections/" + coll.UUID, true, false},
+ {"/arvados/v1/specimens/" + specimen.UUID, false, false},
+ {"/arvados/v1/specimens/" + specimen.UUID, true, false},
+ // new code path (lib/controller/router etc) - single-cluster request
+ {"/arvados/v1/collections/z1111-4zz18-0123456789abcde", false, true},
+ {"/arvados/v1/collections/z1111-4zz18-0123456789abcde", true, true},
+ // new code path (lib/controller/router etc) - federated request
+ {"/arvados/v1/collections/z2222-4zz18-0123456789abcde", false, true},
+ {"/arvados/v1/collections/z2222-4zz18-0123456789abcde", true, true},
+ // old code path (proxyRailsAPI) - single-cluster request
+ {"/arvados/v1/specimens/z1111-j58dm-0123456789abcde", false, true},
+ {"/arvados/v1/specimens/z1111-j58dm-0123456789abcde", true, true},
+ // old code path (setupProxyRemoteCluster) - federated request
+ {"/arvados/v1/workflows/z2222-7fd4e-0123456789abcde", false, true},
+ {"/arvados/v1/workflows/z2222-7fd4e-0123456789abcde", true, true},
+ }
+
+ for _, tt := range tests {
+ c.Log(c.TestName() + " " + tt.path)
+ req, err := http.NewRequest("GET", "https://"+ac1.APIHost+tt.path, nil)
+ c.Assert(err, check.IsNil)
+ customReqId := "abcdeG"
+ if !tt.reqIdProvided {
+ c.Assert(req.Header.Get("X-Request-Id"), check.Equals, "")
+ } else {
+ req.Header.Set("X-Request-Id", customReqId)
+ }
+ resp, err := ac1.Do(req)
+ c.Assert(err, check.IsNil)
+ if tt.notFoundRequest {
+ c.Check(resp.StatusCode, check.Equals, http.StatusNotFound)
+ } else {
+ c.Check(resp.StatusCode, check.Equals, http.StatusOK)
+ }
+ respHdr := resp.Header.Get("X-Request-Id")
+ if tt.reqIdProvided {
+ c.Check(respHdr, check.Equals, customReqId)
+ } else {
+ c.Check(respHdr, check.Matches, `req-[0-9a-zA-Z]{20}`)
+ }
+ if tt.notFoundRequest {
+ var jresp httpserver.ErrorResponse
+ err := json.NewDecoder(resp.Body).Decode(&jresp)
+ c.Check(err, check.IsNil)
+ c.Assert(jresp.Errors, check.HasLen, 1)
+ c.Check(jresp.Errors[0], check.Matches, `.*\(`+respHdr+`\).*`)
+ }
+ }
+}
+
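The request-ID behavior exercised by the test above (generate `req-` plus 20 alphanumerics when the client sends no `X-Request-Id`, otherwise echo the client's value) can be sketched as ordinary `net/http` middleware. This is a minimal illustration, not controller's actual implementation; the function names are invented here.

```go
package main

import (
	"fmt"
	"math/rand"
	"net/http"
)

const requestIDChars = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

// newRequestID returns an identifier shaped like the one the test
// expects: "req-" followed by 20 alphanumeric characters.
func newRequestID() string {
	b := make([]byte, 20)
	for i := range b {
		b[i] = requestIDChars[rand.Intn(len(requestIDChars))]
	}
	return "req-" + string(b)
}

// withRequestID is a hypothetical middleware sketch: it fills in
// X-Request-Id when the client did not send one, and echoes the final
// value on the response, mirroring the behavior the test checks.
func withRequestID(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		id := r.Header.Get("X-Request-Id")
		if id == "" {
			id = newRequestID()
			r.Header.Set("X-Request-Id", id)
		}
		w.Header().Set("X-Request-Id", id)
		next.ServeHTTP(w, r)
	})
}

func main() {
	fmt.Println(newRequestID())
}
```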
// We test the direct access to the database
-// normally an integration test would not have a database access, but in this case we need
+// normally an integration test would not have database access, but in this case we need
// to test tokens that are secret, so there is no API response that will give them back
func (s *IntegrationSuite) dbConn(c *check.C, clusterID string) (*sql.DB, *sql.Conn) {
ctx := context.Background()
}
}
+// Test for #17785
+func (s *IntegrationSuite) TestFederatedApiClientAuthHandling(c *check.C) {
+ rootctx1, rootclnt1, _ := s.testClusters["z1111"].RootClients()
+ conn1 := s.testClusters["z1111"].Conn()
+
+ // Make sure LoginCluster is properly configured
+ for _, cls := range []string{"z1111", "z3333"} {
+ c.Check(
+ s.testClusters[cls].Config.Clusters[cls].Login.LoginCluster,
+ check.Equals, "z1111",
+ check.Commentf("incorrect LoginCluster config on cluster %q", cls))
+ }
+ // Get user's UUID & attempt to create a token for it on the remote cluster
+ _, _, _, user := s.testClusters["z1111"].UserClients(rootctx1, c, conn1,
+ "user@example.com", true)
+ _, rootclnt3, _ := s.testClusters["z3333"].ClientsWithToken(rootclnt1.AuthToken)
+ var resp arvados.APIClientAuthorization
+ err := rootclnt3.RequestAndDecode(
+ &resp, "POST", "arvados/v1/api_client_authorizations", nil,
+ map[string]interface{}{
+ "api_client_authorization": map[string]string{
+ "owner_uuid": user.UUID,
+ },
+ },
+ )
+ c.Assert(err, check.IsNil)
+ newTok := resp.TokenV2()
+ c.Assert(newTok, check.Not(check.Equals), "")
+
+ // Confirm the token is from z1111
+ c.Assert(strings.HasPrefix(newTok, "v2/z1111-gj3su-"), check.Equals, true)
+
+ // Confirm the token works and is from the correct user
+ _, rootclnt3bis, _ := s.testClusters["z3333"].ClientsWithToken(newTok)
+ var curUser arvados.User
+ err = rootclnt3bis.RequestAndDecode(
+ &curUser, "GET", "arvados/v1/users/current", nil, nil,
+ )
+ c.Assert(err, check.IsNil)
+ c.Assert(curUser.UUID, check.Equals, user.UUID)
+}
+
+// Test for bug #18076
+func (s *IntegrationSuite) TestStaleCachedUserRecord(c *check.C) {
+ rootctx1, _, _ := s.testClusters["z1111"].RootClients()
+ _, rootclnt3, _ := s.testClusters["z3333"].RootClients()
+ conn1 := s.testClusters["z1111"].Conn()
+ conn3 := s.testClusters["z3333"].Conn()
+
+ // Make sure LoginCluster is properly configured
+ for _, cls := range []string{"z1111", "z3333"} {
+ c.Check(
+ s.testClusters[cls].Config.Clusters[cls].Login.LoginCluster,
+ check.Equals, "z1111",
+ check.Commentf("incorrect LoginCluster config on cluster %q", cls))
+ }
+
+ for testCaseNr, testCase := range []struct {
+ name string
+ withRepository bool
+ }{
+ {"User without local repository", false},
+ {"User with local repository", true},
+ } {
+ c.Log(c.TestName() + " " + testCase.name)
+ // Create some users, request them on the federated cluster so they're cached.
+ var users []arvados.User
+ for userNr := 0; userNr < 2; userNr++ {
+ _, _, _, user := s.testClusters["z1111"].UserClients(
+ rootctx1,
+ c,
+ conn1,
+ fmt.Sprintf("user%d%d@example.com", testCaseNr, userNr),
+ true)
+ c.Assert(user.Username, check.Not(check.Equals), "")
+ users = append(users, user)
+
+ lst, err := conn3.UserList(rootctx1, arvados.ListOptions{Limit: -1})
+			c.Assert(err, check.IsNil)
+ userFound := false
+ for _, fedUser := range lst.Items {
+ if fedUser.UUID == user.UUID {
+ c.Assert(fedUser.Username, check.Equals, user.Username)
+ userFound = true
+ break
+ }
+ }
+ c.Assert(userFound, check.Equals, true)
+
+ if testCase.withRepository {
+ var repo interface{}
+ err = rootclnt3.RequestAndDecode(
+ &repo, "POST", "arvados/v1/repositories", nil,
+ map[string]interface{}{
+ "repository": map[string]string{
+ "name": fmt.Sprintf("%s/test", user.Username),
+ "owner_uuid": user.UUID,
+ },
+ },
+ )
+ c.Assert(err, check.IsNil)
+ }
+ }
+
+ // Swap the usernames
+ _, err := conn1.UserUpdate(rootctx1, arvados.UpdateOptions{
+ UUID: users[0].UUID,
+ Attrs: map[string]interface{}{
+ "username": "",
+ },
+ })
+		c.Assert(err, check.IsNil)
+ _, err = conn1.UserUpdate(rootctx1, arvados.UpdateOptions{
+ UUID: users[1].UUID,
+ Attrs: map[string]interface{}{
+ "username": users[0].Username,
+ },
+ })
+		c.Assert(err, check.IsNil)
+ _, err = conn1.UserUpdate(rootctx1, arvados.UpdateOptions{
+ UUID: users[0].UUID,
+ Attrs: map[string]interface{}{
+ "username": users[1].Username,
+ },
+ })
+		c.Assert(err, check.IsNil)
+
+ // Re-request the list on the federated cluster & check for updates
+ lst, err := conn3.UserList(rootctx1, arvados.ListOptions{Limit: -1})
+ c.Assert(err, check.Equals, nil)
+ var user0Found, user1Found bool
+ for _, user := range lst.Items {
+ if user.UUID == users[0].UUID {
+ user0Found = true
+ c.Assert(user.Username, check.Equals, users[1].Username)
+ } else if user.UUID == users[1].UUID {
+ user1Found = true
+ c.Assert(user.Username, check.Equals, users[0].Username)
+ }
+ }
+ c.Assert(user0Found, check.Equals, true)
+ c.Assert(user1Found, check.Equals, true)
+ }
+}
+
// Test for bug #16263
func (s *IntegrationSuite) TestListUsers(c *check.C) {
rootctx1, _, _ := s.testClusters["z1111"].RootClients()
for _, user := range lst.Items {
if user.Username == "" {
nullUsername = true
+ break
}
}
c.Assert(nullUsername, check.Equals, true)
}
c.Check(found, check.Equals, true)
- // Deactivated user can see is_active==false via "get current
- // user" API
+ // Deactivated user no longer has working token
user1, err = conn3.UserGetCurrent(userctx1, arvados.GetOptions{})
- c.Assert(err, check.IsNil)
- c.Check(user1.IsActive, check.Equals, false)
+ c.Assert(err, check.ErrorMatches, `.*401 Unauthorized.*`)
}
func (s *IntegrationSuite) TestSetupUserWithVM(c *check.C) {
accesstoken := s.oidcprovider.ValidAccessToken()
for _, clusterID := range []string{"z1111", "z2222"} {
- c.Logf("trying clusterid %s", clusterID)
-
- conn := s.testClusters[clusterID].Conn()
- ctx, ac, kc := s.testClusters[clusterID].ClientsWithToken(accesstoken)
var coll arvados.Collection
// Write some file data and create a collection
{
+ c.Logf("save collection to %s", clusterID)
+
+ conn := s.testClusters[clusterID].Conn()
+ ctx, ac, kc := s.testClusters[clusterID].ClientsWithToken(accesstoken)
+
fs, err := coll.FileSystem(ac, kc)
c.Assert(err, check.IsNil)
f, err := fs.OpenFile("test.txt", os.O_CREATE|os.O_RDWR, 0777)
c.Assert(err, check.IsNil)
}
- // Read the collection & file data
- {
+ // Read the collection & file data -- both from the
+ // cluster where it was created, and from the other
+ // cluster.
+ for _, readClusterID := range []string{"z1111", "z2222", "z3333"} {
+ c.Logf("retrieve %s from %s", coll.UUID, readClusterID)
+
+ conn := s.testClusters[readClusterID].Conn()
+ ctx, ac, kc := s.testClusters[readClusterID].ClientsWithToken(accesstoken)
+
user, err := conn.UserGetCurrent(ctx, arvados.GetOptions{})
c.Assert(err, check.IsNil)
c.Check(user.FullName, check.Equals, "Example User")
- coll, err = conn.CollectionGet(ctx, arvados.GetOptions{UUID: coll.UUID})
+ readcoll, err := conn.CollectionGet(ctx, arvados.GetOptions{UUID: coll.UUID})
c.Assert(err, check.IsNil)
- c.Check(coll.ManifestText, check.Not(check.Equals), "")
- fs, err := coll.FileSystem(ac, kc)
+ c.Check(readcoll.ManifestText, check.Not(check.Equals), "")
+ fs, err := readcoll.FileSystem(ac, kc)
c.Assert(err, check.IsNil)
f, err := fs.Open("test.txt")
c.Assert(err, check.IsNil)
}
}
}
+
+// z3333 should not forward a locally-issued container runtime token,
+// associated with a z1111 user, to its login cluster z1111. z1111
+// would only call back to z3333 and then reject the response because
+// the user ID does not match the token prefix. See
+// dev.arvados.org/issues/18346
+func (s *IntegrationSuite) TestForwardRuntimeTokenToLoginCluster(c *check.C) {
+ db3, db3conn := s.dbConn(c, "z3333")
+ defer db3.Close()
+ defer db3conn.Close()
+ rootctx1, _, _ := s.testClusters["z1111"].RootClients()
+ rootctx3, _, _ := s.testClusters["z3333"].RootClients()
+ conn1 := s.testClusters["z1111"].Conn()
+ conn3 := s.testClusters["z3333"].Conn()
+ userctx1, _, _, _ := s.testClusters["z1111"].UserClients(rootctx1, c, conn1, "user@example.com", true)
+
+ user1, err := conn1.UserGetCurrent(userctx1, arvados.GetOptions{})
+ c.Assert(err, check.IsNil)
+ c.Logf("user1 %+v", user1)
+
+ imageColl, err := conn3.CollectionCreate(userctx1, arvados.CreateOptions{Attrs: map[string]interface{}{
+ "manifest_text": ". d41d8cd98f00b204e9800998ecf8427e+0 0:0:sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.tar\n",
+ }})
+ c.Assert(err, check.IsNil)
+ c.Logf("imageColl %+v", imageColl)
+
+ cr, err := conn3.ContainerRequestCreate(userctx1, arvados.CreateOptions{Attrs: map[string]interface{}{
+ "state": "Committed",
+ "command": []string{"echo"},
+ "container_image": imageColl.PortableDataHash,
+ "cwd": "/",
+ "output_path": "/",
+ "priority": 1,
+ "runtime_constraints": arvados.RuntimeConstraints{
+ VCPUs: 1,
+ RAM: 1000000000,
+ },
+ }})
+ c.Assert(err, check.IsNil)
+ c.Logf("container request %+v", cr)
+ ctr, err := conn3.ContainerLock(rootctx3, arvados.GetOptions{UUID: cr.ContainerUUID})
+ c.Assert(err, check.IsNil)
+ c.Logf("container %+v", ctr)
+
+ // We could use conn3.ContainerAuth() here, but that API
+ // hasn't been added to sdk/go/arvados/api.go yet.
+ row := db3conn.QueryRowContext(context.Background(), `SELECT api_token from api_client_authorizations where uuid=$1`, ctr.AuthUUID)
+	var val sql.NullString
+	c.Assert(row.Scan(&val), check.IsNil)
+	c.Assert(val.Valid, check.Equals, true)
+ runtimeToken := "v2/" + ctr.AuthUUID + "/" + val.String
+ ctrctx, _, _ := s.testClusters["z3333"].ClientsWithToken(runtimeToken)
+ c.Logf("container runtime token %+v", runtimeToken)
+
+ _, err = conn3.UserGet(ctrctx, arvados.GetOptions{UUID: user1.UUID})
+ c.Assert(err, check.NotNil)
+ c.Check(err, check.ErrorMatches, `request failed: .* 401 Unauthorized: cannot use a locally issued token to forward a request to our login cluster \(z1111\)`)
+ c.Check(err, check.Not(check.ErrorMatches), `(?ms).*127\.0\.0\.11.*`)
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package localdb
+
+import (
+ "context"
+ "time"
+
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/auth"
+)
+
+// CollectionGet defers to railsProxy for everything except blob
+// signatures.
+func (conn *Conn) CollectionGet(ctx context.Context, opts arvados.GetOptions) (arvados.Collection, error) {
+ if len(opts.Select) > 0 {
+ // We need to know IsTrashed and TrashAt to implement
+ // signing properly, even if the caller doesn't want
+ // them.
+ opts.Select = append([]string{"is_trashed", "trash_at"}, opts.Select...)
+ }
+ resp, err := conn.railsProxy.CollectionGet(ctx, opts)
+ if err != nil {
+ return resp, err
+ }
+ conn.signCollection(ctx, &resp)
+ return resp, nil
+}
+
+// CollectionList defers to railsProxy for everything except blob
+// signatures.
+func (conn *Conn) CollectionList(ctx context.Context, opts arvados.ListOptions) (arvados.CollectionList, error) {
+ if len(opts.Select) > 0 {
+ // We need to know IsTrashed and TrashAt to implement
+ // signing properly, even if the caller doesn't want
+ // them.
+ opts.Select = append([]string{"is_trashed", "trash_at"}, opts.Select...)
+ }
+ resp, err := conn.railsProxy.CollectionList(ctx, opts)
+ if err != nil {
+ return resp, err
+ }
+ for i := range resp.Items {
+ conn.signCollection(ctx, &resp.Items[i])
+ }
+ return resp, nil
+}
+
+// CollectionCreate defers to railsProxy for everything except blob
+// signatures and vocabulary checking.
+func (conn *Conn) CollectionCreate(ctx context.Context, opts arvados.CreateOptions) (arvados.Collection, error) {
+ err := conn.checkProperties(ctx, opts.Attrs["properties"])
+ if err != nil {
+ return arvados.Collection{}, err
+ }
+ if len(opts.Select) > 0 {
+ // We need to know IsTrashed and TrashAt to implement
+ // signing properly, even if the caller doesn't want
+ // them.
+ opts.Select = append([]string{"is_trashed", "trash_at"}, opts.Select...)
+ }
+ resp, err := conn.railsProxy.CollectionCreate(ctx, opts)
+ if err != nil {
+ return resp, err
+ }
+ conn.signCollection(ctx, &resp)
+ return resp, nil
+}
+
+// CollectionUpdate defers to railsProxy for everything except blob
+// signatures and vocabulary checking.
+func (conn *Conn) CollectionUpdate(ctx context.Context, opts arvados.UpdateOptions) (arvados.Collection, error) {
+ err := conn.checkProperties(ctx, opts.Attrs["properties"])
+ if err != nil {
+ return arvados.Collection{}, err
+ }
+ if len(opts.Select) > 0 {
+ // We need to know IsTrashed and TrashAt to implement
+ // signing properly, even if the caller doesn't want
+ // them.
+ opts.Select = append([]string{"is_trashed", "trash_at"}, opts.Select...)
+ }
+ resp, err := conn.railsProxy.CollectionUpdate(ctx, opts)
+ if err != nil {
+ return resp, err
+ }
+ conn.signCollection(ctx, &resp)
+ return resp, nil
+}
+
+func (conn *Conn) signCollection(ctx context.Context, coll *arvados.Collection) {
+ if coll.IsTrashed || coll.ManifestText == "" || !conn.cluster.Collections.BlobSigning {
+ return
+ }
+ var token string
+ if creds, ok := auth.FromContext(ctx); ok && len(creds.Tokens) > 0 {
+ token = creds.Tokens[0]
+ }
+ if token == "" {
+ return
+ }
+ ttl := conn.cluster.Collections.BlobSigningTTL.Duration()
+ exp := time.Now().Add(ttl)
+ if coll.TrashAt != nil && !coll.TrashAt.IsZero() && coll.TrashAt.Before(exp) {
+ exp = *coll.TrashAt
+ }
+ coll.ManifestText = arvados.SignManifest(coll.ManifestText, token, exp, ttl, []byte(conn.cluster.Collections.BlobSigningKey))
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package localdb
+
+import (
+ "context"
+ "regexp"
+ "strconv"
+ "time"
+
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/lib/controller/rpc"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ check "gopkg.in/check.v1"
+)
+
+var _ = check.Suite(&CollectionSuite{})
+
+type CollectionSuite struct {
+ cluster *arvados.Cluster
+ localdb *Conn
+ railsSpy *arvadostest.Proxy
+}
+
+func (s *CollectionSuite) TearDownSuite(c *check.C) {
+ // Undo any changes/additions to the user database so they
+ // don't affect subsequent tests.
+ arvadostest.ResetEnv()
+ c.Check(arvados.NewClientFromEnv().RequestAndDecode(nil, "POST", "database/reset", nil, nil), check.IsNil)
+}
+
+func (s *CollectionSuite) SetUpTest(c *check.C) {
+ cfg, err := config.NewLoader(nil, ctxlog.TestLogger(c)).Load()
+ c.Assert(err, check.IsNil)
+ s.cluster, err = cfg.GetCluster("")
+ c.Assert(err, check.IsNil)
+ s.localdb = NewConn(s.cluster)
+ s.railsSpy = arvadostest.NewProxy(c, s.cluster.Services.RailsAPI)
+ *s.localdb.railsProxy = *rpc.NewConn(s.cluster.ClusterID, s.railsSpy.URL, true, rpc.PassthroughTokenProvider)
+}
+
+func (s *CollectionSuite) TearDownTest(c *check.C) {
+ s.railsSpy.Close()
+}
+
+func (s *CollectionSuite) setUpVocabulary(c *check.C, testVocabulary string) {
+ if testVocabulary == "" {
+ testVocabulary = `{
+ "strict_tags": false,
+ "tags": {
+ "IDTAGIMPORTANCES": {
+ "strict": true,
+ "labels": [{"label": "Importance"}, {"label": "Priority"}],
+ "values": {
+ "IDVALIMPORTANCES1": { "labels": [{"label": "Critical"}, {"label": "Urgent"}, {"label": "High"}] },
+ "IDVALIMPORTANCES2": { "labels": [{"label": "Normal"}, {"label": "Moderate"}] },
+ "IDVALIMPORTANCES3": { "labels": [{"label": "Low"}] }
+ }
+ }
+ }
+ }`
+ }
+ voc, err := arvados.NewVocabulary([]byte(testVocabulary), []string{})
+ c.Assert(err, check.IsNil)
+	// Set a nonempty path so VocabularyGet serves the injected cache
+	// instead of returning an empty vocabulary.
+	s.cluster.API.VocabularyPath = "foo"
+ s.localdb.vocabularyCache = voc
+}
+
+func (s *CollectionSuite) TestCollectionCreateWithProperties(c *check.C) {
+ s.setUpVocabulary(c, "")
+ ctx := auth.NewContext(context.Background(), &auth.Credentials{Tokens: []string{arvadostest.ActiveTokenV2}})
+
+ tests := []struct {
+ name string
+ props map[string]interface{}
+ success bool
+ }{
+ {"Invalid prop key", map[string]interface{}{"Priority": "IDVALIMPORTANCES1"}, false},
+ {"Invalid prop value", map[string]interface{}{"IDTAGIMPORTANCES": "high"}, false},
+ {"Valid prop key & value", map[string]interface{}{"IDTAGIMPORTANCES": "IDVALIMPORTANCES1"}, true},
+ {"Empty properties", map[string]interface{}{}, true},
+ }
+ for _, tt := range tests {
+ c.Log(c.TestName()+" ", tt.name)
+
+ coll, err := s.localdb.CollectionCreate(ctx, arvados.CreateOptions{
+ Select: []string{"uuid", "properties"},
+ Attrs: map[string]interface{}{
+ "properties": tt.props,
+ }})
+ if tt.success {
+ c.Assert(err, check.IsNil)
+ c.Assert(coll.Properties, check.DeepEquals, tt.props)
+ } else {
+ c.Assert(err, check.NotNil)
+ }
+ }
+}
+
+func (s *CollectionSuite) TestCollectionUpdateWithProperties(c *check.C) {
+ s.setUpVocabulary(c, "")
+ ctx := auth.NewContext(context.Background(), &auth.Credentials{Tokens: []string{arvadostest.ActiveTokenV2}})
+
+ tests := []struct {
+ name string
+ props map[string]interface{}
+ success bool
+ }{
+ {"Invalid prop key", map[string]interface{}{"Priority": "IDVALIMPORTANCES1"}, false},
+ {"Invalid prop value", map[string]interface{}{"IDTAGIMPORTANCES": "high"}, false},
+ {"Valid prop key & value", map[string]interface{}{"IDTAGIMPORTANCES": "IDVALIMPORTANCES1"}, true},
+ {"Empty properties", map[string]interface{}{}, true},
+ }
+ for _, tt := range tests {
+ c.Log(c.TestName()+" ", tt.name)
+ coll, err := s.localdb.CollectionCreate(ctx, arvados.CreateOptions{})
+ c.Assert(err, check.IsNil)
+ coll, err = s.localdb.CollectionUpdate(ctx, arvados.UpdateOptions{
+ UUID: coll.UUID,
+ Select: []string{"uuid", "properties"},
+ Attrs: map[string]interface{}{
+ "properties": tt.props,
+ }})
+ if tt.success {
+ c.Assert(err, check.IsNil)
+ c.Assert(coll.Properties, check.DeepEquals, tt.props)
+ } else {
+ c.Assert(err, check.NotNil)
+ }
+ }
+}
+
+func (s *CollectionSuite) TestSignatures(c *check.C) {
+ ctx := auth.NewContext(context.Background(), &auth.Credentials{Tokens: []string{arvadostest.ActiveTokenV2}})
+
+ resp, err := s.localdb.CollectionGet(ctx, arvados.GetOptions{UUID: arvadostest.FooCollection})
+ c.Check(err, check.IsNil)
+ c.Check(resp.ManifestText, check.Matches, `(?ms).* acbd[^ ]*\+3\+A[0-9a-f]+@[0-9a-f]+ 0:.*`)
+ s.checkSignatureExpiry(c, resp.ManifestText, time.Hour*24*7*2)
+
+ resp, err = s.localdb.CollectionGet(ctx, arvados.GetOptions{UUID: arvadostest.FooCollection, Select: []string{"manifest_text"}})
+ c.Check(err, check.IsNil)
+ c.Check(resp.ManifestText, check.Matches, `(?ms).* acbd[^ ]*\+3\+A[0-9a-f]+@[0-9a-f]+ 0:.*`)
+
+ lresp, err := s.localdb.CollectionList(ctx, arvados.ListOptions{Limit: -1, Filters: []arvados.Filter{{"uuid", "=", arvadostest.FooCollection}}})
+ c.Check(err, check.IsNil)
+ if c.Check(lresp.Items, check.HasLen, 1) {
+ c.Check(lresp.Items[0].UUID, check.Equals, arvadostest.FooCollection)
+ c.Check(lresp.Items[0].ManifestText, check.Equals, "")
+ c.Check(lresp.Items[0].UnsignedManifestText, check.Equals, "")
+ }
+
+ lresp, err = s.localdb.CollectionList(ctx, arvados.ListOptions{Limit: -1, Filters: []arvados.Filter{{"uuid", "=", arvadostest.FooCollection}}, Select: []string{"manifest_text"}})
+ c.Check(err, check.IsNil)
+ if c.Check(lresp.Items, check.HasLen, 1) {
+ c.Check(lresp.Items[0].ManifestText, check.Matches, `(?ms).* acbd[^ ]*\+3\+A[0-9a-f]+@[0-9a-f]+ 0:.*`)
+ c.Check(lresp.Items[0].UnsignedManifestText, check.Equals, "")
+ }
+
+ lresp, err = s.localdb.CollectionList(ctx, arvados.ListOptions{Limit: -1, Filters: []arvados.Filter{{"uuid", "=", arvadostest.FooCollection}}, Select: []string{"unsigned_manifest_text"}})
+ c.Check(err, check.IsNil)
+ if c.Check(lresp.Items, check.HasLen, 1) {
+ c.Check(lresp.Items[0].ManifestText, check.Equals, "")
+ c.Check(lresp.Items[0].UnsignedManifestText, check.Matches, `(?ms).* acbd[^ ]*\+3 0:.*`)
+ }
+
+ // early trash date causes lower signature TTL (even if
+ // trash_at and is_trashed fields are unselected)
+ trashed, err := s.localdb.CollectionCreate(ctx, arvados.CreateOptions{
+ Select: []string{"uuid", "manifest_text"},
+ Attrs: map[string]interface{}{
+ "manifest_text": ". d41d8cd98f00b204e9800998ecf8427e+0 0:0:foo\n",
+ "trash_at": time.Now().UTC().Add(time.Hour),
+ }})
+ c.Assert(err, check.IsNil)
+ s.checkSignatureExpiry(c, trashed.ManifestText, time.Hour)
+ resp, err = s.localdb.CollectionGet(ctx, arvados.GetOptions{UUID: trashed.UUID})
+ c.Assert(err, check.IsNil)
+ s.checkSignatureExpiry(c, resp.ManifestText, time.Hour)
+
+ // distant future trash date does not cause higher signature TTL
+ trashed, err = s.localdb.CollectionUpdate(ctx, arvados.UpdateOptions{
+ UUID: trashed.UUID,
+ Attrs: map[string]interface{}{
+ "trash_at": time.Now().UTC().Add(time.Hour * 24 * 365),
+ }})
+ c.Assert(err, check.IsNil)
+ s.checkSignatureExpiry(c, trashed.ManifestText, time.Hour*24*7*2)
+ resp, err = s.localdb.CollectionGet(ctx, arvados.GetOptions{UUID: trashed.UUID})
+ c.Assert(err, check.IsNil)
+ s.checkSignatureExpiry(c, resp.ManifestText, time.Hour*24*7*2)
+
+ // Make sure groups/contents doesn't return manifest_text with
+ // collections (if it did, we'd need to sign it).
+ gresp, err := s.localdb.GroupContents(ctx, arvados.GroupContentsOptions{
+ Limit: -1,
+ Filters: []arvados.Filter{{"uuid", "=", arvadostest.FooCollection}},
+ Select: []string{"uuid", "manifest_text"},
+ })
+ if err != nil {
+ c.Check(err, check.ErrorMatches, `.*Invalid attribute.*manifest_text.*`)
+ } else if c.Check(gresp.Items, check.HasLen, 1) {
+ c.Check(gresp.Items[0].(map[string]interface{})["uuid"], check.Equals, arvadostest.FooCollection)
+ c.Check(gresp.Items[0].(map[string]interface{})["manifest_text"], check.Equals, nil)
+ }
+}
+
+func (s *CollectionSuite) checkSignatureExpiry(c *check.C, manifestText string, expectedTTL time.Duration) {
+ m := regexp.MustCompile(`@([[:xdigit:]]+)`).FindStringSubmatch(manifestText)
+ c.Assert(m, check.HasLen, 2)
+ sigexp, err := strconv.ParseInt(m[1], 16, 64)
+ c.Assert(err, check.IsNil)
+ expectedExp := time.Now().Add(expectedTTL).Unix()
+ c.Check(sigexp > expectedExp-60, check.Equals, true)
+ c.Check(sigexp <= expectedExp, check.Equals, true)
+}
+
+func (s *CollectionSuite) TestSignaturesDisabled(c *check.C) {
+ s.localdb.cluster.Collections.BlobSigning = false
+ ctx := auth.NewContext(context.Background(), &auth.Credentials{Tokens: []string{arvadostest.ActiveTokenV2}})
+
+ resp, err := s.localdb.CollectionGet(ctx, arvados.GetOptions{UUID: arvadostest.FooCollection})
+ c.Check(err, check.IsNil)
+ c.Check(resp.ManifestText, check.Matches, `(?ms).* acbd[^ +]*\+3 0:.*`)
+}
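`checkSignatureExpiry` above recovers the signature expiry by matching the hex timestamp that follows `@` in a signed locator hint. The same extraction can stand alone as a sketch; the function name is invented here, and the locator in `main` uses a made-up signature.

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// signatureExpiry extracts the hex Unix timestamp that follows "@" in
// a signed manifest locator, as checkSignatureExpiry does. ok is
// false when the manifest carries no signature hint.
func signatureExpiry(manifestText string) (int64, bool) {
	m := regexp.MustCompile(`@([[:xdigit:]]+)`).FindStringSubmatch(manifestText)
	if len(m) != 2 {
		return 0, false
	}
	exp, err := strconv.ParseInt(m[1], 16, 64)
	if err != nil {
		return 0, false
	}
	return exp, true
}

func main() {
	exp, ok := signatureExpiry(". acbd18db4cc2f85cedef654fccc4a4d8+3+A0123abcd@65432100 0:3:foo\n")
	fmt.Printf("%x %v\n", exp, ok)
}
```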
import (
"context"
+ "encoding/json"
+ "fmt"
+ "net/http"
+ "os"
+ "strings"
+ "time"
"git.arvados.org/arvados.git/lib/controller/railsproxy"
"git.arvados.org/arvados.git/lib/controller/rpc"
"git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
+ "github.com/sirupsen/logrus"
)
type railsProxy = rpc.Conn
type Conn struct {
- cluster *arvados.Cluster
- *railsProxy // handles API methods that aren't defined on Conn itself
+ cluster *arvados.Cluster
+ *railsProxy // handles API methods that aren't defined on Conn itself
+ vocabularyCache *arvados.Vocabulary
+ vocabularyFileModTime time.Time
+ lastVocabularyRefreshCheck time.Time
+ lastVocabularyError error
loginController
}
func NewConn(cluster *arvados.Cluster) *Conn {
railsProxy := railsproxy.NewConn(cluster)
- var conn Conn
- conn = Conn{
+ railsProxy.RedactHostInErrors = true
+ conn := Conn{
cluster: cluster,
railsProxy: railsProxy,
}
return &conn
}
+func (conn *Conn) checkProperties(ctx context.Context, properties interface{}) error {
+ if properties == nil {
+ return nil
+ }
+ var props map[string]interface{}
+ switch properties := properties.(type) {
+ case string:
+ err := json.Unmarshal([]byte(properties), &props)
+ if err != nil {
+ return err
+ }
+ case map[string]interface{}:
+ props = properties
+ default:
+ return fmt.Errorf("unexpected properties type %T", properties)
+ }
+ voc, err := conn.VocabularyGet(ctx)
+ if err != nil {
+ return err
+ }
+ err = voc.Check(props)
+ if err != nil {
+		return httpErrorf(http.StatusBadRequest, "%v", err)
+ }
+ return nil
+}
+
+func (conn *Conn) maybeRefreshVocabularyCache(logger logrus.FieldLogger) error {
+ if conn.lastVocabularyRefreshCheck.Add(time.Second).After(time.Now()) {
+ // Throttle the access to disk to at most once per second.
+ return nil
+ }
+ conn.lastVocabularyRefreshCheck = time.Now()
+ fi, err := os.Stat(conn.cluster.API.VocabularyPath)
+ if err != nil {
+ err = fmt.Errorf("couldn't stat vocabulary file %q: %v", conn.cluster.API.VocabularyPath, err)
+ conn.lastVocabularyError = err
+ return err
+ }
+ if fi.ModTime().After(conn.vocabularyFileModTime) {
+ err = conn.loadVocabularyFile()
+ if err != nil {
+ conn.lastVocabularyError = err
+ return err
+ }
+ conn.vocabularyFileModTime = fi.ModTime()
+ conn.lastVocabularyError = nil
+ logger.Info("vocabulary file reloaded successfully")
+ }
+ return nil
+}
+
+func (conn *Conn) loadVocabularyFile() error {
+ vf, err := os.ReadFile(conn.cluster.API.VocabularyPath)
+ if err != nil {
+		return fmt.Errorf("couldn't read the vocabulary file: %v", err)
+ }
+ mk := make([]string, 0, len(conn.cluster.Collections.ManagedProperties))
+ for k := range conn.cluster.Collections.ManagedProperties {
+ mk = append(mk, k)
+ }
+ voc, err := arvados.NewVocabulary(vf, mk)
+ if err != nil {
+ return fmt.Errorf("while loading vocabulary file %q: %s", conn.cluster.API.VocabularyPath, err)
+ }
+ conn.vocabularyCache = voc
+ return nil
+}
+
+// LastVocabularyError returns the last error encountered while loading the
+// vocabulary file.
+// Implements health.Func
+func (conn *Conn) LastVocabularyError() error {
+ conn.maybeRefreshVocabularyCache(ctxlog.FromContext(context.Background()))
+ return conn.lastVocabularyError
+}
+
+// VocabularyGet refreshes the vocabulary cache if necessary and returns it.
+func (conn *Conn) VocabularyGet(ctx context.Context) (arvados.Vocabulary, error) {
+ if conn.cluster.API.VocabularyPath == "" {
+ return arvados.Vocabulary{
+ Tags: map[string]arvados.VocabularyTag{},
+ }, nil
+ }
+ logger := ctxlog.FromContext(ctx)
+ if conn.vocabularyCache == nil {
+ // Initial load of vocabulary file.
+ err := conn.loadVocabularyFile()
+ if err != nil {
+ logger.WithError(err).Error("error loading vocabulary file")
+ return arvados.Vocabulary{}, err
+ }
+ }
+ err := conn.maybeRefreshVocabularyCache(logger)
+ if err != nil {
+ logger.WithError(err).Error("error reloading vocabulary file - ignoring")
+ }
+ return *conn.vocabularyCache, nil
+}
+
// Logout handles the logout of conn giving to the appropriate loginController
func (conn *Conn) Logout(ctx context.Context, opts arvados.LogoutOptions) (arvados.LogoutResponse, error) {
return conn.loginController.Logout(ctx, opts)
func (conn *Conn) UserAuthenticate(ctx context.Context, opts arvados.UserAuthenticateOptions) (arvados.APIClientAuthorization, error) {
return conn.loginController.UserAuthenticate(ctx, opts)
}
+
+func (conn *Conn) GroupContents(ctx context.Context, options arvados.GroupContentsOptions) (arvados.ObjectList, error) {
+	// Pass the request through to the API server unless the requested
+	// UUID is a group; e.g. a user UUID here refers to that user's
+	// virtual home project.
+ if strings.Index(options.UUID, "-j7d0g-") != 5 {
+ return conn.railsProxy.GroupContents(ctx, options)
+ }
+
+ var resp arvados.ObjectList
+
+ // Get the group object
+ respGroup, err := conn.GroupGet(ctx, arvados.GetOptions{UUID: options.UUID})
+ if err != nil {
+ return resp, err
+ }
+
+ // If the group has groupClass 'filter', apply the filters before getting the contents.
+ if respGroup.GroupClass == "filter" {
+ if filters, ok := respGroup.Properties["filters"].([]interface{}); ok {
+ for _, f := range filters {
+				// f should be a 3-element array: [attribute, operator, operand]
+ tmp, ok2 := f.([]interface{})
+ if !ok2 || len(tmp) < 3 {
+					return resp, fmt.Errorf("filter unparsable: %T, %+v, original field: %T, %+v", tmp, tmp, f, f)
+ }
+ var filter arvados.Filter
+ if attr, ok2 := tmp[0].(string); ok2 {
+ filter.Attr = attr
+ } else {
+					return resp, fmt.Errorf("filter unparsable: attribute must be string: %T, %+v, filter: %T, %+v", tmp[0], tmp[0], f, f)
+ }
+ if operator, ok2 := tmp[1].(string); ok2 {
+ filter.Operator = operator
+ } else {
+					return resp, fmt.Errorf("filter unparsable: operator must be string: %T, %+v, filter: %T, %+v", tmp[1], tmp[1], f, f)
+ }
+ filter.Operand = tmp[2]
+ options.Filters = append(options.Filters, filter)
+ }
+ } else {
+			return resp, fmt.Errorf("filter unparsable: not an array")
+ }
+ // Use the generic /groups/contents endpoint for filter groups
+ options.UUID = ""
+ }
+
+ return conn.railsProxy.GroupContents(ctx, options)
+}
+
+func httpErrorf(code int, format string, args ...interface{}) error {
+ return httpserver.ErrorWithStatus(fmt.Errorf(format, args...), code)
+}
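The type switch in `checkProperties` above accepts properties either as a JSON-encoded string or as an already-decoded object. That normalization step can be sketched on its own; the helper name is invented here, and the tag key in `main` is one of the test vocabulary's IDs.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// normalizeProperties mirrors the type switch in checkProperties:
// accept a JSON-encoded string or an already-decoded map, reject
// anything else.
func normalizeProperties(properties interface{}) (map[string]interface{}, error) {
	switch p := properties.(type) {
	case nil:
		return nil, nil
	case string:
		var m map[string]interface{}
		if err := json.Unmarshal([]byte(p), &m); err != nil {
			return nil, err
		}
		return m, nil
	case map[string]interface{}:
		return p, nil
	default:
		return nil, fmt.Errorf("unexpected properties type %T", properties)
	}
}

func main() {
	m, err := normalizeProperties(`{"IDTAGIMPORTANCES": "IDVALIMPORTANCES1"}`)
	fmt.Println(m["IDTAGIMPORTANCES"], err)
}
```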
"crypto/sha256"
"fmt"
"io"
+ "io/ioutil"
+ "net"
"time"
"git.arvados.org/arvados.git/lib/config"
"git.arvados.org/arvados.git/sdk/go/arvadostest"
"git.arvados.org/arvados.git/sdk/go/auth"
"git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "golang.org/x/crypto/ssh"
check "gopkg.in/check.v1"
)
authKey := fmt.Sprintf("%x", h.Sum(nil))
s.gw = &crunchrun.Gateway{
- DockerContainerID: new(string),
- ContainerUUID: s.ctrUUID,
- AuthSecret: authKey,
- Address: "localhost:0",
- Log: ctxlog.TestLogger(c),
+ DockerContainerID: new(string),
+ ContainerUUID: s.ctrUUID,
+ AuthSecret: authKey,
+ Address: "localhost:0",
+ Log: ctxlog.TestLogger(c),
+ ContainerIPAddress: func() (string, error) { return "localhost", nil },
}
c.Assert(s.gw.Start(), check.IsNil)
rootctx := auth.NewContext(context.Background(), &auth.Credentials{Tokens: []string{s.cluster.SystemRootToken}})
}
}
+func (s *ContainerGatewaySuite) TestDirectTCP(c *check.C) {
+ // Set up servers on a few TCP ports
+ var addrs []string
+ for i := 0; i < 3; i++ {
+ ln, err := net.Listen("tcp", ":0")
+ c.Assert(err, check.IsNil)
+ defer ln.Close()
+ addrs = append(addrs, ln.Addr().String())
+ go func() {
+ for {
+ conn, err := ln.Accept()
+ if err != nil {
+ return
+ }
+ var gotAddr string
+ fmt.Fscanf(conn, "%s\n", &gotAddr)
+ c.Logf("stub server listening at %s received string %q from remote %s", ln.Addr().String(), gotAddr, conn.RemoteAddr())
+ if gotAddr == ln.Addr().String() {
+ fmt.Fprintf(conn, "%s\n", ln.Addr().String())
+ }
+ conn.Close()
+ }
+ }()
+ }
+
+ c.Logf("connecting to %s", s.gw.Address)
+ sshconn, err := s.localdb.ContainerSSH(s.ctx, arvados.ContainerSSHOptions{UUID: s.ctrUUID})
+ c.Assert(err, check.IsNil)
+ c.Assert(sshconn.Conn, check.NotNil)
+ defer sshconn.Conn.Close()
+ conn, chans, reqs, err := ssh.NewClientConn(sshconn.Conn, "zzzz-dz642-abcdeabcdeabcde", &ssh.ClientConfig{
+ HostKeyCallback: func(hostname string, remote net.Addr, key ssh.PublicKey) error { return nil },
+ })
+ c.Assert(err, check.IsNil)
+ client := ssh.NewClient(conn, chans, reqs)
+ for _, expectAddr := range addrs {
+ _, port, err := net.SplitHostPort(expectAddr)
+ c.Assert(err, check.IsNil)
+
+ c.Logf("trying foo:%s", port)
+ {
+ conn, err := client.Dial("tcp", "foo:"+port)
+ c.Assert(err, check.IsNil)
+ conn.SetDeadline(time.Now().Add(time.Second))
+ buf, err := ioutil.ReadAll(conn)
+ c.Check(err, check.IsNil)
+ c.Check(string(buf), check.Equals, "")
+ }
+
+ c.Logf("trying localhost:%s", port)
+ {
+ conn, err := client.Dial("tcp", "localhost:"+port)
+ c.Assert(err, check.IsNil)
+ conn.SetDeadline(time.Now().Add(time.Second))
+ conn.Write([]byte(expectAddr + "\n"))
+ var gotAddr string
+ fmt.Fscanf(conn, "%s\n", &gotAddr)
+ c.Check(gotAddr, check.Equals, expectAddr)
+ }
+ }
+}
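The stub servers above share one small pattern: listen on an ephemeral port, read a line, and echo the listen address back only when the incoming line matches it. A minimal self-contained sketch of that pattern (stdlib only; `shouldEcho` and the structure are illustrative, not part of the test suite):

```go
package main

import (
	"bufio"
	"fmt"
	"net"
)

// shouldEcho mirrors the stub servers' rule in TestDirectTCP:
// only echo when the line received equals the server's own address.
func shouldEcho(got, ownAddr string) bool {
	return got == ownAddr
}

func main() {
	// Stub server on an ephemeral port: read one line, reply with the
	// listen address only if the line matches it, then close.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	go func() {
		for {
			conn, err := ln.Accept()
			if err != nil {
				return // listener closed
			}
			line, _ := bufio.NewReader(conn).ReadString('\n')
			if shouldEcho(line, ln.Addr().String()+"\n") {
				fmt.Fprintf(conn, "%s\n", ln.Addr().String())
			}
			conn.Close()
		}
	}()

	// Client: send the server's own address and expect it echoed back.
	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Fprintf(conn, "%s\n", ln.Addr().String())
	reply, _ := bufio.NewReader(conn).ReadString('\n')
	if reply == ln.Addr().String()+"\n" {
		fmt.Println("echo ok")
	} else {
		fmt.Println("echo mismatch")
	}
}
```

In the test proper, the same round trip runs through the SSH tunnel's `client.Dial` instead of a direct `net.Dial`, which is what proves the gateway is forwarding TCP.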
+
func (s *ContainerGatewaySuite) TestConnect(c *check.C) {
c.Logf("connecting to %s", s.gw.Address)
sshconn, err := s.localdb.ContainerSSH(s.ctx, arvados.ContainerSSHOptions{UUID: s.ctrUUID})
// Receive binary
_, err = io.ReadFull(sshconn.Conn, buf[:4])
c.Check(err, check.IsNil)
- c.Check(buf[:4], check.DeepEquals, []byte{0, 0, 1, 0xfc})
// If we can get this far into an SSH handshake...
- c.Log("success, tunnel is working")
+ c.Logf("was able to read %x -- success, tunnel is working", buf[:4])
}()
select {
case <-done:
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package localdb
+
+import (
+ "context"
+
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+)
+
+// ContainerRequestCreate defers to railsProxy for everything except
+// vocabulary checking.
+func (conn *Conn) ContainerRequestCreate(ctx context.Context, opts arvados.CreateOptions) (arvados.ContainerRequest, error) {
+ err := conn.checkProperties(ctx, opts.Attrs["properties"])
+ if err != nil {
+ return arvados.ContainerRequest{}, err
+ }
+	return conn.railsProxy.ContainerRequestCreate(ctx, opts)
+}
+
+// ContainerRequestUpdate defers to railsProxy for everything except
+// vocabulary checking.
+func (conn *Conn) ContainerRequestUpdate(ctx context.Context, opts arvados.UpdateOptions) (arvados.ContainerRequest, error) {
+ err := conn.checkProperties(ctx, opts.Attrs["properties"])
+ if err != nil {
+ return arvados.ContainerRequest{}, err
+ }
+	return conn.railsProxy.ContainerRequestUpdate(ctx, opts)
+}
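The Create/Update wrappers above follow one shape: validate `opts.Attrs["properties"]`, then delegate everything else to the rails proxy. A stripped-down sketch of that wrapper shape, with a hypothetical `checkProperties` and an injected backend standing in for `railsProxy` (none of these names are the real Arvados API):

```go
package main

import (
	"errors"
	"fmt"
)

// checkProperties is a stand-in for the vocabulary check: here it only
// rejects non-map values, whereas the real check also validates keys
// and values against the configured vocabulary.
func checkProperties(props interface{}) error {
	if props == nil {
		return nil
	}
	if _, ok := props.(map[string]interface{}); !ok {
		return errors.New("properties must be an object")
	}
	return nil
}

// create validates properties before delegating to the backend,
// mirroring the shape of ContainerRequestCreate above.
func create(attrs map[string]interface{}, backend func() (string, error)) (string, error) {
	if err := checkProperties(attrs["properties"]); err != nil {
		return "", err
	}
	return backend()
}

func main() {
	backend := func() (string, error) { return "zzzzz-xvhdp-000000000000000", nil }
	uuid, err := create(map[string]interface{}{"properties": map[string]interface{}{"k": "v"}}, backend)
	fmt.Println(uuid, err)
	_, err = create(map[string]interface{}{"properties": "not-an-object"}, backend)
	fmt.Println(err != nil)
}
```

The same shape repeats for groups and links below; only the proxied method and return type change.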
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package localdb
+
+import (
+ "context"
+
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/lib/controller/rpc"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ check "gopkg.in/check.v1"
+)
+
+var _ = check.Suite(&ContainerRequestSuite{})
+
+type ContainerRequestSuite struct {
+ cluster *arvados.Cluster
+ localdb *Conn
+ railsSpy *arvadostest.Proxy
+}
+
+func (s *ContainerRequestSuite) TearDownSuite(c *check.C) {
+	// Undo any changes/additions to the database so they
+	// don't affect subsequent tests.
+ arvadostest.ResetEnv()
+ c.Check(arvados.NewClientFromEnv().RequestAndDecode(nil, "POST", "database/reset", nil, nil), check.IsNil)
+}
+
+func (s *ContainerRequestSuite) SetUpTest(c *check.C) {
+ cfg, err := config.NewLoader(nil, ctxlog.TestLogger(c)).Load()
+ c.Assert(err, check.IsNil)
+ s.cluster, err = cfg.GetCluster("")
+ c.Assert(err, check.IsNil)
+ s.localdb = NewConn(s.cluster)
+ s.railsSpy = arvadostest.NewProxy(c, s.cluster.Services.RailsAPI)
+ *s.localdb.railsProxy = *rpc.NewConn(s.cluster.ClusterID, s.railsSpy.URL, true, rpc.PassthroughTokenProvider)
+}
+
+func (s *ContainerRequestSuite) TearDownTest(c *check.C) {
+ s.railsSpy.Close()
+}
+
+func (s *ContainerRequestSuite) setUpVocabulary(c *check.C, testVocabulary string) {
+ if testVocabulary == "" {
+ testVocabulary = `{
+ "strict_tags": false,
+ "tags": {
+ "IDTAGIMPORTANCES": {
+ "strict": true,
+ "labels": [{"label": "Importance"}, {"label": "Priority"}],
+ "values": {
+ "IDVALIMPORTANCES1": { "labels": [{"label": "Critical"}, {"label": "Urgent"}, {"label": "High"}] },
+ "IDVALIMPORTANCES2": { "labels": [{"label": "Normal"}, {"label": "Moderate"}] },
+ "IDVALIMPORTANCES3": { "labels": [{"label": "Low"}] }
+ }
+ }
+ }
+ }`
+ }
+ voc, err := arvados.NewVocabulary([]byte(testVocabulary), []string{})
+ c.Assert(err, check.IsNil)
+ s.localdb.vocabularyCache = voc
+ s.cluster.API.VocabularyPath = "foo"
+}
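The vocabulary above marks `IDTAGIMPORTANCES` as a strict tag, which is why the test table rejects the label `Priority` as a key and `high` as a value: only IDs are acceptable. A simplified sketch of that lookup, assuming a plain map in place of `arvados.NewVocabulary` (labels and per-tag strictness are omitted):

```go
package main

import "fmt"

// A simplified vocabulary: strict tag ID -> set of allowed value IDs.
// The real Arvados vocabulary also carries human-readable labels and
// non-strict tags; this sketch models only the strict-tag case
// exercised by the tests above.
var vocab = map[string]map[string]bool{
	"IDTAGIMPORTANCES": {
		"IDVALIMPORTANCES1": true,
		"IDVALIMPORTANCES2": true,
		"IDVALIMPORTANCES3": true,
	},
}

// checkProperty reports whether a key/value pair is acceptable: a
// known strict tag must use one of its known value IDs. (Labels such
// as "Priority" or "high" are not valid here; only IDs are.)
func checkProperty(key, value string) bool {
	vals, ok := vocab[key]
	if !ok {
		return false
	}
	return vals[value]
}

func main() {
	fmt.Println(checkProperty("IDTAGIMPORTANCES", "IDVALIMPORTANCES1")) // true
	fmt.Println(checkProperty("IDTAGIMPORTANCES", "high"))              // false
	fmt.Println(checkProperty("Priority", "IDVALIMPORTANCES1"))         // false
}
```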
+
+func (s *ContainerRequestSuite) TestCRCreateWithProperties(c *check.C) {
+ s.setUpVocabulary(c, "")
+ ctx := auth.NewContext(context.Background(), &auth.Credentials{Tokens: []string{arvadostest.ActiveTokenV2}})
+
+ tests := []struct {
+ name string
+ props map[string]interface{}
+ success bool
+ }{
+ {"Invalid prop key", map[string]interface{}{"Priority": "IDVALIMPORTANCES1"}, false},
+ {"Invalid prop value", map[string]interface{}{"IDTAGIMPORTANCES": "high"}, false},
+ {"Valid prop key & value", map[string]interface{}{"IDTAGIMPORTANCES": "IDVALIMPORTANCES1"}, true},
+ {"Empty properties", map[string]interface{}{}, true},
+ }
+ for _, tt := range tests {
+ c.Log(c.TestName()+" ", tt.name)
+
+ cnt, err := s.localdb.ContainerRequestCreate(ctx, arvados.CreateOptions{
+ Select: []string{"uuid", "properties"},
+ Attrs: map[string]interface{}{
+ "command": []string{"echo", "foo"},
+ "container_image": "arvados/apitestfixture:latest",
+ "cwd": "/tmp",
+ "environment": map[string]string{},
+ "mounts": map[string]interface{}{
+ "/out": map[string]interface{}{
+ "kind": "tmp",
+ "capacity": 1000000,
+ },
+ },
+ "output_path": "/out",
+ "runtime_constraints": map[string]interface{}{
+ "vcpus": 1,
+ "ram": 2,
+ },
+ "properties": tt.props,
+ }})
+ if tt.success {
+ c.Assert(err, check.IsNil)
+ c.Assert(cnt.Properties, check.DeepEquals, tt.props)
+ } else {
+ c.Assert(err, check.NotNil)
+ }
+ }
+}
+
+func (s *ContainerRequestSuite) TestCRUpdateWithProperties(c *check.C) {
+ s.setUpVocabulary(c, "")
+ ctx := auth.NewContext(context.Background(), &auth.Credentials{Tokens: []string{arvadostest.ActiveTokenV2}})
+
+ tests := []struct {
+ name string
+ props map[string]interface{}
+ success bool
+ }{
+ {"Invalid prop key", map[string]interface{}{"Priority": "IDVALIMPORTANCES1"}, false},
+ {"Invalid prop value", map[string]interface{}{"IDTAGIMPORTANCES": "high"}, false},
+ {"Valid prop key & value", map[string]interface{}{"IDTAGIMPORTANCES": "IDVALIMPORTANCES1"}, true},
+ {"Empty properties", map[string]interface{}{}, true},
+ }
+ for _, tt := range tests {
+ c.Log(c.TestName()+" ", tt.name)
+ cnt, err := s.localdb.ContainerRequestCreate(ctx, arvados.CreateOptions{
+ Attrs: map[string]interface{}{
+ "command": []string{"echo", "foo"},
+ "container_image": "arvados/apitestfixture:latest",
+ "cwd": "/tmp",
+ "environment": map[string]string{},
+ "mounts": map[string]interface{}{
+ "/out": map[string]interface{}{
+ "kind": "tmp",
+ "capacity": 1000000,
+ },
+ },
+ "output_path": "/out",
+ "runtime_constraints": map[string]interface{}{
+ "vcpus": 1,
+ "ram": 2,
+ },
+ },
+ })
+ c.Assert(err, check.IsNil)
+ cnt, err = s.localdb.ContainerRequestUpdate(ctx, arvados.UpdateOptions{
+ UUID: cnt.UUID,
+ Select: []string{"uuid", "properties"},
+ Attrs: map[string]interface{}{
+ "properties": tt.props,
+ }})
+ if tt.success {
+ c.Assert(err, check.IsNil)
+ c.Assert(cnt.Properties, check.DeepEquals, tt.props)
+ } else {
+ c.Assert(err, check.NotNil)
+ }
+ }
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package localdb
+
+import (
+ "context"
+
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+)
+
+// GroupCreate defers to railsProxy for everything except vocabulary
+// checking.
+func (conn *Conn) GroupCreate(ctx context.Context, opts arvados.CreateOptions) (arvados.Group, error) {
+ err := conn.checkProperties(ctx, opts.Attrs["properties"])
+ if err != nil {
+ return arvados.Group{}, err
+ }
+	return conn.railsProxy.GroupCreate(ctx, opts)
+}
+
+// GroupUpdate defers to railsProxy for everything except vocabulary
+// checking.
+func (conn *Conn) GroupUpdate(ctx context.Context, opts arvados.UpdateOptions) (arvados.Group, error) {
+ err := conn.checkProperties(ctx, opts.Attrs["properties"])
+ if err != nil {
+ return arvados.Group{}, err
+ }
+	return conn.railsProxy.GroupUpdate(ctx, opts)
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package localdb
+
+import (
+ "context"
+
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/lib/controller/rpc"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ check "gopkg.in/check.v1"
+)
+
+var _ = check.Suite(&GroupSuite{})
+
+type GroupSuite struct {
+ cluster *arvados.Cluster
+ localdb *Conn
+ railsSpy *arvadostest.Proxy
+}
+
+func (s *GroupSuite) TearDownSuite(c *check.C) {
+	// Undo any changes/additions to the database so they
+	// don't affect subsequent tests.
+ arvadostest.ResetEnv()
+ c.Check(arvados.NewClientFromEnv().RequestAndDecode(nil, "POST", "database/reset", nil, nil), check.IsNil)
+}
+
+func (s *GroupSuite) SetUpTest(c *check.C) {
+ cfg, err := config.NewLoader(nil, ctxlog.TestLogger(c)).Load()
+ c.Assert(err, check.IsNil)
+ s.cluster, err = cfg.GetCluster("")
+ c.Assert(err, check.IsNil)
+ s.localdb = NewConn(s.cluster)
+ s.railsSpy = arvadostest.NewProxy(c, s.cluster.Services.RailsAPI)
+ *s.localdb.railsProxy = *rpc.NewConn(s.cluster.ClusterID, s.railsSpy.URL, true, rpc.PassthroughTokenProvider)
+}
+
+func (s *GroupSuite) TearDownTest(c *check.C) {
+ s.railsSpy.Close()
+}
+
+func (s *GroupSuite) setUpVocabulary(c *check.C, testVocabulary string) {
+ if testVocabulary == "" {
+ testVocabulary = `{
+ "strict_tags": false,
+ "tags": {
+ "IDTAGIMPORTANCES": {
+ "strict": true,
+ "labels": [{"label": "Importance"}, {"label": "Priority"}],
+ "values": {
+ "IDVALIMPORTANCES1": { "labels": [{"label": "Critical"}, {"label": "Urgent"}, {"label": "High"}] },
+ "IDVALIMPORTANCES2": { "labels": [{"label": "Normal"}, {"label": "Moderate"}] },
+ "IDVALIMPORTANCES3": { "labels": [{"label": "Low"}] }
+ }
+ }
+ }
+ }`
+ }
+ voc, err := arvados.NewVocabulary([]byte(testVocabulary), []string{})
+ c.Assert(err, check.IsNil)
+ s.localdb.vocabularyCache = voc
+ s.cluster.API.VocabularyPath = "foo"
+}
+
+func (s *GroupSuite) TestGroupCreateWithProperties(c *check.C) {
+ s.setUpVocabulary(c, "")
+ ctx := auth.NewContext(context.Background(), &auth.Credentials{Tokens: []string{arvadostest.ActiveTokenV2}})
+
+ tests := []struct {
+ name string
+ props map[string]interface{}
+ success bool
+ }{
+ {"Invalid prop key", map[string]interface{}{"Priority": "IDVALIMPORTANCES1"}, false},
+ {"Invalid prop value", map[string]interface{}{"IDTAGIMPORTANCES": "high"}, false},
+ {"Valid prop key & value", map[string]interface{}{"IDTAGIMPORTANCES": "IDVALIMPORTANCES1"}, true},
+ {"Empty properties", map[string]interface{}{}, true},
+ }
+ for _, tt := range tests {
+ c.Log(c.TestName()+" ", tt.name)
+
+ grp, err := s.localdb.GroupCreate(ctx, arvados.CreateOptions{
+ Select: []string{"uuid", "properties"},
+ Attrs: map[string]interface{}{
+ "group_class": "project",
+ "properties": tt.props,
+ }})
+ if tt.success {
+ c.Assert(err, check.IsNil)
+ c.Assert(grp.Properties, check.DeepEquals, tt.props)
+ } else {
+ c.Assert(err, check.NotNil)
+ }
+ }
+}
+
+func (s *GroupSuite) TestGroupUpdateWithProperties(c *check.C) {
+ s.setUpVocabulary(c, "")
+ ctx := auth.NewContext(context.Background(), &auth.Credentials{Tokens: []string{arvadostest.ActiveTokenV2}})
+
+ tests := []struct {
+ name string
+ props map[string]interface{}
+ success bool
+ }{
+ {"Invalid prop key", map[string]interface{}{"Priority": "IDVALIMPORTANCES1"}, false},
+ {"Invalid prop value", map[string]interface{}{"IDTAGIMPORTANCES": "high"}, false},
+ {"Valid prop key & value", map[string]interface{}{"IDTAGIMPORTANCES": "IDVALIMPORTANCES1"}, true},
+ {"Empty properties", map[string]interface{}{}, true},
+ }
+ for _, tt := range tests {
+ c.Log(c.TestName()+" ", tt.name)
+ grp, err := s.localdb.GroupCreate(ctx, arvados.CreateOptions{
+ Attrs: map[string]interface{}{
+ "group_class": "project",
+ },
+ })
+ c.Assert(err, check.IsNil)
+ grp, err = s.localdb.GroupUpdate(ctx, arvados.UpdateOptions{
+ UUID: grp.UUID,
+ Select: []string{"uuid", "properties"},
+ Attrs: map[string]interface{}{
+ "properties": tt.props,
+ }})
+ if tt.success {
+ c.Assert(err, check.IsNil)
+ c.Assert(grp.Properties, check.DeepEquals, tt.props)
+ } else {
+ c.Assert(err, check.NotNil)
+ }
+ }
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package localdb
+
+import (
+ "context"
+
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+)
+
+// LinkCreate defers to railsProxy for everything except vocabulary
+// checking.
+func (conn *Conn) LinkCreate(ctx context.Context, opts arvados.CreateOptions) (arvados.Link, error) {
+ err := conn.checkProperties(ctx, opts.Attrs["properties"])
+ if err != nil {
+ return arvados.Link{}, err
+ }
+	return conn.railsProxy.LinkCreate(ctx, opts)
+}
+
+// LinkUpdate defers to railsProxy for everything except vocabulary
+// checking.
+func (conn *Conn) LinkUpdate(ctx context.Context, opts arvados.UpdateOptions) (arvados.Link, error) {
+ err := conn.checkProperties(ctx, opts.Attrs["properties"])
+ if err != nil {
+ return arvados.Link{}, err
+ }
+	return conn.railsProxy.LinkUpdate(ctx, opts)
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package localdb
+
+import (
+ "context"
+
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/lib/controller/rpc"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ check "gopkg.in/check.v1"
+)
+
+var _ = check.Suite(&LinkSuite{})
+
+type LinkSuite struct {
+ cluster *arvados.Cluster
+ localdb *Conn
+ railsSpy *arvadostest.Proxy
+}
+
+func (s *LinkSuite) TearDownSuite(c *check.C) {
+	// Undo any changes/additions to the database so they
+	// don't affect subsequent tests.
+ arvadostest.ResetEnv()
+ c.Check(arvados.NewClientFromEnv().RequestAndDecode(nil, "POST", "database/reset", nil, nil), check.IsNil)
+}
+
+func (s *LinkSuite) SetUpTest(c *check.C) {
+ cfg, err := config.NewLoader(nil, ctxlog.TestLogger(c)).Load()
+ c.Assert(err, check.IsNil)
+ s.cluster, err = cfg.GetCluster("")
+ c.Assert(err, check.IsNil)
+ s.localdb = NewConn(s.cluster)
+ s.railsSpy = arvadostest.NewProxy(c, s.cluster.Services.RailsAPI)
+ *s.localdb.railsProxy = *rpc.NewConn(s.cluster.ClusterID, s.railsSpy.URL, true, rpc.PassthroughTokenProvider)
+}
+
+func (s *LinkSuite) TearDownTest(c *check.C) {
+ s.railsSpy.Close()
+}
+
+func (s *LinkSuite) setUpVocabulary(c *check.C, testVocabulary string) {
+ if testVocabulary == "" {
+ testVocabulary = `{
+ "strict_tags": false,
+ "tags": {
+ "IDTAGIMPORTANCES": {
+ "strict": true,
+ "labels": [{"label": "Importance"}, {"label": "Priority"}],
+ "values": {
+ "IDVALIMPORTANCES1": { "labels": [{"label": "Critical"}, {"label": "Urgent"}, {"label": "High"}] },
+ "IDVALIMPORTANCES2": { "labels": [{"label": "Normal"}, {"label": "Moderate"}] },
+ "IDVALIMPORTANCES3": { "labels": [{"label": "Low"}] }
+ }
+ }
+ }
+ }`
+ }
+ voc, err := arvados.NewVocabulary([]byte(testVocabulary), []string{})
+ c.Assert(err, check.IsNil)
+ s.localdb.vocabularyCache = voc
+ s.cluster.API.VocabularyPath = "foo"
+}
+
+func (s *LinkSuite) TestLinkCreateWithProperties(c *check.C) {
+ s.setUpVocabulary(c, "")
+ ctx := auth.NewContext(context.Background(), &auth.Credentials{Tokens: []string{arvadostest.ActiveTokenV2}})
+
+ tests := []struct {
+ name string
+ props map[string]interface{}
+ success bool
+ }{
+ {"Invalid prop key", map[string]interface{}{"Priority": "IDVALIMPORTANCES1"}, false},
+ {"Invalid prop value", map[string]interface{}{"IDTAGIMPORTANCES": "high"}, false},
+ {"Valid prop key & value", map[string]interface{}{"IDTAGIMPORTANCES": "IDVALIMPORTANCES1"}, true},
+ {"Empty properties", map[string]interface{}{}, true},
+ }
+ for _, tt := range tests {
+ c.Log(c.TestName()+" ", tt.name)
+
+ lnk, err := s.localdb.LinkCreate(ctx, arvados.CreateOptions{
+ Select: []string{"uuid", "properties"},
+ Attrs: map[string]interface{}{
+ "link_class": "star",
+ "tail_uuid": "zzzzz-j7d0g-publicfavorites",
+ "head_uuid": arvadostest.FooCollection,
+ "properties": tt.props,
+ }})
+ if tt.success {
+ c.Assert(err, check.IsNil)
+ c.Assert(lnk.Properties, check.DeepEquals, tt.props)
+ } else {
+ c.Assert(err, check.NotNil)
+ }
+ }
+}
+
+func (s *LinkSuite) TestLinkUpdateWithProperties(c *check.C) {
+ s.setUpVocabulary(c, "")
+ ctx := auth.NewContext(context.Background(), &auth.Credentials{Tokens: []string{arvadostest.ActiveTokenV2}})
+
+ tests := []struct {
+ name string
+ props map[string]interface{}
+ success bool
+ }{
+ {"Invalid prop key", map[string]interface{}{"Priority": "IDVALIMPORTANCES1"}, false},
+ {"Invalid prop value", map[string]interface{}{"IDTAGIMPORTANCES": "high"}, false},
+ {"Valid prop key & value", map[string]interface{}{"IDTAGIMPORTANCES": "IDVALIMPORTANCES1"}, true},
+ {"Empty properties", map[string]interface{}{}, true},
+ }
+ for _, tt := range tests {
+ c.Log(c.TestName()+" ", tt.name)
+ lnk, err := s.localdb.LinkCreate(ctx, arvados.CreateOptions{
+ Attrs: map[string]interface{}{
+ "link_class": "star",
+ "tail_uuid": "zzzzz-j7d0g-publicfavorites",
+ "head_uuid": arvadostest.FooCollection,
+ },
+ })
+ c.Assert(err, check.IsNil)
+ lnk, err = s.localdb.LinkUpdate(ctx, arvados.UpdateOptions{
+ UUID: lnk.UUID,
+ Select: []string{"uuid", "properties"},
+ Attrs: map[string]interface{}{
+ "properties": tt.props,
+ }})
+ if tt.success {
+ c.Assert(err, check.IsNil)
+ c.Assert(lnk.Properties, check.DeepEquals, tt.props)
+ } else {
+ c.Assert(err, check.NotNil)
+ }
+ }
+}
func chooseLoginController(cluster *arvados.Cluster, parent *Conn) loginController {
wantGoogle := cluster.Login.Google.Enable
wantOpenIDConnect := cluster.Login.OpenIDConnect.Enable
- wantSSO := cluster.Login.SSO.Enable
wantPAM := cluster.Login.PAM.Enable
wantLDAP := cluster.Login.LDAP.Enable
wantTest := cluster.Login.Test.Enable
wantLoginCluster := cluster.Login.LoginCluster != "" && cluster.Login.LoginCluster != cluster.ClusterID
switch {
- case 1 != countTrue(wantGoogle, wantOpenIDConnect, wantSSO, wantPAM, wantLDAP, wantTest, wantLoginCluster):
+ case 1 != countTrue(wantGoogle, wantOpenIDConnect, wantPAM, wantLDAP, wantTest, wantLoginCluster):
return errorLoginController{
- error: errors.New("configuration problem: exactly one of Login.Google, Login.OpenIDConnect, Login.SSO, Login.PAM, Login.LDAP, Login.Test, or Login.LoginCluster must be set"),
+ error: errors.New("configuration problem: exactly one of Login.Google, Login.OpenIDConnect, Login.PAM, Login.LDAP, Login.Test, or Login.LoginCluster must be set"),
}
case wantGoogle:
return &oidcLoginController{
}
case wantOpenIDConnect:
return &oidcLoginController{
- Cluster: cluster,
- Parent: parent,
- Issuer: cluster.Login.OpenIDConnect.Issuer,
- ClientID: cluster.Login.OpenIDConnect.ClientID,
- ClientSecret: cluster.Login.OpenIDConnect.ClientSecret,
- AuthParams: cluster.Login.OpenIDConnect.AuthenticationRequestParameters,
- EmailClaim: cluster.Login.OpenIDConnect.EmailClaim,
- EmailVerifiedClaim: cluster.Login.OpenIDConnect.EmailVerifiedClaim,
- UsernameClaim: cluster.Login.OpenIDConnect.UsernameClaim,
+ Cluster: cluster,
+ Parent: parent,
+ Issuer: cluster.Login.OpenIDConnect.Issuer,
+ ClientID: cluster.Login.OpenIDConnect.ClientID,
+ ClientSecret: cluster.Login.OpenIDConnect.ClientSecret,
+ AuthParams: cluster.Login.OpenIDConnect.AuthenticationRequestParameters,
+ EmailClaim: cluster.Login.OpenIDConnect.EmailClaim,
+ EmailVerifiedClaim: cluster.Login.OpenIDConnect.EmailVerifiedClaim,
+ UsernameClaim: cluster.Login.OpenIDConnect.UsernameClaim,
+ AcceptAccessToken: cluster.Login.OpenIDConnect.AcceptAccessToken,
+ AcceptAccessTokenScope: cluster.Login.OpenIDConnect.AcceptAccessTokenScope,
}
- case wantSSO:
- return &ssoLoginController{Parent: parent}
case wantPAM:
return &pamLoginController{Cluster: cluster, Parent: parent}
case wantLDAP:
return n
}
-// Login and Logout are passed through to the parent's railsProxy;
-// UserAuthenticate is rejected.
-type ssoLoginController struct{ Parent *Conn }
-
-func (ctrl *ssoLoginController) Login(ctx context.Context, opts arvados.LoginOptions) (arvados.LoginResponse, error) {
- return ctrl.Parent.railsProxy.Login(ctx, opts)
-}
-func (ctrl *ssoLoginController) Logout(ctx context.Context, opts arvados.LogoutOptions) (arvados.LogoutResponse, error) {
- return ctrl.Parent.railsProxy.Logout(ctx, opts)
-}
-func (ctrl *ssoLoginController) UserAuthenticate(ctx context.Context, opts arvados.UserAuthenticateOptions) (arvados.APIClientAuthorization, error) {
- return arvados.APIClientAuthorization{}, httpserver.ErrorWithStatus(errors.New("username/password authentication is not available"), http.StatusBadRequest)
-}
-
type errorLoginController struct{ error }
func (ctrl errorLoginController) Login(context.Context, arvados.LoginOptions) (arvados.LoginResponse, error) {
func (ctrl federatedLoginController) Login(context.Context, arvados.LoginOptions) (arvados.LoginResponse, error) {
return arvados.LoginResponse{}, httpserver.ErrorWithStatus(errors.New("Should have been redirected to login cluster"), http.StatusBadRequest)
}
-func (ctrl federatedLoginController) Logout(_ context.Context, opts arvados.LogoutOptions) (arvados.LogoutResponse, error) {
- return noopLogout(ctrl.Cluster, opts)
+func (ctrl federatedLoginController) Logout(ctx context.Context, opts arvados.LogoutOptions) (arvados.LogoutResponse, error) {
+ return logout(ctx, ctrl.Cluster, opts)
}
func (ctrl federatedLoginController) UserAuthenticate(context.Context, arvados.UserAuthenticateOptions) (arvados.APIClientAuthorization, error) {
return arvados.APIClientAuthorization{}, httpserver.ErrorWithStatus(errors.New("username/password authentication is not available"), http.StatusBadRequest)
}
-func noopLogout(cluster *arvados.Cluster, opts arvados.LogoutOptions) (arvados.LogoutResponse, error) {
- target := opts.ReturnTo
- if target == "" {
- if cluster.Services.Workbench2.ExternalURL.Host != "" {
- target = cluster.Services.Workbench2.ExternalURL.String()
- } else {
- target = cluster.Services.Workbench1.ExternalURL.String()
- }
- }
- return arvados.LogoutResponse{RedirectLocation: target}, nil
-}
-
func (conn *Conn) CreateAPIClientAuthorization(ctx context.Context, rootToken string, authinfo rpc.UserSessionAuthInfo) (resp arvados.APIClientAuthorization, err error) {
if rootToken == "" {
return arvados.APIClientAuthorization{}, errors.New("configuration error: empty SystemRootToken")
tokensecret = tokenparts[2]
}
}
- var exp sql.NullString
+ var exp sql.NullTime
var scopes []byte
err = tx.QueryRowxContext(ctx, "select uuid, api_token, expires_at, scopes from api_client_authorizations where api_token=$1", tokensecret).Scan(&resp.UUID, &resp.APIToken, &exp, &scopes)
if err != nil {
return
}
- resp.ExpiresAt = exp.String
+ resp.ExpiresAt = exp.Time
if len(scopes) > 0 {
err = json.Unmarshal(scopes, &resp.Scopes)
if err != nil {
}
func (ctrl *ldapLoginController) Logout(ctx context.Context, opts arvados.LogoutOptions) (arvados.LogoutResponse, error) {
- return noopLogout(ctrl.Cluster, opts)
+ return logout(ctx, ctrl.Cluster, opts)
}
func (ctrl *ldapLoginController) Login(ctx context.Context, opts arvados.LoginOptions) (arvados.LoginResponse, error) {
--name=${ldapctr} \
osixia/openldap:1.3.0
docker logs --follow ${ldapctr} 2>$debug >$debug &
-ldaphostport=$(docker port ${ldapctr} 389/tcp)
-ldapport=${ldaphostport##*:}
+ldaphostports=$(docker port ${ldapctr} 389/tcp)
+ldapport=${ldaphostports##*:}
ldapurl="ldap://${hostname}:${ldapport}"
passwordhash="$(docker exec -i ${ldapctr} slappasswd -s "secret")"
debian:10 \
bash -c "${setup_pam_ldap:-true} && arvados-server controller"
docker logs --follow ${ctrlctr} 2>$debug >$debug &
-ctrlhostport=$(docker port ${ctrlctr} 9999/tcp)
+ctrlhostports=$(docker port ${ctrlctr} 9999/tcp)
+ctrlport=${ctrlhostports##*:}
echo >&2 "Waiting for arvados controller to come up..."
for f in $(seq 1 20); do
- if curl -s "http://${ctrlhostport}/arvados/v1/config" >/dev/null; then
+ if curl -s "http://0.0.0.0:${ctrlport}/arvados/v1/config" >/dev/null; then
break
else
sleep 1
echo -n >&2 .
done
echo >&2
-echo >&2 "Arvados controller is up at http://${ctrlhostport}"
+echo >&2 "Arvados controller is up at http://0.0.0.0:${ctrlport}"
check_contains() {
resp="${1}"
set +x
echo >&2 "Testing authentication failure"
-resp="$(set -x; curl -s --include -d username=foo-bar -d password=nosecret "http://${ctrlhostport}/arvados/v1/users/authenticate" | tee $debug)"
+resp="$(set -x; curl -s --include -d username=foo-bar -d password=nosecret "http://0.0.0.0:${ctrlport}/arvados/v1/users/authenticate" | tee $debug)"
check_contains "${resp}" "HTTP/1.1 401"
if [[ "${config_method}" = ldap ]]; then
check_contains "${resp}" '{"errors":["LDAP: Authentication failure (with username \"foo-bar\" and password)"]}'
fi
echo >&2 "Testing authentication success"
-resp="$(set -x; curl -s --include -d username=foo-bar -d password=secret "http://${ctrlhostport}/arvados/v1/users/authenticate" | tee $debug)"
+resp="$(set -x; curl -s --include -d username=foo-bar -d password=secret "http://0.0.0.0:${ctrlport}/arvados/v1/users/authenticate" | tee $debug)"
check_contains "${resp}" "HTTP/1.1 200"
check_contains "${resp}" '"api_token":"'
check_contains "${resp}" '"scopes":["all"]'
token="v2/$uuid/$secret"
echo >&2 "New token is ${token}"
-resp="$(set -x; curl -s --include -H "Authorization: Bearer ${token}" "http://${ctrlhostport}/arvados/v1/users/current" | tee $debug)"
+resp="$(set -x; curl -s --include -H "Authorization: Bearer ${token}" "http://0.0.0.0:${ctrlport}/arvados/v1/users/current" | tee $debug)"
check_contains "${resp}" "HTTP/1.1 200"
if [[ "${config_method}" = ldap ]]; then
# user fields come from LDAP attributes
"golang.org/x/oauth2"
"google.golang.org/api/option"
"google.golang.org/api/people/v1"
+ "gopkg.in/square/go-jose.v2/jwt"
)
var (
)
type oidcLoginController struct {
- Cluster *arvados.Cluster
- Parent *Conn
- Issuer string // OIDC issuer URL, e.g., "https://accounts.google.com"
- ClientID string
- ClientSecret string
- UseGooglePeopleAPI bool // Use Google People API to look up alternate email addresses
- EmailClaim string // OpenID claim to use as email address; typically "email"
- EmailVerifiedClaim string // If non-empty, ensure claim value is true before accepting EmailClaim; typically "email_verified"
- UsernameClaim string // If non-empty, use as preferred username
- AuthParams map[string]string // Additional parameters to pass with authentication request
+ Cluster *arvados.Cluster
+ Parent *Conn
+ Issuer string // OIDC issuer URL, e.g., "https://accounts.google.com"
+ ClientID string
+ ClientSecret string
+ UseGooglePeopleAPI bool // Use Google People API to look up alternate email addresses
+ EmailClaim string // OpenID claim to use as email address; typically "email"
+ EmailVerifiedClaim string // If non-empty, ensure claim value is true before accepting EmailClaim; typically "email_verified"
+ UsernameClaim string // If non-empty, use as preferred username
+ AcceptAccessToken bool // Accept access tokens as API tokens
+ AcceptAccessTokenScope string // If non-empty, don't accept access tokens as API tokens unless they contain this scope
+ AuthParams map[string]string // Additional parameters to pass with authentication request
// override Google People API base URL for testing purposes
// (normally empty, set by google pkg to
}
func (ctrl *oidcLoginController) Logout(ctx context.Context, opts arvados.LogoutOptions) (arvados.LogoutResponse, error) {
- return noopLogout(ctrl.Cluster, opts)
+ return logout(ctx, ctrl.Cluster, opts)
}
func (ctrl *oidcLoginController) Login(ctx context.Context, opts arvados.LoginOptions) (arvados.LoginResponse, error) {
if err != nil {
return loginError(fmt.Errorf("error in OAuth2 exchange: %s", err))
}
+ ctxlog.FromContext(ctx).WithField("oauth2Token", oauth2Token).Debug("oauth2 exchange succeeded")
rawIDToken, ok := oauth2Token.Extra("id_token").(string)
if !ok {
return loginError(errors.New("error in OAuth2 exchange: no ID token in OAuth2 token"))
}
+ ctxlog.FromContext(ctx).WithField("rawIDToken", rawIDToken).Debug("oauth2Token provided ID token")
idToken, err := ctrl.verifier.Verify(ctx, rawIDToken)
if err != nil {
return loginError(fmt.Errorf("error verifying ID token: %s", err))
} else if verified, _ := claims[ctrl.EmailVerifiedClaim].(bool); verified || ctrl.EmailVerifiedClaim == "" {
// Fall back to this info if the People API call
// (below) doesn't return a primary && verified email.
- name, _ := claims["name"].(string)
- if names := strings.Fields(strings.TrimSpace(name)); len(names) > 1 {
- ret.FirstName = strings.Join(names[0:len(names)-1], " ")
- ret.LastName = names[len(names)-1]
- } else if len(names) > 0 {
- ret.FirstName = names[0]
+ givenName, _ := claims["given_name"].(string)
+ familyName, _ := claims["family_name"].(string)
+ if givenName != "" && familyName != "" {
+ ret.FirstName = givenName
+ ret.LastName = familyName
+ } else {
+ name, _ := claims["name"].(string)
+ if names := strings.Fields(strings.TrimSpace(name)); len(names) > 1 {
+ ret.FirstName = strings.Join(names[0:len(names)-1], " ")
+ ret.LastName = names[len(names)-1]
+ } else if len(names) > 0 {
+ ret.FirstName = names[0]
+ }
}
ret.Email, _ = claims[ctrl.EmailClaim].(string)
}
// cached positive result
aca := cached.(arvados.APIClientAuthorization)
var expiring bool
- if aca.ExpiresAt != "" {
- t, err := time.Parse(time.RFC3339Nano, aca.ExpiresAt)
- if err != nil {
- return fmt.Errorf("error parsing expires_at value: %w", err)
- }
+ if !aca.ExpiresAt.IsZero() {
+ t := aca.ExpiresAt
expiring = t.Before(time.Now().Add(time.Minute))
}
if !expiring {
if err != nil {
return fmt.Errorf("error setting up OpenID Connect provider: %s", err)
}
+ if ok, err := ta.checkAccessTokenScope(ctx, tok); err != nil || !ok {
+ ta.cache.Add(tok, time.Now().Add(tokenCacheNegativeTTL))
+ return err
+ }
oauth2Token := &oauth2.Token{
AccessToken: tok,
}
if err != nil {
return err
}
- aca.ExpiresAt = exp.Format(time.RFC3339Nano)
+ aca.ExpiresAt = exp
ta.cache.Add(tok, aca)
return nil
}
+
+// checkAccessTokenScope checks that the provided access token is a
+// JWT containing the required scope. If it is a valid JWT but is
+// missing the required scope, it returns a 401 error; otherwise it
+// returns true (acceptable as an API token) or false (pass the token
+// through unmodified).
+//
+// Return false if configured not to accept access tokens at all.
+//
+// Note we don't check signature or expiry here. We are relying on the
+// caller to verify those separately (e.g., by calling the UserInfo
+// endpoint).
+func (ta *oidcTokenAuthorizer) checkAccessTokenScope(ctx context.Context, tok string) (bool, error) {
+ if !ta.ctrl.AcceptAccessToken {
+ return false, nil
+ } else if ta.ctrl.AcceptAccessTokenScope == "" {
+ return true, nil
+ }
+ var claims struct {
+ Scope string `json:"scope"`
+ }
+ if t, err := jwt.ParseSigned(tok); err != nil {
+ ctxlog.FromContext(ctx).WithError(err).Debug("error parsing jwt")
+ return false, nil
+ } else if err = t.UnsafeClaimsWithoutVerification(&claims); err != nil {
+ ctxlog.FromContext(ctx).WithError(err).Debug("error extracting jwt claims")
+ return false, nil
+ }
+ for _, s := range strings.Split(claims.Scope, " ") {
+ if s == ta.ctrl.AcceptAccessTokenScope {
+ return true, nil
+ }
+ }
+ ctxlog.FromContext(ctx).WithFields(logrus.Fields{"have": claims.Scope, "need": ta.ctrl.AcceptAccessTokenScope}).Infof("unacceptable access token scope")
+ return false, httpserver.ErrorWithStatus(errors.New("unacceptable access token scope"), http.StatusUnauthorized)
+}
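The scope check above reduces to an exact-match lookup in a space-delimited claim string. A minimal standalone sketch of that matching rule (the claim values here are made up, not real OIDC scopes):

```go
package main

import (
	"fmt"
	"strings"
)

// hasScope reports whether a space-delimited JWT scope claim contains
// the required scope, mirroring the loop in checkAccessTokenScope.
// (In the real code the claim is extracted with go-jose's
// UnsafeClaimsWithoutVerification before this check runs.)
func hasScope(claim, required string) bool {
	for _, s := range strings.Split(claim, " ") {
		if s == required {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(hasScope("openid profile foobar", "foobar")) // true
	fmt.Println(hasScope("openid profile foobar", "foo"))    // false: exact match required
}
```

Note the test table below exercises exactly this distinction: `"foobar"` is acceptable while the prefix `"foo"` is not.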
s.fakeProvider.AuthEmail = "active-user@arvados.local"
s.fakeProvider.AuthEmailVerified = true
s.fakeProvider.AuthName = "Fake User Name"
+ s.fakeProvider.AuthGivenName = "Fake"
+ s.fakeProvider.AuthFamilyName = "User Name"
s.fakeProvider.ValidCode = fmt.Sprintf("abcdefgh-%d", time.Now().Unix())
s.fakeProvider.PeopleAPIResponse = map[string]interface{}{}
c.Assert(err, check.IsNil)
s.cluster, err = cfg.GetCluster("")
c.Assert(err, check.IsNil)
- s.cluster.Login.SSO.Enable = false
+ s.cluster.Login.Test.Enable = false
s.cluster.Login.Google.Enable = true
s.cluster.Login.Google.ClientID = "test%client$id"
s.cluster.Login.Google.ClientSecret = "test#client/secret"
json.Unmarshal([]byte(fmt.Sprintf("%q", s.fakeProvider.Issuer.URL)), &s.cluster.Login.OpenIDConnect.Issuer)
s.cluster.Login.OpenIDConnect.ClientID = "oidc#client#id"
s.cluster.Login.OpenIDConnect.ClientSecret = "oidc#client#secret"
+ s.cluster.Login.OpenIDConnect.AcceptAccessToken = true
+ s.cluster.Login.OpenIDConnect.AcceptAccessTokenScope = ""
s.fakeProvider.ValidClientID = "oidc#client#id"
s.fakeProvider.ValidClientSecret = "oidc#client#secret"
db := arvadostest.DB(c, s.cluster)
tokenCacheTTL = time.Millisecond
tokenCacheRaceWindow = time.Millisecond
+ tokenCacheNegativeTTL = time.Millisecond
oidcAuthorizer := OIDCAccessTokenAuthorizer(s.cluster, func(context.Context) (*sqlx.DB, error) { return db, nil })
accessToken := s.fakeProvider.ValidAccessToken()
mac := hmac.New(sha256.New, []byte(s.cluster.SystemRootToken))
io.WriteString(mac, accessToken)
- hmac := fmt.Sprintf("%x", mac.Sum(nil))
+ apiToken := fmt.Sprintf("%x", mac.Sum(nil))
cleanup := func() {
- _, err := db.Exec(`delete from api_client_authorizations where api_token=$1`, hmac)
+ _, err := db.Exec(`delete from api_client_authorizations where api_token=$1`, apiToken)
c.Check(err, check.IsNil)
}
cleanup()
c.Assert(creds.Tokens, check.HasLen, 1)
c.Check(creds.Tokens[0], check.Equals, accessToken)
- err := db.QueryRowContext(ctx, `select expires_at at time zone 'UTC' from api_client_authorizations where api_token=$1`, hmac).Scan(&exp1)
+ err := db.QueryRowContext(ctx, `select expires_at at time zone 'UTC' from api_client_authorizations where api_token=$1`, apiToken).Scan(&exp1)
c.Check(err, check.IsNil)
c.Check(exp1.Sub(time.Now()) > -time.Second, check.Equals, true)
c.Check(exp1.Sub(time.Now()) < time.Second, check.Equals, true)
})(ctx, nil)
// If the token is used again after the in-memory cache
- // expires, oidcAuthorizer must re-checks the token and update
+ // expires, oidcAuthorizer must re-check the token and update
// the expires_at value in the database.
time.Sleep(3 * time.Millisecond)
oidcAuthorizer.WrapCalls(func(ctx context.Context, opts interface{}) (interface{}, error) {
var exp time.Time
- err := db.QueryRowContext(ctx, `select expires_at at time zone 'UTC' from api_client_authorizations where api_token=$1`, hmac).Scan(&exp)
+ err := db.QueryRowContext(ctx, `select expires_at at time zone 'UTC' from api_client_authorizations where api_token=$1`, apiToken).Scan(&exp)
c.Check(err, check.IsNil)
c.Check(exp.Sub(exp1) > 0, check.Equals, true)
c.Check(exp.Sub(exp1) < time.Second, check.Equals, true)
return nil, nil
})(ctx, nil)
+
+ s.fakeProvider.AccessTokenPayload = map[string]interface{}{"scope": "openid profile foobar"}
+ accessToken = s.fakeProvider.ValidAccessToken()
+ ctx = auth.NewContext(context.Background(), &auth.Credentials{Tokens: []string{accessToken}})
+
+ mac = hmac.New(sha256.New, []byte(s.cluster.SystemRootToken))
+ io.WriteString(mac, accessToken)
+ apiToken = fmt.Sprintf("%x", mac.Sum(nil))
+
+ for _, trial := range []struct {
+ configEnable bool
+ configScope string
+ acceptable bool
+ shouldRun bool
+ }{
+ {true, "foobar", true, true},
+ {true, "foo", false, false},
+ {true, "", true, true},
+ {false, "", false, true},
+ {false, "foobar", false, true},
+ } {
+ c.Logf("trial = %+v", trial)
+ cleanup()
+ s.cluster.Login.OpenIDConnect.AcceptAccessToken = trial.configEnable
+ s.cluster.Login.OpenIDConnect.AcceptAccessTokenScope = trial.configScope
+ oidcAuthorizer = OIDCAccessTokenAuthorizer(s.cluster, func(context.Context) (*sqlx.DB, error) { return db, nil })
+ checked := false
+ oidcAuthorizer.WrapCalls(func(ctx context.Context, opts interface{}) (interface{}, error) {
+ var n int
+ err := db.QueryRowContext(ctx, `select count(*) from api_client_authorizations where api_token=$1`, apiToken).Scan(&n)
+ c.Check(err, check.IsNil)
+ if trial.acceptable {
+ c.Check(n, check.Equals, 1)
+ } else {
+ c.Check(n, check.Equals, 0)
+ }
+ checked = true
+ return nil, nil
+ })(ctx, nil)
+ c.Check(checked, check.Equals, trial.shouldRun)
+ }
}
func (s *OIDCLoginSuite) TestGenericOIDCLogin(c *check.C) {
c.Check(token, check.Matches, `v2/zzzzz-gj3su-.{15}/.{32,50}`)
authinfo := getCallbackAuthInfo(c, s.railsSpy)
- c.Check(authinfo.FirstName, check.Equals, "Fake User")
- c.Check(authinfo.LastName, check.Equals, "Name")
+ c.Check(authinfo.FirstName, check.Equals, "Fake")
+ c.Check(authinfo.LastName, check.Equals, "User Name")
c.Check(authinfo.Email, check.Equals, "active-user@arvados.local")
c.Check(authinfo.AlternateEmails, check.HasLen, 0)
func (s *OIDCLoginSuite) TestGoogleLogin_RealName(c *check.C) {
s.fakeProvider.AuthEmail = "joe.smith@primary.example.com"
+ s.fakeProvider.AuthEmailVerified = true
s.fakeProvider.PeopleAPIResponse = map[string]interface{}{
"names": []map[string]interface{}{
{
c.Check(authinfo.LastName, check.Equals, "Psmith")
}
-func (s *OIDCLoginSuite) TestGoogleLogin_OIDCRealName(c *check.C) {
+func (s *OIDCLoginSuite) TestGoogleLogin_OIDCNameWithoutGivenAndFamilyNames(c *check.C) {
s.fakeProvider.AuthName = "Joe P. Smith"
+ s.fakeProvider.AuthGivenName = ""
+ s.fakeProvider.AuthFamilyName = ""
s.fakeProvider.AuthEmail = "joe.smith@primary.example.com"
state := s.startLogin(c)
s.localdb.Login(context.Background(), arvados.LoginOptions{
}
func (ctrl *pamLoginController) Logout(ctx context.Context, opts arvados.LogoutOptions) (arvados.LogoutResponse, error) {
- return noopLogout(ctrl.Cluster, opts)
+ return logout(ctx, ctrl.Cluster, opts)
}
func (ctrl *pamLoginController) Login(ctx context.Context, opts arvados.LoginOptions) (arvados.LoginResponse, error) {
}
func (ctrl *testLoginController) Logout(ctx context.Context, opts arvados.LogoutOptions) (arvados.LogoutResponse, error) {
- return noopLogout(ctrl.Cluster, opts)
+ return logout(ctx, ctrl.Cluster, opts)
}
func (ctrl *testLoginController) Login(ctx context.Context, opts arvados.LoginOptions) (arvados.LoginResponse, error) {
<h3>Arvados test login</h3>
<form method="POST">
<input id="return_to" type="hidden" name="return_to" value="{{.ReturnTo}}">
- username <input id="username" type="text" name="username" size=16>
+ username <input id="username" type="text" name="username" autofocus size=16>
password <input id="password" type="password" name="password" size=16>
<input type="submit" value="Log in">
<br>
import (
"context"
+ "database/sql"
"git.arvados.org/arvados.git/lib/config"
"git.arvados.org/arvados.git/lib/controller/rpc"
"git.arvados.org/arvados.git/lib/ctrlctx"
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/auth"
"git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/jmoiron/sqlx"
check "gopkg.in/check.v1"
db *sqlx.DB
// transaction context
- ctx context.Context
- rollback func() error
+ ctx context.Context
+ tx *sqlx.Tx
}
func (s *TestUserSuite) SetUpSuite(c *check.C) {
tx, err := s.db.Beginx()
c.Assert(err, check.IsNil)
s.ctx = ctrlctx.NewWithTransaction(context.Background(), tx)
- s.rollback = tx.Rollback
+ s.tx = tx
}
func (s *TestUserSuite) TearDownTest(c *check.C) {
- if s.rollback != nil {
- s.rollback()
- }
+ s.tx.Rollback()
}
func (s *TestUserSuite) TestLogin(c *check.C) {
c.Check(resp.HTML.String(), check.Matches, `(?ms).*<form method="POST".*`)
c.Check(resp.HTML.String(), check.Matches, `(?ms).*<input id="return_to" type="hidden" name="return_to" value="https://localhost:12345/example">.*`)
}
+
+func (s *TestUserSuite) TestExpireTokenOnLogout(c *check.C) {
+ returnTo := "https://localhost:12345/logout"
+ for _, trial := range []struct {
+ requestToken string
+ expiringTokenUUID string
+ shouldExpireToken bool
+ }{
+ // v2 token
+ {arvadostest.ActiveTokenV2, arvadostest.ActiveTokenUUID, true},
+ // v1 token
+ {arvadostest.AdminToken, arvadostest.AdminTokenUUID, true},
+		// nonexistent v1 token -- logout shouldn't fail
+ {"thisdoesntexistasatoken", "", false},
+		// nonexistent v2 token -- logout shouldn't fail
+ {"v2/some-fake-uuid/thisdoesntexistasatoken", "", false},
+ } {
+ c.Logf("=== %#v", trial)
+ ctx := auth.NewContext(s.ctx, &auth.Credentials{
+ Tokens: []string{trial.requestToken},
+ })
+
+ var tokenUUID string
+ var err error
+ qry := `SELECT uuid FROM api_client_authorizations WHERE uuid=$1 AND (expires_at IS NULL OR expires_at > current_timestamp AT TIME ZONE 'UTC') LIMIT 1`
+
+ if trial.shouldExpireToken {
+ err = s.tx.QueryRowContext(ctx, qry, trial.expiringTokenUUID).Scan(&tokenUUID)
+ c.Check(err, check.IsNil)
+ }
+
+ resp, err := s.ctrl.Logout(ctx, arvados.LogoutOptions{
+ ReturnTo: returnTo,
+ })
+ c.Check(err, check.IsNil)
+ c.Check(resp.RedirectLocation, check.Equals, returnTo)
+
+ if trial.shouldExpireToken {
+ err = s.tx.QueryRowContext(ctx, qry, trial.expiringTokenUUID).Scan(&tokenUUID)
+ c.Check(err, check.Equals, sql.ErrNoRows)
+ }
+ }
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package localdb
+
+import (
+ "context"
+ "database/sql"
+ "errors"
+ "fmt"
+ "net/http"
+ "strings"
+
+ "git.arvados.org/arvados.git/lib/ctrlctx"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
+)
+
+func logout(ctx context.Context, cluster *arvados.Cluster, opts arvados.LogoutOptions) (arvados.LogoutResponse, error) {
+ err := expireAPIClientAuthorization(ctx)
+ if err != nil {
+		ctxlog.FromContext(ctx).Errorf("error expiring token on logout: %q", err)
+ return arvados.LogoutResponse{}, httpserver.ErrorWithStatus(errors.New("could not expire token on logout"), http.StatusInternalServerError)
+ }
+
+ target := opts.ReturnTo
+ if target == "" {
+ if cluster.Services.Workbench2.ExternalURL.Host != "" {
+ target = cluster.Services.Workbench2.ExternalURL.String()
+ } else {
+ target = cluster.Services.Workbench1.ExternalURL.String()
+ }
+ }
+ return arvados.LogoutResponse{RedirectLocation: target}, nil
+}
+
+func expireAPIClientAuthorization(ctx context.Context) error {
+ creds, ok := auth.FromContext(ctx)
+ if !ok {
+ // Tests could be passing empty contexts
+		ctxlog.FromContext(ctx).Debugf("expireAPIClientAuthorization: credentials not found in context")
+ return nil
+ }
+
+ if len(creds.Tokens) == 0 {
+		// An old client may not have provided the token to expire
+ return nil
+ }
+
+ tx, err := ctrlctx.CurrentTx(ctx)
+ if err != nil {
+ return err
+ }
+
+ token := creds.Tokens[0]
+ tokenSecret := token
+ var tokenUuid string
+ if strings.HasPrefix(token, "v2/") {
+ tokenParts := strings.Split(token, "/")
+ if len(tokenParts) >= 3 {
+ tokenUuid = tokenParts[1]
+ tokenSecret = tokenParts[2]
+ }
+ }
+
+ var retrievedUuid string
+ err = tx.QueryRowContext(ctx, `SELECT uuid FROM api_client_authorizations WHERE api_token=$1 AND (expires_at IS NULL OR expires_at > current_timestamp AT TIME ZONE 'UTC') LIMIT 1`, tokenSecret).Scan(&retrievedUuid)
+ if err == sql.ErrNoRows {
+ ctxlog.FromContext(ctx).Debugf("expireAPIClientAuthorization(%s): not found in database", token)
+ return nil
+ } else if err != nil {
+ ctxlog.FromContext(ctx).WithError(err).Debugf("expireAPIClientAuthorization(%s): database error", token)
+ return err
+ }
+
+ if tokenUuid != "" && retrievedUuid != tokenUuid {
+ // secret part matches, but UUID doesn't -- somewhat surprising
+ ctxlog.FromContext(ctx).Debugf("expireAPIClientAuthorization(%s): secret part found, but with different UUID: %s", tokenSecret, retrievedUuid)
+ return nil
+ }
+
+ res, err := tx.ExecContext(ctx, "UPDATE api_client_authorizations SET expires_at=current_timestamp AT TIME ZONE 'UTC' WHERE uuid=$1", retrievedUuid)
+ if err != nil {
+ return err
+ }
+
+ rows, err := res.RowsAffected()
+ if err != nil {
+ return err
+ }
+ if rows == 0 {
+ ctxlog.FromContext(ctx).Debugf("expireAPIClientAuthorization(%s): no rows were updated", tokenSecret)
+ return fmt.Errorf("couldn't expire provided token")
+ } else if rows > 1 {
+ ctxlog.FromContext(ctx).Debugf("expireAPIClientAuthorization(%s): multiple (%d) rows updated", tokenSecret, rows)
+ } else {
+ ctxlog.FromContext(ctx).Debugf("expireAPIClientAuthorization(%s): ok", tokenSecret)
+ }
+
+ return nil
+}
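The logout path above has to cope with both bare (v1) tokens and `v2/<uuid>/<secret>` tokens. A standalone sketch of that split, with a made-up token value:

```go
package main

import (
	"fmt"
	"strings"
)

// splitToken separates an Arvados "v2/<uuid>/<secret>" token into its
// UUID and secret parts, matching the parsing in
// expireAPIClientAuthorization. A bare (v1) token yields an empty UUID
// and is used whole as the secret.
func splitToken(token string) (uuid, secret string) {
	secret = token
	if strings.HasPrefix(token, "v2/") {
		parts := strings.Split(token, "/")
		if len(parts) >= 3 {
			uuid, secret = parts[1], parts[2]
		}
	}
	return
}

func main() {
	u, s := splitToken("v2/zzzzz-gj3su-0123456789abcde/secretpart")
	fmt.Println(u, s)
	u, s = splitToken("plainv1secret")
	fmt.Printf("%q %q\n", u, s)
}
```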
"Accept-Encoding": true,
"Content-Encoding": true,
"Transfer-Encoding": true,
+
+ // Content-Length depends on encoding.
+ "Content-Length": true,
}
type ResponseFilter func(*http.Response, error) (*http.Response, error)
func (rtr *router) loadRequestParams(req *http.Request, attrsKey string) (map[string]interface{}, error) {
err := req.ParseForm()
if err != nil {
- return nil, httpError(http.StatusBadRequest, err)
+ if err.Error() == "http: request body too large" {
+ return nil, httpError(http.StatusRequestEntityTooLarge, err)
+ } else {
+ return nil, httpError(http.StatusBadRequest, err)
+ }
}
params := map[string]interface{}{}
"redirect_to_new_user": true,
"send_notification_email": true,
"bypass_federation": true,
+ "recursive": true,
+ "exclude_home_project": true,
}
func stringToBool(s string) bool {
func (rtr *router) responseOptions(opts interface{}) (responseOptions, error) {
var rOpts responseOptions
switch opts := opts.(type) {
+ case *arvados.CreateOptions:
+ rOpts.Select = opts.Select
+ case *arvados.UpdateOptions:
+ rOpts.Select = opts.Select
case *arvados.GetOptions:
rOpts.Select = opts.Select
case *arvados.ListOptions:
rOpts.Select = opts.Select
rOpts.Count = opts.Count
+ case *arvados.GroupContentsOptions:
+ rOpts.Select = opts.Select
+ rOpts.Count = opts.Count
}
return rOpts, nil
}
if respKind != "" {
tmp["kind"] = respKind
}
+ if included, ok := tmp["included"]; ok && included == nil {
+ tmp["included"] = make([]interface{}, 0)
+ }
defaultItemKind := ""
if strings.HasSuffix(respKind, "List") {
defaultItemKind = strings.TrimSuffix(respKind, "List")
}
- if items, ok := tmp["items"].([]interface{}); ok {
- for i, item := range items {
- // Fill in "kind" by inspecting UUID/PDH if
- // possible; fall back on assuming each
- // Items[] entry in an "arvados#fooList"
- // response should have kind="arvados#foo".
- item, _ := item.(map[string]interface{})
- infix := ""
- if uuid, _ := item["uuid"].(string); len(uuid) == 27 {
- infix = uuid[6:11]
- }
- if k := kind(infixMap[infix]); k != "" {
- item["kind"] = k
- } else if pdh, _ := item["portable_data_hash"].(string); pdh != "" {
- item["kind"] = "arvados#collection"
- } else if defaultItemKind != "" {
- item["kind"] = defaultItemKind
+ if _, isListResponse := tmp["items"].([]interface{}); isListResponse {
+ items, _ := tmp["items"].([]interface{})
+ included, _ := tmp["included"].([]interface{})
+ for _, slice := range [][]interface{}{items, included} {
+ for i, item := range slice {
+ // Fill in "kind" by inspecting UUID/PDH if
+ // possible; fall back on assuming each
+ // Items[] entry in an "arvados#fooList"
+ // response should have kind="arvados#foo".
+ item, _ := item.(map[string]interface{})
+ infix := ""
+ if uuid, _ := item["uuid"].(string); len(uuid) == 27 {
+ infix = uuid[6:11]
+ }
+ if k := kind(infixMap[infix]); k != "" {
+ item["kind"] = k
+ } else if pdh, _ := item["portable_data_hash"].(string); pdh != "" {
+ item["kind"] = "arvados#collection"
+ } else if defaultItemKind != "" {
+ item["kind"] = defaultItemKind
+ }
+ item = applySelectParam(opts.Select, item)
+ rtr.mungeItemFields(item)
+ slice[i] = item
}
- item = applySelectParam(opts.Select, item)
- rtr.mungeItemFields(item)
- items[i] = item
}
if opts.Count == "none" {
delete(tmp, "items_available")
}
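The `included == nil` special case above exists because encoding/json renders a nil slice as `null` while an empty slice renders as `[]`. A quick demonstration of the difference:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// encode shows why the router replaces a nil "included" value with
// make([]interface{}, 0): encoding/json writes a nil slice as null,
// but an empty slice as [].
func encode(included []interface{}) string {
	b, _ := json.Marshal(map[string]interface{}{"included": included})
	return string(b)
}

func main() {
	fmt.Println(encode(nil))                    // {"included":null}
	fmt.Println(encode(make([]interface{}, 0))) // {"included":[]}
}
```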
var infixMap = map[string]interface{}{
+ "gj3su": arvados.APIClientAuthorization{},
"4zz18": arvados.Collection{},
+ "xvhdp": arvados.ContainerRequest{},
+ "dz642": arvados.Container{},
"j7d0g": arvados.Group{},
+ "8i9sb": arvados.Job{},
+ "d1hrv": arvados.PipelineInstance{},
+ "p5p6p": arvados.PipelineTemplate{},
+ "j58dm": arvados.Specimen{},
+ "q1cn2": arvados.Trait{},
+ "7fd4e": arvados.Workflow{},
+}
+
+var specialKindTransforms = map[string]string{
+ "arvados.APIClientAuthorization": "arvados#apiClientAuthorization",
+ "arvados.APIClientAuthorizationList": "arvados#apiClientAuthorizationList",
}
var mungeKind = regexp.MustCompile(`\..`)
if !strings.HasPrefix(t, "arvados.") {
return ""
}
+ if k, ok := specialKindTransforms[t]; ok {
+ return k
+ }
return mungeKind.ReplaceAllStringFunc(t, func(s string) string {
// "arvados.CollectionList" => "arvados#collectionList"
return "#" + strings.ToLower(s[1:])
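The kind-munging transform turns a Go type name into an API `kind` string by replacing the dot and lowercasing the character after it. A standalone sketch of the same regexp rewrite (leaving aside the special-case table for API client authorizations):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var mungeKind = regexp.MustCompile(`\..`)

// kindOf converts a Go type name like "arvados.CollectionList" into
// the API kind string "arvados#collectionList".
func kindOf(t string) string {
	if !strings.HasPrefix(t, "arvados.") {
		return ""
	}
	return mungeKind.ReplaceAllStringFunc(t, func(s string) string {
		// s is the dot plus the following character, e.g. ".C"
		return "#" + strings.ToLower(s[1:])
	})
}

func main() {
	fmt.Println(kindOf("arvados.CollectionList")) // arvados#collectionList
	fmt.Println(kindOf("arvados.Group"))          // arvados#group
}
```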
import (
"context"
"fmt"
+ "math"
"net/http"
"strings"
)
type router struct {
- mux *mux.Router
- backend arvados.API
- wrapCalls func(api.RoutableFunc) api.RoutableFunc
+ mux *mux.Router
+ backend arvados.API
+ config Config
+}
+
+type Config struct {
+ // Return an error if request body exceeds this size. 0 means
+ // unlimited.
+ MaxRequestSize int
+
+ // If wrapCalls is not nil, it is called once for each API
+ // method, and the returned method is used in its place. This
+ // can be used to install hooks before and after each API call
+ // and alter responses; see localdb.WrapCallsInTransaction for
+ // an example.
+ WrapCalls func(api.RoutableFunc) api.RoutableFunc
}
// New returns a new router (which implements the http.Handler
// interface) that serves requests by calling Arvados API methods on
// the given backend.
-//
-// If wrapCalls is not nil, it is called once for each API method, and
-// the returned method is used in its place. This can be used to
-// install hooks before and after each API call and alter responses;
-// see localdb.WrapCallsInTransaction for an example.
-func New(backend arvados.API, wrapCalls func(api.RoutableFunc) api.RoutableFunc) *router {
+func New(backend arvados.API, config Config) *router {
rtr := &router{
- mux: mux.NewRouter(),
- backend: backend,
- wrapCalls: wrapCalls,
+ mux: mux.NewRouter(),
+ backend: backend,
+ config: config,
}
rtr.addRoutes()
return rtr
return rtr.backend.ConfigGet(ctx)
},
},
+ {
+ arvados.EndpointVocabularyGet,
+ func() interface{} { return &struct{}{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.backend.VocabularyGet(ctx)
+ },
+ },
{
arvados.EndpointLogin,
func() interface{} { return &arvados.LoginOptions{} },
return rtr.backend.ContainerSSH(ctx, *opts.(*arvados.ContainerSSHOptions))
},
},
+ {
+ arvados.EndpointGroupCreate,
+ func() interface{} { return &arvados.CreateOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.backend.GroupCreate(ctx, *opts.(*arvados.CreateOptions))
+ },
+ },
+ {
+ arvados.EndpointGroupUpdate,
+ func() interface{} { return &arvados.UpdateOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.backend.GroupUpdate(ctx, *opts.(*arvados.UpdateOptions))
+ },
+ },
+ {
+ arvados.EndpointGroupList,
+ func() interface{} { return &arvados.ListOptions{Limit: -1} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.backend.GroupList(ctx, *opts.(*arvados.ListOptions))
+ },
+ },
+ {
+ arvados.EndpointGroupContents,
+ func() interface{} { return &arvados.GroupContentsOptions{Limit: -1} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.backend.GroupContents(ctx, *opts.(*arvados.GroupContentsOptions))
+ },
+ },
+ {
+ arvados.EndpointGroupContentsUUIDInPath,
+ func() interface{} { return &arvados.GroupContentsOptions{Limit: -1} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.backend.GroupContents(ctx, *opts.(*arvados.GroupContentsOptions))
+ },
+ },
+ {
+ arvados.EndpointGroupShared,
+ func() interface{} { return &arvados.ListOptions{Limit: -1} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.backend.GroupShared(ctx, *opts.(*arvados.ListOptions))
+ },
+ },
+ {
+ arvados.EndpointGroupGet,
+ func() interface{} { return &arvados.GetOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.backend.GroupGet(ctx, *opts.(*arvados.GetOptions))
+ },
+ },
+ {
+ arvados.EndpointGroupDelete,
+ func() interface{} { return &arvados.DeleteOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.backend.GroupDelete(ctx, *opts.(*arvados.DeleteOptions))
+ },
+ },
+ {
+ arvados.EndpointGroupTrash,
+ func() interface{} { return &arvados.DeleteOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.backend.GroupTrash(ctx, *opts.(*arvados.DeleteOptions))
+ },
+ },
+ {
+ arvados.EndpointGroupUntrash,
+ func() interface{} { return &arvados.UntrashOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.backend.GroupUntrash(ctx, *opts.(*arvados.UntrashOptions))
+ },
+ },
+ {
+ arvados.EndpointLinkCreate,
+ func() interface{} { return &arvados.CreateOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.backend.LinkCreate(ctx, *opts.(*arvados.CreateOptions))
+ },
+ },
+ {
+ arvados.EndpointLinkUpdate,
+ func() interface{} { return &arvados.UpdateOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.backend.LinkUpdate(ctx, *opts.(*arvados.UpdateOptions))
+ },
+ },
+ {
+ arvados.EndpointLinkList,
+ func() interface{} { return &arvados.ListOptions{Limit: -1} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.backend.LinkList(ctx, *opts.(*arvados.ListOptions))
+ },
+ },
+ {
+ arvados.EndpointLinkGet,
+ func() interface{} { return &arvados.GetOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.backend.LinkGet(ctx, *opts.(*arvados.GetOptions))
+ },
+ },
+ {
+ arvados.EndpointLinkDelete,
+ func() interface{} { return &arvados.DeleteOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.backend.LinkDelete(ctx, *opts.(*arvados.DeleteOptions))
+ },
+ },
{
arvados.EndpointSpecimenCreate,
func() interface{} { return &arvados.CreateOptions{} },
return rtr.backend.SpecimenDelete(ctx, *opts.(*arvados.DeleteOptions))
},
},
+ {
+ arvados.EndpointAPIClientAuthorizationCreate,
+ func() interface{} { return &arvados.CreateOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.backend.APIClientAuthorizationCreate(ctx, *opts.(*arvados.CreateOptions))
+ },
+ },
+ {
+ arvados.EndpointAPIClientAuthorizationUpdate,
+ func() interface{} { return &arvados.UpdateOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.backend.APIClientAuthorizationUpdate(ctx, *opts.(*arvados.UpdateOptions))
+ },
+ },
+ {
+ arvados.EndpointAPIClientAuthorizationDelete,
+ func() interface{} { return &arvados.DeleteOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.backend.APIClientAuthorizationDelete(ctx, *opts.(*arvados.DeleteOptions))
+ },
+ },
+ {
+ arvados.EndpointAPIClientAuthorizationList,
+ func() interface{} { return &arvados.ListOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.backend.APIClientAuthorizationList(ctx, *opts.(*arvados.ListOptions))
+ },
+ },
+ {
+ arvados.EndpointAPIClientAuthorizationCurrent,
+ func() interface{} { return &arvados.GetOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.backend.APIClientAuthorizationCurrent(ctx, *opts.(*arvados.GetOptions))
+ },
+ },
+ {
+ arvados.EndpointAPIClientAuthorizationGet,
+ func() interface{} { return &arvados.GetOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.backend.APIClientAuthorizationGet(ctx, *opts.(*arvados.GetOptions))
+ },
+ },
{
arvados.EndpointUserCreate,
func() interface{} { return &arvados.CreateOptions{} },
return rtr.backend.UserGet(ctx, *opts.(*arvados.GetOptions))
},
},
- {
- arvados.EndpointUserUpdateUUID,
- func() interface{} { return &arvados.UpdateUUIDOptions{} },
- func(ctx context.Context, opts interface{}) (interface{}, error) {
- return rtr.backend.UserUpdateUUID(ctx, *opts.(*arvados.UpdateUUIDOptions))
- },
- },
{
arvados.EndpointUserUpdate,
func() interface{} { return &arvados.UpdateOptions{} },
},
} {
exec := route.exec
- if rtr.wrapCalls != nil {
- exec = rtr.wrapCalls(exec)
+ if rtr.config.WrapCalls != nil {
+ exec = rtr.config.WrapCalls(exec)
}
rtr.addRoute(route.endpoint, route.defaultOpts, exec)
}
if r.Method == "OPTIONS" {
return
}
+ if r.Body != nil {
+ // Wrap r.Body in a http.MaxBytesReader(), otherwise
+ // r.ParseForm() uses a default max request body size
+ // of 10 megabytes. Note we rely on the Nginx
+ // configuration to enforce the real max body size.
+ max := int64(rtr.config.MaxRequestSize)
+ if max < 1 {
+ max = math.MaxInt64 - 1
+ }
+ r.Body = http.MaxBytesReader(w, r.Body, max)
+ }
if r.Method == "POST" {
- r.ParseForm()
+ err := r.ParseForm()
+ if err != nil {
+ if err.Error() == "http: request body too large" {
+ err = httpError(http.StatusRequestEntityTooLarge, err)
+ }
+ rtr.sendError(w, err)
+ return
+ }
if m := r.FormValue("_method"); m != "" {
r2 := *r
r = &r2
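Wrapping the body in http.MaxBytesReader makes ParseForm fail once the configured limit is exceeded, which the handler can map to a 413 response. A self-contained sketch with an arbitrary 5-byte limit (not the real MaxRequestSize):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
)

// postStatus posts a form body to a handler that caps the request body
// at the given limit, returning the HTTP status the client sees.
func postStatus(limit int64, body string) int {
	handler := func(w http.ResponseWriter, r *http.Request) {
		r.Body = http.MaxBytesReader(w, r.Body, limit)
		if err := r.ParseForm(); err != nil {
			// Reading past the limit makes ParseForm fail.
			w.WriteHeader(http.StatusRequestEntityTooLarge)
			return
		}
		w.WriteHeader(http.StatusOK)
	}
	srv := httptest.NewServer(http.HandlerFunc(handler))
	defer srv.Close()
	resp, err := http.Post(srv.URL, "application/x-www-form-urlencoded",
		strings.NewReader(body))
	if err != nil {
		return 0
	}
	io.Copy(io.Discard, resp.Body)
	resp.Body.Close()
	return resp.StatusCode
}

func main() {
	fmt.Println(postStatus(5, "foo=0123456789")) // 413: body exceeds limit
	fmt.Println(postStatus(100, "foo=bar"))      // 200: within limit
}
```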
func (s *RouterSuite) TestOptions(c *check.C) {
token := arvadostest.ActiveToken
for _, trial := range []struct {
+ comment string // unparsed -- only used to help match test failures to trials
method string
path string
header http.Header
shouldCall: "CollectionList",
withOptions: arvados.ListOptions{Limit: 123, Offset: 456, IncludeTrash: true, IncludeOldVersions: true},
},
+ {
+ comment: "form-encoded expression filter in query string",
+ method: "GET",
+ path: "/arvados/v1/collections?filters=[%22(foo<bar)%22]",
+ shouldCall: "CollectionList",
+ withOptions: arvados.ListOptions{Limit: -1, Filters: []arvados.Filter{{"(foo<bar)", "=", true}}},
+ },
+ {
+ comment: "form-encoded expression filter in POST body",
+ method: "POST",
+ path: "/arvados/v1/collections",
+ body: "filters=[\"(foo<bar)\"]&_method=GET",
+ header: http.Header{"Content-Type": {"application/x-www-form-urlencoded"}},
+ shouldCall: "CollectionList",
+ withOptions: arvados.ListOptions{Limit: -1, Filters: []arvados.Filter{{"(foo<bar)", "=", true}}},
+ },
+ {
+ comment: "json-encoded expression filter in POST body",
+ method: "POST",
+ path: "/arvados/v1/collections?_method=GET",
+ body: `{"filters":["(foo<bar)",["bar","=","baz"]],"limit":2}`,
+ header: http.Header{"Content-Type": {"application/json"}},
+ shouldCall: "CollectionList",
+ withOptions: arvados.ListOptions{Limit: 2, Filters: []arvados.Filter{{"(foo<bar)", "=", true}, {"bar", "=", "baz"}}},
+ },
+ {
+ comment: "json-encoded select param in query string",
+ method: "GET",
+ path: "/arvados/v1/collections/" + arvadostest.FooCollection + "?select=[%22portable_data_hash%22]",
+ shouldCall: "CollectionGet",
+ withOptions: arvados.GetOptions{UUID: arvadostest.FooCollection, Select: []string{"portable_data_hash"}},
+ },
{
method: "PATCH",
path: "/arvados/v1/collections",
// Reset calls captured in previous trial
s.stub = arvadostest.APIStub{}
- c.Logf("trial: %#v", trial)
+ c.Logf("trial: %+v", trial)
+ comment := check.Commentf("trial comment: %s", trial.comment)
+
_, rr, _ := doRequest(c, s.rtr, token, trial.method, trial.path, trial.header, bytes.NewBufferString(trial.body))
if trial.shouldStatus == 0 {
- c.Check(rr.Code, check.Equals, http.StatusOK)
+ c.Check(rr.Code, check.Equals, http.StatusOK, comment)
} else {
- c.Check(rr.Code, check.Equals, trial.shouldStatus)
+ c.Check(rr.Code, check.Equals, trial.shouldStatus, comment)
}
calls := s.stub.Calls(nil)
if trial.shouldCall == "" {
- c.Check(calls, check.HasLen, 0)
+ c.Check(calls, check.HasLen, 0, comment)
} else if len(calls) != 1 {
- c.Check(calls, check.HasLen, 1)
+ c.Check(calls, check.HasLen, 1, comment)
} else {
- c.Check(calls[0].Method, isMethodNamed, trial.shouldCall)
- c.Check(calls[0].Options, check.DeepEquals, trial.withOptions)
+ c.Check(calls[0].Method, isMethodNamed, trial.shouldCall, comment)
+ c.Check(calls[0].Options, check.DeepEquals, trial.withOptions, comment)
}
}
}
cluster.TLS.Insecure = true
arvadostest.SetServiceURL(&cluster.Services.RailsAPI, "https://"+os.Getenv("ARVADOS_TEST_API_HOST"))
url, _ := url.Parse("https://" + os.Getenv("ARVADOS_TEST_API_HOST"))
- s.rtr = New(rpc.NewConn("zzzzz", url, true, rpc.PassthroughTokenProvider), nil)
+ s.rtr = New(rpc.NewConn("zzzzz", url, true, rpc.PassthroughTokenProvider), Config{})
}
func (s *RouterIntegrationSuite) TearDownSuite(c *check.C) {
c.Check(jresp["kind"], check.Equals, "arvados#collection")
}
+func (s *RouterIntegrationSuite) TestMaxRequestSize(c *check.C) {
+ token := arvadostest.ActiveTokenV2
+ for _, maxRequestSize := range []int{
+ // Ensure 5M limit is enforced.
+ 5000000,
+ // Ensure 50M limit is enforced, and that a >25M body
+ // is accepted even though the default Go request size
+ // limit is 10M.
+ 50000000,
+ } {
+ s.rtr.config.MaxRequestSize = maxRequestSize
+ okstr := "a"
+ for len(okstr) < maxRequestSize/2 {
+ okstr = okstr + okstr
+ }
+
+ hdr := http.Header{"Content-Type": {"application/x-www-form-urlencoded"}}
+
+ body := bytes.NewBufferString(url.Values{"foo_bar": {okstr}}.Encode())
+ _, rr, _ := doRequest(c, s.rtr, token, "POST", `/arvados/v1/collections`, hdr, body)
+ c.Check(rr.Code, check.Equals, http.StatusOK)
+
+ body = bytes.NewBufferString(url.Values{"foo_bar": {okstr + okstr}}.Encode())
+ _, rr, _ = doRequest(c, s.rtr, token, "POST", `/arvados/v1/collections`, hdr, body)
+ c.Check(rr.Code, check.Equals, http.StatusRequestEntityTooLarge)
+ }
+}
+
func (s *RouterIntegrationSuite) TestContainerList(c *check.C) {
token := arvadostest.ActiveTokenV2
func (s *RouterIntegrationSuite) TestSelectParam(c *check.C) {
uuid := arvadostest.QueuedContainerUUID
token := arvadostest.ActiveTokenV2
+ // GET
for _, sel := range [][]string{
{"uuid", "command"},
{"uuid", "command", "uuid"},
- {"", "command", "uuid"},
} {
j, err := json.Marshal(sel)
c.Assert(err, check.IsNil)
c.Check(rr.Code, check.Equals, http.StatusOK)
c.Check(resp["kind"], check.Equals, "arvados#container")
- c.Check(resp["etag"], check.FitsTypeOf, "")
- c.Check(resp["etag"], check.Not(check.Equals), "")
c.Check(resp["uuid"], check.HasLen, 27)
c.Check(resp["command"], check.HasLen, 2)
c.Check(resp["mounts"], check.IsNil)
_, hasMounts := resp["mounts"]
c.Check(hasMounts, check.Equals, false)
}
+ // POST & PUT
+ uuid = arvadostest.FooCollection
+ j, err := json.Marshal([]string{"uuid", "description"})
+ c.Assert(err, check.IsNil)
+ for _, method := range []string{"PUT", "POST"} {
+ desc := "Today is " + time.Now().String()
+ reqBody := "{\"description\":\"" + desc + "\"}"
+ var resp map[string]interface{}
+ var rr *httptest.ResponseRecorder
+ if method == "PUT" {
+ _, rr, resp = doRequest(c, s.rtr, token, method, "/arvados/v1/collections/"+uuid+"?select="+string(j), nil, bytes.NewReader([]byte(reqBody)))
+ } else {
+ _, rr, resp = doRequest(c, s.rtr, token, method, "/arvados/v1/collections?select="+string(j), nil, bytes.NewReader([]byte(reqBody)))
+ }
+ c.Check(rr.Code, check.Equals, http.StatusOK)
+ c.Check(resp["kind"], check.Equals, "arvados#collection")
+ c.Check(resp["uuid"], check.HasLen, 27)
+ c.Check(resp["description"], check.Equals, desc)
+ c.Check(resp["manifest_text"], check.IsNil)
+ }
}
func (s *RouterIntegrationSuite) TestHEAD(c *check.C) {
}
type Conn struct {
- SendHeader http.Header
+ SendHeader http.Header
+ RedactHostInErrors bool
+
clusterID string
httpClient http.Client
baseURL url.URL
path = strings.Replace(path, "/{uuid}", "/"+uuid, 1)
delete(params, "uuid")
}
- return aClient.RequestAndDecodeContext(ctx, dst, ep.Method, path, body, params)
+ err = aClient.RequestAndDecodeContext(ctx, dst, ep.Method, path, body, params)
+ if err != nil && conn.RedactHostInErrors {
+ redacted := strings.Replace(err.Error(), strings.TrimSuffix(conn.baseURL.String(), "/"), "//railsapi.internal", -1)
+ if strings.HasPrefix(redacted, "request failed: ") {
+ redacted = strings.Replace(redacted, "request failed: ", "", -1)
+ }
+ if redacted != err.Error() {
+ if err, ok := err.(httpStatusError); ok {
+ return wrapHTTPStatusError(err, redacted)
+ } else {
+ return errors.New(redacted)
+ }
+ }
+ }
+ return err
}
func (conn *Conn) BaseURL() url.URL {
return resp, err
}
+func (conn *Conn) VocabularyGet(ctx context.Context) (arvados.Vocabulary, error) {
+ ep := arvados.EndpointVocabularyGet
+ var resp arvados.Vocabulary
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, nil)
+ return resp, err
+}
+
func (conn *Conn) Login(ctx context.Context, options arvados.LoginOptions) (arvados.LoginResponse, error) {
ep := arvados.EndpointLogin
var resp arvados.LoginResponse
return resp, err
}
+func (conn *Conn) GroupCreate(ctx context.Context, options arvados.CreateOptions) (arvados.Group, error) {
+ ep := arvados.EndpointGroupCreate
+ var resp arvados.Group
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+
+func (conn *Conn) GroupUpdate(ctx context.Context, options arvados.UpdateOptions) (arvados.Group, error) {
+ ep := arvados.EndpointGroupUpdate
+ var resp arvados.Group
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+
+func (conn *Conn) GroupGet(ctx context.Context, options arvados.GetOptions) (arvados.Group, error) {
+ ep := arvados.EndpointGroupGet
+ var resp arvados.Group
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+
+func (conn *Conn) GroupList(ctx context.Context, options arvados.ListOptions) (arvados.GroupList, error) {
+ ep := arvados.EndpointGroupList
+ var resp arvados.GroupList
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+
+func (conn *Conn) GroupContents(ctx context.Context, options arvados.GroupContentsOptions) (arvados.ObjectList, error) {
+ ep := arvados.EndpointGroupContents
+ var resp arvados.ObjectList
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+
+func (conn *Conn) GroupShared(ctx context.Context, options arvados.ListOptions) (arvados.GroupList, error) {
+ ep := arvados.EndpointGroupShared
+ var resp arvados.GroupList
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+
+func (conn *Conn) GroupDelete(ctx context.Context, options arvados.DeleteOptions) (arvados.Group, error) {
+ ep := arvados.EndpointGroupDelete
+ var resp arvados.Group
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+
+func (conn *Conn) GroupTrash(ctx context.Context, options arvados.DeleteOptions) (arvados.Group, error) {
+ ep := arvados.EndpointGroupTrash
+ var resp arvados.Group
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+
+func (conn *Conn) GroupUntrash(ctx context.Context, options arvados.UntrashOptions) (arvados.Group, error) {
+ ep := arvados.EndpointGroupUntrash
+ var resp arvados.Group
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+
+func (conn *Conn) LinkCreate(ctx context.Context, options arvados.CreateOptions) (arvados.Link, error) {
+ ep := arvados.EndpointLinkCreate
+ var resp arvados.Link
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+
+func (conn *Conn) LinkUpdate(ctx context.Context, options arvados.UpdateOptions) (arvados.Link, error) {
+ ep := arvados.EndpointLinkUpdate
+ var resp arvados.Link
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+
+func (conn *Conn) LinkGet(ctx context.Context, options arvados.GetOptions) (arvados.Link, error) {
+ ep := arvados.EndpointLinkGet
+ var resp arvados.Link
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+
+func (conn *Conn) LinkList(ctx context.Context, options arvados.ListOptions) (arvados.LinkList, error) {
+ ep := arvados.EndpointLinkList
+ var resp arvados.LinkList
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+
+func (conn *Conn) LinkDelete(ctx context.Context, options arvados.DeleteOptions) (arvados.Link, error) {
+ ep := arvados.EndpointLinkDelete
+ var resp arvados.Link
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+
func (conn *Conn) SpecimenCreate(ctx context.Context, options arvados.CreateOptions) (arvados.Specimen, error) {
ep := arvados.EndpointSpecimenCreate
var resp arvados.Specimen
return resp, err
}
+func (conn *Conn) SysTrashSweep(ctx context.Context, options struct{}) (struct{}, error) {
+ ep := arvados.EndpointSysTrashSweep
+ var resp struct{}
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+
func (conn *Conn) UserCreate(ctx context.Context, options arvados.CreateOptions) (arvados.User, error) {
ep := arvados.EndpointUserCreate
var resp arvados.User
err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
return resp, err
}
-func (conn *Conn) UserUpdateUUID(ctx context.Context, options arvados.UpdateUUIDOptions) (arvados.User, error) {
- ep := arvados.EndpointUserUpdateUUID
- var resp arvados.User
- err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
- return resp, err
-}
func (conn *Conn) UserMerge(ctx context.Context, options arvados.UserMergeOptions) (arvados.User, error) {
ep := arvados.EndpointUserMerge
var resp arvados.User
err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
return resp, err
}
+func (conn *Conn) APIClientAuthorizationCreate(ctx context.Context, options arvados.CreateOptions) (arvados.APIClientAuthorization, error) {
+ ep := arvados.EndpointAPIClientAuthorizationCreate
+ var resp arvados.APIClientAuthorization
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+func (conn *Conn) APIClientAuthorizationUpdate(ctx context.Context, options arvados.UpdateOptions) (arvados.APIClientAuthorization, error) {
+ ep := arvados.EndpointAPIClientAuthorizationUpdate
+ var resp arvados.APIClientAuthorization
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+func (conn *Conn) APIClientAuthorizationDelete(ctx context.Context, options arvados.DeleteOptions) (arvados.APIClientAuthorization, error) {
+ ep := arvados.EndpointAPIClientAuthorizationDelete
+ var resp arvados.APIClientAuthorization
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+func (conn *Conn) APIClientAuthorizationList(ctx context.Context, options arvados.ListOptions) (arvados.APIClientAuthorizationList, error) {
+ ep := arvados.EndpointAPIClientAuthorizationList
+ var resp arvados.APIClientAuthorizationList
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+func (conn *Conn) APIClientAuthorizationGet(ctx context.Context, options arvados.GetOptions) (arvados.APIClientAuthorization, error) {
+ ep := arvados.EndpointAPIClientAuthorizationGet
+ var resp arvados.APIClientAuthorization
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
type UserSessionAuthInfo struct {
UserUUID string `json:"user_uuid"`
err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
return resp, err
}
+
+// httpStatusError is an error with an HTTP status code that can be
+// propagated by lib/controller/router, etc.
+type httpStatusError interface {
+ error
+ HTTPStatus() int
+}
+
+// wrappedHTTPStatusError is used to augment/replace an error message
+// while preserving the HTTP status code indicated by the original
+// error.
+type wrappedHTTPStatusError struct {
+ httpStatusError
+ message string
+}
+
+func wrapHTTPStatusError(err httpStatusError, message string) httpStatusError {
+ return wrappedHTTPStatusError{err, message}
+}
+
+func (err wrappedHTTPStatusError) Error() string {
+ return err.message
+}
opts := arvados.LoginOptions{
ReturnTo: "https://foo.example.com/bar",
}
- resp, err := s.conn.Login(s.ctx, opts)
- c.Check(err, check.IsNil)
- c.Check(resp.RedirectLocation, check.Equals, "/auth/joshid?return_to="+url.QueryEscape(","+opts.ReturnTo))
+ _, err := s.conn.Login(s.ctx, opts)
+ c.Check(err.(*arvados.TransactionError).StatusCode, check.Equals, 404)
}
func (s *RPCSuite) TestLogout(c *check.C) {
}
resp, err := s.conn.Logout(s.ctx, opts)
c.Check(err, check.IsNil)
- c.Check(resp.RedirectLocation, check.Equals, "http://localhost:3002/users/sign_out?redirect_uri="+url.QueryEscape(opts.ReturnTo))
+ c.Check(resp.RedirectLocation, check.Equals, opts.ReturnTo)
}
func (s *RPCSuite) TestCollectionCreate(c *check.C) {
import (
"context"
+ "net"
"net/http"
"os"
"path/filepath"
+ "time"
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/arvadostest"
// provided by the integration-testing environment.
func newServerFromIntegrationTestEnv(c *check.C) *httpserver.Server {
log := ctxlog.TestLogger(c)
-
- handler := &Handler{Cluster: &arvados.Cluster{
- ClusterID: "zzzzz",
- PostgreSQL: integrationTestCluster().PostgreSQL,
- ForceLegacyAPI14: forceLegacyAPI14,
- }}
+ ctx := ctxlog.Context(context.Background(), log)
+ handler := &Handler{
+ Cluster: &arvados.Cluster{
+ ClusterID: "zzzzz",
+ PostgreSQL: integrationTestCluster().PostgreSQL,
+ },
+ BackgroundContext: ctx,
+ }
handler.Cluster.TLS.Insecure = true
+ handler.Cluster.Collections.BlobSigning = true
+ handler.Cluster.Collections.BlobSigningKey = arvadostest.BlobSigningKey
+ handler.Cluster.Collections.BlobSigningTTL = arvados.Duration(time.Hour * 24 * 14)
arvadostest.SetServiceURL(&handler.Cluster.Services.RailsAPI, "https://"+os.Getenv("ARVADOS_TEST_API_HOST"))
arvadostest.SetServiceURL(&handler.Cluster.Services.Controller, "http://localhost:/")
srv := &httpserver.Server{
Server: http.Server{
- Handler: httpserver.HandlerWithContext(
- ctxlog.Context(context.Background(), log),
- httpserver.AddRequestIDs(httpserver.LogRequests(handler))),
+ BaseContext: func(net.Listener) context.Context { return ctx },
+ Handler: httpserver.AddRequestIDs(httpserver.LogRequests(handler)),
},
Addr: ":",
}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package controller
+
+import (
+ "time"
+
+ "git.arvados.org/arvados.git/lib/controller/dblock"
+ "git.arvados.org/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+)
+
+func (h *Handler) trashSweepWorker() {
+ sleep := h.Cluster.Collections.TrashSweepInterval.Duration()
+ logger := ctxlog.FromContext(h.BackgroundContext).WithField("worker", "trash sweep")
+ ctx := ctxlog.Context(h.BackgroundContext, logger)
+ if sleep <= 0 {
+ logger.Debugf("Collections.TrashSweepInterval is %v, not running worker", sleep)
+ return
+ }
+ dblock.TrashSweep.Lock(ctx, h.db)
+ defer dblock.TrashSweep.Unlock()
+ for time.Sleep(sleep); ctx.Err() == nil; time.Sleep(sleep) {
+ dblock.TrashSweep.Check()
+ ctx := auth.NewContext(ctx, &auth.Credentials{Tokens: []string{h.Cluster.SystemRootToken}})
+ _, err := h.federation.SysTrashSweep(ctx, struct{}{})
+ if err != nil {
+ logger.WithError(err).Info("trash sweep failed")
+ }
+ }
+}
import (
"io"
+ "time"
- "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/lib/cmd"
"git.arvados.org/arvados.git/sdk/go/ctxlog"
- "github.com/sirupsen/logrus"
)
-var Command command
+var Command = command{}
-type command struct{}
-
-type NoPrefixFormatter struct{}
-
-func (f *NoPrefixFormatter) Format(entry *logrus.Entry) ([]byte, error) {
- return []byte(entry.Message), nil
+type command struct {
+ uuids arrayFlags
+ resultsDir string
+ cache bool
+ begin time.Time
+ end time.Time
}
// RunCommand implements the subcommand "costanalyzer <collection> <collection> ..."
-func (command) RunCommand(prog string, args []string, stdin io.Reader, stdout, stderr io.Writer) int {
+func (c command) RunCommand(prog string, args []string, stdin io.Reader, stdout, stderr io.Writer) int {
var err error
logger := ctxlog.New(stderr, "text", "info")
- defer func() {
- if err != nil {
- logger.Error("\n" + err.Error() + "\n")
- }
- }()
-
- logger.SetFormatter(new(NoPrefixFormatter))
-
- loader := config.NewLoader(stdin, logger)
- loader.SkipLegacy = true
-
- exitcode, err := costanalyzer(prog, args, loader, logger, stdout, stderr)
+ logger.SetFormatter(cmd.NoPrefixFormatter{})
+ exitcode, err := c.costAnalyzer(prog, args, logger, stdout, stderr)
+ if err != nil {
+ logger.Error("\n" + err.Error())
+ }
return exitcode
}
"errors"
"flag"
"fmt"
- "git.arvados.org/arvados.git/lib/config"
- "git.arvados.org/arvados.git/sdk/go/arvados"
- "git.arvados.org/arvados.git/sdk/go/arvadosclient"
- "git.arvados.org/arvados.git/sdk/go/keepclient"
"io"
"io/ioutil"
"net/http"
"strings"
"time"
+ "git.arvados.org/arvados.git/lib/cmd"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
"github.com/sirupsen/logrus"
)
+const timestampFormat = "2006-01-02T15:04:05"
+
type nodeInfo struct {
// Legacy (records created by Arvados Node Manager with Arvados <= 1.4.3)
Properties struct {
Preemptible bool
}
+type consumption struct {
+ cost float64
+ duration float64
+}
+
+func (c *consumption) Add(n consumption) {
+ c.cost += n.cost
+ c.duration += n.duration
+}
+
type arrayFlags []string
func (i *arrayFlags) String() string {
return nil
}
-func parseFlags(prog string, args []string, loader *config.Loader, logger *logrus.Logger, stderr io.Writer) (exitCode int, uuids arrayFlags, resultsDir string, cache bool, err error) {
+func (c *command) parseFlags(prog string, args []string, logger *logrus.Logger, stderr io.Writer) (ok bool, exitCode int) {
+ var beginStr, endStr string
flags := flag.NewFlagSet("", flag.ContinueOnError)
- flags.SetOutput(stderr)
flags.Usage = func() {
fmt.Fprintf(flags.Output(), `
Usage:
- %s [options ...] <uuid> ...
+ %s [options ...] [UUID ...]
- This program analyzes the cost of Arvados container requests. For each uuid
- supplied, it creates a CSV report that lists all the containers used to
- fulfill the container request, together with the machine type and cost of
- each container. At least one uuid must be specified.
+ This program analyzes the cost of Arvados container requests and calculates
+ the total cost across all requests. At least one UUID or a timestamp range
+ must be specified.
- When supplied with the uuid of a container request, it will calculate the
+ When the '-output' option is specified, a set of CSV files with cost details
+ will be written to the provided directory. Each file is a CSV report that lists
+ all the containers used to fulfill the container request, together with the
+ machine type and cost of each container.
+
+ When supplied with the UUID of a container request, it will calculate the
cost of that container request and all its children.
- When supplied with the uuid of a collection, it will see if there is a
- container_request uuid in the properties of the collection, and if so, it
+ When supplied with the UUID of a collection, it will see if there is a
+ container_request UUID in the properties of the collection, and if so, it
will calculate the cost of that container request and all its children.
- When supplied with a project uuid or when supplied with multiple container
- request or collection uuids, it will create a CSV report for each supplied
- uuid, as well as a CSV file with aggregate cost accounting for all supplied
- uuids. The aggregate cost report takes container reuse into account: if a
- container was reused between several container requests, its cost will only
- be counted once.
+ When supplied with a project UUID or when supplied with multiple container
+ request or collection UUIDs, it will calculate the total cost for all
+ supplied UUIDs.
+
+ When supplied with a 'begin' and 'end' timestamp (format:
+ %s), it will calculate the cost for all top-level container
+ requests whose containers finished during the specified interval.
- To get the node costs, the progam queries the Arvados API for current cost
- data for each node type used. This means that the reported cost always
- reflects the cost data as currently defined in the Arvados API configuration
- file.
+ The total cost calculation takes container reuse into account: if a container
+ was reused between several container requests, its cost will only be counted
+ once.
Caveats:
- - the Arvados API configuration cost data may be out of sync with the cloud
- provider.
- - when generating reports for older container requests, the cost data in the
- Arvados API configuration file may have changed since the container request
- was fulfilled. This program uses the cost data stored at the time of the
+
+ - This program uses the cost data from config.yml at the time of the
execution of the container, stored in the 'node.json' file in its log
- collection.
- - if a container was run on a preemptible ("spot") instance, the cost data
+ collection. If the cost data was not correctly configured at the time the
+ container was executed, the output from this program will be incorrect.
+
+ - If a container was run on a preemptible ("spot") instance, the cost data
reported by this program may be wildly inaccurate, because it does not have
access to the spot pricing in effect for the node when the container ran. The
UUID report file that is generated when the '-output' option is specified has
a column that indicates the preemptible state of the instance that ran the
container.
- In order to get the data for the uuids supplied, the ARVADOS_API_HOST and
+ - This program does not take into account overhead costs like the time spent
+ starting and stopping compute nodes that run containers, the cost of the
+ permanent cloud nodes that provide the Arvados services, the cost of data
+ stored in Arvados, etc.
+
+ - When provided with a project UUID, subprojects will not be considered.
+
+ In order to get the data for the UUIDs supplied, the ARVADOS_API_HOST and
ARVADOS_API_TOKEN environment variables must be set.
This program prints the total dollar amount from the aggregate cost
- accounting across all provided uuids on stdout.
-
- When the '-output' option is specified, a set of CSV files with cost details
- will be written to the provided directory.
+ accounting across all provided UUIDs on stdout.
Options:
-`, prog)
+`, prog, timestampFormat)
flags.PrintDefaults()
}
loglevel := flags.String("log-level", "info", "logging `level` (debug, info, ...)")
- flags.StringVar(&resultsDir, "output", "", "output `directory` for the CSV reports")
- flags.BoolVar(&cache, "cache", true, "create and use a local disk cache of Arvados objects")
- err = flags.Parse(args)
- if err == flag.ErrHelp {
- err = nil
- exitCode = 1
- return
- } else if err != nil {
- exitCode = 2
- return
+ flags.StringVar(&c.resultsDir, "output", "", "output `directory` for the CSV reports")
+ flags.StringVar(&beginStr, "begin", "", fmt.Sprintf("timestamp `begin` for date range operation (format: %s)", timestampFormat))
+ flags.StringVar(&endStr, "end", "", fmt.Sprintf("timestamp `end` for date range operation (format: %s)", timestampFormat))
+ flags.BoolVar(&c.cache, "cache", true, "create and use a local disk cache of Arvados objects")
+ if ok, code := cmd.ParseFlags(flags, prog, args, "[uuid ...]", stderr); !ok {
+ return false, code
+ }
+ c.uuids = flags.Args()
+
+ if (len(beginStr) != 0 && len(endStr) == 0) || (len(beginStr) == 0 && len(endStr) != 0) {
+ fmt.Fprintf(stderr, "When specifying a date range, both begin and end must be specified (try -help)\n")
+ return false, 2
+ }
+
+ if len(beginStr) != 0 {
+ var errB, errE error
+ c.begin, errB = time.Parse(timestampFormat, beginStr)
+ c.end, errE = time.Parse(timestampFormat, endStr)
+ if (errB != nil) || (errE != nil) {
+ fmt.Fprintf(stderr, "When specifying a date range, both begin and end must be of the format %s: %v, %v\n", timestampFormat, errB, errE)
+ return false, 2
+ }
}
- uuids = flags.Args()
- if len(uuids) < 1 {
- flags.Usage()
- err = fmt.Errorf("error: no uuid(s) provided")
- exitCode = 2
- return
+ if (len(c.uuids) < 1) && (len(beginStr) == 0) {
+ fmt.Fprintf(stderr, "error: no uuid(s) provided (try -help)\n")
+ return false, 2
}
lvl, err := logrus.ParseLevel(*loglevel)
if err != nil {
- exitCode = 2
- return
+ fmt.Fprintf(stderr, "invalid argument to -log-level: %s\n", err)
+ return false, 2
}
logger.SetLevel(lvl)
- if !cache {
- logger.Debug("Caching disabled\n")
+ if !c.cache {
+ logger.Debug("Caching disabled")
}
- return
+ return true, 0
}
func ensureDirectory(logger *logrus.Logger, dir string) (err error) {
return
}
-func addContainerLine(logger *logrus.Logger, node nodeInfo, cr arvados.ContainerRequest, container arvados.Container) (csv string, cost float64) {
+func addContainerLine(logger *logrus.Logger, node nodeInfo, cr arvados.ContainerRequest, container arvados.Container) (string, consumption) {
+ var csv string
+ var containerConsumption consumption
csv = cr.UUID + ","
csv += cr.Name + ","
csv += container.UUID + ","
if container.FinishedAt != nil {
csv += container.FinishedAt.String() + ","
delta = container.FinishedAt.Sub(*container.StartedAt)
- csv += strconv.FormatFloat(delta.Seconds(), 'f', 0, 64) + ","
+ csv += strconv.FormatFloat(delta.Seconds(), 'f', 3, 64) + ","
} else {
csv += ",,"
}
price = node.Price
size = node.ProviderType
}
- cost = delta.Seconds() / 3600 * price
- csv += size + "," + fmt.Sprintf("%+v", node.Preemptible) + "," + strconv.FormatFloat(price, 'f', 8, 64) + "," + strconv.FormatFloat(cost, 'f', 8, 64) + "\n"
- return
+ containerConsumption.cost = delta.Seconds() / 3600 * price
+ containerConsumption.duration = delta.Seconds()
+ csv += size + "," + fmt.Sprintf("%+v", node.Preemptible) + "," + strconv.FormatFloat(price, 'f', 8, 64) + "," + strconv.FormatFloat(containerConsumption.cost, 'f', 8, 64) + "\n"
+ return csv, containerConsumption
}
func loadCachedObject(logger *logrus.Logger, file string, uuid string, object interface{}) (reload bool) {
case *arvados.ContainerRequest:
if v.State == arvados.ContainerRequestStateFinal {
reload = false
- logger.Debugf("Loaded object %s from local cache (%s)\n", uuid, file)
+ logger.Debugf("Loaded object %s from local cache (%s)", uuid, file)
}
case *arvados.Container:
if v.State == arvados.ContainerStateComplete || v.State == arvados.ContainerStateCancelled {
reload = false
- logger.Debugf("Loaded object %s from local cache (%s)\n", uuid, file)
+ logger.Debugf("Loaded object %s from local cache (%s)", uuid, file)
}
}
return
return
}
-func handleProject(logger *logrus.Logger, uuid string, arv *arvadosclient.ArvadosClient, ac *arvados.Client, kc *keepclient.KeepClient, resultsDir string, cache bool) (cost map[string]float64, err error) {
- cost = make(map[string]float64)
+func handleProject(logger *logrus.Logger, uuid string, arv *arvadosclient.ArvadosClient, ac *arvados.Client, kc *keepclient.KeepClient, resultsDir string, cache bool) (cost map[string]consumption, err error) {
+ cost = make(map[string]consumption)
var project arvados.Group
err = loadObject(logger, ac, uuid, uuid, cache, &project)
return nil, fmt.Errorf("error querying container_requests: %s", err.Error())
}
if value, ok := childCrs["items"]; ok {
- logger.Infof("Collecting top level container requests in project %s\n", uuid)
+ logger.Infof("Collecting top level container requests in project %s", uuid)
items := value.([]interface{})
for _, item := range items {
itemMap := item.(map[string]interface{})
- crCsv, err := generateCrCsv(logger, itemMap["uuid"].(string), arv, ac, kc, resultsDir, cache)
+ crInfo, err := generateCrInfo(logger, itemMap["uuid"].(string), arv, ac, kc, resultsDir, cache)
if err != nil {
return nil, fmt.Errorf("error generating container_request CSV: %s", err.Error())
}
- for k, v := range crCsv {
+ for k, v := range crInfo {
cost[k] = v
}
}
} else {
- logger.Infof("No top level container requests found in project %s\n", uuid)
+ logger.Infof("No top level container requests found in project %s", uuid)
}
return
}
-func generateCrCsv(logger *logrus.Logger, uuid string, arv *arvadosclient.ArvadosClient, ac *arvados.Client, kc *keepclient.KeepClient, resultsDir string, cache bool) (cost map[string]float64, err error) {
+func generateCrInfo(logger *logrus.Logger, uuid string, arv *arvadosclient.ArvadosClient, ac *arvados.Client, kc *keepclient.KeepClient, resultsDir string, cache bool) (cost map[string]consumption, err error) {
- cost = make(map[string]float64)
+ cost = make(map[string]consumption)
csv := "CR UUID,CR name,Container UUID,State,Started At,Finished At,Duration in seconds,Compute node type,Preemptible,Hourly node cost,Total cost\n"
var tmpCsv string
- var tmpTotalCost float64
- var totalCost float64
+ var total, tmpTotal consumption
+ logger.Debugf("Processing %s", uuid)
var crUUID = uuid
if strings.Contains(uuid, "-4zz18-") {
if err != nil {
return nil, fmt.Errorf("error loading cr object %s: %s", uuid, err)
}
+ if len(cr.ContainerUUID) == 0 {
+ // Nothing to do! E.g. a CR in 'Uncommitted' state.
+ logger.Infof("No container associated with container request %s, skipping", crUUID)
+ return nil, nil
+ }
var container arvados.Container
err = loadObject(logger, ac, crUUID, cr.ContainerUUID, cache, &container)
if err != nil {
topNode, err := getNode(arv, ac, kc, cr)
if err != nil {
- return nil, fmt.Errorf("error getting node %s: %s", cr.UUID, err)
+ logger.Errorf("Skipping container request %s: error getting node %s: %s", cr.UUID, cr.UUID, err)
+ return nil, nil
}
- tmpCsv, totalCost = addContainerLine(logger, topNode, cr, container)
+ tmpCsv, total = addContainerLine(logger, topNode, cr, container)
csv += tmpCsv
- totalCost += tmpTotalCost
- cost[container.UUID] = totalCost
+ cost[container.UUID] = total
// Find all container requests that have the container we found above as requesting_container_uuid
var childCrs arvados.ContainerRequestList
if err != nil {
return nil, fmt.Errorf("error querying container_requests: %s", err.Error())
}
- logger.Infof("Collecting child containers for container request %s", crUUID)
- for _, cr2 := range childCrs.Items {
- logger.Info(".")
+ logger.Infof("Collecting child containers for container request %s (%s)", crUUID, container.FinishedAt)
+ progressTicker := time.NewTicker(5 * time.Second)
+ defer progressTicker.Stop()
+ for i, cr2 := range childCrs.Items {
+ select {
+ case <-progressTicker.C:
+ logger.Infof("... %d of %d", i+1, len(childCrs.Items))
+ default:
+ }
node, err := getNode(arv, ac, kc, cr2)
if err != nil {
- return nil, fmt.Errorf("error getting node %s: %s", cr2.UUID, err)
+ logger.Errorf("Skipping container request %s: error getting node %s: %s", cr2.UUID, cr2.UUID, err)
+ continue
}
- logger.Debug("\nChild container: " + cr2.ContainerUUID + "\n")
+ logger.Debug("Child container: " + cr2.ContainerUUID)
var c2 arvados.Container
err = loadObject(logger, ac, cr.UUID, cr2.ContainerUUID, cache, &c2)
if err != nil {
return nil, fmt.Errorf("error loading object %s: %s", cr2.ContainerUUID, err)
}
- tmpCsv, tmpTotalCost = addContainerLine(logger, node, cr2, c2)
- cost[cr2.ContainerUUID] = tmpTotalCost
+ tmpCsv, tmpTotal = addContainerLine(logger, node, cr2, c2)
+ cost[cr2.ContainerUUID] = tmpTotal
csv += tmpCsv
- totalCost += tmpTotalCost
+ total.Add(tmpTotal)
}
- logger.Info(" done\n")
+ logger.Debug("Done collecting child containers")
- csv += "TOTAL,,,,,,,,," + strconv.FormatFloat(totalCost, 'f', 8, 64) + "\n"
+ csv += "TOTAL,,,,,," + strconv.FormatFloat(total.duration, 'f', 3, 64) + ",,,," + strconv.FormatFloat(total.cost, 'f', 2, 64) + "\n"
if resultsDir != "" {
// Write the resulting CSV file
if err != nil {
return nil, fmt.Errorf("error writing file with path %s: %s", fName, err.Error())
}
- logger.Infof("\nUUID report in %s\n\n", fName)
+ logger.Infof("\nUUID report in %s", fName)
}
return
}
-func costanalyzer(prog string, args []string, loader *config.Loader, logger *logrus.Logger, stdout, stderr io.Writer) (exitcode int, err error) {
- exitcode, uuids, resultsDir, cache, err := parseFlags(prog, args, loader, logger, stderr)
- if exitcode != 0 {
+func (c *command) costAnalyzer(prog string, args []string, logger *logrus.Logger, stdout, stderr io.Writer) (exitcode int, err error) {
+ var ok bool
+ ok, exitcode = c.parseFlags(prog, args, logger, stderr)
+ if !ok {
return
}
- if resultsDir != "" {
- err = ensureDirectory(logger, resultsDir)
+ if c.resultsDir != "" {
+ err = ensureDirectory(logger, c.resultsDir)
if err != nil {
exitcode = 3
return
}
}
+ uuidChannel := make(chan string)
+
// Arvados Client setup
arv, err := arvadosclient.MakeArvadosClient()
if err != nil {
ac := arvados.NewClientFromEnv()
- cost := make(map[string]float64)
- for _, uuid := range uuids {
+ // Populate uuidChannel with the requested uuid list
+ go func() {
+ defer close(uuidChannel)
+ for _, uuid := range c.uuids {
+ uuidChannel <- uuid
+ }
+
+ if !c.begin.IsZero() {
+ initialParams := arvados.ResourceListParams{
+ Filters: []arvados.Filter{{"container.finished_at", ">=", c.begin}, {"container.finished_at", "<", c.end}, {"requesting_container_uuid", "=", nil}},
+ Order: "created_at",
+ }
+ params := initialParams
+ for {
+ // This list variable must be a new one declared
+ // inside the loop: otherwise, items in the API
+ // response would get deep-merged into the items
+ // loaded in previous iterations.
+ var list arvados.ContainerRequestList
+
+ err := ac.RequestAndDecode(&list, "GET", "arvados/v1/container_requests", nil, params)
+ if err != nil {
+ logger.Errorf("Error getting container request list from Arvados API: %s", err)
+ break
+ }
+ if len(list.Items) == 0 {
+ break
+ }
+
+ for _, i := range list.Items {
+ uuidChannel <- i.UUID
+ }
+ params.Offset += len(list.Items)
+ }
+
+ }
+ }()
+
+ cost := make(map[string]consumption)
+
+ for uuid := range uuidChannel {
+ logger.Debugf("Considering %s", uuid)
if strings.Contains(uuid, "-j7d0g-") {
// This is a project (group)
- cost, err = handleProject(logger, uuid, arv, ac, kc, resultsDir, cache)
+ cost, err = handleProject(logger, uuid, arv, ac, kc, c.resultsDir, c.cache)
if err != nil {
exitcode = 1
return
cost[k] = v
}
} else if strings.Contains(uuid, "-xvhdp-") || strings.Contains(uuid, "-4zz18-") {
- // This is a container request
- var crCsv map[string]float64
- crCsv, err = generateCrCsv(logger, uuid, arv, ac, kc, resultsDir, cache)
+ // This is a container request or collection
+ var crInfo map[string]consumption
+ crInfo, err = generateCrInfo(logger, uuid, arv, ac, kc, c.resultsDir, c.cache)
if err != nil {
err = fmt.Errorf("error generating CSV for uuid %s: %s", uuid, err.Error())
exitcode = 2
return
}
- for k, v := range crCsv {
+ for k, v := range crInfo {
cost[k] = v
}
} else if strings.Contains(uuid, "-tpzed-") {
// keep going.
logger.Errorf("cost analysis is not supported for the 'Home' project: %s", uuid)
} else {
- logger.Errorf("this argument does not look like a uuid: %s\n", uuid)
+ logger.Errorf("this argument does not look like a uuid: %s", uuid)
exitcode = 3
return
}
}
if len(cost) == 0 {
- logger.Info("Nothing to do!\n")
+ logger.Info("Nothing to do!")
return
}
var csv string
- csv = "# Aggregate cost accounting for uuids:\n"
- for _, uuid := range uuids {
+ csv = "# Aggregate cost accounting for uuids:\n# UUID, Duration in seconds, Total cost\n"
+ for _, uuid := range c.uuids {
csv += "# " + uuid + "\n"
}
- var total float64
+ var total consumption
for k, v := range cost {
- csv += k + "," + strconv.FormatFloat(v, 'f', 8, 64) + "\n"
- total += v
+ csv += k + "," + strconv.FormatFloat(v.duration, 'f', 3, 64) + "," + strconv.FormatFloat(v.cost, 'f', 8, 64) + "\n"
+ total.Add(v)
}
- csv += "TOTAL," + strconv.FormatFloat(total, 'f', 8, 64) + "\n"
+ csv += "TOTAL," + strconv.FormatFloat(total.duration, 'f', 3, 64) + "," + strconv.FormatFloat(total.cost, 'f', 2, 64) + "\n"
- if resultsDir != "" {
+ if c.resultsDir != "" {
// Write the resulting CSV file
- aFile := resultsDir + "/" + time.Now().Format("2006-01-02-15-04-05") + "-aggregate-costaccounting.csv"
+ aFile := c.resultsDir + "/" + time.Now().Format("2006-01-02-15-04-05") + "-aggregate-costaccounting.csv"
err = ioutil.WriteFile(aFile, []byte(csv), 0644)
if err != nil {
err = fmt.Errorf("error writing file with path %s: %s", aFile, err.Error())
exitcode = 1
return
}
- logger.Infof("Aggregate cost accounting for all supplied uuids in %s\n", aFile)
+ logger.Infof("Aggregate cost accounting for all supplied uuids in %s", aFile)
}
// Output the total dollar amount on stdout
- fmt.Fprintf(stdout, "%s\n", strconv.FormatFloat(total, 'f', 8, 64))
+ fmt.Fprintf(stdout, "%s\n", strconv.FormatFloat(total.cost, 'f', 2, 64))
return
}
}
func (s *Suite) SetUpSuite(c *check.C) {
- arvadostest.StartAPI()
arvadostest.StartKeep(2, true)
// Get the various arvados, arvadosclient, and keep client objects
func (*Suite) TestUsage(c *check.C) {
var stdout, stderr bytes.Buffer
exitcode := Command.RunCommand("costanalyzer.test", []string{"-help", "-log-level=debug"}, &bytes.Buffer{}, &stdout, &stderr)
- c.Check(exitcode, check.Equals, 1)
+ c.Check(exitcode, check.Equals, 0)
c.Check(stdout.String(), check.Equals, "")
c.Check(stderr.String(), check.Matches, `(?ms).*Usage:.*`)
}
+func (*Suite) TestTimestampRange(c *check.C) {
+ var stdout, stderr bytes.Buffer
+ resultsDir := c.MkDir()
+ // Run costanalyzer with a timestamp range. This should pick up two container requests in "Final" state.
+ exitcode := Command.RunCommand("costanalyzer.test", []string{"-output", resultsDir, "-begin", "2020-11-02T00:00:00", "-end", "2020-11-03T23:59:00"}, &bytes.Buffer{}, &stdout, &stderr)
+ c.Check(exitcode, check.Equals, 0)
+ c.Assert(stderr.String(), check.Matches, "(?ms).*supplied uuids in .*")
+
+ uuidReport, err := ioutil.ReadFile(resultsDir + "/" + arvadostest.CompletedDiagnosticsContainerRequest1UUID + ".csv")
+ c.Assert(err, check.IsNil)
+ uuid2Report, err := ioutil.ReadFile(resultsDir + "/" + arvadostest.CompletedDiagnosticsContainerRequest2UUID + ".csv")
+ c.Assert(err, check.IsNil)
+
+ c.Check(string(uuidReport), check.Matches, "(?ms).*TOTAL,,,,,,763.467,,,,0.01")
+ c.Check(string(uuid2Report), check.Matches, "(?ms).*TOTAL,,,,,,488.775,,,,0.01")
+ re := regexp.MustCompile(`(?ms).*supplied uuids in (.*?)\n`)
+ matches := re.FindStringSubmatch(stderr.String()) // matches[1] contains a string like 'results/2020-11-02-18-57-45-aggregate-costaccounting.csv'
+
+ aggregateCostReport, err := ioutil.ReadFile(matches[1])
+ c.Assert(err, check.IsNil)
+
+ c.Check(string(aggregateCostReport), check.Matches, "(?ms).*TOTAL,1245.564,0.01")
+}
+
func (*Suite) TestContainerRequestUUID(c *check.C) {
var stdout, stderr bytes.Buffer
resultsDir := c.MkDir()
c.Assert(err, check.IsNil)
// Make sure the 'preemptible' flag was picked up
c.Check(string(uuidReport), check.Matches, "(?ms).*,Standard_E4s_v3,true,.*")
- c.Check(string(uuidReport), check.Matches, "(?ms).*TOTAL,,,,,,,,,7.01302889")
+ c.Check(string(uuidReport), check.Matches, "(?ms).*TOTAL,,,,,,86462.000,,,,7.01")
re := regexp.MustCompile(`(?ms).*supplied uuids in (.*?)\n`)
matches := re.FindStringSubmatch(stderr.String()) // matches[1] contains a string like 'results/2020-11-02-18-57-45-aggregate-costaccounting.csv'
aggregateCostReport, err := ioutil.ReadFile(matches[1])
c.Assert(err, check.IsNil)
- c.Check(string(aggregateCostReport), check.Matches, "(?ms).*TOTAL,7.01302889")
+ c.Check(string(aggregateCostReport), check.Matches, "(?ms).*TOTAL,86462.000,7.01")
}
func (*Suite) TestCollectionUUID(c *check.C) {
var stdout, stderr bytes.Buffer
-
resultsDir := c.MkDir()
- // Run costanalyzer with 1 collection uuid, without 'container_request' property
- exitcode := Command.RunCommand("costanalyzer.test", []string{"-output", resultsDir, arvadostest.FooCollection}, &bytes.Buffer{}, &stdout, &stderr)
- c.Check(exitcode, check.Equals, 2)
- c.Assert(stderr.String(), check.Matches, "(?ms).*does not have a 'container_request' property.*")
- // Update the collection, attach a 'container_request' property
+ // Create a collection with no container_request property
ac := arvados.NewClientFromEnv()
var coll arvados.Collection
+ err := ac.RequestAndDecode(&coll, "POST", "arvados/v1/collections", nil, nil)
+ c.Assert(err, check.IsNil)
- // Update collection record
- err := ac.RequestAndDecode(&coll, "PUT", "arvados/v1/collections/"+arvadostest.FooCollection, nil, map[string]interface{}{
+ exitcode := Command.RunCommand("costanalyzer.test", []string{"-output", resultsDir, coll.UUID}, &bytes.Buffer{}, &stdout, &stderr)
+ c.Check(exitcode, check.Equals, 2)
+ c.Assert(stderr.String(), check.Matches, "(?ms).*does not have a 'container_request' property.*")
+
+ stdout.Truncate(0)
+ stderr.Truncate(0)
+
+ // Add a container_request property
+ err = ac.RequestAndDecode(&coll, "PATCH", "arvados/v1/collections/"+coll.UUID, nil, map[string]interface{}{
"collection": map[string]interface{}{
"properties": map[string]interface{}{
"container_request": arvadostest.CompletedContainerRequestUUID,
})
c.Assert(err, check.IsNil)
- stdout.Truncate(0)
- stderr.Truncate(0)
-
- // Run costanalyzer with 1 collection uuid
+ // Re-run costanalyzer on the updated collection
resultsDir = c.MkDir()
- exitcode = Command.RunCommand("costanalyzer.test", []string{"-output", resultsDir, arvadostest.FooCollection}, &bytes.Buffer{}, &stdout, &stderr)
+ exitcode = Command.RunCommand("costanalyzer.test", []string{"-output", resultsDir, coll.UUID}, &bytes.Buffer{}, &stdout, &stderr)
c.Check(exitcode, check.Equals, 0)
c.Assert(stderr.String(), check.Matches, "(?ms).*supplied uuids in .*")
uuidReport, err := ioutil.ReadFile(resultsDir + "/" + arvadostest.CompletedContainerRequestUUID + ".csv")
c.Assert(err, check.IsNil)
- c.Check(string(uuidReport), check.Matches, "(?ms).*TOTAL,,,,,,,,,7.01302889")
+ c.Check(string(uuidReport), check.Matches, "(?ms).*TOTAL,,,,,,86462.000,,,,7.01")
re := regexp.MustCompile(`(?ms).*supplied uuids in (.*?)\n`)
matches := re.FindStringSubmatch(stderr.String()) // matches[1] contains a string like 'results/2020-11-02-18-57-45-aggregate-costaccounting.csv'
aggregateCostReport, err := ioutil.ReadFile(matches[1])
c.Assert(err, check.IsNil)
- c.Check(string(aggregateCostReport), check.Matches, "(?ms).*TOTAL,7.01302889")
+ c.Check(string(aggregateCostReport), check.Matches, "(?ms).*TOTAL,86462.000,7.01")
}
func (*Suite) TestDoubleContainerRequestUUID(c *check.C) {
uuidReport, err := ioutil.ReadFile(resultsDir + "/" + arvadostest.CompletedContainerRequestUUID + ".csv")
c.Assert(err, check.IsNil)
- c.Check(string(uuidReport), check.Matches, "(?ms).*TOTAL,,,,,,,,,7.01302889")
+ c.Check(string(uuidReport), check.Matches, "(?ms).*TOTAL,,,,,,86462.000,,,,7.01")
uuidReport2, err := ioutil.ReadFile(resultsDir + "/" + arvadostest.CompletedContainerRequestUUID2 + ".csv")
c.Assert(err, check.IsNil)
- c.Check(string(uuidReport2), check.Matches, "(?ms).*TOTAL,,,,,,,,,42.27031111")
+ c.Check(string(uuidReport2), check.Matches, "(?ms).*TOTAL,,,,,,86462.000,,,,42.27")
re := regexp.MustCompile(`(?ms).*supplied uuids in (.*?)\n`)
matches := re.FindStringSubmatch(stderr.String()) // matches[1] contains a string like 'results/2020-11-02-18-57-45-aggregate-costaccounting.csv'
aggregateCostReport, err := ioutil.ReadFile(matches[1])
c.Assert(err, check.IsNil)
- c.Check(string(aggregateCostReport), check.Matches, "(?ms).*TOTAL,49.28334000")
+ c.Check(string(aggregateCostReport), check.Matches, "(?ms).*TOTAL,172924.000,49.28")
stdout.Truncate(0)
stderr.Truncate(0)
uuidReport, err = ioutil.ReadFile(resultsDir + "/" + arvadostest.CompletedContainerRequestUUID + ".csv")
c.Assert(err, check.IsNil)
- c.Check(string(uuidReport), check.Matches, "(?ms).*TOTAL,,,,,,,,,7.01302889")
+ c.Check(string(uuidReport), check.Matches, "(?ms).*TOTAL,,,,,,86462.000,,,,7.01")
uuidReport2, err = ioutil.ReadFile(resultsDir + "/" + arvadostest.CompletedContainerRequestUUID2 + ".csv")
c.Assert(err, check.IsNil)
- c.Check(string(uuidReport2), check.Matches, "(?ms).*TOTAL,,,,,,,,,42.27031111")
+ c.Check(string(uuidReport2), check.Matches, "(?ms).*TOTAL,,,,,,86462.000,,,,42.27")
re = regexp.MustCompile(`(?ms).*supplied uuids in (.*?)\n`)
matches = re.FindStringSubmatch(stderr.String()) // matches[1] contains a string like 'results/2020-11-02-18-57-45-aggregate-costaccounting.csv'
aggregateCostReport, err = ioutil.ReadFile(matches[1])
c.Assert(err, check.IsNil)
- c.Check(string(aggregateCostReport), check.Matches, "(?ms).*TOTAL,49.28334000")
+ c.Check(string(aggregateCostReport), check.Matches, "(?ms).*TOTAL,172924.000,49.28")
+}
+
+func (*Suite) TestUncommittedContainerRequest(c *check.C) {
+ var stdout, stderr bytes.Buffer
+ // Run costanalyzer with 2 container request uuids, one of which is in the Uncommitted state, without output directory specified
+ exitcode := Command.RunCommand("costanalyzer.test", []string{arvadostest.UncommittedContainerRequestUUID, arvadostest.CompletedDiagnosticsContainerRequest2UUID}, &bytes.Buffer{}, &stdout, &stderr)
+ c.Check(exitcode, check.Equals, 0)
+ c.Assert(stderr.String(), check.Not(check.Matches), "(?ms).*supplied uuids in .*")
+ c.Assert(stderr.String(), check.Matches, "(?ms).*No container associated with container request .*")
+
+ // Check that the total amount was printed to stdout
+ c.Check(stdout.String(), check.Matches, "0.01\n")
}
func (*Suite) TestMultipleContainerRequestUUIDWithReuse(c *check.C) {
c.Assert(stderr.String(), check.Not(check.Matches), "(?ms).*supplied uuids in .*")
// Check that the total amount was printed to stdout
- c.Check(stdout.String(), check.Matches, "0.01492030\n")
+ c.Check(stdout.String(), check.Matches, "0.01\n")
stdout.Truncate(0)
stderr.Truncate(0)
uuidReport, err := ioutil.ReadFile(resultsDir + "/" + arvadostest.CompletedDiagnosticsContainerRequest1UUID + ".csv")
c.Assert(err, check.IsNil)
- c.Check(string(uuidReport), check.Matches, "(?ms).*TOTAL,,,,,,,,,0.00916192")
+ c.Check(string(uuidReport), check.Matches, "(?ms).*TOTAL,,,,,,763.467,,,,0.01")
uuidReport2, err := ioutil.ReadFile(resultsDir + "/" + arvadostest.CompletedDiagnosticsContainerRequest2UUID + ".csv")
c.Assert(err, check.IsNil)
- c.Check(string(uuidReport2), check.Matches, "(?ms).*TOTAL,,,,,,,,,0.00588088")
+ c.Check(string(uuidReport2), check.Matches, "(?ms).*TOTAL,,,,,,488.775,,,,0.01")
re := regexp.MustCompile(`(?ms).*supplied uuids in (.*?)\n`)
matches := re.FindStringSubmatch(stderr.String()) // matches[1] contains a string like 'results/2020-11-02-18-57-45-aggregate-costaccounting.csv'
aggregateCostReport, err := ioutil.ReadFile(matches[1])
c.Assert(err, check.IsNil)
- c.Check(string(aggregateCostReport), check.Matches, "(?ms).*TOTAL,0.01492030")
+ c.Check(string(aggregateCostReport), check.Matches, "(?ms).*TOTAL,1245.564,0.01")
}
//
// Stdout and stderr in the child process are sent to the systemd
// journal using the systemd-cat program, when available.
-func Detach(uuid string, prog string, args []string, stdout, stderr io.Writer) int {
- return exitcode(stderr, detach(uuid, prog, args, stdout, stderr))
+func Detach(uuid string, prog string, args []string, stdin io.Reader, stdout, stderr io.Writer) int {
+ return exitcode(stderr, detach(uuid, prog, args, stdin, stdout))
}
-func detach(uuid string, prog string, args []string, stdout, stderr io.Writer) error {
+func detach(uuid string, prog string, args []string, stdin io.Reader, stdout io.Writer) error {
lockfile, err := func() (*os.File, error) {
// We must hold the dir-level lock between
// opening/creating the lockfile and acquiring LOCK_EX
// invoked as "/path/to/crunch-run"
execargs = append([]string{prog}, execargs...)
}
- execargs = append([]string{
- // Here, if the inner systemd-cat can't exec
- // crunch-run, it writes an error message to stderr,
- // and the outer systemd-cat writes it to the journal
- // where the operator has a chance to discover it. (If
- // we only used one systemd-cat command, it would be
- // up to us to report the error -- but we are going to
- // detach and exit, not wait for something to appear
- // on stderr.) Note these systemd-cat calls don't
- // result in additional processes -- they just connect
- // stderr/stdout to sockets and call exec().
- "systemd-cat", "--identifier=crunch-run",
- "systemd-cat", "--identifier=crunch-run",
- }, execargs...)
+ if _, err := exec.LookPath("systemd-cat"); err == nil {
+ execargs = append([]string{
+ // Here, if the inner systemd-cat can't exec
+ // crunch-run, it writes an error message to
+ // stderr, and the outer systemd-cat writes it
+ // to the journal where the operator has a
+ // chance to discover it. (If we only used one
+ // systemd-cat command, it would be up to us
+ // to report the error -- but we are going to
+ // detach and exit, not wait for something to
+ // appear on stderr.) Note these systemd-cat
+ // calls don't result in additional processes
+ // -- they just connect stderr/stdout to
+ // sockets and call exec().
+ "systemd-cat", "--identifier=crunch-run",
+ "systemd-cat", "--identifier=crunch-run",
+ }, execargs...)
+ }
cmd := exec.Command(execargs[0], execargs[1:]...)
// Child inherits lockfile.
// from parent (sshd) while sending lockfile content to
// caller.
cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
+ // We need to manage our own OS pipe here to ensure the child
+ // process reads all of our stdin pipe before we return.
+ piper, pipew, err := os.Pipe()
+ if err != nil {
+ return err
+ }
+ defer pipew.Close()
+ cmd.Stdin = piper
err = cmd.Start()
if err != nil {
return fmt.Errorf("exec %s: %s", cmd.Path, err)
}
+ _, err = io.Copy(pipew, stdin)
+ if err != nil {
+ return err
+ }
+ err = pipew.Close()
+ if err != nil {
+ return err
+ }
w := io.MultiWriter(stdout, lockfile)
return json.NewEncoder(w).Encode(procinfo{
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package crunchrun
+
+import (
+ "bytes"
+ "io"
+ "sync"
+)
+
+type bufThenWrite struct {
+ buf bytes.Buffer
+ w io.Writer
+ mtx sync.Mutex
+}
+
+func (btw *bufThenWrite) SetWriter(w io.Writer) error {
+ btw.mtx.Lock()
+ defer btw.mtx.Unlock()
+ btw.w = w
+ _, err := io.Copy(w, &btw.buf)
+ return err
+}
+
+func (btw *bufThenWrite) Write(p []byte) (int, error) {
+ btw.mtx.Lock()
+ defer btw.mtx.Unlock()
+ if btw.w == nil {
+ btw.w = &btw.buf
+ }
+ return btw.w.Write(p)
+}
"os"
"os/exec"
"sync"
+ "sync/atomic"
"syscall"
+ "time"
"git.arvados.org/arvados.git/lib/selfsigned"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"git.arvados.org/arvados.git/sdk/go/httpserver"
"github.com/creack/pty"
+ dockerclient "github.com/docker/docker/client"
"github.com/google/shlex"
"golang.org/x/crypto/ssh"
+ "golang.org/x/net/context"
)
type Gateway struct {
Log interface {
Printf(fmt string, args ...interface{})
}
+	// ContainerIPAddress returns the local IP address of the
+	// running container, or an error if it is not available.
+ ContainerIPAddress func() (string, error)
sshConfig ssh.ServerConfig
requestAuth string
respondAuth string
}
-// startGatewayServer starts an http server that allows authenticated
-// clients to open an interactive "docker exec" session and (in
-// future) connect to tcp ports inside the docker container.
+// Start starts an http server that allows authenticated clients to open an
+// interactive "docker exec" session and (in future) connect to tcp ports
+// inside the docker container.
func (gw *Gateway) Start() error {
gw.sshConfig = ssh.ServerConfig{
NoClientAuth: true,
PasswordCallback: func(c ssh.ConnMetadata, pass []byte) (*ssh.Permissions, error) {
if c.User() == "_" {
return nil, nil
- } else {
- return nil, fmt.Errorf("cannot specify user %q via ssh client", c.User())
}
+ return nil, fmt.Errorf("cannot specify user %q via ssh client", c.User())
},
PublicKeyCallback: func(c ssh.ConnMetadata, pubKey ssh.PublicKey) (*ssh.Permissions, error) {
if c.User() == "_" {
"pubkey-fp": ssh.FingerprintSHA256(pubKey),
},
}, nil
- } else {
- return nil, fmt.Errorf("cannot specify user %q via ssh client", c.User())
}
+ return nil, fmt.Errorf("cannot specify user %q via ssh client", c.User())
},
}
pvt, err := rsa.GenerateKey(rand.Reader, 2048)
defer conn.Close()
go ssh.DiscardRequests(reqs)
for newch := range newchans {
- if newch.ChannelType() != "session" {
- newch.Reject(ssh.UnknownChannelType, fmt.Sprintf("unsupported channel type %q", newch.ChannelType()))
- continue
+ switch newch.ChannelType() {
+ case "direct-tcpip":
+ go gw.handleDirectTCPIP(ctx, newch)
+ case "session":
+ go gw.handleSession(ctx, newch, detachKeys, username)
+ default:
+ go newch.Reject(ssh.UnknownChannelType, fmt.Sprintf("unsupported channel type %q", newch.ChannelType()))
}
- ch, reqs, err := newch.Accept()
+ }
+}
+
+func (gw *Gateway) handleDirectTCPIP(ctx context.Context, newch ssh.NewChannel) {
+ ch, reqs, err := newch.Accept()
+ if err != nil {
+ gw.Log.Printf("accept direct-tcpip channel: %s", err)
+ return
+ }
+ defer ch.Close()
+ go ssh.DiscardRequests(reqs)
+
+ // RFC 4254 7.2 (copy of channelOpenDirectMsg in
+ // golang.org/x/crypto/ssh)
+ var msg struct {
+ Raddr string
+ Rport uint32
+ Laddr string
+ Lport uint32
+ }
+ err = ssh.Unmarshal(newch.ExtraData(), &msg)
+ if err != nil {
+ fmt.Fprintf(ch.Stderr(), "unmarshal direct-tcpip extradata: %s\n", err)
+ return
+ }
+ switch msg.Raddr {
+ case "localhost", "0.0.0.0", "127.0.0.1", "::1", "::":
+ default:
+ fmt.Fprintf(ch.Stderr(), "cannot forward to ports on %q, only localhost\n", msg.Raddr)
+ return
+ }
+
+ var dstaddr string
+ if gw.ContainerIPAddress != nil {
+ dstaddr, err = gw.ContainerIPAddress()
if err != nil {
- gw.Log.Printf("accept channel: %s", err)
+ fmt.Fprintf(ch.Stderr(), "container has no IP address: %s\n", err)
return
}
- var pty0, tty0 *os.File
- go func() {
- // Where to send errors/messages for the
- // client to see
- logw := io.Writer(ch.Stderr())
- // How to end lines when sending
- // errors/messages to the client (changes to
- // \r\n when using a pty)
- eol := "\n"
- // Env vars to add to child process
- termEnv := []string(nil)
- for req := range reqs {
- ok := false
- switch req.Type {
- case "shell", "exec":
- ok = true
- var payload struct {
- Command string
- }
- ssh.Unmarshal(req.Payload, &payload)
- execargs, err := shlex.Split(payload.Command)
- if err != nil {
- fmt.Fprintf(logw, "error parsing supplied command: %s"+eol, err)
- return
- }
- if len(execargs) == 0 {
- execargs = []string{"/bin/bash", "-login"}
- }
- go func() {
- cmd := exec.CommandContext(ctx, "docker", "exec", "-i", "--detach-keys="+detachKeys, "--user="+username)
- cmd.Stdin = ch
- cmd.Stdout = ch
- cmd.Stderr = ch.Stderr()
- if tty0 != nil {
- cmd.Args = append(cmd.Args, "-t")
- cmd.Stdin = tty0
- cmd.Stdout = tty0
- cmd.Stderr = tty0
- var wg sync.WaitGroup
- defer wg.Wait()
- wg.Add(2)
- go func() { io.Copy(ch, pty0); wg.Done() }()
- go func() { io.Copy(pty0, ch); wg.Done() }()
- // Send our own debug messages to tty as well.
- logw = tty0
- }
- cmd.Args = append(cmd.Args, *gw.DockerContainerID)
- cmd.Args = append(cmd.Args, execargs...)
- cmd.SysProcAttr = &syscall.SysProcAttr{
- Setctty: tty0 != nil,
- Setsid: true,
- }
- cmd.Env = append(os.Environ(), termEnv...)
- err := cmd.Run()
- var resp struct {
- Status uint32
- }
- if exiterr, ok := err.(*exec.ExitError); ok {
- if status, ok := exiterr.Sys().(syscall.WaitStatus); ok {
- resp.Status = uint32(status.ExitStatus())
- }
- } else if err != nil {
- // Propagate errors like `exec: "docker": executable file not found in $PATH`
- fmt.Fprintln(ch.Stderr(), err)
- }
- errClose := ch.CloseWrite()
- if resp.Status == 0 && (err != nil || errClose != nil) {
- resp.Status = 1
- }
- ch.SendRequest("exit-status", false, ssh.Marshal(&resp))
- ch.Close()
- }()
- case "pty-req":
- eol = "\r\n"
- p, t, err := pty.Open()
- if err != nil {
- fmt.Fprintf(ch.Stderr(), "pty failed: %s"+eol, err)
- break
- }
- defer p.Close()
- defer t.Close()
- pty0, tty0 = p, t
- ok = true
- var payload struct {
- Term string
- Cols uint32
- Rows uint32
- X uint32
- Y uint32
- }
- ssh.Unmarshal(req.Payload, &payload)
- termEnv = []string{"TERM=" + payload.Term, "USE_TTY=1"}
- err = pty.Setsize(pty0, &pty.Winsize{Rows: uint16(payload.Rows), Cols: uint16(payload.Cols), X: uint16(payload.X), Y: uint16(payload.Y)})
- if err != nil {
- fmt.Fprintf(logw, "pty-req: setsize failed: %s"+eol, err)
- }
- case "window-change":
- var payload struct {
- Cols uint32
- Rows uint32
- X uint32
- Y uint32
- }
- ssh.Unmarshal(req.Payload, &payload)
- err := pty.Setsize(pty0, &pty.Winsize{Rows: uint16(payload.Rows), Cols: uint16(payload.Cols), X: uint16(payload.X), Y: uint16(payload.Y)})
- if err != nil {
- fmt.Fprintf(logw, "window-change: setsize failed: %s"+eol, err)
- break
+ }
+ if dstaddr == "" {
+ fmt.Fprintf(ch.Stderr(), "container has no IP address\n")
+ return
+ }
+
+ dst := net.JoinHostPort(dstaddr, fmt.Sprintf("%d", msg.Rport))
+ tcpconn, err := net.Dial("tcp", dst)
+ if err != nil {
+ fmt.Fprintf(ch.Stderr(), "%s: %s\n", dst, err)
+ return
+ }
+ go func() {
+ n, _ := io.Copy(ch, tcpconn)
+ ctxlog.FromContext(ctx).Debugf("tcpip: sent %d bytes\n", n)
+ ch.CloseWrite()
+ }()
+ n, _ := io.Copy(tcpconn, ch)
+ ctxlog.FromContext(ctx).Debugf("tcpip: received %d bytes\n", n)
+}
+
+func (gw *Gateway) handleSession(ctx context.Context, newch ssh.NewChannel, detachKeys, username string) {
+ ch, reqs, err := newch.Accept()
+ if err != nil {
+ gw.Log.Printf("accept session channel: %s", err)
+ return
+ }
+ var pty0, tty0 *os.File
+ // Where to send errors/messages for the client to see
+ logw := io.Writer(ch.Stderr())
+ // How to end lines when sending errors/messages to the client
+ // (changes to \r\n when using a pty)
+ eol := "\n"
+ // Env vars to add to child process
+ termEnv := []string(nil)
+ for req := range reqs {
+ ok := false
+ switch req.Type {
+ case "shell", "exec":
+ ok = true
+ var payload struct {
+ Command string
+ }
+ ssh.Unmarshal(req.Payload, &payload)
+ execargs, err := shlex.Split(payload.Command)
+ if err != nil {
+ fmt.Fprintf(logw, "error parsing supplied command: %s"+eol, err)
+ return
+ }
+ if len(execargs) == 0 {
+ execargs = []string{"/bin/bash", "-login"}
+ }
+ go func() {
+ cmd := exec.CommandContext(ctx, "docker", "exec", "-i", "--detach-keys="+detachKeys, "--user="+username)
+ cmd.Stdin = ch
+ cmd.Stdout = ch
+ cmd.Stderr = ch.Stderr()
+ if tty0 != nil {
+ cmd.Args = append(cmd.Args, "-t")
+ cmd.Stdin = tty0
+ cmd.Stdout = tty0
+ cmd.Stderr = tty0
+ var wg sync.WaitGroup
+ defer wg.Wait()
+ wg.Add(2)
+ go func() { io.Copy(ch, pty0); wg.Done() }()
+ go func() { io.Copy(pty0, ch); wg.Done() }()
+ // Send our own debug messages to tty as well.
+ logw = tty0
+ }
+ cmd.Args = append(cmd.Args, *gw.DockerContainerID)
+ cmd.Args = append(cmd.Args, execargs...)
+ cmd.SysProcAttr = &syscall.SysProcAttr{
+ Setctty: tty0 != nil,
+ Setsid: true,
+ }
+ cmd.Env = append(os.Environ(), termEnv...)
+ err := cmd.Run()
+ var resp struct {
+ Status uint32
+ }
+ if exiterr, ok := err.(*exec.ExitError); ok {
+ if status, ok := exiterr.Sys().(syscall.WaitStatus); ok {
+ resp.Status = uint32(status.ExitStatus())
}
- ok = true
- case "env":
- // TODO: implement "env"
- // requests by setting env
- // vars in the docker-exec
- // command (not docker-exec's
- // own environment, which
- // would be a gaping security
- // hole).
- default:
- // fmt.Fprintf(logw, "declining %q req"+eol, req.Type)
+ } else if err != nil {
+ // Propagate errors like `exec: "docker": executable file not found in $PATH`
+ fmt.Fprintln(ch.Stderr(), err)
}
- if req.WantReply {
- req.Reply(ok, nil)
+ errClose := ch.CloseWrite()
+ if resp.Status == 0 && (err != nil || errClose != nil) {
+ resp.Status = 1
}
+ ch.SendRequest("exit-status", false, ssh.Marshal(&resp))
+ ch.Close()
+ }()
+ case "pty-req":
+ eol = "\r\n"
+ p, t, err := pty.Open()
+ if err != nil {
+ fmt.Fprintf(ch.Stderr(), "pty failed: %s"+eol, err)
+ break
+ }
+ defer p.Close()
+ defer t.Close()
+ pty0, tty0 = p, t
+ ok = true
+ var payload struct {
+ Term string
+ Cols uint32
+ Rows uint32
+ X uint32
+ Y uint32
+ }
+ ssh.Unmarshal(req.Payload, &payload)
+ termEnv = []string{"TERM=" + payload.Term, "USE_TTY=1"}
+ err = pty.Setsize(pty0, &pty.Winsize{Rows: uint16(payload.Rows), Cols: uint16(payload.Cols), X: uint16(payload.X), Y: uint16(payload.Y)})
+ if err != nil {
+ fmt.Fprintf(logw, "pty-req: setsize failed: %s"+eol, err)
}
- }()
+ case "window-change":
+ var payload struct {
+ Cols uint32
+ Rows uint32
+ X uint32
+ Y uint32
+ }
+ ssh.Unmarshal(req.Payload, &payload)
+ err := pty.Setsize(pty0, &pty.Winsize{Rows: uint16(payload.Rows), Cols: uint16(payload.Cols), X: uint16(payload.X), Y: uint16(payload.Y)})
+ if err != nil {
+ fmt.Fprintf(logw, "window-change: setsize failed: %s"+eol, err)
+ break
+ }
+ ok = true
+ case "env":
+			// TODO: implement "env" requests by setting env
+			// vars in the docker-exec command (not docker-exec's
+			// own environment, which would be a gaping security
+			// hole).
+ default:
+ // fmt.Fprintf(logw, "declining %q req"+eol, req.Type)
+ }
+ if req.WantReply {
+ req.Reply(ok, nil)
+ }
+ }
+}
+
+func dockerContainerIPAddress(containerID *string) func() (string, error) {
+ var saved atomic.Value
+ return func() (string, error) {
+ if ip, ok := saved.Load().(*string); ok {
+ return *ip, nil
+ }
+ docker, err := dockerclient.NewClient(dockerclient.DefaultDockerHost, "1.21", nil, nil)
+ if err != nil {
+ return "", fmt.Errorf("cannot create docker client: %s", err)
+ }
+ ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(time.Minute))
+ defer cancel()
+ ctr, err := docker.ContainerInspect(ctx, *containerID)
+ if err != nil {
+ return "", fmt.Errorf("cannot get docker container info: %s", err)
+ }
+ ip := ctr.NetworkSettings.IPAddress
+ if ip == "" {
+ // TODO: try to enable networking if it wasn't
+ // already enabled when the container was
+ // created.
+ return "", fmt.Errorf("container has no IP address")
+ }
+ saved.Store(&ip)
+ return ip, nil
}
}
keepClient IKeepClient
hostOutputDir string
ctrOutputDir string
- binds []string
+ bindmounts map[string]bindmount
mounts map[string]arvados.Mount
secretMounts map[string]arvados.Mount
logger printfer
})
return nil
}
-
- return fmt.Errorf("Unsupported file type (mode %o) in output dir: %q", fi.Mode(), src)
+ cp.logger.Printf("Skipping unsupported file type (mode %o) in output dir: %q", fi.Mode(), src)
+ return nil
}
// Return the host path that was mounted at the given path in the
if ctrRoot == cp.ctrOutputDir {
return cp.hostOutputDir, nil
}
- for _, bind := range cp.binds {
- tokens := strings.Split(bind, ":")
- if len(tokens) >= 2 && tokens[1] == ctrRoot {
- return tokens[0], nil
- }
+ if mnt, ok := cp.bindmounts[ctrRoot]; ok {
+ return mnt.HostPath, nil
}
return "", fmt.Errorf("not bind-mounted: %q", ctrRoot)
}
package crunchrun
import (
+ "bytes"
"io"
"io/ioutil"
"os"
+ "syscall"
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/arvadosclient"
"git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "github.com/sirupsen/logrus"
check "gopkg.in/check.v1"
)
var _ = check.Suite(&copierSuite{})
type copierSuite struct {
- cp copier
+ cp copier
+ log bytes.Buffer
}
func (s *copierSuite) SetUpTest(c *check.C) {
- tmpdir, err := ioutil.TempDir("", "crunch-run.test.")
- c.Assert(err, check.IsNil)
+ tmpdir := c.MkDir()
api, err := arvadosclient.MakeArvadosClient()
c.Assert(err, check.IsNil)
+ s.log = bytes.Buffer{}
s.cp = copier{
client: arvados.NewClientFromEnv(),
arvClient: api,
secretMounts: map[string]arvados.Mount{
"/secret_text": {Kind: "text", Content: "xyzzy"},
},
+ logger: &logrus.Logger{Out: &s.log, Formatter: &logrus.TextFormatter{}, Level: logrus.InfoLevel},
}
}
-func (s *copierSuite) TearDownTest(c *check.C) {
- os.RemoveAll(s.cp.hostOutputDir)
-}
-
func (s *copierSuite) TestEmptyOutput(c *check.C) {
err := s.cp.walkMount("", s.cp.ctrOutputDir, 10, true)
c.Check(err, check.IsNil)
_, err = io.WriteString(f, "foo")
c.Assert(err, check.IsNil)
c.Assert(f.Close(), check.IsNil)
+ err = syscall.Mkfifo(s.cp.hostOutputDir+"/dir1/fifo", 0644)
+ c.Assert(err, check.IsNil)
err = s.cp.walkMount("", s.cp.ctrOutputDir, 10, true)
c.Check(err, check.IsNil)
{src: os.DevNull, dst: "/dir1/dir2/dir3/.keep"},
{src: s.cp.hostOutputDir + "/dir1/foo", dst: "/dir1/foo", size: 3},
})
+ c.Check(s.log.String(), check.Matches, `.* msg="Skipping unsupported file type \(mode 200000644\) in output dir: \\"/ctr/outdir/dir1/fifo\\""\n`)
}
func (s *copierSuite) TestSymlinkCycle(c *check.C) {
PortableDataHash: arvadostest.FooCollectionPDH,
Writable: true,
}
- s.cp.binds = append(s.cp.binds, bindtmp+":/mnt-w")
+ s.cp.bindmounts = map[string]bindmount{
+ "/mnt-w": bindmount{HostPath: bindtmp, ReadOnly: false},
+ }
c.Assert(os.Symlink("../../mnt", s.cp.hostOutputDir+"/l_dir"), check.IsNil)
c.Assert(os.Symlink("/mnt/foo", s.cp.hostOutputDir+"/l_file"), check.IsNil)
import (
"bytes"
+ "context"
"encoding/json"
"errors"
"flag"
"io"
"io/ioutil"
"log"
+ "net"
+ "net/http"
"os"
"os/exec"
"os/signal"
"git.arvados.org/arvados.git/sdk/go/arvadosclient"
"git.arvados.org/arvados.git/sdk/go/keepclient"
"git.arvados.org/arvados.git/sdk/go/manifest"
- "golang.org/x/net/context"
-
- dockertypes "github.com/docker/docker/api/types"
- dockercontainer "github.com/docker/docker/api/types/container"
- dockernetwork "github.com/docker/docker/api/types/network"
- dockerclient "github.com/docker/docker/client"
)
type command struct{}
var Command = command{}
+// ConfigData contains environment variables and (when needed) cluster
+// configuration, passed from dispatchcloud to crunch-run on stdin.
+type ConfigData struct {
+ Env map[string]string
+ KeepBuffers int
+ Cluster *arvados.Cluster
+}
+
// IArvadosClient is the minimal Arvados API methods used by crunch-run.
type IArvadosClient interface {
Create(resourceType string, parameters arvadosclient.Dict, output interface{}) error
// IKeepClient is the minimal Keep API methods used by crunch-run.
type IKeepClient interface {
- PutB(buf []byte) (string, int, error)
+ BlockWrite(context.Context, arvados.BlockWriteOptions) (arvados.BlockWriteResponse, error)
ReadAt(locator string, p []byte, off int) (int, error)
ManifestFileReader(m manifest.Manifest, filename string) (arvados.File, error)
LocalLocator(locator string) (string, error)
ClearBlockCache()
+ SetStorageClasses(sc []string)
}
// NewLogWriter is a factory function to create a new log writer.
type NewLogWriter func(name string) (io.WriteCloser, error)
-type RunArvMount func(args []string, tok string) (*exec.Cmd, error)
+type RunArvMount func(cmdline []string, tok string) (*exec.Cmd, error)
type MkTempDir func(string, string) (string, error)
-// ThinDockerClient is the minimal Docker client interface used by crunch-run.
-type ThinDockerClient interface {
- ContainerAttach(ctx context.Context, container string, options dockertypes.ContainerAttachOptions) (dockertypes.HijackedResponse, error)
- ContainerCreate(ctx context.Context, config *dockercontainer.Config, hostConfig *dockercontainer.HostConfig,
- networkingConfig *dockernetwork.NetworkingConfig, containerName string) (dockercontainer.ContainerCreateCreatedBody, error)
- ContainerStart(ctx context.Context, container string, options dockertypes.ContainerStartOptions) error
- ContainerRemove(ctx context.Context, container string, options dockertypes.ContainerRemoveOptions) error
- ContainerWait(ctx context.Context, container string, condition dockercontainer.WaitCondition) (<-chan dockercontainer.ContainerWaitOKBody, <-chan error)
- ContainerInspect(ctx context.Context, id string) (dockertypes.ContainerJSON, error)
- ImageInspectWithRaw(ctx context.Context, image string) (dockertypes.ImageInspect, []byte, error)
- ImageLoad(ctx context.Context, input io.Reader, quiet bool) (dockertypes.ImageLoadResponse, error)
- ImageRemove(ctx context.Context, image string, options dockertypes.ImageRemoveOptions) ([]dockertypes.ImageDeleteResponseItem, error)
-}
-
type PsProcess interface {
CmdlineSlice() ([]string, error)
}
// ContainerRunner is the main stateful struct used for a single execution of a
// container.
type ContainerRunner struct {
- Docker ThinDockerClient
+ executor containerExecutor
+ executorStdin io.Closer
+ executorStdout io.Closer
+ executorStderr io.Closer
// Dispatcher client is initialized with the Dispatcher token.
// This is a privileged token used to manage container status
ContainerArvClient IArvadosClient
ContainerKeepClient IKeepClient
- Container arvados.Container
- ContainerConfig dockercontainer.Config
- HostConfig dockercontainer.HostConfig
- token string
- ContainerID string
- ExitCode *int
- NewLogWriter NewLogWriter
- loggingDone chan bool
- CrunchLog *ThrottledLogger
- Stdout io.WriteCloser
- Stderr io.WriteCloser
- logUUID string
- logMtx sync.Mutex
- LogCollection arvados.CollectionFileSystem
- LogsPDH *string
- RunArvMount RunArvMount
- MkTempDir MkTempDir
- ArvMount *exec.Cmd
- ArvMountPoint string
- HostOutputDir string
- Binds []string
- Volumes map[string]struct{}
- OutputPDH *string
- SigChan chan os.Signal
- ArvMountExit chan error
- SecretMounts map[string]arvados.Mount
- MkArvClient func(token string) (IArvadosClient, IKeepClient, *arvados.Client, error)
- finalState string
- parentTemp string
-
+ Container arvados.Container
+ token string
+ ExitCode *int
+ NewLogWriter NewLogWriter
+ CrunchLog *ThrottledLogger
+ logUUID string
+ logMtx sync.Mutex
+ LogCollection arvados.CollectionFileSystem
+ LogsPDH *string
+ RunArvMount RunArvMount
+ MkTempDir MkTempDir
+ ArvMount *exec.Cmd
+ ArvMountPoint string
+ HostOutputDir string
+ Volumes map[string]struct{}
+ OutputPDH *string
+ SigChan chan os.Signal
+ ArvMountExit chan error
+ SecretMounts map[string]arvados.Mount
+ MkArvClient func(token string) (IArvadosClient, IKeepClient, *arvados.Client, error)
+ finalState string
+ parentTemp string
+
+ keepstoreLogger io.WriteCloser
+ keepstoreLogbuf *bufThenWrite
statLogger io.WriteCloser
statReporter *crunchstat.Reporter
hoststatLogger io.WriteCloser
cStateLock sync.Mutex
cCancelled bool // StopContainer() invoked
- cRemoved bool // docker confirmed the container no longer exists
- enableNetwork string // one of "default" or "always"
- networkMode string // passed through to HostConfig.NetworkMode
- arvMountLog *ThrottledLogger
+ enableMemoryLimit bool
+ enableNetwork string // one of "default" or "always"
+ networkMode string // "none", "host", or "" -- passed through to executor
+ arvMountLog *ThrottledLogger
containerWatchdogInterval time.Duration
gateway Gateway
}
-// setupSignals sets up signal handling to gracefully terminate the underlying
-// Docker container and update state when receiving a TERM, INT or QUIT signal.
+// setupSignals sets up signal handling to gracefully terminate the
+// underlying container and update state when receiving a TERM, INT or
+// QUIT signal.
func (runner *ContainerRunner) setupSignals() {
runner.SigChan = make(chan os.Signal, 1)
signal.Notify(runner.SigChan, syscall.SIGTERM)
}(runner.SigChan)
}
-// stop the underlying Docker container.
+// stop the underlying container.
func (runner *ContainerRunner) stop(sig os.Signal) {
runner.cStateLock.Lock()
defer runner.cStateLock.Unlock()
if sig != nil {
runner.CrunchLog.Printf("caught signal: %v", sig)
}
- if runner.ContainerID == "" {
- return
- }
runner.cCancelled = true
- runner.CrunchLog.Printf("removing container")
- err := runner.Docker.ContainerRemove(context.TODO(), runner.ContainerID, dockertypes.ContainerRemoveOptions{Force: true})
+ runner.CrunchLog.Printf("stopping container")
+ err := runner.executor.Stop()
if err != nil {
- runner.CrunchLog.Printf("error removing container: %s", err)
- }
- if err == nil || strings.Contains(err.Error(), "No such container: "+runner.ContainerID) {
- runner.cRemoved = true
+ runner.CrunchLog.Printf("error stopping container: %s", err)
}
}
-// LoadImage determines the docker image id from the container record and
-// checks if it is available in the local Docker image store. If not, it loads
-// the image from Keep.
+// LoadImage determines the docker image id from the container record,
+// locates the image tarball in the arv-mounted image collection, and
+// asks the executor to load it, returning the image id.
-func (runner *ContainerRunner) LoadImage() (err error) {
-
+func (runner *ContainerRunner) LoadImage() (string, error) {
runner.CrunchLog.Printf("Fetching Docker image from collection '%s'", runner.Container.ContainerImage)
- var collection arvados.Collection
- err = runner.ContainerArvClient.Get("collections", runner.Container.ContainerImage, nil, &collection)
+ d, err := os.Open(runner.ArvMountPoint + "/by_id/" + runner.Container.ContainerImage)
if err != nil {
- return fmt.Errorf("While getting container image collection: %v", err)
+ return "", err
+ }
+ defer d.Close()
+ allfiles, err := d.Readdirnames(-1)
+ if err != nil {
+ return "", err
}
- manifest := manifest.Manifest{Text: collection.ManifestText}
- var img, imageID string
- for ms := range manifest.StreamIter() {
- img = ms.FileStreamSegments[0].Name
- if !strings.HasSuffix(img, ".tar") {
- return fmt.Errorf("First file in the container image collection does not end in .tar")
+ var tarfiles []string
+ for _, fnm := range allfiles {
+ if strings.HasSuffix(fnm, ".tar") {
+ tarfiles = append(tarfiles, fnm)
}
- imageID = img[:len(img)-4]
}
+ if len(tarfiles) == 0 {
+ return "", fmt.Errorf("image collection does not include a .tar image file")
+ }
+ if len(tarfiles) > 1 {
+ return "", fmt.Errorf("cannot choose from multiple tar files in image collection: %v", tarfiles)
+ }
+ imageID := tarfiles[0][:len(tarfiles[0])-4]
+ imageTarballPath := runner.ArvMountPoint + "/by_id/" + runner.Container.ContainerImage + "/" + imageID + ".tar"
- runner.CrunchLog.Printf("Using Docker image id '%s'", imageID)
-
- _, _, err = runner.Docker.ImageInspectWithRaw(context.TODO(), imageID)
+ runner.CrunchLog.Printf("Using Docker image id %q", imageID)
+ runner.CrunchLog.Print("Loading Docker image from keep")
+ err = runner.executor.LoadImage(imageID, imageTarballPath, runner.Container, runner.ArvMountPoint,
+ runner.containerClient)
if err != nil {
- runner.CrunchLog.Print("Loading Docker image from keep")
-
- var readCloser io.ReadCloser
- readCloser, err = runner.ContainerKeepClient.ManifestFileReader(manifest, img)
- if err != nil {
- return fmt.Errorf("While creating ManifestFileReader for container image: %v", err)
- }
-
- response, err := runner.Docker.ImageLoad(context.TODO(), readCloser, true)
- if err != nil {
- return fmt.Errorf("While loading container image into Docker: %v", err)
- }
-
- defer response.Body.Close()
- rbody, err := ioutil.ReadAll(response.Body)
- if err != nil {
- return fmt.Errorf("Reading response to image load: %v", err)
- }
- runner.CrunchLog.Printf("Docker response: %s", rbody)
- } else {
- runner.CrunchLog.Print("Docker image is available")
+ return "", err
}
- runner.ContainerConfig.Image = imageID
-
- runner.ContainerKeepClient.ClearBlockCache()
-
- return nil
+ return imageID, nil
}
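The new LoadImage body above selects exactly one `.tar` file from the mounted image collection and strips the extension to get the image id. That selection rule in isolation (the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// imageIDFromFiles reproduces the rule from LoadImage: exactly one
// ".tar" file must exist in the collection; its basename (without the
// extension) is the image id.
func imageIDFromFiles(allfiles []string) (string, error) {
	var tarfiles []string
	for _, fnm := range allfiles {
		if strings.HasSuffix(fnm, ".tar") {
			tarfiles = append(tarfiles, fnm)
		}
	}
	if len(tarfiles) == 0 {
		return "", fmt.Errorf("image collection does not include a .tar image file")
	}
	if len(tarfiles) > 1 {
		return "", fmt.Errorf("cannot choose from multiple tar files in image collection: %v", tarfiles)
	}
	return tarfiles[0][:len(tarfiles[0])-4], nil
}

func main() {
	id, err := imageIDFromFiles([]string{"README", "0123abcd.tar"})
	fmt.Println(id, err)
}
```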
-func (runner *ContainerRunner) ArvMountCmd(arvMountCmd []string, token string) (c *exec.Cmd, err error) {
- c = exec.Command("arv-mount", arvMountCmd...)
+func (runner *ContainerRunner) ArvMountCmd(cmdline []string, token string) (c *exec.Cmd, err error) {
+ c = exec.Command(cmdline[0], cmdline[1:]...)
// Copy our environment, but override ARVADOS_API_TOKEN with
// the container auth token.
return nil, err
}
runner.arvMountLog = NewThrottledLogger(w)
+ scanner := logScanner{
+ Patterns: []string{
+ "Keep write error",
+ "Block not found error",
+ "Unhandled exception during FUSE operation",
+ },
+ ReportFunc: runner.reportArvMountWarning,
+ }
c.Stdout = runner.arvMountLog
- c.Stderr = runner.arvMountLog
+ c.Stderr = io.MultiWriter(runner.arvMountLog, os.Stderr, &scanner)
runner.CrunchLog.Printf("Running %v", c.Args)
return nil
}
-func (runner *ContainerRunner) SetupMounts() (err error) {
- err = runner.SetupArvMountPoint("keep")
+func (runner *ContainerRunner) SetupMounts() (map[string]bindmount, error) {
+ bindmounts := map[string]bindmount{}
+ err := runner.SetupArvMountPoint("keep")
if err != nil {
- return fmt.Errorf("While creating keep mount temp dir: %v", err)
+ return nil, fmt.Errorf("While creating keep mount temp dir: %v", err)
}
token, err := runner.ContainerToken()
if err != nil {
- return fmt.Errorf("could not get container token: %s", err)
+ return nil, fmt.Errorf("could not get container token: %s", err)
}
pdhOnly := true
tmpcount := 0
arvMountCmd := []string{
+ "arv-mount",
"--foreground",
- "--allow-other",
"--read-write",
+ "--storage-classes", strings.Join(runner.Container.OutputStorageClasses, ","),
fmt.Sprintf("--crunchstat-interval=%v", runner.statInterval.Seconds())}
+ if runner.executor.Runtime() == "docker" {
+ arvMountCmd = append(arvMountCmd, "--allow-other")
+ }
+
if runner.Container.RuntimeConstraints.KeepCacheRAM > 0 {
arvMountCmd = append(arvMountCmd, "--file-cache", fmt.Sprintf("%d", runner.Container.RuntimeConstraints.KeepCacheRAM))
}
collectionPaths := []string{}
- runner.Binds = nil
- runner.Volumes = make(map[string]struct{})
needCertMount := true
type copyFile struct {
src string
}
for bind := range runner.SecretMounts {
if _, ok := runner.Container.Mounts[bind]; ok {
- return fmt.Errorf("secret mount %q conflicts with regular mount", bind)
+ return nil, fmt.Errorf("secret mount %q conflicts with regular mount", bind)
}
if runner.SecretMounts[bind].Kind != "json" &&
runner.SecretMounts[bind].Kind != "text" {
- return fmt.Errorf("secret mount %q type is %q but only 'json' and 'text' are permitted",
+ return nil, fmt.Errorf("secret mount %q type is %q but only 'json' and 'text' are permitted",
bind, runner.SecretMounts[bind].Kind)
}
binds = append(binds, bind)
if bind == "stdout" || bind == "stderr" {
// Is it a "file" mount kind?
if mnt.Kind != "file" {
- return fmt.Errorf("unsupported mount kind '%s' for %s: only 'file' is supported", mnt.Kind, bind)
+ return nil, fmt.Errorf("unsupported mount kind '%s' for %s: only 'file' is supported", mnt.Kind, bind)
}
// Does path start with OutputPath?
prefix += "/"
}
if !strings.HasPrefix(mnt.Path, prefix) {
- return fmt.Errorf("%s path does not start with OutputPath: %s, %s", strings.Title(bind), mnt.Path, prefix)
+ return nil, fmt.Errorf("%s path does not start with OutputPath: %s, %s", strings.Title(bind), mnt.Path, prefix)
}
}
if bind == "stdin" {
// Is it a "collection" mount kind?
if mnt.Kind != "collection" && mnt.Kind != "json" {
- return fmt.Errorf("unsupported mount kind '%s' for stdin: only 'collection' and 'json' are supported", mnt.Kind)
+ return nil, fmt.Errorf("unsupported mount kind '%s' for stdin: only 'collection' and 'json' are supported", mnt.Kind)
}
}
if strings.HasPrefix(bind, runner.Container.OutputPath+"/") && bind != runner.Container.OutputPath+"/" {
if mnt.Kind != "collection" && mnt.Kind != "text" && mnt.Kind != "json" {
- return fmt.Errorf("only mount points of kind 'collection', 'text' or 'json' are supported underneath the output_path for %q, was %q", bind, mnt.Kind)
+ return nil, fmt.Errorf("only mount points of kind 'collection', 'text' or 'json' are supported underneath the output_path for %q, was %q", bind, mnt.Kind)
}
}
case mnt.Kind == "collection" && bind != "stdin":
var src string
if mnt.UUID != "" && mnt.PortableDataHash != "" {
- return fmt.Errorf("cannot specify both 'uuid' and 'portable_data_hash' for a collection mount")
+ return nil, fmt.Errorf("cannot specify both 'uuid' and 'portable_data_hash' for a collection mount")
}
if mnt.UUID != "" {
if mnt.Writable {
- return fmt.Errorf("writing to existing collections currently not permitted")
+ return nil, fmt.Errorf("writing to existing collections currently not permitted")
}
pdhOnly = false
src = fmt.Sprintf("%s/by_id/%s", runner.ArvMountPoint, mnt.UUID)
} else if mnt.PortableDataHash != "" {
if mnt.Writable && !strings.HasPrefix(bind, runner.Container.OutputPath+"/") {
- return fmt.Errorf("can never write to a collection specified by portable data hash")
+ return nil, fmt.Errorf("can never write to a collection specified by portable data hash")
}
idx := strings.Index(mnt.PortableDataHash, "/")
if idx > 0 {
if mnt.Writable {
if bind == runner.Container.OutputPath {
runner.HostOutputDir = src
- runner.Binds = append(runner.Binds, fmt.Sprintf("%s:%s", src, bind))
+ bindmounts[bind] = bindmount{HostPath: src}
} else if strings.HasPrefix(bind, runner.Container.OutputPath+"/") {
copyFiles = append(copyFiles, copyFile{src, runner.HostOutputDir + bind[len(runner.Container.OutputPath):]})
} else {
- runner.Binds = append(runner.Binds, fmt.Sprintf("%s:%s", src, bind))
+ bindmounts[bind] = bindmount{HostPath: src}
}
} else {
- runner.Binds = append(runner.Binds, fmt.Sprintf("%s:%s:ro", src, bind))
+ bindmounts[bind] = bindmount{HostPath: src, ReadOnly: true}
}
collectionPaths = append(collectionPaths, src)
var tmpdir string
tmpdir, err = runner.MkTempDir(runner.parentTemp, "tmp")
if err != nil {
- return fmt.Errorf("while creating mount temp dir: %v", err)
+ return nil, fmt.Errorf("while creating mount temp dir: %v", err)
}
st, staterr := os.Stat(tmpdir)
if staterr != nil {
- return fmt.Errorf("while Stat on temp dir: %v", staterr)
+ return nil, fmt.Errorf("while Stat on temp dir: %v", staterr)
}
err = os.Chmod(tmpdir, st.Mode()|os.ModeSetgid|0777)
- if staterr != nil {
+ if err != nil {
- return fmt.Errorf("while Chmod temp dir: %v", err)
+ return nil, fmt.Errorf("while Chmod temp dir: %v", err)
}
- runner.Binds = append(runner.Binds, fmt.Sprintf("%s:%s", tmpdir, bind))
+ bindmounts[bind] = bindmount{HostPath: tmpdir}
if bind == runner.Container.OutputPath {
runner.HostOutputDir = tmpdir
}
if mnt.Kind == "json" {
filedata, err = json.Marshal(mnt.Content)
if err != nil {
- return fmt.Errorf("encoding json data: %v", err)
+ return nil, fmt.Errorf("encoding json data: %v", err)
}
} else {
text, ok := mnt.Content.(string)
if !ok {
- return fmt.Errorf("content for mount %q must be a string", bind)
+ return nil, fmt.Errorf("content for mount %q must be a string", bind)
}
filedata = []byte(text)
}
tmpdir, err := runner.MkTempDir(runner.parentTemp, mnt.Kind)
if err != nil {
- return fmt.Errorf("creating temp dir: %v", err)
+ return nil, fmt.Errorf("creating temp dir: %v", err)
}
tmpfn := filepath.Join(tmpdir, "mountdata."+mnt.Kind)
err = ioutil.WriteFile(tmpfn, filedata, 0444)
if err != nil {
- return fmt.Errorf("writing temp file: %v", err)
+ return nil, fmt.Errorf("writing temp file: %v", err)
}
if strings.HasPrefix(bind, runner.Container.OutputPath+"/") {
copyFiles = append(copyFiles, copyFile{tmpfn, runner.HostOutputDir + bind[len(runner.Container.OutputPath):]})
} else {
- runner.Binds = append(runner.Binds, fmt.Sprintf("%s:%s:ro", tmpfn, bind))
+ bindmounts[bind] = bindmount{HostPath: tmpfn, ReadOnly: true}
}
case mnt.Kind == "git_tree":
tmpdir, err := runner.MkTempDir(runner.parentTemp, "git_tree")
if err != nil {
- return fmt.Errorf("creating temp dir: %v", err)
+ return nil, fmt.Errorf("creating temp dir: %v", err)
}
err = gitMount(mnt).extractTree(runner.ContainerArvClient, tmpdir, token)
if err != nil {
- return err
+ return nil, err
}
- runner.Binds = append(runner.Binds, tmpdir+":"+bind+":ro")
+ bindmounts[bind] = bindmount{HostPath: tmpdir, ReadOnly: true}
}
}
if runner.HostOutputDir == "" {
- return fmt.Errorf("output path does not correspond to a writable mount point")
+ return nil, fmt.Errorf("output path does not correspond to a writable mount point")
}
if needCertMount && runner.Container.RuntimeConstraints.API {
for _, certfile := range arvadosclient.CertFiles {
_, err := os.Stat(certfile)
if err == nil {
- runner.Binds = append(runner.Binds, fmt.Sprintf("%s:/etc/arvados/ca-certificates.crt:ro", certfile))
+ bindmounts["/etc/arvados/ca-certificates.crt"] = bindmount{HostPath: certfile, ReadOnly: true}
break
}
}
}
if pdhOnly {
- arvMountCmd = append(arvMountCmd, "--mount-by-pdh", "by_id")
+ // If we are only mounting collections by pdh, make
+ // sure we don't subscribe to websocket events to
+ // avoid putting undesired load on the API server
+ arvMountCmd = append(arvMountCmd, "--mount-by-pdh", "by_id", "--disable-event-listening")
} else {
arvMountCmd = append(arvMountCmd, "--mount-by-id", "by_id")
}
+ // the by_uuid mount point is used by singularity when writing
+ // out docker images converted to SIF
+ arvMountCmd = append(arvMountCmd, "--mount-by-id", "by_uuid")
arvMountCmd = append(arvMountCmd, runner.ArvMountPoint)
runner.ArvMount, err = runner.RunArvMount(arvMountCmd, token)
if err != nil {
- return fmt.Errorf("while trying to start arv-mount: %v", err)
+ return nil, fmt.Errorf("while trying to start arv-mount: %v", err)
}
for _, p := range collectionPaths {
_, err = os.Stat(p)
if err != nil {
- return fmt.Errorf("while checking that input files exist: %v", err)
+ return nil, fmt.Errorf("while checking that input files exist: %v", err)
}
}
for _, cp := range copyFiles {
st, err := os.Stat(cp.src)
if err != nil {
- return fmt.Errorf("while staging writable file from %q to %q: %v", cp.src, cp.bind, err)
+ return nil, fmt.Errorf("while staging writable file from %q to %q: %v", cp.src, cp.bind, err)
}
if st.IsDir() {
err = filepath.Walk(cp.src, func(walkpath string, walkinfo os.FileInfo, walkerr error) error {
}
}
if err != nil {
- return fmt.Errorf("while staging writable file from %q to %q: %v", cp.src, cp.bind, err)
+ return nil, fmt.Errorf("while staging writable file from %q to %q: %v", cp.src, cp.bind, err)
}
}
- return nil
-}
-
-func (runner *ContainerRunner) ProcessDockerAttach(containerReader io.Reader) {
- // Handle docker log protocol
- // https://docs.docker.com/engine/reference/api/docker_remote_api_v1.15/#attach-to-a-container
- defer close(runner.loggingDone)
-
- header := make([]byte, 8)
- var err error
- for err == nil {
- _, err = io.ReadAtLeast(containerReader, header, 8)
- if err != nil {
- if err == io.EOF {
- err = nil
- }
- break
- }
- readsize := int64(header[7]) | (int64(header[6]) << 8) | (int64(header[5]) << 16) | (int64(header[4]) << 24)
- if header[0] == 1 {
- // stdout
- _, err = io.CopyN(runner.Stdout, containerReader, readsize)
- } else {
- // stderr
- _, err = io.CopyN(runner.Stderr, containerReader, readsize)
- }
- }
-
- if err != nil {
- runner.CrunchLog.Printf("error reading docker logs: %v", err)
- }
-
- err = runner.Stdout.Close()
- if err != nil {
- runner.CrunchLog.Printf("error closing stdout logs: %v", err)
- }
-
- err = runner.Stderr.Close()
- if err != nil {
- runner.CrunchLog.Printf("error closing stderr logs: %v", err)
- }
-
- if runner.statReporter != nil {
- runner.statReporter.Stop()
- err = runner.statLogger.Close()
- if err != nil {
- runner.CrunchLog.Printf("error closing crunchstat logs: %v", err)
- }
- }
+ return bindmounts, nil
}
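SetupMounts now returns a map of container paths to bindmount values instead of appending Docker-style `host:container[:ro]` strings to runner.Binds. A sketch of the correspondence — the converter function is hypothetical, written only to illustrate what the refactor replaces:

```go
package main

import (
	"fmt"
	"strings"
)

// bindmount mirrors the struct used above (defined alongside the
// executor interface elsewhere in the package).
type bindmount struct {
	HostPath string
	ReadOnly bool
}

// fromDockerBind converts an old-style "host:container[:ro]" bind
// string into the container path and bindmount value now used as the
// map key and entry.
func fromDockerBind(bind string) (string, bindmount) {
	parts := strings.Split(bind, ":")
	bm := bindmount{HostPath: parts[0]}
	if len(parts) > 2 && parts[2] == "ro" {
		bm.ReadOnly = true
	}
	return parts[1], bm
}

func main() {
	path, bm := fromDockerBind("/etc/ssl/cert.pem:/etc/arvados/ca-certificates.crt:ro")
	fmt.Println(path, bm.HostPath, bm.ReadOnly)
}
```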
func (runner *ContainerRunner) stopHoststat() error {
}
runner.statLogger = NewThrottledLogger(w)
runner.statReporter = &crunchstat.Reporter{
- CID: runner.ContainerID,
+ CID: runner.executor.CgroupID(),
Logger: log.New(runner.statLogger, "", 0),
CgroupParent: runner.expectCgroupParent,
CgroupRoot: runner.cgroupRoot,
return true, nil
}
-// AttachStreams connects the docker container stdin, stdout and stderr logs
-// to the Arvados logger which logs to Keep and the API server logs table.
-func (runner *ContainerRunner) AttachStreams() (err error) {
-
- runner.CrunchLog.Print("Attaching container streams")
-
- // If stdin mount is provided, attach it to the docker container
- var stdinRdr arvados.File
- var stdinJSON []byte
- if stdinMnt, ok := runner.Container.Mounts["stdin"]; ok {
- if stdinMnt.Kind == "collection" {
- var stdinColl arvados.Collection
- collID := stdinMnt.UUID
- if collID == "" {
- collID = stdinMnt.PortableDataHash
- }
- err = runner.ContainerArvClient.Get("collections", collID, nil, &stdinColl)
- if err != nil {
- return fmt.Errorf("While getting stdin collection: %v", err)
- }
-
- stdinRdr, err = runner.ContainerKeepClient.ManifestFileReader(
- manifest.Manifest{Text: stdinColl.ManifestText},
- stdinMnt.Path)
- if os.IsNotExist(err) {
- return fmt.Errorf("stdin collection path not found: %v", stdinMnt.Path)
- } else if err != nil {
- return fmt.Errorf("While getting stdin collection path %v: %v", stdinMnt.Path, err)
- }
- } else if stdinMnt.Kind == "json" {
- stdinJSON, err = json.Marshal(stdinMnt.Content)
- if err != nil {
- return fmt.Errorf("While encoding stdin json data: %v", err)
- }
- }
- }
-
- stdinUsed := stdinRdr != nil || len(stdinJSON) != 0
- response, err := runner.Docker.ContainerAttach(context.TODO(), runner.ContainerID,
- dockertypes.ContainerAttachOptions{Stream: true, Stdin: stdinUsed, Stdout: true, Stderr: true})
- if err != nil {
- return fmt.Errorf("While attaching container stdout/stderr streams: %v", err)
- }
-
- runner.loggingDone = make(chan bool)
-
- if stdoutMnt, ok := runner.Container.Mounts["stdout"]; ok {
- stdoutFile, err := runner.getStdoutFile(stdoutMnt.Path)
- if err != nil {
- return err
- }
- runner.Stdout = stdoutFile
- } else if w, err := runner.NewLogWriter("stdout"); err != nil {
- return err
- } else {
- runner.Stdout = NewThrottledLogger(w)
- }
-
- if stderrMnt, ok := runner.Container.Mounts["stderr"]; ok {
- stderrFile, err := runner.getStdoutFile(stderrMnt.Path)
- if err != nil {
- return err
- }
- runner.Stderr = stderrFile
- } else if w, err := runner.NewLogWriter("stderr"); err != nil {
- return err
- } else {
- runner.Stderr = NewThrottledLogger(w)
- }
-
- if stdinRdr != nil {
- go func() {
- _, err := io.Copy(response.Conn, stdinRdr)
- if err != nil {
- runner.CrunchLog.Printf("While writing stdin collection to docker container: %v", err)
- runner.stop(nil)
- }
- stdinRdr.Close()
- response.CloseWrite()
- }()
- } else if len(stdinJSON) != 0 {
- go func() {
- _, err := io.Copy(response.Conn, bytes.NewReader(stdinJSON))
- if err != nil {
- runner.CrunchLog.Printf("While writing stdin json to docker container: %v", err)
- runner.stop(nil)
- }
- response.CloseWrite()
- }()
- }
-
- go runner.ProcessDockerAttach(response.Reader)
-
- return nil
-}
-
func (runner *ContainerRunner) getStdoutFile(mntPath string) (*os.File, error) {
stdoutPath := mntPath[len(runner.Container.OutputPath):]
index := strings.LastIndex(stdoutPath, "/")
}
-// CreateContainer creates the docker container.
+// CreateContainer creates the container via the configured runtime executor.
-func (runner *ContainerRunner) CreateContainer() error {
- runner.CrunchLog.Print("Creating Docker container")
-
- runner.ContainerConfig.Cmd = runner.Container.Command
- if runner.Container.Cwd != "." {
- runner.ContainerConfig.WorkingDir = runner.Container.Cwd
+func (runner *ContainerRunner) CreateContainer(imageID string, bindmounts map[string]bindmount) error {
+ var stdin io.ReadCloser = ioutil.NopCloser(bytes.NewReader(nil))
+ if mnt, ok := runner.Container.Mounts["stdin"]; ok {
+ switch mnt.Kind {
+ case "collection":
+ var collID string
+ if mnt.UUID != "" {
+ collID = mnt.UUID
+ } else {
+ collID = mnt.PortableDataHash
+ }
+ path := runner.ArvMountPoint + "/by_id/" + collID + "/" + mnt.Path
+ f, err := os.Open(path)
+ if err != nil {
+ return err
+ }
+ stdin = f
+ case "json":
+ j, err := json.Marshal(mnt.Content)
+ if err != nil {
+ return fmt.Errorf("error encoding stdin json data: %v", err)
+ }
+ stdin = ioutil.NopCloser(bytes.NewReader(j))
+ default:
+ return fmt.Errorf("stdin mount has unsupported kind %q", mnt.Kind)
+ }
}
- for k, v := range runner.Container.Environment {
- runner.ContainerConfig.Env = append(runner.ContainerConfig.Env, k+"="+v)
+ var stdout, stderr io.WriteCloser
+ if mnt, ok := runner.Container.Mounts["stdout"]; ok {
+ f, err := runner.getStdoutFile(mnt.Path)
+ if err != nil {
+ return err
+ }
+ stdout = f
+ } else if w, err := runner.NewLogWriter("stdout"); err != nil {
+ return err
+ } else {
+ stdout = NewThrottledLogger(w)
}
- runner.ContainerConfig.Volumes = runner.Volumes
-
- maxRAM := int64(runner.Container.RuntimeConstraints.RAM)
- minDockerRAM := int64(16)
- if maxRAM < minDockerRAM*1024*1024 {
- // Docker daemon won't let you set a limit less than ~10 MiB
- maxRAM = minDockerRAM * 1024 * 1024
- }
- runner.HostConfig = dockercontainer.HostConfig{
- Binds: runner.Binds,
- LogConfig: dockercontainer.LogConfig{
- Type: "none",
- },
- Resources: dockercontainer.Resources{
- CgroupParent: runner.setCgroupParent,
- NanoCPUs: int64(runner.Container.RuntimeConstraints.VCPUs) * 1000000000,
- Memory: maxRAM, // RAM
- MemorySwap: maxRAM, // RAM+swap
- KernelMemory: maxRAM, // kernel portion
- },
+ if mnt, ok := runner.Container.Mounts["stderr"]; ok {
+ f, err := runner.getStdoutFile(mnt.Path)
+ if err != nil {
+ return err
+ }
+ stderr = f
+ } else if w, err := runner.NewLogWriter("stderr"); err != nil {
+ return err
+ } else {
+ stderr = NewThrottledLogger(w)
}
+ env := runner.Container.Environment
+ enableNetwork := runner.enableNetwork == "always"
if runner.Container.RuntimeConstraints.API {
+ enableNetwork = true
tok, err := runner.ContainerToken()
if err != nil {
return err
}
- runner.ContainerConfig.Env = append(runner.ContainerConfig.Env,
- "ARVADOS_API_TOKEN="+tok,
- "ARVADOS_API_HOST="+os.Getenv("ARVADOS_API_HOST"),
- "ARVADOS_API_HOST_INSECURE="+os.Getenv("ARVADOS_API_HOST_INSECURE"),
- )
- runner.HostConfig.NetworkMode = dockercontainer.NetworkMode(runner.networkMode)
- } else {
- if runner.enableNetwork == "always" {
- runner.HostConfig.NetworkMode = dockercontainer.NetworkMode(runner.networkMode)
- } else {
- runner.HostConfig.NetworkMode = dockercontainer.NetworkMode("none")
- }
- }
-
- _, stdinUsed := runner.Container.Mounts["stdin"]
- runner.ContainerConfig.OpenStdin = stdinUsed
- runner.ContainerConfig.StdinOnce = stdinUsed
- runner.ContainerConfig.AttachStdin = stdinUsed
- runner.ContainerConfig.AttachStdout = true
- runner.ContainerConfig.AttachStderr = true
-
- createdBody, err := runner.Docker.ContainerCreate(context.TODO(), &runner.ContainerConfig, &runner.HostConfig, nil, runner.Container.UUID)
- if err != nil {
- return fmt.Errorf("While creating container: %v", err)
- }
-
- runner.ContainerID = createdBody.ID
-
- return runner.AttachStreams()
+ env = map[string]string{}
+ for k, v := range runner.Container.Environment {
+ env[k] = v
+ }
+ env["ARVADOS_API_TOKEN"] = tok
+ env["ARVADOS_API_HOST"] = os.Getenv("ARVADOS_API_HOST")
+ env["ARVADOS_API_HOST_INSECURE"] = os.Getenv("ARVADOS_API_HOST_INSECURE")
+ }
+ workdir := runner.Container.Cwd
+ if workdir == "." {
+ // both "" and "." mean default
+ workdir = ""
+ }
+ ram := runner.Container.RuntimeConstraints.RAM
+ if !runner.enableMemoryLimit {
+ ram = 0
+ }
+ runner.executorStdin = stdin
+ runner.executorStdout = stdout
+ runner.executorStderr = stderr
+
+ cudaDeviceCount := 0
+ if runner.Container.RuntimeConstraints.CUDADriverVersion != "" ||
+ runner.Container.RuntimeConstraints.CUDAHardwareCapability != "" ||
+ runner.Container.RuntimeConstraints.CUDADeviceCount != 0 {
+ // if any of these are set, enable CUDA GPU support
+ cudaDeviceCount = runner.Container.RuntimeConstraints.CUDADeviceCount
+ if cudaDeviceCount == 0 {
+ cudaDeviceCount = 1
+ }
+ }
+
+ return runner.executor.Create(containerSpec{
+ Image: imageID,
+ VCPUs: runner.Container.RuntimeConstraints.VCPUs,
+ RAM: ram,
+ WorkingDir: workdir,
+ Env: env,
+ BindMounts: bindmounts,
+ Command: runner.Container.Command,
+ EnableNetwork: enableNetwork,
+ CUDADeviceCount: cudaDeviceCount,
+ NetworkMode: runner.networkMode,
+ CgroupParent: runner.setCgroupParent,
+ Stdin: stdin,
+ Stdout: stdout,
+ Stderr: stderr,
+ })
}
-// StartContainer starts the docker container created by CreateContainer.
+// StartContainer starts the container created by CreateContainer.
func (runner *ContainerRunner) StartContainer() error {
- runner.CrunchLog.Printf("Starting Docker container id '%s'", runner.ContainerID)
+ runner.CrunchLog.Printf("Starting container")
runner.cStateLock.Lock()
defer runner.cStateLock.Unlock()
if runner.cCancelled {
return ErrCancelled
}
- err := runner.Docker.ContainerStart(context.TODO(), runner.ContainerID,
- dockertypes.ContainerStartOptions{})
+ err := runner.executor.Start()
if err != nil {
var advice string
if m, e := regexp.MatchString("(?ms).*(exec|System error).*(no such file or directory|file not found).*", err.Error()); m && e == nil {
-// WaitFinish waits for the container to terminate, capture the exit code, and
-// close the stdout/stderr logging.
+// WaitFinish waits for the container to terminate, captures the exit code, and
+// closes the stdout/stderr logging.
func (runner *ContainerRunner) WaitFinish() error {
- var runTimeExceeded <-chan time.Time
runner.CrunchLog.Print("Waiting for container to finish")
-
- waitOk, waitErr := runner.Docker.ContainerWait(context.TODO(), runner.ContainerID, dockercontainer.WaitConditionNotRunning)
- arvMountExit := runner.ArvMountExit
- if timeout := runner.Container.SchedulingParameters.MaxRunTime; timeout > 0 {
- runTimeExceeded = time.After(time.Duration(timeout) * time.Second)
+ var timeout <-chan time.Time
+ if s := runner.Container.SchedulingParameters.MaxRunTime; s > 0 {
+ timeout = time.After(time.Duration(s) * time.Second)
}
-
- containerGone := make(chan struct{})
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
go func() {
- defer close(containerGone)
- if runner.containerWatchdogInterval < 1 {
- runner.containerWatchdogInterval = time.Minute
- }
- for range time.NewTicker(runner.containerWatchdogInterval).C {
- ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(runner.containerWatchdogInterval))
- ctr, err := runner.Docker.ContainerInspect(ctx, runner.ContainerID)
- cancel()
- runner.cStateLock.Lock()
- done := runner.cRemoved || runner.ExitCode != nil
- runner.cStateLock.Unlock()
- if done {
- return
- } else if err != nil {
- runner.CrunchLog.Printf("Error inspecting container: %s", err)
- runner.checkBrokenNode(err)
- return
- } else if ctr.State == nil || !(ctr.State.Running || ctr.State.Status == "created") {
- runner.CrunchLog.Printf("Container is not running: State=%v", ctr.State)
- return
- }
- }
- }()
-
- for {
select {
- case waitBody := <-waitOk:
- runner.CrunchLog.Printf("Container exited with code: %v", waitBody.StatusCode)
- code := int(waitBody.StatusCode)
- runner.ExitCode = &code
-
- // wait for stdout/stderr to complete
- <-runner.loggingDone
- return nil
-
- case err := <-waitErr:
- return fmt.Errorf("container wait: %v", err)
-
- case <-arvMountExit:
- runner.CrunchLog.Printf("arv-mount exited while container is still running. Stopping container.")
- runner.stop(nil)
- // arvMountExit will always be ready now that
- // it's closed, but that doesn't interest us.
- arvMountExit = nil
-
- case <-runTimeExceeded:
+ case <-timeout:
runner.CrunchLog.Printf("maximum run time exceeded. Stopping container.")
runner.stop(nil)
- runTimeExceeded = nil
+ case <-runner.ArvMountExit:
+ runner.CrunchLog.Printf("arv-mount exited while container is still running. Stopping container.")
+ runner.stop(nil)
+ case <-ctx.Done():
+ }
+ }()
+ exitcode, err := runner.executor.Wait(ctx)
+ if err != nil {
+ runner.checkBrokenNode(err)
+ return err
+ }
+ runner.ExitCode = &exitcode
+
+ var returnErr error
+ if err = runner.executorStdin.Close(); err != nil {
+ err = fmt.Errorf("error closing container stdin: %s", err)
+ runner.CrunchLog.Printf("%s", err)
+ returnErr = err
+ }
+ if err = runner.executorStdout.Close(); err != nil {
+ err = fmt.Errorf("error closing container stdout: %s", err)
+ runner.CrunchLog.Printf("%s", err)
+ if returnErr == nil {
+ returnErr = err
+ }
+ }
+ if err = runner.executorStderr.Close(); err != nil {
+ err = fmt.Errorf("error closing container stderr: %s", err)
+ runner.CrunchLog.Printf("%s", err)
+ if returnErr == nil {
+ returnErr = err
+ }
+ }
- case <-containerGone:
- return errors.New("docker client never returned status")
+ if runner.statReporter != nil {
+ runner.statReporter.Stop()
+ err = runner.statLogger.Close()
+ if err != nil {
+ runner.CrunchLog.Printf("error closing crunchstat logs: %v", err)
}
}
+ return returnErr
}
func (runner *ContainerRunner) updateLogs() {
}
}
+func (runner *ContainerRunner) reportArvMountWarning(pattern, text string) {
+ var updated arvados.Container
+ err := runner.DispatcherArvClient.Update("containers", runner.Container.UUID, arvadosclient.Dict{
+ "container": arvadosclient.Dict{
+ "runtime_status": arvadosclient.Dict{
+ "warning": "arv-mount: " + pattern,
+ "warningDetail": text,
+ },
+ },
+ }, &updated)
+ if err != nil {
+ runner.CrunchLog.Printf("error updating container runtime_status: %s", err)
+ }
+}
+
// CaptureOutput saves data from the container's output directory if
// needed, and updates the container output accordingly.
-func (runner *ContainerRunner) CaptureOutput() error {
+func (runner *ContainerRunner) CaptureOutput(bindmounts map[string]bindmount) error {
if runner.Container.RuntimeConstraints.API {
// Output may have been set directly by the container, so
// refresh the container record to check.
keepClient: runner.ContainerKeepClient,
hostOutputDir: runner.HostOutputDir,
ctrOutputDir: runner.Container.OutputPath,
- binds: runner.Binds,
+ bindmounts: bindmounts,
mounts: runner.Container.Mounts,
secretMounts: runner.SecretMounts,
logger: runner.CrunchLog,
if umnterr != nil {
runner.CrunchLog.Printf("Error unmounting: %v", umnterr)
+ runner.ArvMount.Process.Kill()
} else {
// If arv-mount --unmount gets stuck for any reason, we
// don't want to wait for it forever. Do Wait() in a goroutine
}
}
}
+ runner.ArvMount = nil
}
if runner.ArvMountPoint != "" {
if rmerr := os.Remove(runner.ArvMountPoint); rmerr != nil {
runner.CrunchLog.Printf("While cleaning up arv-mount directory %s: %v", runner.ArvMountPoint, rmerr)
}
+ runner.ArvMountPoint = ""
}
if rmerr := os.RemoveAll(runner.parentTemp); rmerr != nil {
runner.CrunchLog.Immediate = log.New(os.Stderr, runner.Container.UUID+" ", 0)
}()
+ if runner.keepstoreLogger != nil {
+ // Flush any buffered logs from our local keepstore
+ // process. Discard anything logged after this point
+ // -- it won't end up in the log collection, so
+ // there's no point writing it to the collectionfs.
+ runner.keepstoreLogbuf.SetWriter(io.Discard)
+ runner.keepstoreLogger.Close()
+ runner.keepstoreLogger = nil
+ }
+
if runner.LogsPDH != nil {
// If we have already assigned something to LogsPDH,
// we must be closing the re-opened log, which won't
// -- it exists only to send logs to other channels.
return nil
}
+
saved, err := runner.saveLogCollection(true)
if err != nil {
return fmt.Errorf("error saving log collection: %s", err)
// Run the full container lifecycle.
func (runner *ContainerRunner) Run() (err error) {
runner.CrunchLog.Printf("crunch-run %s started", cmd.Version.String())
- runner.CrunchLog.Printf("Executing container '%s'", runner.Container.UUID)
+ runner.CrunchLog.Printf("Executing container '%s' using %s runtime", runner.Container.UUID, runner.executor.Runtime())
hostname, hosterr := os.Hostname()
if hosterr != nil {
return fmt.Errorf("dispatch error detected: container %q has state %q", runner.Container.UUID, runner.Container.State)
}
+ var bindmounts map[string]bindmount
defer func() {
// checkErr prints e (unless it's nil) and sets err to
// e (unless err is already non-nil). Thus, if err
// capture partial output and write logs
}
- checkErr("CaptureOutput", runner.CaptureOutput())
+ if bindmounts != nil {
+ checkErr("CaptureOutput", runner.CaptureOutput(bindmounts))
+ }
checkErr("stopHoststat", runner.stopHoststat())
checkErr("CommitLogs", runner.CommitLogs())
+ runner.CleanupDirs()
checkErr("UpdateContainerFinal", runner.UpdateContainerFinal())
}()
return
}
+ // set up FUSE mount and binds
+ bindmounts, err = runner.SetupMounts()
+ if err != nil {
+ runner.finalState = "Cancelled"
+ err = fmt.Errorf("While setting up mounts: %v", err)
+ return
+ }
+
// check for and/or load image
- err = runner.LoadImage()
+ imageID, err := runner.LoadImage()
if err != nil {
if !runner.checkBrokenNode(err) {
// Failed to load image but not due to a "broken node"
return
}
- // set up FUSE mount and binds
- err = runner.SetupMounts()
- if err != nil {
- runner.finalState = "Cancelled"
- err = fmt.Errorf("While setting up mounts: %v", err)
- return
- }
-
- err = runner.CreateContainer()
+ err = runner.CreateContainer(imageID, bindmounts)
if err != nil {
return
}
return fmt.Errorf("error creating container API client: %v", err)
}
+ runner.ContainerKeepClient.SetStorageClasses(runner.Container.OutputStorageClasses)
+ runner.DispatcherKeepClient.SetStorageClasses(runner.Container.OutputStorageClasses)
+
err = runner.ContainerArvClient.Call("GET", "containers", runner.Container.UUID, "secret_mounts", nil, &sm)
if err != nil {
if apierr, ok := err.(arvadosclient.APIServerError); !ok || apierr.HttpStatusCode != 404 {
func NewContainerRunner(dispatcherClient *arvados.Client,
dispatcherArvClient IArvadosClient,
dispatcherKeepClient IKeepClient,
- docker ThinDockerClient,
containerUUID string) (*ContainerRunner, error) {
cr := &ContainerRunner{
dispatcherClient: dispatcherClient,
DispatcherArvClient: dispatcherArvClient,
DispatcherKeepClient: dispatcherKeepClient,
- Docker: docker,
}
cr.NewLogWriter = cr.NewArvLogWriter
cr.RunArvMount = cr.ArvMountCmd
}
func (command) RunCommand(prog string, args []string, stdin io.Reader, stdout, stderr io.Writer) int {
+ log := log.New(stderr, "", 0)
flags := flag.NewFlagSet(prog, flag.ContinueOnError)
statInterval := flags.Duration("crunchstat-interval", 10*time.Second, "sampling period for periodic resource usage reporting")
cgroupRoot := flags.String("cgroup-root", "/sys/fs/cgroup", "path to sysfs cgroup tree")
cgroupParentSubsystem := flags.String("cgroup-parent-subsystem", "", "use current cgroup for given subsystem as parent cgroup for container")
caCertsPath := flags.String("ca-certs", "", "Path to TLS root certificates")
detach := flags.Bool("detach", false, "Detach from parent process and run in the background")
- stdinEnv := flags.Bool("stdin-env", false, "Load environment variables from JSON message on stdin")
+ stdinConfig := flags.Bool("stdin-config", false, "Load config and environment variables from JSON message on stdin")
sleep := flags.Duration("sleep", 0, "Delay before starting (testing use only)")
kill := flags.Int("kill", -1, "Send signal to an existing crunch-run process for given UUID")
list := flags.Bool("list", false, "List UUIDs of existing crunch-run processes")
- enableNetwork := flags.String("container-enable-networking", "default",
- `Specify if networking should be enabled for container. One of 'default', 'always':
- default: only enable networking if container requests it.
- always: containers always have networking enabled
- `)
- networkMode := flags.String("container-network-mode", "default",
- `Set networking mode for container. Corresponds to Docker network mode (--net).
- `)
+ enableMemoryLimit := flags.Bool("enable-memory-limit", true, "tell container runtime to limit container's memory usage")
+ enableNetwork := flags.String("container-enable-networking", "default", "enable networking \"always\" (for all containers) or \"default\" (for containers that request it)")
+ networkMode := flags.String("container-network-mode", "default", `Docker network mode for container (use any argument valid for docker --net)`)
memprofile := flags.String("memprofile", "", "write memory profile to `file` after running container")
+ runtimeEngine := flags.String("runtime-engine", "docker", "container runtime: docker or singularity")
flags.Duration("check-containerd", 0, "Ignored. Exists for compatibility with older versions.")
ignoreDetachFlag := false
ignoreDetachFlag = true
}
- if err := flags.Parse(args); err == flag.ErrHelp {
- return 0
- } else if err != nil {
- log.Print(err)
- return 1
- }
-
- if *stdinEnv && !ignoreDetachFlag {
- // Load env vars on stdin if asked (but not in a
- // detached child process, in which case stdin is
- // /dev/null).
- err := loadEnv(os.Stdin)
- if err != nil {
- log.Print(err)
- return 1
- }
+ if ok, code := cmd.ParseFlags(flags, prog, args, "container-uuid", stderr); !ok {
+ return code
+ } else if !*list && flags.NArg() != 1 {
+ fmt.Fprintf(stderr, "missing required argument: container-uuid (try -help)\n")
+ return 2
}
- containerID := flags.Arg(0)
+ containerUUID := flags.Arg(0)
switch {
case *detach && !ignoreDetachFlag:
- return Detach(containerID, prog, args, os.Stdout, os.Stderr)
+ return Detach(containerUUID, prog, args, os.Stdin, os.Stdout, os.Stderr)
case *kill >= 0:
- return KillProcess(containerID, syscall.Signal(*kill), os.Stdout, os.Stderr)
+ return KillProcess(containerUUID, syscall.Signal(*kill), os.Stdout, os.Stderr)
case *list:
return ListProcesses(os.Stdout, os.Stderr)
}
- if containerID == "" {
+ if len(containerUUID) != 27 {
log.Printf("usage: %s [options] UUID", prog)
return 1
}
+ var conf ConfigData
+ if *stdinConfig {
+ err := json.NewDecoder(stdin).Decode(&conf)
+ if err != nil {
+ log.Printf("decode stdin: %s", err)
+ return 1
+ }
+ for k, v := range conf.Env {
+ err = os.Setenv(k, v)
+ if err != nil {
+ log.Printf("setenv(%q): %s", k, err)
+ return 1
+ }
+ }
+ if conf.Cluster != nil {
+ // ClusterID is missing from the JSON
+ // representation, but we need it to generate
+ // a valid config file for keepstore, so we
+ // fill it using the container UUID prefix.
+ conf.Cluster.ClusterID = containerUUID[:5]
+ }
+ }
+
log.Printf("crunch-run %s started", cmd.Version.String())
time.Sleep(*sleep)
arvadosclient.CertFiles = []string{*caCertsPath}
}
+ var keepstoreLogbuf bufThenWrite
+ keepstore, err := startLocalKeepstore(conf, io.MultiWriter(&keepstoreLogbuf, stderr))
+ if err != nil {
+ log.Print(err)
+ return 1
+ }
+ if keepstore != nil {
+ defer keepstore.Process.Kill()
+ }
+
api, err := arvadosclient.MakeArvadosClient()
if err != nil {
- log.Printf("%s: %v", containerID, err)
+ log.Printf("%s: %v", containerUUID, err)
return 1
}
api.Retries = 8
- kc, kcerr := keepclient.MakeKeepClient(api)
- if kcerr != nil {
- log.Printf("%s: %v", containerID, kcerr)
+ kc, err := keepclient.MakeKeepClient(api)
+ if err != nil {
+ log.Printf("%s: %v", containerUUID, err)
return 1
}
kc.BlockCache = &keepclient.BlockCache{MaxBlocks: 2}
kc.Retries = 4
- // API version 1.21 corresponds to Docker 1.9, which is currently the
- // minimum version we want to support.
- docker, dockererr := dockerclient.NewClient(dockerclient.DefaultDockerHost, "1.21", nil, nil)
-
- cr, err := NewContainerRunner(arvados.NewClientFromEnv(), api, kc, docker, containerID)
+ cr, err := NewContainerRunner(arvados.NewClientFromEnv(), api, kc, containerUUID)
if err != nil {
log.Print(err)
return 1
}
- if dockererr != nil {
- cr.CrunchLog.Printf("%s: %v", containerID, dockererr)
- cr.checkBrokenNode(dockererr)
+
+ if keepstore == nil {
+ // Log explanation (if any) for why we're not running
+ // a local keepstore.
+ var buf bytes.Buffer
+ keepstoreLogbuf.SetWriter(&buf)
+ if buf.Len() > 0 {
+ cr.CrunchLog.Printf("%s", strings.TrimSpace(buf.String()))
+ }
+ } else if logWhat := conf.Cluster.Containers.LocalKeepLogsToContainerLog; logWhat == "none" {
+ cr.CrunchLog.Printf("using local keepstore process (pid %d) at %s", keepstore.Process.Pid, os.Getenv("ARVADOS_KEEP_SERVICES"))
+ keepstoreLogbuf.SetWriter(io.Discard)
+ } else {
+ cr.CrunchLog.Printf("using local keepstore process (pid %d) at %s, writing logs to keepstore.txt in log collection", keepstore.Process.Pid, os.Getenv("ARVADOS_KEEP_SERVICES"))
+ logwriter, err := cr.NewLogWriter("keepstore")
+ if err != nil {
+ log.Print(err)
+ return 1
+ }
+ cr.keepstoreLogger = NewThrottledLogger(logwriter)
+
+ var writer io.WriteCloser = cr.keepstoreLogger
+ if logWhat == "errors" {
+ writer = &filterKeepstoreErrorsOnly{WriteCloser: writer}
+ } else if logWhat != "all" {
+ // should have been caught earlier by
+ // dispatcher's config loader
+ log.Printf("invalid value for Containers.LocalKeepLogsToContainerLog: %q", logWhat)
+ return 1
+ }
+ err = keepstoreLogbuf.SetWriter(writer)
+ if err != nil {
+ log.Print(err)
+ return 1
+ }
+ cr.keepstoreLogbuf = &keepstoreLogbuf
+ }
+
+ switch *runtimeEngine {
+ case "docker":
+ cr.executor, err = newDockerExecutor(containerUUID, cr.CrunchLog.Printf, cr.containerWatchdogInterval)
+ case "singularity":
+ cr.executor, err = newSingularityExecutor(cr.CrunchLog.Printf)
+ default:
+ cr.CrunchLog.Printf("%s: unsupported RuntimeEngine %q", containerUUID, *runtimeEngine)
+ cr.CrunchLog.Close()
+ return 1
+ }
+ if err != nil {
+ cr.CrunchLog.Printf("%s: %v", containerUUID, err)
+ cr.checkBrokenNode(err)
cr.CrunchLog.Close()
return 1
}
+ defer cr.executor.Close()
- cr.gateway = Gateway{
- Address: os.Getenv("GatewayAddress"),
- AuthSecret: os.Getenv("GatewayAuthSecret"),
- ContainerUUID: containerID,
- DockerContainerID: &cr.ContainerID,
- Log: cr.CrunchLog,
- }
+ gwAuthSecret := os.Getenv("GatewayAuthSecret")
os.Unsetenv("GatewayAuthSecret")
- if cr.gateway.Address != "" {
+ if gwAuthSecret == "" {
+ // not safe to run a gateway service without an auth
+ // secret
+ cr.CrunchLog.Printf("Not starting a gateway server (GatewayAuthSecret was not provided by dispatcher)")
+ } else if gwListen := os.Getenv("GatewayAddress"); gwListen == "" {
+ // dispatcher did not tell us which external IP
+ // address to advertise --> no gateway service
+ cr.CrunchLog.Printf("Not starting a gateway server (GatewayAddress was not provided by dispatcher)")
+ } else if de, ok := cr.executor.(*dockerExecutor); ok {
+ cr.gateway = Gateway{
+ Address: gwListen,
+ AuthSecret: gwAuthSecret,
+ ContainerUUID: containerUUID,
+ DockerContainerID: &de.containerID,
+ Log: cr.CrunchLog,
+ ContainerIPAddress: dockerContainerIPAddress(&de.containerID),
+ }
err = cr.gateway.Start()
if err != nil {
log.Printf("error starting gateway server: %s", err)
}
}
- parentTemp, tmperr := cr.MkTempDir("", "crunch-run."+containerID+".")
+ parentTemp, tmperr := cr.MkTempDir("", "crunch-run."+containerUUID+".")
if tmperr != nil {
- log.Printf("%s: %v", containerID, tmperr)
+ log.Printf("%s: %v", containerUUID, tmperr)
return 1
}
cr.statInterval = *statInterval
cr.cgroupRoot = *cgroupRoot
cr.expectCgroupParent = *cgroupParent
+ cr.enableMemoryLimit = *enableMemoryLimit
cr.enableNetwork = *enableNetwork
cr.networkMode = *networkMode
if *cgroupParentSubsystem != "" {
}
if runerr != nil {
- log.Printf("%s: %v", containerID, runerr)
+ log.Printf("%s: %v", containerUUID, runerr)
return 1
}
return 0
}
-func loadEnv(rdr io.Reader) error {
- buf, err := ioutil.ReadAll(rdr)
+func startLocalKeepstore(configData ConfigData, logbuf io.Writer) (*exec.Cmd, error) {
+ if configData.Cluster == nil || configData.KeepBuffers < 1 {
+ return nil, nil
+ }
+ for uuid, vol := range configData.Cluster.Volumes {
+ if len(vol.AccessViaHosts) > 0 {
+ fmt.Fprintf(logbuf, "not starting a local keepstore process because a volume (%s) uses AccessViaHosts\n", uuid)
+ return nil, nil
+ }
+ if !vol.ReadOnly && vol.Replication < configData.Cluster.Collections.DefaultReplication {
+ fmt.Fprintf(logbuf, "not starting a local keepstore process because a writable volume (%s) has replication less than Collections.DefaultReplication (%d < %d)\n", uuid, vol.Replication, configData.Cluster.Collections.DefaultReplication)
+ return nil, nil
+ }
+ }
+
+ // Rather than have an alternate way to tell keepstore how
+ // many buffers to use when starting it this way, we just
+ // modify the cluster configuration that we feed it on stdin.
+ configData.Cluster.API.MaxKeepBlobBuffers = configData.KeepBuffers
+
+ ln, err := net.Listen("tcp", "localhost:0")
+ if err != nil {
+ return nil, err
+ }
+ _, port, err := net.SplitHostPort(ln.Addr().String())
+ if err != nil {
+ ln.Close()
+ return nil, err
+ }
+ ln.Close()
+ url := "http://localhost:" + port
+
+ fmt.Fprintf(logbuf, "starting keepstore on %s\n", url)
+
+ var confJSON bytes.Buffer
+ err = json.NewEncoder(&confJSON).Encode(arvados.Config{
+ Clusters: map[string]arvados.Cluster{
+ configData.Cluster.ClusterID: *configData.Cluster,
+ },
+ })
if err != nil {
- return fmt.Errorf("read stdin: %s", err)
+ return nil, err
}
- var env map[string]string
- err = json.Unmarshal(buf, &env)
+ cmd := exec.Command("/proc/self/exe", "keepstore", "-config=-")
+ if target, err := os.Readlink(cmd.Path); err == nil && strings.HasSuffix(target, ".test") {
+ // If we're a 'go test' process, running
+ // /proc/self/exe would start the test suite in a
+ // child process, which is not what we want.
+ cmd.Path, _ = exec.LookPath("go")
+ cmd.Args = append([]string{"go", "run", "../../cmd/arvados-server"}, cmd.Args[1:]...)
+ cmd.Env = os.Environ()
+ }
+ cmd.Stdin = &confJSON
+ cmd.Stdout = logbuf
+ cmd.Stderr = logbuf
+ cmd.Env = append(cmd.Env,
+ "GOGC=10",
+ "ARVADOS_SERVICE_INTERNAL_URL="+url)
+ err = cmd.Start()
if err != nil {
- return fmt.Errorf("decode stdin: %s", err)
+ return nil, fmt.Errorf("error starting keepstore process: %w", err)
}
- for k, v := range env {
- err = os.Setenv(k, v)
+ cmdExited := false
+ go func() {
+ cmd.Wait()
+ cmdExited = true
+ }()
+ ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(time.Second*10))
+ defer cancel()
+ poll := time.NewTicker(time.Second / 10)
+ defer poll.Stop()
+ client := http.Client{}
+ for range poll.C {
+ testReq, err := http.NewRequestWithContext(ctx, "GET", url+"/_health/ping", nil)
if err != nil {
- return fmt.Errorf("setenv(%q): %s", k, err)
+ return nil, err
+ }
+ testReq.Header.Set("Authorization", "Bearer "+configData.Cluster.ManagementToken)
+ resp, err := client.Do(testReq)
+ if err == nil {
+ resp.Body.Close()
+ if resp.StatusCode == http.StatusOK {
+ break
+ }
+ }
+ if cmdExited {
+ return nil, fmt.Errorf("keepstore child process exited")
+ }
+ if ctx.Err() != nil {
+ return nil, fmt.Errorf("timed out waiting for new keepstore process to report healthy")
}
}
- return nil
+ os.Setenv("ARVADOS_KEEP_SERVICES", url)
+ return cmd, nil
}
package crunchrun
import (
- "bufio"
"bytes"
"crypto/md5"
"encoding/json"
"fmt"
"io"
"io/ioutil"
- "net"
"os"
"os/exec"
+ "regexp"
"runtime/pprof"
- "sort"
"strings"
"sync"
"syscall"
"git.arvados.org/arvados.git/sdk/go/manifest"
"golang.org/x/net/context"
- dockertypes "github.com/docker/docker/api/types"
- dockercontainer "github.com/docker/docker/api/types/container"
- dockernetwork "github.com/docker/docker/api/types/network"
. "gopkg.in/check.v1"
)
TestingT(t)
}
-// Gocheck boilerplate
var _ = Suite(&TestSuite{})
type TestSuite struct {
- client *arvados.Client
- docker *TestDockerClient
- runner *ContainerRunner
+ client *arvados.Client
+ api *ArvTestClient
+ runner *ContainerRunner
+ executor *stubExecutor
+ keepmount string
+ testDispatcherKeepClient KeepTestClient
+ testContainerKeepClient KeepTestClient
}
func (s *TestSuite) SetUpTest(c *C) {
+ *brokenNodeHook = ""
s.client = arvados.NewClientFromEnv()
- s.docker = NewTestDockerClient()
+ s.executor = &stubExecutor{}
+ var err error
+ s.api = &ArvTestClient{}
+ s.runner, err = NewContainerRunner(s.client, s.api, &s.testDispatcherKeepClient, "zzzzz-zzzzz-zzzzzzzzzzzzzzz")
+ c.Assert(err, IsNil)
+ s.runner.executor = s.executor
+ s.runner.MkArvClient = func(token string) (IArvadosClient, IKeepClient, *arvados.Client, error) {
+ return s.api, &s.testContainerKeepClient, s.client, nil
+ }
+ s.runner.RunArvMount = func(cmd []string, tok string) (*exec.Cmd, error) {
+ s.runner.ArvMountPoint = s.keepmount
+ return nil, nil
+ }
+ s.keepmount = c.MkDir()
+ err = os.Mkdir(s.keepmount+"/by_id", 0755)
+ c.Assert(err, IsNil)
+ err = os.Mkdir(s.keepmount+"/by_id/"+arvadostest.DockerImage112PDH, 0755)
+ c.Assert(err, IsNil)
+ err = ioutil.WriteFile(s.keepmount+"/by_id/"+arvadostest.DockerImage112PDH+"/"+arvadostest.DockerImage112Filename, []byte("#notarealtarball"), 0644)
+ c.Assert(err, IsNil)
+ err = os.Mkdir(s.keepmount+"/by_id/"+fakeInputCollectionPDH, 0755)
+ c.Assert(err, IsNil)
+ err = ioutil.WriteFile(s.keepmount+"/by_id/"+fakeInputCollectionPDH+"/input.json", []byte(`{"input":true}`), 0644)
+ c.Assert(err, IsNil)
+ s.runner.ArvMountPoint = s.keepmount
}
type ArvTestClient struct {
}
type KeepTestClient struct {
- Called bool
- Content []byte
+ Called bool
+ Content []byte
+ StorageClasses []string
}
+type stubExecutor struct {
+ imageLoaded bool
+ loaded string
+ loadErr error
+ exitCode int
+ createErr error
+ created containerSpec
+ startErr error
+ waitSleep time.Duration
+ waitErr error
+ stopErr error
+ stopped bool
+ closed bool
+ runFunc func()
+ exit chan int
+}
+
+func (e *stubExecutor) LoadImage(imageID string, tarball string, container arvados.Container, keepMount string,
+ containerClient *arvados.Client) error {
+ e.loaded = tarball
+ return e.loadErr
+}
+func (e *stubExecutor) Runtime() string { return "stub" }
+func (e *stubExecutor) Create(spec containerSpec) error { e.created = spec; return e.createErr }
+func (e *stubExecutor) Start() error { e.exit = make(chan int, 1); go e.runFunc(); return e.startErr }
+func (e *stubExecutor) CgroupID() string { return "cgroupid" }
+func (e *stubExecutor) Stop() error { e.stopped = true; go func() { e.exit <- -1 }(); return e.stopErr }
+func (e *stubExecutor) Close() { e.closed = true }
+func (e *stubExecutor) Wait(context.Context) (int, error) {
+ return <-e.exit, e.waitErr
+}
+
+const fakeInputCollectionPDH = "ffffffffaaaaaaaa88888888eeeeeeee+1234"
+
var hwManifest = ". 82ab40c24fc8df01798e57ba66795bb1+841216+Aa124ac75e5168396c73c0a18eda641a4f41791c0@569fa8c3 0:841216:9c31ee32b3d15268a0754e8edc74d4f815ee014b693bc5109058e431dd5caea7.tar\n"
var hwPDH = "a45557269dcb65a6b78f9ac061c0850b+120"
var hwImageID = "9c31ee32b3d15268a0754e8edc74d4f815ee014b693bc5109058e431dd5caea7"
var fakeAuthUUID = "zzzzz-gj3su-55pqoyepgi2glem"
var fakeAuthToken = "a3ltuwzqcu2u4sc0q7yhpc2w7s00fdcqecg5d6e0u3pfohmbjt"
-type TestDockerClient struct {
- imageLoaded string
- logReader io.ReadCloser
- logWriter io.WriteCloser
- fn func(t *TestDockerClient)
- exitCode int
- stop chan bool
- cwd string
- env []string
- api *ArvTestClient
- realTemp string
- calledWait bool
- ctrExited bool
-}
-
-func NewTestDockerClient() *TestDockerClient {
- t := &TestDockerClient{}
- t.logReader, t.logWriter = io.Pipe()
- t.stop = make(chan bool, 1)
- t.cwd = "/"
- return t
-}
-
-type MockConn struct {
- net.Conn
-}
-
-func (m *MockConn) Write(b []byte) (int, error) {
- return len(b), nil
-}
-
-func NewMockConn() *MockConn {
- c := &MockConn{}
- return c
-}
-
-func (t *TestDockerClient) ContainerAttach(ctx context.Context, container string, options dockertypes.ContainerAttachOptions) (dockertypes.HijackedResponse, error) {
- return dockertypes.HijackedResponse{Conn: NewMockConn(), Reader: bufio.NewReader(t.logReader)}, nil
-}
-
-func (t *TestDockerClient) ContainerCreate(ctx context.Context, config *dockercontainer.Config, hostConfig *dockercontainer.HostConfig, networkingConfig *dockernetwork.NetworkingConfig, containerName string) (dockercontainer.ContainerCreateCreatedBody, error) {
- if config.WorkingDir != "" {
- t.cwd = config.WorkingDir
- }
- t.env = config.Env
- return dockercontainer.ContainerCreateCreatedBody{ID: "abcde"}, nil
-}
-
-func (t *TestDockerClient) ContainerStart(ctx context.Context, container string, options dockertypes.ContainerStartOptions) error {
- if t.exitCode == 3 {
- return errors.New(`Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:359: container init caused \"rootfs_linux.go:54: mounting \\\"/tmp/keep453790790/by_id/99999999999999999999999999999999+99999/myGenome\\\" to rootfs \\\"/tmp/docker/overlay2/9999999999999999999999999999999999999999999999999999999999999999/merged\\\" at \\\"/tmp/docker/overlay2/9999999999999999999999999999999999999999999999999999999999999999/merged/keep/99999999999999999999999999999999+99999/myGenome\\\" caused \\\"no such file or directory\\\"\""`)
- }
- if t.exitCode == 4 {
- return errors.New(`panic: standard_init_linux.go:175: exec user process caused "no such file or directory"`)
- }
- if t.exitCode == 5 {
- return errors.New(`Error response from daemon: Cannot start container 41f26cbc43bcc1280f4323efb1830a394ba8660c9d1c2b564ba42bf7f7694845: [8] System error: no such file or directory`)
- }
- if t.exitCode == 6 {
- return errors.New(`Error response from daemon: Cannot start container 58099cd76c834f3dc2a4fb76c8028f049ae6d4fdf0ec373e1f2cfea030670c2d: [8] System error: exec: "foobar": executable file not found in $PATH`)
- }
-
- if container == "abcde" {
- // t.fn gets executed in ContainerWait
- return nil
- }
- return errors.New("Invalid container id")
-}
-
-func (t *TestDockerClient) ContainerRemove(ctx context.Context, container string, options dockertypes.ContainerRemoveOptions) error {
- t.stop <- true
- return nil
-}
-
-func (t *TestDockerClient) ContainerWait(ctx context.Context, container string, condition dockercontainer.WaitCondition) (<-chan dockercontainer.ContainerWaitOKBody, <-chan error) {
- t.calledWait = true
- body := make(chan dockercontainer.ContainerWaitOKBody, 1)
- err := make(chan error)
- go func() {
- t.fn(t)
- body <- dockercontainer.ContainerWaitOKBody{StatusCode: int64(t.exitCode)}
- }()
- return body, err
-}
-
-func (t *TestDockerClient) ContainerInspect(ctx context.Context, id string) (c dockertypes.ContainerJSON, err error) {
- c.ContainerJSONBase = &dockertypes.ContainerJSONBase{}
- c.ID = "abcde"
- if t.ctrExited {
- c.State = &dockertypes.ContainerState{Status: "exited", Dead: true}
- } else {
- c.State = &dockertypes.ContainerState{Status: "running", Pid: 1234, Running: true}
- }
- return
-}
-
-func (t *TestDockerClient) ImageInspectWithRaw(ctx context.Context, image string) (dockertypes.ImageInspect, []byte, error) {
- if t.exitCode == 2 {
- return dockertypes.ImageInspect{}, nil, fmt.Errorf("Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?")
- }
-
- if t.imageLoaded == image {
- return dockertypes.ImageInspect{}, nil, nil
- }
- return dockertypes.ImageInspect{}, nil, errors.New("")
-}
-
-func (t *TestDockerClient) ImageLoad(ctx context.Context, input io.Reader, quiet bool) (dockertypes.ImageLoadResponse, error) {
- if t.exitCode == 2 {
- return dockertypes.ImageLoadResponse{}, fmt.Errorf("Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?")
- }
- _, err := io.Copy(ioutil.Discard, input)
- if err != nil {
- return dockertypes.ImageLoadResponse{}, err
- }
- t.imageLoaded = hwImageID
- return dockertypes.ImageLoadResponse{Body: ioutil.NopCloser(input)}, nil
-}
-
-func (*TestDockerClient) ImageRemove(ctx context.Context, image string, options dockertypes.ImageRemoveOptions) ([]dockertypes.ImageDeleteResponseItem, error) {
- return nil, nil
-}
-
func (client *ArvTestClient) Create(resourceType string,
parameters arvadosclient.Dict,
output interface{}) error {
} else {
j = []byte(`{
"command": ["sleep", "1"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "` + arvadostest.DockerImage112PDH + `",
"cwd": ".",
"environment": {},
"mounts": {"/tmp": {"kind": "tmp"}, "/json": {"kind": "json", "content": {"number": 123456789123456789}}},
return locator, nil
}
-func (client *KeepTestClient) PutB(buf []byte) (string, int, error) {
- client.Content = buf
- return fmt.Sprintf("%x+%d", md5.Sum(buf), len(buf)), len(buf), nil
+func (client *KeepTestClient) BlockWrite(_ context.Context, opts arvados.BlockWriteOptions) (arvados.BlockWriteResponse, error) {
+ client.Content = opts.Data
+ return arvados.BlockWriteResponse{
+ Locator: fmt.Sprintf("%x+%d", md5.Sum(opts.Data), len(opts.Data)),
+ }, nil
}
func (client *KeepTestClient) ReadAt(string, []byte, int) (int, error) {
client.Content = nil
}
+func (client *KeepTestClient) SetStorageClasses(sc []string) {
+ client.StorageClasses = sc
+}
+
type FileWrapper struct {
io.ReadCloser
len int64
}
func (s *TestSuite) TestLoadImage(c *C) {
- cr, err := NewContainerRunner(s.client, &ArvTestClient{},
- &KeepTestClient{}, s.docker, "zzzzz-zzzzz-zzzzzzzzzzzzzzz")
- c.Assert(err, IsNil)
-
- kc := &KeepTestClient{}
- defer kc.Close()
- cr.ContainerArvClient = &ArvTestClient{}
- cr.ContainerKeepClient = kc
-
- _, err = cr.Docker.ImageRemove(nil, hwImageID, dockertypes.ImageRemoveOptions{})
- c.Check(err, IsNil)
-
- _, _, err = cr.Docker.ImageInspectWithRaw(nil, hwImageID)
- c.Check(err, NotNil)
-
- cr.Container.ContainerImage = hwPDH
-
- // (1) Test loading image from keep
- c.Check(kc.Called, Equals, false)
- c.Check(cr.ContainerConfig.Image, Equals, "")
-
- err = cr.LoadImage()
-
- c.Check(err, IsNil)
- defer func() {
- cr.Docker.ImageRemove(nil, hwImageID, dockertypes.ImageRemoveOptions{})
- }()
-
- c.Check(kc.Called, Equals, true)
- c.Check(cr.ContainerConfig.Image, Equals, hwImageID)
-
- _, _, err = cr.Docker.ImageInspectWithRaw(nil, hwImageID)
- c.Check(err, IsNil)
+ s.runner.Container.ContainerImage = arvadostest.DockerImage112PDH
+ s.runner.Container.Mounts = map[string]arvados.Mount{
+ "/out": {Kind: "tmp", Writable: true},
+ }
+ s.runner.Container.OutputPath = "/out"
- // (2) Test using image that's already loaded
- kc.Called = false
- cr.ContainerConfig.Image = ""
+ _, err := s.runner.SetupMounts()
+ c.Assert(err, IsNil)
- err = cr.LoadImage()
+ imageID, err := s.runner.LoadImage()
c.Check(err, IsNil)
- c.Check(kc.Called, Equals, false)
- c.Check(cr.ContainerConfig.Image, Equals, hwImageID)
-
+ c.Check(s.executor.loaded, Matches, ".*"+regexp.QuoteMeta(arvadostest.DockerImage112Filename))
+ c.Check(imageID, Equals, strings.TrimSuffix(arvadostest.DockerImage112Filename, ".tar"))
+
+ s.runner.Container.ContainerImage = arvadostest.DockerImage112PDH
+ s.executor.imageLoaded = false
+ s.executor.loaded = ""
+ s.executor.loadErr = errors.New("bork")
+ imageID, err = s.runner.LoadImage()
+ c.Check(err, ErrorMatches, ".*bork")
+ c.Check(s.executor.loaded, Matches, ".*"+regexp.QuoteMeta(arvadostest.DockerImage112Filename))
+
+ s.runner.Container.ContainerImage = fakeInputCollectionPDH
+ s.executor.imageLoaded = false
+ s.executor.loaded = ""
+ s.executor.loadErr = nil
+ imageID, err = s.runner.LoadImage()
+ c.Check(err, ErrorMatches, "image collection does not include a \\.tar image file")
+ c.Check(s.executor.loaded, Equals, "")
}
type ArvErrorTestClient struct{}
return nil, errors.New("KeepError")
}
-func (*KeepErrorTestClient) PutB(buf []byte) (string, int, error) {
- return "", 0, errors.New("KeepError")
+func (*KeepErrorTestClient) BlockWrite(context.Context, arvados.BlockWriteOptions) (arvados.BlockWriteResponse, error) {
+ return arvados.BlockWriteResponse{}, errors.New("KeepError")
}
func (*KeepErrorTestClient) LocalLocator(string) (string, error) {
return ErrorReader{}, nil
}
-func (s *TestSuite) TestLoadImageArvError(c *C) {
- // (1) Arvados error
- kc := &KeepTestClient{}
- defer kc.Close()
- cr, err := NewContainerRunner(s.client, &ArvErrorTestClient{}, kc, nil, "zzzzz-zzzzz-zzzzzzzzzzzzzzz")
- c.Assert(err, IsNil)
-
- cr.ContainerArvClient = &ArvErrorTestClient{}
- cr.ContainerKeepClient = &KeepTestClient{}
-
- cr.Container.ContainerImage = hwPDH
-
- err = cr.LoadImage()
- c.Check(err.Error(), Equals, "While getting container image collection: ArvError")
-}
-
-func (s *TestSuite) TestLoadImageKeepError(c *C) {
- // (2) Keep error
- kc := &KeepErrorTestClient{}
- cr, err := NewContainerRunner(s.client, &ArvTestClient{}, kc, s.docker, "zzzzz-zzzzz-zzzzzzzzzzzzzzz")
- c.Assert(err, IsNil)
-
- cr.ContainerArvClient = &ArvTestClient{}
- cr.ContainerKeepClient = &KeepErrorTestClient{}
-
- cr.Container.ContainerImage = hwPDH
-
- err = cr.LoadImage()
- c.Assert(err, NotNil)
- c.Check(err.Error(), Equals, "While creating ManifestFileReader for container image: KeepError")
-}
-
-func (s *TestSuite) TestLoadImageCollectionError(c *C) {
- // (3) Collection doesn't contain image
- kc := &KeepReadErrorTestClient{}
- cr, err := NewContainerRunner(s.client, &ArvTestClient{}, kc, nil, "zzzzz-zzzzz-zzzzzzzzzzzzzzz")
- c.Assert(err, IsNil)
- cr.Container.ContainerImage = otherPDH
-
- cr.ContainerArvClient = &ArvTestClient{}
- cr.ContainerKeepClient = &KeepReadErrorTestClient{}
-
- err = cr.LoadImage()
- c.Check(err.Error(), Equals, "First file in the container image collection does not end in .tar")
-}
-
-func (s *TestSuite) TestLoadImageKeepReadError(c *C) {
- // (4) Collection doesn't contain image
- kc := &KeepReadErrorTestClient{}
- cr, err := NewContainerRunner(s.client, &ArvTestClient{}, kc, s.docker, "zzzzz-zzzzz-zzzzzzzzzzzzzzz")
- c.Assert(err, IsNil)
- cr.Container.ContainerImage = hwPDH
- cr.ContainerArvClient = &ArvTestClient{}
- cr.ContainerKeepClient = &KeepReadErrorTestClient{}
-
- err = cr.LoadImage()
- c.Check(err, NotNil)
-}
-
type ClosableBuffer struct {
bytes.Buffer
}
}
func (s *TestSuite) TestRunContainer(c *C) {
- s.docker.fn = func(t *TestDockerClient) {
- t.logWriter.Write(dockerLog(1, "Hello world\n"))
- t.logWriter.Close()
+ s.executor.runFunc = func() {
+ fmt.Fprintf(s.executor.created.Stdout, "Hello world\n")
+ s.executor.exit <- 0
}
- kc := &KeepTestClient{}
- defer kc.Close()
- cr, err := NewContainerRunner(s.client, &ArvTestClient{}, kc, s.docker, "zzzzz-zzzzz-zzzzzzzzzzzzzzz")
- c.Assert(err, IsNil)
-
- cr.ContainerArvClient = &ArvTestClient{}
- cr.ContainerKeepClient = &KeepTestClient{}
var logs TestLogs
- cr.NewLogWriter = logs.NewTestLoggingWriter
- cr.Container.ContainerImage = hwPDH
- cr.Container.Command = []string{"./hw"}
- err = cr.LoadImage()
- c.Check(err, IsNil)
+ s.runner.NewLogWriter = logs.NewTestLoggingWriter
+ s.runner.Container.ContainerImage = arvadostest.DockerImage112PDH
+ s.runner.Container.Command = []string{"./hw"}
+ s.runner.Container.OutputStorageClasses = []string{"default"}
- err = cr.CreateContainer()
- c.Check(err, IsNil)
+ imageID, err := s.runner.LoadImage()
+ c.Assert(err, IsNil)
- err = cr.StartContainer()
- c.Check(err, IsNil)
+ err = s.runner.CreateContainer(imageID, nil)
+ c.Assert(err, IsNil)
- err = cr.WaitFinish()
- c.Check(err, IsNil)
+ err = s.runner.StartContainer()
+ c.Assert(err, IsNil)
- c.Check(strings.HasSuffix(logs.Stdout.String(), "Hello world\n"), Equals, true)
+ err = s.runner.WaitFinish()
+ c.Assert(err, IsNil)
+
+ c.Check(logs.Stdout.String(), Matches, ".*Hello world\n")
c.Check(logs.Stderr.String(), Equals, "")
}
api := &ArvTestClient{}
kc := &KeepTestClient{}
defer kc.Close()
- cr, err := NewContainerRunner(s.client, api, kc, nil, "zzzzz-zzzzz-zzzzzzzzzzzzzzz")
+ cr, err := NewContainerRunner(s.client, api, kc, "zzzzz-zzzzz-zzzzzzzzzzzzzzz")
c.Assert(err, IsNil)
cr.CrunchLog.Timestamper = (&TestTimestamper{}).Timestamp
api := &ArvTestClient{}
kc := &KeepTestClient{}
defer kc.Close()
- cr, err := NewContainerRunner(s.client, api, kc, nil, "zzzzz-zzzzz-zzzzzzzzzzzzzzz")
+ cr, err := NewContainerRunner(s.client, api, kc, "zzzzz-zzzzz-zzzzzzzzzzzzzzz")
c.Assert(err, IsNil)
err = cr.UpdateContainerRunning()
api := &ArvTestClient{}
kc := &KeepTestClient{}
defer kc.Close()
- cr, err := NewContainerRunner(s.client, api, kc, nil, "zzzzz-zzzzz-zzzzzzzzzzzzzzz")
+ cr, err := NewContainerRunner(s.client, api, kc, "zzzzz-zzzzz-zzzzzzzzzzzzzzz")
c.Assert(err, IsNil)
cr.LogsPDH = new(string)
api := &ArvTestClient{}
kc := &KeepTestClient{}
defer kc.Close()
- cr, err := NewContainerRunner(s.client, api, kc, nil, "zzzzz-zzzzz-zzzzzzzzzzzzzzz")
+ cr, err := NewContainerRunner(s.client, api, kc, "zzzzz-zzzzz-zzzzzzzzzzzzzzz")
c.Assert(err, IsNil)
cr.cCancelled = true
cr.finalState = "Cancelled"
-// Used by the TestFullRun*() test below to DRY up boilerplate setup to do full
-// dress rehearsal of the Run() function, starting from a JSON container record.
+// Used by the TestFullRun*() tests below to DRY up the boilerplate setup for a full
+// dress rehearsal of the Run() function, starting from a JSON container record.
-func (s *TestSuite) fullRunHelper(c *C, record string, extraMounts []string, exitCode int, fn func(t *TestDockerClient)) (api *ArvTestClient, cr *ContainerRunner, realTemp string) {
- rec := arvados.Container{}
- err := json.Unmarshal([]byte(record), &rec)
- c.Check(err, IsNil)
+func (s *TestSuite) fullRunHelper(c *C, record string, extraMounts []string, exitCode int, fn func()) (*ArvTestClient, *ContainerRunner, string) {
+ err := json.Unmarshal([]byte(record), &s.api.Container)
+ c.Assert(err, IsNil)
+ initialState := s.api.Container.State
var sm struct {
SecretMounts map[string]arvados.Mount `json:"secret_mounts"`
err = json.Unmarshal([]byte(record), &sm)
c.Check(err, IsNil)
secretMounts, err := json.Marshal(sm)
- c.Logf("%s %q", sm, secretMounts)
- c.Check(err, IsNil)
-
- s.docker.exitCode = exitCode
- s.docker.fn = fn
- s.docker.ImageRemove(nil, hwImageID, dockertypes.ImageRemoveOptions{})
-
- api = &ArvTestClient{Container: rec}
- s.docker.api = api
- kc := &KeepTestClient{}
- defer kc.Close()
- cr, err = NewContainerRunner(s.client, api, kc, s.docker, "zzzzz-zzzzz-zzzzzzzzzzzzzzz")
c.Assert(err, IsNil)
- s.runner = cr
- cr.statInterval = 100 * time.Millisecond
- cr.containerWatchdogInterval = time.Second
- am := &ArvMountCmdLine{}
- cr.RunArvMount = am.ArvMountTest
+ c.Logf("SecretMounts decoded %v json %q", sm, secretMounts)
- realTemp, err = ioutil.TempDir("", "crunchrun_test1-")
- c.Assert(err, IsNil)
- defer os.RemoveAll(realTemp)
+ s.executor.runFunc = func() {
+ fn()
+ s.executor.exit <- exitCode
+ }
- s.docker.realTemp = realTemp
+ s.runner.statInterval = 100 * time.Millisecond
+ s.runner.containerWatchdogInterval = time.Second
+ am := &ArvMountCmdLine{}
+ s.runner.RunArvMount = am.ArvMountTest
+ realTemp := c.MkDir()
tempcount := 0
- cr.MkTempDir = func(_ string, prefix string) (string, error) {
+ s.runner.MkTempDir = func(_, prefix string) (string, error) {
tempcount++
d := fmt.Sprintf("%s/%s%d", realTemp, prefix, tempcount)
err := os.Mkdir(d, os.ModePerm)
}
return d, err
}
- cr.MkArvClient = func(token string) (IArvadosClient, IKeepClient, *arvados.Client, error) {
- return &ArvTestClient{secretMounts: secretMounts}, &KeepTestClient{}, nil, nil
+ s.runner.MkArvClient = func(token string) (IArvadosClient, IKeepClient, *arvados.Client, error) {
+ return &ArvTestClient{secretMounts: secretMounts}, &s.testContainerKeepClient, nil, nil
}
-if extraMounts != nil && len(extraMounts) > 0 {
+if len(extraMounts) > 0 {
- err := cr.SetupArvMountPoint("keep")
+ err := s.runner.SetupArvMountPoint("keep")
c.Check(err, IsNil)
for _, m := range extraMounts {
- os.MkdirAll(cr.ArvMountPoint+"/by_id/"+m, os.ModePerm)
+ os.MkdirAll(s.runner.ArvMountPoint+"/by_id/"+m, os.ModePerm)
}
}
- err = cr.Run()
- if api.CalledWith("container.state", "Complete") != nil {
+ err = s.runner.Run()
+ if s.api.CalledWith("container.state", "Complete") != nil {
c.Check(err, IsNil)
}
- if exitCode != 2 {
- c.Check(api.WasSetRunning, Equals, true)
+ if s.executor.loadErr == nil && s.executor.createErr == nil && initialState != "Running" {
+ c.Check(s.api.WasSetRunning, Equals, true)
var lastupdate arvadosclient.Dict
- for _, content := range api.Content {
+ for _, content := range s.api.Content {
if content["container"] != nil {
lastupdate = content["container"].(arvadosclient.Dict)
}
}
if lastupdate["log"] == nil {
- c.Errorf("no container update with non-nil log -- updates were: %v", api.Content)
+ c.Errorf("no container update with non-nil log -- updates were: %v", s.api.Content)
}
}
if err != nil {
- for k, v := range api.Logs {
+ for k, v := range s.api.Logs {
c.Log(k)
c.Log(v.String())
}
}
- return
+ return s.api, s.runner, realTemp
}
func (s *TestSuite) TestFullRunHello(c *C) {
- api, _, _ := s.fullRunHelper(c, `{
+ s.runner.enableMemoryLimit = true
+ s.runner.networkMode = "default"
+ s.fullRunHelper(c, `{
"command": ["echo", "hello world"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "`+arvadostest.DockerImage112PDH+`",
"cwd": ".",
- "environment": {},
+ "environment": {"foo":"bar","baz":"waz"},
"mounts": {"/tmp": {"kind": "tmp"} },
"output_path": "/tmp",
"priority": 1,
- "runtime_constraints": {},
- "state": "Locked"
-}`, nil, 0, func(t *TestDockerClient) {
- t.logWriter.Write(dockerLog(1, "hello world\n"))
- t.logWriter.Close()
+ "runtime_constraints": {"vcpus":1,"ram":1000000},
+ "state": "Locked",
+ "output_storage_classes": ["default"]
+}`, nil, 0, func() {
+ c.Check(s.executor.created.Command, DeepEquals, []string{"echo", "hello world"})
+ c.Check(s.executor.created.Image, Equals, "sha256:d8309758b8fe2c81034ffc8a10c36460b77db7bc5e7b448c4e5b684f9d95a678")
+ c.Check(s.executor.created.Env, DeepEquals, map[string]string{"foo": "bar", "baz": "waz"})
+ c.Check(s.executor.created.VCPUs, Equals, 1)
+ c.Check(s.executor.created.RAM, Equals, int64(1000000))
+ c.Check(s.executor.created.NetworkMode, Equals, "default")
+ c.Check(s.executor.created.EnableNetwork, Equals, false)
+ c.Check(s.executor.created.CUDADeviceCount, Equals, 0)
+ fmt.Fprintln(s.executor.created.Stdout, "hello world")
})
- c.Check(api.CalledWith("container.exit_code", 0), NotNil)
- c.Check(api.CalledWith("container.state", "Complete"), NotNil)
- c.Check(strings.HasSuffix(api.Logs["stdout"].String(), "hello world\n"), Equals, true)
-
+ c.Check(s.api.CalledWith("container.exit_code", 0), NotNil)
+ c.Check(s.api.CalledWith("container.state", "Complete"), NotNil)
+ c.Check(s.api.Logs["stdout"].String(), Matches, ".*hello world\n")
+ c.Check(s.testDispatcherKeepClient.StorageClasses, DeepEquals, []string{"default"})
+ c.Check(s.testContainerKeepClient.StorageClasses, DeepEquals, []string{"default"})
}
func (s *TestSuite) TestRunAlreadyRunning(c *C) {
var ran bool
- api, _, _ := s.fullRunHelper(c, `{
+ s.fullRunHelper(c, `{
"command": ["sleep", "3"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "`+arvadostest.DockerImage112PDH+`",
"cwd": ".",
"environment": {},
"mounts": {"/tmp": {"kind": "tmp"} },
"runtime_constraints": {},
"scheduling_parameters":{"max_run_time": 1},
"state": "Running"
-}`, nil, 2, func(t *TestDockerClient) {
+}`, nil, 2, func() {
ran = true
})
-
- c.Check(api.CalledWith("container.state", "Cancelled"), IsNil)
- c.Check(api.CalledWith("container.state", "Complete"), IsNil)
+ c.Check(s.api.CalledWith("container.state", "Cancelled"), IsNil)
+ c.Check(s.api.CalledWith("container.state", "Complete"), IsNil)
c.Check(ran, Equals, false)
}
func (s *TestSuite) TestRunTimeExceeded(c *C) {
- api, _, _ := s.fullRunHelper(c, `{
+ s.fullRunHelper(c, `{
"command": ["sleep", "3"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "`+arvadostest.DockerImage112PDH+`",
"cwd": ".",
"environment": {},
"mounts": {"/tmp": {"kind": "tmp"} },
"runtime_constraints": {},
"scheduling_parameters":{"max_run_time": 1},
"state": "Locked"
-}`, nil, 0, func(t *TestDockerClient) {
+}`, nil, 0, func() {
time.Sleep(3 * time.Second)
- t.logWriter.Close()
})
- c.Check(api.CalledWith("container.state", "Cancelled"), NotNil)
- c.Check(api.Logs["crunch-run"].String(), Matches, "(?ms).*maximum run time exceeded.*")
+ c.Check(s.api.CalledWith("container.state", "Cancelled"), NotNil)
+ c.Check(s.api.Logs["crunch-run"].String(), Matches, "(?ms).*maximum run time exceeded.*")
}
func (s *TestSuite) TestContainerWaitFails(c *C) {
- api, _, _ := s.fullRunHelper(c, `{
+ s.fullRunHelper(c, `{
"command": ["sleep", "3"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "`+arvadostest.DockerImage112PDH+`",
"cwd": ".",
"mounts": {"/tmp": {"kind": "tmp"} },
"output_path": "/tmp",
"priority": 1,
"state": "Locked"
-}`, nil, 0, func(t *TestDockerClient) {
- t.ctrExited = true
- time.Sleep(10 * time.Second)
- t.logWriter.Close()
+}`, nil, 0, func() {
+ s.executor.waitErr = errors.New("Container is not running")
})
- c.Check(api.CalledWith("container.state", "Cancelled"), NotNil)
- c.Check(api.Logs["crunch-run"].String(), Matches, "(?ms).*Container is not running.*")
+ c.Check(s.api.CalledWith("container.state", "Cancelled"), NotNil)
+ c.Check(s.api.Logs["crunch-run"].String(), Matches, "(?ms).*Container is not running.*")
}
func (s *TestSuite) TestCrunchstat(c *C) {
- api, _, _ := s.fullRunHelper(c, `{
+ s.fullRunHelper(c, `{
"command": ["sleep", "1"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "`+arvadostest.DockerImage112PDH+`",
"cwd": ".",
"environment": {},
"mounts": {"/tmp": {"kind": "tmp"} },
"priority": 1,
"runtime_constraints": {},
"state": "Locked"
- }`, nil, 0, func(t *TestDockerClient) {
+ }`, nil, 0, func() {
time.Sleep(time.Second)
- t.logWriter.Close()
})
- c.Check(api.CalledWith("container.exit_code", 0), NotNil)
- c.Check(api.CalledWith("container.state", "Complete"), NotNil)
+ c.Check(s.api.CalledWith("container.exit_code", 0), NotNil)
+ c.Check(s.api.CalledWith("container.state", "Complete"), NotNil)
// We didn't actually start a container, so crunchstat didn't
// find accounting files and therefore didn't log any stats.
// It should have logged a "can't find accounting files"
// message after one poll interval, though, so we can confirm
// it's alive:
- c.Assert(api.Logs["crunchstat"], NotNil)
- c.Check(api.Logs["crunchstat"].String(), Matches, `(?ms).*cgroup stats files have not appeared after 100ms.*`)
+ c.Assert(s.api.Logs["crunchstat"], NotNil)
+ c.Check(s.api.Logs["crunchstat"].String(), Matches, `(?ms).*cgroup stats files have not appeared after 100ms.*`)
// The "files never appeared" log assures us that we called
// (*crunchstat.Reporter)Stop(), and that we set it up with
-// the correct container ID "abcde":
+// the correct cgroup ID "cgroupid":
- c.Check(api.Logs["crunchstat"].String(), Matches, `(?ms).*cgroup stats files never appeared for abcde\n`)
+ c.Check(s.api.Logs["crunchstat"].String(), Matches, `(?ms).*cgroup stats files never appeared for cgroupid\n`)
}
func (s *TestSuite) TestNodeInfoLog(c *C) {
os.Setenv("SLURMD_NODENAME", "compute2")
- api, _, _ := s.fullRunHelper(c, `{
+ s.fullRunHelper(c, `{
"command": ["sleep", "1"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "`+arvadostest.DockerImage112PDH+`",
"cwd": ".",
"environment": {},
"mounts": {"/tmp": {"kind": "tmp"} },
"runtime_constraints": {},
"state": "Locked"
}`, nil, 0,
- func(t *TestDockerClient) {
+ func() {
time.Sleep(time.Second)
- t.logWriter.Close()
})
- c.Check(api.CalledWith("container.exit_code", 0), NotNil)
- c.Check(api.CalledWith("container.state", "Complete"), NotNil)
+ c.Check(s.api.CalledWith("container.exit_code", 0), NotNil)
+ c.Check(s.api.CalledWith("container.state", "Complete"), NotNil)
- c.Assert(api.Logs["node"], NotNil)
- json := api.Logs["node"].String()
+ c.Assert(s.api.Logs["node"], NotNil)
+ json := s.api.Logs["node"].String()
c.Check(json, Matches, `(?ms).*"uuid": *"zzzzz-7ekkf-2z3mc76g2q73aio".*`)
c.Check(json, Matches, `(?ms).*"total_cpu_cores": *16.*`)
c.Check(json, Not(Matches), `(?ms).*"info":.*`)
- c.Assert(api.Logs["node-info"], NotNil)
- json = api.Logs["node-info"].String()
+ c.Assert(s.api.Logs["node-info"], NotNil)
+ json = s.api.Logs["node-info"].String()
c.Check(json, Matches, `(?ms).*Host Information.*`)
c.Check(json, Matches, `(?ms).*CPU Information.*`)
c.Check(json, Matches, `(?ms).*Memory Information.*`)
c.Check(json, Matches, `(?ms).*Disk INodes.*`)
}
+func (s *TestSuite) TestLogVersionAndRuntime(c *C) {
+ s.fullRunHelper(c, `{
+ "command": ["sleep", "1"],
+ "container_image": "`+arvadostest.DockerImage112PDH+`",
+ "cwd": ".",
+ "environment": {},
+ "mounts": {"/tmp": {"kind": "tmp"} },
+ "output_path": "/tmp",
+ "priority": 1,
+ "runtime_constraints": {},
+ "state": "Locked"
+ }`, nil, 0,
+ func() {
+ })
+
+ c.Assert(s.api.Logs["crunch-run"], NotNil)
+ c.Check(s.api.Logs["crunch-run"].String(), Matches, `(?ms).*crunch-run \S+ \(go\S+\) start.*`)
+ c.Check(s.api.Logs["crunch-run"].String(), Matches, `(?ms).*Executing container 'zzzzz-zzzzz-zzzzzzzzzzzzzzz' using stub runtime.*`)
+}
+
func (s *TestSuite) TestContainerRecordLog(c *C) {
- api, _, _ := s.fullRunHelper(c, `{
+ s.fullRunHelper(c, `{
"command": ["sleep", "1"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "`+arvadostest.DockerImage112PDH+`",
"cwd": ".",
"environment": {},
"mounts": {"/tmp": {"kind": "tmp"} },
"runtime_constraints": {},
"state": "Locked"
}`, nil, 0,
- func(t *TestDockerClient) {
+ func() {
time.Sleep(time.Second)
- t.logWriter.Close()
})
- c.Check(api.CalledWith("container.exit_code", 0), NotNil)
- c.Check(api.CalledWith("container.state", "Complete"), NotNil)
+ c.Check(s.api.CalledWith("container.exit_code", 0), NotNil)
+ c.Check(s.api.CalledWith("container.state", "Complete"), NotNil)
- c.Assert(api.Logs["container"], NotNil)
- c.Check(api.Logs["container"].String(), Matches, `(?ms).*container_image.*`)
+ c.Assert(s.api.Logs["container"], NotNil)
+ c.Check(s.api.Logs["container"].String(), Matches, `(?ms).*container_image.*`)
}
func (s *TestSuite) TestFullRunStderr(c *C) {
- api, _, _ := s.fullRunHelper(c, `{
+ s.fullRunHelper(c, `{
"command": ["/bin/sh", "-c", "echo hello ; echo world 1>&2 ; exit 1"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "`+arvadostest.DockerImage112PDH+`",
"cwd": ".",
"environment": {},
"mounts": {"/tmp": {"kind": "tmp"} },
"priority": 1,
"runtime_constraints": {},
"state": "Locked"
-}`, nil, 1, func(t *TestDockerClient) {
- t.logWriter.Write(dockerLog(1, "hello\n"))
- t.logWriter.Write(dockerLog(2, "world\n"))
- t.logWriter.Close()
+}`, nil, 1, func() {
+ fmt.Fprintln(s.executor.created.Stdout, "hello")
+ fmt.Fprintln(s.executor.created.Stderr, "world")
})
- final := api.CalledWith("container.state", "Complete")
+ final := s.api.CalledWith("container.state", "Complete")
c.Assert(final, NotNil)
c.Check(final["container"].(arvadosclient.Dict)["exit_code"], Equals, 1)
c.Check(final["container"].(arvadosclient.Dict)["log"], NotNil)
- c.Check(strings.HasSuffix(api.Logs["stdout"].String(), "hello\n"), Equals, true)
- c.Check(strings.HasSuffix(api.Logs["stderr"].String(), "world\n"), Equals, true)
+ c.Check(s.api.Logs["stdout"].String(), Matches, ".*hello\n")
+ c.Check(s.api.Logs["stderr"].String(), Matches, ".*world\n")
}
func (s *TestSuite) TestFullRunDefaultCwd(c *C) {
- api, _, _ := s.fullRunHelper(c, `{
+ s.fullRunHelper(c, `{
"command": ["pwd"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "`+arvadostest.DockerImage112PDH+`",
"cwd": ".",
"environment": {},
"mounts": {"/tmp": {"kind": "tmp"} },
"priority": 1,
"runtime_constraints": {},
"state": "Locked"
-}`, nil, 0, func(t *TestDockerClient) {
- t.logWriter.Write(dockerLog(1, t.cwd+"\n"))
- t.logWriter.Close()
+}`, nil, 0, func() {
+ fmt.Fprintf(s.executor.created.Stdout, "workdir=%q", s.executor.created.WorkingDir)
})
- c.Check(api.CalledWith("container.exit_code", 0), NotNil)
- c.Check(api.CalledWith("container.state", "Complete"), NotNil)
- c.Log(api.Logs["stdout"])
- c.Check(strings.HasSuffix(api.Logs["stdout"].String(), "/\n"), Equals, true)
+ c.Check(s.api.CalledWith("container.exit_code", 0), NotNil)
+ c.Check(s.api.CalledWith("container.state", "Complete"), NotNil)
+ c.Log(s.api.Logs["stdout"])
+ c.Check(s.api.Logs["stdout"].String(), Matches, `.*workdir=""\n`)
}
func (s *TestSuite) TestFullRunSetCwd(c *C) {
- api, _, _ := s.fullRunHelper(c, `{
+ s.fullRunHelper(c, `{
"command": ["pwd"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "`+arvadostest.DockerImage112PDH+`",
"cwd": "/bin",
"environment": {},
"mounts": {"/tmp": {"kind": "tmp"} },
"priority": 1,
"runtime_constraints": {},
"state": "Locked"
-}`, nil, 0, func(t *TestDockerClient) {
- t.logWriter.Write(dockerLog(1, t.cwd+"\n"))
- t.logWriter.Close()
+}`, nil, 0, func() {
+ fmt.Fprintln(s.executor.created.Stdout, s.executor.created.WorkingDir)
})
- c.Check(api.CalledWith("container.exit_code", 0), NotNil)
- c.Check(api.CalledWith("container.state", "Complete"), NotNil)
- c.Check(strings.HasSuffix(api.Logs["stdout"].String(), "/bin\n"), Equals, true)
+ c.Check(s.api.CalledWith("container.exit_code", 0), NotNil)
+ c.Check(s.api.CalledWith("container.state", "Complete"), NotNil)
+ c.Check(s.api.Logs["stdout"].String(), Matches, ".*/bin\n")
}
-func (s *TestSuite) TestStopOnSignal(c *C) {
- s.testStopContainer(c, func(cr *ContainerRunner) {
- go func() {
- for !s.docker.calledWait {
- time.Sleep(time.Millisecond)
- }
- cr.SigChan <- syscall.SIGINT
- }()
+func (s *TestSuite) TestFullRunSetOutputStorageClasses(c *C) {
+ s.fullRunHelper(c, `{
+ "command": ["pwd"],
+ "container_image": "`+arvadostest.DockerImage112PDH+`",
+ "cwd": "/bin",
+ "environment": {},
+ "mounts": {"/tmp": {"kind": "tmp"} },
+ "output_path": "/tmp",
+ "priority": 1,
+ "runtime_constraints": {},
+ "state": "Locked",
+ "output_storage_classes": ["foo", "bar"]
+}`, nil, 0, func() {
+ fmt.Fprintln(s.executor.created.Stdout, s.executor.created.WorkingDir)
})
+
+ c.Check(s.api.CalledWith("container.exit_code", 0), NotNil)
+ c.Check(s.api.CalledWith("container.state", "Complete"), NotNil)
+ c.Check(s.api.Logs["stdout"].String(), Matches, ".*/bin\n")
+ c.Check(s.testDispatcherKeepClient.StorageClasses, DeepEquals, []string{"foo", "bar"})
+ c.Check(s.testContainerKeepClient.StorageClasses, DeepEquals, []string{"foo", "bar"})
}
-func (s *TestSuite) TestStopOnArvMountDeath(c *C) {
- s.testStopContainer(c, func(cr *ContainerRunner) {
- cr.ArvMountExit = make(chan error)
- go func() {
- cr.ArvMountExit <- exec.Command("true").Run()
- close(cr.ArvMountExit)
- }()
+func (s *TestSuite) TestEnableCUDADeviceCount(c *C) {
+ s.fullRunHelper(c, `{
+ "command": ["pwd"],
+ "container_image": "`+arvadostest.DockerImage112PDH+`",
+ "cwd": "/bin",
+ "environment": {},
+ "mounts": {"/tmp": {"kind": "tmp"} },
+ "output_path": "/tmp",
+ "priority": 1,
+ "runtime_constraints": {"cuda_device_count": 2},
+ "state": "Locked",
+ "output_storage_classes": ["foo", "bar"]
+}`, nil, 0, func() {
+ fmt.Fprintln(s.executor.created.Stdout, "ok")
+ })
+ c.Check(s.executor.created.CUDADeviceCount, Equals, 2)
+}
+
+func (s *TestSuite) TestEnableCUDAHardwareCapability(c *C) {
+ s.fullRunHelper(c, `{
+ "command": ["pwd"],
+ "container_image": "`+arvadostest.DockerImage112PDH+`",
+ "cwd": "/bin",
+ "environment": {},
+ "mounts": {"/tmp": {"kind": "tmp"} },
+ "output_path": "/tmp",
+ "priority": 1,
+ "runtime_constraints": {"cuda_hardware_capability": "foo"},
+ "state": "Locked",
+ "output_storage_classes": ["foo", "bar"]
+}`, nil, 0, func() {
+ fmt.Fprintln(s.executor.created.Stdout, "ok")
})
+ c.Check(s.executor.created.CUDADeviceCount, Equals, 1)
+}
+
+func (s *TestSuite) TestStopOnSignal(c *C) {
+ s.executor.runFunc = func() {
+ s.executor.created.Stdout.Write([]byte("foo\n"))
+ s.runner.SigChan <- syscall.SIGINT
+ }
+ s.testStopContainer(c)
}
-func (s *TestSuite) testStopContainer(c *C, setup func(cr *ContainerRunner)) {
+func (s *TestSuite) TestStopOnArvMountDeath(c *C) {
+ s.executor.runFunc = func() {
+ s.executor.created.Stdout.Write([]byte("foo\n"))
+ s.runner.ArvMountExit <- nil
+ close(s.runner.ArvMountExit)
+ }
+ s.runner.ArvMountExit = make(chan error)
+ s.testStopContainer(c)
+}
+
+func (s *TestSuite) testStopContainer(c *C) {
record := `{
"command": ["/bin/sh", "-c", "echo foo && sleep 30 && echo bar"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "` + arvadostest.DockerImage112PDH + `",
"cwd": ".",
"environment": {},
"mounts": {"/tmp": {"kind": "tmp"} },
"state": "Locked"
}`
- rec := arvados.Container{}
- err := json.Unmarshal([]byte(record), &rec)
- c.Check(err, IsNil)
-
- s.docker.fn = func(t *TestDockerClient) {
- <-t.stop
- t.logWriter.Write(dockerLog(1, "foo\n"))
- t.logWriter.Close()
- }
- s.docker.ImageRemove(nil, hwImageID, dockertypes.ImageRemoveOptions{})
-
- api := &ArvTestClient{Container: rec}
- kc := &KeepTestClient{}
- defer kc.Close()
- cr, err := NewContainerRunner(s.client, api, kc, s.docker, "zzzzz-zzzzz-zzzzzzzzzzzzzzz")
+ err := json.Unmarshal([]byte(record), &s.api.Container)
c.Assert(err, IsNil)
- cr.RunArvMount = func([]string, string) (*exec.Cmd, error) { return nil, nil }
- cr.MkArvClient = func(token string) (IArvadosClient, IKeepClient, *arvados.Client, error) {
+
+ s.runner.RunArvMount = func([]string, string) (*exec.Cmd, error) { return nil, nil }
+ s.runner.MkArvClient = func(token string) (IArvadosClient, IKeepClient, *arvados.Client, error) {
return &ArvTestClient{}, &KeepTestClient{}, nil, nil
}
- setup(cr)
done := make(chan error)
go func() {
- done <- cr.Run()
+ done <- s.runner.Run()
}()
select {
case <-time.After(20 * time.Second):
case err = <-done:
c.Check(err, IsNil)
}
- for k, v := range api.Logs {
+ for k, v := range s.api.Logs {
c.Log(k)
- c.Log(v.String())
+ c.Log(v.String(), "\n")
}
- c.Check(api.CalledWith("container.log", nil), NotNil)
- c.Check(api.CalledWith("container.state", "Cancelled"), NotNil)
- c.Check(api.Logs["stdout"].String(), Matches, "(?ms).*foo\n$")
+ c.Check(s.api.CalledWith("container.log", nil), NotNil)
+ c.Check(s.api.CalledWith("container.state", "Cancelled"), NotNil)
+ c.Check(s.api.Logs["stdout"].String(), Matches, "(?ms).*foo\n$")
}
func (s *TestSuite) TestFullRunSetEnv(c *C) {
- api, _, _ := s.fullRunHelper(c, `{
+ s.fullRunHelper(c, `{
"command": ["/bin/sh", "-c", "echo $FROBIZ"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "`+arvadostest.DockerImage112PDH+`",
"cwd": "/bin",
"environment": {"FROBIZ": "bilbo"},
"mounts": {"/tmp": {"kind": "tmp"} },
"priority": 1,
"runtime_constraints": {},
"state": "Locked"
-}`, nil, 0, func(t *TestDockerClient) {
- t.logWriter.Write(dockerLog(1, t.env[0][7:]+"\n"))
- t.logWriter.Close()
+}`, nil, 0, func() {
+ fmt.Fprintf(s.executor.created.Stdout, "%v", s.executor.created.Env)
})
- c.Check(api.CalledWith("container.exit_code", 0), NotNil)
- c.Check(api.CalledWith("container.state", "Complete"), NotNil)
- c.Check(strings.HasSuffix(api.Logs["stdout"].String(), "bilbo\n"), Equals, true)
+ c.Check(s.api.CalledWith("container.exit_code", 0), NotNil)
+ c.Check(s.api.CalledWith("container.state", "Complete"), NotNil)
+ c.Check(s.api.Logs["stdout"].String(), Matches, `.*map\[FROBIZ:bilbo\]\n`)
}
type ArvMountCmdLine struct {
}
func (s *TestSuite) TestSetupMounts(c *C) {
- api := &ArvTestClient{}
- kc := &KeepTestClient{}
- defer kc.Close()
- cr, err := NewContainerRunner(s.client, api, kc, nil, "zzzzz-zzzzz-zzzzzzzzzzzzzzz")
- c.Assert(err, IsNil)
+ cr := s.runner
am := &ArvMountCmdLine{}
cr.RunArvMount = am.ArvMountTest
cr.ContainerArvClient = &ArvTestClient{}
cr.ContainerKeepClient = &KeepTestClient{}
+ cr.Container.OutputStorageClasses = []string{"default"}
- realTemp, err := ioutil.TempDir("", "crunchrun_test1-")
- c.Assert(err, IsNil)
- certTemp, err := ioutil.TempDir("", "crunchrun_test2-")
- c.Assert(err, IsNil)
+ realTemp := c.MkDir()
+ certTemp := c.MkDir()
stubCertPath := stubCert(certTemp)
-
cr.parentTemp = realTemp
- defer os.RemoveAll(realTemp)
- defer os.RemoveAll(certTemp)
-
i := 0
cr.MkTempDir = func(_ string, prefix string) (string, error) {
i++
cr.Container.Mounts["/tmp"] = arvados.Mount{Kind: "tmp"}
cr.Container.OutputPath = "/tmp"
cr.statInterval = 5 * time.Second
- err := cr.SetupMounts()
+ bindmounts, err := cr.SetupMounts()
c.Check(err, IsNil)
- c.Check(am.Cmd, DeepEquals, []string{"--foreground", "--allow-other",
- "--read-write", "--crunchstat-interval=5",
- "--mount-by-pdh", "by_id", realTemp + "/keep1"})
- c.Check(cr.Binds, DeepEquals, []string{realTemp + "/tmp2:/tmp"})
+ c.Check(am.Cmd, DeepEquals, []string{"arv-mount", "--foreground",
+ "--read-write", "--storage-classes", "default", "--crunchstat-interval=5",
+ "--mount-by-pdh", "by_id", "--disable-event-listening", "--mount-by-id", "by_uuid", realTemp + "/keep1"})
+ c.Check(bindmounts, DeepEquals, map[string]bindmount{"/tmp": {realTemp + "/tmp2", false}})
os.RemoveAll(cr.ArvMountPoint)
cr.CleanupDirs()
checkEmpty()
cr.Container.Mounts["/out"] = arvados.Mount{Kind: "tmp"}
cr.Container.Mounts["/tmp"] = arvados.Mount{Kind: "tmp"}
cr.Container.OutputPath = "/out"
+ cr.Container.OutputStorageClasses = []string{"foo", "bar"}
- err := cr.SetupMounts()
+ bindmounts, err := cr.SetupMounts()
c.Check(err, IsNil)
- c.Check(am.Cmd, DeepEquals, []string{"--foreground", "--allow-other",
- "--read-write", "--crunchstat-interval=5",
- "--mount-by-pdh", "by_id", realTemp + "/keep1"})
- c.Check(cr.Binds, DeepEquals, []string{realTemp + "/tmp2:/out", realTemp + "/tmp3:/tmp"})
+ c.Check(am.Cmd, DeepEquals, []string{"arv-mount", "--foreground",
+ "--read-write", "--storage-classes", "foo,bar", "--crunchstat-interval=5",
+ "--mount-by-pdh", "by_id", "--disable-event-listening", "--mount-by-id", "by_uuid", realTemp + "/keep1"})
+ c.Check(bindmounts, DeepEquals, map[string]bindmount{"/out": {realTemp + "/tmp2", false}, "/tmp": {realTemp + "/tmp3", false}})
os.RemoveAll(cr.ArvMountPoint)
cr.CleanupDirs()
checkEmpty()
cr.Container.Mounts["/tmp"] = arvados.Mount{Kind: "tmp"}
cr.Container.OutputPath = "/tmp"
cr.Container.RuntimeConstraints.API = true
+ cr.Container.OutputStorageClasses = []string{"default"}
- err := cr.SetupMounts()
+ bindmounts, err := cr.SetupMounts()
c.Check(err, IsNil)
- c.Check(am.Cmd, DeepEquals, []string{"--foreground", "--allow-other",
- "--read-write", "--crunchstat-interval=5",
- "--mount-by-pdh", "by_id", realTemp + "/keep1"})
- c.Check(cr.Binds, DeepEquals, []string{realTemp + "/tmp2:/tmp", stubCertPath + ":/etc/arvados/ca-certificates.crt:ro"})
+ c.Check(am.Cmd, DeepEquals, []string{"arv-mount", "--foreground",
+ "--read-write", "--storage-classes", "default", "--crunchstat-interval=5",
+ "--mount-by-pdh", "by_id", "--disable-event-listening", "--mount-by-id", "by_uuid", realTemp + "/keep1"})
+ c.Check(bindmounts, DeepEquals, map[string]bindmount{"/tmp": {realTemp + "/tmp2", false}, "/etc/arvados/ca-certificates.crt": {stubCertPath, true}})
os.RemoveAll(cr.ArvMountPoint)
cr.CleanupDirs()
checkEmpty()
os.MkdirAll(realTemp+"/keep1/tmp0", os.ModePerm)
- err := cr.SetupMounts()
+ bindmounts, err := cr.SetupMounts()
c.Check(err, IsNil)
- c.Check(am.Cmd, DeepEquals, []string{"--foreground", "--allow-other",
- "--read-write", "--crunchstat-interval=5",
- "--mount-tmp", "tmp0", "--mount-by-pdh", "by_id", realTemp + "/keep1"})
- c.Check(cr.Binds, DeepEquals, []string{realTemp + "/keep1/tmp0:/keeptmp"})
+ c.Check(am.Cmd, DeepEquals, []string{"arv-mount", "--foreground",
+ "--read-write", "--storage-classes", "default", "--crunchstat-interval=5",
+ "--mount-tmp", "tmp0", "--mount-by-pdh", "by_id", "--disable-event-listening", "--mount-by-id", "by_uuid", realTemp + "/keep1"})
+ c.Check(bindmounts, DeepEquals, map[string]bindmount{"/keeptmp": {realTemp + "/keep1/tmp0", false}})
os.RemoveAll(cr.ArvMountPoint)
cr.CleanupDirs()
checkEmpty()
os.MkdirAll(realTemp+"/keep1/by_id/59389a8f9ee9d399be35462a0f92541c+53", os.ModePerm)
os.MkdirAll(realTemp+"/keep1/tmp0", os.ModePerm)
- err := cr.SetupMounts()
+ bindmounts, err := cr.SetupMounts()
c.Check(err, IsNil)
- c.Check(am.Cmd, DeepEquals, []string{"--foreground", "--allow-other",
- "--read-write", "--crunchstat-interval=5",
- "--mount-tmp", "tmp0", "--mount-by-pdh", "by_id", realTemp + "/keep1"})
- sort.StringSlice(cr.Binds).Sort()
- c.Check(cr.Binds, DeepEquals, []string{realTemp + "/keep1/by_id/59389a8f9ee9d399be35462a0f92541c+53:/keepinp:ro",
- realTemp + "/keep1/tmp0:/keepout"})
+ c.Check(am.Cmd, DeepEquals, []string{"arv-mount", "--foreground",
+ "--read-write", "--storage-classes", "default", "--crunchstat-interval=5",
+ "--mount-tmp", "tmp0", "--mount-by-pdh", "by_id", "--disable-event-listening", "--mount-by-id", "by_uuid", realTemp + "/keep1"})
+ c.Check(bindmounts, DeepEquals, map[string]bindmount{
+ "/keepinp": {realTemp + "/keep1/by_id/59389a8f9ee9d399be35462a0f92541c+53", true},
+ "/keepout": {realTemp + "/keep1/tmp0", false},
+ })
os.RemoveAll(cr.ArvMountPoint)
cr.CleanupDirs()
checkEmpty()
os.MkdirAll(realTemp+"/keep1/by_id/59389a8f9ee9d399be35462a0f92541c+53", os.ModePerm)
os.MkdirAll(realTemp+"/keep1/tmp0", os.ModePerm)
- err := cr.SetupMounts()
+ bindmounts, err := cr.SetupMounts()
c.Check(err, IsNil)
- c.Check(am.Cmd, DeepEquals, []string{"--foreground", "--allow-other",
- "--read-write", "--crunchstat-interval=5",
- "--file-cache", "512", "--mount-tmp", "tmp0", "--mount-by-pdh", "by_id", realTemp + "/keep1"})
- sort.StringSlice(cr.Binds).Sort()
- c.Check(cr.Binds, DeepEquals, []string{realTemp + "/keep1/by_id/59389a8f9ee9d399be35462a0f92541c+53:/keepinp:ro",
- realTemp + "/keep1/tmp0:/keepout"})
+ c.Check(am.Cmd, DeepEquals, []string{"arv-mount", "--foreground",
+ "--read-write", "--storage-classes", "default", "--crunchstat-interval=5",
+ "--file-cache", "512", "--mount-tmp", "tmp0", "--mount-by-pdh", "by_id", "--disable-event-listening", "--mount-by-id", "by_uuid", realTemp + "/keep1"})
+ c.Check(bindmounts, DeepEquals, map[string]bindmount{
+ "/keepinp": {realTemp + "/keep1/by_id/59389a8f9ee9d399be35462a0f92541c+53", true},
+ "/keepout": {realTemp + "/keep1/tmp0", false},
+ })
os.RemoveAll(cr.ArvMountPoint)
cr.CleanupDirs()
checkEmpty()
cr.Container.Mounts = map[string]arvados.Mount{
"/mnt/test.json": {Kind: "json", Content: test.in},
}
- err := cr.SetupMounts()
+ bindmounts, err := cr.SetupMounts()
c.Check(err, IsNil)
- sort.StringSlice(cr.Binds).Sort()
- c.Check(cr.Binds, DeepEquals, []string{realTemp + "/json2/mountdata.json:/mnt/test.json:ro"})
+ c.Check(bindmounts, DeepEquals, map[string]bindmount{
+ "/mnt/test.json": {realTemp + "/json2/mountdata.json", true},
+ })
content, err := ioutil.ReadFile(realTemp + "/json2/mountdata.json")
c.Check(err, IsNil)
c.Check(content, DeepEquals, []byte(test.out))
cr.Container.Mounts = map[string]arvados.Mount{
"/mnt/test.txt": {Kind: "text", Content: test.in},
}
- err := cr.SetupMounts()
+ bindmounts, err := cr.SetupMounts()
if test.out == "error" {
c.Check(err.Error(), Equals, "content for mount \"/mnt/test.txt\" must be a string")
} else {
c.Check(err, IsNil)
- sort.StringSlice(cr.Binds).Sort()
- c.Check(cr.Binds, DeepEquals, []string{realTemp + "/text2/mountdata.text:/mnt/test.txt:ro"})
+ c.Check(bindmounts, DeepEquals, map[string]bindmount{
+ "/mnt/test.txt": {realTemp + "/text2/mountdata.text", true},
+ })
content, err := ioutil.ReadFile(realTemp + "/text2/mountdata.text")
c.Check(err, IsNil)
c.Check(content, DeepEquals, []byte(test.out))
os.MkdirAll(realTemp+"/keep1/tmp0", os.ModePerm)
- err := cr.SetupMounts()
+ bindmounts, err := cr.SetupMounts()
c.Check(err, IsNil)
- c.Check(am.Cmd, DeepEquals, []string{"--foreground", "--allow-other",
- "--read-write", "--crunchstat-interval=5",
- "--file-cache", "512", "--mount-tmp", "tmp0", "--mount-by-pdh", "by_id", realTemp + "/keep1"})
- c.Check(cr.Binds, DeepEquals, []string{realTemp + "/tmp2:/tmp", realTemp + "/keep1/tmp0:/tmp/foo:ro"})
+ c.Check(am.Cmd, DeepEquals, []string{"arv-mount", "--foreground",
+ "--read-write", "--storage-classes", "default", "--crunchstat-interval=5",
+ "--file-cache", "512", "--mount-tmp", "tmp0", "--mount-by-pdh", "by_id", "--disable-event-listening", "--mount-by-id", "by_uuid", realTemp + "/keep1"})
+ c.Check(bindmounts, DeepEquals, map[string]bindmount{
+ "/tmp": {realTemp + "/tmp2", false},
+ "/tmp/foo": {realTemp + "/keep1/tmp0", true},
+ })
os.RemoveAll(cr.ArvMountPoint)
cr.CleanupDirs()
checkEmpty()
rf.Write([]byte("bar"))
rf.Close()
- err := cr.SetupMounts()
+ _, err := cr.SetupMounts()
c.Check(err, IsNil)
_, err = os.Stat(cr.HostOutputDir + "/foo")
c.Check(err, IsNil)
}
cr.Container.OutputPath = "/tmp"
- err := cr.SetupMounts()
+ _, err := cr.SetupMounts()
c.Check(err, NotNil)
c.Check(err, ErrorMatches, `only mount points of kind 'collection', 'text' or 'json' are supported underneath the output_path.*`)
os.RemoveAll(cr.ArvMountPoint)
"stdin": {Kind: "tmp"},
}
- err := cr.SetupMounts()
+ _, err := cr.SetupMounts()
c.Check(err, NotNil)
c.Check(err, ErrorMatches, `unsupported mount kind 'tmp' for stdin.*`)
os.RemoveAll(cr.ArvMountPoint)
}
cr.Container.OutputPath = "/tmp"
- err := cr.SetupMounts()
+ bindmounts, err := cr.SetupMounts()
c.Check(err, IsNil)
- // dirMap[mountpoint] == tmpdir
- dirMap := make(map[string]string)
- for _, bind := range cr.Binds {
- tokens := strings.Split(bind, ":")
- dirMap[tokens[1]] = tokens[0]
-
- if cr.Container.Mounts[tokens[1]].Writable {
- c.Check(len(tokens), Equals, 2)
- } else {
- c.Check(len(tokens), Equals, 3)
- c.Check(tokens[2], Equals, "ro")
- }
+ for path, mount := range bindmounts {
+ c.Check(mount.ReadOnly, Equals, !cr.Container.Mounts[path].Writable, Commentf("%s %#v", path, mount))
}
- data, err := ioutil.ReadFile(dirMap["/tip"] + "/dir1/dir2/file with mode 0644")
+ data, err := ioutil.ReadFile(bindmounts["/tip"].HostPath + "/dir1/dir2/file with mode 0644")
c.Check(err, IsNil)
c.Check(string(data), Equals, "\000\001\002\003")
- _, err = ioutil.ReadFile(dirMap["/tip"] + "/file only on testbranch")
+ _, err = ioutil.ReadFile(bindmounts["/tip"].HostPath + "/file only on testbranch")
c.Check(err, FitsTypeOf, &os.PathError{})
c.Check(os.IsNotExist(err), Equals, true)
- data, err = ioutil.ReadFile(dirMap["/non-tip"] + "/dir1/dir2/file with mode 0644")
+ data, err = ioutil.ReadFile(bindmounts["/non-tip"].HostPath + "/dir1/dir2/file with mode 0644")
c.Check(err, IsNil)
c.Check(string(data), Equals, "\000\001\002\003")
- data, err = ioutil.ReadFile(dirMap["/non-tip"] + "/file only on testbranch")
+ data, err = ioutil.ReadFile(bindmounts["/non-tip"].HostPath + "/file only on testbranch")
c.Check(err, IsNil)
c.Check(string(data), Equals, "testfile\n")
func (s *TestSuite) TestStdout(c *C) {
helperRecord := `{
"command": ["/bin/sh", "-c", "echo $FROBIZ"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "` + arvadostest.DockerImage112PDH + `",
"cwd": "/bin",
"environment": {"FROBIZ": "bilbo"},
"mounts": {"/tmp": {"kind": "tmp"}, "stdout": {"kind": "file", "path": "/tmp/a/b/c.out"} },
"state": "Locked"
}`
- api, cr, _ := s.fullRunHelper(c, helperRecord, nil, 0, func(t *TestDockerClient) {
- t.logWriter.Write(dockerLog(1, t.env[0][7:]+"\n"))
- t.logWriter.Close()
+ s.fullRunHelper(c, helperRecord, nil, 0, func() {
+ fmt.Fprintln(s.executor.created.Stdout, s.executor.created.Env["FROBIZ"])
})
- c.Check(api.CalledWith("container.exit_code", 0), NotNil)
- c.Check(api.CalledWith("container.state", "Complete"), NotNil)
- c.Check(cr.ContainerArvClient.(*ArvTestClient).CalledWith("collection.manifest_text", "./a/b 307372fa8fd5c146b22ae7a45b49bc31+6 0:6:c.out\n"), NotNil)
+ c.Check(s.api.CalledWith("container.exit_code", 0), NotNil)
+ c.Check(s.api.CalledWith("container.state", "Complete"), NotNil)
+ c.Check(s.runner.ContainerArvClient.(*ArvTestClient).CalledWith("collection.manifest_text", "./a/b 307372fa8fd5c146b22ae7a45b49bc31+6 0:6:c.out\n"), NotNil)
}
// Used by the TestStdoutWithWrongPath*()
-func (s *TestSuite) stdoutErrorRunHelper(c *C, record string, fn func(t *TestDockerClient)) (api *ArvTestClient, cr *ContainerRunner, err error) {
- rec := arvados.Container{}
- err = json.Unmarshal([]byte(record), &rec)
- c.Check(err, IsNil)
-
- s.docker.fn = fn
- s.docker.ImageRemove(nil, hwImageID, dockertypes.ImageRemoveOptions{})
-
- api = &ArvTestClient{Container: rec}
- kc := &KeepTestClient{}
- defer kc.Close()
- cr, err = NewContainerRunner(s.client, api, kc, s.docker, "zzzzz-zzzzz-zzzzzzzzzzzzzzz")
+func (s *TestSuite) stdoutErrorRunHelper(c *C, record string, fn func()) (*ArvTestClient, *ContainerRunner, error) {
+ err := json.Unmarshal([]byte(record), &s.api.Container)
c.Assert(err, IsNil)
- am := &ArvMountCmdLine{}
- cr.RunArvMount = am.ArvMountTest
- cr.MkArvClient = func(token string) (IArvadosClient, IKeepClient, *arvados.Client, error) {
- return &ArvTestClient{}, &KeepTestClient{}, nil, nil
+ s.executor.runFunc = fn
+ s.runner.RunArvMount = (&ArvMountCmdLine{}).ArvMountTest
+ s.runner.MkArvClient = func(token string) (IArvadosClient, IKeepClient, *arvados.Client, error) {
+ return s.api, &KeepTestClient{}, nil, nil
}
-
- err = cr.Run()
- return
+ return s.api, s.runner, s.runner.Run()
}
func (s *TestSuite) TestStdoutWithWrongPath(c *C) {
"mounts": {"/tmp": {"kind": "tmp"}, "stdout": {"kind": "file", "path":"/tmpa.out"} },
"output_path": "/tmp",
"state": "Locked"
-}`, func(t *TestDockerClient) {})
-
- c.Check(err, NotNil)
- c.Check(strings.Contains(err.Error(), "Stdout path does not start with OutputPath"), Equals, true)
+}`, func() {})
+ c.Check(err, ErrorMatches, ".*Stdout path does not start with OutputPath.*")
}
func (s *TestSuite) TestStdoutWithWrongKindTmp(c *C) {
"mounts": {"/tmp": {"kind": "tmp"}, "stdout": {"kind": "tmp", "path":"/tmp/a.out"} },
"output_path": "/tmp",
"state": "Locked"
-}`, func(t *TestDockerClient) {})
-
- c.Check(err, NotNil)
- c.Check(strings.Contains(err.Error(), "unsupported mount kind 'tmp' for stdout"), Equals, true)
+}`, func() {})
+ c.Check(err, ErrorMatches, ".*unsupported mount kind 'tmp' for stdout.*")
}
func (s *TestSuite) TestStdoutWithWrongKindCollection(c *C) {
"mounts": {"/tmp": {"kind": "tmp"}, "stdout": {"kind": "collection", "path":"/tmp/a.out"} },
"output_path": "/tmp",
"state": "Locked"
-}`, func(t *TestDockerClient) {})
-
- c.Check(err, NotNil)
- c.Check(strings.Contains(err.Error(), "unsupported mount kind 'collection' for stdout"), Equals, true)
+}`, func() {})
+ c.Check(err, ErrorMatches, ".*unsupported mount kind 'collection' for stdout.*")
}
func (s *TestSuite) TestFullRunWithAPI(c *C) {
- defer os.Setenv("ARVADOS_API_HOST", os.Getenv("ARVADOS_API_HOST"))
- os.Setenv("ARVADOS_API_HOST", "test.arvados.org")
- api, _, _ := s.fullRunHelper(c, `{
- "command": ["/bin/sh", "-c", "echo $ARVADOS_API_HOST"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ s.fullRunHelper(c, `{
+ "command": ["/bin/sh", "-c", "true $ARVADOS_API_HOST"],
+ "container_image": "`+arvadostest.DockerImage112PDH+`",
"cwd": "/bin",
"environment": {},
"mounts": {"/tmp": {"kind": "tmp"} },
"priority": 1,
"runtime_constraints": {"API": true},
"state": "Locked"
-}`, nil, 0, func(t *TestDockerClient) {
- t.logWriter.Write(dockerLog(1, t.env[1][17:]+"\n"))
- t.logWriter.Close()
+}`, nil, 0, func() {
+ c.Check(s.executor.created.Env["ARVADOS_API_HOST"], Equals, os.Getenv("ARVADOS_API_HOST"))
+ s.executor.exit <- 3
})
-
- c.Check(api.CalledWith("container.exit_code", 0), NotNil)
- c.Check(api.CalledWith("container.state", "Complete"), NotNil)
- c.Check(strings.HasSuffix(api.Logs["stdout"].String(), "test.arvados.org\n"), Equals, true)
- c.Check(api.CalledWith("container.output", "d41d8cd98f00b204e9800998ecf8427e+0"), NotNil)
+ c.Check(s.api.CalledWith("container.exit_code", 3), NotNil)
+ c.Check(s.api.CalledWith("container.state", "Complete"), NotNil)
}
func (s *TestSuite) TestFullRunSetOutput(c *C) {
defer os.Setenv("ARVADOS_API_HOST", os.Getenv("ARVADOS_API_HOST"))
os.Setenv("ARVADOS_API_HOST", "test.arvados.org")
- api, _, _ := s.fullRunHelper(c, `{
+ s.fullRunHelper(c, `{
"command": ["/bin/sh", "-c", "echo $ARVADOS_API_HOST"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "`+arvadostest.DockerImage112PDH+`",
"cwd": "/bin",
"environment": {},
"mounts": {"/tmp": {"kind": "tmp"} },
"priority": 1,
"runtime_constraints": {"API": true},
"state": "Locked"
-}`, nil, 0, func(t *TestDockerClient) {
- t.api.Container.Output = "d4ab34d3d4f8a72f5c4973051ae69fab+122"
- t.logWriter.Close()
+}`, nil, 0, func() {
+ s.api.Container.Output = arvadostest.DockerImage112PDH
})
- c.Check(api.CalledWith("container.exit_code", 0), NotNil)
- c.Check(api.CalledWith("container.state", "Complete"), NotNil)
- c.Check(api.CalledWith("container.output", "d4ab34d3d4f8a72f5c4973051ae69fab+122"), NotNil)
+ c.Check(s.api.CalledWith("container.exit_code", 0), NotNil)
+ c.Check(s.api.CalledWith("container.state", "Complete"), NotNil)
+ c.Check(s.api.CalledWith("container.output", arvadostest.DockerImage112PDH), NotNil)
+}
+
+func (s *TestSuite) TestArvMountRuntimeStatusWarning(c *C) {
+ s.runner.RunArvMount = func([]string, string) (*exec.Cmd, error) {
+ os.Mkdir(s.runner.ArvMountPoint+"/by_id", 0666)
+ ioutil.WriteFile(s.runner.ArvMountPoint+"/by_id/README", nil, 0666)
+ return s.runner.ArvMountCmd([]string{"bash", "-c", "echo >&2 Test: Keep write error: I am a teapot; sleep 3"}, "")
+ }
+ s.executor.runFunc = func() {
+ time.Sleep(time.Second)
+ s.executor.exit <- 0
+ }
+ record := `{
+ "command": ["sleep", "1"],
+ "container_image": "` + arvadostest.DockerImage112PDH + `",
+ "cwd": "/bin",
+ "environment": {},
+ "mounts": {"/tmp": {"kind": "tmp"} },
+ "output_path": "/tmp",
+ "priority": 1,
+ "runtime_constraints": {"API": true},
+ "state": "Locked"
+}`
+ err := json.Unmarshal([]byte(record), &s.api.Container)
+ c.Assert(err, IsNil)
+ err = s.runner.Run()
+ c.Assert(err, IsNil)
+ c.Check(s.api.CalledWith("container.exit_code", 0), NotNil)
+ c.Check(s.api.CalledWith("container.runtime_status.warning", "arv-mount: Keep write error"), NotNil)
+ c.Check(s.api.CalledWith("container.runtime_status.warningDetail", "Test: Keep write error: I am a teapot"), NotNil)
+ c.Check(s.api.CalledWith("container.state", "Complete"), NotNil)
}
func (s *TestSuite) TestStdoutWithExcludeFromOutputMountPointUnderOutputDir(c *C) {
helperRecord := `{
"command": ["/bin/sh", "-c", "echo $FROBIZ"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "` + arvadostest.DockerImage112PDH + `",
"cwd": "/bin",
"environment": {"FROBIZ": "bilbo"},
"mounts": {
extraMounts := []string{"a3e8f74c6f101eae01fa08bfb4e49b3a+54"}
- api, cr, _ := s.fullRunHelper(c, helperRecord, extraMounts, 0, func(t *TestDockerClient) {
- t.logWriter.Write(dockerLog(1, t.env[0][7:]+"\n"))
- t.logWriter.Close()
+ s.fullRunHelper(c, helperRecord, extraMounts, 0, func() {
+ fmt.Fprintln(s.executor.created.Stdout, s.executor.created.Env["FROBIZ"])
})
- c.Check(api.CalledWith("container.exit_code", 0), NotNil)
- c.Check(api.CalledWith("container.state", "Complete"), NotNil)
- c.Check(cr.ContainerArvClient.(*ArvTestClient).CalledWith("collection.manifest_text", "./a/b 307372fa8fd5c146b22ae7a45b49bc31+6 0:6:c.out\n"), NotNil)
+ c.Check(s.api.CalledWith("container.exit_code", 0), NotNil)
+ c.Check(s.api.CalledWith("container.state", "Complete"), NotNil)
+ c.Check(s.runner.ContainerArvClient.(*ArvTestClient).CalledWith("collection.manifest_text", "./a/b 307372fa8fd5c146b22ae7a45b49bc31+6 0:6:c.out\n"), NotNil)
}
func (s *TestSuite) TestStdoutWithMultipleMountPointsUnderOutputDir(c *C) {
helperRecord := `{
"command": ["/bin/sh", "-c", "echo $FROBIZ"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "` + arvadostest.DockerImage112PDH + `",
"cwd": "/bin",
"environment": {"FROBIZ": "bilbo"},
"mounts": {
"a0def87f80dd594d4675809e83bd4f15+367/subdir1/subdir2/file2_in_subdir2.txt",
}
- api, runner, realtemp := s.fullRunHelper(c, helperRecord, extraMounts, 0, func(t *TestDockerClient) {
- t.logWriter.Write(dockerLog(1, t.env[0][7:]+"\n"))
- t.logWriter.Close()
+ api, _, realtemp := s.fullRunHelper(c, helperRecord, extraMounts, 0, func() {
+ fmt.Fprintln(s.executor.created.Stdout, s.executor.created.Env["FROBIZ"])
})
- c.Check(runner.Binds, DeepEquals, []string{realtemp + "/tmp2:/tmp",
- realtemp + "/keep1/by_id/a0def87f80dd594d4675809e83bd4f15+367/file2_in_main.txt:/tmp/foo/bar:ro",
- realtemp + "/keep1/by_id/a0def87f80dd594d4675809e83bd4f15+367/subdir1/subdir2/file2_in_subdir2.txt:/tmp/foo/baz/sub2file2:ro",
- realtemp + "/keep1/by_id/a0def87f80dd594d4675809e83bd4f15+367/subdir1:/tmp/foo/sub1:ro",
- realtemp + "/keep1/by_id/a0def87f80dd594d4675809e83bd4f15+367/subdir1/file2_in_subdir1.txt:/tmp/foo/sub1file2:ro",
+ c.Check(s.executor.created.BindMounts, DeepEquals, map[string]bindmount{
+ "/tmp": {realtemp + "/tmp1", false},
+ "/tmp/foo/bar": {s.keepmount + "/by_id/a0def87f80dd594d4675809e83bd4f15+367/file2_in_main.txt", true},
+ "/tmp/foo/baz/sub2file2": {s.keepmount + "/by_id/a0def87f80dd594d4675809e83bd4f15+367/subdir1/subdir2/file2_in_subdir2.txt", true},
+ "/tmp/foo/sub1": {s.keepmount + "/by_id/a0def87f80dd594d4675809e83bd4f15+367/subdir1", true},
+ "/tmp/foo/sub1file2": {s.keepmount + "/by_id/a0def87f80dd594d4675809e83bd4f15+367/subdir1/file2_in_subdir1.txt", true},
})
c.Check(api.CalledWith("container.exit_code", 0), NotNil)
func (s *TestSuite) TestStdoutWithMountPointsUnderOutputDirDenormalizedManifest(c *C) {
helperRecord := `{
"command": ["/bin/sh", "-c", "echo $FROBIZ"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "` + arvadostest.DockerImage112PDH + `",
"cwd": "/bin",
"environment": {"FROBIZ": "bilbo"},
"mounts": {
"b0def87f80dd594d4675809e83bd4f15+367/subdir1/file2_in_subdir1.txt",
}
- api, _, _ := s.fullRunHelper(c, helperRecord, extraMounts, 0, func(t *TestDockerClient) {
- t.logWriter.Write(dockerLog(1, t.env[0][7:]+"\n"))
- t.logWriter.Close()
+ s.fullRunHelper(c, helperRecord, extraMounts, 0, func() {
+ fmt.Fprintln(s.executor.created.Stdout, s.executor.created.Env["FROBIZ"])
})
- c.Check(api.CalledWith("container.exit_code", 0), NotNil)
- c.Check(api.CalledWith("container.state", "Complete"), NotNil)
- for _, v := range api.Content {
+ c.Check(s.api.CalledWith("container.exit_code", 0), NotNil)
+ c.Check(s.api.CalledWith("container.state", "Complete"), NotNil)
+ for _, v := range s.api.Content {
if v["collection"] != nil {
collection := v["collection"].(arvadosclient.Dict)
if strings.Index(collection["name"].(string), "output") == 0 {
func (s *TestSuite) TestOutputError(c *C) {
helperRecord := `{
"command": ["/bin/sh", "-c", "echo $FROBIZ"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "` + arvadostest.DockerImage112PDH + `",
"cwd": "/bin",
"environment": {"FROBIZ": "bilbo"},
"mounts": {
"runtime_constraints": {},
"state": "Locked"
}`
-
- extraMounts := []string{}
-
- api, _, _ := s.fullRunHelper(c, helperRecord, extraMounts, 0, func(t *TestDockerClient) {
- os.Symlink("/etc/hosts", t.realTemp+"/tmp2/baz")
- t.logWriter.Close()
+ s.fullRunHelper(c, helperRecord, nil, 0, func() {
+ os.Symlink("/etc/hosts", s.runner.HostOutputDir+"/baz")
})
- c.Check(api.CalledWith("container.state", "Cancelled"), NotNil)
+ c.Check(s.api.CalledWith("container.state", "Cancelled"), NotNil)
}
func (s *TestSuite) TestStdinCollectionMountPoint(c *C) {
helperRecord := `{
"command": ["/bin/sh", "-c", "echo $FROBIZ"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "` + arvadostest.DockerImage112PDH + `",
"cwd": "/bin",
"environment": {"FROBIZ": "bilbo"},
"mounts": {
"b0def87f80dd594d4675809e83bd4f15+367/file1_in_main.txt",
}
- api, _, _ := s.fullRunHelper(c, helperRecord, extraMounts, 0, func(t *TestDockerClient) {
- t.logWriter.Write(dockerLog(1, t.env[0][7:]+"\n"))
- t.logWriter.Close()
+ api, _, _ := s.fullRunHelper(c, helperRecord, extraMounts, 0, func() {
+ fmt.Fprintln(s.executor.created.Stdout, s.executor.created.Env["FROBIZ"])
})
c.Check(api.CalledWith("container.exit_code", 0), NotNil)
func (s *TestSuite) TestStdinJsonMountPoint(c *C) {
helperRecord := `{
"command": ["/bin/sh", "-c", "echo $FROBIZ"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "` + arvadostest.DockerImage112PDH + `",
"cwd": "/bin",
"environment": {"FROBIZ": "bilbo"},
"mounts": {
"state": "Locked"
}`
- api, _, _ := s.fullRunHelper(c, helperRecord, nil, 0, func(t *TestDockerClient) {
- t.logWriter.Write(dockerLog(1, t.env[0][7:]+"\n"))
- t.logWriter.Close()
+ api, _, _ := s.fullRunHelper(c, helperRecord, nil, 0, func() {
+ fmt.Fprintln(s.executor.created.Stdout, s.executor.created.Env["FROBIZ"])
})
c.Check(api.CalledWith("container.exit_code", 0), NotNil)
func (s *TestSuite) TestStderrMount(c *C) {
api, cr, _ := s.fullRunHelper(c, `{
"command": ["/bin/sh", "-c", "echo hello;exit 1"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "`+arvadostest.DockerImage112PDH+`",
"cwd": ".",
"environment": {},
"mounts": {"/tmp": {"kind": "tmp"},
"priority": 1,
"runtime_constraints": {},
"state": "Locked"
-}`, nil, 1, func(t *TestDockerClient) {
- t.logWriter.Write(dockerLog(1, "hello\n"))
- t.logWriter.Write(dockerLog(2, "oops\n"))
- t.logWriter.Close()
+}`, nil, 1, func() {
+ fmt.Fprintln(s.executor.created.Stdout, "hello")
+ fmt.Fprintln(s.executor.created.Stderr, "oops")
})
final := api.CalledWith("container.state", "Complete")
}
func (s *TestSuite) TestNumberRoundTrip(c *C) {
- kc := &KeepTestClient{}
- defer kc.Close()
- cr, err := NewContainerRunner(s.client, &ArvTestClient{callraw: true}, kc, nil, "zzzzz-zzzzz-zzzzzzzzzzzzzzz")
+ s.api.callraw = true
+ err := s.runner.fetchContainerRecord()
c.Assert(err, IsNil)
- cr.fetchContainerRecord()
-
- jsondata, err := json.Marshal(cr.Container.Mounts["/json"].Content)
-
+ jsondata, err := json.Marshal(s.runner.Container.Mounts["/json"].Content)
+ c.Logf("%#v", s.runner.Container)
c.Check(err, IsNil)
c.Check(string(jsondata), Equals, `{"number":123456789123456789}`)
}
-func (s *TestSuite) TestFullBrokenDocker1(c *C) {
- tf, err := ioutil.TempFile("", "brokenNodeHook-")
- c.Assert(err, IsNil)
- defer os.Remove(tf.Name())
-
- tf.Write([]byte(`#!/bin/sh
-exec echo killme
-`))
- tf.Close()
- os.Chmod(tf.Name(), 0700)
-
- ech := tf.Name()
- brokenNodeHook = &ech
-
- api, _, _ := s.fullRunHelper(c, `{
- "command": ["echo", "hello world"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
- "cwd": ".",
- "environment": {},
- "mounts": {"/tmp": {"kind": "tmp"} },
- "output_path": "/tmp",
- "priority": 1,
- "runtime_constraints": {},
- "state": "Locked"
-}`, nil, 2, func(t *TestDockerClient) {
- t.logWriter.Write(dockerLog(1, "hello world\n"))
- t.logWriter.Close()
- })
-
- c.Check(api.CalledWith("container.state", "Queued"), NotNil)
- c.Check(api.Logs["crunch-run"].String(), Matches, "(?ms).*unable to run containers.*")
- c.Check(api.Logs["crunch-run"].String(), Matches, "(?ms).*Running broken node hook.*")
- c.Check(api.Logs["crunch-run"].String(), Matches, "(?ms).*killme.*")
-
-}
-
-func (s *TestSuite) TestFullBrokenDocker2(c *C) {
- ech := ""
- brokenNodeHook = &ech
-
- api, _, _ := s.fullRunHelper(c, `{
- "command": ["echo", "hello world"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
- "cwd": ".",
- "environment": {},
- "mounts": {"/tmp": {"kind": "tmp"} },
- "output_path": "/tmp",
- "priority": 1,
- "runtime_constraints": {},
- "state": "Locked"
-}`, nil, 2, func(t *TestDockerClient) {
- t.logWriter.Write(dockerLog(1, "hello world\n"))
- t.logWriter.Close()
- })
-
- c.Check(api.CalledWith("container.state", "Queued"), NotNil)
- c.Check(api.Logs["crunch-run"].String(), Matches, "(?ms).*unable to run containers.*")
- c.Check(api.Logs["crunch-run"].String(), Matches, "(?ms).*Writing /var/lock/crunch-run-broken to mark node as broken.*")
-}
-
-func (s *TestSuite) TestFullBrokenDocker3(c *C) {
- ech := ""
- brokenNodeHook = &ech
-
- api, _, _ := s.fullRunHelper(c, `{
- "command": ["echo", "hello world"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
- "cwd": ".",
- "environment": {},
- "mounts": {"/tmp": {"kind": "tmp"} },
- "output_path": "/tmp",
- "priority": 1,
- "runtime_constraints": {},
- "state": "Locked"
-}`, nil, 3, func(t *TestDockerClient) {
- t.logWriter.Write(dockerLog(1, "hello world\n"))
- t.logWriter.Close()
- })
-
- c.Check(api.CalledWith("container.state", "Cancelled"), NotNil)
- c.Check(api.Logs["crunch-run"].String(), Matches, "(?ms).*unable to run containers.*")
-}
-
-func (s *TestSuite) TestBadCommand1(c *C) {
- ech := ""
- brokenNodeHook = &ech
-
- api, _, _ := s.fullRunHelper(c, `{
- "command": ["echo", "hello world"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
- "cwd": ".",
- "environment": {},
- "mounts": {"/tmp": {"kind": "tmp"} },
- "output_path": "/tmp",
- "priority": 1,
- "runtime_constraints": {},
- "state": "Locked"
-}`, nil, 4, func(t *TestDockerClient) {
- t.logWriter.Write(dockerLog(1, "hello world\n"))
- t.logWriter.Close()
- })
-
- c.Check(api.CalledWith("container.state", "Cancelled"), NotNil)
- c.Check(api.Logs["crunch-run"].String(), Matches, "(?ms).*Possible causes:.*is missing.*")
-}
-
-func (s *TestSuite) TestBadCommand2(c *C) {
- ech := ""
- brokenNodeHook = &ech
-
- api, _, _ := s.fullRunHelper(c, `{
+func (s *TestSuite) TestFullBrokenDocker(c *C) {
+ nextState := ""
+ for _, setup := range []func(){
+ func() {
+			c.Log("// waitErr = oci runtime error")
+ s.executor.waitErr = errors.New(`Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:359: container init caused \"rootfs_linux.go:54: mounting \\\"/tmp/keep453790790/by_id/99999999999999999999999999999999+99999/myGenome\\\" to rootfs \\\"/tmp/docker/overlay2/9999999999999999999999999999999999999999999999999999999999999999/merged\\\" at \\\"/tmp/docker/overlay2/9999999999999999999999999999999999999999999999999999999999999999/merged/keep/99999999999999999999999999999999+99999/myGenome\\\" caused \\\"no such file or directory\\\"\""`)
+ nextState = "Cancelled"
+ },
+ func() {
+ c.Log("// loadErr = cannot connect")
+ s.executor.loadErr = errors.New("Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?")
+ *brokenNodeHook = c.MkDir() + "/broken-node-hook"
+ err := ioutil.WriteFile(*brokenNodeHook, []byte("#!/bin/sh\nexec echo killme\n"), 0700)
+ c.Assert(err, IsNil)
+ nextState = "Queued"
+ },
+ } {
+ s.SetUpTest(c)
+ setup()
+ s.fullRunHelper(c, `{
"command": ["echo", "hello world"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "`+arvadostest.DockerImage112PDH+`",
"cwd": ".",
"environment": {},
"mounts": {"/tmp": {"kind": "tmp"} },
"priority": 1,
"runtime_constraints": {},
"state": "Locked"
-}`, nil, 5, func(t *TestDockerClient) {
- t.logWriter.Write(dockerLog(1, "hello world\n"))
- t.logWriter.Close()
- })
-
- c.Check(api.CalledWith("container.state", "Cancelled"), NotNil)
- c.Check(api.Logs["crunch-run"].String(), Matches, "(?ms).*Possible causes:.*is missing.*")
+}`, nil, 0, func() {})
+ c.Check(s.api.CalledWith("container.state", nextState), NotNil)
+ c.Check(s.api.Logs["crunch-run"].String(), Matches, "(?ms).*unable to run containers.*")
+ if *brokenNodeHook != "" {
+ c.Check(s.api.Logs["crunch-run"].String(), Matches, "(?ms).*Running broken node hook.*")
+ c.Check(s.api.Logs["crunch-run"].String(), Matches, "(?ms).*killme.*")
+ c.Check(s.api.Logs["crunch-run"].String(), Not(Matches), "(?ms).*Writing /var/lock/crunch-run-broken to mark node as broken.*")
+ } else {
+ c.Check(s.api.Logs["crunch-run"].String(), Matches, "(?ms).*Writing /var/lock/crunch-run-broken to mark node as broken.*")
+ }
+ }
}
-func (s *TestSuite) TestBadCommand3(c *C) {
- ech := ""
- brokenNodeHook = &ech
-
- api, _, _ := s.fullRunHelper(c, `{
+func (s *TestSuite) TestBadCommand(c *C) {
+ for _, startError := range []string{
+ `panic: standard_init_linux.go:175: exec user process caused "no such file or directory"`,
+ `Error response from daemon: Cannot start container 41f26cbc43bcc1280f4323efb1830a394ba8660c9d1c2b564ba42bf7f7694845: [8] System error: no such file or directory`,
+ `Error response from daemon: Cannot start container 58099cd76c834f3dc2a4fb76c8028f049ae6d4fdf0ec373e1f2cfea030670c2d: [8] System error: exec: "foobar": executable file not found in $PATH`,
+ } {
+ s.SetUpTest(c)
+ s.executor.startErr = errors.New(startError)
+ s.fullRunHelper(c, `{
"command": ["echo", "hello world"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "`+arvadostest.DockerImage112PDH+`",
"cwd": ".",
"environment": {},
"mounts": {"/tmp": {"kind": "tmp"} },
"priority": 1,
"runtime_constraints": {},
"state": "Locked"
-}`, nil, 6, func(t *TestDockerClient) {
- t.logWriter.Write(dockerLog(1, "hello world\n"))
- t.logWriter.Close()
- })
-
- c.Check(api.CalledWith("container.state", "Cancelled"), NotNil)
- c.Check(api.Logs["crunch-run"].String(), Matches, "(?ms).*Possible causes:.*is missing.*")
+}`, nil, 0, func() {})
+ c.Check(s.api.CalledWith("container.state", "Cancelled"), NotNil)
+ c.Check(s.api.Logs["crunch-run"].String(), Matches, "(?ms).*Possible causes:.*is missing.*")
+ }
}
func (s *TestSuite) TestSecretTextMountPoint(c *C) {
- // under normal mounts, gets captured in output, oops
helperRecord := `{
"command": ["true"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "` + arvadostest.DockerImage112PDH + `",
"cwd": "/bin",
"mounts": {
"/tmp": {"kind": "tmp"},
"state": "Locked"
}`
- api, cr, _ := s.fullRunHelper(c, helperRecord, nil, 0, func(t *TestDockerClient) {
- content, err := ioutil.ReadFile(t.realTemp + "/tmp2/secret.conf")
+ s.fullRunHelper(c, helperRecord, nil, 0, func() {
+ content, err := ioutil.ReadFile(s.runner.HostOutputDir + "/secret.conf")
c.Check(err, IsNil)
- c.Check(content, DeepEquals, []byte("mypassword"))
- t.logWriter.Close()
+ c.Check(string(content), Equals, "mypassword")
})
- c.Check(api.CalledWith("container.exit_code", 0), NotNil)
- c.Check(api.CalledWith("container.state", "Complete"), NotNil)
- c.Check(cr.ContainerArvClient.(*ArvTestClient).CalledWith("collection.manifest_text", ". 34819d7beeabb9260a5c854bc85b3e44+10 0:10:secret.conf\n"), NotNil)
- c.Check(cr.ContainerArvClient.(*ArvTestClient).CalledWith("collection.manifest_text", ""), IsNil)
+ c.Check(s.api.CalledWith("container.exit_code", 0), NotNil)
+ c.Check(s.api.CalledWith("container.state", "Complete"), NotNil)
+ c.Check(s.runner.ContainerArvClient.(*ArvTestClient).CalledWith("collection.manifest_text", ". 34819d7beeabb9260a5c854bc85b3e44+10 0:10:secret.conf\n"), NotNil)
+ c.Check(s.runner.ContainerArvClient.(*ArvTestClient).CalledWith("collection.manifest_text", ""), IsNil)
// under secret mounts, not captured in output
helperRecord = `{
"command": ["true"],
- "container_image": "d4ab34d3d4f8a72f5c4973051ae69fab+122",
+ "container_image": "` + arvadostest.DockerImage112PDH + `",
"cwd": "/bin",
"mounts": {
"/tmp": {"kind": "tmp"}
"state": "Locked"
}`
- api, cr, _ = s.fullRunHelper(c, helperRecord, nil, 0, func(t *TestDockerClient) {
- content, err := ioutil.ReadFile(t.realTemp + "/tmp2/secret.conf")
+ s.SetUpTest(c)
+ s.fullRunHelper(c, helperRecord, nil, 0, func() {
+ content, err := ioutil.ReadFile(s.runner.HostOutputDir + "/secret.conf")
c.Check(err, IsNil)
- c.Check(content, DeepEquals, []byte("mypassword"))
- t.logWriter.Close()
+ c.Check(string(content), Equals, "mypassword")
})
- c.Check(api.CalledWith("container.exit_code", 0), NotNil)
- c.Check(api.CalledWith("container.state", "Complete"), NotNil)
- c.Check(cr.ContainerArvClient.(*ArvTestClient).CalledWith("collection.manifest_text", ". 34819d7beeabb9260a5c854bc85b3e44+10 0:10:secret.conf\n"), IsNil)
- c.Check(cr.ContainerArvClient.(*ArvTestClient).CalledWith("collection.manifest_text", ""), NotNil)
+ c.Check(s.api.CalledWith("container.exit_code", 0), NotNil)
+ c.Check(s.api.CalledWith("container.state", "Complete"), NotNil)
+ c.Check(s.runner.ContainerArvClient.(*ArvTestClient).CalledWith("collection.manifest_text", ". 34819d7beeabb9260a5c854bc85b3e44+10 0:10:secret.conf\n"), IsNil)
+ c.Check(s.runner.ContainerArvClient.(*ArvTestClient).CalledWith("collection.manifest_text", ""), NotNil)
}
type FakeProcess struct {
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+package crunchrun
+
+import (
+ "fmt"
+ "io"
+ "io/ioutil"
+ "os"
+ "strings"
+ "time"
+
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ dockertypes "github.com/docker/docker/api/types"
+ dockercontainer "github.com/docker/docker/api/types/container"
+ dockerclient "github.com/docker/docker/client"
+ "golang.org/x/net/context"
+)
+
+// Docker daemon won't let you set a limit less than ~10 MiB
+const minDockerRAM = int64(16 * 1024 * 1024)
+
+type dockerExecutor struct {
+ containerUUID string
+ logf func(string, ...interface{})
+ watchdogInterval time.Duration
+ dockerclient *dockerclient.Client
+ containerID string
+ doneIO chan struct{}
+ errIO error
+}
+
+func newDockerExecutor(containerUUID string, logf func(string, ...interface{}), watchdogInterval time.Duration) (*dockerExecutor, error) {
+ // API version 1.21 corresponds to Docker 1.9, which is
+ // currently the minimum version we want to support.
+ client, err := dockerclient.NewClient(dockerclient.DefaultDockerHost, "1.21", nil, nil)
+ if watchdogInterval < 1 {
+ watchdogInterval = time.Minute
+ }
+ return &dockerExecutor{
+ containerUUID: containerUUID,
+ logf: logf,
+ watchdogInterval: watchdogInterval,
+ dockerclient: client,
+ }, err
+}
+
+func (e *dockerExecutor) Runtime() string { return "docker" }
+
+func (e *dockerExecutor) LoadImage(imageID string, imageTarballPath string, container arvados.Container, arvMountPoint string,
+ containerClient *arvados.Client) error {
+ _, _, err := e.dockerclient.ImageInspectWithRaw(context.TODO(), imageID)
+ if err == nil {
+ // already loaded
+ return nil
+ }
+
+ f, err := os.Open(imageTarballPath)
+ if err != nil {
+ return err
+ }
+ defer f.Close()
+ resp, err := e.dockerclient.ImageLoad(context.TODO(), f, true)
+ if err != nil {
+ return fmt.Errorf("While loading container image into Docker: %v", err)
+ }
+ defer resp.Body.Close()
+ buf, _ := ioutil.ReadAll(resp.Body)
+ e.logf("loaded image: response %s", buf)
+ return nil
+}
+
+func (e *dockerExecutor) config(spec containerSpec) (dockercontainer.Config, dockercontainer.HostConfig) {
+ e.logf("Creating Docker container")
+ cfg := dockercontainer.Config{
+ Image: spec.Image,
+ Cmd: spec.Command,
+ WorkingDir: spec.WorkingDir,
+ Volumes: map[string]struct{}{},
+ OpenStdin: spec.Stdin != nil,
+ StdinOnce: spec.Stdin != nil,
+ AttachStdin: spec.Stdin != nil,
+ AttachStdout: true,
+ AttachStderr: true,
+ }
+ if cfg.WorkingDir == "." {
+ cfg.WorkingDir = ""
+ }
+ for k, v := range spec.Env {
+ cfg.Env = append(cfg.Env, k+"="+v)
+ }
+ if spec.RAM > 0 && spec.RAM < minDockerRAM {
+ spec.RAM = minDockerRAM
+ }
+ hostCfg := dockercontainer.HostConfig{
+ LogConfig: dockercontainer.LogConfig{
+ Type: "none",
+ },
+ NetworkMode: dockercontainer.NetworkMode("none"),
+ Resources: dockercontainer.Resources{
+ CgroupParent: spec.CgroupParent,
+ NanoCPUs: int64(spec.VCPUs) * 1000000000,
+ Memory: spec.RAM, // RAM
+ MemorySwap: spec.RAM, // RAM+swap
+ KernelMemory: spec.RAM, // kernel portion
+ },
+ }
+ if spec.CUDADeviceCount != 0 {
+ hostCfg.Resources.DeviceRequests = append(hostCfg.Resources.DeviceRequests, dockercontainer.DeviceRequest{
+ Driver: "nvidia",
+ Count: spec.CUDADeviceCount,
+			Capabilities: [][]string{{"gpu", "nvidia", "compute"}},
+ })
+ }
+ for path, mount := range spec.BindMounts {
+ bind := mount.HostPath + ":" + path
+ if mount.ReadOnly {
+ bind += ":ro"
+ }
+ hostCfg.Binds = append(hostCfg.Binds, bind)
+ }
+ if spec.EnableNetwork {
+ hostCfg.NetworkMode = dockercontainer.NetworkMode(spec.NetworkMode)
+ }
+ return cfg, hostCfg
+}
+
+func (e *dockerExecutor) Create(spec containerSpec) error {
+ cfg, hostCfg := e.config(spec)
+ created, err := e.dockerclient.ContainerCreate(context.TODO(), &cfg, &hostCfg, nil, e.containerUUID)
+ if err != nil {
+ return fmt.Errorf("While creating container: %v", err)
+ }
+ e.containerID = created.ID
+ return e.startIO(spec.Stdin, spec.Stdout, spec.Stderr)
+}
+
+func (e *dockerExecutor) CgroupID() string {
+ return e.containerID
+}
+
+func (e *dockerExecutor) Start() error {
+ return e.dockerclient.ContainerStart(context.TODO(), e.containerID, dockertypes.ContainerStartOptions{})
+}
+
+func (e *dockerExecutor) Stop() error {
+ err := e.dockerclient.ContainerRemove(context.TODO(), e.containerID, dockertypes.ContainerRemoveOptions{Force: true})
+ if err != nil && strings.Contains(err.Error(), "No such container: "+e.containerID) {
+ err = nil
+ }
+ return err
+}
+
+// Wait for the container to terminate, capture the exit code, and
+// wait for stdout/stderr logging to finish.
+func (e *dockerExecutor) Wait(ctx context.Context) (int, error) {
+ ctx, cancel := context.WithCancel(ctx)
+ defer cancel()
+ watchdogErr := make(chan error, 1)
+ go func() {
+ ticker := time.NewTicker(e.watchdogInterval)
+ defer ticker.Stop()
+ for range ticker.C {
+ dctx, dcancel := context.WithDeadline(ctx, time.Now().Add(e.watchdogInterval))
+ ctr, err := e.dockerclient.ContainerInspect(dctx, e.containerID)
+ dcancel()
+ if ctx.Err() != nil {
+ // Either the container already
+ // exited, or our caller is trying to
+ // kill it.
+ return
+ } else if err != nil {
+ e.logf("Error inspecting container: %s", err)
+ watchdogErr <- err
+ return
+ } else if ctr.State == nil || !(ctr.State.Running || ctr.State.Status == "created") {
+ watchdogErr <- fmt.Errorf("Container is not running: State=%v", ctr.State)
+ return
+ }
+ }
+ }()
+
+ waitOk, waitErr := e.dockerclient.ContainerWait(ctx, e.containerID, dockercontainer.WaitConditionNotRunning)
+ for {
+ select {
+ case waitBody := <-waitOk:
+ e.logf("Container exited with code: %v", waitBody.StatusCode)
+ // wait for stdout/stderr to complete
+ <-e.doneIO
+ return int(waitBody.StatusCode), nil
+
+ case err := <-waitErr:
+ return -1, fmt.Errorf("container wait: %v", err)
+
+ case <-ctx.Done():
+ return -1, ctx.Err()
+
+ case err := <-watchdogErr:
+ return -1, err
+ }
+ }
+}
+
+func (e *dockerExecutor) startIO(stdin io.Reader, stdout, stderr io.Writer) error {
+ resp, err := e.dockerclient.ContainerAttach(context.TODO(), e.containerID, dockertypes.ContainerAttachOptions{
+ Stream: true,
+ Stdin: stdin != nil,
+ Stdout: true,
+ Stderr: true,
+ })
+ if err != nil {
+ return fmt.Errorf("error attaching container stdin/stdout/stderr streams: %v", err)
+ }
+ var errStdin error
+ if stdin != nil {
+ go func() {
+ errStdin = e.handleStdin(stdin, resp.Conn, resp.CloseWrite)
+ }()
+ }
+ e.doneIO = make(chan struct{})
+ go func() {
+ e.errIO = e.handleStdoutStderr(stdout, stderr, resp.Reader)
+ if e.errIO == nil && errStdin != nil {
+ e.errIO = errStdin
+ }
+ close(e.doneIO)
+ }()
+ return nil
+}
+
+func (e *dockerExecutor) handleStdin(stdin io.Reader, conn io.Writer, closeConn func() error) error {
+ defer closeConn()
+ _, err := io.Copy(conn, stdin)
+ if err != nil {
+ return fmt.Errorf("While writing to docker container on stdin: %v", err)
+ }
+ return nil
+}
+
+// Handle docker log protocol; see
+// https://docs.docker.com/engine/reference/api/docker_remote_api_v1.15/#attach-to-a-container
+func (e *dockerExecutor) handleStdoutStderr(stdout, stderr io.Writer, reader io.Reader) error {
+ header := make([]byte, 8)
+ var err error
+ for err == nil {
+ _, err = io.ReadAtLeast(reader, header, 8)
+ if err != nil {
+ if err == io.EOF {
+ err = nil
+ }
+ break
+ }
+ readsize := int64(header[7]) | (int64(header[6]) << 8) | (int64(header[5]) << 16) | (int64(header[4]) << 24)
+ if header[0] == 1 {
+ _, err = io.CopyN(stdout, reader, readsize)
+ } else {
+ // stderr
+ _, err = io.CopyN(stderr, reader, readsize)
+ }
+ }
+ if err != nil {
+ return fmt.Errorf("error copying stdout/stderr from docker: %v", err)
+ }
+ return nil
+}
+
+func (e *dockerExecutor) Close() {
+ e.dockerclient.ContainerRemove(context.TODO(), e.containerID, dockertypes.ContainerRemoveOptions{Force: true})
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package crunchrun
+
+import (
+ "os/exec"
+ "time"
+
+ dockercontainer "github.com/docker/docker/api/types/container"
+ . "gopkg.in/check.v1"
+)
+
+var _ = Suite(&dockerSuite{})
+
+type dockerSuite struct {
+ executorSuite
+}
+
+func (s *dockerSuite) SetUpSuite(c *C) {
+ _, err := exec.LookPath("docker")
+ if err != nil {
+ c.Skip("looks like docker is not installed")
+ }
+ s.newExecutor = func(c *C) {
+ exec.Command("docker", "rm", "zzzzz-zzzzz-zzzzzzzzzzzzzzz").Run()
+ var err error
+ s.executor, err = newDockerExecutor("zzzzz-zzzzz-zzzzzzzzzzzzzzz", c.Logf, time.Second/2)
+ c.Assert(err, IsNil)
+ }
+}
+
+var _ = Suite(&dockerStubSuite{})
+
+// dockerStubSuite tests don't really connect to the docker service,
+// so we can run them even if docker is not installed.
+type dockerStubSuite struct{}
+
+func (s *dockerStubSuite) TestDockerContainerConfig(c *C) {
+ e, err := newDockerExecutor("zzzzz-zzzzz-zzzzzzzzzzzzzzz", c.Logf, time.Second/2)
+ c.Assert(err, IsNil)
+ cfg, hostCfg := e.config(containerSpec{
+ VCPUs: 4,
+ RAM: 123123123,
+ WorkingDir: "/WorkingDir",
+ Env: map[string]string{"FOO": "bar"},
+ BindMounts: map[string]bindmount{"/mnt": {HostPath: "/hostpath", ReadOnly: true}},
+ EnableNetwork: false,
+ CUDADeviceCount: 3,
+ })
+ c.Check(cfg.WorkingDir, Equals, "/WorkingDir")
+ c.Check(cfg.Env, DeepEquals, []string{"FOO=bar"})
+ c.Check(hostCfg.NetworkMode, Equals, dockercontainer.NetworkMode("none"))
+ c.Check(hostCfg.Resources.NanoCPUs, Equals, int64(4000000000))
+ c.Check(hostCfg.Resources.Memory, Equals, int64(123123123))
+ c.Check(hostCfg.Resources.MemorySwap, Equals, int64(123123123))
+ c.Check(hostCfg.Resources.KernelMemory, Equals, int64(123123123))
+ c.Check(hostCfg.Resources.DeviceRequests, DeepEquals, []dockercontainer.DeviceRequest{{
+ Driver: "nvidia",
+ Count: 3,
+ Capabilities: [][]string{{"gpu", "nvidia", "compute"}},
+ }})
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package crunchrun
+
+import (
+ "io"
+
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "golang.org/x/net/context"
+)
+
+type bindmount struct {
+ HostPath string
+ ReadOnly bool
+}
+
+type containerSpec struct {
+ Image string
+ VCPUs int
+ RAM int64
+ WorkingDir string
+ Env map[string]string
+ BindMounts map[string]bindmount
+ Command []string
+ EnableNetwork bool
+ CUDADeviceCount int
+ NetworkMode string // docker network mode, normally "default"
+ CgroupParent string
+ Stdin io.Reader
+ Stdout io.Writer
+ Stderr io.Writer
+}
+
+// containerExecutor is an interface to a container runtime
+// (docker/singularity).
+type containerExecutor interface {
+	// LoadImage loads the image from the given tarball such that
+ // it can be used to create/start a container.
+ LoadImage(imageID string, imageTarballPath string, container arvados.Container, keepMount string,
+ containerClient *arvados.Client) error
+
+ // Wait for the container process to finish, and return its
+ // exit code. If applicable, also remove the stopped container
+ // before returning.
+ Wait(context.Context) (int, error)
+
+ // Create a container, but don't start it yet.
+ Create(containerSpec) error
+
+ // Start the container
+ Start() error
+
+	// Cgroup ID the container's processes will belong to
+ CgroupID() string
+
+ // Stop the container immediately
+ Stop() error
+
+ // Release resources (temp dirs, stopped containers)
+ Close()
+
+ // Name of runtime engine ("docker", "singularity")
+ Runtime() string
+}
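The call order crunch-run expects from a containerExecutor is Create, then Start, then Wait, with Close deferred for cleanup and Stop reserved for cancellation. A minimal in-process sketch under that assumption (the fakeExecutor and run helper are hypothetical, not part of this patch, and the interface is trimmed to the lifecycle methods):

```go
package main

import (
	"context"
	"fmt"
)

// Trimmed-down copies of the patch's containerSpec and
// containerExecutor, just enough to show the expected call order.
type containerSpec struct {
	Command []string
}

type containerExecutor interface {
	Create(containerSpec) error
	Start() error
	Wait(context.Context) (int, error)
	Stop() error
	Close()
	Runtime() string
}

// fakeExecutor "runs" a container instantly with exit code 0.
type fakeExecutor struct{ created bool }

func (e *fakeExecutor) Create(spec containerSpec) error { e.created = true; return nil }
func (e *fakeExecutor) Start() error {
	if !e.created {
		return fmt.Errorf("Start called before Create")
	}
	return nil
}
func (e *fakeExecutor) Wait(ctx context.Context) (int, error) { return 0, ctx.Err() }
func (e *fakeExecutor) Stop() error                           { return nil }
func (e *fakeExecutor) Close()                                {}
func (e *fakeExecutor) Runtime() string                       { return "fake" }

// run drives the lifecycle: Create, Start, Wait, with Close deferred
// so resources are released on every exit path.
func run(e containerExecutor, spec containerSpec) (int, error) {
	defer e.Close()
	if err := e.Create(spec); err != nil {
		return -1, err
	}
	if err := e.Start(); err != nil {
		return -1, err
	}
	return e.Wait(context.Background())
}

func main() {
	code, err := run(&fakeExecutor{}, containerSpec{Command: []string{"true"}})
	fmt.Println(code, err)
}
```

Keeping the lifecycle behind a small interface is what lets the docker and singularity executors share the executorSuite tests later in this patch.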
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package crunchrun
+
+import (
+ "bytes"
+ "io"
+ "io/ioutil"
+ "net/http"
+ "os"
+ "strings"
+ "time"
+
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "golang.org/x/net/context"
+ . "gopkg.in/check.v1"
+)
+
+func busyboxDockerImage(c *C) string {
+ fnm := "busybox_uclibc.tar"
+ cachedir := c.MkDir()
+ cachefile := cachedir + "/" + fnm
+ if _, err := os.Stat(cachefile); err == nil {
+ return cachefile
+ }
+
+ f, err := ioutil.TempFile(cachedir, "")
+ c.Assert(err, IsNil)
+ defer f.Close()
+ defer os.Remove(f.Name())
+
+ resp, err := http.Get("https://cache.arvados.org/" + fnm)
+ c.Assert(err, IsNil)
+ defer resp.Body.Close()
+ _, err = io.Copy(f, resp.Body)
+ c.Assert(err, IsNil)
+ err = f.Close()
+ c.Assert(err, IsNil)
+ err = os.Rename(f.Name(), cachefile)
+ c.Assert(err, IsNil)
+
+ return cachefile
+}
+
+type nopWriteCloser struct{ io.Writer }
+
+func (nopWriteCloser) Close() error { return nil }
+
+// embedded by dockerSuite and singularitySuite so they can share
+// tests.
+type executorSuite struct {
+ newExecutor func(*C) // embedding struct's SetUpSuite method must set this
+ executor containerExecutor
+ spec containerSpec
+ stdout bytes.Buffer
+ stderr bytes.Buffer
+}
+
+func (s *executorSuite) SetUpTest(c *C) {
+ s.newExecutor(c)
+ s.stdout = bytes.Buffer{}
+ s.stderr = bytes.Buffer{}
+ s.spec = containerSpec{
+ Image: "busybox:uclibc",
+ VCPUs: 1,
+ WorkingDir: "",
+ Env: map[string]string{"PATH": "/bin:/usr/bin"},
+ NetworkMode: "default",
+ Stdout: nopWriteCloser{&s.stdout},
+ Stderr: nopWriteCloser{&s.stderr},
+ }
+ err := s.executor.LoadImage("", busyboxDockerImage(c), arvados.Container{}, "", nil)
+ c.Assert(err, IsNil)
+}
+
+func (s *executorSuite) TearDownTest(c *C) {
+ s.executor.Close()
+}
+
+func (s *executorSuite) TestExecTrivialContainer(c *C) {
+ s.spec.Command = []string{"echo", "ok"}
+ s.checkRun(c, 0)
+ c.Check(s.stdout.String(), Equals, "ok\n")
+ c.Check(s.stderr.String(), Equals, "")
+}
+
+func (s *executorSuite) TestExecStop(c *C) {
+ s.spec.Command = []string{"sh", "-c", "sleep 10; echo ok"}
+ err := s.executor.Create(s.spec)
+ c.Assert(err, IsNil)
+ err = s.executor.Start()
+ c.Assert(err, IsNil)
+ go func() {
+ time.Sleep(time.Second / 10)
+ s.executor.Stop()
+ }()
+ ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(10*time.Second))
+ defer cancel()
+ code, err := s.executor.Wait(ctx)
+ c.Check(code, Not(Equals), 0)
+ c.Check(err, IsNil)
+ c.Check(s.stdout.String(), Equals, "")
+ c.Check(s.stderr.String(), Equals, "")
+}
+
+func (s *executorSuite) TestExecCleanEnv(c *C) {
+ s.spec.Command = []string{"env"}
+ s.checkRun(c, 0)
+ c.Check(s.stderr.String(), Equals, "")
+ got := map[string]string{}
+ for _, kv := range strings.Split(s.stdout.String(), "\n") {
+ if kv == "" {
+ continue
+ }
+ kv := strings.SplitN(kv, "=", 2)
+ switch kv[0] {
+ case "HOSTNAME", "HOME":
+ // docker sets these by itself
+ case "LD_LIBRARY_PATH", "SINGULARITY_NAME", "PWD", "LANG", "SHLVL", "SINGULARITY_INIT", "SINGULARITY_CONTAINER":
+ // singularity sets these by itself (cf. https://sylabs.io/guides/3.5/user-guide/environment_and_metadata.html)
+ case "SINGULARITY_APPNAME":
+ // singularity also sets this by itself (v3.5.2, but not v3.7.4)
+ case "PROMPT_COMMAND", "PS1", "SINGULARITY_BIND", "SINGULARITY_COMMAND", "SINGULARITY_ENVIRONMENT":
+ // singularity also sets these by itself (v3.7.4)
+ default:
+ got[kv[0]] = kv[1]
+ }
+ }
+ c.Check(got, DeepEquals, s.spec.Env)
+}
+
+func (s *executorSuite) TestExecEnableNetwork(c *C) {
+ for _, enable := range []bool{false, true} {
+ s.SetUpTest(c)
+ s.spec.Command = []string{"ip", "route"}
+ s.spec.EnableNetwork = enable
+ s.checkRun(c, 0)
+ if enable {
+ c.Check(s.stdout.String(), Matches, "(?ms).*default via.*")
+ } else {
+ c.Check(s.stdout.String(), Equals, "")
+ }
+ }
+}
+
+func (s *executorSuite) TestExecWorkingDir(c *C) {
+ s.spec.WorkingDir = "/tmp"
+ s.spec.Command = []string{"sh", "-c", "pwd"}
+ s.checkRun(c, 0)
+ c.Check(s.stdout.String(), Equals, "/tmp\n")
+}
+
+func (s *executorSuite) TestExecStdoutStderr(c *C) {
+ s.spec.Command = []string{"sh", "-c", "echo foo; echo -n bar >&2; echo baz; echo waz >&2"}
+ s.checkRun(c, 0)
+ c.Check(s.stdout.String(), Equals, "foo\nbaz\n")
+ c.Check(s.stderr.String(), Equals, "barwaz\n")
+}
+
+func (s *executorSuite) checkRun(c *C, expectCode int) {
+ c.Assert(s.executor.Create(s.spec), IsNil)
+ c.Assert(s.executor.Start(), IsNil)
+ ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(10*time.Second))
+ defer cancel()
+ code, err := s.executor.Wait(ctx)
+ c.Assert(err, IsNil)
+ c.Check(code, Equals, expectCode)
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package crunchrun
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "io"
+ "io/ioutil"
+ "os"
+ "os/exec"
+ "strings"
+
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
+ . "gopkg.in/check.v1"
+)
+
+var _ = Suite(&integrationSuite{})
+
+type integrationSuite struct {
+ engine string
+ image arvados.Collection
+ input arvados.Collection
+ stdin bytes.Buffer
+ stdout bytes.Buffer
+ stderr bytes.Buffer
+ cr arvados.ContainerRequest
+ client *arvados.Client
+ ac *arvadosclient.ArvadosClient
+ kc *keepclient.KeepClient
+
+ logCollection arvados.Collection
+ outputCollection arvados.Collection
+}
+
+func (s *integrationSuite) SetUpSuite(c *C) {
+ _, err := exec.LookPath("docker")
+ if err != nil {
+ c.Skip("looks like docker is not installed")
+ }
+
+ arvadostest.StartKeep(2, true)
+
+ out, err := exec.Command("docker", "load", "--input", busyboxDockerImage(c)).CombinedOutput()
+ c.Log(string(out))
+ c.Assert(err, IsNil)
+ out, err = exec.Command("arv-keepdocker", "--no-resume", "busybox:uclibc").Output()
+ imageUUID := strings.TrimSpace(string(out))
+ c.Logf("image uuid %s", imageUUID)
+ if !c.Check(err, IsNil) {
+ if err, ok := err.(*exec.ExitError); ok {
+ c.Logf("%s", err.Stderr)
+ }
+ c.Fail()
+ }
+ err = arvados.NewClientFromEnv().RequestAndDecode(&s.image, "GET", "arvados/v1/collections/"+imageUUID, nil, nil)
+ c.Assert(err, IsNil)
+ c.Logf("image pdh %s", s.image.PortableDataHash)
+
+ s.client = arvados.NewClientFromEnv()
+ s.ac, err = arvadosclient.New(s.client)
+ c.Assert(err, IsNil)
+ s.kc = keepclient.New(s.ac)
+ fs, err := s.input.FileSystem(s.client, s.kc)
+ c.Assert(err, IsNil)
+ f, err := fs.OpenFile("inputfile", os.O_CREATE|os.O_WRONLY, 0755)
+ c.Assert(err, IsNil)
+ _, err = f.Write([]byte("inputdata"))
+ c.Assert(err, IsNil)
+ err = f.Close()
+ c.Assert(err, IsNil)
+ s.input.ManifestText, err = fs.MarshalManifest(".")
+ c.Assert(err, IsNil)
+ err = s.client.RequestAndDecode(&s.input, "POST", "arvados/v1/collections", nil, map[string]interface{}{
+ "ensure_unique_name": true,
+ "collection": map[string]interface{}{
+ "manifest_text": s.input.ManifestText,
+ },
+ })
+ c.Assert(err, IsNil)
+ c.Logf("input pdh %s", s.input.PortableDataHash)
+}
+
+func (s *integrationSuite) TearDownSuite(c *C) {
+ os.Unsetenv("ARVADOS_KEEP_SERVICES")
+ if s.client == nil {
+ // didn't set up
+ return
+ }
+ err := s.client.RequestAndDecode(nil, "POST", "database/reset", nil, nil)
+ c.Check(err, IsNil)
+}
+
+func (s *integrationSuite) SetUpTest(c *C) {
+ os.Unsetenv("ARVADOS_KEEP_SERVICES")
+ s.engine = "docker"
+ s.stdin = bytes.Buffer{}
+ s.stdout = bytes.Buffer{}
+ s.stderr = bytes.Buffer{}
+ s.logCollection = arvados.Collection{}
+ s.outputCollection = arvados.Collection{}
+ s.cr = arvados.ContainerRequest{
+ Priority: 1,
+ State: "Committed",
+ OutputPath: "/mnt/out",
+ ContainerImage: s.image.PortableDataHash,
+ Mounts: map[string]arvados.Mount{
+ "/mnt/json": {
+ Kind: "json",
+ Content: []interface{}{
+ "foo",
+ map[string]string{"foo": "bar"},
+ nil,
+ },
+ },
+ "/mnt/in": {
+ Kind: "collection",
+ PortableDataHash: s.input.PortableDataHash,
+ },
+ "/mnt/out": {
+ Kind: "tmp",
+ Capacity: 1000,
+ },
+ },
+ RuntimeConstraints: arvados.RuntimeConstraints{
+ RAM: 128000000,
+ VCPUs: 1,
+ API: true,
+ },
+ }
+}
+
+func (s *integrationSuite) setup(c *C) {
+ err := s.client.RequestAndDecode(&s.cr, "POST", "arvados/v1/container_requests", nil, map[string]interface{}{"container_request": map[string]interface{}{
+ "priority": s.cr.Priority,
+ "state": s.cr.State,
+ "command": s.cr.Command,
+ "output_path": s.cr.OutputPath,
+ "container_image": s.cr.ContainerImage,
+ "mounts": s.cr.Mounts,
+ "runtime_constraints": s.cr.RuntimeConstraints,
+ "use_existing": false,
+ }})
+ c.Assert(err, IsNil)
+ c.Assert(s.cr.ContainerUUID, Not(Equals), "")
+ err = s.client.RequestAndDecode(nil, "POST", "arvados/v1/containers/"+s.cr.ContainerUUID+"/lock", nil, nil)
+ c.Assert(err, IsNil)
+}
+
+func (s *integrationSuite) TestRunTrivialContainerWithDocker(c *C) {
+ s.engine = "docker"
+ s.testRunTrivialContainer(c)
+}
+
+func (s *integrationSuite) TestRunTrivialContainerWithSingularity(c *C) {
+ s.engine = "singularity"
+ s.testRunTrivialContainer(c)
+}
+
+func (s *integrationSuite) TestRunTrivialContainerWithLocalKeepstore(c *C) {
+ for _, trial := range []struct {
+ logConfig string
+ matchGetReq Checker
+ matchPutReq Checker
+ matchStartupMessage Checker
+ }{
+ {"none", Not(Matches), Not(Matches), Not(Matches)},
+ {"all", Matches, Matches, Matches},
+ {"errors", Not(Matches), Not(Matches), Matches},
+ } {
+ c.Logf("=== testing with Containers.LocalKeepLogsToContainerLog: %q", trial.logConfig)
+ s.SetUpTest(c)
+
+ cfg, err := config.NewLoader(nil, ctxlog.TestLogger(c)).Load()
+ c.Assert(err, IsNil)
+ cluster, err := cfg.GetCluster("")
+ c.Assert(err, IsNil)
+ for uuid, volume := range cluster.Volumes {
+ volume.AccessViaHosts = nil
+ volume.Replication = 2
+ cluster.Volumes[uuid] = volume
+ }
+ cluster.Containers.LocalKeepLogsToContainerLog = trial.logConfig
+
+ s.stdin.Reset()
+ err = json.NewEncoder(&s.stdin).Encode(ConfigData{
+ Env: nil,
+ KeepBuffers: 1,
+ Cluster: cluster,
+ })
+ c.Assert(err, IsNil)
+
+ s.engine = "docker"
+ s.testRunTrivialContainer(c)
+
+ fs, err := s.logCollection.FileSystem(s.client, s.kc)
+ c.Assert(err, IsNil)
+ f, err := fs.Open("keepstore.txt")
+ if trial.logConfig == "none" {
+ c.Check(err, NotNil)
+ c.Check(os.IsNotExist(err), Equals, true)
+ } else {
+ c.Assert(err, IsNil)
+ buf, err := ioutil.ReadAll(f)
+ c.Assert(err, IsNil)
+ c.Check(string(buf), trial.matchGetReq, `(?ms).*"reqMethod":"GET".*`)
+ c.Check(string(buf), trial.matchPutReq, `(?ms).*"reqMethod":"PUT".*,"reqPath":"0e3bcff26d51c895a60ea0d4585e134d".*`)
+ }
+ }
+}
+
+func (s *integrationSuite) testRunTrivialContainer(c *C) {
+ if err := exec.Command("which", s.engine).Run(); err != nil {
+ c.Skip(fmt.Sprintf("%s: %s", s.engine, err))
+ }
+ s.cr.Command = []string{"sh", "-c", "cat /mnt/in/inputfile >/mnt/out/inputfile && cat /mnt/json >/mnt/out/json && ! touch /mnt/in/shouldbereadonly && mkdir /mnt/out/emptydir"}
+ s.setup(c)
+
+ args := []string{
+ "-runtime-engine=" + s.engine,
+ "-enable-memory-limit=false",
+ s.cr.ContainerUUID,
+ }
+ if s.stdin.Len() > 0 {
+ args = append([]string{"-stdin-config=true"}, args...)
+ }
+ code := command{}.RunCommand("crunch-run", args, &s.stdin, io.MultiWriter(&s.stdout, os.Stderr), io.MultiWriter(&s.stderr, os.Stderr))
+ c.Logf("\n===== stdout =====\n%s", s.stdout.String())
+ c.Logf("\n===== stderr =====\n%s", s.stderr.String())
+ c.Check(code, Equals, 0)
+ err := s.client.RequestAndDecode(&s.cr, "GET", "arvados/v1/container_requests/"+s.cr.UUID, nil, nil)
+ c.Assert(err, IsNil)
+ c.Logf("Finished container request: %#v", s.cr)
+
+ var log arvados.Collection
+ err = s.client.RequestAndDecode(&log, "GET", "arvados/v1/collections/"+s.cr.LogUUID, nil, nil)
+ c.Assert(err, IsNil)
+ fs, err := log.FileSystem(s.client, s.kc)
+ c.Assert(err, IsNil)
+ if d, err := fs.Open("/"); c.Check(err, IsNil) {
+ fis, err := d.Readdir(-1)
+ c.Assert(err, IsNil)
+ for _, fi := range fis {
+ if fi.IsDir() {
+ continue
+ }
+ f, err := fs.Open(fi.Name())
+ c.Assert(err, IsNil)
+ buf, err := ioutil.ReadAll(f)
+ c.Assert(err, IsNil)
+ c.Logf("\n===== %s =====\n%s", fi.Name(), buf)
+ }
+ }
+ s.logCollection = log
+
+ var output arvados.Collection
+ err = s.client.RequestAndDecode(&output, "GET", "arvados/v1/collections/"+s.cr.OutputUUID, nil, nil)
+ c.Assert(err, IsNil)
+ fs, err = output.FileSystem(s.client, s.kc)
+ c.Assert(err, IsNil)
+ if f, err := fs.Open("inputfile"); c.Check(err, IsNil) {
+ defer f.Close()
+ buf, err := ioutil.ReadAll(f)
+ c.Check(err, IsNil)
+ c.Check(string(buf), Equals, "inputdata")
+ }
+ if f, err := fs.Open("json"); c.Check(err, IsNil) {
+ defer f.Close()
+ buf, err := ioutil.ReadAll(f)
+ c.Check(err, IsNil)
+ c.Check(string(buf), Equals, `["foo",{"foo":"bar"},null]`)
+ }
+ if fi, err := fs.Stat("emptydir"); c.Check(err, IsNil) {
+ c.Check(fi.IsDir(), Equals, true)
+ }
+ if d, err := fs.Open("emptydir"); c.Check(err, IsNil) {
+ defer d.Close()
+ fis, err := d.Readdir(-1)
+ c.Assert(err, IsNil)
+ // crunch-run still saves a ".keep" file to preserve
+ // empty dirs even though that shouldn't be
+ // necessary. Ideally we would do:
+ // c.Check(fis, HasLen, 0)
+ for _, fi := range fis {
+ c.Check(fi.Name(), Equals, ".keep")
+ }
+ }
+ s.outputCollection = output
+}
import (
"bufio"
"bytes"
+ "encoding/json"
"fmt"
"io"
"log"
loadDuration(&crunchLogUpdatePeriod, "crunchLogUpdatePeriod")
}
+
+type filterKeepstoreErrorsOnly struct {
+ io.WriteCloser
+ buf []byte
+}
+
+func (f *filterKeepstoreErrorsOnly) Write(p []byte) (int, error) {
+ log.Printf("filterKeepstoreErrorsOnly: write %q", p)
+ f.buf = append(f.buf, p...)
+ start := 0
+ for i := len(f.buf) - len(p); i < len(f.buf); i++ {
+ if f.buf[i] == '\n' {
+ if f.check(f.buf[start:i]) {
+ _, err := f.WriteCloser.Write(f.buf[start : i+1])
+ if err != nil {
+ return 0, err
+ }
+ }
+ start = i + 1
+ }
+ }
+ if start > 0 {
+ copy(f.buf, f.buf[start:])
+ f.buf = f.buf[:len(f.buf)-start]
+ }
+ return len(p), nil
+}
+
+func (f *filterKeepstoreErrorsOnly) check(line []byte) bool {
+ if len(line) == 0 {
+ return false
+ }
+ if line[0] != '{' {
+ return true
+ }
+ var m map[string]interface{}
+ err := json.Unmarshal(line, &m)
+ if err != nil {
+ return true
+ }
+ if m["msg"] == "request" {
+ return false
+ }
+ if m["msg"] == "response" {
+ if code, _ := m["respStatusCode"].(float64); code >= 200 && code < 300 {
+ return false
+ }
+ }
+ return true
+}
package crunchrun
import (
+ "bytes"
"fmt"
+ "io"
"strings"
"testing"
"time"
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/arvadosclient"
. "gopkg.in/check.v1"
+ check "gopkg.in/check.v1"
)
type LoggingTestSuite struct {
api := &ArvTestClient{}
kc := &KeepTestClient{}
defer kc.Close()
- cr, err := NewContainerRunner(s.client, api, kc, nil, "zzzzz-zzzzzzzzzzzzzzz")
+ cr, err := NewContainerRunner(s.client, api, kc, "zzzzz-zzzzzzzzzzzzzzz")
c.Assert(err, IsNil)
cr.CrunchLog.Timestamper = (&TestTimestamper{}).Timestamp
api := &ArvTestClient{}
kc := &KeepTestClient{}
defer kc.Close()
- cr, err := NewContainerRunner(s.client, api, kc, nil, "zzzzz-zzzzzzzzzzzzzzz")
+ cr, err := NewContainerRunner(s.client, api, kc, "zzzzz-zzzzzzzzzzzzzzz")
c.Assert(err, IsNil)
cr.CrunchLog.Timestamper = (&TestTimestamper{}).Timestamp
cr.CrunchLog.Immediate = nil
api := &ArvTestClient{}
kc := &KeepTestClient{}
defer kc.Close()
- cr, err := NewContainerRunner(s.client, api, kc, nil, "zzzzz-zzzzzzzzzzzzzzz")
+ cr, err := NewContainerRunner(s.client, api, kc, "zzzzz-zzzzzzzzzzzzzzz")
c.Assert(err, IsNil)
ts := &TestTimestamper{}
cr.CrunchLog.Timestamper = ts.Timestamp
api := &ArvTestClient{}
kc := &KeepTestClient{}
defer kc.Close()
- cr, err := NewContainerRunner(s.client, api, kc, nil, "zzzzz-zzzzzzzzzzzzzzz")
+ cr, err := NewContainerRunner(s.client, api, kc, "zzzzz-zzzzzzzzzzzzzzz")
c.Assert(err, IsNil)
ts := &TestTimestamper{}
cr.CrunchLog.Timestamper = ts.Timestamp
api := &ArvTestClient{}
kc := &KeepTestClient{}
defer kc.Close()
- cr, err := NewContainerRunner(s.client, api, kc, nil, "zzzzz-zzzzzzzzzzzzzzz")
+ cr, err := NewContainerRunner(s.client, api, kc, "zzzzz-zzzzzzzzzzzzzzz")
c.Assert(err, IsNil)
cr.CrunchLog.Timestamper = (&TestTimestamper{}).Timestamp
c.Check(true, Equals, strings.Contains(stderrLog, expected))
c.Check(string(kc.Content), Equals, logtext)
}
+
+type filterSuite struct{}
+
+var _ = Suite(&filterSuite{})
+
+func (*filterSuite) TestFilterKeepstoreErrorsOnly(c *check.C) {
+ var buf bytes.Buffer
+ f := filterKeepstoreErrorsOnly{WriteCloser: nopCloser{&buf}}
+ for _, s := range []string{
+ "not j",
+ "son\n" + `{"msg":"foo"}` + "\n{}\n" + `{"msg":"request"}` + "\n" + `{"msg":1234}` + "\n\n",
+ "\n[\n",
+ `{"msg":"response","respStatusCode":404,"foo": "bar"}` + "\n",
+ `{"msg":"response","respStatusCode":206}` + "\n",
+ } {
+ f.Write([]byte(s))
+ }
+ c.Check(buf.String(), check.Equals, `not json
+{"msg":"foo"}
+{}
+{"msg":1234}
+[
+{"msg":"response","respStatusCode":404,"foo": "bar"}
+`)
+}
+
+type nopCloser struct {
+ io.Writer
+}
+
+func (nopCloser) Close() error { return nil }
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package crunchrun
+
+import (
+ "bytes"
+ "strings"
+)
+
+// logScanner is an io.Writer that calls ReportFunc(pattern, text) the
+// first time one of the Patterns appears in the data. Patterns must
+// not contain newlines.
+type logScanner struct {
+ Patterns []string
+ ReportFunc func(pattern, text string)
+ reported bool
+ buf bytes.Buffer
+}
+
+func (s *logScanner) Write(p []byte) (int, error) {
+ if s.reported {
+ // We only call reportFunc once. Once we've called it
+ // there's no need to buffer/search subsequent writes.
+ return len(p), nil
+ }
+ split := bytes.LastIndexByte(p, '\n')
+ if split < 0 {
+ return s.buf.Write(p)
+ }
+ s.buf.Write(p[:split+1])
+ txt := s.buf.String()
+ for _, pattern := range s.Patterns {
+ if found := strings.Index(txt, pattern); found >= 0 {
+ // Report the entire line where the pattern
+ // was found.
+ txt = txt[strings.LastIndexByte(txt[:found], '\n')+1:]
+ if end := strings.IndexByte(txt, '\n'); end >= 0 {
+ txt = txt[:end]
+ }
+ s.ReportFunc(pattern, txt)
+ s.reported = true
+ return len(p), nil
+ }
+ }
+ s.buf.Reset()
+	if split == len(p)-1 {
+ return len(p), nil
+ }
+ n, err := s.buf.Write(p[split+1:])
+ return n + split + 1, err
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package crunchrun
+
+import (
+ check "gopkg.in/check.v1"
+)
+
+var _ = check.Suite(&logScannerSuite{})
+
+type logScannerSuite struct {
+}
+
+func (s *logScannerSuite) TestCallReportFuncOnce(c *check.C) {
+ var reported []string
+ ls := logScanner{
+ Patterns: []string{"foobar", "barbaz"},
+ ReportFunc: func(pattern, detail string) {
+ reported = append(reported, pattern, detail)
+ },
+ }
+ ls.Write([]byte("foo\nbar\n2021-01-01T00:00:00.000Z: bar"))
+ ls.Write([]byte("baz: it's a detail\nwaz\nqux"))
+ ls.Write([]byte("\nfoobar\n"))
+ c.Check(reported, check.DeepEquals, []string{"barbaz", "2021-01-01T00:00:00.000Z: barbaz: it's a detail"})
+}
+
+func (s *logScannerSuite) TestOneWritePerLine(c *check.C) {
+ var reported []string
+ ls := logScanner{
+ Patterns: []string{"barbaz"},
+ ReportFunc: func(pattern, detail string) {
+ reported = append(reported, pattern, detail)
+ },
+ }
+ ls.Write([]byte("foo\n"))
+ ls.Write([]byte("2021-01-01T00:00:00.000Z: barbaz: it's a detail\n"))
+ ls.Write([]byte("waz\n"))
+ c.Check(reported, check.DeepEquals, []string{"barbaz", "2021-01-01T00:00:00.000Z: barbaz: it's a detail"})
+}
+
+func (s *logScannerSuite) TestNoDetail(c *check.C) {
+ var reported []string
+ ls := logScanner{
+ Patterns: []string{"barbaz"},
+ ReportFunc: func(pattern, detail string) {
+ reported = append(reported, pattern, detail)
+ },
+ }
+ ls.Write([]byte("foo\n"))
+ ls.Write([]byte("barbaz\n"))
+ ls.Write([]byte("waz\n"))
+ c.Check(reported, check.DeepEquals, []string{"barbaz", "barbaz"})
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package crunchrun
+
+import (
+ "fmt"
+ "io/ioutil"
+ "os"
+ "os/exec"
+ "sort"
+ "syscall"
+ "time"
+
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "golang.org/x/net/context"
+)
+
+type singularityExecutor struct {
+ logf func(string, ...interface{})
+ spec containerSpec
+ tmpdir string
+ child *exec.Cmd
+ imageFilename string // "sif" image
+}
+
+func newSingularityExecutor(logf func(string, ...interface{})) (*singularityExecutor, error) {
+ tmpdir, err := ioutil.TempDir("", "crunch-run-singularity-")
+ if err != nil {
+ return nil, err
+ }
+ return &singularityExecutor{
+ logf: logf,
+ tmpdir: tmpdir,
+ }, nil
+}
+
+func (e *singularityExecutor) Runtime() string { return "singularity" }
+
+func (e *singularityExecutor) getOrCreateProject(ownerUuid string, name string, containerClient *arvados.Client) (*arvados.Group, error) {
+ var gp arvados.GroupList
+ err := containerClient.RequestAndDecode(&gp,
+ arvados.EndpointGroupList.Method,
+ arvados.EndpointGroupList.Path,
+ nil, arvados.ListOptions{Filters: []arvados.Filter{
+ arvados.Filter{"owner_uuid", "=", ownerUuid},
+ arvados.Filter{"name", "=", name},
+ arvados.Filter{"group_class", "=", "project"},
+ },
+ Limit: 1})
+ if err != nil {
+ return nil, err
+ }
+ if len(gp.Items) == 1 {
+ return &gp.Items[0], nil
+ }
+
+ var rgroup arvados.Group
+ err = containerClient.RequestAndDecode(&rgroup,
+ arvados.EndpointGroupCreate.Method,
+ arvados.EndpointGroupCreate.Path,
+ nil, map[string]interface{}{
+ "group": map[string]string{
+ "owner_uuid": ownerUuid,
+ "name": name,
+ "group_class": "project",
+ },
+ })
+ if err != nil {
+ return nil, err
+ }
+ return &rgroup, nil
+}
+
+func (e *singularityExecutor) checkImageCache(dockerImageID string, container arvados.Container, arvMountPoint string,
+ containerClient *arvados.Client) (collection *arvados.Collection, err error) {
+
+ // Cache the converted image in Keep, under the user's ".cache" project
+ cacheGroup, err := e.getOrCreateProject(container.RuntimeUserUUID, ".cache", containerClient)
+ if err != nil {
+ return nil, fmt.Errorf("error getting '.cache' project: %v", err)
+ }
+ imageGroup, err := e.getOrCreateProject(cacheGroup.UUID, "auto-generated singularity images", containerClient)
+ if err != nil {
+ return nil, fmt.Errorf("error getting 'auto-generated singularity images' project: %s", err)
+ }
+
+ collectionName := fmt.Sprintf("singularity image for %v", dockerImageID)
+ var cl arvados.CollectionList
+ err = containerClient.RequestAndDecode(&cl,
+ arvados.EndpointCollectionList.Method,
+ arvados.EndpointCollectionList.Path,
+ nil, arvados.ListOptions{Filters: []arvados.Filter{
+ arvados.Filter{"owner_uuid", "=", imageGroup.UUID},
+ arvados.Filter{"name", "=", collectionName},
+ },
+ Limit: 1})
+ if err != nil {
+ return nil, fmt.Errorf("error querying for collection '%v': %v", collectionName, err)
+ }
+ var imageCollection arvados.Collection
+ if len(cl.Items) == 1 {
+ imageCollection = cl.Items[0]
+ } else {
+ collectionName := "converting " + collectionName
+ exp := time.Now().Add(24 * 7 * 2 * time.Hour)
+ err = containerClient.RequestAndDecode(&imageCollection,
+ arvados.EndpointCollectionCreate.Method,
+ arvados.EndpointCollectionCreate.Path,
+ nil, map[string]interface{}{
+ "collection": map[string]string{
+ "owner_uuid": imageGroup.UUID,
+ "name": collectionName,
+ "trash_at": exp.UTC().Format(time.RFC3339),
+ },
+ "ensure_unique_name": true,
+ })
+ if err != nil {
+ return nil, fmt.Errorf("error creating '%v' collection: %s", collectionName, err)
+ }
+
+ }
+
+ return &imageCollection, nil
+}
+
+// LoadImage satisfies the containerExecutor interface by converting
+// the container image into a SIF file for later use.
+func (e *singularityExecutor) LoadImage(dockerImageID string, imageTarballPath string, container arvados.Container, arvMountPoint string,
+ containerClient *arvados.Client) error {
+
+ var imageFilename string
+ var sifCollection *arvados.Collection
+ var err error
+ if containerClient != nil {
+ sifCollection, err = e.checkImageCache(dockerImageID, container, arvMountPoint, containerClient)
+ if err != nil {
+ return err
+ }
+ imageFilename = fmt.Sprintf("%s/by_uuid/%s/image.sif", arvMountPoint, sifCollection.UUID)
+ } else {
+ imageFilename = e.tmpdir + "/image.sif"
+ }
+
+ if _, err := os.Stat(imageFilename); os.IsNotExist(err) {
+ // Make sure the docker image is readable, and error
+ // out if not.
+ if _, err := os.Stat(imageTarballPath); err != nil {
+ return err
+ }
+
+ e.logf("building singularity image")
+ // "singularity build" does not accept a
+ // docker-archive://... filename containing a ":" character,
+ // as in "/path/to/sha256:abcd...1234.tar". Workaround: make a
+ // symlink that doesn't have ":" chars.
+ err := os.Symlink(imageTarballPath, e.tmpdir+"/image.tar")
+ if err != nil {
+ return err
+ }
+
+ // Set up a cache and tmp dir for singularity build
+ err = os.Mkdir(e.tmpdir+"/cache", 0700)
+ if err != nil {
+ return err
+ }
+ defer os.RemoveAll(e.tmpdir + "/cache")
+ err = os.Mkdir(e.tmpdir+"/tmp", 0700)
+ if err != nil {
+ return err
+ }
+ defer os.RemoveAll(e.tmpdir + "/tmp")
+
+ build := exec.Command("singularity", "build", imageFilename, "docker-archive://"+e.tmpdir+"/image.tar")
+ build.Env = os.Environ()
+ build.Env = append(build.Env, "SINGULARITY_CACHEDIR="+e.tmpdir+"/cache")
+ build.Env = append(build.Env, "SINGULARITY_TMPDIR="+e.tmpdir+"/tmp")
+ e.logf("%v", build.Args)
+ out, err := build.CombinedOutput()
+ // INFO: Starting build...
+ // Getting image source signatures
+ // Copying blob ab15617702de done
+ // Copying config 651e02b8a2 done
+ // Writing manifest to image destination
+ // Storing signatures
+ // 2021/04/22 14:42:14 info unpack layer: sha256:21cbfd3a344c52b197b9fa36091e66d9cbe52232703ff78d44734f85abb7ccd3
+ // INFO: Creating SIF file...
+ // INFO: Build complete: arvados-jobs.latest.sif
+ e.logf("%s", out)
+ if err != nil {
+ return err
+ }
+ }
+
+ if containerClient == nil {
+ e.imageFilename = imageFilename
+ return nil
+ }
+
+ // update TTL to now + two weeks
+ exp := time.Now().Add(24 * 7 * 2 * time.Hour)
+
+ uuidPath, err := containerClient.PathForUUID("update", sifCollection.UUID)
+ if err != nil {
+ e.logf("error PathForUUID: %v", err)
+ return nil
+ }
+ var imageCollection arvados.Collection
+ err = containerClient.RequestAndDecode(&imageCollection,
+ arvados.EndpointCollectionUpdate.Method,
+ uuidPath,
+ nil, map[string]interface{}{
+ "collection": map[string]string{
+ "name": fmt.Sprintf("singularity image for %v", dockerImageID),
+ "trash_at": exp.UTC().Format(time.RFC3339),
+ },
+ })
+ if err == nil {
+ // If we just wrote the image to the cache, the
+ // response also returns the updated PDH
+ e.imageFilename = fmt.Sprintf("%s/by_id/%s/image.sif", arvMountPoint, imageCollection.PortableDataHash)
+ return nil
+ }
+
+ e.logf("error updating/renaming collection for cached sif image: %v", err)
+ // The update failed; it may have lost a race with another
+ // process that cached a collection in the same place, so check
+ // the cache again.
+ sifCollection, err = e.checkImageCache(dockerImageID, container, arvMountPoint, containerClient)
+ if err != nil {
+ return err
+ }
+ e.imageFilename = fmt.Sprintf("%s/by_id/%s/image.sif", arvMountPoint, sifCollection.PortableDataHash)
+
+ return nil
+}
+
+func (e *singularityExecutor) Create(spec containerSpec) error {
+ e.spec = spec
+ return nil
+}
+
+func (e *singularityExecutor) execCmd(path string) *exec.Cmd {
+ args := []string{path, "exec", "--containall", "--cleanenv", "--pwd", e.spec.WorkingDir}
+ if !e.spec.EnableNetwork {
+ args = append(args, "--net", "--network=none")
+ }
+
+ if e.spec.CUDADeviceCount != 0 {
+ args = append(args, "--nv")
+ }
+
+ readonlyflag := map[bool]string{
+ false: "rw",
+ true: "ro",
+ }
+ var binds []string
+ for path := range e.spec.BindMounts {
+ binds = append(binds, path)
+ }
+ sort.Strings(binds)
+ for _, path := range binds {
+ mount := e.spec.BindMounts[path]
+ if path == e.spec.Env["HOME"] {
+ // Singularity treats $HOME as a special case
+ args = append(args, "--home", mount.HostPath+":"+path)
+ } else {
+ args = append(args, "--bind", mount.HostPath+":"+path+":"+readonlyflag[mount.ReadOnly])
+ }
+ }
+
+ // This is for singularity 3.5.2. Some behaviors will change in
+ // singularity 3.6; see:
+ // https://sylabs.io/guides/3.7/user-guide/environment_and_metadata.html
+ // https://sylabs.io/guides/3.5/user-guide/environment_and_metadata.html
+ env := make([]string, 0, len(e.spec.Env))
+ for k, v := range e.spec.Env {
+ if k == "HOME" {
+ // Singularity treates $HOME as special case, this is handled
+ // with --home above
+ continue
+ }
+ env = append(env, "SINGULARITYENV_"+k+"="+v)
+ }
+
+ args = append(args, e.imageFilename)
+ args = append(args, e.spec.Command...)
+
+ return &exec.Cmd{
+ Path: path,
+ Args: args,
+ Env: env,
+ Stdin: e.spec.Stdin,
+ Stdout: e.spec.Stdout,
+ Stderr: e.spec.Stderr,
+ }
+}
+
+func (e *singularityExecutor) Start() error {
+ path, err := exec.LookPath("singularity")
+ if err != nil {
+ return err
+ }
+ child := e.execCmd(path)
+ err = child.Start()
+ if err != nil {
+ return err
+ }
+ e.child = child
+ return nil
+}
+
+func (e *singularityExecutor) CgroupID() string {
+ return ""
+}
+
+func (e *singularityExecutor) Stop() error {
+ if err := e.child.Process.Signal(syscall.Signal(0)); err != nil {
+ // process already exited
+ return nil
+ }
+ return e.child.Process.Signal(syscall.SIGKILL)
+}
+
+func (e *singularityExecutor) Wait(context.Context) (int, error) {
+ err := e.child.Wait()
+ if err, ok := err.(*exec.ExitError); ok {
+ return err.ProcessState.ExitCode(), nil
+ }
+ if err != nil {
+ return 0, err
+ }
+ return e.child.ProcessState.ExitCode(), nil
+}
+
+func (e *singularityExecutor) Close() {
+ err := os.RemoveAll(e.tmpdir)
+ if err != nil {
+ e.logf("error removing temp dir: %s", err)
+ }
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package crunchrun
+
+import (
+ "os/exec"
+
+ . "gopkg.in/check.v1"
+)
+
+var _ = Suite(&singularitySuite{})
+
+type singularitySuite struct {
+ executorSuite
+}
+
+func (s *singularitySuite) SetUpSuite(c *C) {
+ _, err := exec.LookPath("singularity")
+ if err != nil {
+ c.Skip("looks like singularity is not installed")
+ }
+ s.newExecutor = func(c *C) {
+ var err error
+ s.executor, err = newSingularityExecutor(c.Logf)
+ c.Assert(err, IsNil)
+ }
+}
+
+var _ = Suite(&singularityStubSuite{})
+
+// singularityStubSuite tests don't really invoke singularity, so we
+// can run them even if singularity is not installed.
+type singularityStubSuite struct{}
+
+func (s *singularityStubSuite) TestSingularityExecArgs(c *C) {
+ e, err := newSingularityExecutor(c.Logf)
+ c.Assert(err, IsNil)
+ err = e.Create(containerSpec{
+ WorkingDir: "/WorkingDir",
+ Env: map[string]string{"FOO": "bar"},
+ BindMounts: map[string]bindmount{"/mnt": {HostPath: "/hostpath", ReadOnly: true}},
+ EnableNetwork: false,
+ CUDADeviceCount: 3,
+ })
+ c.Check(err, IsNil)
+ e.imageFilename = "/fake/image.sif"
+ cmd := e.execCmd("./singularity")
+ c.Check(cmd.Args, DeepEquals, []string{"./singularity", "exec", "--containall", "--cleanenv", "--pwd", "/WorkingDir", "--net", "--network=none", "--nv", "--bind", "/hostpath:/mnt:ro", "/fake/image.sif"})
+ c.Check(cmd.Env, DeepEquals, []string{"SINGULARITYENV_FOO=bar"})
+}
import (
"io"
- "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/lib/cmd"
"git.arvados.org/arvados.git/sdk/go/ctxlog"
- "github.com/sirupsen/logrus"
)
var Command command
type command struct{}
-type NoPrefixFormatter struct{}
-
-func (f *NoPrefixFormatter) Format(entry *logrus.Entry) ([]byte, error) {
- return []byte(entry.Message), nil
-}
-
// RunCommand implements the subcommand "deduplication-report <collection> <collection> ..."
func (command) RunCommand(prog string, args []string, stdin io.Reader, stdout, stderr io.Writer) int {
var err error
}
}()
- logger.SetFormatter(new(NoPrefixFormatter))
-
- loader := config.NewLoader(stdin, logger)
- loader.SkipLegacy = true
+ logger.SetFormatter(cmd.NoPrefixFormatter{})
- exitcode := report(prog, args, loader, logger, stdout, stderr)
+ exitcode := report(prog, args, logger, stdout, stderr)
return exitcode
}
"io"
"strings"
- "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/lib/cmd"
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/arvadosclient"
"git.arvados.org/arvados.git/sdk/go/manifest"
return
}
-func parseFlags(prog string, args []string, loader *config.Loader, logger *logrus.Logger, stderr io.Writer) (exitcode int, inputs []string) {
- flags := flag.NewFlagSet("", flag.ContinueOnError)
- flags.SetOutput(stderr)
+// parseFlags returns the inputs to process; if there is nothing to
+// process, it returns a nil slice and a suitable exit code.
+func parseFlags(prog string, args []string, logger *logrus.Logger, stderr io.Writer) (inputs []string, exitcode int) {
+ flags := flag.NewFlagSet(prog, flag.ContinueOnError)
flags.Usage = func() {
fmt.Fprintf(flags.Output(), `
Usage:
%s [options ...] <collection-uuid> <collection-uuid> ...
- %s [options ...] <collection-pdh>,<collection_uuid> \
- <collection-pdh>,<collection_uuid> ...
+ %s [options ...] <collection-pdh>,<collection-uuid> \
+ <collection-pdh>,<collection-uuid> ...
This program analyzes the overlap in blocks used by 2 or more collections. It
prints a deduplication report that shows the nominal space used by the
arv collection list --order 'file_size_total desc' --limit 100 | \
jq -r '.items[] | [.portable_data_hash,.uuid] |@csv' | \
- tail -n+2 |sed -e 's/"//g'|tr '\n' ' ' | \
+ sed -e 's/"//g'|tr '\n' ' ' | \
xargs %s
Options:
`, prog, prog, prog)
flags.PrintDefaults()
}
- loader.SetupFlags(flags)
loglevel := flags.String("log-level", "info", "logging level (debug, info, ...)")
- err := flags.Parse(args)
- if err == flag.ErrHelp {
- return 0, inputs
- } else if err != nil {
- return 2, inputs
+ if ok, code := cmd.ParseFlags(flags, prog, args, "collection-uuid [...]", stderr); !ok {
+ return nil, code
}
- inputs = flags.Args()
-
- inputs = deDuplicate(inputs)
+ inputs = deDuplicate(flags.Args())
if len(inputs) < 1 {
- logger.Errorf("Error: no collections provided")
- flags.Usage()
- return 2, inputs
+ fmt.Fprintf(stderr, "Error: no collections provided\n")
+ return nil, 2
}
lvl, err := logrus.ParseLevel(*loglevel)
if err != nil {
- return 2, inputs
+ fmt.Fprintf(stderr, "Error: cannot parse log level: %s\n", err)
+ return nil, 2
}
logger.SetLevel(lvl)
- return
+ return inputs, 0
}
func blockList(collection arvados.Collection) (blocks map[string]int) {
return
}
-func report(prog string, args []string, loader *config.Loader, logger *logrus.Logger, stdout, stderr io.Writer) (exitcode int) {
-
+func report(prog string, args []string, logger *logrus.Logger, stdout, stderr io.Writer) (exitcode int) {
var inputs []string
- exitcode, inputs = parseFlags(prog, args, loader, logger, stderr)
- if exitcode != 0 {
+
+ inputs, exitcode = parseFlags(prog, args, logger, stderr)
+ if inputs == nil {
return
}
func (*Suite) TestUsage(c *check.C) {
var stdout, stderr bytes.Buffer
- exitcode := Command.RunCommand("deduplicationreport.test", []string{"-log-level=debug"}, &bytes.Buffer{}, &stdout, &stderr)
- c.Check(exitcode, check.Equals, 2)
+ exitcode := Command.RunCommand("deduplicationreport.test", []string{"-h", "-log-level=debug"}, &bytes.Buffer{}, &stdout, &stderr)
+ c.Check(exitcode, check.Equals, 0)
c.Check(stdout.String(), check.Equals, "")
c.Log(stderr.String())
c.Check(stderr.String(), check.Matches, `(?ms).*Usage:.*`)
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package diagnostics
+
+import (
+ "bytes"
+ "context"
+ "flag"
+ "fmt"
+ "io"
+ "io/ioutil"
+ "net"
+ "net/http"
+ "net/url"
+ "strings"
+ "time"
+
+ "git.arvados.org/arvados.git/lib/cmd"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "github.com/sirupsen/logrus"
+)
+
+type Command struct{}
+
+func (Command) RunCommand(prog string, args []string, stdin io.Reader, stdout, stderr io.Writer) int {
+ var diag diagnoser
+ f := flag.NewFlagSet(prog, flag.ContinueOnError)
+ f.StringVar(&diag.projectName, "project-name", "scratch area for diagnostics", "name of project to find/create in home project and use for temporary/test objects")
+ f.StringVar(&diag.logLevel, "log-level", "info", "logging level (debug, info, warning, error)")
+ f.BoolVar(&diag.checkInternal, "internal-client", false, "check that this host is considered an \"internal\" client")
+ f.BoolVar(&diag.checkExternal, "external-client", false, "check that this host is considered an \"external\" client")
+ f.IntVar(&diag.priority, "priority", 500, "priority for test container (1..1000, or 0 to skip)")
+ f.DurationVar(&diag.timeout, "timeout", 10*time.Second, "timeout for http requests")
+ if ok, code := cmd.ParseFlags(f, prog, args, "", stderr); !ok {
+ return code
+ }
+ diag.logger = ctxlog.New(stdout, "text", diag.logLevel)
+ diag.logger.SetFormatter(&logrus.TextFormatter{DisableTimestamp: true, DisableLevelTruncation: true, PadLevelText: true})
+ diag.runtests()
+ if len(diag.errors) == 0 {
+ diag.logger.Info("--- no errors ---")
+ return 0
+ } else {
+ if diag.logger.Level > logrus.ErrorLevel {
+ fmt.Fprint(stdout, "\n--- cut here --- error summary ---\n\n")
+ for _, e := range diag.errors {
+ diag.logger.Error(e)
+ }
+ }
+ return 1
+ }
+}
+
+type diagnoser struct {
+ stdout io.Writer
+ stderr io.Writer
+ logLevel string
+ priority int
+ projectName string
+ checkInternal bool
+ checkExternal bool
+ timeout time.Duration
+ logger *logrus.Logger
+ errors []string
+ done map[int]bool
+}
+
+func (diag *diagnoser) debugf(f string, args ...interface{}) {
+ diag.logger.Debugf(" ... "+f, args...)
+}
+
+func (diag *diagnoser) infof(f string, args ...interface{}) {
+ diag.logger.Infof(" ... "+f, args...)
+}
+
+func (diag *diagnoser) warnf(f string, args ...interface{}) {
+ diag.logger.Warnf(" ... "+f, args...)
+}
+
+func (diag *diagnoser) errorf(f string, args ...interface{}) {
+ diag.logger.Errorf(f, args...)
+ diag.errors = append(diag.errors, fmt.Sprintf(f, args...))
+}
+
+// Run the given func, logging appropriate messages before and after,
+// adding timing info, etc.
+//
+// The id argument should be unique among tests, and shouldn't change
+// when other tests are added/removed.
+func (diag *diagnoser) dotest(id int, title string, fn func() error) {
+ if diag.done == nil {
+ diag.done = map[int]bool{}
+ } else if diag.done[id] {
+ diag.errorf("(bug) reused test id %d", id)
+ }
+ diag.done[id] = true
+
+ diag.logger.Infof("%4d: %s", id, title)
+ t0 := time.Now()
+ err := fn()
+ elapsed := fmt.Sprintf("%d ms", time.Since(t0)/time.Millisecond)
+ if err != nil {
+ diag.errorf("%4d: %s (%s): %s", id, title, elapsed, err)
+ } else {
+ diag.logger.Debugf("%4d: %s (%s): ok", id, title, elapsed)
+ }
+}
+
+func (diag *diagnoser) runtests() {
+ client := arvados.NewClientFromEnv()
+
+ if client.APIHost == "" || client.AuthToken == "" {
+ diag.errorf("ARVADOS_API_HOST and ARVADOS_API_TOKEN environment variables are not set -- aborting without running any tests")
+ return
+ }
+
+ var dd arvados.DiscoveryDocument
+ ddpath := "discovery/v1/apis/arvados/v1/rest"
+ diag.dotest(10, fmt.Sprintf("getting discovery document from https://%s/%s", client.APIHost, ddpath), func() error {
+ ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(diag.timeout))
+ defer cancel()
+ err := client.RequestAndDecodeContext(ctx, &dd, "GET", ddpath, nil, nil)
+ if err != nil {
+ return err
+ }
+ diag.debugf("BlobSignatureTTL = %d", dd.BlobSignatureTTL)
+ return nil
+ })
+
+ var cluster arvados.Cluster
+ cfgpath := "arvados/v1/config"
+ diag.dotest(20, fmt.Sprintf("getting exported config from https://%s/%s", client.APIHost, cfgpath), func() error {
+ ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(diag.timeout))
+ defer cancel()
+ err := client.RequestAndDecodeContext(ctx, &cluster, "GET", cfgpath, nil, nil)
+ if err != nil {
+ return err
+ }
+ diag.debugf("Collections.BlobSigning = %v", cluster.Collections.BlobSigning)
+ return nil
+ })
+
+ var user arvados.User
+ diag.dotest(30, "getting current user record", func() error {
+ ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(diag.timeout))
+ defer cancel()
+ err := client.RequestAndDecodeContext(ctx, &user, "GET", "arvados/v1/users/current", nil, nil)
+ if err != nil {
+ return err
+ }
+ diag.debugf("user uuid = %s", user.UUID)
+ return nil
+ })
+
+ // uncomment to create some spurious errors
+ // cluster.Services.WebDAVDownload.ExternalURL.Host = "0.0.0.0:9"
+
+ // TODO: detect routing errors here, like finding wb2 at the
+ // wb1 address.
+ for i, svc := range []*arvados.Service{
+ &cluster.Services.Keepproxy,
+ &cluster.Services.WebDAV,
+ &cluster.Services.WebDAVDownload,
+ &cluster.Services.Websocket,
+ &cluster.Services.Workbench1,
+ &cluster.Services.Workbench2,
+ } {
+ diag.dotest(40+i, fmt.Sprintf("connecting to service endpoint %s", svc.ExternalURL), func() error {
+ ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(diag.timeout))
+ defer cancel()
+ u := svc.ExternalURL
+ if strings.HasPrefix(u.Scheme, "ws") {
+ // We can do a real websocket test elsewhere,
+ // but for now we'll just check the https
+ // connection.
+ u.Scheme = "http" + u.Scheme[2:]
+ }
+ if svc == &cluster.Services.WebDAV && strings.HasPrefix(u.Host, "*") {
+ u.Host = "d41d8cd98f00b204e9800998ecf8427e-0" + u.Host[1:]
+ }
+ req, err := http.NewRequestWithContext(ctx, http.MethodGet, u.String(), nil)
+ if err != nil {
+ return err
+ }
+ resp, err := http.DefaultClient.Do(req)
+ if err != nil {
+ return err
+ }
+ resp.Body.Close()
+ return nil
+ })
+ }
+
+ for i, url := range []string{
+ cluster.Services.Controller.ExternalURL.String(),
+ cluster.Services.Keepproxy.ExternalURL.String() + "d41d8cd98f00b204e9800998ecf8427e+0",
+ cluster.Services.WebDAVDownload.ExternalURL.String(),
+ } {
+ diag.dotest(50+i, fmt.Sprintf("checking CORS headers at %s", url), func() error {
+ ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(diag.timeout))
+ defer cancel()
+ req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
+ if err != nil {
+ return err
+ }
+ req.Header.Set("Origin", "https://example.com")
+ resp, err := http.DefaultClient.Do(req)
+ if err != nil {
+ return err
+ }
+ if hdr := resp.Header.Get("Access-Control-Allow-Origin"); hdr != "*" {
+ return fmt.Errorf("expected \"Access-Control-Allow-Origin: *\", got %q", hdr)
+ }
+ return nil
+ })
+ }
+
+ var keeplist arvados.KeepServiceList
+ diag.dotest(60, "checking internal/external client detection", func() error {
+ ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(diag.timeout))
+ defer cancel()
+ err := client.RequestAndDecodeContext(ctx, &keeplist, "GET", "arvados/v1/keep_services/accessible", nil, arvados.ListOptions{Limit: 999999})
+ if err != nil {
+ return fmt.Errorf("error getting keep services list: %s", err)
+ } else if len(keeplist.Items) == 0 {
+ return fmt.Errorf("controller did not return any keep services")
+ }
+ found := map[string]int{}
+ for _, ks := range keeplist.Items {
+ found[ks.ServiceType]++
+ }
+ isInternal := found["proxy"] == 0 && len(keeplist.Items) > 0
+ isExternal := found["proxy"] > 0 && found["proxy"] == len(keeplist.Items)
+ if isExternal {
+ diag.debugf("controller returned only proxy services, this host is treated as \"external\"")
+ } else if isInternal {
+ diag.debugf("controller returned only non-proxy services, this host is treated as \"internal\"")
+ }
+ if (diag.checkInternal && !isInternal) || (diag.checkExternal && !isExternal) {
+ return fmt.Errorf("expecting internal=%v external=%v, but found internal=%v external=%v", diag.checkInternal, diag.checkExternal, isInternal, isExternal)
+ }
+ return nil
+ })
+
+ for i, ks := range keeplist.Items {
+ u := url.URL{
+ Scheme: "http",
+ Host: net.JoinHostPort(ks.ServiceHost, fmt.Sprintf("%d", ks.ServicePort)),
+ Path: "/",
+ }
+ if ks.ServiceSSLFlag {
+ u.Scheme = "https"
+ }
+ diag.dotest(61+i, fmt.Sprintf("reading+writing via keep service at %s", u.String()), func() error {
+ ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(diag.timeout))
+ defer cancel()
+ req, err := http.NewRequestWithContext(ctx, "PUT", u.String()+"d41d8cd98f00b204e9800998ecf8427e", nil)
+ if err != nil {
+ return err
+ }
+ req.Header.Set("Authorization", "Bearer "+client.AuthToken)
+ resp, err := http.DefaultClient.Do(req)
+ if err != nil {
+ return err
+ }
+ defer resp.Body.Close()
+ body, err := ioutil.ReadAll(resp.Body)
+ if err != nil {
+ return fmt.Errorf("reading response body: %s", err)
+ }
+ loc := strings.TrimSpace(string(body))
+ if !strings.HasPrefix(loc, "d41d8") {
+ return fmt.Errorf("unexpected response from write: %q", body)
+ }
+
+ req, err = http.NewRequestWithContext(ctx, "GET", u.String()+loc, nil)
+ if err != nil {
+ return err
+ }
+ req.Header.Set("Authorization", "Bearer "+client.AuthToken)
+ resp, err = http.DefaultClient.Do(req)
+ if err != nil {
+ return err
+ }
+ defer resp.Body.Close()
+ body, err = ioutil.ReadAll(resp.Body)
+ if err != nil {
+ return fmt.Errorf("reading response body: %s", err)
+ }
+ if len(body) != 0 {
+ return fmt.Errorf("unexpected response from read: %q", body)
+ }
+
+ return nil
+ })
+ }
+
+ var project arvados.Group
+ diag.dotest(80, fmt.Sprintf("finding/creating %q project", diag.projectName), func() error {
+ ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(diag.timeout))
+ defer cancel()
+ var grplist arvados.GroupList
+ err := client.RequestAndDecodeContext(ctx, &grplist, "GET", "arvados/v1/groups", nil, arvados.ListOptions{
+ Filters: []arvados.Filter{
+ {"name", "=", diag.projectName},
+ {"group_class", "=", "project"},
+ {"owner_uuid", "=", user.UUID}},
+ Limit: 999999})
+ if err != nil {
+ return fmt.Errorf("list groups: %s", err)
+ }
+ if len(grplist.Items) > 0 {
+ project = grplist.Items[0]
+ diag.debugf("using existing project, uuid = %s", project.UUID)
+ return nil
+ }
+ diag.debugf("list groups: ok, no results")
+ err = client.RequestAndDecodeContext(ctx, &project, "POST", "arvados/v1/groups", nil, map[string]interface{}{"group": map[string]interface{}{
+ "name": diag.projectName,
+ "group_class": "project",
+ }})
+ if err != nil {
+ return fmt.Errorf("create project: %s", err)
+ }
+ diag.debugf("created project, uuid = %s", project.UUID)
+ return nil
+ })
+
+ var collection arvados.Collection
+ diag.dotest(90, "creating temporary collection", func() error {
+ if project.UUID == "" {
+ return fmt.Errorf("skipping, no project to work in")
+ }
+ ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(diag.timeout))
+ defer cancel()
+ err := client.RequestAndDecodeContext(ctx, &collection, "POST", "arvados/v1/collections", nil, map[string]interface{}{
+ "ensure_unique_name": true,
+ "collection": map[string]interface{}{
+ "owner_uuid": project.UUID,
+ "name": "test collection",
+ "trash_at": time.Now().Add(time.Hour)}})
+ if err != nil {
+ return err
+ }
+ diag.debugf("ok, uuid = %s", collection.UUID)
+ return nil
+ })
+
+ if collection.UUID != "" {
+ defer func() {
+ diag.dotest(9990, "deleting temporary collection", func() error {
+ ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(diag.timeout))
+ defer cancel()
+ return client.RequestAndDecodeContext(ctx, nil, "DELETE", "arvados/v1/collections/"+collection.UUID, nil, nil)
+ })
+ }()
+ }
+
+ diag.dotest(100, "uploading file via webdav", func() error {
+ ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(diag.timeout))
+ defer cancel()
+ if collection.UUID == "" {
+ return fmt.Errorf("skipping, no test collection")
+ }
+ req, err := http.NewRequestWithContext(ctx, "PUT", cluster.Services.WebDAVDownload.ExternalURL.String()+"c="+collection.UUID+"/testfile", bytes.NewBufferString("testfiledata"))
+ if err != nil {
+ return fmt.Errorf("BUG? http.NewRequest: %s", err)
+ }
+ req.Header.Set("Authorization", "Bearer "+client.AuthToken)
+ resp, err := http.DefaultClient.Do(req)
+ if err != nil {
+ return fmt.Errorf("error performing http request: %s", err)
+ }
+ resp.Body.Close()
+ if resp.StatusCode != http.StatusCreated {
+ return fmt.Errorf("status %s", resp.Status)
+ }
+ diag.debugf("ok, status %s", resp.Status)
+ err = client.RequestAndDecodeContext(ctx, &collection, "GET", "arvados/v1/collections/"+collection.UUID, nil, nil)
+ if err != nil {
+ return fmt.Errorf("get updated collection: %s", err)
+ }
+ diag.debugf("ok, pdh %s", collection.PortableDataHash)
+ return nil
+ })
+
+ davurl := cluster.Services.WebDAV.ExternalURL
+ diag.dotest(110, fmt.Sprintf("checking WebDAV ExternalURL wildcard (%s)", davurl), func() error {
+ if davurl.Host == "" {
+ return fmt.Errorf("host missing - content previews will not work")
+ }
+ if !strings.HasPrefix(davurl.Host, "*--") && !strings.HasPrefix(davurl.Host, "*.") && !cluster.Collections.TrustAllContent {
+ diag.warnf("WebDAV ExternalURL has no leading wildcard and TrustAllContent==false - content previews will not work")
+ }
+ return nil
+ })
+
+ for i, trial := range []struct {
+ needcoll bool
+ status int
+ fileurl string
+ }{
+ {false, http.StatusNotFound, strings.Replace(davurl.String(), "*", "d41d8cd98f00b204e9800998ecf8427e-0", 1) + "foo"},
+ {false, http.StatusNotFound, strings.Replace(davurl.String(), "*", "d41d8cd98f00b204e9800998ecf8427e-0", 1) + "testfile"},
+ {false, http.StatusNotFound, cluster.Services.WebDAVDownload.ExternalURL.String() + "c=d41d8cd98f00b204e9800998ecf8427e+0/_/foo"},
+ {false, http.StatusNotFound, cluster.Services.WebDAVDownload.ExternalURL.String() + "c=d41d8cd98f00b204e9800998ecf8427e+0/_/testfile"},
+ {true, http.StatusOK, strings.Replace(davurl.String(), "*", strings.Replace(collection.PortableDataHash, "+", "-", -1), 1) + "testfile"},
+ {true, http.StatusOK, cluster.Services.WebDAVDownload.ExternalURL.String() + "c=" + collection.UUID + "/_/testfile"},
+ } {
+ diag.dotest(120+i, fmt.Sprintf("downloading from webdav (%s)", trial.fileurl), func() error {
+ ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(diag.timeout))
+ defer cancel()
+ if trial.needcoll && collection.UUID == "" {
+ return fmt.Errorf("skipping, no test collection")
+ }
+ req, err := http.NewRequestWithContext(ctx, "GET", trial.fileurl, nil)
+ if err != nil {
+ return err
+ }
+ req.Header.Set("Authorization", "Bearer "+client.AuthToken)
+ resp, err := http.DefaultClient.Do(req)
+ if err != nil {
+ return err
+ }
+ defer resp.Body.Close()
+ body, err := ioutil.ReadAll(resp.Body)
+ if err != nil {
+ return fmt.Errorf("reading response: %s", err)
+ }
+ if resp.StatusCode != trial.status {
+ return fmt.Errorf("unexpected response status: %s", resp.Status)
+ }
+ if trial.status == http.StatusOK && string(body) != "testfiledata" {
+ return fmt.Errorf("unexpected response content: %q", body)
+ }
+ return nil
+ })
+ }
+
+ var vm arvados.VirtualMachine
+ diag.dotest(130, "getting list of virtual machines", func() error {
+ ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(diag.timeout))
+ defer cancel()
+ var vmlist arvados.VirtualMachineList
+ err := client.RequestAndDecodeContext(ctx, &vmlist, "GET", "arvados/v1/virtual_machines", nil, arvados.ListOptions{Limit: 999999})
+ if err != nil {
+ return err
+ }
+ if len(vmlist.Items) < 1 {
+ return fmt.Errorf("no VMs found")
+ }
+ vm = vmlist.Items[0]
+ return nil
+ })
+
+ diag.dotest(140, "getting workbench1 webshell page", func() error {
+ ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(diag.timeout))
+ defer cancel()
+ if vm.UUID == "" {
+ return fmt.Errorf("skipping, no vm available")
+ }
+ webshelltermurl := cluster.Services.Workbench1.ExternalURL.String() + "virtual_machines/" + vm.UUID + "/webshell/testusername"
+ diag.debugf("url %s", webshelltermurl)
+ req, err := http.NewRequestWithContext(ctx, "GET", webshelltermurl, nil)
+ if err != nil {
+ return err
+ }
+ req.Header.Set("Authorization", "Bearer "+client.AuthToken)
+ resp, err := http.DefaultClient.Do(req)
+ if err != nil {
+ return err
+ }
+ defer resp.Body.Close()
+ body, err := ioutil.ReadAll(resp.Body)
+ if err != nil {
+ return fmt.Errorf("reading response: %s", err)
+ }
+ if resp.StatusCode != http.StatusOK {
+ return fmt.Errorf("unexpected response status: %s %q", resp.Status, body)
+ }
+ return nil
+ })
+
+ diag.dotest(150, "connecting to webshell service", func() error {
+ ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(diag.timeout))
+ defer cancel()
+ if vm.UUID == "" {
+ return fmt.Errorf("skipping, no vm available")
+ }
+ u := cluster.Services.WebShell.ExternalURL
+ webshellurl := u.String() + vm.Hostname + "?"
+ if strings.HasPrefix(u.Host, "*") {
+ u.Host = vm.Hostname + u.Host[1:]
+ webshellurl = u.String() + "?"
+ }
+ diag.debugf("url %s", webshellurl)
+ req, err := http.NewRequestWithContext(ctx, "POST", webshellurl, bytes.NewBufferString(url.Values{
+ "width": {"80"},
+ "height": {"25"},
+ "session": {"xyzzy"},
+ "rooturl": {webshellurl},
+ }.Encode()))
+ if err != nil {
+ return err
+ }
+ req.Header.Set("Content-Type", "application/x-www-form-urlencoded; charset=UTF-8")
+ resp, err := http.DefaultClient.Do(req)
+ if err != nil {
+ return err
+ }
+ defer resp.Body.Close()
+ diag.debugf("response status %s", resp.Status)
+ body, err := ioutil.ReadAll(resp.Body)
+ if err != nil {
+ return fmt.Errorf("reading response: %s", err)
+ }
+ diag.debugf("response body %q", body)
+ // We don't speak the protocol, so we get a 400 error
+ // from the webshell server even if everything is
+ // OK. Anything else (404, 502, ???) indicates a
+ // problem.
+ if resp.StatusCode != http.StatusBadRequest {
+ return fmt.Errorf("unexpected response status: %s, %q", resp.Status, body)
+ }
+ return nil
+ })
+
+ diag.dotest(160, "running a container", func() error {
+ if diag.priority < 1 {
+ diag.infof("skipping (use priority > 0 if you want to run a container)")
+ return nil
+ }
+ if project.UUID == "" {
+ return fmt.Errorf("skipping, no project to work in")
+ }
+
+ var cr arvados.ContainerRequest
+ ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(diag.timeout))
+ defer cancel()
+
+ timestamp := time.Now().Format(time.RFC3339)
+ err := client.RequestAndDecodeContext(ctx, &cr, "POST", "arvados/v1/container_requests", nil, map[string]interface{}{"container_request": map[string]interface{}{
+ "owner_uuid": project.UUID,
+ "name": fmt.Sprintf("diagnostics container request %s", timestamp),
+ "container_image": "arvados/jobs",
+ "command": []string{"echo", timestamp},
+ "use_existing": false,
+ "output_path": "/mnt/output",
+ "output_name": fmt.Sprintf("diagnostics output %s", timestamp),
+ "priority": diag.priority,
+ "state": arvados.ContainerRequestStateCommitted,
+ "mounts": map[string]map[string]interface{}{
+ "/mnt/output": {
+ "kind": "collection",
+ "writable": true,
+ },
+ },
+ "runtime_constraints": arvados.RuntimeConstraints{
+ VCPUs: 1,
+ RAM: 1 << 26,
+ KeepCacheRAM: 1 << 26,
+ },
+ }})
+ if err != nil {
+ return err
+ }
+ diag.debugf("container request uuid = %s", cr.UUID)
+ diag.debugf("container uuid = %s", cr.ContainerUUID)
+
+ timeout := 10 * time.Minute
+ diag.infof("container request submitted, waiting up to %v for container to run", arvados.Duration(timeout))
+ ctx, cancel = context.WithDeadline(context.Background(), time.Now().Add(timeout))
+ defer cancel()
+
+ var c arvados.Container
+ for ; cr.State != arvados.ContainerRequestStateFinal; time.Sleep(2 * time.Second) {
+ ctx, cancel := context.WithDeadline(ctx, time.Now().Add(diag.timeout))
+ defer cancel()
+
+ crStateWas := cr.State
+ err := client.RequestAndDecodeContext(ctx, &cr, "GET", "arvados/v1/container_requests/"+cr.UUID, nil, nil)
+ if err != nil {
+ return err
+ }
+ if cr.State != crStateWas {
+ diag.debugf("container request state = %s", cr.State)
+ }
+
+ cStateWas := c.State
+ err = client.RequestAndDecodeContext(ctx, &c, "GET", "arvados/v1/containers/"+cr.ContainerUUID, nil, nil)
+ if err != nil {
+ return err
+ }
+ if c.State != cStateWas {
+ diag.debugf("container state = %s", c.State)
+ }
+ }
+
+ if c.State != arvados.ContainerStateComplete {
+ return fmt.Errorf("container request %s is final but container %s did not complete: container state = %q", cr.UUID, cr.ContainerUUID, c.State)
+ } else if c.ExitCode != 0 {
+ return fmt.Errorf("container exited %d", c.ExitCode)
+ }
+ return nil
+ })
+}
// populated.
Container arvados.Container `json:"container"`
InstanceType arvados.InstanceType `json:"instance_type"`
+ FirstSeenAt time.Time `json:"first_seen_at"`
}
// String implements fmt.Stringer by returning the queued container's
delete(cq.current, uuid)
}
+// Caller must have lock.
func (cq *Queue) addEnt(uuid string, ctr arvados.Container) {
it, err := cq.chooseType(&ctr)
if err != nil && (ctr.State == arvados.ContainerStateQueued || ctr.State == arvados.ContainerStateLocked) {
"Priority": ctr.Priority,
"InstanceType": it.Name,
}).Info("adding container to queue")
- cq.current[uuid] = QueueEnt{Container: ctr, InstanceType: it}
+ cq.current[uuid] = QueueEnt{Container: ctr, InstanceType: it, FirstSeenAt: time.Now()}
}
// Lock acquires the dispatch lock for the given container.
CrunchRunArgumentsList: []string{"--foo", "--extra='args'"},
DispatchPrivateKey: string(dispatchprivraw),
StaleLockTimeout: arvados.Duration(5 * time.Millisecond),
+ RuntimeEngine: "stub",
CloudVMs: arvados.CloudVMsConfig{
Driver: "test",
SyncInterval: arvados.Duration(10 * time.Millisecond),
stubvm.CrunchRunDetachDelay = time.Duration(rand.Int63n(int64(10 * time.Millisecond)))
stubvm.ExecuteContainer = executeContainer
stubvm.CrashRunningContainer = finishContainer
- stubvm.ExtraCrunchRunArgs = "'--foo' '--extra='\\''args'\\'''"
+ stubvm.ExtraCrunchRunArgs = "'--runtime-engine=stub' '--foo' '--extra='\\''args'\\'''"
switch n % 7 {
case 0:
stubvm.Broken = time.Now().Add(time.Duration(rand.Int63n(90)) * time.Millisecond)
ticker *time.Ticker
}
+func (is rateLimitedInstanceSet) Instances(tags cloud.InstanceTags) ([]cloud.Instance, error) {
+ <-is.ticker.C
+ insts, err := is.InstanceSet.Instances(tags)
+ for i, inst := range insts {
+ insts[i] = &rateLimitedInstance{inst, is.ticker}
+ }
+ return insts, err
+}
+
func (is rateLimitedInstanceSet) Create(it arvados.InstanceType, image cloud.ImageID, tags cloud.InstanceTags, init cloud.InitCommand, pk ssh.PublicKey) (cloud.Instance, error) {
<-is.ticker.C
inst, err := is.InstanceSet.Create(it, image, tags, init, pk)
return inst.Instance.Destroy()
}
+func (inst *rateLimitedInstance) SetTags(tags cloud.InstanceTags) error {
+ <-inst.ticker.C
+ return inst.Instance.SetTags(tags)
+}
+
// Adds the specified defaultTags to every Create() call.
type defaultTaggingInstanceSet struct {
cloud.InstanceSet
needRAM := ctr.RuntimeConstraints.RAM + ctr.RuntimeConstraints.KeepCacheRAM
needRAM += int64(cc.Containers.ReserveExtraRAM)
+ needRAM += int64(cc.Containers.LocalKeepBlobBuffersPerVCPU * needVCPUs * (1 << 26))
needRAM = (needRAM * 100) / int64(100-discountConfiguredRAMPercent)
ok := false
sorted = append(sorted, ent)
}
sort.Slice(sorted, func(i, j int) bool {
- return sorted[i].Container.Priority > sorted[j].Container.Priority
+ if pi, pj := sorted[i].Container.Priority, sorted[j].Container.Priority; pi != pj {
+ return pi > pj
+ } else {
+ // When containers have identical priority,
+ // start them in the order we first noticed
+ // them. This avoids extra lock/unlock cycles
+ // when we unlock the containers that don't
+ // fit in the available pool.
+ return sorted[i].FirstSeenAt.Before(sorted[j].FirstSeenAt)
+ }
})
running := sch.pool.Running()
// starve this one by keeping
// idle workers alive on different
// instance types.
- logger.Debug("unlocking: AtQuota and no unalloc workers")
- sch.queue.Unlock(ctr.UUID)
+ logger.Trace("overquota")
overquota = sorted[i:]
break tryrun
- } else if logger.Info("creating new instance"); sch.pool.Create(it) {
+ } else if sch.pool.Create(it) {
// Success. (Note pool.Create works
// asynchronously and does its own
- // logging, so we don't need to.)
+ // logging about the eventual outcome,
+ // so we don't need to.)
+ logger.Info("creating new instance")
} else {
// Failed despite not being at quota,
// e.g., cloud ops throttled. TODO:
// avoid getting starved here if
// instances of a specific type always
// fail.
+ logger.Trace("pool declined to create new instance")
continue
}
starts: []string{},
canCreate: 0,
}
- New(ctx, &queue, &pool, nil, time.Millisecond, time.Millisecond).runQueue()
+ sch := New(ctx, &queue, &pool, nil, time.Millisecond, time.Millisecond)
+ sch.runQueue()
+ sch.sync()
+ sch.runQueue()
+ sch.sync()
c.Check(pool.creates, check.DeepEquals, shouldCreate)
if len(shouldCreate) == 0 {
c.Check(pool.starts, check.DeepEquals, []string{})
- c.Check(pool.shutdowns, check.Not(check.Equals), 0)
} else {
c.Check(pool.starts, check.DeepEquals, []string{test.ContainerUUID(2)})
- c.Check(pool.shutdowns, check.Equals, 0)
}
+ c.Check(pool.shutdowns, check.Equals, 3-quota)
+ c.Check(queue.StateChanges(), check.DeepEquals, []test.QueueStateChange{
+ {UUID: "zzzzz-dz642-000000000000003", From: "Locked", To: "Queued"},
+ {UUID: "zzzzz-dz642-000000000000002", From: "Locked", To: "Queued"},
+ })
+ }
+}
+
+// Don't flap lock/unlock when equal-priority containers compete for
+// limited workers.
+//
+// (Unless we use FirstSeenAt as a secondary sort key, each runQueue()
+// tends to choose a different one of the equal-priority containers as
+// the "first" one that should be locked, and unlock the one it chose
+// last time. This generates logging noise, and fails containers by
+// reaching MaxDispatchAttempts quickly.)
+func (*SchedulerSuite) TestEqualPriorityContainers(c *check.C) {
+ logger := ctxlog.TestLogger(c)
+ ctx := ctxlog.Context(context.Background(), logger)
+ queue := test.Queue{
+ ChooseType: chooseType,
+ Logger: logger,
+ }
+ for i := 0; i < 8; i++ {
+ queue.Containers = append(queue.Containers, arvados.Container{
+ UUID: test.ContainerUUID(i),
+ Priority: 333,
+ State: arvados.ContainerStateQueued,
+ RuntimeConstraints: arvados.RuntimeConstraints{
+ VCPUs: 3,
+ RAM: 3 << 30,
+ },
+ })
+ }
+ queue.Update()
+ pool := stubPool{
+ quota: 2,
+ unalloc: map[arvados.InstanceType]int{
+ test.InstanceType(3): 1,
+ },
+ idle: map[arvados.InstanceType]int{
+ test.InstanceType(3): 1,
+ },
+ running: map[string]time.Time{},
+ creates: []arvados.InstanceType{},
+ starts: []string{},
+ canCreate: 1,
+ }
+ sch := New(ctx, &queue, &pool, nil, time.Millisecond, time.Millisecond)
+ for i := 0; i < 30; i++ {
+ sch.runQueue()
+ sch.sync()
+ time.Sleep(time.Millisecond)
+ }
+ c.Check(pool.shutdowns, check.Equals, 0)
+ c.Check(pool.starts, check.HasLen, 1)
+ unlocked := map[string]int{}
+ for _, chg := range queue.StateChanges() {
+ if chg.To == arvados.ContainerStateQueued {
+ unlocked[chg.UUID]++
+ }
+ }
+ for uuid, count := range unlocked {
+ c.Check(count, check.Equals, 1, check.Commentf("%s", uuid))
}
}
"github.com/sirupsen/logrus"
)
+var reportedUnexpectedState = false
+
// sync resolves discrepancies between the queue and the pool:
//
// Lingering crunch-run processes for finalized and unlocked/requeued
// a network outage and is still
// preparing to run a container that
// has already been unlocked/requeued.
- go sch.kill(uuid, fmt.Sprintf("state=%s", ent.Container.State))
+ go sch.kill(uuid, fmt.Sprintf("pool says running, but queue says state=%s", ent.Container.State))
} else if ent.Container.Priority == 0 {
sch.logger.WithFields(logrus.Fields{
"ContainerUUID": uuid,
go sch.requeue(ent, "priority=0")
}
default:
- sch.logger.WithFields(logrus.Fields{
- "ContainerUUID": uuid,
- "State": ent.Container.State,
- }).Error("BUG: unexpected state")
+ if !reportedUnexpectedState {
+ sch.logger.WithFields(logrus.Fields{
+ "ContainerUUID": uuid,
+ "State": ent.Container.State,
+ }).Error("BUG: unexpected state")
+ reportedUnexpectedState = true
+ }
}
}
for uuid := range running {
return
}
defer sch.uuidUnlock(uuid)
+ sch.logger.WithFields(logrus.Fields{
+ "ContainerUUID": uuid,
+ "reason": reason,
+ }).Debug("kill")
sch.pool.KillContainer(uuid, reason)
sch.pool.ForgetContainer(uuid)
}
Logger logrus.FieldLogger
- entries map[string]container.QueueEnt
- updTime time.Time
- subscribers map[<-chan struct{}]chan struct{}
+ entries map[string]container.QueueEnt
+ updTime time.Time
+ subscribers map[<-chan struct{}]chan struct{}
+ stateChanges []QueueStateChange
mtx sync.Mutex
}
+type QueueStateChange struct {
+ UUID string
+ From arvados.ContainerState
+ To arvados.ContainerState
+}
+
+// StateChanges returns all calls to Lock/Unlock/Cancel to date.
+func (q *Queue) StateChanges() []QueueStateChange {
+ q.mtx.Lock()
+ defer q.mtx.Unlock()
+ return q.stateChanges
+}
+
// Entries returns the containers that were queued when Update was
// last called.
func (q *Queue) Entries() (map[string]container.QueueEnt, time.Time) {
// caller must have lock.
func (q *Queue) changeState(uuid string, from, to arvados.ContainerState) error {
ent := q.entries[uuid]
+ q.stateChanges = append(q.stateChanges, QueueStateChange{uuid, from, to})
if ent.Container.State != from {
return fmt.Errorf("changeState failed: state=%q", ent.Container.State)
}
upd[ctr.UUID] = container.QueueEnt{
Container: ctr,
InstanceType: it,
+ FirstSeenAt: time.Now(),
}
}
}
"time"
"git.arvados.org/arvados.git/lib/cloud"
+ "git.arvados.org/arvados.git/lib/crunchrun"
"git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/sirupsen/logrus"
"golang.org/x/crypto/ssh"
ArvMountDeadlockRate float64
ExecuteContainer func(arvados.Container) int
CrashRunningContainer func(arvados.Container)
- ExtraCrunchRunArgs string // extra args expected after "crunch-run --detach --stdin-env "
+ ExtraCrunchRunArgs string // extra args expected after "crunch-run --detach --stdin-config "
sis *StubInstanceSet
id cloud.InstanceID
fmt.Fprint(stderr, "crunch-run: command not found\n")
return 1
}
- if strings.HasPrefix(command, "crunch-run --detach --stdin-env "+svm.ExtraCrunchRunArgs) {
- var stdinKV map[string]string
- err := json.Unmarshal(stdinData, &stdinKV)
+ if strings.HasPrefix(command, "crunch-run --detach --stdin-config "+svm.ExtraCrunchRunArgs) {
+ var configData crunchrun.ConfigData
+ err := json.Unmarshal(stdinData, &configData)
if err != nil {
fmt.Fprintf(stderr, "unmarshal stdin: %s (stdin was: %q)\n", err, stdinData)
return 1
}
for _, name := range []string{"ARVADOS_API_HOST", "ARVADOS_API_TOKEN"} {
- if stdinKV[name] == "" {
+ if configData.Env[name] == "" {
fmt.Fprintf(stderr, "%s env var missing from stdin %q\n", name, stdinData)
return 1
}
instanceSetID: instanceSetID,
instanceSet: &throttledInstanceSet{InstanceSet: instanceSet},
newExecutor: newExecutor,
+ cluster: cluster,
bootProbeCommand: cluster.Containers.CloudVMs.BootProbeCommand,
runnerSource: cluster.Containers.CloudVMs.DeployRunnerBinary,
imageID: cloud.ImageID(cluster.Containers.CloudVMs.ImageID),
installPublicKey: installPublicKey,
tagKeyPrefix: cluster.Containers.CloudVMs.TagKeyPrefix,
runnerCmdDefault: cluster.Containers.CrunchRunCommand,
- runnerArgs: cluster.Containers.CrunchRunArgumentsList,
+ runnerArgs: append([]string{"--runtime-engine=" + cluster.Containers.RuntimeEngine}, cluster.Containers.CrunchRunArgumentsList...),
stop: make(chan bool),
}
wp.registerMetrics(reg)
instanceSetID cloud.InstanceSetID
instanceSet *throttledInstanceSet
newExecutor func(cloud.Instance) Executor
+ cluster *arvados.Cluster
bootProbeCommand string
runnerSource string
imageID cloud.ImageID
"time"
"git.arvados.org/arvados.git/lib/cloud"
+ "git.arvados.org/arvados.git/lib/config"
"git.arvados.org/arvados.git/lib/dispatchcloud/test"
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/prometheus/client_golang/prometheus"
+ "github.com/sirupsen/logrus"
check "gopkg.in/check.v1"
)
var less = &lessChecker{&check.CheckerInfo{Name: "less", Params: []string{"obtained", "expected"}}}
-type PoolSuite struct{}
+type PoolSuite struct {
+ logger logrus.FieldLogger
+ testCluster *arvados.Cluster
+}
+
+func (suite *PoolSuite) SetUpTest(c *check.C) {
+ suite.logger = ctxlog.TestLogger(c)
+ cfg, err := config.NewLoader(nil, suite.logger).Load()
+ c.Assert(err, check.IsNil)
+ suite.testCluster, err = cfg.GetCluster("")
+ c.Assert(err, check.IsNil)
+}
func (suite *PoolSuite) TestResumeAfterRestart(c *check.C) {
type1 := test.InstanceType(1)
}
}
- logger := ctxlog.TestLogger(c)
driver := &test.StubDriver{}
instanceSetID := cloud.InstanceSetID("test-instance-set-id")
- is, err := driver.InstanceSet(nil, instanceSetID, nil, logger)
+ is, err := driver.InstanceSet(nil, instanceSetID, nil, suite.logger)
c.Assert(err, check.IsNil)
newExecutor := func(cloud.Instance) Executor {
}
}
- cluster := &arvados.Cluster{
- Containers: arvados.ContainersConfig{
- CloudVMs: arvados.CloudVMsConfig{
- BootProbeCommand: "true",
- MaxProbesPerSecond: 1000,
- ProbeInterval: arvados.Duration(time.Millisecond * 10),
- SyncInterval: arvados.Duration(time.Millisecond * 10),
- TagKeyPrefix: "testprefix:",
- },
- CrunchRunCommand: "crunch-run-custom",
- },
- InstanceTypes: arvados.InstanceTypeMap{
- type1.Name: type1,
- type2.Name: type2,
- type3.Name: type3,
- },
+ suite.testCluster.Containers.CloudVMs = arvados.CloudVMsConfig{
+ BootProbeCommand: "true",
+ MaxProbesPerSecond: 1000,
+ ProbeInterval: arvados.Duration(time.Millisecond * 10),
+ SyncInterval: arvados.Duration(time.Millisecond * 10),
+ TagKeyPrefix: "testprefix:",
+ }
+ suite.testCluster.Containers.CrunchRunCommand = "crunch-run-custom"
+ suite.testCluster.InstanceTypes = arvados.InstanceTypeMap{
+ type1.Name: type1,
+ type2.Name: type2,
+ type3.Name: type3,
}
- pool := NewPool(logger, arvados.NewClientFromEnv(), prometheus.NewRegistry(), instanceSetID, is, newExecutor, nil, cluster)
+ pool := NewPool(suite.logger, arvados.NewClientFromEnv(), prometheus.NewRegistry(), instanceSetID, is, newExecutor, nil, suite.testCluster)
notify := pool.Subscribe()
defer pool.Unsubscribe(notify)
pool.Create(type1)
}
}
// Wait for the tags to save to the cloud provider
- tagKey := cluster.Containers.CloudVMs.TagKeyPrefix + tagKeyIdleBehavior
+ tagKey := suite.testCluster.Containers.CloudVMs.TagKeyPrefix + tagKeyIdleBehavior
deadline := time.Now().Add(time.Second)
for !func() bool {
pool.mtx.RLock()
c.Log("------- starting new pool, waiting to recover state")
- pool2 := NewPool(logger, arvados.NewClientFromEnv(), prometheus.NewRegistry(), instanceSetID, is, newExecutor, nil, cluster)
+ pool2 := NewPool(suite.logger, arvados.NewClientFromEnv(), prometheus.NewRegistry(), instanceSetID, is, newExecutor, nil, suite.testCluster)
notify2 := pool2.Subscribe()
defer pool2.Unsubscribe(notify2)
waitForIdle(pool2, notify2)
}
func (suite *PoolSuite) TestDrain(c *check.C) {
- logger := ctxlog.TestLogger(c)
driver := test.StubDriver{}
- instanceSet, err := driver.InstanceSet(nil, "test-instance-set-id", nil, logger)
+ instanceSet, err := driver.InstanceSet(nil, "test-instance-set-id", nil, suite.logger)
c.Assert(err, check.IsNil)
ac := arvados.NewClientFromEnv()
type1 := test.InstanceType(1)
pool := &Pool{
arvClient: ac,
- logger: logger,
+ logger: suite.logger,
newExecutor: func(cloud.Instance) Executor { return &stubExecutor{} },
+ cluster: suite.testCluster,
instanceSet: &throttledInstanceSet{InstanceSet: instanceSet},
instanceTypes: arvados.InstanceTypeMap{
type1.Name: type1,
}
func (suite *PoolSuite) TestNodeCreateThrottle(c *check.C) {
- logger := ctxlog.TestLogger(c)
driver := test.StubDriver{HoldCloudOps: true}
- instanceSet, err := driver.InstanceSet(nil, "test-instance-set-id", nil, logger)
+ instanceSet, err := driver.InstanceSet(nil, "test-instance-set-id", nil, suite.logger)
c.Assert(err, check.IsNil)
type1 := test.InstanceType(1)
pool := &Pool{
- logger: logger,
+ logger: suite.logger,
instanceSet: &throttledInstanceSet{InstanceSet: instanceSet},
+ cluster: suite.testCluster,
maxConcurrentInstanceCreateOps: 1,
instanceTypes: arvados.InstanceTypeMap{
type1.Name: type1,
}
func (suite *PoolSuite) TestCreateUnallocShutdown(c *check.C) {
- logger := ctxlog.TestLogger(c)
driver := test.StubDriver{HoldCloudOps: true}
- instanceSet, err := driver.InstanceSet(nil, "test-instance-set-id", nil, logger)
+ instanceSet, err := driver.InstanceSet(nil, "test-instance-set-id", nil, suite.logger)
c.Assert(err, check.IsNil)
type1 := arvados.InstanceType{Name: "a1s", ProviderType: "a1.small", VCPUs: 1, RAM: 1 * GiB, Price: .01}
type2 := arvados.InstanceType{Name: "a2m", ProviderType: "a2.medium", VCPUs: 2, RAM: 2 * GiB, Price: .02}
type3 := arvados.InstanceType{Name: "a2l", ProviderType: "a2.large", VCPUs: 4, RAM: 4 * GiB, Price: .04}
pool := &Pool{
- logger: logger,
+ logger: suite.logger,
newExecutor: func(cloud.Instance) Executor { return &stubExecutor{} },
+ cluster: suite.testCluster,
instanceSet: &throttledInstanceSet{InstanceSet: instanceSet},
instanceTypes: arvados.InstanceTypeMap{
type1.Name: type1,
"syscall"
"time"
+ "git.arvados.org/arvados.git/lib/crunchrun"
"github.com/sirupsen/logrus"
)
type remoteRunner struct {
uuid string
executor Executor
- envJSON json.RawMessage
+ configJSON json.RawMessage
runnerCmd string
runnerArgs []string
remoteUser string
if err := enc.Encode(wkr.instType); err != nil {
panic(err)
}
- env := map[string]string{
+ var configData crunchrun.ConfigData
+ configData.Env = map[string]string{
"ARVADOS_API_HOST": wkr.wp.arvClient.APIHost,
"ARVADOS_API_TOKEN": wkr.wp.arvClient.AuthToken,
"InstanceType": instJSON.String(),
"GatewayAuthSecret": wkr.wp.gatewayAuthSecret(uuid),
}
if wkr.wp.arvClient.Insecure {
- env["ARVADOS_API_HOST_INSECURE"] = "1"
+ configData.Env["ARVADOS_API_HOST_INSECURE"] = "1"
}
- envJSON, err := json.Marshal(env)
+ if bufs := wkr.wp.cluster.Containers.LocalKeepBlobBuffersPerVCPU; bufs > 0 {
+ configData.Cluster = wkr.wp.cluster
+ configData.KeepBuffers = bufs * wkr.instType.VCPUs
+ }
+ configJSON, err := json.Marshal(configData)
if err != nil {
panic(err)
}
rr := &remoteRunner{
uuid: uuid,
executor: wkr.executor,
- envJSON: envJSON,
+ configJSON: configJSON,
runnerCmd: wkr.wp.runnerCmd,
runnerArgs: wkr.wp.runnerArgs,
remoteUser: wkr.instance.RemoteUser(),
// assume the remote process _might_ have started, at least until it
// probes the worker and finds otherwise.
func (rr *remoteRunner) Start() {
- cmd := rr.runnerCmd + " --detach --stdin-env"
+ cmd := rr.runnerCmd + " --detach --stdin-config"
for _, arg := range rr.runnerArgs {
cmd += " '" + strings.Replace(arg, "'", "'\\''", -1) + "'"
}
if rr.remoteUser != "root" {
cmd = "sudo " + cmd
}
- stdin := bytes.NewBuffer(rr.envJSON)
+ stdin := bytes.NewBuffer(rr.configJSON)
stdout, stderr, err := rr.executor.Execute(nil, cmd, stdin)
if err != nil {
rr.logger.WithField("stdout", string(stdout)).
"time"
"git.arvados.org/arvados.git/lib/cloud"
+ "git.arvados.org/arvados.git/lib/config"
"git.arvados.org/arvados.git/lib/dispatchcloud/test"
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/prometheus/client_golang/prometheus"
+ "github.com/sirupsen/logrus"
check "gopkg.in/check.v1"
)
var _ = check.Suite(&WorkerSuite{})
-type WorkerSuite struct{}
+type WorkerSuite struct {
+ logger logrus.FieldLogger
+ testCluster *arvados.Cluster
+}
+
+func (suite *WorkerSuite) SetUpTest(c *check.C) {
+ suite.logger = ctxlog.TestLogger(c)
+ cfg, err := config.NewLoader(nil, suite.logger).Load()
+ c.Assert(err, check.IsNil)
+ suite.testCluster, err = cfg.GetCluster("")
+ c.Assert(err, check.IsNil)
+}
func (suite *WorkerSuite) TestProbeAndUpdate(c *check.C) {
- logger := ctxlog.TestLogger(c)
bootTimeout := time.Minute
probeTimeout := time.Second
ac := arvados.NewClientFromEnv()
- is, err := (&test.StubDriver{}).InstanceSet(nil, "test-instance-set-id", nil, logger)
+ is, err := (&test.StubDriver{}).InstanceSet(nil, "test-instance-set-id", nil, suite.logger)
c.Assert(err, check.IsNil)
inst, err := is.Create(arvados.InstanceType{}, "", nil, "echo InitCommand", nil)
c.Assert(err, check.IsNil)
wp := &Pool{
arvClient: ac,
newExecutor: func(cloud.Instance) Executor { return exr },
+ cluster: suite.testCluster,
bootProbeCommand: "bootprobe",
timeoutBooting: bootTimeout,
timeoutProbe: probeTimeout,
exr.response[wp.runnerCmd+" --list"] = trial.respRunDeployed
}
wkr := &worker{
- logger: logger,
+ logger: suite.logger,
executor: exr,
wp: wp,
mtx: &wp.mtx,
var Command cmd.Handler = &installCommand{}
const devtestDatabasePassword = "insecure_arvados_test"
+const goversion = "1.17.1"
type installCommand struct {
ClusterType string
flags.StringVar(&inst.SourcePath, "source", "/arvados", "source tree location (required for -type=package)")
flags.StringVar(&inst.PackageVersion, "package-version", "0.0.0", "version string to embed in executable files")
flags.BoolVar(&inst.EatMyData, "eatmydata", false, "use eatmydata to speed up install")
- err = flags.Parse(args)
- if err == flag.ErrHelp {
- err = nil
- return 0
- } else if err != nil {
- return 2
+
+ if ok, code := cmd.ParseFlags(flags, prog, args, "", stderr); !ok {
+ return code
} else if *versionFlag {
return cmd.Version.RunCommand(prog, args, stdin, stdout, stderr)
- } else if len(flags.Args()) > 0 {
- err = fmt.Errorf("unrecognized command line arguments: %v", flags.Args())
- return 2
}
var dev, test, prod, pkg bool
"r-cran-roxygen2",
"r-cran-xml",
"sudo",
+ "uuid-dev",
"wget",
"xvfb",
)
+ if dev || test {
+ pkgs = append(pkgs,
+ "squashfs-tools", // for singularity
+ )
+ }
switch {
case osv.Debian && osv.Major >= 10:
pkgs = append(pkgs, "libcurl4")
}
if !prod {
- goversion := "1.14"
if havegoversion, err := exec.Command("/usr/local/bin/go", "version").CombinedOutput(); err == nil && bytes.HasPrefix(havegoversion, []byte("go version go"+goversion+" ")) {
logger.Print("go " + goversion + " already installed")
} else {
err = inst.runBash(`
cd /tmp
+rm -rf /var/lib/arvados/go/
wget --progress=dot:giga -O- https://storage.googleapis.com/golang/go`+goversion+`.linux-amd64.tar.gz | tar -C /var/lib/arvados -xzf -
ln -sf /var/lib/arvados/go/bin/* /usr/local/bin/
`, stdout, stderr)
}
}
- nodejsversion := "v10.23.1"
+ nodejsversion := "v12.22.2"
if havenodejsversion, err := exec.Command("/usr/local/bin/node", "--version").CombinedOutput(); err == nil && string(havenodejsversion) == nodejsversion+"\n" {
logger.Print("nodejs " + nodejsversion + " already installed")
} else {
}
}
+ singularityversion := "3.7.4"
+ if havesingularityversion, err := exec.Command("/var/lib/arvados/bin/singularity", "--version").CombinedOutput(); err == nil && strings.Contains(string(havesingularityversion), singularityversion) {
+ logger.Print("singularity " + singularityversion + " already installed")
+ } else if dev || test {
+ err = inst.runBash(`
+S=`+singularityversion+`
+tmp=/var/lib/arvados/tmp/singularity
+trap "rm -r ${tmp}" ERR EXIT
+cd /var/lib/arvados/tmp
+git clone https://github.com/sylabs/singularity
+cd singularity
+git checkout v${S}
+./mconfig --prefix=/var/lib/arvados
+make -C ./builddir
+make -C ./builddir install
+`, stdout, stderr)
+ if err != nil {
+ return 1
+ }
+ }
+
// The entry in /etc/locale.gen is "en_US.UTF-8"; once
// it's installed, locale -a reports it as
// "en_US.utf8".
{"mkdir", "-p", "log", "tmp", ".bundle", "/var/www/.gem", "/var/www/.bundle", "/var/www/.passenger"},
{"touch", "log/production.log"},
{"chown", "-R", "--from=root", "www-data:www-data", "/var/www/.gem", "/var/www/.bundle", "/var/www/.passenger", "log", "tmp", ".bundle", "Gemfile.lock", "config.ru", "config/environment.rb"},
- {"sudo", "-u", "www-data", "/var/lib/arvados/bin/gem", "install", "--user", "--conservative", "--no-document", "bundler:1.16.6", "bundler:1.17.3", "bundler:2.0.2"},
+ {"sudo", "-u", "www-data", "/var/lib/arvados/bin/gem", "install", "--user", "--conservative", "--no-document", "bundler:2.2.19"},
{"sudo", "-u", "www-data", "/var/lib/arvados/bin/bundle", "install", "--deployment", "--jobs", "8", "--path", "/var/www/.gem"},
{"sudo", "-u", "www-data", "/var/lib/arvados/bin/bundle", "exec", "passenger-config", "build-native-support"},
{"sudo", "-u", "www-data", "/var/lib/arvados/bin/bundle", "exec", "passenger-config", "install-standalone-runtime"},
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package install
+
+import (
+ "bytes"
+ "os/exec"
+ "testing"
+
+ "gopkg.in/check.v1"
+)
+
+func Test(t *testing.T) {
+ check.TestingT(t)
+}
+
+var _ = check.Suite(&Suite{})
+
+type Suite struct{}
+
+/*
+ TestExtractGoVersion tests the grep/awk command used in
+ tools/arvbox/bin/arvbox to extract the version of Go to install for
+ bootstrapping `arvados-server`.
+
+ If this test is changed, the arvbox code will also need to be updated.
+*/
+func (*Suite) TestExtractGoVersion(c *check.C) {
+ script := `
+ sourcepath="$(realpath ../..)"
+ (cd ${sourcepath} && grep 'const goversion =' lib/install/deps.go |awk -F'"' '{print $2}')
+ `
+ cmd := exec.Command("bash", "-")
+ cmd.Stdin = bytes.NewBufferString("set -ex -o pipefail\n" + script)
+ cmdOutput, err := cmd.Output()
+ c.Assert(err, check.IsNil)
+ c.Assert(string(cmdOutput), check.Equals, goversion+"\n")
+}
// Depending on host/network speed, Go's default 10m test timeout
// might be too short; recommend "go test -timeout 20m -tags docker".
//
+//go:build docker
// +build docker
package install
versionFlag := flags.Bool("version", false, "Write version information to stdout and exit 0")
flags.StringVar(&initcmd.ClusterID, "cluster-id", "", "cluster `id`, like x1234 for a dev cluster")
flags.StringVar(&initcmd.Domain, "domain", hostname, "cluster public DNS `name`, like x1234.arvadosapi.com")
- err = flags.Parse(args)
- if err == flag.ErrHelp {
- err = nil
- return 0
- } else if err != nil {
- return 2
+ if ok, code := cmd.ParseFlags(flags, prog, args, "", stderr); !ok {
+ return code
} else if *versionFlag {
return cmd.Version.RunCommand(prog, args, stdin, stdout, stderr)
- } else if len(flags.Args()) > 0 {
- err = fmt.Errorf("unrecognized command line arguments: %v", flags.Args())
- return 2
} else if !regexp.MustCompile(`^[a-z][a-z0-9]{4}`).MatchString(initcmd.ClusterID) {
err = fmt.Errorf("cluster ID %q is invalid; must be an ASCII letter followed by 4 alphanumerics (try -help)", initcmd.ClusterID)
return 1
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package lsf
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "math"
+ "net/http"
+ "regexp"
+ "strings"
+ "sync"
+ "time"
+
+ "git.arvados.org/arvados.git/lib/cmd"
+ "git.arvados.org/arvados.git/lib/dispatchcloud"
+ "git.arvados.org/arvados.git/lib/service"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/dispatch"
+ "git.arvados.org/arvados.git/sdk/go/health"
+ "github.com/julienschmidt/httprouter"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/promhttp"
+ "github.com/sirupsen/logrus"
+)
+
+var DispatchCommand cmd.Handler = service.Command(arvados.ServiceNameDispatchLSF, newHandler)
+
+func newHandler(ctx context.Context, cluster *arvados.Cluster, token string, reg *prometheus.Registry) service.Handler {
+ ac, err := arvados.NewClientFromConfig(cluster)
+ if err != nil {
+ return service.ErrorHandler(ctx, cluster, fmt.Errorf("error initializing client from cluster config: %s", err))
+ }
+ d := &dispatcher{
+ Cluster: cluster,
+ Context: ctx,
+ ArvClient: ac,
+ AuthToken: token,
+ Registry: reg,
+ }
+ go d.Start()
+ return d
+}
+
+type dispatcher struct {
+ Cluster *arvados.Cluster
+ Context context.Context
+ ArvClient *arvados.Client
+ AuthToken string
+ Registry *prometheus.Registry
+
+ logger logrus.FieldLogger
+ lsfcli lsfcli
+ lsfqueue lsfqueue
+ arvDispatcher *dispatch.Dispatcher
+ httpHandler http.Handler
+
+ initOnce sync.Once
+ stop chan struct{}
+ stopped chan struct{}
+}
+
+// Start starts the dispatcher. Start can be called multiple times
+// with no ill effect.
+func (disp *dispatcher) Start() {
+ disp.initOnce.Do(func() {
+ disp.init()
+ go func() {
+ disp.checkLsfQueueForOrphans()
+ err := disp.arvDispatcher.Run(disp.Context)
+ if err != nil {
+ disp.logger.Error(err)
+ disp.Close()
+ }
+ }()
+ })
+}
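The Start/initOnce arrangement above is Go's standard idempotent-start idiom: however many callers race into Start (ServeHTTP, CheckHealth, and Close all call it), sync.Once runs the init body exactly once. A minimal sketch, with illustrative names not taken from this package:

```go
package main

import (
	"fmt"
	"sync"
)

// miniDispatcher mimics dispatcher's start-once behavior.
type miniDispatcher struct {
	initOnce sync.Once
	started  int
}

// Start is safe to call any number of times, from any goroutine;
// the init body runs exactly once.
func (d *miniDispatcher) Start() {
	d.initOnce.Do(func() {
		d.started++ // expensive setup would go here
	})
}

func main() {
	d := &miniDispatcher{}
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			d.Start()
		}()
	}
	wg.Wait()
	fmt.Println(d.started) // 1
}
```

Because sync.Once serializes the Do callback and establishes a happens-before edge for later callers, no extra locking is needed around the init work itself.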
+
+// ServeHTTP implements service.Handler.
+func (disp *dispatcher) ServeHTTP(w http.ResponseWriter, r *http.Request) {
+ disp.Start()
+ disp.httpHandler.ServeHTTP(w, r)
+}
+
+// CheckHealth implements service.Handler.
+func (disp *dispatcher) CheckHealth() error {
+ disp.Start()
+ select {
+ case <-disp.stopped:
+ return errors.New("stopped")
+ default:
+ return nil
+ }
+}
+
+// Done implements service.Handler.
+func (disp *dispatcher) Done() <-chan struct{} {
+ return disp.stopped
+}
+
+// Close stops dispatching containers and releases resources. Used by tests.
+func (disp *dispatcher) Close() {
+ disp.Start()
+ select {
+ case disp.stop <- struct{}{}:
+ default:
+ }
+ <-disp.stopped
+}
+
+func (disp *dispatcher) init() {
+ disp.logger = ctxlog.FromContext(disp.Context)
+ disp.lsfcli.logger = disp.logger
+ disp.lsfqueue = lsfqueue{
+ logger: disp.logger,
+ period: time.Duration(disp.Cluster.Containers.CloudVMs.PollInterval),
+ lsfcli: &disp.lsfcli,
+ }
+ disp.ArvClient.AuthToken = disp.AuthToken
+ disp.stop = make(chan struct{}, 1)
+ disp.stopped = make(chan struct{})
+
+ arv, err := arvadosclient.New(disp.ArvClient)
+ if err != nil {
+ disp.logger.Fatalf("Error making Arvados client: %v", err)
+ }
+ arv.Retries = 25
+ arv.ApiToken = disp.AuthToken
+ disp.arvDispatcher = &dispatch.Dispatcher{
+ Arv: arv,
+ Logger: disp.logger,
+ BatchSize: disp.Cluster.API.MaxItemsPerResponse,
+ RunContainer: disp.runContainer,
+ PollPeriod: time.Duration(disp.Cluster.Containers.CloudVMs.PollInterval),
+ MinRetryPeriod: time.Duration(disp.Cluster.Containers.MinRetryPeriod),
+ }
+
+ if disp.Cluster.ManagementToken == "" {
+ disp.httpHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ http.Error(w, "Management API authentication is not configured", http.StatusForbidden)
+ })
+ } else {
+ mux := httprouter.New()
+ metricsH := promhttp.HandlerFor(disp.Registry, promhttp.HandlerOpts{
+ ErrorLog: disp.logger,
+ })
+ mux.Handler("GET", "/metrics", metricsH)
+ mux.Handler("GET", "/metrics.json", metricsH)
+ mux.Handler("GET", "/_health/:check", &health.Handler{
+ Token: disp.Cluster.ManagementToken,
+ Prefix: "/_health/",
+ Routes: health.Routes{"ping": disp.CheckHealth},
+ })
+ disp.httpHandler = auth.RequireLiteralToken(disp.Cluster.ManagementToken, mux)
+ }
+}
+
+func (disp *dispatcher) runContainer(_ *dispatch.Dispatcher, ctr arvados.Container, status <-chan arvados.Container) error {
+ ctx, cancel := context.WithCancel(disp.Context)
+ defer cancel()
+
+ if ctr.State != dispatch.Locked {
+ // already started by prior invocation
+ } else if _, ok := disp.lsfqueue.Lookup(ctr.UUID); !ok {
+ disp.logger.Printf("Submitting container %s to LSF", ctr.UUID)
+ cmd := []string{disp.Cluster.Containers.CrunchRunCommand}
+ cmd = append(cmd, "--runtime-engine="+disp.Cluster.Containers.RuntimeEngine)
+ cmd = append(cmd, disp.Cluster.Containers.CrunchRunArgumentsList...)
+ err := disp.submit(ctr, cmd)
+ if err != nil {
+ return err
+ }
+ }
+
+ disp.logger.Printf("Start monitoring container %v in state %q", ctr.UUID, ctr.State)
+ defer disp.logger.Printf("Done monitoring container %s", ctr.UUID)
+
+ go func(uuid string) {
+ cancelled := false
+ for ctx.Err() == nil {
+ qent, ok := disp.lsfqueue.Lookup(uuid)
+ if !ok {
+ // If the container disappears from
+			// the LSF queue, there is no point in
+ // waiting for further dispatch
+ // updates: just clean up and return.
+ disp.logger.Printf("container %s job disappeared from LSF queue", uuid)
+ cancel()
+ return
+ }
+ if !cancelled && qent.Stat == "PEND" && strings.Contains(qent.PendReason, "There are no suitable hosts for the job") {
+ disp.logger.Printf("container %s: %s", uuid, qent.PendReason)
+ err := disp.arvDispatcher.Arv.Update("containers", uuid, arvadosclient.Dict{
+ "container": map[string]interface{}{
+ "runtime_status": map[string]string{
+ "error": qent.PendReason,
+ },
+ },
+ }, nil)
+ if err != nil {
+ disp.logger.Printf("error setting runtime_status on %s: %s", uuid, err)
+ continue // retry
+ }
+ err = disp.arvDispatcher.UpdateState(uuid, dispatch.Cancelled)
+ if err != nil {
+ continue // retry (UpdateState() already logged the error)
+ }
+ cancelled = true
+ }
+ }
+ }(ctr.UUID)
+
+ for done := false; !done; {
+ select {
+ case <-ctx.Done():
+			// Disappeared from the LSF queue
+ if err := disp.arvDispatcher.Arv.Get("containers", ctr.UUID, nil, &ctr); err != nil {
+ disp.logger.Printf("error getting final container state for %s: %s", ctr.UUID, err)
+ }
+ switch ctr.State {
+ case dispatch.Running:
+ disp.arvDispatcher.UpdateState(ctr.UUID, dispatch.Cancelled)
+ case dispatch.Locked:
+ disp.arvDispatcher.Unlock(ctr.UUID)
+ }
+ return nil
+ case updated, ok := <-status:
+ if !ok {
+ // status channel is closed, which is
+ // how arvDispatcher tells us to stop
+ // touching the container record, kill
+ // off any remaining LSF processes,
+ // etc.
+ done = true
+ break
+ }
+ if updated.State != ctr.State {
+ disp.logger.Infof("container %s changed state from %s to %s", ctr.UUID, ctr.State, updated.State)
+ }
+ ctr = updated
+ if ctr.Priority < 1 {
+ disp.logger.Printf("container %s has state %s, priority %d: cancel lsf job", ctr.UUID, ctr.State, ctr.Priority)
+ disp.bkill(ctr)
+ } else {
+ disp.lsfqueue.SetPriority(ctr.UUID, int64(ctr.Priority))
+ }
+ }
+ }
+ disp.logger.Printf("container %s is done", ctr.UUID)
+
+ // Try "bkill" every few seconds until the LSF job disappears
+ // from the queue.
+ ticker := time.NewTicker(5 * time.Second)
+ defer ticker.Stop()
+ for qent, ok := disp.lsfqueue.Lookup(ctr.UUID); ok; _, ok = disp.lsfqueue.Lookup(ctr.UUID) {
+ err := disp.lsfcli.Bkill(qent.ID)
+ if err != nil {
+ disp.logger.Warnf("%s: bkill(%s): %s", ctr.UUID, qent.ID, err)
+ }
+ <-ticker.C
+ }
+ return nil
+}
+
+func (disp *dispatcher) submit(container arvados.Container, crunchRunCommand []string) error {
+ // Start with an empty slice here to ensure append() doesn't
+ // modify crunchRunCommand's underlying array
+ var crArgs []string
+ crArgs = append(crArgs, crunchRunCommand...)
+ crArgs = append(crArgs, container.UUID)
+ crScript := execScript(crArgs)
+
+ bsubArgs, err := disp.bsubArgs(container)
+ if err != nil {
+ return err
+ }
+ return disp.lsfcli.Bsub(crScript, bsubArgs, disp.ArvClient)
+}
+
+func (disp *dispatcher) bkill(ctr arvados.Container) {
+ if qent, ok := disp.lsfqueue.Lookup(ctr.UUID); !ok {
+ disp.logger.Debugf("bkill(%s): redundant, job not in queue", ctr.UUID)
+ } else if err := disp.lsfcli.Bkill(qent.ID); err != nil {
+ disp.logger.Warnf("%s: bkill(%s): %s", ctr.UUID, qent.ID, err)
+ }
+}
+
+func (disp *dispatcher) bsubArgs(container arvados.Container) ([]string, error) {
+ args := []string{"bsub"}
+
+ tmp := int64(math.Ceil(float64(dispatchcloud.EstimateScratchSpace(&container)) / 1048576))
+ vcpus := container.RuntimeConstraints.VCPUs
+ mem := int64(math.Ceil(float64(container.RuntimeConstraints.RAM+
+ container.RuntimeConstraints.KeepCacheRAM+
+ int64(disp.Cluster.Containers.ReserveExtraRAM)) / 1048576))
+
+ repl := map[string]string{
+ "%%": "%",
+ "%C": fmt.Sprintf("%d", vcpus),
+ "%M": fmt.Sprintf("%d", mem),
+ "%T": fmt.Sprintf("%d", tmp),
+ "%U": container.UUID,
+ }
+
+ re := regexp.MustCompile(`%.`)
+ var substitutionErrors string
+ for _, a := range disp.Cluster.Containers.LSF.BsubArgumentsList {
+ args = append(args, re.ReplaceAllStringFunc(a, func(s string) string {
+ subst := repl[s]
+ if len(subst) == 0 {
+ substitutionErrors += fmt.Sprintf("Unknown substitution parameter %s in BsubArgumentsList, ", s)
+ }
+ return subst
+ }))
+ }
+ if len(substitutionErrors) != 0 {
+ return nil, fmt.Errorf("%s", substitutionErrors[:len(substitutionErrors)-2])
+ }
+
+ if u := disp.Cluster.Containers.LSF.BsubSudoUser; u != "" {
+ args = append([]string{"sudo", "-E", "-u", u}, args...)
+ }
+ return args, nil
+}
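The substitution loop above can be exercised in isolation. This is a sketch of the same %-token expansion (the `expand` helper is invented for illustration, and the unknown-token error path is omitted); note how the regexp's left-to-right scan makes `%%` work as a literal-percent escape:

```go
package main

import (
	"fmt"
	"regexp"
)

// expand replaces two-character %X tokens in one bsub argument,
// the same way bsubArgs does.
func expand(arg string, repl map[string]string) string {
	re := regexp.MustCompile(`%.`)
	return re.ReplaceAllStringFunc(arg, func(s string) string {
		return repl[s]
	})
}

func main() {
	repl := map[string]string{
		"%%": "%",
		"%C": "4",     // VCPUs
		"%M": "11701", // MiB of RAM
	}
	fmt.Println(expand("-n %C -R rusage[mem=%MMB] -o /tmp/%%J.out", repl))
	// -n 4 -R rusage[mem=11701MB] -o /tmp/%J.out
}
```

This is why the test suite checks that a configured `/tmp/crunch-run.%%J.out` reaches bsub as `/tmp/crunch-run.%J.out`: `%%J` is consumed as the escape `%%` followed by a plain `J`.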
+
+// Check the next bjobs report, and invoke TrackContainer for all the
+// containers in the report. This gives us a chance to cancel existing
+// Arvados LSF jobs (started by a previous dispatch process) that
+// never released their LSF job allocations even though their
+// container states are Cancelled or Complete. See
+// https://dev.arvados.org/issues/10979
+func (disp *dispatcher) checkLsfQueueForOrphans() {
+ containerUuidPattern := regexp.MustCompile(`^[a-z0-9]{5}-dz642-[a-z0-9]{15}$`)
+ for _, uuid := range disp.lsfqueue.All() {
+ if !containerUuidPattern.MatchString(uuid) || !strings.HasPrefix(uuid, disp.Cluster.ClusterID) {
+ continue
+ }
+ err := disp.arvDispatcher.TrackContainer(uuid)
+ if err != nil {
+ disp.logger.Warnf("checkLsfQueueForOrphans: TrackContainer(%s): %s", uuid, err)
+ }
+ }
+}
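The orphan check's filter reads as two predicates: the LSF job name must look like an Arvados container UUID, and it must belong to this cluster. A small sketch (the helper name and the "zzzzz" cluster ID are illustrative):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// isLocalContainerUUID reports whether an LSF job name matches the
// Arvados container UUID pattern and this cluster's prefix.
func isLocalContainerUUID(name, clusterID string) bool {
	pattern := regexp.MustCompile(`^[a-z0-9]{5}-dz642-[a-z0-9]{15}$`)
	return pattern.MatchString(name) && strings.HasPrefix(name, clusterID)
}

func main() {
	fmt.Println(isLocalContainerUUID("zzzzz-dz642-queuedcontainer", "zzzzz")) // true
	fmt.Println(isLocalContainerUUID("some-other-lsf-job", "zzzzz"))         // false
}
```

Jobs that fail either predicate (e.g. non-Arvados LSF jobs, or jobs from another cluster sharing the queue) are simply skipped rather than tracked.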
+
+func execScript(args []string) []byte {
+ s := "#!/bin/sh\nexec"
+ for _, w := range args {
+ s += ` '`
+ s += strings.Replace(w, `'`, `'\''`, -1)
+ s += `'`
+ }
+ return []byte(s + "\n")
+}
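execScript's quoting deserves a sanity check: wrapping each word in single quotes and rewriting embedded quotes as `'\''` means arbitrary argument strings survive /bin/sh word splitting unchanged. For example (execScript reproduced from above so the output can be inspected):

```go
package main

import (
	"fmt"
	"strings"
)

// execScript builds a shell script that execs the given argv,
// single-quoting each word and escaping embedded single quotes.
func execScript(args []string) []byte {
	s := "#!/bin/sh\nexec"
	for _, w := range args {
		s += ` '` + strings.Replace(w, `'`, `'\''`, -1) + `'`
	}
	return []byte(s + "\n")
}

func main() {
	fmt.Printf("%s", execScript([]string{"crunch-run", "it's a test"}))
	// #!/bin/sh
	// exec 'crunch-run' 'it'\''s a test'
}
```

The `'\''` sequence closes the current quoted span, emits a literal quote, and reopens quoting, which is the portable way to embed a single quote in a single-quoted shell word.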
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package lsf
+
+import (
+ "context"
+ "encoding/json"
+ "fmt"
+ "math/rand"
+ "os/exec"
+ "strconv"
+ "sync"
+ "testing"
+ "time"
+
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "github.com/prometheus/client_golang/prometheus"
+ "gopkg.in/check.v1"
+)
+
+func Test(t *testing.T) {
+ check.TestingT(t)
+}
+
+var _ = check.Suite(&suite{})
+
+type suite struct {
+ disp *dispatcher
+ crTooBig arvados.ContainerRequest
+}
+
+func (s *suite) TearDownTest(c *check.C) {
+ arvados.NewClientFromEnv().RequestAndDecode(nil, "POST", "database/reset", nil, nil)
+}
+
+func (s *suite) SetUpTest(c *check.C) {
+ cfg, err := config.NewLoader(nil, ctxlog.TestLogger(c)).Load()
+ c.Assert(err, check.IsNil)
+ cluster, err := cfg.GetCluster("")
+ c.Assert(err, check.IsNil)
+ cluster.Containers.CloudVMs.PollInterval = arvados.Duration(time.Second)
+ s.disp = newHandler(context.Background(), cluster, arvadostest.Dispatch1Token, prometheus.NewRegistry()).(*dispatcher)
+ s.disp.lsfcli.stubCommand = func(string, ...string) *exec.Cmd {
+ return exec.Command("bash", "-c", "echo >&2 unimplemented stub; false")
+ }
+ err = arvados.NewClientFromEnv().RequestAndDecode(&s.crTooBig, "POST", "arvados/v1/container_requests", nil, map[string]interface{}{
+ "container_request": map[string]interface{}{
+ "runtime_constraints": arvados.RuntimeConstraints{
+ RAM: 1000000000000,
+ VCPUs: 1,
+ },
+ "container_image": arvadostest.DockerImage112PDH,
+ "command": []string{"sleep", "1"},
+ "mounts": map[string]arvados.Mount{"/mnt/out": {Kind: "tmp", Capacity: 1000}},
+ "output_path": "/mnt/out",
+ "state": arvados.ContainerRequestStateCommitted,
+ "priority": 1,
+ "container_count_max": 1,
+ },
+ })
+ c.Assert(err, check.IsNil)
+}
+
+type lsfstub struct {
+ sudoUser string
+ errorRate float64
+}
+
+func (stub lsfstub) stubCommand(s *suite, c *check.C) func(prog string, args ...string) *exec.Cmd {
+ mtx := sync.Mutex{}
+ nextjobid := 100
+ fakejobq := map[int]string{}
+ return func(prog string, args ...string) *exec.Cmd {
+ c.Logf("stubCommand: %q %q", prog, args)
+ if rand.Float64() < stub.errorRate {
+ return exec.Command("bash", "-c", "echo >&2 'stub random failure' && false")
+ }
+ if stub.sudoUser != "" && len(args) > 3 &&
+ prog == "sudo" &&
+ args[0] == "-E" &&
+ args[1] == "-u" &&
+ args[2] == stub.sudoUser {
+ prog, args = args[3], args[4:]
+ }
+ switch prog {
+ case "bsub":
+ defaultArgs := s.disp.Cluster.Containers.LSF.BsubArgumentsList
+ c.Assert(len(args), check.Equals, len(defaultArgs))
+ // %%J must have been rewritten to %J
+ c.Check(args[1], check.Equals, "/tmp/crunch-run.%J.out")
+ args = args[4:]
+ switch args[1] {
+ case arvadostest.LockedContainerUUID:
+ c.Check(args, check.DeepEquals, []string{
+ "-J", arvadostest.LockedContainerUUID,
+ "-n", "4",
+ "-D", "11701MB",
+ "-R", "rusage[mem=11701MB:tmp=0MB] span[hosts=1]",
+ "-R", "select[mem>=11701MB]",
+ "-R", "select[tmp>=0MB]",
+ "-R", "select[ncpus>=4]"})
+ mtx.Lock()
+ fakejobq[nextjobid] = args[1]
+ nextjobid++
+ mtx.Unlock()
+ case arvadostest.QueuedContainerUUID:
+ c.Check(args, check.DeepEquals, []string{
+ "-J", arvadostest.QueuedContainerUUID,
+ "-n", "4",
+ "-D", "11701MB",
+ "-R", "rusage[mem=11701MB:tmp=45777MB] span[hosts=1]",
+ "-R", "select[mem>=11701MB]",
+ "-R", "select[tmp>=45777MB]",
+ "-R", "select[ncpus>=4]"})
+ mtx.Lock()
+ fakejobq[nextjobid] = args[1]
+ nextjobid++
+ mtx.Unlock()
+ case s.crTooBig.ContainerUUID:
+ c.Check(args, check.DeepEquals, []string{
+ "-J", s.crTooBig.ContainerUUID,
+ "-n", "1",
+ "-D", "954187MB",
+ "-R", "rusage[mem=954187MB:tmp=256MB] span[hosts=1]",
+ "-R", "select[mem>=954187MB]",
+ "-R", "select[tmp>=256MB]",
+ "-R", "select[ncpus>=1]"})
+ mtx.Lock()
+ fakejobq[nextjobid] = args[1]
+ nextjobid++
+ mtx.Unlock()
+ default:
+ c.Errorf("unexpected uuid passed to bsub: args %q", args)
+ return exec.Command("false")
+ }
+ return exec.Command("echo", "submitted job")
+ case "bjobs":
+ c.Check(args, check.DeepEquals, []string{"-u", "all", "-o", "jobid stat job_name pend_reason", "-json"})
+ var records []map[string]interface{}
+ for jobid, uuid := range fakejobq {
+ stat, reason := "RUN", ""
+ if uuid == s.crTooBig.ContainerUUID {
+ // The real bjobs output includes a trailing ';' here:
+ stat, reason = "PEND", "There are no suitable hosts for the job;"
+ }
+ records = append(records, map[string]interface{}{
+ "JOBID": fmt.Sprintf("%d", jobid),
+ "STAT": stat,
+ "JOB_NAME": uuid,
+ "PEND_REASON": reason,
+ })
+ }
+ out, err := json.Marshal(map[string]interface{}{
+ "COMMAND": "bjobs",
+ "JOBS": len(fakejobq),
+ "RECORDS": records,
+ })
+ if err != nil {
+ panic(err)
+ }
+ c.Logf("bjobs out: %s", out)
+ return exec.Command("printf", string(out))
+ case "bkill":
+ killid, _ := strconv.Atoi(args[0])
+ if uuid, ok := fakejobq[killid]; !ok {
+ return exec.Command("bash", "-c", fmt.Sprintf("printf >&2 'Job <%d>: No matching job found\n'", killid))
+ } else if uuid == "" {
+ return exec.Command("bash", "-c", fmt.Sprintf("printf >&2 'Job <%d>: Job has already finished\n'", killid))
+ } else {
+ go func() {
+ time.Sleep(time.Millisecond)
+ mtx.Lock()
+ delete(fakejobq, killid)
+ mtx.Unlock()
+ }()
+ return exec.Command("bash", "-c", fmt.Sprintf("printf 'Job <%d> is being terminated\n'", killid))
+ }
+ default:
+ return exec.Command("bash", "-c", fmt.Sprintf("echo >&2 'stub: command not found: %+q'", prog))
+ }
+ }
+}
+
+func (s *suite) TestSubmit(c *check.C) {
+ s.disp.lsfcli.stubCommand = lsfstub{
+ errorRate: 0.1,
+ sudoUser: s.disp.Cluster.Containers.LSF.BsubSudoUser,
+ }.stubCommand(s, c)
+ s.disp.Start()
+
+ deadline := time.Now().Add(20 * time.Second)
+ for range time.NewTicker(time.Second).C {
+ if time.Now().After(deadline) {
+ c.Error("timed out")
+ break
+ }
+ // "queuedcontainer" should be running
+ if _, ok := s.disp.lsfqueue.Lookup(arvadostest.QueuedContainerUUID); !ok {
+ continue
+ }
+ // "lockedcontainer" should be cancelled because it
+ // has priority 0 (no matching container requests)
+ if _, ok := s.disp.lsfqueue.Lookup(arvadostest.LockedContainerUUID); ok {
+ continue
+ }
+ // "crTooBig" should be cancelled because lsf stub
+ // reports there is no suitable instance type
+ if _, ok := s.disp.lsfqueue.Lookup(s.crTooBig.ContainerUUID); ok {
+ continue
+ }
+ var ctr arvados.Container
+ if err := s.disp.arvDispatcher.Arv.Get("containers", arvadostest.LockedContainerUUID, nil, &ctr); err != nil {
+ c.Logf("error getting container state for %s: %s", arvadostest.LockedContainerUUID, err)
+ continue
+ } else if ctr.State != arvados.ContainerStateQueued {
+ c.Logf("LockedContainer is not in the LSF queue but its arvados record has not been updated to state==Queued (state is %q)", ctr.State)
+ continue
+ }
+
+ if err := s.disp.arvDispatcher.Arv.Get("containers", s.crTooBig.ContainerUUID, nil, &ctr); err != nil {
+ c.Logf("error getting container state for %s: %s", s.crTooBig.ContainerUUID, err)
+ continue
+ } else if ctr.State != arvados.ContainerStateCancelled {
+ c.Logf("container %s is not in the LSF queue but its arvados record has not been updated to state==Cancelled (state is %q)", s.crTooBig.ContainerUUID, ctr.State)
+ continue
+ } else {
+ c.Check(ctr.RuntimeStatus["error"], check.Equals, "There are no suitable hosts for the job;")
+ }
+ c.Log("reached desired state")
+ break
+ }
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package lsf
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "os"
+ "os/exec"
+ "strings"
+
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "github.com/sirupsen/logrus"
+)
+
+type bjobsEntry struct {
+ ID string `json:"JOBID"`
+ Name string `json:"JOB_NAME"`
+ Stat string `json:"STAT"`
+ PendReason string `json:"PEND_REASON"`
+}
+
+type lsfcli struct {
+ logger logrus.FieldLogger
+ // (for testing) if non-nil, call stubCommand() instead of
+	// exec.Command() when running LSF command line programs.
+ stubCommand func(string, ...string) *exec.Cmd
+}
+
+func (cli lsfcli) command(prog string, args ...string) *exec.Cmd {
+ if f := cli.stubCommand; f != nil {
+ return f(prog, args...)
+ } else {
+ return exec.Command(prog, args...)
+ }
+}
+
+func (cli lsfcli) Bsub(script []byte, args []string, arv *arvados.Client) error {
+ cli.logger.Infof("bsub command %q script %q", args, script)
+ cmd := cli.command(args[0], args[1:]...)
+ cmd.Env = append([]string(nil), os.Environ()...)
+ cmd.Env = append(cmd.Env, "ARVADOS_API_HOST="+arv.APIHost)
+ cmd.Env = append(cmd.Env, "ARVADOS_API_TOKEN="+arv.AuthToken)
+ if arv.Insecure {
+ cmd.Env = append(cmd.Env, "ARVADOS_API_HOST_INSECURE=1")
+ }
+ cmd.Stdin = bytes.NewReader(script)
+ out, err := cmd.Output()
+ cli.logger.WithField("stdout", string(out)).Infof("bsub finished")
+ return errWithStderr(err)
+}
+
+func (cli lsfcli) Bjobs() ([]bjobsEntry, error) {
+ cli.logger.Debugf("Bjobs()")
+ cmd := cli.command("bjobs", "-u", "all", "-o", "jobid stat job_name pend_reason", "-json")
+ buf, err := cmd.Output()
+ if err != nil {
+ return nil, errWithStderr(err)
+ }
+ var resp struct {
+ Records []bjobsEntry `json:"RECORDS"`
+ }
+ err = json.Unmarshal(buf, &resp)
+ return resp.Records, err
+}
+
+func (cli lsfcli) Bkill(id string) error {
+ cli.logger.Infof("Bkill(%s)", id)
+ cmd := cli.command("bkill", id)
+ buf, err := cmd.CombinedOutput()
+	if err == nil || strings.Contains(string(buf), "already finished") {
+ return nil
+ } else {
+ return fmt.Errorf("%s (%q)", err, buf)
+ }
+}
+
+func errWithStderr(err error) error {
+ if err, ok := err.(*exec.ExitError); ok {
+ return fmt.Errorf("%s (%q)", err, err.Stderr)
+ }
+ return err
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package lsf
+
+import (
+ "sync"
+ "time"
+
+ "github.com/sirupsen/logrus"
+)
+
+type lsfqueue struct {
+ logger logrus.FieldLogger
+ period time.Duration
+ lsfcli *lsfcli
+
+ initOnce sync.Once
+ mutex sync.Mutex
+ nextReady chan (<-chan struct{})
+ updated *sync.Cond
+ latest map[string]bjobsEntry
+}
+
+// Lookup waits for the next queue update (so even a job that was only
+// submitted a nanosecond ago will show up) and then returns the LSF
+// queue information corresponding to the given container UUID.
+func (q *lsfqueue) Lookup(uuid string) (bjobsEntry, bool) {
+ ent, ok := q.getNext()[uuid]
+ return ent, ok
+}
+
+// All waits for the next queue update, then returns the names of all
+// jobs in the queue. Used by checkLsfQueueForOrphans().
+func (q *lsfqueue) All() []string {
+ latest := q.getNext()
+ names := make([]string, 0, len(latest))
+ for name := range latest {
+ names = append(names, name)
+ }
+ return names
+}
+
+func (q *lsfqueue) SetPriority(uuid string, priority int64) {
+ q.initOnce.Do(q.init)
+ q.logger.Debug("SetPriority is not implemented")
+}
+
+func (q *lsfqueue) getNext() map[string]bjobsEntry {
+ q.initOnce.Do(q.init)
+ <-(<-q.nextReady)
+ q.mutex.Lock()
+ defer q.mutex.Unlock()
+ return q.latest
+}
+
+func (q *lsfqueue) init() {
+ q.updated = sync.NewCond(&q.mutex)
+ q.nextReady = make(chan (<-chan struct{}))
+ ticker := time.NewTicker(time.Second)
+ go func() {
+ for range ticker.C {
+ // Send a new "next update ready" channel to
+ // the next goroutine that wants one (and any
+ // others that have already queued up since
+ // the first one started waiting).
+ //
+ // Below, when we get a new update, we'll
+ // signal that to the other goroutines by
+ // closing the ready chan.
+ ready := make(chan struct{})
+ q.nextReady <- ready
+ for {
+ select {
+ case q.nextReady <- ready:
+ continue
+ default:
+ }
+ break
+ }
+ // Run bjobs repeatedly if needed, until we
+ // get valid output.
+ var ents []bjobsEntry
+ for {
+ q.logger.Debug("running bjobs")
+ var err error
+ ents, err = q.lsfcli.Bjobs()
+ if err == nil {
+ break
+ }
+ q.logger.Warnf("bjobs: %s", err)
+ <-ticker.C
+ }
+ next := make(map[string]bjobsEntry, len(ents))
+ for _, ent := range ents {
+ next[ent.Name] = ent
+ }
+ // Replace q.latest and notify all the
+ // goroutines that the "next update" they
+ // asked for is now ready.
+ q.mutex.Lock()
+ q.latest = next
+ q.mutex.Unlock()
+ close(ready)
+ }
+ }()
+}
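The nextReady handshake above is subtle: each caller of getNext receives a channel that will be closed only after a poll that began at or after the call, which is what lets Lookup see a job submitted a moment earlier. A stripped-down sketch of the same pattern (`poller` and the integer counter are stand-ins for lsfqueue and the bjobs snapshot, and the sketch leaks its polling goroutine for brevity):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type poller struct {
	initOnce  sync.Once
	mutex     sync.Mutex
	nextReady chan (<-chan struct{})
	latest    int
}

// getNext blocks until an update that started after this call lands,
// then returns the fresh snapshot.
func (p *poller) getNext() int {
	p.initOnce.Do(p.init)
	<-(<-p.nextReady)
	p.mutex.Lock()
	defer p.mutex.Unlock()
	return p.latest
}

func (p *poller) init() {
	p.nextReady = make(chan (<-chan struct{}))
	go func() {
		for tick := 1; ; tick++ {
			ready := make(chan struct{})
			p.nextReady <- ready // at least one waiter gets it...
			for {
				select {
				case p.nextReady <- ready: // ...plus any already queued
					continue
				default:
				}
				break
			}
			p.mutex.Lock()
			p.latest = tick // stand-in for a fresh bjobs snapshot
			p.mutex.Unlock()
			close(ready)
			time.Sleep(10 * time.Millisecond)
		}
	}()
}

func main() {
	p := &poller{}
	fmt.Println(p.getNext() >= 1) // true: never a pre-call snapshot
}
```

The closed `ready` channel acts as a broadcast: every goroutine that picked it up before the poll unblocks at once, and the next poll hands out a fresh channel.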
"io"
"log"
"net/http"
+
// pprof is only imported to register its HTTP handlers
_ "net/http/pprof"
"os"
+ "git.arvados.org/arvados.git/lib/cmd"
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/arvadosclient"
"git.arvados.org/arvados.git/sdk/go/keepclient"
"github.com/arvados/cgofuse/fuse"
)
-var Command = &cmd{}
+var Command = &mountCommand{}
-type cmd struct {
+type mountCommand struct {
// ready, if non-nil, will be closed when the mount is
// initialized. If ready is non-nil, RunCommand() should
// not be called more than once, or when ready is already
//
// The "-d" fuse option (and perhaps other features) ignores the
// stderr argument and prints to os.Stderr instead.
-func (c *cmd) RunCommand(prog string, args []string, stdin io.Reader, stdout, stderr io.Writer) int {
+func (c *mountCommand) RunCommand(prog string, args []string, stdin io.Reader, stdout, stderr io.Writer) int {
logger := log.New(stderr, prog+" ", 0)
flags := flag.NewFlagSet(prog, flag.ContinueOnError)
ro := flags.Bool("ro", false, "read-only")
experimental := flags.Bool("experimental", false, "acknowledge this is an experimental command, and should not be used in production (required)")
blockCache := flags.Int("block-cache", 4, "read cache size (number of 64MiB blocks)")
pprof := flags.String("pprof", "", "serve Go profile data at `[addr]:port`")
- err := flags.Parse(args)
- if err != nil {
- logger.Print(err)
- return 2
+ if ok, code := cmd.ParseFlags(flags, prog, args, "[FUSE mount options]", stderr); !ok {
+ return code
}
if !*experimental {
logger.Printf("error: experimental command %q used without --experimental flag", prog)
stdin := bytes.NewBufferString("stdin")
stdout := bytes.NewBuffer(nil)
stderr := bytes.NewBuffer(nil)
- mountCmd := cmd{ready: make(chan struct{})}
+ mountCmd := mountCommand{ready: make(chan struct{})}
ready := false
go func() {
exited <- mountCmd.RunCommand("test mount", []string{"--experimental", s.mnt}, stdin, stdout, stderr)
//
// SPDX-License-Identifier: Apache-2.0
-// +build never
+//go:build ignore
+// +build ignore
// This file is compiled by docker_test.go to build a test client.
// It's not part of the pam module itself.
"sync"
"time"
+ "git.arvados.org/arvados.git/lib/cmd"
"git.arvados.org/arvados.git/lib/config"
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/ctxlog"
loader := config.NewLoader(stdin, logger)
loader.SkipLegacy = true
- flags := flag.NewFlagSet("", flag.ContinueOnError)
- flags.SetOutput(stderr)
+ flags := flag.NewFlagSet(prog, flag.ContinueOnError)
flags.Usage = func() {
fmt.Fprintf(flags.Output(), `Usage:
%s [options ...] { /path/to/manifest.txt | log-or-collection-uuid } [...]
}
loader.SetupFlags(flags)
loglevel := flags.String("log-level", "info", "logging level (debug, info, ...)")
- err = flags.Parse(args)
- if err == flag.ErrHelp {
- err = nil
- return 0
- } else if err != nil {
- return 2
- }
-
- if len(flags.Args()) == 0 {
- flags.Usage()
+ if ok, code := cmd.ParseFlags(flags, prog, args, "source [...]", stderr); !ok {
+ return code
+ } else if flags.NArg() == 0 {
+ fmt.Fprintf(stderr, "missing required arguments (try -help)\n")
return 2
}
type Suite struct{}
func (*Suite) SetUpSuite(c *check.C) {
- arvadostest.StartAPI()
arvadostest.StartKeep(2, true)
}
"io"
"net"
"net/http"
+ _ "net/http/pprof"
"net/url"
"os"
"strings"
loader := config.NewLoader(stdin, log)
loader.SetupFlags(flags)
versionFlag := flags.Bool("version", false, "Write version information to stdout and exit 0")
- err = flags.Parse(args)
- if err == flag.ErrHelp {
- err = nil
- return 0
- } else if err != nil {
- return 2
+ pprofAddr := flags.String("pprof", "", "Serve Go profile data at `[addr]:port`")
+ if ok, code := cmd.ParseFlags(flags, prog, args, "", stderr); !ok {
+ return code
} else if *versionFlag {
return cmd.Version.RunCommand(prog, args, stdin, stdout, stderr)
}
+ if *pprofAddr != "" {
+ go func() {
+ log.Println(http.ListenAndServe(*pprofAddr, nil))
+ }()
+ }
+
if strings.HasSuffix(prog, "controller") {
// Some config-loader checks try to make API calls via
// controller. Those can't be expected to work if this
// logger with a new one according to the logging config.
log = ctxlog.New(stderr, cluster.SystemLogs.Format, cluster.SystemLogs.LogLevel)
logger := log.WithFields(logrus.Fields{
- "PID": os.Getpid(),
+ "PID": os.Getpid(),
+ "ClusterID": cluster.ClusterID,
})
ctx := ctxlog.Context(c.ctx, logger)
}
instrumented := httpserver.Instrument(reg, log,
- httpserver.HandlerWithContext(ctx,
+ httpserver.HandlerWithDeadline(cluster.API.RequestTimeout.Duration(),
httpserver.AddRequestIDs(
httpserver.LogRequests(
httpserver.NewRequestLimiter(cluster.API.MaxConcurrentRequests, handler, reg)))))
srv := &httpserver.Server{
Server: http.Server{
- Handler: instrumented.ServeAPI(cluster.ManagementToken, instrumented),
+ Handler: instrumented.ServeAPI(cluster.ManagementToken, instrumented),
+ BaseContext: func(net.Listener) context.Context { return ctx },
},
Addr: listenURL.Host,
}
# SPDX-License-Identifier: Apache-2.0
#' api_clients.get
-#'
+#'
#' api_clients.get is a method defined in Arvados class.
-#'
+#'
#' @usage arv$api_clients.get(uuid)
#' @param uuid The UUID of the ApiClient in question.
#' @return ApiClient object.
NULL
#' api_clients.create
-#'
+#'
#' api_clients.create is a method defined in Arvados class.
-#'
+#'
#' @usage arv$api_clients.create(apiclient,
#' ensure_unique_name = "false", cluster_id = NULL)
#' @param apiClient ApiClient object.
NULL
#' api_clients.update
-#'
+#'
#' api_clients.update is a method defined in Arvados class.
-#'
+#'
#' @usage arv$api_clients.update(apiclient,
#' uuid)
#' @param apiClient ApiClient object.
NULL
#' api_clients.delete
-#'
+#'
#' api_clients.delete is a method defined in Arvados class.
-#'
+#'
#' @usage arv$api_clients.delete(uuid)
#' @param uuid The UUID of the ApiClient in question.
#' @return ApiClient object.
NULL
#' api_clients.list
-#'
+#'
#' api_clients.list is a method defined in Arvados class.
-#'
+#'
#' @usage arv$api_clients.list(filters = NULL,
#' where = NULL, order = NULL, select = NULL,
#' distinct = NULL, limit = "100", offset = "0",
#' count = "exact", cluster_id = NULL, bypass_federation = NULL)
-#' @param filters
-#' @param where
-#' @param order
-#' @param select
-#' @param distinct
-#' @param limit
-#' @param offset
-#' @param count
+#' @param filters
+#' @param where
+#' @param order
+#' @param select
+#' @param distinct
+#' @param limit
+#' @param offset
+#' @param count
#' @param cluster_id List objects on a remote federated cluster instead of the current one.
#' @param bypass_federation bypass federation behavior, list items from local instance database only
#' @return ApiClientList object.
NULL
#' api_client_authorizations.get
-#'
+#'
#' api_client_authorizations.get is a method defined in Arvados class.
-#'
+#'
#' @usage arv$api_client_authorizations.get(uuid)
#' @param uuid The UUID of the ApiClientAuthorization in question.
#' @return ApiClientAuthorization object.
NULL
#' api_client_authorizations.create
-#'
+#'
#' api_client_authorizations.create is a method defined in Arvados class.
-#'
+#'
#' @usage arv$api_client_authorizations.create(apiclientauthorization,
#' ensure_unique_name = "false", cluster_id = NULL)
#' @param apiClientAuthorization ApiClientAuthorization object.
NULL
#' api_client_authorizations.update
-#'
+#'
#' api_client_authorizations.update is a method defined in Arvados class.
-#'
+#'
#' @usage arv$api_client_authorizations.update(apiclientauthorization,
#' uuid)
#' @param apiClientAuthorization ApiClientAuthorization object.
NULL
#' api_client_authorizations.delete
-#'
+#'
#' api_client_authorizations.delete is a method defined in Arvados class.
-#'
+#'
#' @usage arv$api_client_authorizations.delete(uuid)
#' @param uuid The UUID of the ApiClientAuthorization in question.
#' @return ApiClientAuthorization object.
NULL
#' api_client_authorizations.create_system_auth
-#'
+#'
#' api_client_authorizations.create_system_auth is a method defined in Arvados class.
-#'
+#'
#' @usage arv$api_client_authorizations.create_system_auth(api_client_id = NULL,
#' scopes = NULL)
-#' @param api_client_id
-#' @param scopes
+#' @param api_client_id
+#' @param scopes
#' @return ApiClientAuthorization object.
#' @name api_client_authorizations.create_system_auth
NULL
#' api_client_authorizations.current
-#'
+#'
#' api_client_authorizations.current is a method defined in Arvados class.
-#'
+#'
#' @usage arv$api_client_authorizations.current(NULL)
#' @return ApiClientAuthorization object.
#' @name api_client_authorizations.current
NULL
#' api_client_authorizations.list
-#'
+#'
#' api_client_authorizations.list is a method defined in Arvados class.
-#'
+#'
#' @usage arv$api_client_authorizations.list(filters = NULL,
#' where = NULL, order = NULL, select = NULL,
#' distinct = NULL, limit = "100", offset = "0",
#' count = "exact", cluster_id = NULL, bypass_federation = NULL)
-#' @param filters
-#' @param where
-#' @param order
-#' @param select
-#' @param distinct
-#' @param limit
-#' @param offset
-#' @param count
+#' @param filters
+#' @param where
+#' @param order
+#' @param select
+#' @param distinct
+#' @param limit
+#' @param offset
+#' @param count
#' @param cluster_id List objects on a remote federated cluster instead of the current one.
#' @param bypass_federation bypass federation behavior, list items from local instance database only
#' @return ApiClientAuthorizationList object.
NULL
#' authorized_keys.get
-#'
+#'
#' authorized_keys.get is a method defined in Arvados class.
-#'
+#'
#' @usage arv$authorized_keys.get(uuid)
#' @param uuid The UUID of the AuthorizedKey in question.
#' @return AuthorizedKey object.
NULL
#' authorized_keys.create
-#'
+#'
#' authorized_keys.create is a method defined in Arvados class.
-#'
+#'
#' @usage arv$authorized_keys.create(authorizedkey,
#' ensure_unique_name = "false", cluster_id = NULL)
#' @param authorizedKey AuthorizedKey object.
NULL
#' authorized_keys.update
-#'
+#'
#' authorized_keys.update is a method defined in Arvados class.
-#'
+#'
#' @usage arv$authorized_keys.update(authorizedkey,
#' uuid)
#' @param authorizedKey AuthorizedKey object.
NULL
#' authorized_keys.delete
-#'
+#'
#' authorized_keys.delete is a method defined in Arvados class.
-#'
+#'
#' @usage arv$authorized_keys.delete(uuid)
#' @param uuid The UUID of the AuthorizedKey in question.
#' @return AuthorizedKey object.
NULL
#' authorized_keys.list
-#'
+#'
#' authorized_keys.list is a method defined in Arvados class.
-#'
+#'
#' @usage arv$authorized_keys.list(filters = NULL,
#' where = NULL, order = NULL, select = NULL,
#' distinct = NULL, limit = "100", offset = "0",
#' count = "exact", cluster_id = NULL, bypass_federation = NULL)
-#' @param filters
-#' @param where
-#' @param order
-#' @param select
-#' @param distinct
-#' @param limit
-#' @param offset
-#' @param count
+#' @param filters
+#' @param where
+#' @param order
+#' @param select
+#' @param distinct
+#' @param limit
+#' @param offset
+#' @param count
#' @param cluster_id List objects on a remote federated cluster instead of the current one.
#' @param bypass_federation bypass federation behavior, list items from local instance database only
#' @return AuthorizedKeyList object.
NULL
#' collections.get
-#'
+#'
#' collections.get is a method defined in Arvados class.
-#'
+#'
#' @usage arv$collections.get(uuid)
#' @param uuid The UUID of the Collection in question.
#' @return Collection object.
NULL
#' collections.create
-#'
+#'
#' collections.create is a method defined in Arvados class.
-#'
+#'
#' @usage arv$collections.create(collection,
#' ensure_unique_name = "false", cluster_id = NULL)
#' @param collection Collection object.
NULL
#' collections.update
-#'
+#'
#' collections.update is a method defined in Arvados class.
-#'
+#'
#' @usage arv$collections.update(collection,
#' uuid)
#' @param collection Collection object.
NULL
#' collections.delete
-#'
+#'
#' collections.delete is a method defined in Arvados class.
-#'
+#'
#' @usage arv$collections.delete(uuid)
#' @param uuid The UUID of the Collection in question.
#' @return Collection object.
NULL
#' collections.provenance
-#'
+#'
#' collections.provenance is a method defined in Arvados class.
-#'
+#'
#' @usage arv$collections.provenance(uuid)
-#' @param uuid
+#' @param uuid
#' @return Collection object.
#' @name collections.provenance
NULL
#' collections.used_by
-#'
+#'
#' collections.used_by is a method defined in Arvados class.
-#'
+#'
#' @usage arv$collections.used_by(uuid)
-#' @param uuid
+#' @param uuid
#' @return Collection object.
#' @name collections.used_by
NULL
#' collections.trash
-#'
+#'
#' collections.trash is a method defined in Arvados class.
-#'
+#'
#' @usage arv$collections.trash(uuid)
-#' @param uuid
+#' @param uuid
#' @return Collection object.
#' @name collections.trash
NULL
#' collections.untrash
-#'
+#'
#' collections.untrash is a method defined in Arvados class.
-#'
+#'
#' @usage arv$collections.untrash(uuid)
-#' @param uuid
+#' @param uuid
#' @return Collection object.
#' @name collections.untrash
NULL
#' collections.list
-#'
+#'
#' collections.list is a method defined in Arvados class.
-#'
+#'
#' @usage arv$collections.list(filters = NULL,
#' where = NULL, order = NULL, select = NULL,
#' distinct = NULL, limit = "100", offset = "0",
#' count = "exact", cluster_id = NULL, bypass_federation = NULL,
#' include_trash = NULL, include_old_versions = NULL)
-#' @param filters
-#' @param where
-#' @param order
-#' @param select
-#' @param distinct
-#' @param limit
-#' @param offset
-#' @param count
+#' @param filters
+#' @param where
+#' @param order
+#' @param select
+#' @param distinct
+#' @param limit
+#' @param offset
+#' @param count
#' @param cluster_id List objects on a remote federated cluster instead of the current one.
#' @param bypass_federation bypass federation behavior, list items from local instance database only
#' @param include_trash Include collections whose is_trashed attribute is true.
NULL
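The `include_trash` parameter pairs naturally with `collections.untrash`. A hypothetical sketch (the `arv` client, filter values, and the `$items` response field are assumptions, not confirmed by this file):

```r
# Find trashed collections whose name matches a pattern, then restore one.
cols <- arv$collections.list(
    filters       = list(list("name", "like", "backup%")),
    include_trash = TRUE)

if (length(cols$items) > 0)
    arv$collections.untrash(cols$items[[1]]$uuid)
```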
#' containers.get
-#'
+#'
#' containers.get is a method defined in Arvados class.
-#'
+#'
#' @usage arv$containers.get(uuid)
#' @param uuid The UUID of the Container in question.
#' @return Container object.
NULL
#' containers.create
-#'
+#'
#' containers.create is a method defined in Arvados class.
-#'
+#'
#' @usage arv$containers.create(container,
#' ensure_unique_name = "false", cluster_id = NULL)
#' @param container Container object.
NULL
#' containers.update
-#'
+#'
#' containers.update is a method defined in Arvados class.
-#'
+#'
#' @usage arv$containers.update(container,
#' uuid)
#' @param container Container object.
NULL
#' containers.delete
-#'
+#'
#' containers.delete is a method defined in Arvados class.
-#'
+#'
#' @usage arv$containers.delete(uuid)
#' @param uuid The UUID of the Container in question.
#' @return Container object.
NULL
#' containers.auth
-#'
+#'
#' containers.auth is a method defined in Arvados class.
-#'
+#'
#' @usage arv$containers.auth(uuid)
-#' @param uuid
+#' @param uuid
#' @return Container object.
#' @name containers.auth
NULL
#' containers.lock
-#'
+#'
#' containers.lock is a method defined in Arvados class.
-#'
+#'
#' @usage arv$containers.lock(uuid)
-#' @param uuid
+#' @param uuid
#' @return Container object.
#' @name containers.lock
NULL
#' containers.unlock
-#'
+#'
#' containers.unlock is a method defined in Arvados class.
-#'
+#'
#' @usage arv$containers.unlock(uuid)
-#' @param uuid
+#' @param uuid
#' @return Container object.
#' @name containers.unlock
NULL
#' containers.secret_mounts
-#'
+#'
#' containers.secret_mounts is a method defined in Arvados class.
-#'
+#'
#' @usage arv$containers.secret_mounts(uuid)
-#' @param uuid
+#' @param uuid
#' @return Container object.
#' @name containers.secret_mounts
NULL
#' containers.current
-#'
+#'
#' containers.current is a method defined in Arvados class.
-#'
+#'
#' @usage arv$containers.current(NULL)
#' @return Container object.
#' @name containers.current
NULL
#' containers.list
-#'
+#'
#' containers.list is a method defined in Arvados class.
-#'
+#'
#' @usage arv$containers.list(filters = NULL,
#' where = NULL, order = NULL, select = NULL,
#' distinct = NULL, limit = "100", offset = "0",
#' count = "exact", cluster_id = NULL, bypass_federation = NULL)
-#' @param filters
-#' @param where
-#' @param order
-#' @param select
-#' @param distinct
-#' @param limit
-#' @param offset
-#' @param count
+#' @param filters
+#' @param where
+#' @param order
+#' @param select
+#' @param distinct
+#' @param limit
+#' @param offset
+#' @param count
#' @param cluster_id List objects on a remote federated cluster instead of the current one.
#' @param bypass_federation bypass federation behavior, list items from local instance database only
#' @return ContainerList object.
NULL
#' container_requests.get
-#'
+#'
#' container_requests.get is a method defined in Arvados class.
-#'
+#'
#' @usage arv$container_requests.get(uuid)
#' @param uuid The UUID of the ContainerRequest in question.
#' @return ContainerRequest object.
NULL
#' container_requests.create
-#'
+#'
#' container_requests.create is a method defined in Arvados class.
-#'
+#'
#' @usage arv$container_requests.create(containerrequest,
#' ensure_unique_name = "false", cluster_id = NULL)
#' @param containerRequest ContainerRequest object.
NULL
#' container_requests.update
-#'
+#'
#' container_requests.update is a method defined in Arvados class.
-#'
+#'
#' @usage arv$container_requests.update(containerrequest,
#' uuid)
#' @param containerRequest ContainerRequest object.
NULL
#' container_requests.delete
-#'
+#'
#' container_requests.delete is a method defined in Arvados class.
-#'
+#'
#' @usage arv$container_requests.delete(uuid)
#' @param uuid The UUID of the ContainerRequest in question.
#' @return ContainerRequest object.
NULL
#' container_requests.list
-#'
+#'
#' container_requests.list is a method defined in Arvados class.
-#'
+#'
#' @usage arv$container_requests.list(filters = NULL,
#' where = NULL, order = NULL, select = NULL,
#' distinct = NULL, limit = "100", offset = "0",
#' count = "exact", cluster_id = NULL, bypass_federation = NULL,
#' include_trash = NULL)
-#' @param filters
-#' @param where
-#' @param order
-#' @param select
-#' @param distinct
-#' @param limit
-#' @param offset
-#' @param count
+#' @param filters
+#' @param where
+#' @param order
+#' @param select
+#' @param distinct
+#' @param limit
+#' @param offset
+#' @param count
#' @param cluster_id List objects on a remote federated cluster instead of the current one.
#' @param bypass_federation bypass federation behavior, list items from local instance database only
#' @param include_trash Include container requests whose owner project is trashed.
NULL
#' groups.get
-#'
+#'
#' groups.get is a method defined in Arvados class.
-#'
+#'
#' @usage arv$groups.get(uuid)
#' @param uuid The UUID of the Group in question.
#' @return Group object.
NULL
#' groups.create
-#'
+#'
#' groups.create is a method defined in Arvados class.
-#'
+#'
#' @usage arv$groups.create(group, ensure_unique_name = "false",
#' cluster_id = NULL, async = "false")
#' @param group Group object.
NULL
#' groups.update
-#'
+#'
#' groups.update is a method defined in Arvados class.
-#'
+#'
#' @usage arv$groups.update(group, uuid,
#' async = "false")
#' @param group Group object.
NULL
#' groups.delete
-#'
+#'
#' groups.delete is a method defined in Arvados class.
-#'
+#'
#' @usage arv$groups.delete(uuid)
#' @param uuid The UUID of the Group in question.
#' @return Group object.
NULL
#' groups.contents
-#'
+#'
#' groups.contents is a method defined in Arvados class.
-#'
+#'
#' @usage arv$groups.contents(filters = NULL,
#' where = NULL, order = NULL, distinct = NULL,
#' limit = "100", offset = "0", count = "exact",
#' cluster_id = NULL, bypass_federation = NULL,
#' include_trash = NULL, uuid = NULL, recursive = NULL,
#' include = NULL)
-#' @param filters
-#' @param where
-#' @param order
-#' @param distinct
-#' @param limit
-#' @param offset
-#' @param count
+#' @param filters
+#' @param where
+#' @param order
+#' @param distinct
+#' @param limit
+#' @param offset
+#' @param count
#' @param cluster_id List objects on a remote federated cluster instead of the current one.
#' @param bypass_federation bypass federation behavior, list items from local instance database only
#' @param include_trash Include items whose is_trashed attribute is true.
-#' @param uuid
+#' @param uuid
#' @param recursive Include contents from child groups recursively.
#' @param include Include objects referred to by listed field in "included" (only owner_uuid)
#' @return Group object.
NULL
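`groups.contents` with `recursive = TRUE` walks an entire project tree rather than a single level. A sketch, assuming an initialized `arv` client; the UUID is a placeholder:

```r
# Enumerate everything under a project, descending into child groups.
contents <- arv$groups.contents(
    uuid      = "xxxxx-j7d0g-xxxxxxxxxxxxxxx",  # placeholder project UUID
    recursive = TRUE,
    limit     = "100")
```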
#' groups.shared
-#'
+#'
#' groups.shared is a method defined in Arvados class.
-#'
+#'
#' @usage arv$groups.shared(filters = NULL,
#' where = NULL, order = NULL, select = NULL,
#' distinct = NULL, limit = "100", offset = "0",
#' count = "exact", cluster_id = NULL, bypass_federation = NULL,
#' include_trash = NULL, include = NULL)
-#' @param filters
-#' @param where
-#' @param order
-#' @param select
-#' @param distinct
-#' @param limit
-#' @param offset
-#' @param count
+#' @param filters
+#' @param where
+#' @param order
+#' @param select
+#' @param distinct
+#' @param limit
+#' @param offset
+#' @param count
#' @param cluster_id List objects on a remote federated cluster instead of the current one.
#' @param bypass_federation bypass federation behavior, list items from local instance database only
#' @param include_trash Include items whose is_trashed attribute is true.
-#' @param include
+#' @param include
#' @return Group object.
#' @name groups.shared
NULL
#' groups.trash
-#'
+#'
#' groups.trash is a method defined in Arvados class.
-#'
+#'
#' @usage arv$groups.trash(uuid)
-#' @param uuid
+#' @param uuid
#' @return Group object.
#' @name groups.trash
NULL
#' groups.untrash
-#'
+#'
#' groups.untrash is a method defined in Arvados class.
-#'
+#'
#' @usage arv$groups.untrash(uuid)
-#' @param uuid
+#' @param uuid
#' @return Group object.
#' @name groups.untrash
NULL
#' groups.list
-#'
+#'
#' groups.list is a method defined in Arvados class.
-#'
+#'
#' @usage arv$groups.list(filters = NULL,
#' where = NULL, order = NULL, select = NULL,
#' distinct = NULL, limit = "100", offset = "0",
#' count = "exact", cluster_id = NULL, bypass_federation = NULL,
#' include_trash = NULL)
-#' @param filters
-#' @param where
-#' @param order
-#' @param select
-#' @param distinct
-#' @param limit
-#' @param offset
-#' @param count
+#' @param filters
+#' @param where
+#' @param order
+#' @param select
+#' @param distinct
+#' @param limit
+#' @param offset
+#' @param count
#' @param cluster_id List objects on a remote federated cluster instead of the current one.
#' @param bypass_federation bypass federation behavior, list items from local instance database only
#' @param include_trash Include items whose is_trashed attribute is true.
NULL
#' keep_services.get
-#'
+#'
#' keep_services.get is a method defined in Arvados class.
-#'
+#'
#' @usage arv$keep_services.get(uuid)
#' @param uuid The UUID of the KeepService in question.
#' @return KeepService object.
NULL
#' keep_services.create
-#'
+#'
#' keep_services.create is a method defined in Arvados class.
-#'
+#'
#' @usage arv$keep_services.create(keepservice,
#' ensure_unique_name = "false", cluster_id = NULL)
#' @param keepService KeepService object.
NULL
#' keep_services.update
-#'
+#'
#' keep_services.update is a method defined in Arvados class.
-#'
+#'
#' @usage arv$keep_services.update(keepservice,
#' uuid)
#' @param keepService KeepService object.
NULL
#' keep_services.delete
-#'
+#'
#' keep_services.delete is a method defined in Arvados class.
-#'
+#'
#' @usage arv$keep_services.delete(uuid)
#' @param uuid The UUID of the KeepService in question.
#' @return KeepService object.
NULL
#' keep_services.accessible
-#'
+#'
#' keep_services.accessible is a method defined in Arvados class.
-#'
+#'
#' @usage arv$keep_services.accessible(NULL)
#' @return KeepService object.
#' @name keep_services.accessible
NULL
#' keep_services.list
-#'
+#'
#' keep_services.list is a method defined in Arvados class.
-#'
+#'
#' @usage arv$keep_services.list(filters = NULL,
#' where = NULL, order = NULL, select = NULL,
#' distinct = NULL, limit = "100", offset = "0",
#' count = "exact", cluster_id = NULL, bypass_federation = NULL)
-#' @param filters
-#' @param where
-#' @param order
-#' @param select
-#' @param distinct
-#' @param limit
-#' @param offset
-#' @param count
+#' @param filters
+#' @param where
+#' @param order
+#' @param select
+#' @param distinct
+#' @param limit
+#' @param offset
+#' @param count
#' @param cluster_id List objects on a remote federated cluster instead of the current one.
#' @param bypass_federation bypass federation behavior, list items from local instance database only
#' @return KeepServiceList object.
NULL
#' links.get
-#'
+#'
#' links.get is a method defined in Arvados class.
-#'
+#'
#' @usage arv$links.get(uuid)
#' @param uuid The UUID of the Link in question.
#' @return Link object.
NULL
#' links.create
-#'
+#'
#' links.create is a method defined in Arvados class.
-#'
+#'
#' @usage arv$links.create(link, ensure_unique_name = "false",
#' cluster_id = NULL)
#' @param link Link object.
NULL
#' links.update
-#'
+#'
#' links.update is a method defined in Arvados class.
-#'
+#'
#' @usage arv$links.update(link, uuid)
#' @param link Link object.
#' @param uuid The UUID of the Link in question.
NULL
#' links.delete
-#'
+#'
#' links.delete is a method defined in Arvados class.
-#'
+#'
#' @usage arv$links.delete(uuid)
#' @param uuid The UUID of the Link in question.
#' @return Link object.
NULL
#' links.list
-#'
+#'
#' links.list is a method defined in Arvados class.
-#'
+#'
#' @usage arv$links.list(filters = NULL,
#' where = NULL, order = NULL, select = NULL,
#' distinct = NULL, limit = "100", offset = "0",
#' count = "exact", cluster_id = NULL, bypass_federation = NULL)
-#' @param filters
-#' @param where
-#' @param order
-#' @param select
-#' @param distinct
-#' @param limit
-#' @param offset
-#' @param count
+#' @param filters
+#' @param where
+#' @param order
+#' @param select
+#' @param distinct
+#' @param limit
+#' @param offset
+#' @param count
#' @param cluster_id List objects on a remote federated cluster instead of the current one.
#' @param bypass_federation bypass federation behavior, list items from local instance database only
#' @return LinkList object.
NULL
#' links.get_permissions
-#'
+#'
#' links.get_permissions is a method defined in Arvados class.
-#'
+#'
#' @usage arv$links.get_permissions(uuid)
-#' @param uuid
+#' @param uuid
#' @return Link object.
#' @name links.get_permissions
NULL
#' logs.get
-#'
+#'
#' logs.get is a method defined in Arvados class.
-#'
+#'
#' @usage arv$logs.get(uuid)
#' @param uuid The UUID of the Log in question.
#' @return Log object.
NULL
#' logs.create
-#'
+#'
#' logs.create is a method defined in Arvados class.
-#'
+#'
#' @usage arv$logs.create(log, ensure_unique_name = "false",
#' cluster_id = NULL)
#' @param log Log object.
NULL
#' logs.update
-#'
+#'
#' logs.update is a method defined in Arvados class.
-#'
+#'
#' @usage arv$logs.update(log, uuid)
#' @param log Log object.
#' @param uuid The UUID of the Log in question.
NULL
#' logs.delete
-#'
+#'
#' logs.delete is a method defined in Arvados class.
-#'
+#'
#' @usage arv$logs.delete(uuid)
#' @param uuid The UUID of the Log in question.
#' @return Log object.
NULL
#' logs.list
-#'
+#'
#' logs.list is a method defined in Arvados class.
-#'
+#'
#' @usage arv$logs.list(filters = NULL, where = NULL,
#' order = NULL, select = NULL, distinct = NULL,
#' limit = "100", offset = "0", count = "exact",
#' cluster_id = NULL, bypass_federation = NULL)
-#' @param filters
-#' @param where
-#' @param order
-#' @param select
-#' @param distinct
-#' @param limit
-#' @param offset
-#' @param count
+#' @param filters
+#' @param where
+#' @param order
+#' @param select
+#' @param distinct
+#' @param limit
+#' @param offset
+#' @param count
#' @param cluster_id List objects on a remote federated cluster instead of the current one.
#' @param bypass_federation bypass federation behavior, list items from local instance database only
#' @return LogList object.
NULL
#' users.get
-#'
+#'
#' users.get is a method defined in Arvados class.
-#'
+#'
#' @usage arv$users.get(uuid)
#' @param uuid The UUID of the User in question.
#' @return User object.
NULL
#' users.create
-#'
+#'
#' users.create is a method defined in Arvados class.
-#'
+#'
#' @usage arv$users.create(user, ensure_unique_name = "false",
#' cluster_id = NULL)
#' @param user User object.
NULL
#' users.update
-#'
+#'
#' users.update is a method defined in Arvados class.
-#'
+#'
#' @usage arv$users.update(user, uuid, bypass_federation = NULL)
#' @param user User object.
#' @param uuid The UUID of the User in question.
-#' @param bypass_federation
+#' @param bypass_federation
#' @return User object.
#' @name users.update
NULL
#' users.delete
-#'
+#'
#' users.delete is a method defined in Arvados class.
-#'
+#'
#' @usage arv$users.delete(uuid)
#' @param uuid The UUID of the User in question.
#' @return User object.
NULL
#' users.current
-#'
+#'
#' users.current is a method defined in Arvados class.
-#'
+#'
#' @usage arv$users.current(NULL)
#' @return User object.
#' @name users.current
NULL
#' users.system
-#'
+#'
#' users.system is a method defined in Arvados class.
-#'
+#'
#' @usage arv$users.system(NULL)
#' @return User object.
#' @name users.system
NULL
#' users.activate
-#'
+#'
#' users.activate is a method defined in Arvados class.
-#'
+#'
#' @usage arv$users.activate(uuid)
-#' @param uuid
+#' @param uuid
#' @return User object.
#' @name users.activate
NULL
#' users.setup
-#'
+#'
#' users.setup is a method defined in Arvados class.
-#'
+#'
#' @usage arv$users.setup(uuid = NULL, user = NULL,
#' repo_name = NULL, vm_uuid = NULL, send_notification_email = "false")
-#' @param uuid
-#' @param user
-#' @param repo_name
-#' @param vm_uuid
-#' @param send_notification_email
+#' @param uuid
+#' @param user
+#' @param repo_name
+#' @param vm_uuid
+#' @param send_notification_email
#' @return User object.
#' @name users.setup
NULL
#' users.unsetup
-#'
+#'
#' users.unsetup is a method defined in Arvados class.
-#'
+#'
#' @usage arv$users.unsetup(uuid)
-#' @param uuid
+#' @param uuid
#' @return User object.
#' @name users.unsetup
NULL
-#' users.update_uuid
-#'
-#' users.update_uuid is a method defined in Arvados class.
-#'
-#' @usage arv$users.update_uuid(uuid, new_uuid)
-#' @param uuid
-#' @param new_uuid
-#' @return User object.
-#' @name users.update_uuid
-NULL
-
#' users.merge
-#'
+#'
#' users.merge is a method defined in Arvados class.
-#'
+#'
#' @usage arv$users.merge(new_owner_uuid,
#' new_user_token = NULL, redirect_to_new_user = NULL,
#' old_user_uuid = NULL, new_user_uuid = NULL)
-#' @param new_owner_uuid
-#' @param new_user_token
-#' @param redirect_to_new_user
-#' @param old_user_uuid
-#' @param new_user_uuid
+#' @param new_owner_uuid
+#' @param new_user_token
+#' @param redirect_to_new_user
+#' @param old_user_uuid
+#' @param new_user_uuid
#' @return User object.
#' @name users.merge
NULL
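As a concrete illustration of `users.merge` (an administrative operation; every value below is a placeholder, and this sketch assumes an initialized `arv` client):

```r
# Redirect a duplicate account to a primary one, reassigning owned objects.
arv$users.merge(
    new_owner_uuid       = "xxxxx-j7d0g-xxxxxxxxxxxxxxx",
    old_user_uuid        = "xxxxx-tpzed-xxxxxxxxxxxxxxx",
    new_user_uuid        = "xxxxx-tpzed-yyyyyyyyyyyyyyy",
    redirect_to_new_user = TRUE)
```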
#' users.list
-#'
+#'
#' users.list is a method defined in Arvados class.
-#'
+#'
#' @usage arv$users.list(filters = NULL,
#' where = NULL, order = NULL, select = NULL,
#' distinct = NULL, limit = "100", offset = "0",
#' count = "exact", cluster_id = NULL, bypass_federation = NULL)
-#' @param filters
-#' @param where
-#' @param order
-#' @param select
-#' @param distinct
-#' @param limit
-#' @param offset
-#' @param count
+#' @param filters
+#' @param where
+#' @param order
+#' @param select
+#' @param distinct
+#' @param limit
+#' @param offset
+#' @param count
#' @param cluster_id List objects on a remote federated cluster instead of the current one.
#' @param bypass_federation bypass federation behavior, list items from local instance database only
#' @return UserList object.
NULL
#' repositories.get
-#'
+#'
#' repositories.get is a method defined in Arvados class.
-#'
+#'
#' @usage arv$repositories.get(uuid)
#' @param uuid The UUID of the Repository in question.
#' @return Repository object.
NULL
#' repositories.create
-#'
+#'
#' repositories.create is a method defined in Arvados class.
-#'
+#'
#' @usage arv$repositories.create(repository,
#' ensure_unique_name = "false", cluster_id = NULL)
#' @param repository Repository object.
NULL
#' repositories.update
-#'
+#'
#' repositories.update is a method defined in Arvados class.
-#'
+#'
#' @usage arv$repositories.update(repository,
#' uuid)
#' @param repository Repository object.
NULL
#' repositories.delete
-#'
+#'
#' repositories.delete is a method defined in Arvados class.
-#'
+#'
#' @usage arv$repositories.delete(uuid)
#' @param uuid The UUID of the Repository in question.
#' @return Repository object.
NULL
#' repositories.get_all_permissions
-#'
+#'
#' repositories.get_all_permissions is a method defined in Arvados class.
-#'
+#'
#' @usage arv$repositories.get_all_permissions(NULL)
#' @return Repository object.
#' @name repositories.get_all_permissions
NULL
#' repositories.list
-#'
+#'
#' repositories.list is a method defined in Arvados class.
-#'
+#'
#' @usage arv$repositories.list(filters = NULL,
#' where = NULL, order = NULL, select = NULL,
#' distinct = NULL, limit = "100", offset = "0",
#' count = "exact", cluster_id = NULL, bypass_federation = NULL)
-#' @param filters
-#' @param where
-#' @param order
-#' @param select
-#' @param distinct
-#' @param limit
-#' @param offset
-#' @param count
+#' @param filters
+#' @param where
+#' @param order
+#' @param select
+#' @param distinct
+#' @param limit
+#' @param offset
+#' @param count
#' @param cluster_id List objects on a remote federated cluster instead of the current one.
#' @param bypass_federation bypass federation behavior, list items from local instance database only
#' @return RepositoryList object.
NULL
#' virtual_machines.get
-#'
+#'
#' virtual_machines.get is a method defined in Arvados class.
-#'
+#'
#' @usage arv$virtual_machines.get(uuid)
#' @param uuid The UUID of the VirtualMachine in question.
#' @return VirtualMachine object.
NULL
#' virtual_machines.create
-#'
+#'
#' virtual_machines.create is a method defined in Arvados class.
-#'
+#'
#' @usage arv$virtual_machines.create(virtualmachine,
#' ensure_unique_name = "false", cluster_id = NULL)
#' @param virtualMachine VirtualMachine object.
NULL
#' virtual_machines.update
-#'
+#'
#' virtual_machines.update is a method defined in Arvados class.
-#'
+#'
#' @usage arv$virtual_machines.update(virtualmachine,
#' uuid)
#' @param virtualMachine VirtualMachine object.
NULL
#' virtual_machines.delete
-#'
+#'
#' virtual_machines.delete is a method defined in Arvados class.
-#'
+#'
#' @usage arv$virtual_machines.delete(uuid)
#' @param uuid The UUID of the VirtualMachine in question.
#' @return VirtualMachine object.
NULL
#' virtual_machines.logins
-#'
+#'
#' virtual_machines.logins is a method defined in Arvados class.
-#'
+#'
#' @usage arv$virtual_machines.logins(uuid)
-#' @param uuid
+#' @param uuid
#' @return VirtualMachine object.
#' @name virtual_machines.logins
NULL
#' virtual_machines.get_all_logins
-#'
+#'
#' virtual_machines.get_all_logins is a method defined in Arvados class.
-#'
+#'
#' @usage arv$virtual_machines.get_all_logins(NULL)
#' @return VirtualMachine object.
#' @name virtual_machines.get_all_logins
NULL
#' virtual_machines.list
-#'
+#'
#' virtual_machines.list is a method defined in Arvados class.
-#'
+#'
#' @usage arv$virtual_machines.list(filters = NULL,
#' where = NULL, order = NULL, select = NULL,
#' distinct = NULL, limit = "100", offset = "0",
#' count = "exact", cluster_id = NULL, bypass_federation = NULL)
-#' @param filters
-#' @param where
-#' @param order
-#' @param select
-#' @param distinct
-#' @param limit
-#' @param offset
-#' @param count
+#' @param filters
+#' @param where
+#' @param order
+#' @param select
+#' @param distinct
+#' @param limit
+#' @param offset
+#' @param count
#' @param cluster_id List objects on a remote federated cluster instead of the current one.
#' @param bypass_federation bypass federation behavior, list items from local instance database only
#' @return VirtualMachineList object.
NULL
#' workflows.get
-#'
+#'
#' workflows.get is a method defined in Arvados class.
-#'
+#'
#' @usage arv$workflows.get(uuid)
#' @param uuid The UUID of the Workflow in question.
#' @return Workflow object.
NULL
#' workflows.create
-#'
+#'
#' workflows.create is a method defined in Arvados class.
-#'
+#'
#' @usage arv$workflows.create(workflow,
#' ensure_unique_name = "false", cluster_id = NULL)
#' @param workflow Workflow object.
NULL
#' workflows.update
-#'
+#'
#' workflows.update is a method defined in Arvados class.
-#'
+#'
#' @usage arv$workflows.update(workflow,
#' uuid)
#' @param workflow Workflow object.
NULL
#' workflows.delete
-#'
+#'
#' workflows.delete is a method defined in Arvados class.
-#'
+#'
#' @usage arv$workflows.delete(uuid)
#' @param uuid The UUID of the Workflow in question.
#' @return Workflow object.
NULL
#' workflows.list
-#'
+#'
#' workflows.list is a method defined in Arvados class.
-#'
+#'
#' @usage arv$workflows.list(filters = NULL,
#' where = NULL, order = NULL, select = NULL,
#' distinct = NULL, limit = "100", offset = "0",
#' count = "exact", cluster_id = NULL, bypass_federation = NULL)
-#' @param filters
-#' @param where
-#' @param order
-#' @param select
-#' @param distinct
-#' @param limit
-#' @param offset
-#' @param count
+#' @param filters
+#' @param where
+#' @param order
+#' @param select
+#' @param distinct
+#' @param limit
+#' @param offset
+#' @param count
#' @param cluster_id List objects on a remote federated cluster instead of the current one.
#' @param bypass_federation bypass federation behavior, list items from local instance database only
#' @return WorkflowList object.
NULL
#' user_agreements.get
-#'
+#'
#' user_agreements.get is a method defined in Arvados class.
-#'
+#'
#' @usage arv$user_agreements.get(uuid)
#' @param uuid The UUID of the UserAgreement in question.
#' @return UserAgreement object.
NULL
#' user_agreements.create
-#'
+#'
#' user_agreements.create is a method defined in Arvados class.
-#'
+#'
#' @usage arv$user_agreements.create(useragreement,
#' ensure_unique_name = "false", cluster_id = NULL)
#' @param userAgreement UserAgreement object.
NULL
#' user_agreements.update
-#'
+#'
#' user_agreements.update is a method defined in Arvados class.
-#'
+#'
#' @usage arv$user_agreements.update(useragreement,
#' uuid)
#' @param userAgreement UserAgreement object.
NULL
#' user_agreements.delete
-#'
+#'
#' user_agreements.delete is a method defined in Arvados class.
-#'
+#'
#' @usage arv$user_agreements.delete(uuid)
#' @param uuid The UUID of the UserAgreement in question.
#' @return UserAgreement object.
NULL
#' user_agreements.signatures
-#'
+#'
#' user_agreements.signatures is a method defined in Arvados class.
-#'
+#'
#' @usage arv$user_agreements.signatures(NULL)
#' @return UserAgreement object.
#' @name user_agreements.signatures
NULL
#' user_agreements.sign
-#'
+#'
#' user_agreements.sign is a method defined in Arvados class.
-#'
+#'
#' @usage arv$user_agreements.sign(NULL)
#' @return UserAgreement object.
#' @name user_agreements.sign
NULL
#' user_agreements.list
-#'
+#'
#' user_agreements.list is a method defined in Arvados class.
-#'
+#'
#' @usage arv$user_agreements.list(filters = NULL,
#' where = NULL, order = NULL, select = NULL,
#' distinct = NULL, limit = "100", offset = "0",
#' count = "exact", cluster_id = NULL, bypass_federation = NULL)
-#' @param filters
-#' @param where
-#' @param order
-#' @param select
-#' @param distinct
-#' @param limit
-#' @param offset
-#' @param count
+#' @param filters
+#' @param where
+#' @param order
+#' @param select
+#' @param distinct
+#' @param limit
+#' @param offset
+#' @param count
#' @param cluster_id List objects on a remote federated cluster instead of the current one.
#' @param bypass_federation bypass federation behavior, list items from local instance database only
#' @return UserAgreementList object.
NULL
#' user_agreements.new
-#'
+#'
#' user_agreements.new is a method defined in Arvados class.
-#'
+#'
#' @usage arv$user_agreements.new(NULL)
#' @return UserAgreement object.
#' @name user_agreements.new
NULL
#' configs.get
-#'
+#'
#' configs.get is a method defined in Arvados class.
-#'
+#'
#' @usage arv$configs.get(NULL)
#' @return object.
#' @name configs.get
NULL
#' projects.get
-#'
+#'
#' projects.get is equivalent to the groups.get method.
-#'
+#'
#' @usage arv$projects.get(uuid)
#' @param uuid The UUID of the Group in question.
#' @return Group object.
NULL
#' projects.create
-#'
+#'
#' projects.create wraps the groups.create method, setting the group_class attribute to "project".
-#'
+#'
#' @usage arv$projects.create(group, ensure_unique_name = "false")
#' @param group Group object.
#' @param ensure_unique_name Adjust name to ensure uniqueness instead of returning an error on (owner_uuid, name) collision.
NULL
#' projects.update
-#'
+#'
#' projects.update wraps the groups.update method, setting the group_class attribute to "project".
-#'
+#'
#' @usage arv$projects.update(group, uuid)
#' @param group Group object.
#' @param uuid The UUID of the Group in question.
NULL
#' projects.delete
-#'
+#'
#' projects.delete is equivalent to the groups.delete method.
-#'
+#'
#' @usage arv$projects.delete(uuid)
#' @param uuid The UUID of the Group in question.
#' @return Group object.
NULL
#' projects.list
-#'
+#'
#' projects.list wraps the groups.list method, setting the group_class attribute to "project".
-#'
+#'
#' @usage arv$projects.list(filters = NULL,
#' where = NULL, order = NULL, distinct = NULL,
#' limit = "100", offset = "0", count = "exact",
#' include_trash = NULL, uuid = NULL, recursive = NULL)
-#' @param filters
-#' @param where
-#' @param order
-#' @param distinct
-#' @param limit
-#' @param offset
-#' @param count
+#' @param filters
+#' @param where
+#' @param order
+#' @param distinct
+#' @param limit
+#' @param offset
+#' @param count
#' @param include_trash Include items whose is_trashed attribute is true.
-#' @param uuid
+#' @param uuid
#' @param recursive Include contents from child groups recursively.
#' @return Group object.
#' @name projects.list
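Putting the projects.* wrappers together, a hypothetical end-to-end sketch (assumes an initialized `arv` client; the group is passed as a plain named list, which is an assumption about how the SDK serializes Group objects):

```r
# Create a project, then update its description via the wrapped groups methods.
proj <- arv$projects.create(list(name        = "My analysis",
                                 description = "scratch project"))
arv$projects.update(list(description = "final results"), proj$uuid)
```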
#' \item{}{\code{\link{users.system}}}
#' \item{}{\code{\link{users.unsetup}}}
#' \item{}{\code{\link{users.update}}}
-#' \item{}{\code{\link{users.update_uuid}}}
#' \item{}{\code{\link{virtual_machines.create}}}
#' \item{}{\code{\link{virtual_machines.delete}}}
#' \item{}{\code{\link{virtual_machines.get}}}
{
endPoint <- stringr::str_interp("api_clients/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("api_clients")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(ensure_unique_name = ensure_unique_name,
cluster_id = cluster_id)
-
+
if(length(apiclient) > 0)
- body <- jsonlite::toJSON(list(apiclient = apiclient),
+ body <- jsonlite::toJSON(list(apiclient = apiclient),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("api_clients/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
if(length(apiclient) > 0)
- body <- jsonlite::toJSON(list(apiclient = apiclient),
+ body <- jsonlite::toJSON(list(apiclient = apiclient),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("PUT", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("api_clients/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("DELETE", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("api_clients")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(filters = filters, where = where,
order = order, select = select, distinct = distinct,
limit = limit, offset = offset, count = count,
cluster_id = cluster_id, bypass_federation = bypass_federation)
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("api_client_authorizations/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("api_client_authorizations")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(ensure_unique_name = ensure_unique_name,
cluster_id = cluster_id)
-
+
if(length(apiclientauthorization) > 0)
- body <- jsonlite::toJSON(list(apiclientauthorization = apiclientauthorization),
+ body <- jsonlite::toJSON(list(apiclientauthorization = apiclientauthorization),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("api_client_authorizations/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
if(length(apiclientauthorization) > 0)
- body <- jsonlite::toJSON(list(apiclientauthorization = apiclientauthorization),
+ body <- jsonlite::toJSON(list(apiclientauthorization = apiclientauthorization),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("PUT", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("api_client_authorizations/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("DELETE", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("api_client_authorizations/create_system_auth")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(api_client_id = api_client_id,
scopes = scopes)
-
+
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("api_client_authorizations/current")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("api_client_authorizations")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(filters = filters, where = where,
order = order, select = select, distinct = distinct,
limit = limit, offset = offset, count = count,
cluster_id = cluster_id, bypass_federation = bypass_federation)
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("authorized_keys/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("authorized_keys")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(ensure_unique_name = ensure_unique_name,
cluster_id = cluster_id)
-
+
if(length(authorizedkey) > 0)
- body <- jsonlite::toJSON(list(authorizedkey = authorizedkey),
+ body <- jsonlite::toJSON(list(authorizedkey = authorizedkey),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("authorized_keys/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
if(length(authorizedkey) > 0)
- body <- jsonlite::toJSON(list(authorizedkey = authorizedkey),
+ body <- jsonlite::toJSON(list(authorizedkey = authorizedkey),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("PUT", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("authorized_keys/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("DELETE", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("authorized_keys")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(filters = filters, where = where,
order = order, select = select, distinct = distinct,
limit = limit, offset = offset, count = count,
cluster_id = cluster_id, bypass_federation = bypass_federation)
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("collections/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("collections")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(ensure_unique_name = ensure_unique_name,
cluster_id = cluster_id)
-
+
if(length(collection) > 0)
- body <- jsonlite::toJSON(list(collection = collection),
+ body <- jsonlite::toJSON(list(collection = collection),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("collections/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
if(length(collection) > 0)
- body <- jsonlite::toJSON(list(collection = collection),
+ body <- jsonlite::toJSON(list(collection = collection),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("PUT", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("collections/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("DELETE", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("collections/${uuid}/provenance")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("collections/${uuid}/used_by")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("collections/${uuid}/trash")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("collections/${uuid}/untrash")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("collections")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(filters = filters, where = where,
order = order, select = select, distinct = distinct,
limit = limit, offset = offset, count = count,
cluster_id = cluster_id, bypass_federation = bypass_federation,
include_trash = include_trash, include_old_versions = include_old_versions)
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("containers/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("containers")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(ensure_unique_name = ensure_unique_name,
cluster_id = cluster_id)
-
+
if(length(container) > 0)
- body <- jsonlite::toJSON(list(container = container),
+ body <- jsonlite::toJSON(list(container = container),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("containers/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
if(length(container) > 0)
- body <- jsonlite::toJSON(list(container = container),
+ body <- jsonlite::toJSON(list(container = container),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("PUT", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("containers/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("DELETE", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("containers/${uuid}/auth")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("containers/${uuid}/lock")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("containers/${uuid}/unlock")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("containers/${uuid}/secret_mounts")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("containers/current")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("containers")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(filters = filters, where = where,
order = order, select = select, distinct = distinct,
limit = limit, offset = offset, count = count,
cluster_id = cluster_id, bypass_federation = bypass_federation)
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("container_requests/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("container_requests")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(ensure_unique_name = ensure_unique_name,
cluster_id = cluster_id)
-
+
if(length(containerrequest) > 0)
- body <- jsonlite::toJSON(list(containerrequest = containerrequest),
+ body <- jsonlite::toJSON(list(containerrequest = containerrequest),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("container_requests/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
if(length(containerrequest) > 0)
- body <- jsonlite::toJSON(list(containerrequest = containerrequest),
+ body <- jsonlite::toJSON(list(containerrequest = containerrequest),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("PUT", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("container_requests/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("DELETE", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("container_requests")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(filters = filters, where = where,
order = order, select = select, distinct = distinct,
limit = limit, offset = offset, count = count,
cluster_id = cluster_id, bypass_federation = bypass_federation,
include_trash = include_trash)
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("groups/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("groups")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(ensure_unique_name = ensure_unique_name,
cluster_id = cluster_id, async = async)
-
+
if(length(group) > 0)
- body <- jsonlite::toJSON(list(group = group),
+ body <- jsonlite::toJSON(list(group = group),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("groups/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(async = async)
-
+
if(length(group) > 0)
- body <- jsonlite::toJSON(list(group = group),
+ body <- jsonlite::toJSON(list(group = group),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("PUT", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("groups/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("DELETE", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("groups/contents")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(filters = filters, where = where,
order = order, distinct = distinct, limit = limit,
offset = offset, count = count, cluster_id = cluster_id,
bypass_federation = bypass_federation, include_trash = include_trash,
uuid = uuid, recursive = recursive, include = include)
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("groups/shared")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(filters = filters, where = where,
order = order, select = select, distinct = distinct,
limit = limit, offset = offset, count = count,
cluster_id = cluster_id, bypass_federation = bypass_federation,
include_trash = include_trash, include = include)
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("groups/${uuid}/trash")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("groups/${uuid}/untrash")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("groups")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(filters = filters, where = where,
order = order, select = select, distinct = distinct,
limit = limit, offset = offset, count = count,
cluster_id = cluster_id, bypass_federation = bypass_federation,
include_trash = include_trash)
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("keep_services/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("keep_services")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(ensure_unique_name = ensure_unique_name,
cluster_id = cluster_id)
-
+
if(length(keepservice) > 0)
- body <- jsonlite::toJSON(list(keepservice = keepservice),
+ body <- jsonlite::toJSON(list(keepservice = keepservice),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("keep_services/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
if(length(keepservice) > 0)
- body <- jsonlite::toJSON(list(keepservice = keepservice),
+ body <- jsonlite::toJSON(list(keepservice = keepservice),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("PUT", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("keep_services/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("DELETE", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("keep_services/accessible")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("keep_services")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(filters = filters, where = where,
order = order, select = select, distinct = distinct,
limit = limit, offset = offset, count = count,
cluster_id = cluster_id, bypass_federation = bypass_federation)
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("links/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("links")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(ensure_unique_name = ensure_unique_name,
cluster_id = cluster_id)
-
+
if(length(link) > 0)
- body <- jsonlite::toJSON(list(link = link),
+ body <- jsonlite::toJSON(list(link = link),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("links/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
if(length(link) > 0)
- body <- jsonlite::toJSON(list(link = link),
+ body <- jsonlite::toJSON(list(link = link),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("PUT", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("links/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("DELETE", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("links")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(filters = filters, where = where,
order = order, select = select, distinct = distinct,
limit = limit, offset = offset, count = count,
cluster_id = cluster_id, bypass_federation = bypass_federation)
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("permissions/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("logs/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("logs")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(ensure_unique_name = ensure_unique_name,
cluster_id = cluster_id)
-
+
if(length(log) > 0)
- body <- jsonlite::toJSON(list(log = log),
+ body <- jsonlite::toJSON(list(log = log),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("logs/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
if(length(log) > 0)
- body <- jsonlite::toJSON(list(log = log),
+ body <- jsonlite::toJSON(list(log = log),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("PUT", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("logs/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("DELETE", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("logs")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(filters = filters, where = where,
order = order, select = select, distinct = distinct,
limit = limit, offset = offset, count = count,
cluster_id = cluster_id, bypass_federation = bypass_federation)
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("users/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("users")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(ensure_unique_name = ensure_unique_name,
cluster_id = cluster_id)
-
+
if(length(user) > 0)
- body <- jsonlite::toJSON(list(user = user),
+ body <- jsonlite::toJSON(list(user = user),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("users/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(bypass_federation = bypass_federation)
-
+
if(length(user) > 0)
- body <- jsonlite::toJSON(list(user = user),
+ body <- jsonlite::toJSON(list(user = user),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("PUT", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("users/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("DELETE", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("users/current")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("users/system")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("users/${uuid}/activate")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("users/setup")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(uuid = uuid, user = user,
repo_name = repo_name, vm_uuid = vm_uuid,
send_notification_email = send_notification_email)
-
+
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("users/${uuid}/unsetup")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
- body <- NULL
-
- response <- private$REST$http$exec("POST", url, headers, body,
- queryArgs, private$numRetries)
- resource <- private$REST$httpParser$parseJSONResponse(response)
-
- if(!is.null(resource$errors))
- stop(resource$errors)
-
- resource
- },
- users.update_uuid = function(uuid, new_uuid)
- {
- endPoint <- stringr::str_interp("users/${uuid}/update_uuid")
- url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
- "Content-Type" = "application/json")
- queryArgs <- list(new_uuid = new_uuid)
-
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("users/merge")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(new_owner_uuid = new_owner_uuid,
new_user_token = new_user_token, redirect_to_new_user = redirect_to_new_user,
old_user_uuid = old_user_uuid, new_user_uuid = new_user_uuid)
-
+
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("users")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(filters = filters, where = where,
order = order, select = select, distinct = distinct,
limit = limit, offset = offset, count = count,
cluster_id = cluster_id, bypass_federation = bypass_federation)
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("repositories/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("repositories")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(ensure_unique_name = ensure_unique_name,
cluster_id = cluster_id)
-
+
if(length(repository) > 0)
- body <- jsonlite::toJSON(list(repository = repository),
+ body <- jsonlite::toJSON(list(repository = repository),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("repositories/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
if(length(repository) > 0)
- body <- jsonlite::toJSON(list(repository = repository),
+ body <- jsonlite::toJSON(list(repository = repository),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("PUT", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("repositories/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("DELETE", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("repositories/get_all_permissions")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("repositories")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(filters = filters, where = where,
order = order, select = select, distinct = distinct,
limit = limit, offset = offset, count = count,
cluster_id = cluster_id, bypass_federation = bypass_federation)
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("virtual_machines/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("virtual_machines")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(ensure_unique_name = ensure_unique_name,
cluster_id = cluster_id)
-
+
if(length(virtualmachine) > 0)
- body <- jsonlite::toJSON(list(virtualmachine = virtualmachine),
+ body <- jsonlite::toJSON(list(virtualmachine = virtualmachine),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("virtual_machines/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
if(length(virtualmachine) > 0)
- body <- jsonlite::toJSON(list(virtualmachine = virtualmachine),
+ body <- jsonlite::toJSON(list(virtualmachine = virtualmachine),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("PUT", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("virtual_machines/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("DELETE", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("virtual_machines/${uuid}/logins")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("virtual_machines/get_all_logins")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("virtual_machines")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(filters = filters, where = where,
order = order, select = select, distinct = distinct,
limit = limit, offset = offset, count = count,
cluster_id = cluster_id, bypass_federation = bypass_federation)
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("workflows/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("workflows")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(ensure_unique_name = ensure_unique_name,
cluster_id = cluster_id)
-
+
if(length(workflow) > 0)
- body <- jsonlite::toJSON(list(workflow = workflow),
+ body <- jsonlite::toJSON(list(workflow = workflow),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("workflows/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
if(length(workflow) > 0)
- body <- jsonlite::toJSON(list(workflow = workflow),
+ body <- jsonlite::toJSON(list(workflow = workflow),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("PUT", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("workflows/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("DELETE", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("workflows")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(filters = filters, where = where,
order = order, select = select, distinct = distinct,
limit = limit, offset = offset, count = count,
cluster_id = cluster_id, bypass_federation = bypass_federation)
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("user_agreements/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("user_agreements")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(ensure_unique_name = ensure_unique_name,
cluster_id = cluster_id)
-
+
if(length(useragreement) > 0)
- body <- jsonlite::toJSON(list(useragreement = useragreement),
+ body <- jsonlite::toJSON(list(useragreement = useragreement),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("user_agreements/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
if(length(useragreement) > 0)
- body <- jsonlite::toJSON(list(useragreement = useragreement),
+ body <- jsonlite::toJSON(list(useragreement = useragreement),
auto_unbox = TRUE)
else
body <- NULL
-
+
response <- private$REST$http$exec("PUT", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("user_agreements/${uuid}")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("DELETE", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("user_agreements/signatures")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("user_agreements/sign")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("POST", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("user_agreements")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- list(filters = filters, where = where,
order = order, select = select, distinct = distinct,
limit = limit, offset = offset, count = count,
cluster_id = cluster_id, bypass_federation = bypass_federation)
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("user_agreements/new")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
{
endPoint <- stringr::str_interp("config")
url <- paste0(private$host, endPoint)
- headers <- list(Authorization = paste("Bearer", private$token),
+ headers <- list(Authorization = paste("Bearer", private$token),
"Content-Type" = "application/json")
queryArgs <- NULL
-
+
body <- NULL
-
+
response <- private$REST$http$exec("GET", url, headers, body,
queryArgs, private$numRetries)
resource <- private$REST$httpParser$parseJSONResponse(response)
-
+
if(!is.null(resource$errors))
stop(resource$errors)
-
+
resource
},
\item{}{\code{\link{users.system}}}
\item{}{\code{\link{users.unsetup}}}
\item{}{\code{\link{users.update}}}
- \item{}{\code{\link{users.update_uuid}}}
\item{}{\code{\link{virtual_machines.create}}}
\item{}{\code{\link{virtual_machines.delete}}}
\item{}{\code{\link{virtual_machines.get}}}
+++ /dev/null
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/Arvados.R
-\name{users.update_uuid}
-\alias{users.update_uuid}
-\title{users.update_uuid}
-\usage{
-arv$users.update_uuid(uuid, new_uuid)
-}
-\arguments{
-\item{uuid}{}
-
-\item{new_uuid}{}
-}
-\value{
-User object.
-}
-\description{
-users.update_uuid is a method defined in Arvados class.
-}
s.required_ruby_version = '>= 2.1.0'
s.add_runtime_dependency 'arvados', '>= 1.4.1.20190320201707'
# Our google-api-client dependency used to be < 0.9, but that could be
- # satisfied by the buggy 0.9.pre*. https://dev.arvados.org/issues/9213
- s.add_runtime_dependency 'arvados-google-api-client', '~> 0.6', '>= 0.6.3', '<0.8.9'
+ # satisfied by the buggy 0.9.pre*, cf. https://dev.arvados.org/issues/9213
+ # We need at least version 0.8.7.3, cf. https://dev.arvados.org/issues/15673
+ s.add_runtime_dependency('arvados-google-api-client', '>= 0.8.7.3', '< 0.8.9')
s.add_runtime_dependency 'activesupport', '>= 3.2.13', '< 5.3'
s.add_runtime_dependency 'json', '>= 1.7.7', '<3'
s.add_runtime_dependency 'optimist', '~> 3.0'
s.add_runtime_dependency 'oj', '< 3.10.9'
s.add_runtime_dependency 'curb', '~> 0.8'
s.add_runtime_dependency 'launchy', '< 2.5'
- # arvados-google-api-client 0.8.7.2 is incompatible with faraday 0.16.2
- s.add_dependency('faraday', '< 0.16')
s.homepage =
'https://arvados.org'
end
# Arvados cli client
#
-# Ward Vandewege <ward@curoverse.com>
+# Ward Vandewege <ward@curii.com>
require 'fileutils'
require 'shellwords'
end
-subcommands = %w(copy create edit get keep pipeline run tag ws)
+subcommands = %w(copy create edit get keep tag ws)
def exec_bin bin, opts
bin_path = `which #{bin.shellescape}`.strip
arv_edit client, arvados, global_opts, remaining_opts
when 'get'
arv_get client, arvados, global_opts, remaining_opts
- when 'copy', 'tag', 'ws', 'run'
+ when 'copy', 'tag', 'ws'
exec_bin "arv-#{subcommand}", remaining_opts
when 'keep'
@sub = remaining_opts.shift
import cwltool.workflow
import cwltool.process
import cwltool.argparser
+from cwltool.errors import WorkflowException
from cwltool.process import shortname, UnsupportedRequirement, use_custom_schema
from cwltool.utils import adjustFileObjs, adjustDirObjs, get_listing
help="Enable loading and running development versions "
"of the CWL standards.", default=False)
parser.add_argument('--storage-classes', default="default",
- help="Specify comma separated list of storage classes to be used when saving workflow output to Keep.")
+ help="Specify comma separated list of storage classes to be used when saving final workflow output to Keep.")
+ parser.add_argument('--intermediate-storage-classes', default="default",
+ help="Specify comma separated list of storage classes to be used when saving intermediate workflow output to Keep.")
parser.add_argument("--intermediate-output-ttl", type=int, metavar="N",
help="If N > 0, intermediate output collections will be trashed N seconds after creation. Default is 0 (don't trash).",
help=argparse.SUPPRESS)
parser.add_argument("--thread-count", type=int,
- default=4, help="Number of threads to use for job submit and output collection.")
+ default=0, help="Number of threads to use for job submit and output collection.")
parser.add_argument("--http-timeout", type=int,
default=5*60, dest="http_timeout", help="API request timeout in seconds. Default is 300 seconds (5 minutes).")
"http://commonwl.org/cwltool#LoadListingRequirement",
"http://arvados.org/cwl#IntermediateOutput",
"http://arvados.org/cwl#ReuseRequirement",
- "http://arvados.org/cwl#ClusterTarget"
+ "http://arvados.org/cwl#ClusterTarget",
+ "http://arvados.org/cwl#OutputStorageClass",
+ "http://arvados.org/cwl#ProcessProperties"
])
def exit_signal_handler(sigcode, frame):
job_order_object = None
arvargs = parser.parse_args(args)
- if len(arvargs.storage_classes.strip().split(',')) > 1:
- logger.error(str(u"Multiple storage classes are not supported currently."))
- return 1
-
arvargs.use_container = True
arvargs.relax_path_checks = True
arvargs.print_supported_versions = False
api_client.users().current().execute()
if keep_client is None:
keep_client = arvados.keep.KeepClient(api_client=api_client, num_retries=4)
- executor = ArvCwlExecutor(api_client, arvargs, keep_client=keep_client, num_retries=4)
+ executor = ArvCwlExecutor(api_client, arvargs, keep_client=keep_client, num_retries=4, stdout=stdout)
+ except WorkflowException as e:
+ logger.error(e, exc_info=(sys.exc_info()[1] if arvargs.debug else False))
+ return 1
except Exception:
logger.exception("Error creating the Arvados CWL Executor")
return 1
project_uuid:
type: string?
doc: The project that will own the container requests and intermediate collections
+
+
+- name: OutputStorageClass
+ type: record
+ extends: cwl:ProcessRequirement
+ inVocab: false
+  doc: |
+    Specify the storage classes to be used for intermediate and final output
+  fields:
+    class:
+      type: string
+      doc: "Always 'arv:OutputStorageClass'"
+ jsonldPredicate:
+ _id: "@type"
+ _type: "@vocab"
+ intermediateStorageClass:
+ type:
+ - "null"
+ - string
+ - type: array
+ items: string
+      doc: One or more storage classes
+ finalStorageClass:
+ type:
+ - "null"
+ - string
+ - type: array
+ items: string
+      doc: One or more storage classes
+
+- type: record
+ name: PropertyDef
+ doc: |
+ Define a property that will be set on the submitted container
+ request associated with this workflow or step.
+ fields:
+ - name: propertyName
+ type: string
+ doc: The property key
+ - name: propertyValue
+ type: [Any]
+ doc: The property value
+
+
+- name: ProcessProperties
+ type: record
+ extends: cwl:ProcessRequirement
+ inVocab: false
+ doc: |
+ Specify metadata properties that will be set on the submitted
+ container request associated with this workflow or step.
+ fields:
+ class:
+ type: string
+ doc: "Always 'arv:ProcessProperties"
+ jsonldPredicate:
+ _id: "@type"
+ _type: "@vocab"
+ processProperties:
+ type: PropertyDef[]
+ jsonldPredicate:
+ mapSubject: propertyName
+ mapPredicate: propertyValue
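The `mapSubject: propertyName` / `mapPredicate: propertyValue` declaration above lets a workflow write `processProperties` either as a list of records or as a plain mapping. A minimal Python sketch of the normalization schema-salad applies (the function name is illustrative, not part of the runner):

```python
def expand_process_properties(props):
    # With mapSubject/mapPredicate, schema-salad normalizes a plain
    # mapping into the list-of-records form that the runner iterates
    # over when filling container_request["properties"]. A list is
    # already in canonical form and passes through unchanged.
    if isinstance(props, dict):
        return [{"propertyName": k, "propertyValue": v}
                for k, v in props.items()]
    return props
```

So `{"sample": "wgs1"}` and `[{"propertyName": "sample", "propertyValue": "wgs1"}]` describe the same hint.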
project_uuid:
type: string?
doc: The project that will own the container requests and intermediate collections
+
+- name: OutputStorageClass
+ type: record
+ extends: cwl:ProcessRequirement
+ inVocab: false
+  doc: |
+    Specify the storage classes to be used for intermediate and final output
+  fields:
+    class:
+      type: string
+      doc: "Always 'arv:OutputStorageClass'"
+ jsonldPredicate:
+ _id: "@type"
+ _type: "@vocab"
+ intermediateStorageClass:
+ type:
+ - "null"
+ - string
+ - type: array
+ items: string
+      doc: One or more storage classes
+ finalStorageClass:
+ type:
+ - "null"
+ - string
+ - type: array
+ items: string
+      doc: One or more storage classes
+
+- type: record
+ name: PropertyDef
+ doc: |
+ Define a property that will be set on the submitted container
+ request associated with this workflow or step.
+ fields:
+ - name: propertyName
+ type: string
+ doc: The property key
+ - name: propertyValue
+ type: [Any]
+ doc: The property value
+
+
+- name: ProcessProperties
+ type: record
+ extends: cwl:ProcessRequirement
+ inVocab: false
+ doc: |
+ Specify metadata properties that will be set on the submitted
+ container request associated with this workflow or step.
+ fields:
+ class:
+ type: string
+ doc: "Always 'arv:ProcessProperties"
+ jsonldPredicate:
+ _id: "@type"
+ _type: "@vocab"
+ processProperties:
+ type: PropertyDef[]
+ jsonldPredicate:
+ mapSubject: propertyName
+ mapPredicate: propertyValue
project_uuid:
type: string?
doc: The project that will own the container requests and intermediate collections
+
+
+- name: OutputStorageClass
+ type: record
+ extends: cwl:ProcessRequirement
+ inVocab: false
+  doc: |
+    Specify the storage classes to be used for intermediate and final output
+  fields:
+    class:
+      type: string
+      doc: "Always 'arv:OutputStorageClass'"
+ jsonldPredicate:
+ _id: "@type"
+ _type: "@vocab"
+ intermediateStorageClass:
+ type:
+ - "null"
+ - string
+ - type: array
+ items: string
+      doc: One or more storage classes
+ finalStorageClass:
+ type:
+ - "null"
+ - string
+ - type: array
+ items: string
+      doc: One or more storage classes
+
+
+- type: record
+ name: PropertyDef
+ doc: |
+ Define a property that will be set on the submitted container
+ request associated with this workflow or step.
+ fields:
+ - name: propertyName
+ type: string
+ doc: The property key
+ - name: propertyValue
+ type: [Any]
+ doc: The property value
+
+
+- name: ProcessProperties
+ type: record
+ extends: cwl:ProcessRequirement
+ inVocab: false
+ doc: |
+ Specify metadata properties that will be set on the submitted
+ container request associated with this workflow or step.
+ fields:
+ class:
+ type: string
+ doc: "Always 'arv:ProcessProperties"
+ jsonldPredicate:
+ _id: "@type"
+ _type: "@vocab"
+ processProperties:
+ type: PropertyDef[]
+ jsonldPredicate:
+ mapSubject: propertyName
+ mapPredicate: propertyValue
from .arvdocker import arv_docker_get_image
from . import done
-from .runner import Runner, arvados_jobs_image, packed_workflow, trim_anonymous_location, remove_redundant_fields
+from .runner import Runner, arvados_jobs_image, packed_workflow, trim_anonymous_location, remove_redundant_fields, make_builder
from .fsaccess import CollectionFetcher
from .pathmapper import NoFollowPathMapper, trim_listing
from .perf import Perf
def update_pipeline_component(self, r):
pass
+ def _required_env(self):
+ env = {}
+ env["HOME"] = self.outdir
+ env["TMPDIR"] = self.tmpdir
+ return env
+
def run(self, runtimeContext):
# ArvadosCommandTool subclasses from cwltool.CommandLineTool,
# which calls makeJobRunner() to get a new ArvadosContainer
runtimeContext = self.job_runtime
- container_request = {
- "command": self.command_line,
- "name": self.name,
- "output_path": self.outdir,
- "cwd": self.outdir,
- "priority": runtimeContext.priority,
- "state": "Committed",
- "properties": {},
- }
+ if runtimeContext.submit_request_uuid:
+ container_request = self.arvrunner.api.container_requests().get(
+ uuid=runtimeContext.submit_request_uuid
+ ).execute(num_retries=self.arvrunner.num_retries)
+ else:
+ container_request = {}
+
+ container_request["command"] = self.command_line
+ container_request["name"] = self.name
+ container_request["output_path"] = self.outdir
+ container_request["cwd"] = self.outdir
+ container_request["priority"] = runtimeContext.priority
+ container_request["state"] = "Committed"
+ container_request.setdefault("properties", {})
+
runtime_constraints = {}
if runtimeContext.project_uuid:
"path": "%s/%s" % (self.outdir, self.stdout)}
(docker_req, docker_is_req) = self.get_requirement("DockerRequirement")
- if not docker_req:
- docker_req = {"dockerImageId": "arvados/jobs:"+__version__}
container_request["container_image"] = arv_docker_get_image(self.arvrunner.api,
docker_req,
if self.output_ttl < 0:
raise WorkflowException("Invalid value %d for output_ttl, cannot be less than zero" % container_request["output_ttl"])
+
+ if self.arvrunner.api._rootDesc["revision"] >= "20210628":
+ storage_class_req, _ = self.get_requirement("http://arvados.org/cwl#OutputStorageClass")
+ if storage_class_req and storage_class_req.get("intermediateStorageClass"):
+ container_request["output_storage_classes"] = aslist(storage_class_req["intermediateStorageClass"])
+ else:
+ container_request["output_storage_classes"] = runtimeContext.intermediate_storage_classes.strip().split(",")
+
if self.timelimit is not None and self.timelimit > 0:
scheduling_parameters["max_run_time"] = self.timelimit
enable_reuse = reuse_req["enableReuse"]
container_request["use_existing"] = enable_reuse
+ properties_req, _ = self.get_requirement("http://arvados.org/cwl#ProcessProperties")
+ if properties_req:
+ for pr in properties_req["processProperties"]:
+ container_request["properties"][pr["propertyName"]] = self.builder.do_eval(pr["propertyValue"])
+
if runtimeContext.runnerjob.startswith("arvwf:"):
wfuuid = runtimeContext.runnerjob[6:runtimeContext.runnerjob.index("#")]
wfrecord = self.arvrunner.api.workflows().get(uuid=wfuuid).execute(num_retries=self.arvrunner.num_retries)
if self.embedded_tool.tool.get("id", "").startswith("arvwf:"):
container_req["properties"]["template_uuid"] = self.embedded_tool.tool["id"][6:33]
+ properties_req, _ = self.embedded_tool.get_requirement("http://arvados.org/cwl#ProcessProperties")
+ if properties_req:
+ builder = make_builder(self.job_order, self.embedded_tool.hints, self.embedded_tool.requirements, runtimeContext, self.embedded_tool.metadata)
+ for pr in properties_req["processProperties"]:
+ container_req["properties"][pr["propertyName"]] = builder.do_eval(pr["propertyValue"])
# --local means execute the workflow instead of submitting a container request
# --api=containers means use the containers API
if runtimeContext.debug:
command.append("--debug")
- if runtimeContext.storage_classes != "default":
+ if runtimeContext.storage_classes != "default" and runtimeContext.storage_classes:
command.append("--storage-classes=" + runtimeContext.storage_classes)
+ if runtimeContext.intermediate_storage_classes != "default" and runtimeContext.intermediate_storage_classes:
+ command.append("--intermediate-storage-classes=" + runtimeContext.intermediate_storage_classes)
+
if self.on_error:
command.append("--on-error=" + self.on_error)
if not images:
# Fetch Docker image if necessary.
try:
- cwltool.docker.DockerCommandLineJob.get_image(dockerRequirement, pull_image,
+ result = cwltool.docker.DockerCommandLineJob.get_image(dockerRequirement, pull_image,
force_pull, tmp_outdir_prefix)
+ if not result:
+ raise WorkflowException("Docker image '%s' not available" % dockerRequirement["dockerImageId"])
except OSError as e:
raise WorkflowException("While trying to get Docker image '%s', failed to execute 'docker': %s" % (dockerRequirement["dockerImageId"], e))
from .arvcontainer import ArvadosContainer
from .pathmapper import ArvPathMapper
from .runner import make_builder
+from ._version import __version__
from functools import partial
from schema_salad.sourceline import SourceLine
from cwltool.errors import WorkflowException
runtimeContext.submit_runner_cluster not in arvrunner.api._rootDesc["remoteHosts"] and
runtimeContext.submit_runner_cluster != arvrunner.api._rootDesc["uuidPrefix"]):
raise WorkflowException("Unknown or invalid cluster id '%s' known remote clusters are %s" % (runtimeContext.submit_runner_cluster,
- ", ".join(list(arvrunner.api._rootDesc["remoteHosts"].keys()))))
+ ", ".join(list(arvrunner.api._rootDesc["remoteHosts"].keys()))))
+ if runtimeContext.project_uuid:
+ cluster_target = runtimeContext.submit_runner_cluster or arvrunner.api._rootDesc["uuidPrefix"]
+ if not runtimeContext.project_uuid.startswith(cluster_target):
+ raise WorkflowException("Project uuid '%s' should start with id of target cluster '%s'" % (runtimeContext.project_uuid, cluster_target))
+
+ try:
+ if runtimeContext.project_uuid[5:12] == '-tpzed-':
+ arvrunner.api.users().get(uuid=runtimeContext.project_uuid).execute()
+ else:
+ proj = arvrunner.api.groups().get(uuid=runtimeContext.project_uuid).execute()
+ if proj["group_class"] != "project":
+ raise Exception("not a project, group_class is '%s'" % (proj["group_class"]))
+ except Exception as e:
+ raise WorkflowException("Invalid project uuid '%s': %s" % (runtimeContext.project_uuid, e))
+
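The project validation added above leans on the fixed Arvados UUID layout. A small sketch of that layout check, using made-up uuids (not a substitute for the API lookup, which also verifies the record actually exists):

```python
def uuid_parts(uuid):
    # Arvados uuids are "<cluster>-<type>-<node>", e.g.
    # "zzzzz-tpzed-0123456789abcde". The 5-character cluster id says
    # which cluster owns the object; the infix says what kind of
    # object it is ("tpzed" = user, "j7d0g" = group/project).
    cluster, infix, _ = uuid.split("-")
    return cluster, infix

def looks_like_user(uuid):
    # Mirrors the runner's uuid[5:12] == '-tpzed-' test: user
    # records are looked up via users(), anything else via groups().
    return uuid_parts(uuid)[1] == "tpzed"
```

This is also why a `--project-uuid` must start with the target cluster's id: the prefix names the cluster that owns the project.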
def set_cluster_target(tool, arvrunner, builder, runtimeContext):
cluster_target_req = None
for field in ("hints", "requirements"):
def __init__(self, arvrunner, toolpath_object, loadingContext):
super(ArvadosCommandTool, self).__init__(toolpath_object, loadingContext)
+
+ (docker_req, docker_is_req) = self.get_requirement("DockerRequirement")
+ if not docker_req:
+ self.hints.append({"class": "DockerRequirement",
+ "dockerPull": "arvados/jobs:"+__version__})
+
self.arvrunner = arvrunner
def make_job_runner(self, runtimeContext):
from cwltool.load_tool import fetch_document, resolve_and_validate_document
from cwltool.process import shortname
from cwltool.workflow import Workflow, WorkflowException, WorkflowStep
-from cwltool.utils import adjustFileObjs, adjustDirObjs, visit_class
+from cwltool.utils import adjustFileObjs, adjustDirObjs, visit_class, normalizeFilesDirs
from cwltool.context import LoadingContext
import ruamel.yaml as yaml
discover_secondary_files(self.arvrunner.fs_access, builder,
self.tool["inputs"], joborder)
+ normalizeFilesDirs(joborder)
with Perf(metrics, "subworkflow upload_deps"):
upload_dependencies(self.arvrunner,
self.wait = True
self.cwl_runner_job = None
self.storage_classes = "default"
+ self.intermediate_storage_classes = "default"
self.current_container = None
self.http_timeout = 300
self.submit_runner_cluster = None
if self.submit_request_uuid:
self.submit_runner_cluster = self.submit_request_uuid[0:5]
+
+ def get_outdir(self) -> str:
+ """Return self.outdir or create one with self.tmp_outdir_prefix."""
+ return self.outdir
+
+ def get_tmpdir(self) -> str:
+ """Return self.tmpdir or create one with self.tmpdir_prefix."""
+ return self.tmpdir
+
+ def create_tmpdir(self) -> str:
+ """Return self.tmpdir or create one with self.tmpdir_prefix."""
+ return self.tmpdir
from ._version import __version__
from cwltool.process import shortname, UnsupportedRequirement, use_custom_schema
-from cwltool.utils import adjustFileObjs, adjustDirObjs, get_listing, visit_class
+from cwltool.utils import adjustFileObjs, adjustDirObjs, get_listing, visit_class, aslist
from cwltool.command_line_tool import compute_checksums
from cwltool.load_tool import load_tool
arvargs=None,
keep_client=None,
num_retries=4,
- thread_count=4):
+ thread_count=4,
+ stdout=sys.stdout):
if arvargs is None:
arvargs = argparse.Namespace()
self.should_estimate_cache_size = True
self.fs_access = None
self.secret_store = None
+ self.stdout = stdout
if keep_client is not None:
self.keep_client = keep_client
if runtimeContext.submit_request_uuid and self.work_api != "containers":
raise Exception("--submit-request-uuid requires containers API, but using '{}' api".format(self.work_api))
+ default_storage_classes = ",".join([k for k,v in self.api.config().get("StorageClasses", {"default": {"Default": True}}).items() if v.get("Default") is True])
+ if runtimeContext.storage_classes == "default":
+ runtimeContext.storage_classes = default_storage_classes
+ if runtimeContext.intermediate_storage_classes == "default":
+ runtimeContext.intermediate_storage_classes = default_storage_classes
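The default-resolution added above can be read as: take every class the cluster config marks `Default: true`, falling back to a single class named `default`. A self-contained sketch, assuming the config maps class names to dicts with a `Default` flag (as the cluster `StorageClasses` section does):

```python
def resolve_default_storage_classes(cluster_config):
    # If the cluster config has no StorageClasses section, behave as
    # though a single class named "default" is the default; otherwise
    # join every class explicitly flagged Default: true.
    classes = cluster_config.get("StorageClasses",
                                 {"default": {"Default": True}})
    return ",".join(k for k, v in classes.items()
                    if v.get("Default") is True)
```

The resulting comma-separated string is then substituted wherever the user left `--storage-classes` or `--intermediate-storage-classes` at their literal `"default"` value.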
+
if not runtimeContext.name:
runtimeContext.name = self.name = updated_tool.tool.get("label") or updated_tool.metadata.get("label") or os.path.basename(updated_tool.tool["id"])
loadingContext = self.loadingContext.copy()
loadingContext.do_validate = False
- loadingContext.do_update = False
if submitting:
+ loadingContext.do_update = False
# Document may have been auto-updated. Reload the original
# document with updating disabled because we want to
# submit the document with its original CWL version, not
if existing_uuid or runtimeContext.create_workflow:
# Create a pipeline template or workflow record and exit.
if self.work_api == "containers":
- return (upload_workflow(self, tool, job_order,
+ uuid = upload_workflow(self, tool, job_order,
self.project_uuid,
uuid=existing_uuid,
submit_runner_ram=runtimeContext.submit_runner_ram,
name=runtimeContext.name,
merged_map=merged_map,
- submit_runner_image=runtimeContext.submit_runner_image),
- "success")
+ submit_runner_image=runtimeContext.submit_runner_image)
+ self.stdout.write(uuid + "\n")
+ return (None, "success")
self.apply_reqs(job_order, tool)
if runtimeContext.submit and not runtimeContext.wait:
runnerjob = next(jobiter)
runnerjob.run(runtimeContext)
- return (runnerjob.uuid, "success")
+ self.stdout.write(runnerjob.uuid+"\n")
+ return (None, "success")
current_container = arvados_cwl.util.get_current_container(self.api, self.num_retries, logger)
if current_container:
if self.output_tags is None:
self.output_tags = ""
- storage_classes = runtimeContext.storage_classes.strip().split(",")
+ storage_classes = ""
+ storage_class_req, _ = tool.get_requirement("http://arvados.org/cwl#OutputStorageClass")
+ if storage_class_req and storage_class_req.get("finalStorageClass"):
+ storage_classes = aslist(storage_class_req["finalStorageClass"])
+ else:
+ storage_classes = runtimeContext.storage_classes.strip().split(",")
+
self.final_output, self.final_output_collection = self.make_output_collection(self.output_name, storage_classes, self.output_tags, self.final_output)
self.set_crunch_output()
if p.startswith("keep:") and (arvados.util.keep_locator_pattern.match(p[5:]) or
arvados.util.collection_uuid_pattern.match(p[5:])):
locator = p[5:]
- return (self.collection_cache.get(locator), urllib.parse.unquote(sp[1]) if len(sp) == 2 else None)
+ rest = os.path.normpath(urllib.parse.unquote(sp[1])) if len(sp) == 2 else None
+ return (self.collection_cache.get(locator), rest)
else:
return (None, path)
def glob(self, pattern):
collection, rest = self.get_collection(pattern)
- if collection is not None and not rest:
+ if collection is not None and rest in (None, "", "."):
return [pattern]
patternsegments = rest.split("/")
return sorted(self._match(collection, patternsegments, "keep:" + collection.manifest_locator()))
self.fsaccess = fs_access
self.num_retries = num_retries
- def fetch_text(self, url):
+ def fetch_text(self, url, content_types=None):
if url.startswith("keep:"):
with self.fsaccess.open(url, "r", encoding="utf-8") as f:
return f.read()
outdir="", # type: Text
tmpdir="", # type: Text
stagedir="", # type: Text
- cwlVersion=metadata.get("http://commonwl.org/cwltool#original_cwlVersion") or metadata.get("cwlVersion")
+ cwlVersion=metadata.get("http://commonwl.org/cwltool#original_cwlVersion") or metadata.get("cwlVersion"),
+ container_engine="docker"
)
def search_schemadef(name, reqs):
elif isinstance(pattern, dict):
specs.append(pattern)
elif isinstance(pattern, str):
- specs.append({"pattern": pattern})
+ if builder.cwlVersion == "v1.0":
+ specs.append({"pattern": pattern, "required": True})
+ else:
+ specs.append({"pattern": pattern, "required": sf.get("required")})
else:
raise SourceLine(primary["secondaryFiles"], i, validate.ValidationException).makeError(
"Expression must return list, object, string or null")
for i, sf in enumerate(specs):
if isinstance(sf, dict):
if sf.get("class") == "File":
- pattern = sf["basename"]
+ pattern = None
+ if sf.get("location") is None:
+ raise SourceLine(primary["secondaryFiles"], i, validate.ValidationException).makeError(
+ "File object is missing 'location': %s" % sf)
+ sfpath = sf["location"]
+ required = True
else:
pattern = sf["pattern"]
required = sf.get("required")
raise SourceLine(primary["secondaryFiles"], i, validate.ValidationException).makeError(
"Expression must return list, object, string or null")
- sfpath = substitute(primary["location"], pattern)
+ if pattern is not None:
+ sfpath = substitute(primary["location"], pattern)
+
required = builder.do_eval(required, context=primary)
if fsaccess.exists(sfpath):
- found.append({"location": sfpath, "class": "File"})
+ if pattern is not None:
+ found.append({"location": sfpath, "class": "File"})
+ else:
+ found.append(sf)
elif required:
raise SourceLine(primary["secondaryFiles"], i, validate.ValidationException).makeError(
"Required secondary file '%s' does not exist" % sfpath)
def visit(v, cur_id):
if isinstance(v, dict):
- if v.get("class") in ("CommandLineTool", "Workflow"):
+ if v.get("class") in ("CommandLineTool", "Workflow", "ExpressionTool"):
if tool.metadata["cwlVersion"] == "v1.0" and "id" not in v:
                    raise SourceLine(v, None, Exception).makeError("Embedded process object is missing required 'id' field; add an 'id' or update to cwlVersion: v1.1")
if "id" in v:
if "path" in v and "location" not in v:
v["location"] = v["path"]
del v["path"]
- if "location" in v and not v["location"].startswith("keep:"):
- v["location"] = merged_map[cur_id].resolved[v["location"]]
- if "location" in v and v["location"] in merged_map[cur_id].secondaryFiles:
- v["secondaryFiles"] = merged_map[cur_id].secondaryFiles[v["location"]]
+ if "location" in v and cur_id in merged_map:
+ if v["location"] in merged_map[cur_id].resolved:
+ v["location"] = merged_map[cur_id].resolved[v["location"]]
+ if v["location"] in merged_map[cur_id].secondaryFiles:
+ v["secondaryFiles"] = merged_map[cur_id].secondaryFiles[v["location"]]
if v.get("class") == "DockerRequirement":
v["http://arvados.org/cwl#dockerCollectionPDH"] = arvados_cwl.arvdocker.arv_docker_get_image(arvrunner.api, v, True,
arvrunner.project_uuid,
fpm_depends+=(nodejs)
case "$TARGET" in
- ubuntu1604)
- fpm_depends+=(libcurl3-gnutls)
- ;;
debian* | ubuntu*)
fpm_depends+=(libcurl3-gnutls python3-distutils)
;;
# file to determine what version of cwltool and schema-salad to
# build.
install_requires=[
- 'cwltool==3.0.20201121085451',
- 'schema-salad==7.0.20200612160654',
+ 'cwltool==3.1.20211107152837',
+ 'schema-salad==8.2.20211116214159',
'arvados-python-client{}'.format(pysdk_dep),
'setuptools',
- 'ciso8601 >= 2.0.0'
+ 'ciso8601 >= 2.0.0',
+ 'networkx < 2.6'
],
- extras_require={
- ':os.name=="posix" and python_version<"3"': ['subprocess32 >= 3.5.1'],
- ':python_version<"3"': ['pytz'],
- },
data_files=[
('share/doc/arvados-cwl-runner', ['LICENSE-2.0.txt', 'README.rst']),
],
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: Apache-2.0
+
+cwlVersion: v1.2
+class: CommandLineTool
+inputs: []
+outputs:
+ stuff:
+ type: Directory
+ outputBinding:
+ glob: './foo/'
+requirements:
+ ShellCommandRequirement: {}
+arguments: [{shellQuote: false, valueFrom: "mkdir -p foo && touch baz.txt && touch foo/bar.txt"}]
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: Apache-2.0
+
+cwlVersion: v1.2
+class: CommandLineTool
+inputs: []
+outputs:
+ stuff:
+ type: File
+ outputBinding:
+ glob: './foo/*.txt'
+requirements:
+ ShellCommandRequirement: {}
+arguments: [{shellQuote: false, valueFrom: "mkdir -p foo && touch baz.txt && touch foo/bar.txt"}]
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: Apache-2.0
+
+cwlVersion: v1.2
+class: CommandLineTool
+inputs: []
+outputs:
+ stuff:
+ type: Directory
+ outputBinding:
+ glob: $(runtime.outdir)
+requirements:
+ ShellCommandRequirement: {}
+arguments: [{shellQuote: false, valueFrom: "mkdir -p foo && touch baz.txt && touch foo/bar.txt"}]
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: Apache-2.0
+
+cwlVersion: v1.1
+class: ExpressionTool
+inputs:
+ file1:
+ type: File
+ default:
+ class: File
+ location: keep:f225e6259bdd63bc7240599648dde9f1+97/hg19.fa
+outputs:
+ val: string
+requirements:
+ InlineJavascriptRequirement: {}
+expression: "$({val: inputs.file1.location})"
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: Apache-2.0
+
+sampleName: woble
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: Apache-2.0
+
+$namespaces:
+ sbg: https://www.sevenbridges.com/
+class: "Workflow"
+cwlVersion: v1.1
+label: "check that sbg x/y fields are correctly ignored"
+inputs:
+ - id: sampleName
+ type: string
+ label: Sample name
+ 'sbg:x': -22
+ 'sbg:y': 33.4296875
+outputs:
+ - id: outstr
+ type: string
+ outputSource: step1/outstr
+steps:
+ step1:
+ in:
+ sampleName: sampleName
+ out: [outstr]
+ run:
+ class: CommandLineTool
+ inputs:
+ sampleName: string
+ stdout: out.txt
+ outputs:
+ outstr:
+ type: string
+ outputBinding:
+ glob: out.txt
+ loadContents: true
+ outputEval: $(self[0].contents)
+ arguments: [echo, "-n", "foo", $(inputs.sampleName), "bar"]
"size": 4
tool: 17267-broken-schemas.cwl
doc: "Test issue 17267 - inaccessible $schemas URL is not a fatal error"
+
+- job: null
+ output: {}
+ tool: wf/trick_defaults2.cwl
+ doc: "Test issue 17462 - secondary file objects on file defaults are not resolved"
+
+- job: null
+ output: {
+ "stuff": {
+ "location": "bar.txt",
+ "basename": "bar.txt",
+ "class": "File",
+ "checksum": "sha1$da39a3ee5e6b4b0d3255bfef95601890afd80709",
+ "size": 0
+ }
+ }
+ tool: 17521-dot-slash-glob.cwl
+ doc: "Test issue 17521 - bug with leading './' capturing files in subdirectories"
+
+- job: null
+ output: {
+ "stuff": {
+ "basename": "foo",
+ "class": "Directory",
+ "listing": [
+ {
+ "basename": "bar.txt",
+ "checksum": "sha1$da39a3ee5e6b4b0d3255bfef95601890afd80709",
+ "class": "File",
+ "location": "foo/bar.txt",
+ "size": 0
+ }
+ ],
+ "location": "foo"
+ }
+ }
+ tool: 10380-trailing-slash-dir.cwl
+ doc: "Test issue 10380 - bug with trailing slash when capturing an output directory"
+
+- job: null
+ output: {
+ "stuff": {
+ "basename": "78f3957c41d044352303a3fa326dff1e+102",
+ "class": "Directory",
+ "listing": [
+ {
+ "basename": "baz.txt",
+ "checksum": "sha1$da39a3ee5e6b4b0d3255bfef95601890afd80709",
+ "class": "File",
+ "location": "78f3957c41d044352303a3fa326dff1e+102/baz.txt",
+ "size": 0
+ },
+ {
+ "basename": "foo",
+ "class": "Directory",
+ "listing": [
+ {
+ "basename": "bar.txt",
+ "checksum": "sha1$da39a3ee5e6b4b0d3255bfef95601890afd80709",
+ "class": "File",
+ "location": "78f3957c41d044352303a3fa326dff1e+102/foo/bar.txt",
+ "size": 0
+ }
+ ],
+ "location": "78f3957c41d044352303a3fa326dff1e+102/foo"
+ }
+ ],
+ "location": "78f3957c41d044352303a3fa326dff1e+102"
+ }
+ }
+ tool: 17801-runtime-outdir.cwl
+ doc: "Test issue 17801 - bug using $(runtime.outdir) to capture the output directory"
+
+- job: null
+ output:
+ "val": "keep:f225e6259bdd63bc7240599648dde9f1+97/hg19.fa"
+ tool: 17858-pack-visit-crash.cwl
+  doc: "Test issue 17858 - Keep references in default inputs on ExpressionTool"
+
+- job: 17879-ignore-sbg-fields-job.yml
+ output:
+ "outstr": "foo woble bar"
+ tool: 17879-ignore-sbg-fields.cwl
+  doc: "Test issue 17879 - sbg:x/sbg:y fields are ignored"
def setUp(self):
cwltool.process._names = set()
+ arv_docker_clear_cache()
def helper(self, runner, enable_reuse=True):
document_loader, avsc_names, schema_metadata, metaschema_loader = cwltool.process.get_schema(INTERNAL_VERSION)
runner.ignore_docker_for_reuse = False
runner.intermediate_output_ttl = 0
runner.secret_store = cwltool.secrets.SecretStore()
+ runner.api._rootDesc = {"revision": "20210628"}
keepdocker.return_value = [("zzzzz-4zz18-zzzzzzzzzzzzzz3", "")]
runner.api.collections().get().execute.return_value = {
"baseCommand": "ls",
"arguments": [{"valueFrom": "$(runtime.outdir)"}],
"id": "#",
- "class": "CommandLineTool"
+ "class": "org.w3id.cwl.cwl.CommandLineTool"
})
loadingContext, runtimeContext = self.helper(runner, enable_reuse)
'cwd': '/var/spool/cwl',
'scheduling_parameters': {},
'properties': {},
- 'secret_mounts': {}
+ 'secret_mounts': {},
+ 'output_storage_classes': ["default"]
}))
# The test passes some fields in builder.resources
# For the remaining fields, the defaults will apply: {'cores': 1, 'ram': 1024, 'outdirSize': 1024, 'tmpdirSize': 1024}
@mock.patch("arvados.commands.keepdocker.list_images_in_arv")
def test_resource_requirements(self, keepdocker):
- arv_docker_clear_cache()
runner = mock.MagicMock()
runner.ignore_docker_for_reuse = False
runner.intermediate_output_ttl = 3600
runner.secret_store = cwltool.secrets.SecretStore()
+ runner.api._rootDesc = {"revision": "20210628"}
keepdocker.return_value = [("zzzzz-4zz18-zzzzzzzzzzzzzz3", "")]
runner.api.collections().get().execute.return_value = {
}],
"baseCommand": "ls",
"id": "#",
- "class": "CommandLineTool"
+ "class": "org.w3id.cwl.cwl.CommandLineTool"
})
loadingContext, runtimeContext = self.helper(runner)
'partitions': ['blurb']
},
'properties': {},
- 'secret_mounts': {}
+ 'secret_mounts': {},
+ 'output_storage_classes': ["default"]
}
call_body = call_kwargs.get('body', None)
@mock.patch("arvados.commands.keepdocker.list_images_in_arv")
@mock.patch("arvados.collection.Collection")
def test_initial_work_dir(self, collection_mock, keepdocker):
- arv_docker_clear_cache()
runner = mock.MagicMock()
runner.ignore_docker_for_reuse = False
runner.intermediate_output_ttl = 0
runner.secret_store = cwltool.secrets.SecretStore()
+ runner.api._rootDesc = {"revision": "20210628"}
keepdocker.return_value = [("zzzzz-4zz18-zzzzzzzzzzzzzz3", "")]
runner.api.collections().get().execute.return_value = {
}],
"baseCommand": "ls",
"id": "#",
- "class": "CommandLineTool"
+ "class": "org.w3id.cwl.cwl.CommandLineTool"
})
loadingContext, runtimeContext = self.helper(runner)
'scheduling_parameters': {
},
'properties': {},
- 'secret_mounts': {}
+ 'secret_mounts': {},
+ 'output_storage_classes': ["default"]
}
call_body = call_kwargs.get('body', None)
# Test redirecting stdin/stdout/stderr
@mock.patch("arvados.commands.keepdocker.list_images_in_arv")
def test_redirects(self, keepdocker):
- arv_docker_clear_cache()
-
runner = mock.MagicMock()
runner.ignore_docker_for_reuse = False
runner.intermediate_output_ttl = 0
runner.secret_store = cwltool.secrets.SecretStore()
+ runner.api._rootDesc = {"revision": "20210628"}
keepdocker.return_value = [("zzzzz-4zz18-zzzzzzzzzzzzzz3", "")]
runner.api.collections().get().execute.return_value = {
"stdin": "/keep/99999999999999999999999999999996+99/file.txt",
"arguments": [{"valueFrom": "$(runtime.outdir)"}],
"id": "#",
- "class": "CommandLineTool"
+ "class": "org.w3id.cwl.cwl.CommandLineTool"
})
loadingContext, runtimeContext = self.helper(runner)
'cwd': '/var/spool/cwl',
'scheduling_parameters': {},
'properties': {},
- 'secret_mounts': {}
+ 'secret_mounts': {},
+ 'output_storage_classes': ["default"]
}))
@mock.patch("arvados.collection.Collection")
# Hence the default resources will apply: {'cores': 1, 'ram': 1024, 'outdirSize': 1024, 'tmpdirSize': 1024}
@mock.patch("arvados.commands.keepdocker.list_images_in_arv")
def test_mounts(self, keepdocker):
- arv_docker_clear_cache()
-
runner = mock.MagicMock()
runner.ignore_docker_for_reuse = False
runner.intermediate_output_ttl = 0
runner.secret_store = cwltool.secrets.SecretStore()
+ runner.api._rootDesc = {"revision": "20210628"}
keepdocker.return_value = [("zzzzz-4zz18-zzzzzzzzzzzzzz3", "")]
runner.api.collections().get().execute.return_value = {
"baseCommand": "ls",
"arguments": [{"valueFrom": "$(runtime.outdir)"}],
"id": "#",
- "class": "CommandLineTool"
+ "class": "org.w3id.cwl.cwl.CommandLineTool"
})
loadingContext, runtimeContext = self.helper(runner)
'cwd': '/var/spool/cwl',
'scheduling_parameters': {},
'properties': {},
- 'secret_mounts': {}
+ 'secret_mounts': {},
+ 'output_storage_classes': ["default"]
}))
# The test passes no builder.resources
# Hence the default resources will apply: {'cores': 1, 'ram': 1024, 'outdirSize': 1024, 'tmpdirSize': 1024}
@mock.patch("arvados.commands.keepdocker.list_images_in_arv")
def test_secrets(self, keepdocker):
- arv_docker_clear_cache()
-
runner = mock.MagicMock()
runner.ignore_docker_for_reuse = False
runner.intermediate_output_ttl = 0
runner.secret_store = cwltool.secrets.SecretStore()
+ runner.api._rootDesc = {"revision": "20210628"}
keepdocker.return_value = [("zzzzz-4zz18-zzzzzzzzzzzzzz3", "")]
runner.api.collections().get().execute.return_value = {
document_loader, avsc_names, schema_metadata, metaschema_loader = cwltool.process.get_schema("v1.1")
tool = cmap({"arguments": ["md5sum", "example.conf"],
- "class": "CommandLineTool",
+ "class": "org.w3id.cwl.cwl.CommandLineTool",
"hints": [
{
"class": "http://commonwl.org/cwltool#Secrets",
"content": "username: user\npassword: blorp\n",
"kind": "text"
}
- }
+ },
+ 'output_storage_classes': ["default"]
}))
# The test passes no builder.resources
# Hence the default resources will apply: {'cores': 1, 'ram': 1024, 'outdirSize': 1024, 'tmpdirSize': 1024}
@mock.patch("arvados.commands.keepdocker.list_images_in_arv")
def test_timelimit(self, keepdocker):
- arv_docker_clear_cache()
-
runner = mock.MagicMock()
runner.ignore_docker_for_reuse = False
runner.intermediate_output_ttl = 0
runner.secret_store = cwltool.secrets.SecretStore()
+ runner.api._rootDesc = {"revision": "20210628"}
keepdocker.return_value = [("zzzzz-4zz18-zzzzzzzzzzzzzz3", "")]
runner.api.collections().get().execute.return_value = {
"baseCommand": "ls",
"arguments": [{"valueFrom": "$(runtime.outdir)"}],
"id": "#",
- "class": "CommandLineTool",
+ "class": "org.w3id.cwl.cwl.CommandLineTool",
"hints": [
{
"class": "ToolTimeLimit",
self.assertEqual(42, kwargs['body']['scheduling_parameters'].get('max_run_time'))
+ # The test passes no builder.resources
+ # Hence the default resources will apply: {'cores': 1, 'ram': 1024, 'outdirSize': 1024, 'tmpdirSize': 1024}
+ @mock.patch("arvados.commands.keepdocker.list_images_in_arv")
+ def test_setting_storage_class(self, keepdocker):
+ arv_docker_clear_cache()
+
+ runner = mock.MagicMock()
+ runner.ignore_docker_for_reuse = False
+ runner.intermediate_output_ttl = 0
+ runner.secret_store = cwltool.secrets.SecretStore()
+ runner.api._rootDesc = {"revision": "20210628"}
+
+ keepdocker.return_value = [("zzzzz-4zz18-zzzzzzzzzzzzzz3", "")]
+ runner.api.collections().get().execute.return_value = {
+ "portable_data_hash": "99999999999999999999999999999993+99"}
+
+ tool = cmap({
+ "inputs": [],
+ "outputs": [],
+ "baseCommand": "ls",
+ "arguments": [{"valueFrom": "$(runtime.outdir)"}],
+ "id": "#",
+ "class": "org.w3id.cwl.cwl.CommandLineTool",
+ "hints": [
+ {
+ "class": "http://arvados.org/cwl#OutputStorageClass",
+ "finalStorageClass": ["baz_sc", "qux_sc"],
+ "intermediateStorageClass": ["foo_sc", "bar_sc"]
+ }
+ ]
+ })
+
+ loadingContext, runtimeContext = self.helper(runner, True)
+
+ arvtool = arvados_cwl.ArvadosCommandTool(runner, tool, loadingContext)
+ arvtool.formatgraph = None
+
+ for j in arvtool.job({}, mock.MagicMock(), runtimeContext):
+ j.run(runtimeContext)
+ runner.api.container_requests().create.assert_called_with(
+ body=JsonDiffMatcher({
+ 'environment': {
+ 'HOME': '/var/spool/cwl',
+ 'TMPDIR': '/tmp'
+ },
+ 'name': 'test_run_True',
+ 'runtime_constraints': {
+ 'vcpus': 1,
+ 'ram': 1073741824
+ },
+ 'use_existing': True,
+ 'priority': 500,
+ 'mounts': {
+ '/tmp': {'kind': 'tmp',
+ "capacity": 1073741824
+ },
+ '/var/spool/cwl': {'kind': 'tmp',
+ "capacity": 1073741824 }
+ },
+ 'state': 'Committed',
+ 'output_name': 'Output for step test_run_True',
+ 'owner_uuid': 'zzzzz-8i9sb-zzzzzzzzzzzzzzz',
+ 'output_path': '/var/spool/cwl',
+ 'output_ttl': 0,
+ 'container_image': '99999999999999999999999999999993+99',
+ 'command': ['ls', '/var/spool/cwl'],
+ 'cwd': '/var/spool/cwl',
+ 'scheduling_parameters': {},
+ 'properties': {},
+ 'secret_mounts': {},
+ 'output_storage_classes': ["foo_sc", "bar_sc"]
+ }))
+
+
+ # The test passes no builder.resources
+ # Hence the default resources will apply: {'cores': 1, 'ram': 1024, 'outdirSize': 1024, 'tmpdirSize': 1024}
+ @mock.patch("arvados.commands.keepdocker.list_images_in_arv")
+ def test_setting_process_properties(self, keepdocker):
+ arv_docker_clear_cache()
+
+ runner = mock.MagicMock()
+ runner.ignore_docker_for_reuse = False
+ runner.intermediate_output_ttl = 0
+ runner.secret_store = cwltool.secrets.SecretStore()
+ runner.api._rootDesc = {"revision": "20210628"}
+
+ keepdocker.return_value = [("zzzzz-4zz18-zzzzzzzzzzzzzz3", "")]
+ runner.api.collections().get().execute.return_value = {
+ "portable_data_hash": "99999999999999999999999999999993+99"}
+
+ tool = cmap({
+ "inputs": [
+ {"id": "x", "type": "string"}],
+ "outputs": [],
+ "baseCommand": "ls",
+ "arguments": [{"valueFrom": "$(runtime.outdir)"}],
+ "id": "#",
+ "class": "org.w3id.cwl.cwl.CommandLineTool",
+ "hints": [
+ {
+ "class": "http://arvados.org/cwl#ProcessProperties",
+ "processProperties": [
+ {"propertyName": "foo",
+ "propertyValue": "bar"},
+ {"propertyName": "baz",
+ "propertyValue": "$(inputs.x)"},
+ {"propertyName": "quux",
+ "propertyValue": {
+ "q1": 1,
+ "q2": 2
+ }
+ }
+ ],
+ }
+ ]
+ })
+
+ loadingContext, runtimeContext = self.helper(runner, True)
+
+ arvtool = arvados_cwl.ArvadosCommandTool(runner, tool, loadingContext)
+ arvtool.formatgraph = None
+
+ for j in arvtool.job({"x": "blorp"}, mock.MagicMock(), runtimeContext):
+ j.run(runtimeContext)
+ runner.api.container_requests().create.assert_called_with(
+ body=JsonDiffMatcher({
+ 'environment': {
+ 'HOME': '/var/spool/cwl',
+ 'TMPDIR': '/tmp'
+ },
+ 'name': 'test_run_True',
+ 'runtime_constraints': {
+ 'vcpus': 1,
+ 'ram': 1073741824
+ },
+ 'use_existing': True,
+ 'priority': 500,
+ 'mounts': {
+ '/tmp': {'kind': 'tmp',
+ "capacity": 1073741824
+ },
+ '/var/spool/cwl': {'kind': 'tmp',
+ "capacity": 1073741824 }
+ },
+ 'state': 'Committed',
+ 'output_name': 'Output for step test_run_True',
+ 'owner_uuid': 'zzzzz-8i9sb-zzzzzzzzzzzzzzz',
+ 'output_path': '/var/spool/cwl',
+ 'output_ttl': 0,
+ 'container_image': '99999999999999999999999999999993+99',
+ 'command': ['ls', '/var/spool/cwl'],
+ 'cwd': '/var/spool/cwl',
+ 'scheduling_parameters': {},
+ 'properties': {
+ "baz": "blorp",
+ "foo": "bar",
+ "quux": {
+ "q1": 1,
+ "q2": 2
+ }
+ },
+ 'secret_mounts': {},
+ 'output_storage_classes': ["default"]
+ }))
+
+
class TestWorkflow(unittest.TestCase):
def setUp(self):
cwltool.process._names = set()
+ arv_docker_clear_cache()
def helper(self, runner, enable_reuse=True):
document_loader, avsc_names, schema_metadata, metaschema_loader = cwltool.process.get_schema("v1.0")
@mock.patch("arvados.collection.Collection")
@mock.patch('arvados.commands.keepdocker.list_images_in_arv')
def test_run(self, list_images_in_arv, mockcollection, mockcollectionreader):
- arv_docker_clear_cache()
arvados_cwl.add_arv_hints()
api = mock.MagicMock()
loadingContext, runtimeContext = self.helper(runner)
runner.fs_access = runtimeContext.make_fs_access(runtimeContext.basedir)
+ mockcollectionreader().exists.return_value = True
+
tool, metadata = loadingContext.loader.resolve_ref("tests/wf/scatter2.cwl")
metadata["cwlVersion"] = tool["cwlVersion"]
"scheduling_parameters": {},
"secret_mounts": {},
"state": "Committed",
- "use_existing": True
+ "use_existing": True,
+ 'output_storage_classes': ["default"]
}))
mockc.open().__enter__().write.assert_has_calls([mock.call(subwf)])
mockc.open().__enter__().write.assert_has_calls([mock.call(
@mock.patch("arvados.collection.Collection")
@mock.patch('arvados.commands.keepdocker.list_images_in_arv')
def test_overall_resource_singlecontainer(self, list_images_in_arv, mockcollection, mockcollectionreader):
- arv_docker_clear_cache()
arvados_cwl.add_arv_hints()
api = mock.MagicMock()
],
'use_existing': True,
'output_name': u'Output for step echo-subwf',
- 'cwd': '/var/spool/cwl'
+ 'cwd': '/var/spool/cwl',
+ 'output_storage_classes': ["default"]
}))
def test_default_work_api(self):
stubs.api = mock.MagicMock()
stubs.api._rootDesc = get_rootDesc()
stubs.api._rootDesc["uuidPrefix"] = "zzzzz"
+ stubs.api._rootDesc["revision"] = "20210628"
stubs.api.users().current().execute.return_value = {
"uuid": stubs.fake_user_uuid,
stubs.api.containers().current().execute.return_value = {
"uuid": stubs.fake_container_uuid,
}
+ stubs.api.config()["StorageClasses"].items.return_value = {
+ "default": {
+ "Default": True
+ }
+ }.items()
class CollectionExecute(object):
def __init__(self, exe):
'state': 'Committed',
'command': ['arvados-cwl-runner', '--local', '--api=containers',
'--no-log-timestamps', '--disable-validate', '--disable-color',
- '--eval-timeout=20', '--thread-count=4',
+ '--eval-timeout=20', '--thread-count=0',
'--enable-reuse', "--collection-cache-size=256", '--debug', '--on-error=continue',
'/var/lib/cwl/workflow.json#main', '/var/lib/cwl/cwl.input.json'],
'name': 'submit_wf.cwl',
def setUp(self):
cwltool.process._names = set()
+ arvados_cwl.arvdocker.arv_docker_clear_cache()
- @stubs
- def test_error_when_multiple_storage_classes_specified(self, stubs):
- storage_classes = "foo,bar"
- exited = arvados_cwl.main(
- ["--debug", "--storage-classes", storage_classes,
- "tests/wf/submit_wf.cwl", "tests/submit_test_job.json"],
- sys.stdin, sys.stderr, api_client=stubs.api)
- self.assertEqual(exited, 1)
@mock.patch("time.sleep")
@stubs
expect_container["command"] = [
'arvados-cwl-runner', '--local', '--api=containers',
'--no-log-timestamps', '--disable-validate', '--disable-color',
- '--eval-timeout=20', '--thread-count=4',
+ '--eval-timeout=20', '--thread-count=0',
'--disable-reuse', "--collection-cache-size=256",
'--debug', '--on-error=continue',
'/var/lib/cwl/workflow.json#main', '/var/lib/cwl/cwl.input.json']
expect_container["command"] = [
'arvados-cwl-runner', '--local', '--api=containers',
'--no-log-timestamps', '--disable-validate', '--disable-color',
- '--eval-timeout=20', '--thread-count=4',
+ '--eval-timeout=20', '--thread-count=0',
'--disable-reuse', "--collection-cache-size=256", '--debug', '--on-error=continue',
'/var/lib/cwl/workflow.json#main', '/var/lib/cwl/cwl.input.json']
expect_container["use_existing"] = False
"enableReuse": False,
},
]
- expect_container["mounts"]["/var/lib/cwl/workflow.json"]["content"]["$graph"][0]["$namespaces"] = {
+ expect_container["mounts"]["/var/lib/cwl/workflow.json"]["content"]["$namespaces"] = {
"arv": "http://arvados.org/cwl#",
"cwltool": "http://commonwl.org/cwltool#"
}
expect_container = copy.deepcopy(stubs.expect_container_spec)
expect_container["command"] = ['arvados-cwl-runner', '--local', '--api=containers',
'--no-log-timestamps', '--disable-validate', '--disable-color',
- '--eval-timeout=20', '--thread-count=4',
+ '--eval-timeout=20', '--thread-count=0',
'--enable-reuse', "--collection-cache-size=256",
'--debug', '--on-error=stop',
'/var/lib/cwl/workflow.json#main', '/var/lib/cwl/cwl.input.json']
expect_container = copy.deepcopy(stubs.expect_container_spec)
expect_container["command"] = ['arvados-cwl-runner', '--local', '--api=containers',
'--no-log-timestamps', '--disable-validate', '--disable-color',
- '--eval-timeout=20', '--thread-count=4',
+ '--eval-timeout=20', '--thread-count=0',
'--enable-reuse', "--collection-cache-size=256",
"--output-name="+output_name, '--debug', '--on-error=continue',
'/var/lib/cwl/workflow.json#main', '/var/lib/cwl/cwl.input.json']
expect_container = copy.deepcopy(stubs.expect_container_spec)
expect_container["command"] = ['arvados-cwl-runner', '--local', '--api=containers',
'--no-log-timestamps', '--disable-validate', '--disable-color',
- '--eval-timeout=20', '--thread-count=4',
+ '--eval-timeout=20', '--thread-count=0',
'--enable-reuse', "--collection-cache-size=256", "--debug",
"--storage-classes=foo", '--on-error=continue',
'/var/lib/cwl/workflow.json#main', '/var/lib/cwl/cwl.input.json']
stubs.expect_container_request_uuid + '\n')
self.assertEqual(exited, 0)
+ @stubs
+ def test_submit_multiple_storage_classes(self, stubs):
+ exited = arvados_cwl.main(
+ ["--debug", "--submit", "--no-wait", "--api=containers", "--storage-classes=foo,bar", "--intermediate-storage-classes=baz",
+ "tests/wf/submit_wf.cwl", "tests/submit_test_job.json"],
+ stubs.capture_stdout, sys.stderr, api_client=stubs.api, keep_client=stubs.keep_client)
+
+ expect_container = copy.deepcopy(stubs.expect_container_spec)
+ expect_container["command"] = ['arvados-cwl-runner', '--local', '--api=containers',
+ '--no-log-timestamps', '--disable-validate', '--disable-color',
+ '--eval-timeout=20', '--thread-count=0',
+ '--enable-reuse', "--collection-cache-size=256", "--debug",
+ "--storage-classes=foo,bar", "--intermediate-storage-classes=baz", '--on-error=continue',
+ '/var/lib/cwl/workflow.json#main', '/var/lib/cwl/cwl.input.json']
+
+ stubs.api.container_requests().create.assert_called_with(
+ body=JsonDiffMatcher(expect_container))
+ self.assertEqual(stubs.capture_stdout.getvalue(),
+ stubs.expect_container_request_uuid + '\n')
+ self.assertEqual(exited, 0)
+
@mock.patch("cwltool.task_queue.TaskQueue")
@mock.patch("arvados_cwl.arvworkflow.ArvadosWorkflow.job")
@mock.patch("arvados_cwl.executor.ArvCwlExecutor.make_output_collection")
def test_default_storage_classes_correctly_propagate_to_make_output_collection(self, stubs, make_output, job, tq):
final_output_c = arvados.collection.Collection()
make_output.return_value = ({},final_output_c)
+ stubs.api.config().get.return_value = {"default": {"Default": True}}
def set_final_output(job_order, output_callback, runtimeContext):
output_callback("zzzzz-4zz18-zzzzzzzzzzzzzzzz", "success")
make_output.assert_called_with(u'Output of submit_wf.cwl', ['default'], '', 'zzzzz-4zz18-zzzzzzzzzzzzzzzz')
self.assertEqual(exited, 0)
+ @mock.patch("cwltool.task_queue.TaskQueue")
+ @mock.patch("arvados_cwl.arvworkflow.ArvadosWorkflow.job")
+ @mock.patch("arvados_cwl.executor.ArvCwlExecutor.make_output_collection")
+ @stubs
+ def test_storage_class_hint_to_make_output_collection(self, stubs, make_output, job, tq):
+ final_output_c = arvados.collection.Collection()
+ make_output.return_value = ({},final_output_c)
+
+ def set_final_output(job_order, output_callback, runtimeContext):
+ output_callback("zzzzz-4zz18-zzzzzzzzzzzzzzzz", "success")
+ return []
+ job.side_effect = set_final_output
+
+ exited = arvados_cwl.main(
+ ["--debug", "--local",
+ "tests/wf/submit_storage_class_wf.cwl", "tests/submit_test_job.json"],
+ stubs.capture_stdout, sys.stderr, api_client=stubs.api, keep_client=stubs.keep_client)
+
+ make_output.assert_called_with(u'Output of submit_storage_class_wf.cwl', ['foo', 'bar'], '', 'zzzzz-4zz18-zzzzzzzzzzzzzzzz')
+ self.assertEqual(exited, 0)
+
@stubs
def test_submit_container_output_ttl(self, stubs):
exited = arvados_cwl.main(
expect_container = copy.deepcopy(stubs.expect_container_spec)
expect_container["command"] = ['arvados-cwl-runner', '--local', '--api=containers',
'--no-log-timestamps', '--disable-validate', '--disable-color',
- '--eval-timeout=20', '--thread-count=4',
+ '--eval-timeout=20', '--thread-count=0',
'--enable-reuse', "--collection-cache-size=256", '--debug',
'--on-error=continue',
"--intermediate-output-ttl=3600",
expect_container = copy.deepcopy(stubs.expect_container_spec)
expect_container["command"] = ['arvados-cwl-runner', '--local', '--api=containers',
'--no-log-timestamps', '--disable-validate', '--disable-color',
- '--eval-timeout=20', '--thread-count=4',
+ '--eval-timeout=20', '--thread-count=0',
'--enable-reuse', "--collection-cache-size=256",
'--debug', '--on-error=continue',
"--trash-intermediate",
expect_container = copy.deepcopy(stubs.expect_container_spec)
expect_container["command"] = ['arvados-cwl-runner', '--local', '--api=containers',
'--no-log-timestamps', '--disable-validate', '--disable-color',
- '--eval-timeout=20', '--thread-count=4',
+ '--eval-timeout=20', '--thread-count=0',
'--enable-reuse', "--collection-cache-size=256",
"--output-tags="+output_tags, '--debug', '--on-error=continue',
'/var/lib/cwl/workflow.json#main', '/var/lib/cwl/cwl.input.json']
@mock.patch("time.sleep")
@stubs
def test_submit_file_keepref(self, stubs, tm, collectionReader):
+ collectionReader().exists.return_value = True
collectionReader().find.return_value = arvados.arvfile.ArvadosFile(mock.MagicMock(), "blorp.txt")
exited = arvados_cwl.main(
["--submit", "--no-wait", "--api=containers", "--debug",
'container_image': '999999999999999999999999999999d3+99',
'command': ['arvados-cwl-runner', '--local', '--api=containers',
'--no-log-timestamps', '--disable-validate', '--disable-color',
- '--eval-timeout=20', '--thread-count=4',
+ '--eval-timeout=20', '--thread-count=0',
'--enable-reuse', "--collection-cache-size=256", '--debug', '--on-error=continue',
'/var/lib/cwl/workflow/expect_arvworkflow.cwl#main', '/var/lib/cwl/cwl.input.json'],
'cwd': '/var/spool/cwl',
'container_image': "999999999999999999999999999999d3+99",
'command': ['arvados-cwl-runner', '--local', '--api=containers',
'--no-log-timestamps', '--disable-validate', '--disable-color',
- '--eval-timeout=20', '--thread-count=4',
+ '--eval-timeout=20', '--thread-count=0',
'--enable-reuse', "--collection-cache-size=256", '--debug', '--on-error=continue',
'/var/lib/cwl/workflow.json#main', '/var/lib/cwl/cwl.input.json'],
'cwd': '/var/spool/cwl',
@stubs
def test_submit_container_project(self, stubs):
project_uuid = 'zzzzz-j7d0g-zzzzzzzzzzzzzzz'
+ stubs.api.groups().get().execute.return_value = {"group_class": "project"}
exited = arvados_cwl.main(
["--submit", "--no-wait", "--api=containers", "--debug", "--project-uuid="+project_uuid,
"tests/wf/submit_wf.cwl", "tests/submit_test_job.json"],
expect_container["owner_uuid"] = project_uuid
expect_container["command"] = ['arvados-cwl-runner', '--local', '--api=containers',
'--no-log-timestamps', '--disable-validate', '--disable-color',
- "--eval-timeout=20", "--thread-count=4",
+ "--eval-timeout=20", "--thread-count=0",
'--enable-reuse', "--collection-cache-size=256", '--debug',
'--on-error=continue',
'--project-uuid='+project_uuid,
expect_container = copy.deepcopy(stubs.expect_container_spec)
expect_container["command"] = ['arvados-cwl-runner', '--local', '--api=containers',
'--no-log-timestamps', '--disable-validate', '--disable-color',
- '--eval-timeout=60.0', '--thread-count=4',
+ '--eval-timeout=60.0', '--thread-count=0',
'--enable-reuse', "--collection-cache-size=256",
'--debug', '--on-error=continue',
'/var/lib/cwl/workflow.json#main', '/var/lib/cwl/cwl.input.json']
expect_container = copy.deepcopy(stubs.expect_container_spec)
expect_container["command"] = ['arvados-cwl-runner', '--local', '--api=containers',
'--no-log-timestamps', '--disable-validate', '--disable-color',
- '--eval-timeout=20', '--thread-count=4',
+ '--eval-timeout=20', '--thread-count=0',
'--enable-reuse', "--collection-cache-size=500",
'--debug', '--on-error=continue',
'/var/lib/cwl/workflow.json#main', '/var/lib/cwl/cwl.input.json']
"keep_cache": 512
}
]
- expect_container["mounts"]["/var/lib/cwl/workflow.json"]["content"]["$graph"][0]["$namespaces"] = {
+ expect_container["mounts"]["/var/lib/cwl/workflow.json"]["content"]["$namespaces"] = {
"arv": "http://arvados.org/cwl#",
}
expect_container['command'] = ['arvados-cwl-runner', '--local', '--api=containers',
'--no-log-timestamps', '--disable-validate', '--disable-color',
- '--eval-timeout=20', '--thread-count=4',
+ '--eval-timeout=20', '--thread-count=0',
'--enable-reuse', "--collection-cache-size=512", '--debug', '--on-error=continue',
'/var/lib/cwl/workflow.json#main', '/var/lib/cwl/cwl.input.json']
"--disable-validate",
"--disable-color",
"--eval-timeout=20",
- '--thread-count=4',
+ '--thread-count=0',
"--enable-reuse",
"--collection-cache-size=256",
'--debug',
"content": {
"$graph": [
{
- "$namespaces": {
- "cwltool": "http://commonwl.org/cwltool#"
- },
"arguments": [
"md5sum",
"example.conf"
]
}
],
+ "$namespaces": {
+ "cwltool": "http://commonwl.org/cwltool#"
+ },
"cwlVersion": "v1.0"
},
"kind": "json"
stubs.capture_stdout, sys.stderr, api_client=stubs.api, keep_client=stubs.keep_client)
self.assertEqual(exited, 1)
+ @stubs
+ def test_submit_validate_project_uuid(self, stubs):
+ # Fails with bad cluster prefix
+ exited = arvados_cwl.main(
+ ["--submit", "--no-wait", "--api=containers", "--debug", "--project-uuid=zzzzb-j7d0g-zzzzzzzzzzzzzzz",
+ "tests/wf/submit_wf.cwl", "tests/submit_test_job.json"],
+ stubs.capture_stdout, sys.stderr, api_client=stubs.api, keep_client=stubs.keep_client)
+ self.assertEqual(exited, 1)
+
+ # Project lookup fails
+ stubs.api.groups().get().execute.side_effect = Exception("Bad project")
+ exited = arvados_cwl.main(
+ ["--submit", "--no-wait", "--api=containers", "--debug", "--project-uuid=zzzzz-j7d0g-zzzzzzzzzzzzzzx",
+ "tests/wf/submit_wf.cwl", "tests/submit_test_job.json"],
+ stubs.capture_stdout, sys.stderr, api_client=stubs.api, keep_client=stubs.keep_client)
+ self.assertEqual(exited, 1)
+
+        # This lookup succeeds: the uuid refers to a user, and only the group lookup is stubbed to fail
+ exited = arvados_cwl.main(
+ ["--submit", "--no-wait", "--api=containers", "--debug", "--project-uuid=zzzzz-tpzed-zzzzzzzzzzzzzzx",
+ "tests/wf/submit_wf.cwl", "tests/submit_test_job.json"],
+ stubs.capture_stdout, sys.stderr, api_client=stubs.api, keep_client=stubs.keep_client)
+ self.assertEqual(exited, 0)
+
+
@mock.patch("arvados.collection.CollectionReader")
@stubs
def test_submit_uuid_inputs(self, stubs, collectionReader):
+ collectionReader().exists.return_value = True
collectionReader().find.return_value = arvados.arvfile.ArvadosFile(mock.MagicMock(), "file1.txt")
def list_side_effect(**kwargs):
m = mock.MagicMock()
finally:
cwltool_logger.removeHandler(stderr_logger)
+ @stubs
+ def test_submit_set_process_properties(self, stubs):
+ exited = arvados_cwl.main(
+ ["--submit", "--no-wait", "--api=containers", "--debug",
+ "tests/wf/submit_wf_process_properties.cwl", "tests/submit_test_job.json"],
+ stubs.capture_stdout, sys.stderr, api_client=stubs.api, keep_client=stubs.keep_client)
+
+ expect_container = copy.deepcopy(stubs.expect_container_spec)
+ expect_container["name"] = "submit_wf_process_properties.cwl"
+ expect_container["mounts"]["/var/lib/cwl/workflow.json"]["content"]["$graph"][1]["hints"] = [
+ {
+ "class": "http://arvados.org/cwl#ProcessProperties",
+ "processProperties": [
+ {"propertyName": "baz",
+ "propertyValue": "$(inputs.x.basename)"},
+ {"propertyName": "foo",
+ "propertyValue": "bar"},
+ {"propertyName": "quux",
+ "propertyValue": {
+ "q1": 1,
+ "q2": 2
+ }
+ }
+ ],
+ }
+ ]
+ expect_container["mounts"]["/var/lib/cwl/workflow.json"]["content"]["$namespaces"] = {
+ "arv": "http://arvados.org/cwl#"
+ }
+
+ expect_container["properties"] = {
+ "baz": "blorp.txt",
+ "foo": "bar",
+ "quux": {
+ "q1": 1,
+ "q2": 2
+ }
+ }
+
+ stubs.api.container_requests().create.assert_called_with(
+ body=JsonDiffMatcher(expect_container))
+ self.assertEqual(stubs.capture_stdout.getvalue(),
+ stubs.expect_container_request_uuid + '\n')
+ self.assertEqual(exited, 0)
+
class TestCreateWorkflow(unittest.TestCase):
existing_workflow_uuid = "zzzzz-7fd4e-validworkfloyml"
expect_workflow = StripYAMLComments(
open("tests/wf/expect_upload_packed.cwl").read().rstrip())
+ def setUp(self):
+ cwltool.process._names = set()
+ arvados_cwl.arvdocker.arv_docker_clear_cache()
+
@stubs
def test_create(self, stubs):
project_uuid = 'zzzzz-j7d0g-zzzzzzzzzzzzzzz'
+ stubs.api.groups().get().execute.return_value = {"group_class": "project"}
exited = arvados_cwl.main(
["--create-workflow", "--debug",
@stubs
def test_create_name(self, stubs):
project_uuid = 'zzzzz-j7d0g-zzzzzzzzzzzzzzz'
+ stubs.api.groups().get().execute.return_value = {"group_class": "project"}
exited = arvados_cwl.main(
["--create-workflow", "--debug",
@stubs
def test_create_collection_per_tool(self, stubs):
project_uuid = 'zzzzz-j7d0g-zzzzzzzzzzzzzzz'
+ stubs.api.groups().get().execute.return_value = {"group_class": "project"}
exited = arvados_cwl.main(
["--create-workflow", "--debug",
@stubs
def test_create_with_imports(self, stubs):
project_uuid = 'zzzzz-j7d0g-zzzzzzzzzzzzzzz'
+ stubs.api.groups().get().execute.return_value = {"group_class": "project"}
exited = arvados_cwl.main(
["--create-workflow", "--debug",
@stubs
def test_create_with_no_input(self, stubs):
project_uuid = 'zzzzz-j7d0g-zzzzzzzzzzzzzzz'
+ stubs.api.groups().get().execute.return_value = {"group_class": "project"}
exited = arvados_cwl.main(
["--create-workflow", "--debug",
StepInputExpressionRequirement: {}
hints:
DockerRequirement:
- dockerPull: arvados/jobs:1.4.0.20190604172024
+ dockerPull: arvados/jobs:2.2.2
steps:
substep:
in:
StepInputExpressionRequirement: {}
hints:
DockerRequirement:
- dockerPull: arvados/jobs:1.4.0.20190604172024
+ dockerPull: arvados/jobs:2.2.2
steps:
substep:
in:
StepInputExpressionRequirement: {}
hints:
DockerRequirement:
- dockerPull: arvados/jobs:1.4.0.20190604172024
+ dockerPull: arvados/jobs:2.2.2
steps:
substep:
in:
StepInputExpressionRequirement: {}
hints:
DockerRequirement:
- dockerPull: arvados/jobs:1.4.0.20190604172024
+ dockerPull: arvados/jobs:2.2.2
steps:
substep:
in:
hints:
- class: arv:RunInSingleContainer
- class: DockerRequirement
- dockerPull: arvados/jobs:1.4.0.20190604172024
+ dockerPull: arvados/jobs:2.2.2
run:
class: Workflow
id: mysub
]
}
],
+ "$namespaces": {
+ "arv": "http://arvados.org/cwl#"
+ },
"cwlVersion": "v1.0"
-}
\ No newline at end of file
+}
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: Apache-2.0
+
+# Test case for arvados-cwl-runner
+#
+# Used to test whether scanning a workflow file for dependencies
+# (e.g. submit_tool.cwl) and uploading to Keep works as intended.
+
+class: Workflow
+cwlVersion: v1.0
+$namespaces:
+ arv: "http://arvados.org/cwl#"
+hints:
+ arv:OutputStorageClass:
+ finalStorageClass: [foo, bar]
+inputs:
+ - id: x
+ type: File
+ - id: y
+ type: Directory
+ - id: z
+ type: Directory
+outputs: []
+steps:
+ - id: step1
+ in:
+ - { id: x, source: "#x" }
+ out: []
+ run: ../tool/submit_tool.cwl
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: Apache-2.0
+
+# Test case for arvados-cwl-runner
+#
+# Used to test whether scanning a workflow file for dependencies
+# (e.g. submit_tool.cwl) and uploading to Keep works as intended.
+
+$namespaces:
+ arv: "http://arvados.org/cwl#"
+
+class: Workflow
+cwlVersion: v1.0
+
+hints:
+ arv:ProcessProperties:
+ processProperties:
+ foo: bar
+ baz: $(inputs.x.basename)
+ quux:
+ propertyValue:
+ q1: 1
+ q2: 2
+
+inputs:
+ - id: x
+ type: File
+ - id: y
+ type: Directory
+ - id: z
+ type: Directory
+outputs: []
+steps:
+ - id: step1
+ in:
+ - { id: x, source: "#x" }
+ out: []
+ run: ../tool/submit_tool.cwl
--- /dev/null
+#!/usr/bin/env cwl-runner
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: Apache-2.0
+
+class: CommandLineTool
+cwlVersion: v1.0
+inputs:
+ inp1:
+ type: File
+ default:
+ class: File
+ location: hello.txt
+ secondaryFiles:
+ - class: Directory
+ location: indir1
+outputs: []
+baseCommand: 'true'
"bufio"
"context"
"encoding/json"
+ "io"
"net"
"github.com/sirupsen/logrus"
var (
EndpointConfigGet = APIEndpoint{"GET", "arvados/v1/config", ""}
+ EndpointVocabularyGet = APIEndpoint{"GET", "arvados/v1/vocabulary", ""}
EndpointLogin = APIEndpoint{"GET", "login", ""}
EndpointLogout = APIEndpoint{"GET", "logout", ""}
EndpointCollectionCreate = APIEndpoint{"POST", "arvados/v1/collections", "collection"}
EndpointContainerRequestGet = APIEndpoint{"GET", "arvados/v1/container_requests/{uuid}", ""}
EndpointContainerRequestList = APIEndpoint{"GET", "arvados/v1/container_requests", ""}
EndpointContainerRequestDelete = APIEndpoint{"DELETE", "arvados/v1/container_requests/{uuid}", ""}
+ EndpointGroupCreate = APIEndpoint{"POST", "arvados/v1/groups", "group"}
+ EndpointGroupUpdate = APIEndpoint{"PATCH", "arvados/v1/groups/{uuid}", "group"}
+ EndpointGroupGet = APIEndpoint{"GET", "arvados/v1/groups/{uuid}", ""}
+ EndpointGroupList = APIEndpoint{"GET", "arvados/v1/groups", ""}
+ EndpointGroupContents = APIEndpoint{"GET", "arvados/v1/groups/contents", ""}
+ EndpointGroupContentsUUIDInPath = APIEndpoint{"GET", "arvados/v1/groups/{uuid}/contents", ""} // Alternative HTTP route; client-side code should always use EndpointGroupContents instead
+ EndpointGroupShared = APIEndpoint{"GET", "arvados/v1/groups/shared", ""}
+ EndpointGroupDelete = APIEndpoint{"DELETE", "arvados/v1/groups/{uuid}", ""}
+ EndpointGroupTrash = APIEndpoint{"POST", "arvados/v1/groups/{uuid}/trash", ""}
+ EndpointGroupUntrash = APIEndpoint{"POST", "arvados/v1/groups/{uuid}/untrash", ""}
+ EndpointLinkCreate = APIEndpoint{"POST", "arvados/v1/links", "link"}
+ EndpointLinkUpdate = APIEndpoint{"PATCH", "arvados/v1/links/{uuid}", "link"}
+ EndpointLinkGet = APIEndpoint{"GET", "arvados/v1/links/{uuid}", ""}
+ EndpointLinkList = APIEndpoint{"GET", "arvados/v1/links", ""}
+ EndpointLinkDelete = APIEndpoint{"DELETE", "arvados/v1/links/{uuid}", ""}
+ EndpointSysTrashSweep = APIEndpoint{"POST", "sys/trash_sweep", ""}
EndpointUserActivate = APIEndpoint{"POST", "arvados/v1/users/{uuid}/activate", ""}
EndpointUserCreate = APIEndpoint{"POST", "arvados/v1/users", "user"}
EndpointUserCurrent = APIEndpoint{"GET", "arvados/v1/users/current", ""}
EndpointUserSystem = APIEndpoint{"GET", "arvados/v1/users/system", ""}
EndpointUserUnsetup = APIEndpoint{"POST", "arvados/v1/users/{uuid}/unsetup", ""}
EndpointUserUpdate = APIEndpoint{"PATCH", "arvados/v1/users/{uuid}", "user"}
- EndpointUserUpdateUUID = APIEndpoint{"POST", "arvados/v1/users/{uuid}/update_uuid", ""}
EndpointUserBatchUpdate = APIEndpoint{"PATCH", "arvados/v1/users/batch_update", ""}
EndpointUserAuthenticate = APIEndpoint{"POST", "arvados/v1/users/authenticate", ""}
EndpointAPIClientAuthorizationCurrent = APIEndpoint{"GET", "arvados/v1/api_client_authorizations/current", ""}
+ EndpointAPIClientAuthorizationCreate = APIEndpoint{"POST", "arvados/v1/api_client_authorizations", "api_client_authorization"}
+ EndpointAPIClientAuthorizationUpdate = APIEndpoint{"PUT", "arvados/v1/api_client_authorizations/{uuid}", "api_client_authorization"}
+ EndpointAPIClientAuthorizationList = APIEndpoint{"GET", "arvados/v1/api_client_authorizations", ""}
+ EndpointAPIClientAuthorizationDelete = APIEndpoint{"DELETE", "arvados/v1/api_client_authorizations/{uuid}", ""}
+ EndpointAPIClientAuthorizationGet = APIEndpoint{"GET", "arvados/v1/api_client_authorizations/{uuid}", ""}
)
type ContainerSSHOptions struct {
IncludeOldVersions bool `json:"include_old_versions"`
BypassFederation bool `json:"bypass_federation"`
ForwardedFor string `json:"forwarded_for,omitempty"`
+ Include string `json:"include"`
}
type CreateOptions struct {
type UpdateOptions struct {
UUID string `json:"uuid"`
Attrs map[string]interface{} `json:"attrs"`
+ Select []string `json:"select"`
BypassFederation bool `json:"bypass_federation"`
}
-type UpdateUUIDOptions struct {
- UUID string `json:"uuid"`
- NewUUID string `json:"new_uuid"`
+type GroupContentsOptions struct {
+ ClusterID string `json:"cluster_id"`
+ UUID string `json:"uuid,omitempty"`
+ Select []string `json:"select"`
+ Filters []Filter `json:"filters"`
+ Limit int64 `json:"limit"`
+ Offset int64 `json:"offset"`
+ Order []string `json:"order"`
+ Distinct bool `json:"distinct"`
+ Count string `json:"count"`
+ Include string `json:"include"`
+ Recursive bool `json:"recursive"`
+ IncludeTrash bool `json:"include_trash"`
+ IncludeOldVersions bool `json:"include_old_versions"`
+ ExcludeHomeProject bool `json:"exclude_home_project"`
}
type UserActivateOptions struct {
ReturnTo string `json:"return_to"` // Redirect to this URL after logging out
}
+type BlockWriteOptions struct {
+ Hash string
+ Data []byte
+ Reader io.Reader
+ DataSize int // Must be set if Data is nil.
+ RequestID string
+ StorageClasses []string
+ Replicas int
+ Attempts int
+}
+
+type BlockWriteResponse struct {
+ Locator string
+ Replicas int
+}
+
type API interface {
ConfigGet(ctx context.Context) (json.RawMessage, error)
+ VocabularyGet(ctx context.Context) (Vocabulary, error)
Login(ctx context.Context, options LoginOptions) (LoginResponse, error)
Logout(ctx context.Context, options LogoutOptions) (LogoutResponse, error)
CollectionCreate(ctx context.Context, options CreateOptions) (Collection, error)
ContainerRequestGet(ctx context.Context, options GetOptions) (ContainerRequest, error)
ContainerRequestList(ctx context.Context, options ListOptions) (ContainerRequestList, error)
ContainerRequestDelete(ctx context.Context, options DeleteOptions) (ContainerRequest, error)
+ GroupCreate(ctx context.Context, options CreateOptions) (Group, error)
+ GroupUpdate(ctx context.Context, options UpdateOptions) (Group, error)
+ GroupGet(ctx context.Context, options GetOptions) (Group, error)
+ GroupList(ctx context.Context, options ListOptions) (GroupList, error)
+ GroupContents(ctx context.Context, options GroupContentsOptions) (ObjectList, error)
+ GroupShared(ctx context.Context, options ListOptions) (GroupList, error)
+ GroupDelete(ctx context.Context, options DeleteOptions) (Group, error)
+ GroupTrash(ctx context.Context, options DeleteOptions) (Group, error)
+ GroupUntrash(ctx context.Context, options UntrashOptions) (Group, error)
+ LinkCreate(ctx context.Context, options CreateOptions) (Link, error)
+ LinkUpdate(ctx context.Context, options UpdateOptions) (Link, error)
+ LinkGet(ctx context.Context, options GetOptions) (Link, error)
+ LinkList(ctx context.Context, options ListOptions) (LinkList, error)
+ LinkDelete(ctx context.Context, options DeleteOptions) (Link, error)
SpecimenCreate(ctx context.Context, options CreateOptions) (Specimen, error)
SpecimenUpdate(ctx context.Context, options UpdateOptions) (Specimen, error)
SpecimenGet(ctx context.Context, options GetOptions) (Specimen, error)
SpecimenList(ctx context.Context, options ListOptions) (SpecimenList, error)
SpecimenDelete(ctx context.Context, options DeleteOptions) (Specimen, error)
+ SysTrashSweep(ctx context.Context, options struct{}) (struct{}, error)
UserCreate(ctx context.Context, options CreateOptions) (User, error)
UserUpdate(ctx context.Context, options UpdateOptions) (User, error)
- UserUpdateUUID(ctx context.Context, options UpdateUUIDOptions) (User, error)
UserMerge(ctx context.Context, options UserMergeOptions) (User, error)
UserActivate(ctx context.Context, options UserActivateOptions) (User, error)
UserSetup(ctx context.Context, options UserSetupOptions) (map[string]interface{}, error)
UserBatchUpdate(context.Context, UserBatchUpdateOptions) (UserList, error)
UserAuthenticate(ctx context.Context, options UserAuthenticateOptions) (APIClientAuthorization, error)
APIClientAuthorizationCurrent(ctx context.Context, options GetOptions) (APIClientAuthorization, error)
+ APIClientAuthorizationCreate(ctx context.Context, options CreateOptions) (APIClientAuthorization, error)
+ APIClientAuthorizationList(ctx context.Context, options ListOptions) (APIClientAuthorizationList, error)
+ APIClientAuthorizationDelete(ctx context.Context, options DeleteOptions) (APIClientAuthorization, error)
+ APIClientAuthorizationUpdate(ctx context.Context, options UpdateOptions) (APIClientAuthorization, error)
+ APIClientAuthorizationGet(ctx context.Context, options GetOptions) (APIClientAuthorization, error)
}
package arvados
+import "time"
+
// APIClientAuthorization is an arvados#apiClientAuthorization resource.
type APIClientAuthorization struct {
- UUID string `json:"uuid"`
- APIToken string `json:"api_token"`
- ExpiresAt string `json:"expires_at"`
- Scopes []string `json:"scopes"`
+ UUID string `json:"uuid"`
+ APIClientID int `json:"api_client_id"`
+ APIToken string `json:"api_token"`
+ CreatedAt time.Time `json:"created_at"`
+ CreatedByIPAddress string `json:"created_by_ip_address"`
+ DefaultOwnerUUID string `json:"default_owner_uuid"`
+ Etag string `json:"etag"`
+ ExpiresAt time.Time `json:"expires_at"`
+ LastUsedAt time.Time `json:"last_used_at"`
+ LastUsedByIPAddress string `json:"last_used_by_ip_address"`
+ ModifiedAt time.Time `json:"modified_at"`
+ ModifiedByClientUUID string `json:"modified_by_client_uuid"`
+ ModifiedByUserUUID string `json:"modified_by_user_uuid"`
+ OwnerUUID string `json:"owner_uuid"`
+ Scopes []string `json:"scopes"`
+ UserID int `json:"user_id"`
}
// APIClientAuthorizationList is an arvados#apiClientAuthorizationList resource.
package arvados
import (
+ "bytes"
"crypto/hmac"
"crypto/sha1"
"errors"
"fmt"
"regexp"
"strconv"
- "strings"
"time"
)
// makePermSignature generates a SHA-1 HMAC digest for the given blob,
// token, expiry, and site secret.
-func makePermSignature(blobHash, apiToken, expiry, blobSignatureTTL string, permissionSecret []byte) string {
+func makePermSignature(blobHash []byte, apiToken, expiry, blobSignatureTTL string, permissionSecret []byte) string {
hmac := hmac.New(sha1.New, permissionSecret)
- hmac.Write([]byte(blobHash))
+ hmac.Write(blobHash)
hmac.Write([]byte("@"))
hmac.Write([]byte(apiToken))
hmac.Write([]byte("@"))
return blobLocator
}
// Strip off all hints: only the hash is used to sign.
- blobHash := strings.Split(blobLocator, "+")[0]
+ blobHash := []byte(blobLocator)
+ if hints := bytes.IndexRune(blobHash, '+'); hints > 0 {
+ blobHash = blobHash[:hints]
+ }
timestampHex := fmt.Sprintf("%08x", expiry.Unix())
blobSignatureTTLHex := strconv.FormatInt(int64(blobSignatureTTL.Seconds()), 16)
return blobLocator +
if matches == nil {
return ErrSignatureMissing
}
- blobHash := matches[1]
+ blobHash := []byte(matches[1])
signatureHex := matches[6]
expiryHex := matches[7]
if expiryTime, err := parseHexTimestamp(expiryHex); err != nil {
type BlobSignatureSuite struct{}
+func (s *BlobSignatureSuite) BenchmarkSignManifest(c *check.C) {
+ DebugLocksPanicMode = false
+ ts, err := parseHexTimestamp(knownTimestamp)
+ c.Check(err, check.IsNil)
+ c.Logf("test manifest is %d bytes", len(bigmanifest))
+ for i := 0; i < c.N; i++ {
+ m := SignManifest(bigmanifest, knownToken, ts, blobSignatureTTL, []byte(knownKey))
+ c.Check(m, check.Not(check.Equals), "")
+ }
+}
+
func (s *BlobSignatureSuite) TestSignLocator(c *check.C) {
ts, err := parseHexTimestamp(knownTimestamp)
c.Check(err, check.IsNil)
return err
}
switch {
+ case resp.StatusCode == http.StatusNoContent:
+ return nil
case resp.StatusCode == http.StatusOK && dst == nil:
return nil
case resp.StatusCode == http.StatusOK:
if c.APIHost == "" {
if c.loadedFromEnv {
return errors.New("ARVADOS_API_HOST and/or ARVADOS_API_TOKEN environment variables are not set")
- } else {
- return errors.New("arvados.Client cannot perform request: APIHost is not set")
}
+ return errors.New("arvados.Client cannot perform request: APIHost is not set")
}
urlString := c.apiURL(path)
urlValues, err := anythingToValues(params)
package arvados
import (
- "bufio"
+ "bytes"
"crypto/md5"
"fmt"
"regexp"
- "strings"
"time"
"git.arvados.org/arvados.git/sdk/go/blockdigest"
// SizedDigests returns the hash+size part of each data block
// referenced by the collection.
+//
+// Zero-length blocks are not included.
func (c *Collection) SizedDigests() ([]SizedDigest, error) {
- manifestText := c.ManifestText
- if manifestText == "" {
- manifestText = c.UnsignedManifestText
+ manifestText := []byte(c.ManifestText)
+ if len(manifestText) == 0 {
+ manifestText = []byte(c.UnsignedManifestText)
}
- if manifestText == "" && c.PortableDataHash != "d41d8cd98f00b204e9800998ecf8427e+0" {
+ if len(manifestText) == 0 && c.PortableDataHash != "d41d8cd98f00b204e9800998ecf8427e+0" {
// TODO: Check more subtle forms of corruption, too
return nil, fmt.Errorf("manifest is missing")
}
- var sds []SizedDigest
- scanner := bufio.NewScanner(strings.NewReader(manifestText))
- scanner.Buffer(make([]byte, 1048576), len(manifestText))
- for scanner.Scan() {
- line := scanner.Text()
- tokens := strings.Split(line, " ")
+ sds := make([]SizedDigest, 0, len(manifestText)/40)
+ for _, line := range bytes.Split(manifestText, []byte{'\n'}) {
+ if len(line) == 0 {
+ continue
+ }
+ tokens := bytes.Split(line, []byte{' '})
if len(tokens) < 3 {
return nil, fmt.Errorf("Invalid stream (<3 tokens): %q", line)
}
for _, token := range tokens[1:] {
- if !blockdigest.LocatorPattern.MatchString(token) {
+ if !blockdigest.LocatorPattern.Match(token) {
// FIXME: ensure it's a file token
break
}
+ if bytes.HasPrefix(token, []byte("d41d8cd98f00b204e9800998ecf8427e+0")) {
+ // Exclude "empty block" placeholder
+ continue
+ }
// FIXME: shouldn't assume 32 char hash
- if i := strings.IndexRune(token[33:], '+'); i >= 0 {
+ if i := bytes.IndexRune(token[33:], '+'); i >= 0 {
token = token[:33+i]
}
- sds = append(sds, SizedDigest(token))
+ sds = append(sds, SizedDigest(string(token)))
}
}
- return sds, scanner.Err()
+ return sds, nil
}
type CollectionList struct {
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: Apache-2.0
+
+package arvados
+
+import (
+ check "gopkg.in/check.v1"
+)
+
+var _ = check.Suite(&CollectionSuite{})
+
+type CollectionSuite struct{}
+
+func (s *CollectionSuite) TestSizedDigests(c *check.C) {
+ coll := Collection{ManifestText: ". d41d8cd98f00b204e9800998ecf8427e+0 acbd18db4cc2f85cedef654fccc4a4d8+3 73feffa4b7f6bb68e44cf984c85f6e88+3+Z+K@xyzzy 0:0:foo 0:3:bar 3:3:baz\n"}
+ sd, err := coll.SizedDigests()
+ c.Check(err, check.IsNil)
+ c.Check(sd, check.DeepEquals, []SizedDigest{"acbd18db4cc2f85cedef654fccc4a4d8+3", "73feffa4b7f6bb68e44cf984c85f6e88+3"})
+
+ coll = Collection{ManifestText: ". d41d8cd98f00b204e9800998ecf8427e+0 0:0:foo\n. acbd18db4cc2f85cedef654fccc4a4d8+3 0:3:bar\n. 73feffa4b7f6bb68e44cf984c85f6e88+3+Z+K@xyzzy 0:3:baz\n"}
+ sd, err = coll.SizedDigests()
+ c.Check(err, check.IsNil)
+ c.Check(sd, check.DeepEquals, []SizedDigest{"acbd18db4cc2f85cedef654fccc4a4d8+3", "73feffa4b7f6bb68e44cf984c85f6e88+3"})
+
+ coll = Collection{ManifestText: ". d41d8cd98f00b204e9800998ecf8427e+0 0:0:foo\n"}
+ sd, err = coll.SizedDigests()
+ c.Check(err, check.IsNil)
+ c.Check(sd, check.HasLen, 0)
+
+ coll = Collection{ManifestText: "", PortableDataHash: "d41d8cd98f00b204e9800998ecf8427e+0"}
+ sd, err = coll.SizedDigests()
+ c.Check(err, check.IsNil)
+ c.Check(sd, check.HasLen, 0)
+}
MaxBlockEntries int
MaxCollectionEntries int
MaxCollectionBytes int64
- MaxPermissionEntries int
MaxUUIDEntries int
+ MaxSessions int
+}
+
+type UploadDownloadPermission struct {
+ Upload bool
+ Download bool
+}
+
+type UploadDownloadRolePermissions struct {
+ User UploadDownloadPermission
+ Admin UploadDownloadPermission
+}
+
+type ManagedProperties map[string]struct {
+ Value interface{}
+ Function string
+ Protected bool
}
type Cluster struct {
MaxKeepBlobBuffers int
MaxRequestAmplification int
MaxRequestSize int
+ MaxTokenLifetime Duration
RequestTimeout Duration
SendTimeout Duration
WebsocketClientEventQueue int
WebsocketServerEventQueue int
KeepServiceRequestTimeout Duration
+ VocabularyPath string
}
AuditLogs struct {
MaxAge Duration
UnloggedAttributes StringSet
}
Collections struct {
- BlobSigning bool
- BlobSigningKey string
- BlobSigningTTL Duration
- BlobTrash bool
- BlobTrashLifetime Duration
- BlobTrashCheckInterval Duration
- BlobTrashConcurrency int
- BlobDeleteConcurrency int
- BlobReplicateConcurrency int
- CollectionVersioning bool
- DefaultTrashLifetime Duration
- DefaultReplication int
- ManagedProperties map[string]struct {
- Value interface{}
- Function string
- Protected bool
- }
+ BlobSigning bool
+ BlobSigningKey string
+ BlobSigningTTL Duration
+ BlobTrash bool
+ BlobTrashLifetime Duration
+ BlobTrashCheckInterval Duration
+ BlobTrashConcurrency int
+ BlobDeleteConcurrency int
+ BlobReplicateConcurrency int
+ CollectionVersioning bool
+ DefaultTrashLifetime Duration
+ DefaultReplication int
+ ManagedProperties ManagedProperties
PreserveVersionIfIdle Duration
TrashSweepInterval Duration
TrustAllContent bool
BalanceCollectionBatch int
BalanceCollectionBuffers int
BalanceTimeout Duration
+ BalanceUpdateLimit int
WebDAVCache WebDAVCacheConfig
+
+ KeepproxyPermission UploadDownloadRolePermissions
+ WebDAVPermission UploadDownloadRolePermissions
+ WebDAVLogEvents bool
}
Git struct {
GitCommand string
EmailClaim string
EmailVerifiedClaim string
UsernameClaim string
+ AcceptAccessToken bool
+ AcceptAccessTokenScope string
AuthenticationRequestParameters map[string]string
}
PAM struct {
Service string
DefaultEmailDomain string
}
- SSO struct {
- Enable bool
- ProviderAppID string
- ProviderAppSecret string
- }
Test struct {
Enable bool
Users map[string]TestUser
RemoteTokenRefresh Duration
TokenLifetime Duration
TrustedClients map[string]struct{}
+ IssueTrustedTokens bool
}
Mail struct {
MailchimpAPIKey string
Insecure bool
}
Users struct {
+ ActivatedUsersAreVisibleToOthers bool
AnonymousUserToken string
AdminNotifierEmailFrom string
AutoAdminFirstUser bool
NewUserNotificationRecipients StringSet
NewUsersAreActive bool
UserNotifierEmailFrom string
+ UserNotifierEmailBcc StringSet
UserProfileNotificationAddress string
PreferDomainForUsername string
UserSetupMailText string
+ RoleGroupsVisibleToAll bool
}
- Volumes map[string]Volume
- Workbench struct {
+ StorageClasses map[string]StorageClassConfig
+ Volumes map[string]Volume
+ Workbench struct {
ActivationContactLink string
APIClientConnectTimeout Duration
APIClientReceiveTimeout Duration
Options map[string]struct{}
}
UserProfileFormMessage string
- VocabularyURL string
WelcomePageHTML string
InactivePageHTML string
SSHHelpPageHTML string
SSHHelpHostSuffix string
IdleTimeout Duration
}
+}
- ForceLegacyAPI14 bool
+type StorageClassConfig struct {
+ Default bool
+ Priority int
}
type Volume struct {
type S3VolumeDriverParameters struct {
IAMRole string
- AccessKey string
- SecretKey string
+ AccessKeyID string
+ SecretAccessKey string
Endpoint string
Region string
Bucket string
ReadTimeout Duration
RaceWindow Duration
UnsafeDelete bool
+ PrefixLength int
}
type AzureVolumeDriverParameters struct {
Composer Service
Controller Service
DispatchCloud Service
+ DispatchLSF Service
GitHTTP Service
GitSSH Service
Health Service
Keepproxy Service
Keepstore Service
RailsAPI Service
- SSO Service
WebDAVDownload Service
WebDAV Service
WebShell Service
StaleLockTimeout Duration
SupportedDockerImageFormats StringSet
UsePreemptibleInstances bool
+ RuntimeEngine string
+ LocalKeepBlobBuffersPerVCPU int
+ LocalKeepLogsToContainerLog string
JobsAPI struct {
Enable string
AssignNodeHostname string
}
}
+ LSF struct {
+ BsubSudoUser string
+ BsubArgumentsList []string
+ }
}
type CloudVMsConfig struct {
ServiceNameRailsAPI ServiceName = "arvados-api-server"
ServiceNameController ServiceName = "arvados-controller"
ServiceNameDispatchCloud ServiceName = "arvados-dispatch-cloud"
+ ServiceNameDispatchLSF ServiceName = "arvados-dispatch-lsf"
ServiceNameHealth ServiceName = "arvados-health"
ServiceNameWorkbench1 ServiceName = "arvados-workbench1"
ServiceNameWorkbench2 ServiceName = "arvados-workbench2"
ServiceNameRailsAPI: svcs.RailsAPI,
ServiceNameController: svcs.Controller,
ServiceNameDispatchCloud: svcs.DispatchCloud,
+ ServiceNameDispatchLSF: svcs.DispatchLSF,
ServiceNameHealth: svcs.Health,
ServiceNameWorkbench1: svcs.Workbench1,
ServiceNameWorkbench2: svcs.Workbench2,
FinishedAt *time.Time `json:"finished_at"` // nil if not yet finished
GatewayAddress string `json:"gateway_address"`
InteractiveSessionStarted bool `json:"interactive_session_started"`
+ OutputStorageClasses []string `json:"output_storage_classes"`
+ RuntimeUserUUID string `json:"runtime_user_uuid"`
+ RuntimeAuthScopes []string `json:"runtime_auth_scopes"`
+ RuntimeToken string `json:"runtime_token"`
+ AuthUUID string `json:"auth_uuid"`
}
// ContainerRequest is an arvados#container_request resource.
ModifiedByUserUUID string `json:"modified_by_user_uuid"`
ModifiedAt time.Time `json:"modified_at"`
Href string `json:"href"`
- Kind string `json:"kind"`
Etag string `json:"etag"`
Name string `json:"name"`
Description string `json:"description"`
ExpiresAt time.Time `json:"expires_at"`
Filters []Filter `json:"filters"`
ContainerCount int `json:"container_count"`
+ OutputStorageClasses []string `json:"output_storage_classes"`
}
// Mount is special behavior to attach to a filesystem path or device.
// RuntimeConstraints specify a container's compute resources (RAM,
// CPU) and network connectivity.
type RuntimeConstraints struct {
- API bool `json:"API"`
- RAM int64 `json:"ram"`
- VCPUs int `json:"vcpus"`
- KeepCacheRAM int64 `json:"keep_cache_ram"`
+ API bool `json:"API"`
+ RAM int64 `json:"ram"`
+ VCPUs int `json:"vcpus"`
+ KeepCacheRAM int64 `json:"keep_cache_ram"`
+ CUDADriverVersion string `json:"cuda_driver_version,omitempty"`
+ CUDAHardwareCapability string `json:"cuda_hardware_capability,omitempty"`
+ CUDADeviceCount int `json:"cuda_device_count,omitempty"`
}
// SchedulingParameters specify a container's scheduling parameters
}
// Mimic error message returned by ParseDuration for a number
// without units.
- return fmt.Errorf("missing unit in duration %s", data)
+ return fmt.Errorf("missing unit in duration %q", data)
}
// MarshalJSON implements json.Marshaler.
D Duration
}
err := json.Unmarshal([]byte(`{"D":1.234}`), &d)
- c.Check(err, check.ErrorMatches, `missing unit in duration 1.234`)
+ c.Check(err, check.ErrorMatches, `.*missing unit in duration "?1\.234"?`)
err = json.Unmarshal([]byte(`{"D":"1.234"}`), &d)
- c.Check(err, check.ErrorMatches, `.*missing unit in duration 1.234`)
+ c.Check(err, check.ErrorMatches, `.*missing unit in duration "?1\.234"?`)
err = json.Unmarshal([]byte(`{"D":"1"}`), &d)
- c.Check(err, check.ErrorMatches, `.*missing unit in duration 1`)
+ c.Check(err, check.ErrorMatches, `.*missing unit in duration "?1"?`)
err = json.Unmarshal([]byte(`{"D":"foobar"}`), &d)
- c.Check(err, check.ErrorMatches, `.*invalid duration foobar`)
+ c.Check(err, check.ErrorMatches, `.*invalid duration "?foobar"?`)
err = json.Unmarshal([]byte(`{"D":"60s"}`), &d)
c.Check(err, check.IsNil)
c.Check(d.D.Duration(), check.Equals, time.Minute)
package arvados
-import "io"
+import (
+ "context"
+ "io"
+)
type fsBackend interface {
keepClient
type keepClient interface {
ReadAt(locator string, p []byte, off int) (int, error)
- PutB(p []byte) (string, int, error)
+ BlockWrite(context.Context, BlockWriteOptions) (BlockWriteResponse, error)
LocalLocator(locator string) (string, error)
}
ErrIsDirectory = errors.New("cannot rename file to overwrite existing directory")
ErrNotADirectory = errors.New("not a directory")
ErrPermission = os.ErrPermission
+ DebugLocksPanicMode = false
)
type syncer interface {
Sync() error
}
+func debugPanicIfNotLocked(l sync.Locker, writing bool) {
+ if !DebugLocksPanicMode {
+ return
+ }
+ race := false
+ if rl, ok := l.(interface {
+ RLock()
+ RUnlock()
+ }); ok && writing {
+ go func() {
+ // Fail if we can grab the read lock during an
+ // operation that purportedly has write lock.
+ rl.RLock()
+ race = true
+ rl.RUnlock()
+ }()
+ } else {
+ go func() {
+ l.Lock()
+ race = true
+ l.Unlock()
+ }()
+ }
+	time.Sleep(time.Millisecond)
+ if race {
+ panic("bug: caller-must-have-lock func called, but nobody has lock")
+ }
+}
+
// A File is an *os.File-like interface for reading and writing files
// in a FileSystem.
type File interface {
// path is "", flush all dirs/streams; otherwise, flush only
// the specified dir/stream.
Flush(path string, shortBlocks bool) error
+
+ // Estimate current memory usage.
+ MemorySize() int64
}
type inode interface {
sync.Locker
RLock()
RUnlock()
+ MemorySize() int64
}
type fileinfo struct {
return nil, ErrNotADirectory
}
+func (*nullnode) MemorySize() int64 {
+ // Types that embed nullnode should report their own size, but
+ // if they don't, we at least report a non-zero size to ensure
+ // a large tree doesn't get reported as 0 bytes.
+ return 64
+}
+
type treenode struct {
fs FileSystem
parent inode
}
func (n *treenode) Child(name string, replace func(inode) (inode, error)) (child inode, err error) {
+ debugPanicIfNotLocked(n, false)
child = n.inodes[name]
if name == "" || name == "." || name == ".." {
err = ErrInvalidArgument
return
}
if newchild == nil {
+ debugPanicIfNotLocked(n, true)
delete(n.inodes, name)
} else if newchild != child {
+ debugPanicIfNotLocked(n, true)
n.inodes[name] = newchild
n.fileinfo.modTime = time.Now()
child = newchild
return nil
}
+func (n *treenode) MemorySize() (size int64) {
+ n.RLock()
+ defer n.RUnlock()
+ debugPanicIfNotLocked(n, false)
+ for _, inode := range n.inodes {
+ size += inode.MemorySize()
+ }
+ return
+}
+
type fileSystem struct {
root inode
fsBackend
}
}
createMode := flag&os.O_CREATE != 0
- if createMode {
- parent.Lock()
- defer parent.Unlock()
- } else {
- parent.RLock()
- defer parent.RUnlock()
- }
+ // We always need to take Lock() here, not just RLock(). Even
+ // if we know we won't be creating a file, parent might be a
+ // lookupnode, which sometimes populates its inodes map during
+ // a Child() call.
+ parent.Lock()
+ defer parent.Unlock()
n, err := parent.Child(name, nil)
if err != nil {
return nil, err
return ErrInvalidOperation
}
+func (fs *fileSystem) MemorySize() int64 {
+ return fs.root.MemorySize()
+}
+
// rlookup (recursive lookup) returns the inode for the file/directory
// with the given name (which may contain "/" separators). If no such
// file/directory exists, the returned node is nil.
}
}
node, err = func() (inode, error) {
- node.RLock()
- defer node.RUnlock()
+ node.Lock()
+ defer node.Unlock()
return node.Child(name, nil)
}()
if node == nil || err != nil {
package arvados
import (
+ "bytes"
"context"
"encoding/json"
"fmt"
// Total data bytes in all files.
Size() int64
-
- // Memory consumed by buffered file data.
- memorySize() int64
}
type collectionFileSystem struct {
fileSystem
- uuid string
+ uuid string
+ replicas int
+ storageClasses []string
}
// FileSystem returns a CollectionFileSystem for the collection.
modTime = time.Now()
}
fs := &collectionFileSystem{
- uuid: c.UUID,
+ uuid: c.UUID,
+ storageClasses: c.StorageClassesDesired,
fileSystem: fileSystem{
fsBackend: keepBackend{apiClient: client, keepClient: kc},
thr: newThrottle(concurrentWriters),
},
}
+ if r := c.ReplicationDesired; r != nil {
+ fs.replicas = *r
+ }
root := &dirnode{
fs: fs,
treenode: treenode{
return dn.flush(context.TODO(), names, flushOpts{sync: false, shortBlocks: shortBlocks})
}
-func (fs *collectionFileSystem) memorySize() int64 {
+func (fs *collectionFileSystem) MemorySize() int64 {
fs.fileSystem.root.Lock()
defer fs.fileSystem.root.Unlock()
- return fs.fileSystem.root.(*dirnode).memorySize()
+ return fs.fileSystem.root.(*dirnode).MemorySize()
}
func (fs *collectionFileSystem) MarshalManifest(prefix string) (string, error) {
// filenode implements inode.
type filenode struct {
parent inode
- fs FileSystem
+ fs *collectionFileSystem
fileinfo fileinfo
segments []segment
// number of times `segments` has changed in a
fn.fs.throttle().Acquire()
go func() {
defer close(done)
- locator, _, err := fn.FS().PutB(buf)
+ resp, err := fn.FS().BlockWrite(context.Background(), BlockWriteOptions{
+ Data: buf,
+ Replicas: fn.fs.replicas,
+ StorageClasses: fn.fs.storageClasses,
+ })
fn.fs.throttle().Release()
fn.Lock()
defer fn.Unlock()
fn.memsize -= int64(len(buf))
fn.segments[idx] = storedSegment{
kc: fn.FS(),
- locator: locator,
+ locator: resp.Locator,
size: len(buf),
offset: 0,
length: len(buf),
if err != nil {
return nil, err
}
+ coll.UUID = dn.fs.uuid
data, err := json.Marshal(&coll)
if err == nil {
data = append(data, '\n')
go func() {
defer close(done)
defer close(errs)
- locator, _, err := dn.fs.PutB(block)
+ resp, err := dn.fs.BlockWrite(context.Background(), BlockWriteOptions{
+ Data: block,
+ Replicas: dn.fs.replicas,
+ StorageClasses: dn.fs.storageClasses,
+ })
dn.fs.throttle().Release()
if err != nil {
errs <- err
data := ref.fn.segments[ref.idx].(*memSegment).buf
ref.fn.segments[ref.idx] = storedSegment{
kc: dn.fs,
- locator: locator,
+ locator: resp.Locator,
size: blocksize,
offset: offsets[idx],
length: len(data),
}
// caller must have write lock.
-func (dn *dirnode) memorySize() (size int64) {
+func (dn *dirnode) MemorySize() (size int64) {
for _, name := range dn.sortedNames() {
node := dn.inodes[name]
node.Lock()
defer node.Unlock()
switch node := node.(type) {
case *dirnode:
- size += node.memorySize()
+ size += node.MemorySize()
case *filenode:
for _, seg := range node.segments {
switch seg := seg.(type) {
}
func (dn *dirnode) loadManifest(txt string) error {
- var dirname string
- streams := strings.Split(txt, "\n")
- if streams[len(streams)-1] != "" {
+ streams := bytes.Split([]byte(txt), []byte{'\n'})
+ if len(streams[len(streams)-1]) != 0 {
return fmt.Errorf("line %d: no trailing newline", len(streams))
}
streams = streams[:len(streams)-1]
segments := []storedSegment{}
+ // To reduce allocs, we reuse a single "pathparts" slice
+ // (pre-split on "/" separators) for the duration of this
+ // func.
+ var pathparts []string
+	// To reduce allocs, we reuse a single "toks" slice holding up
+	// to three byte slices.
+ var toks = make([][]byte, 3)
+	// Similar to bytes.SplitN(src, []byte{c}, 3), but splits
+	// into the reused toks slice rather than allocating a new one,
+	// and returns the number of toks produced (1, 2, or 3).
+ splitToToks := func(src []byte, c rune) int {
+ c1 := bytes.IndexRune(src, c)
+ if c1 < 0 {
+ toks[0] = src
+ return 1
+ }
+ toks[0], src = src[:c1], src[c1+1:]
+ c2 := bytes.IndexRune(src, c)
+ if c2 < 0 {
+ toks[1] = src
+ return 2
+ }
+ toks[1], toks[2] = src[:c2], src[c2+1:]
+ return 3
+ }
for i, stream := range streams {
lineno := i + 1
var anyFileTokens bool
var pos int64
var segIdx int
segments = segments[:0]
- for i, token := range strings.Split(stream, " ") {
+ pathparts = nil
+ streamparts := 0
+ for i, token := range bytes.Split(stream, []byte{' '}) {
if i == 0 {
- dirname = manifestUnescape(token)
+ pathparts = strings.Split(manifestUnescape(string(token)), "/")
+ streamparts = len(pathparts)
continue
}
- if !strings.Contains(token, ":") {
+ if !bytes.ContainsRune(token, ':') {
if anyFileTokens {
return fmt.Errorf("line %d: bad file segment %q", lineno, token)
}
- toks := strings.SplitN(token, "+", 3)
- if len(toks) < 2 {
+ if splitToToks(token, '+') < 2 {
return fmt.Errorf("line %d: bad locator %q", lineno, token)
}
- length, err := strconv.ParseInt(toks[1], 10, 32)
+ length, err := strconv.ParseInt(string(toks[1]), 10, 32)
if err != nil || length < 0 {
return fmt.Errorf("line %d: bad locator %q", lineno, token)
}
segments = append(segments, storedSegment{
- locator: token,
+ locator: string(token),
size: int(length),
offset: 0,
length: int(length),
} else if len(segments) == 0 {
return fmt.Errorf("line %d: bad locator %q", lineno, token)
}
-
- toks := strings.SplitN(token, ":", 3)
- if len(toks) != 3 {
+ if splitToToks(token, ':') != 3 {
return fmt.Errorf("line %d: bad file segment %q", lineno, token)
}
anyFileTokens = true
- offset, err := strconv.ParseInt(toks[0], 10, 64)
+ offset, err := strconv.ParseInt(string(toks[0]), 10, 64)
if err != nil || offset < 0 {
return fmt.Errorf("line %d: bad file segment %q", lineno, token)
}
- length, err := strconv.ParseInt(toks[1], 10, 64)
+ length, err := strconv.ParseInt(string(toks[1]), 10, 64)
if err != nil || length < 0 {
return fmt.Errorf("line %d: bad file segment %q", lineno, token)
}
- name := dirname + "/" + manifestUnescape(toks[2])
- fnode, err := dn.createFileAndParents(name)
+ if !bytes.ContainsAny(toks[2], `\/`) {
+ // optimization for a common case
+ pathparts = append(pathparts[:streamparts], string(toks[2]))
+ } else {
+ pathparts = append(pathparts[:streamparts], strings.Split(manifestUnescape(string(toks[2])), "/")...)
+ }
+ fnode, err := dn.createFileAndParents(pathparts)
if fnode == nil && err == nil && length == 0 {
// Special case: an empty file used as
// a marker to preserve an otherwise
continue
}
if err != nil || (fnode == nil && length != 0) {
- return fmt.Errorf("line %d: cannot use path %q with length %d: %s", lineno, name, length, err)
+ return fmt.Errorf("line %d: cannot use name %q with length %d: %s", lineno, toks[2], length, err)
}
// Map the stream offset/range coordinates to
// block/offset/range coordinates and add
return fmt.Errorf("line %d: no file segments", lineno)
} else if len(segments) == 0 {
return fmt.Errorf("line %d: no locators", lineno)
- } else if dirname == "" {
+ } else if streamparts == 0 {
return fmt.Errorf("line %d: no stream name", lineno)
}
}
//
// If path is a "parent directory exists" marker (the last path
// component is "."), the returned values are both nil.
-func (dn *dirnode) createFileAndParents(path string) (fn *filenode, err error) {
+//
+// Newly added nodes have modtime==0. Caller is responsible for fixing
+// them with backdateTree.
+func (dn *dirnode) createFileAndParents(names []string) (fn *filenode, err error) {
var node inode = dn
- names := strings.Split(path, "/")
basename := names[len(names)-1]
for _, name := range names[:len(names)-1] {
switch name {
node = node.Parent()
continue
}
+ node.Lock()
+ unlock := node.Unlock
node, err = node.Child(name, func(child inode) (inode, error) {
if child == nil {
- child, err := node.FS().newNode(name, 0755|os.ModeDir, node.Parent().FileInfo().ModTime())
+ // note modtime will be fixed later in backdateTree()
+ child, err := node.FS().newNode(name, 0755|os.ModeDir, time.Time{})
if err != nil {
return nil, err
}
return child, nil
}
})
+ unlock()
if err != nil {
return
}
if basename == "." {
return
} else if !permittedName(basename) {
- err = fmt.Errorf("invalid file part %q in path %q", basename, path)
+ err = fmt.Errorf("invalid file part %q in path %q", basename, names)
return
}
+ node.Lock()
+ defer node.Unlock()
_, err = node.Child(basename, func(child inode) (inode, error) {
switch child := child.(type) {
case nil:
- child, err = node.FS().newNode(basename, 0755, node.FileInfo().ModTime())
+ child, err = node.FS().newNode(basename, 0755, time.Time{})
if err != nil {
return nil, err
}
import (
"bytes"
+ "context"
"crypto/md5"
"errors"
"fmt"
type keepClientStub struct {
blocks map[string][]byte
refreshable map[string]bool
- onPut func(bufcopy []byte) // called from PutB, before acquiring lock
+ onWrite func(bufcopy []byte) // called from WriteBlock, before acquiring lock
authToken string // client's auth token (used for signing locators)
sigkey string // blob signing key
sigttl time.Duration // blob signing ttl
return copy(p, buf[off:]), nil
}
-func (kcs *keepClientStub) PutB(p []byte) (string, int, error) {
- locator := SignLocator(fmt.Sprintf("%x+%d", md5.Sum(p), len(p)), kcs.authToken, time.Now().Add(kcs.sigttl), kcs.sigttl, []byte(kcs.sigkey))
- buf := make([]byte, len(p))
- copy(buf, p)
- if kcs.onPut != nil {
- kcs.onPut(buf)
+func (kcs *keepClientStub) BlockWrite(_ context.Context, opts BlockWriteOptions) (BlockWriteResponse, error) {
+ if opts.Data == nil {
+ panic("oops, stub is not made for this")
+ }
+ locator := SignLocator(fmt.Sprintf("%x+%d", md5.Sum(opts.Data), len(opts.Data)), kcs.authToken, time.Now().Add(kcs.sigttl), kcs.sigttl, []byte(kcs.sigkey))
+ buf := make([]byte, len(opts.Data))
+ copy(buf, opts.Data)
+ if kcs.onWrite != nil {
+ kcs.onWrite(buf)
+ }
+ for _, sc := range opts.StorageClasses {
+ if sc != "default" {
+ return BlockWriteResponse{}, fmt.Errorf("stub does not write storage class %q", sc)
+ }
}
kcs.Lock()
defer kcs.Unlock()
kcs.blocks[locator[:32]] = buf
- return locator, 1, nil
+ return BlockWriteResponse{Locator: locator, Replicas: 1}, nil
}
var reRemoteSignature = regexp.MustCompile(`\+[AR][^+]*`)
c.Check(ok, check.Equals, true)
}
+func (s *CollectionFSSuite) TestUnattainableStorageClasses(c *check.C) {
+ fs, err := (&Collection{
+ StorageClassesDesired: []string{"unobtainium"},
+ }).FileSystem(s.client, s.kc)
+ c.Assert(err, check.IsNil)
+
+ f, err := fs.OpenFile("/foo", os.O_CREATE|os.O_WRONLY, 0777)
+ c.Assert(err, check.IsNil)
+ _, err = f.Write([]byte("food"))
+ c.Assert(err, check.IsNil)
+ err = f.Close()
+ c.Assert(err, check.IsNil)
+ _, err = fs.MarshalManifest(".")
+ c.Assert(err, check.ErrorMatches, `.*stub does not write storage class \"unobtainium\"`)
+}
+
func (s *CollectionFSSuite) TestColonInFilename(c *check.C) {
fs, err := (&Collection{
ManifestText: "./foo:foo 3858f62230ac3c915f300c664312c63f+3 0:3:bar:bar\n",
proceed := make(chan struct{})
var started, concurrent int32
blk2done := false
- s.kc.onPut = func([]byte) {
+ s.kc.onWrite = func([]byte) {
atomic.AddInt32(&concurrent, 1)
switch atomic.AddInt32(&started, 1) {
case 1:
fs, err := (&Collection{}).FileSystem(s.client, s.kc)
c.Assert(err, check.IsNil)
- s.kc.onPut = func([]byte) {
+ s.kc.onWrite = func([]byte) {
// discard flushed data -- otherwise the stub will use
// unlimited memory
time.Sleep(time.Millisecond)
fs.Flush("", true)
}
- size := fs.memorySize()
+ size := fs.MemorySize()
if !c.Check(size <= 1<<24, check.Equals, true) {
- c.Logf("at dir%d fs.memorySize()=%d", i, size)
+ c.Logf("at dir%d fs.MemorySize()=%d", i, size)
return
}
}
c.Assert(err, check.IsNil)
var flushed int64
- s.kc.onPut = func(p []byte) {
+ s.kc.onWrite = func(p []byte) {
atomic.AddInt64(&flushed, int64(len(p)))
}
c.Assert(err, check.IsNil)
}
}
- c.Check(fs.memorySize(), check.Equals, int64(nDirs*67<<20))
+ c.Check(fs.MemorySize(), check.Equals, int64(nDirs*67<<20))
c.Check(flushed, check.Equals, int64(0))
waitForFlush := func(expectUnflushed, expectFlushed int64) {
- for deadline := time.Now().Add(5 * time.Second); fs.memorySize() > expectUnflushed && time.Now().Before(deadline); time.Sleep(10 * time.Millisecond) {
+ for deadline := time.Now().Add(5 * time.Second); fs.MemorySize() > expectUnflushed && time.Now().Before(deadline); time.Sleep(10 * time.Millisecond) {
}
- c.Check(fs.memorySize(), check.Equals, expectUnflushed)
+ c.Check(fs.MemorySize(), check.Equals, expectUnflushed)
c.Check(flushed, check.Equals, expectFlushed)
}
time.AfterFunc(10*time.Second, func() { close(timeout) })
var putCount, concurrency int64
var unflushed int64
- s.kc.onPut = func(p []byte) {
+ s.kc.onWrite = func(p []byte) {
defer atomic.AddInt64(&unflushed, -int64(len(p)))
cur := atomic.AddInt64(&concurrency, 1)
defer atomic.AddInt64(&concurrency, -1)
})
wrote := 0
- s.kc.onPut = func(p []byte) {
+ s.kc.onWrite = func(p []byte) {
s.kc.Lock()
s.kc.blocks = map[string][]byte{}
wrote++
}
func (s *CollectionFSSuite) TestFlushShort(c *check.C) {
- s.kc.onPut = func([]byte) {
+ s.kc.onWrite = func([]byte) {
s.kc.Lock()
s.kc.blocks = map[string][]byte{}
s.kc.Unlock()
}
}
+var bigmanifest = func() string {
+ var buf bytes.Buffer
+ for i := 0; i < 2000; i++ {
+ fmt.Fprintf(&buf, "./dir%d", i)
+ for i := 0; i < 100; i++ {
+ fmt.Fprintf(&buf, " d41d8cd98f00b204e9800998ecf8427e+99999")
+ }
+ for i := 0; i < 2000; i++ {
+ fmt.Fprintf(&buf, " 1200000:300000:file%d", i)
+ }
+ fmt.Fprintf(&buf, "\n")
+ }
+ return buf.String()
+}()
+
+func (s *CollectionFSSuite) BenchmarkParseManifest(c *check.C) {
+ DebugLocksPanicMode = false
+ c.Logf("test manifest is %d bytes", len(bigmanifest))
+ for i := 0; i < c.N; i++ {
+ fs, err := (&Collection{ManifestText: bigmanifest}).FileSystem(s.client, s.kc)
+ c.Check(err, check.IsNil)
+ c.Check(fs, check.NotNil)
+ }
+}
+
func (s *CollectionFSSuite) checkMemSize(c *check.C, f File) {
fn := f.(*filehandle).inode.(*filenode)
var memsize int64
func (dn *deferrednode) RUnlock() { dn.realinode().RUnlock() }
func (dn *deferrednode) FS() FileSystem { return dn.currentinode().FS() }
func (dn *deferrednode) Parent() inode { return dn.currentinode().Parent() }
+func (dn *deferrednode) MemorySize() int64 { return dn.currentinode().MemorySize() }
return nil, err
}
for _, child := range all {
+ ln.treenode.Lock()
_, err = ln.treenode.Child(child.FileInfo().Name(), func(inode) (inode, error) {
return child, nil
})
+ ln.treenode.Unlock()
if err != nil {
return nil, err
}
} else if strings.Contains(coll.UUID, "-4zz18-") {
return deferredCollectionFS(fs, parent, coll), nil
} else {
- log.Printf("projectnode: unrecognized UUID in response: %q", coll.UUID)
+ log.Printf("group contents: unrecognized UUID in response: %q", coll.UUID)
return nil, ErrInvalidArgument
}
}
var inodes []inode
- // Note: the "filters" slice's backing array might be reused
- // by append(filters,...) below. This isn't goroutine safe,
- // but all accesses are in the same goroutine, so it's OK.
- filters := []Filter{{"owner_uuid", "=", uuid}}
- params := ResourceListParams{
- Count: "none",
- Filters: filters,
- Order: "uuid",
- }
- for {
- var resp CollectionList
- err = fs.RequestAndDecode(&resp, "GET", "arvados/v1/collections", nil, params)
- if err != nil {
- return nil, err
+ // When #17424 is resolved, remove the outer loop here and use
+ // []string{"arvados#collection", "arvados#group"} directly as the uuid
+ // filter.
+ for _, class := range []string{"arvados#collection", "arvados#group"} {
+ // Note: the "filters" slice's backing array might be reused
+ // by append(filters,...) below. This isn't goroutine safe,
+ // but all accesses are in the same goroutine, so it's OK.
+ filters := []Filter{
+ {"uuid", "is_a", class},
}
- if len(resp.Items) == 0 {
- break
- }
- for _, i := range resp.Items {
- coll := i
- if fs.forwardSlashNameSubstitution != "" {
- coll.Name = strings.Replace(coll.Name, "/", fs.forwardSlashNameSubstitution, -1)
- }
- if !permittedName(coll.Name) {
- continue
- }
- inodes = append(inodes, deferredCollectionFS(fs, parent, coll))
+ if class == "arvados#group" {
+ filters = append(filters, Filter{"group_class", "=", "project"})
}
- params.Filters = append(filters, Filter{"uuid", ">", resp.Items[len(resp.Items)-1].UUID})
- }
- filters = append(filters, Filter{"group_class", "=", "project"})
- params.Filters = filters
- for {
- var resp GroupList
- err = fs.RequestAndDecode(&resp, "GET", "arvados/v1/groups", nil, params)
- if err != nil {
- return nil, err
+ params := ResourceListParams{
+ Count: "none",
+ Filters: filters,
+ Order: "uuid",
}
- if len(resp.Items) == 0 {
- break
- }
- for _, group := range resp.Items {
- if fs.forwardSlashNameSubstitution != "" {
- group.Name = strings.Replace(group.Name, "/", fs.forwardSlashNameSubstitution, -1)
+
+ for {
+ // The groups contents endpoint returns Collection and Group (project)
+ // objects. This function only accesses the UUID and Name fields. Both
+ // collections and groups have those fields, so it is easier to just treat
+ // the ObjectList that comes back as a CollectionList.
+ var resp CollectionList
+ err = fs.RequestAndDecode(&resp, "GET", "arvados/v1/groups/"+uuid+"/contents", nil, params)
+ if err != nil {
+ return nil, err
+ }
+ if len(resp.Items) == 0 {
+ break
}
- if !permittedName(group.Name) {
- continue
+ for _, i := range resp.Items {
+ if fs.forwardSlashNameSubstitution != "" {
+ i.Name = strings.Replace(i.Name, "/", fs.forwardSlashNameSubstitution, -1)
+ }
+ if !permittedName(i.Name) {
+ continue
+ }
+ if strings.Contains(i.UUID, "-j7d0g-") {
+ inodes = append(inodes, fs.newProjectNode(parent, i.Name, i.UUID))
+ } else if strings.Contains(i.UUID, "-4zz18-") {
+ inodes = append(inodes, deferredCollectionFS(fs, parent, i))
+ } else {
+ log.Printf("group contents: unrecognized UUID in response: %q", i.UUID)
+ return nil, ErrInvalidArgument
+ }
}
- inodes = append(inodes, fs.newProjectNode(parent, group.Name, group.UUID))
+ params.Filters = append(filters, Filter{"uuid", ">", resp.Items[len(resp.Items)-1].UUID})
}
- params.Filters = append(filters, Filter{"uuid", ">", resp.Items[len(resp.Items)-1].UUID})
}
return inodes, nil
}
return sc.Client.RequestAndDecode(dst, method, path, body, params)
}
+func (s *SiteFSSuite) TestFilterGroup(c *check.C) {
+ // Make sure that a collection and group that match the filter are present,
+ // and that a group that does not match the filter is not present.
+ s.fs.MountProject("fg", fixtureThisFilterGroupUUID)
+
+ _, err := s.fs.OpenFile("/fg/baz_file", 0, 0)
+ c.Assert(err, check.IsNil)
+
+ _, err = s.fs.OpenFile("/fg/A Subproject", 0, 0)
+ c.Assert(err, check.IsNil)
+
+ _, err = s.fs.OpenFile("/fg/A Project", 0, 0)
+ c.Assert(err, check.Not(check.IsNil))
+
+ // An empty filter means everything that is visible should be returned.
+ s.fs.MountProject("fg2", fixtureAFilterGroupTwoUUID)
+
+ _, err = s.fs.OpenFile("/fg2/baz_file", 0, 0)
+ c.Assert(err, check.IsNil)
+
+ _, err = s.fs.OpenFile("/fg2/A Subproject", 0, 0)
+ c.Assert(err, check.IsNil)
+
+ _, err = s.fs.OpenFile("/fg2/A Project", 0, 0)
+ c.Assert(err, check.IsNil)
+
+ // An 'is_a' 'arvados#collection' filter means only collections should be returned.
+ s.fs.MountProject("fg3", fixtureAFilterGroupThreeUUID)
+
+ _, err = s.fs.OpenFile("/fg3/baz_file", 0, 0)
+ c.Assert(err, check.IsNil)
+
+ _, err = s.fs.OpenFile("/fg3/A Subproject", 0, 0)
+ c.Assert(err, check.Not(check.IsNil))
+
+ // An 'exists' 'arvados#collection' filter means only collections with certain properties should be returned.
+ s.fs.MountProject("fg4", fixtureAFilterGroupFourUUID)
+
+ _, err = s.fs.Stat("/fg4/collection with list property with odd values")
+ c.Assert(err, check.IsNil)
+
+ _, err = s.fs.Stat("/fg4/collection with list property with even values")
+ c.Assert(err, check.IsNil)
+
+ // A 'contains' 'arvados#collection' filter means only collections with certain properties should be returned.
+ s.fs.MountProject("fg5", fixtureAFilterGroupFiveUUID)
+
+ _, err = s.fs.Stat("/fg5/collection with list property with odd values")
+ c.Assert(err, check.IsNil)
+
+ _, err = s.fs.Stat("/fg5/collection with list property with string value")
+ c.Assert(err, check.IsNil)
+
+ _, err = s.fs.Stat("/fg5/collection with prop2 5")
+ c.Assert(err, check.Not(check.IsNil))
+
+ _, err = s.fs.Stat("/fg5/collection with list property with even values")
+ c.Assert(err, check.Not(check.IsNil))
+}
+
func (s *SiteFSSuite) TestCurrentUserHome(c *check.C) {
s.fs.MountProject("home", "")
s.testHomeProject(c, "/home")
}
func (fs *customFileSystem) MountByID(mount string) {
+ fs.root.treenode.Lock()
+ defer fs.root.treenode.Unlock()
fs.root.treenode.Child(mount, func(inode) (inode, error) {
return &vdirnode{
treenode: treenode{
}
func (fs *customFileSystem) MountProject(mount, uuid string) {
+ fs.root.treenode.Lock()
+ defer fs.root.treenode.Unlock()
fs.root.treenode.Child(mount, func(inode) (inode, error) {
return fs.newProjectNode(fs.root, mount, uuid), nil
})
}
func (fs *customFileSystem) MountUsers(mount string) {
+ fs.root.treenode.Lock()
+ defer fs.root.treenode.Unlock()
fs.root.treenode.Child(mount, func(inode) (inode, error) {
return &lookupnode{
stale: fs.Stale,
// Importing arvadostest would be an import cycle, so these
// fixtures are duplicated here [until fs moves to a separate
// package].
- fixtureActiveToken = "3kg6k6lzmp9kj5cpkcoxie963cmvjahbt2fod9zru30k1jqdmi"
- fixtureAProjectUUID = "zzzzz-j7d0g-v955i6s2oi1cbso"
- fixtureFooAndBarFilesInDirUUID = "zzzzz-4zz18-foonbarfilesdir"
- fixtureFooCollectionName = "zzzzz-4zz18-fy296fx3hot09f7 added sometime"
- fixtureFooCollectionPDH = "1f4b0bc7583c2a7f9102c395f4ffc5e3+45"
- fixtureFooCollection = "zzzzz-4zz18-fy296fx3hot09f7"
- fixtureNonexistentCollection = "zzzzz-4zz18-totallynotexist"
- fixtureBlobSigningKey = "zfhgfenhffzltr9dixws36j1yhksjoll2grmku38mi7yxd66h5j4q9w4jzanezacp8s6q0ro3hxakfye02152hncy6zml2ed0uc"
- fixtureBlobSigningTTL = 336 * time.Hour
+ fixtureActiveToken = "3kg6k6lzmp9kj5cpkcoxie963cmvjahbt2fod9zru30k1jqdmi"
+ fixtureAProjectUUID = "zzzzz-j7d0g-v955i6s2oi1cbso"
+ fixtureThisFilterGroupUUID = "zzzzz-j7d0g-thisfiltergroup"
+ fixtureAFilterGroupTwoUUID = "zzzzz-j7d0g-afiltergrouptwo"
+ fixtureAFilterGroupThreeUUID = "zzzzz-j7d0g-filtergroupthre"
+ fixtureAFilterGroupFourUUID = "zzzzz-j7d0g-filtergroupfour"
+ fixtureAFilterGroupFiveUUID = "zzzzz-j7d0g-filtergroupfive"
+ fixtureFooAndBarFilesInDirUUID = "zzzzz-4zz18-foonbarfilesdir"
+ fixtureFooCollectionName = "zzzzz-4zz18-fy296fx3hot09f7 added sometime"
+ fixtureFooCollectionPDH = "1f4b0bc7583c2a7f9102c395f4ffc5e3+45"
+ fixtureFooCollection = "zzzzz-4zz18-fy296fx3hot09f7"
+ fixtureNonexistentCollection = "zzzzz-4zz18-totallynotexist"
+ fixtureStorageClassesDesiredArchive = "zzzzz-4zz18-3t236wr12769qqa"
+ fixtureBlobSigningKey = "zfhgfenhffzltr9dixws36j1yhksjoll2grmku38mi7yxd66h5j4q9w4jzanezacp8s6q0ro3hxakfye02152hncy6zml2ed0uc"
+ fixtureBlobSigningTTL = 336 * time.Hour
)
var _ = check.Suite(&SiteFSSuite{})
+func init() {
+ // Enable DebugLocksPanicMode sometimes. Don't enable it all
+ // the time, though -- it adds many calls to time.Sleep(),
+ // which could hide different bugs.
+ if time.Now().Second()&1 == 0 {
+ DebugLocksPanicMode = true
+ }
+}
+
type SiteFSSuite struct {
client *Client
fs CustomFileSystem
c.Check(len(fis), check.Equals, 0)
}
+func (s *SiteFSSuite) TestUpdateStorageClasses(c *check.C) {
+ f, err := s.fs.OpenFile("/by_id/"+fixtureStorageClassesDesiredArchive+"/newfile", os.O_CREATE|os.O_RDWR, 0777)
+ c.Assert(err, check.IsNil)
+ _, err = f.Write([]byte("nope"))
+ c.Assert(err, check.IsNil)
+ err = f.Close()
+ c.Assert(err, check.IsNil)
+ err = s.fs.Sync()
+ c.Assert(err, check.ErrorMatches, `.*stub does not write storage class "archive"`)
+}
+
func (s *SiteFSSuite) TestByUUIDAndPDH(c *check.C) {
f, err := s.fs.Open("/by_id")
c.Assert(err, check.IsNil)
package arvados
+import (
+ "time"
+)
+
// Group is an arvados#group record
type Group struct {
- UUID string `json:"uuid"`
- Name string `json:"name"`
- OwnerUUID string `json:"owner_uuid"`
- GroupClass string `json:"group_class"`
+ UUID string `json:"uuid"`
+ Name string `json:"name"`
+ OwnerUUID string `json:"owner_uuid"`
+ GroupClass string `json:"group_class"`
+ Etag string `json:"etag"`
+ Href string `json:"href"`
+ TrashAt *time.Time `json:"trash_at"`
+ CreatedAt time.Time `json:"created_at"`
+ ModifiedAt time.Time `json:"modified_at"`
+ ModifiedByClientUUID string `json:"modified_by_client_uuid"`
+ ModifiedByUserUUID string `json:"modified_by_user_uuid"`
+ DeleteAt *time.Time `json:"delete_at"`
+ IsTrashed bool `json:"is_trashed"`
+ Properties map[string]interface{} `json:"properties"`
+ WritableBy []string `json:"writable_by,omitempty"`
+ Description string `json:"description"`
}
// GroupList is an arvados#groupList resource.
type GroupList struct {
- Items []Group `json:"items"`
- ItemsAvailable int `json:"items_available"`
- Offset int `json:"offset"`
- Limit int `json:"limit"`
+ Items []Group `json:"items"`
+ ItemsAvailable int `json:"items_available"`
+ Offset int `json:"offset"`
+ Limit int `json:"limit"`
+ Included []interface{} `json:"included"`
+}
+
+// ObjectList is an arvados#objectList resource.
+type ObjectList struct {
+ Included []interface{} `json:"included"`
+ Items []interface{} `json:"items"`
+ ItemsAvailable int `json:"items_available"`
+ Offset int `json:"offset"`
+ Limit int `json:"limit"`
}
func (g Group) resourceName() string {
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: Apache-2.0
+
+package arvados
+
+import "time"
+
+// Job is an arvados#job record
+type Job struct {
+ UUID string `json:"uuid"`
+ Etag string `json:"etag"`
+ OwnerUUID string `json:"owner_uuid"`
+ ModifiedByClientUUID string `json:"modified_by_client_uuid"`
+ ModifiedByUserUUID string `json:"modified_by_user_uuid"`
+ ModifiedAt time.Time `json:"modified_at"`
+ SubmitID string `json:"submit_id"`
+ Script string `json:"script"`
+ CancelledByClientUUID string `json:"cancelled_by_client_uuid"`
+ CancelledByUserUUID string `json:"cancelled_by_user_uuid"`
+ CancelledAt time.Time `json:"cancelled_at"`
+ StartedAt time.Time `json:"started_at"`
+ FinishedAt time.Time `json:"finished_at"`
+ Running bool `json:"running"`
+ Success bool `json:"success"`
+ Output string `json:"output"`
+ CreatedAt time.Time `json:"created_at"`
+ UpdatedAt time.Time `json:"updated_at"`
+ IsLockedByUUID string `json:"is_locked_by_uuid"`
+ Log string `json:"log"`
+ TasksSummary map[string]interface{} `json:"tasks_summary"`
+ RuntimeConstraints map[string]interface{} `json:"runtime_constraints"`
+ Nondeterministic bool `json:"nondeterministic"`
+ Repository string `json:"repository"`
+ SuppliedScriptVersion string `json:"supplied_script_version"`
+ DockerImageLocator string `json:"docker_image_locator"`
+ Priority int `json:"priority"`
+ Description string `json:"description"`
+ State string `json:"state"`
+ ArvadosSDKVersion string `json:"arvados_sdk_version"`
+ Components map[string]interface{} `json:"components"`
+ ScriptParametersDigest string `json:"script_parameters_digest"`
+ WritableBy []string `json:"writable_by,omitempty"`
+}
+
+func (g Job) resourceName() string {
+ return "job"
+}
package arvados
+import "time"
+
// Link is an arvados#link record
type Link struct {
- UUID string `json:"uuid,omiempty"`
- OwnerUUID string `json:"owner_uuid"`
- Name string `json:"name"`
- LinkClass string `json:"link_class"`
- HeadUUID string `json:"head_uuid"`
- HeadKind string `json:"head_kind"`
- TailUUID string `json:"tail_uuid"`
- TailKind string `json:"tail_kind"`
- Properties map[string]interface{} `json:"properties"`
+ UUID string `json:"uuid,omitempty"`
+ Etag string `json:"etag"`
+ Href string `json:"href"`
+ OwnerUUID string `json:"owner_uuid"`
+ Name string `json:"name"`
+ LinkClass string `json:"link_class"`
+ CreatedAt time.Time `json:"created_at"`
+ ModifiedAt time.Time `json:"modified_at"`
+ ModifiedByClientUUID string `json:"modified_by_client_uuid"`
+ ModifiedByUserUUID string `json:"modified_by_user_uuid"`
+ HeadUUID string `json:"head_uuid"`
+ HeadKind string `json:"head_kind"`
+ TailUUID string `json:"tail_uuid"`
+ TailKind string `json:"tail_kind"`
+ Properties map[string]interface{} `json:"properties"`
}
// LinkList is an arvados#linkList resource.
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: Apache-2.0
+
+package arvados
+
+import "time"
+
+// PipelineInstance is an arvados#pipelineInstance record
+type PipelineInstance struct {
+ UUID string `json:"uuid"`
+ Etag string `json:"etag"`
+ OwnerUUID string `json:"owner_uuid"`
+ CreatedAt time.Time `json:"created_at"`
+ ModifiedByClientUUID string `json:"modified_by_client_uuid"`
+ ModifiedByUserUUID string `json:"modified_by_user_uuid"`
+ ModifiedAt time.Time `json:"modified_at"`
+ PipelineTemplateUUID string `json:"pipeline_template_uuid"`
+ Name string `json:"name"`
+ Components map[string]interface{} `json:"components"`
+ UpdatedAt time.Time `json:"updated_at"`
+ Properties map[string]interface{} `json:"properties"`
+ State string `json:"state"`
+ ComponentsSummary map[string]interface{} `json:"components_summary"`
+ StartedAt time.Time `json:"started_at"`
+ FinishedAt time.Time `json:"finished_at"`
+ Description string `json:"description"`
+ WritableBy []string `json:"writable_by,omitempty"`
+}
+
+func (g PipelineInstance) resourceName() string {
+ return "pipelineInstance"
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: Apache-2.0
+
+package arvados
+
+import "time"
+
+// PipelineTemplate is an arvados#pipelineTemplate record
+type PipelineTemplate struct {
+ UUID string `json:"uuid"`
+ Etag string `json:"etag"`
+ OwnerUUID string `json:"owner_uuid"`
+ CreatedAt time.Time `json:"created_at"`
+ ModifiedByClientUUID string `json:"modified_by_client_uuid"`
+ ModifiedByUserUUID string `json:"modified_by_user_uuid"`
+ ModifiedAt time.Time `json:"modified_at"`
+ Name string `json:"name"`
+ Components map[string]interface{} `json:"components"`
+ UpdatedAt time.Time `json:"updated_at"`
+ Description string `json:"description"`
+ WritableBy []string `json:"writable_by,omitempty"`
+}
+
+func (g PipelineTemplate) resourceName() string {
+ return "pipelineTemplate"
+}
// UnmarshalJSON decodes a JSON array to a Filter.
func (f *Filter) UnmarshalJSON(data []byte) error {
- var elements []interface{}
- err := json.Unmarshal(data, &elements)
+ var decoded interface{}
+ err := json.Unmarshal(data, &decoded)
if err != nil {
return err
}
- if len(elements) != 3 {
- return fmt.Errorf("invalid filter %q: must have 3 elements", data)
- }
- attr, ok := elements[0].(string)
- if !ok {
- return fmt.Errorf("invalid filter attr %q", elements[0])
- }
- op, ok := elements[1].(string)
- if !ok {
- return fmt.Errorf("invalid filter operator %q", elements[1])
- }
- operand := elements[2]
- switch operand.(type) {
- case string, float64, []interface{}, nil, bool:
+ switch decoded := decoded.(type) {
+ case string:
+ // Accept "(foo < bar)" as a more obvious way to spell
+ // ["(foo < bar)","=",true]
+ *f = Filter{decoded, "=", true}
+ case []interface{}:
+ if len(decoded) != 3 {
+ return fmt.Errorf("invalid filter %q: must have 3 elements", data)
+ }
+ attr, ok := decoded[0].(string)
+ if !ok {
+ return fmt.Errorf("invalid filter attr %q", decoded[0])
+ }
+ op, ok := decoded[1].(string)
+ if !ok {
+ return fmt.Errorf("invalid filter operator %q", decoded[1])
+ }
+ operand := decoded[2]
+ switch operand.(type) {
+ case string, float64, []interface{}, nil, bool:
+ default:
+ return fmt.Errorf("invalid filter operand %q", decoded[2])
+ }
+ *f = Filter{attr, op, operand}
default:
- return fmt.Errorf("invalid filter operand %q", elements[2])
+ return fmt.Errorf("invalid filter: json decoded as %T instead of array or string", decoded)
}
- *f = Filter{attr, op, operand}
return nil
}
package arvados
import (
- "bytes"
"encoding/json"
- "testing"
"time"
+
+ check "gopkg.in/check.v1"
)
-func TestMarshalFiltersWithNanoseconds(t *testing.T) {
+var _ = check.Suite(&filterEncodingSuite{})
+
+type filterEncodingSuite struct{}
+
+func (s *filterEncodingSuite) TestMarshalNanoseconds(c *check.C) {
t0 := time.Now()
t0str := t0.Format(time.RFC3339Nano)
buf, err := json.Marshal([]Filter{
{Attr: "modified_at", Operator: "=", Operand: t0}})
- if err != nil {
- t.Fatal(err)
- }
- if expect := []byte(`[["modified_at","=","` + t0str + `"]]`); 0 != bytes.Compare(buf, expect) {
- t.Errorf("Encoded as %q, expected %q", buf, expect)
- }
+ c.Assert(err, check.IsNil)
+ c.Check(string(buf), check.Equals, `[["modified_at","=","`+t0str+`"]]`)
}
-func TestMarshalFiltersWithNil(t *testing.T) {
+func (s *filterEncodingSuite) TestMarshalNil(c *check.C) {
buf, err := json.Marshal([]Filter{
{Attr: "modified_at", Operator: "=", Operand: nil}})
- if err != nil {
- t.Fatal(err)
- }
- if expect := []byte(`[["modified_at","=",null]]`); 0 != bytes.Compare(buf, expect) {
- t.Errorf("Encoded as %q, expected %q", buf, expect)
- }
+ c.Assert(err, check.IsNil)
+ c.Check(string(buf), check.Equals, `[["modified_at","=",null]]`)
}
-func TestUnmarshalFiltersWithNil(t *testing.T) {
+func (s *filterEncodingSuite) TestUnmarshalNil(c *check.C) {
buf := []byte(`["modified_at","=",null]`)
- f := &Filter{}
+ var f Filter
err := f.UnmarshalJSON(buf)
- if err != nil {
- t.Fatal(err)
- }
- expect := Filter{Attr: "modified_at", Operator: "=", Operand: nil}
- if f.Attr != expect.Attr || f.Operator != expect.Operator || f.Operand != expect.Operand {
- t.Errorf("Decoded as %q, expected %q", f, expect)
- }
+ c.Assert(err, check.IsNil)
+ c.Check(f, check.DeepEquals, Filter{Attr: "modified_at", Operator: "=", Operand: nil})
}
-func TestMarshalFiltersWithBoolean(t *testing.T) {
+func (s *filterEncodingSuite) TestMarshalBoolean(c *check.C) {
buf, err := json.Marshal([]Filter{
{Attr: "is_active", Operator: "=", Operand: true}})
- if err != nil {
- t.Fatal(err)
- }
- if expect := []byte(`[["is_active","=",true]]`); 0 != bytes.Compare(buf, expect) {
- t.Errorf("Encoded as %q, expected %q", buf, expect)
- }
+ c.Assert(err, check.IsNil)
+ c.Check(string(buf), check.Equals, `[["is_active","=",true]]`)
}
-func TestUnmarshalFiltersWithBoolean(t *testing.T) {
+func (s *filterEncodingSuite) TestUnmarshalBoolean(c *check.C) {
buf := []byte(`["is_active","=",true]`)
- f := &Filter{}
+ var f Filter
+ err := f.UnmarshalJSON(buf)
+ c.Assert(err, check.IsNil)
+ c.Check(f, check.DeepEquals, Filter{Attr: "is_active", Operator: "=", Operand: true})
+}
+
+func (s *filterEncodingSuite) TestUnmarshalBooleanExpression(c *check.C) {
+ buf := []byte(`"(foo < bar)"`)
+ var f Filter
err := f.UnmarshalJSON(buf)
- if err != nil {
- t.Fatal(err)
- }
- expect := Filter{Attr: "is_active", Operator: "=", Operand: true}
- if f.Attr != expect.Attr || f.Operator != expect.Operator || f.Operand != expect.Operand {
- t.Errorf("Decoded as %q, expected %q", f, expect)
- }
+ c.Assert(err, check.IsNil)
+ c.Check(f, check.DeepEquals, Filter{Attr: "(foo < bar)", Operator: "=", Operand: true})
}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: Apache-2.0
+
+package arvados
+
+import "time"
+
+// Trait is an arvados#trait record
+type Trait struct {
+ UUID string `json:"uuid"`
+ Etag string `json:"etag"`
+ OwnerUUID string `json:"owner_uuid"`
+ CreatedAt time.Time `json:"created_at"`
+ ModifiedByClientUUID string `json:"modified_by_client_uuid"`
+ ModifiedByUserUUID string `json:"modified_by_user_uuid"`
+ ModifiedAt time.Time `json:"modified_at"`
+ Name string `json:"name"`
+ Properties map[string]interface{} `json:"properties"`
+ UpdatedAt time.Time `json:"updated_at"`
+ WritableBy []string `json:"writable_by,omitempty"`
+}
+
+func (g Trait) resourceName() string {
+ return "trait"
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: Apache-2.0
+
+package arvados
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "reflect"
+ "strings"
+)
+
+type Vocabulary struct {
+ reservedTagKeys map[string]bool `json:"-"`
+ StrictTags bool `json:"strict_tags"`
+ Tags map[string]VocabularyTag `json:"tags"`
+}
+
+type VocabularyTag struct {
+ Strict bool `json:"strict"`
+ Labels []VocabularyLabel `json:"labels"`
+ Values map[string]VocabularyTagValue `json:"values"`
+}
+
+// Cannot have a constant map in Go, so we have to use a function
+func (v *Vocabulary) systemTagKeys() map[string]bool {
+ return map[string]bool{
+ "type": true,
+ "template_uuid": true,
+ "groups": true,
+ "username": true,
+ "image_timestamp": true,
+ "docker-image-repo-tag": true,
+ "filters": true,
+ "container_request": true,
+ }
+}
+
+type VocabularyLabel struct {
+ Label string `json:"label"`
+}
+
+type VocabularyTagValue struct {
+ Labels []VocabularyLabel `json:"labels"`
+}
+
+// NewVocabulary creates a new Vocabulary from a JSON definition and a list
+// of reserved tag keys that will get special treatment when strict mode is
+// enabled.
+func NewVocabulary(data []byte, managedTagKeys []string) (voc *Vocabulary, err error) {
+	if bytes.Equal(data, []byte("")) {
+ return &Vocabulary{}, nil
+ }
+ err = json.Unmarshal(data, &voc)
+ if err != nil {
+		return nil, fmt.Errorf("invalid JSON format: %q", err)
+ }
+ if reflect.DeepEqual(voc, &Vocabulary{}) {
+ return nil, fmt.Errorf("JSON data provided doesn't match Vocabulary format: %q", data)
+ }
+ voc.reservedTagKeys = make(map[string]bool)
+ for _, managedKey := range managedTagKeys {
+ voc.reservedTagKeys[managedKey] = true
+ }
+ for systemKey := range voc.systemTagKeys() {
+ voc.reservedTagKeys[systemKey] = true
+ }
+ err = voc.validate()
+ if err != nil {
+ return nil, err
+ }
+ return voc, nil
+}
+
+func (v *Vocabulary) validate() error {
+ if v == nil {
+ return nil
+ }
+ tagKeys := map[string]string{}
+ // Checks for Vocabulary strictness
+ if v.StrictTags && len(v.Tags) == 0 {
+ return fmt.Errorf("vocabulary is strict but no tags are defined")
+ }
+ // Checks for collisions between tag keys, reserved tag keys
+ // and tag key labels.
+ for key := range v.Tags {
+ if v.reservedTagKeys[key] {
+ return fmt.Errorf("tag key %q is reserved", key)
+ }
+ lcKey := strings.ToLower(key)
+ if tagKeys[lcKey] != "" {
+ return fmt.Errorf("duplicate tag key %q", key)
+ }
+ tagKeys[lcKey] = key
+ for _, lbl := range v.Tags[key].Labels {
+ label := strings.ToLower(lbl.Label)
+ if tagKeys[label] != "" {
+ return fmt.Errorf("tag label %q for key %q already seen as a tag key or label", lbl.Label, key)
+ }
+ tagKeys[label] = lbl.Label
+ }
+ // Checks for value strictness
+ if v.Tags[key].Strict && len(v.Tags[key].Values) == 0 {
+ return fmt.Errorf("tag key %q is configured as strict but doesn't provide values", key)
+ }
+ // Checks for collisions between tag values and tag value labels.
+ tagValues := map[string]string{}
+ for val := range v.Tags[key].Values {
+ lcVal := strings.ToLower(val)
+ if tagValues[lcVal] != "" {
+ return fmt.Errorf("duplicate tag value %q for tag %q", val, key)
+ }
+ // Checks for collisions between labels from different values.
+ tagValues[lcVal] = val
+ for _, tagLbl := range v.Tags[key].Values[val].Labels {
+ label := strings.ToLower(tagLbl.Label)
+ if tagValues[label] != "" && tagValues[label] != val {
+ return fmt.Errorf("tag value label %q for pair (%q:%q) already seen on value %q", tagLbl.Label, key, val, tagValues[label])
+ }
+ tagValues[label] = val
+ }
+ }
+ }
+ return nil
+}
+
+func (v *Vocabulary) getLabelsToKeys() (labels map[string]string) {
+ if v == nil {
+ return
+ }
+ labels = make(map[string]string)
+ for key, val := range v.Tags {
+ for _, lbl := range val.Labels {
+ label := strings.ToLower(lbl.Label)
+ labels[label] = key
+ }
+ }
+ return labels
+}
+
+func (v *Vocabulary) getLabelsToValues(key string) (labels map[string]string) {
+ if v == nil {
+ return
+ }
+ labels = make(map[string]string)
+ if _, ok := v.Tags[key]; ok {
+ for val := range v.Tags[key].Values {
+ labels[strings.ToLower(val)] = val
+ for _, tagLbl := range v.Tags[key].Values[val].Labels {
+ label := strings.ToLower(tagLbl.Label)
+ labels[label] = val
+ }
+ }
+ }
+ return labels
+}
+
+func (v *Vocabulary) checkValue(key, val string) error {
+ if _, ok := v.Tags[key].Values[val]; !ok {
+ lcVal := strings.ToLower(val)
+ correctValue, ok := v.getLabelsToValues(key)[lcVal]
+ if ok {
+ return fmt.Errorf("tag value %q for key %q is an alias, must be provided as %q", val, key, correctValue)
+ } else if v.Tags[key].Strict {
+ return fmt.Errorf("tag value %q is not valid for key %q", val, key)
+ }
+ }
+ return nil
+}
+
+// Check validates the given data against the vocabulary.
+func (v *Vocabulary) Check(data map[string]interface{}) error {
+ if v == nil {
+ return nil
+ }
+ for key, val := range data {
+ // Checks for key validity
+ if v.reservedTagKeys[key] {
+			// Reserved keys are allowed even when they are not defined
+			// in the vocabulary, regardless of its strictness.
+ continue
+ }
+ if _, ok := v.Tags[key]; !ok {
+ lcKey := strings.ToLower(key)
+ correctKey, ok := v.getLabelsToKeys()[lcKey]
+ if ok {
+ return fmt.Errorf("tag key %q is an alias, must be provided as %q", key, correctKey)
+ } else if v.StrictTags {
+ return fmt.Errorf("tag key %q is not defined in the vocabulary", key)
+ }
+ // If the key is not defined, we don't need to check the value
+ continue
+ }
+ // Checks for value validity -- key is defined
+ switch val := val.(type) {
+ case string:
+ err := v.checkValue(key, val)
+ if err != nil {
+ return err
+ }
+ case []interface{}:
+ for _, singleVal := range val {
+ switch singleVal := singleVal.(type) {
+ case string:
+ err := v.checkValue(key, singleVal)
+ if err != nil {
+ return err
+ }
+ default:
+ return fmt.Errorf("value list element type for tag key %q was %T, but expected a string", key, singleVal)
+ }
+ }
+ default:
+ return fmt.Errorf("value type for tag key %q was %T, but expected a string or list of strings", key, val)
+ }
+ }
+ return nil
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: Apache-2.0
+
+package arvados
+
+import (
+ "encoding/json"
+
+ check "gopkg.in/check.v1"
+)
+
+type VocabularySuite struct {
+ testVoc *Vocabulary
+}
+
+var _ = check.Suite(&VocabularySuite{})
+
+func (s *VocabularySuite) SetUpTest(c *check.C) {
+ s.testVoc = &Vocabulary{
+ reservedTagKeys: map[string]bool{
+ "reservedKey": true,
+ },
+ StrictTags: false,
+ Tags: map[string]VocabularyTag{
+ "IDTAGANIMALS": {
+ Strict: false,
+ Labels: []VocabularyLabel{{Label: "Animal"}, {Label: "Creature"}},
+ Values: map[string]VocabularyTagValue{
+ "IDVALANIMAL1": {
+ Labels: []VocabularyLabel{{Label: "Human"}, {Label: "Homo sapiens"}},
+ },
+ "IDVALANIMAL2": {
+ Labels: []VocabularyLabel{{Label: "Elephant"}, {Label: "Loxodonta"}},
+ },
+ },
+ },
+ "IDTAGIMPORTANCE": {
+ Strict: true,
+ Labels: []VocabularyLabel{{Label: "Importance"}, {Label: "Priority"}},
+ Values: map[string]VocabularyTagValue{
+ "IDVAL3": {
+ Labels: []VocabularyLabel{{Label: "Low"}, {Label: "Low priority"}},
+ },
+ "IDVAL2": {
+ Labels: []VocabularyLabel{{Label: "Medium"}, {Label: "Medium priority"}},
+ },
+ "IDVAL1": {
+ Labels: []VocabularyLabel{{Label: "High"}, {Label: "High priority"}},
+ },
+ },
+ },
+ "IDTAGCOMMENT": {
+ Strict: false,
+ Labels: []VocabularyLabel{{Label: "Comment"}},
+ },
+ },
+ }
+ err := s.testVoc.validate()
+ c.Assert(err, check.IsNil)
+}
+
+func (s *VocabularySuite) TestCheck(c *check.C) {
+ tests := []struct {
+ name string
+ strictVoc bool
+ props string
+ expectSuccess bool
+ errMatches string
+ }{
+ // Check succeeds
+ {
+ "Known key, known value",
+ false,
+ `{"IDTAGANIMALS":"IDVALANIMAL1"}`,
+ true,
+ "",
+ },
+ {
+ "Unknown non-alias key on non-strict vocabulary",
+ false,
+ `{"foo":"bar"}`,
+ true,
+ "",
+ },
+ {
+ "Known non-strict key, unknown non-alias value",
+ false,
+ `{"IDTAGANIMALS":"IDVALANIMAL3"}`,
+ true,
+ "",
+ },
+ {
+ "Undefined but reserved key on strict vocabulary",
+ true,
+ `{"reservedKey":"bar"}`,
+ true,
+ "",
+ },
+ {
+ "Known key, list of known values",
+ false,
+ `{"IDTAGANIMALS":["IDVALANIMAL1","IDVALANIMAL2"]}`,
+ true,
+ "",
+ },
+ {
+ "Known non-strict key, list of unknown non-alias values",
+ false,
+ `{"IDTAGCOMMENT":["hello world","lorem ipsum"]}`,
+ true,
+ "",
+ },
+ // Check fails
+ {
+ "Known first key & value; known 2nd key, unknown 2nd value",
+ false,
+ `{"IDTAGANIMALS":"IDVALANIMAL1", "IDTAGIMPORTANCE": "blah blah"}`,
+ false,
+ "tag value.*is not valid for key.*",
+ },
+ {
+ "Unknown non-alias key on strict vocabulary",
+ true,
+ `{"foo":"bar"}`,
+ false,
+ "tag key.*is not defined in the vocabulary",
+ },
+ {
+ "Known non-strict key, known value alias",
+ false,
+ `{"IDTAGANIMALS":"Loxodonta"}`,
+ false,
+ "tag value.*for key.* is an alias, must be provided as.*",
+ },
+ {
+ "Known strict key, unknown non-alias value",
+ false,
+ `{"IDTAGIMPORTANCE":"Unimportant"}`,
+ false,
+ "tag value.*is not valid for key.*",
+ },
+ {
+ "Known strict key, lowercase value regarded as alias",
+ false,
+ `{"IDTAGIMPORTANCE":"idval1"}`,
+ false,
+ "tag value.*for key.* is an alias, must be provided as.*",
+ },
+ {
+ "Known strict key, known value alias",
+ false,
+ `{"IDTAGIMPORTANCE":"High"}`,
+ false,
+ "tag value.* for key.*is an alias, must be provided as.*",
+ },
+ {
+ "Known strict key, list of known alias values",
+ false,
+ `{"IDTAGIMPORTANCE":["High", "Low"]}`,
+ false,
+ "tag value.*for key.*is an alias, must be provided as.*",
+ },
+ {
+ "Known strict key, list of unknown non-alias values",
+ false,
+ `{"IDTAGIMPORTANCE":["foo","bar"]}`,
+ false,
+ "tag value.*is not valid for key.*",
+ },
+ {
+ "Invalid value type",
+ false,
+ `{"IDTAGANIMALS":1}`,
+ false,
+ "value type for tag key.* was.*, but expected a string or list of strings",
+ },
+ {
+ "Value list of invalid type",
+ false,
+ `{"IDTAGANIMALS":[1]}`,
+ false,
+ "value list element type for tag key.* was.*, but expected a string",
+ },
+ }
+ for _, tt := range tests {
+ c.Log(c.TestName()+" ", tt.name)
+ s.testVoc.StrictTags = tt.strictVoc
+
+ var data map[string]interface{}
+ err := json.Unmarshal([]byte(tt.props), &data)
+ c.Assert(err, check.IsNil)
+ err = s.testVoc.Check(data)
+ if tt.expectSuccess {
+ c.Assert(err, check.IsNil)
+ } else {
+ c.Assert(err, check.NotNil)
+ c.Assert(err.Error(), check.Matches, tt.errMatches)
+ }
+ }
+}
+
+func (s *VocabularySuite) TestNewVocabulary(c *check.C) {
+ tests := []struct {
+ name string
+ data string
+ isValid bool
+ errMatches string
+ expect *Vocabulary
+ }{
+ {"Empty data", "", true, "", &Vocabulary{}},
+ {"Invalid JSON", "foo", false, "invalid JSON format.*", nil},
+ {"Valid, empty JSON", "{}", false, ".*doesn't match Vocabulary format.*", nil},
+ {"Valid JSON, wrong data", `{"foo":"bar"}`, false, ".*doesn't match Vocabulary format.*", nil},
+ {
+ "Simple valid example",
+ `{"tags":{
+ "IDTAGANIMALS":{
+ "strict": false,
+ "labels": [{"label": "Animal"}, {"label": "Creature"}],
+ "values": {
+ "IDVALANIMAL1":{"labels":[{"label":"Human"}, {"label":"Homo sapiens"}]},
+ "IDVALANIMAL2":{"labels":[{"label":"Elephant"}, {"label":"Loxodonta"}]},
+ "DOG":{"labels":[{"label":"Dog"}, {"label":"Canis lupus familiaris"}, {"label":"dOg"}]}
+ }
+ }
+ }}`,
+ true, "",
+ &Vocabulary{
+ reservedTagKeys: map[string]bool{
+ "type": true,
+ "template_uuid": true,
+ "groups": true,
+ "username": true,
+ "image_timestamp": true,
+ "docker-image-repo-tag": true,
+ "filters": true,
+ "container_request": true,
+ },
+ StrictTags: false,
+ Tags: map[string]VocabularyTag{
+ "IDTAGANIMALS": {
+ Strict: false,
+ Labels: []VocabularyLabel{{Label: "Animal"}, {Label: "Creature"}},
+ Values: map[string]VocabularyTagValue{
+ "IDVALANIMAL1": {
+ Labels: []VocabularyLabel{{Label: "Human"}, {Label: "Homo sapiens"}},
+ },
+ "IDVALANIMAL2": {
+ Labels: []VocabularyLabel{{Label: "Elephant"}, {Label: "Loxodonta"}},
+ },
+ "DOG": {
+ Labels: []VocabularyLabel{{Label: "Dog"}, {Label: "Canis lupus familiaris"}, {Label: "dOg"}},
+ },
+ },
+ },
+ },
+ },
+ },
+ {
+ "Valid data, but uses reserved key",
+ `{"tags":{
+ "type":{
+ "strict": false,
+ "labels": [{"label": "Type"}]
+ }
+ }}`,
+ false, "tag key.*is reserved", nil,
+ },
+ }
+
+ for _, tt := range tests {
+ c.Log(c.TestName()+" ", tt.name)
+ voc, err := NewVocabulary([]byte(tt.data), []string{})
+ if tt.isValid {
+ c.Assert(err, check.IsNil)
+ } else {
+ c.Assert(err, check.NotNil)
+ if tt.errMatches != "" {
+ c.Assert(err, check.ErrorMatches, tt.errMatches)
+ }
+ }
+ c.Assert(voc, check.DeepEquals, tt.expect)
+ }
+}
+
+func (s *VocabularySuite) TestValidationErrors(c *check.C) {
+ tests := []struct {
+ name string
+ voc *Vocabulary
+ errMatches string
+ }{
+ {
+ "Strict vocabulary, no keys",
+ &Vocabulary{
+ StrictTags: true,
+ },
+ "vocabulary is strict but no tags are defined",
+ },
+ {
+ "Collision between tag key and tag key label",
+ &Vocabulary{
+ StrictTags: false,
+ Tags: map[string]VocabularyTag{
+ "IDTAGANIMALS": {
+ Strict: false,
+ Labels: []VocabularyLabel{{Label: "Animal"}, {Label: "Creature"}},
+ },
+ "IDTAGCOMMENT": {
+ Strict: false,
+ Labels: []VocabularyLabel{{Label: "Comment"}, {Label: "IDTAGANIMALS"}},
+ },
+ },
+ },
+			"", // Depending on map iteration order, this could be one of two errors
+ },
+ {
+ "Collision between tag key and tag key label (case-insensitive)",
+ &Vocabulary{
+ StrictTags: false,
+ Tags: map[string]VocabularyTag{
+ "IDTAGANIMALS": {
+ Strict: false,
+ Labels: []VocabularyLabel{{Label: "Animal"}, {Label: "Creature"}},
+ },
+ "IDTAGCOMMENT": {
+ Strict: false,
+ Labels: []VocabularyLabel{{Label: "Comment"}, {Label: "IdTagAnimals"}},
+ },
+ },
+ },
+			"", // Depending on map iteration order, this could be one of two errors
+ },
+ {
+ "Collision between tag key labels",
+ &Vocabulary{
+ StrictTags: false,
+ Tags: map[string]VocabularyTag{
+ "IDTAGANIMALS": {
+ Strict: false,
+ Labels: []VocabularyLabel{{Label: "Animal"}, {Label: "Creature"}},
+ },
+ "IDTAGCOMMENT": {
+ Strict: false,
+ Labels: []VocabularyLabel{{Label: "Comment"}, {Label: "Animal"}},
+ },
+ },
+ },
+ "tag label.*for key.*already seen.*",
+ },
+ {
+ "Collision between tag value and tag value label",
+ &Vocabulary{
+ StrictTags: false,
+ Tags: map[string]VocabularyTag{
+ "IDTAGANIMALS": {
+ Strict: false,
+ Labels: []VocabularyLabel{{Label: "Animal"}, {Label: "Creature"}},
+ Values: map[string]VocabularyTagValue{
+ "IDVALANIMAL1": {
+ Labels: []VocabularyLabel{{Label: "Human"}, {Label: "Mammal"}},
+ },
+ "IDVALANIMAL2": {
+ Labels: []VocabularyLabel{{Label: "Elephant"}, {Label: "IDVALANIMAL1"}},
+ },
+ },
+ },
+ },
+ },
+			"", // Depending on map iteration order, this could be one of two errors
+ },
+ {
+ "Collision between tag value and tag value label (case-insensitive)",
+ &Vocabulary{
+ StrictTags: false,
+ Tags: map[string]VocabularyTag{
+ "IDTAGANIMALS": {
+ Strict: false,
+ Labels: []VocabularyLabel{{Label: "Animal"}, {Label: "Creature"}},
+ Values: map[string]VocabularyTagValue{
+ "IDVALANIMAL1": {
+ Labels: []VocabularyLabel{{Label: "Human"}, {Label: "Mammal"}},
+ },
+ "IDVALANIMAL2": {
+ Labels: []VocabularyLabel{{Label: "Elephant"}, {Label: "IDValAnimal1"}},
+ },
+ },
+ },
+ },
+ },
+			"", // Depending on map iteration order, this could be one of two errors
+ },
+ {
+ "Collision between tag value labels",
+ &Vocabulary{
+ StrictTags: false,
+ Tags: map[string]VocabularyTag{
+ "IDTAGANIMALS": {
+ Strict: false,
+ Labels: []VocabularyLabel{{Label: "Animal"}, {Label: "Creature"}},
+ Values: map[string]VocabularyTagValue{
+ "IDVALANIMAL1": {
+ Labels: []VocabularyLabel{{Label: "Human"}, {Label: "Mammal"}},
+ },
+ "IDVALANIMAL2": {
+ Labels: []VocabularyLabel{{Label: "Elephant"}, {Label: "Mammal"}},
+ },
+ },
+ },
+ },
+ },
+ "tag value label.*for pair.*already seen.*on value.*",
+ },
+ {
+ "Collision between tag value labels (case-insensitive)",
+ &Vocabulary{
+ StrictTags: false,
+ Tags: map[string]VocabularyTag{
+ "IDTAGANIMALS": {
+ Strict: false,
+ Labels: []VocabularyLabel{{Label: "Animal"}, {Label: "Creature"}},
+ Values: map[string]VocabularyTagValue{
+ "IDVALANIMAL1": {
+ Labels: []VocabularyLabel{{Label: "Human"}, {Label: "Mammal"}},
+ },
+ "IDVALANIMAL2": {
+ Labels: []VocabularyLabel{{Label: "Elephant"}, {Label: "mAMMAL"}},
+ },
+ },
+ },
+ },
+ },
+ "tag value label.*for pair.*already seen.*on value.*",
+ },
+ {
+ "Strict tag key, with no values",
+ &Vocabulary{
+ StrictTags: false,
+ Tags: map[string]VocabularyTag{
+ "IDTAGANIMALS": {
+ Strict: true,
+ Labels: []VocabularyLabel{{Label: "Animal"}, {Label: "Creature"}},
+ },
+ },
+ },
+ "tag key.*is configured as strict but doesn't provide values",
+ },
+ }
+ for _, tt := range tests {
+ c.Log(c.TestName()+" ", tt.name)
+ err := tt.voc.validate()
+ c.Assert(err, check.NotNil)
+ if tt.errMatches != "" {
+ c.Assert(err, check.ErrorMatches, tt.errMatches)
+ }
+ }
+}
data, err := ioutil.ReadFile(file)
if err != nil {
if !os.IsNotExist(err) {
- log.Printf("error reading %q: %s", file, err)
+ log.Printf("proceeding without loading cert file %q: %s", file, err)
}
continue
}
if scheme == "" {
scheme = "https"
}
+	if c.ApiServer == "" {
+		return nil, fmt.Errorf("Arvados client is not configured (target API host is not set); try setting the ARVADOS_API_HOST environment variable")
+	}
u := url.URL{
Scheme: scheme,
Host: c.ApiServer}
return value, ErrInvalidArgument
}
+// ClusterConfig returns the value of the given key in the current cluster's
+// exported config. If key is an empty string, it'll return the entire config.
+func (c *ArvadosClient) ClusterConfig(key string) (config interface{}, err error) {
+ var clusterConfig interface{}
+ err = c.Call("GET", "config", "", "", nil, &clusterConfig)
+ if err != nil {
+ return nil, err
+ }
+ if key == "" {
+ return clusterConfig, nil
+ }
+ configData, ok := clusterConfig.(map[string]interface{})[key]
+ if !ok {
+ return nil, ErrInvalidArgument
+ }
+ return configData, nil
+}
+
func (c *ArvadosClient) httpClient() *http.Client {
if c.Client != nil {
return c.Client
"net/http"
"os"
"testing"
+ "time"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/arvadostest"
. "gopkg.in/check.v1"
)
type ServerRequiredSuite struct{}
func (s *ServerRequiredSuite) SetUpSuite(c *C) {
- arvadostest.StartAPI()
arvadostest.StartKeep(2, false)
RetryDelay = 0
}
func (s *ServerRequiredSuite) TearDownSuite(c *C) {
arvadostest.StopKeep(2)
- arvadostest.StopAPI()
}
func (s *ServerRequiredSuite) SetUpTest(c *C) {
c.Assert(value, IsNil)
}
+func (s *ServerRequiredSuite) TestAPIClusterConfig_Get_StorageClasses(c *C) {
+ arv, err := MakeArvadosClient()
+ c.Assert(err, IsNil)
+ data, err := arv.ClusterConfig("StorageClasses")
+ c.Assert(err, IsNil)
+ c.Assert(data, NotNil)
+ clusterConfig := data.(map[string]interface{})
+ _, ok := clusterConfig["default"]
+ c.Assert(ok, Equals, true)
+}
+
+func (s *ServerRequiredSuite) TestAPIClusterConfig_Get_All(c *C) {
+ arv, err := MakeArvadosClient()
+ c.Assert(err, IsNil)
+ data, err := arv.ClusterConfig("")
+ c.Assert(err, IsNil)
+ c.Assert(data, NotNil)
+ clusterConfig := data.(map[string]interface{})
+ _, ok := clusterConfig["StorageClasses"]
+ c.Assert(ok, Equals, true)
+}
+
+func (s *ServerRequiredSuite) TestAPIClusterConfig_Get_noSuchSection(c *C) {
+ arv, err := MakeArvadosClient()
+ c.Assert(err, IsNil)
+ data, err := arv.ClusterConfig("noSuchSection")
+ c.Assert(err, NotNil)
+ c.Assert(data, IsNil)
+}
+
+func (s *ServerRequiredSuite) TestCreateLarge(c *C) {
+ arv, err := MakeArvadosClient()
+ c.Assert(err, IsNil)
+
+ txt := arvados.SignLocator("d41d8cd98f00b204e9800998ecf8427e+0", arv.ApiToken, time.Now().Add(time.Minute), time.Minute, []byte(arvadostest.SystemRootToken))
+ // Ensure our request body is bigger than the Go http server's
+ // default max size, 10 MB.
+ for len(txt) < 12000000 {
+ txt = txt + " " + txt
+ }
+ txt = ". " + txt + " 0:0:foo\n"
+
+ resp := Dict{}
+ err = arv.Create("collections", Dict{
+ "ensure_unique_name": true,
+ "collection": Dict{
+ "is_trashed": true,
+ "name": "test",
+ "manifest_text": txt,
+ },
+ }, &resp)
+ c.Check(err, IsNil)
+ c.Check(resp["portable_data_hash"], Not(Equals), "")
+ c.Check(resp["portable_data_hash"], Not(Equals), "d41d8cd98f00b204e9800998ecf8427e+0")
+}
+
type UnitSuite struct{}
func (s *UnitSuite) TestUUIDMatch(c *C) {
as.appendCall(ctx, as.ConfigGet, nil)
return nil, as.Error
}
+func (as *APIStub) VocabularyGet(ctx context.Context) (arvados.Vocabulary, error) {
+ as.appendCall(ctx, as.VocabularyGet, nil)
+ return arvados.Vocabulary{}, as.Error
+}
func (as *APIStub) Login(ctx context.Context, options arvados.LoginOptions) (arvados.LoginResponse, error) {
as.appendCall(ctx, as.Login, options)
return arvados.LoginResponse{}, as.Error
as.appendCall(ctx, as.ContainerRequestDelete, options)
return arvados.ContainerRequest{}, as.Error
}
+func (as *APIStub) GroupCreate(ctx context.Context, options arvados.CreateOptions) (arvados.Group, error) {
+ as.appendCall(ctx, as.GroupCreate, options)
+ return arvados.Group{}, as.Error
+}
+func (as *APIStub) GroupUpdate(ctx context.Context, options arvados.UpdateOptions) (arvados.Group, error) {
+ as.appendCall(ctx, as.GroupUpdate, options)
+ return arvados.Group{}, as.Error
+}
+func (as *APIStub) GroupGet(ctx context.Context, options arvados.GetOptions) (arvados.Group, error) {
+ as.appendCall(ctx, as.GroupGet, options)
+ return arvados.Group{}, as.Error
+}
+func (as *APIStub) GroupList(ctx context.Context, options arvados.ListOptions) (arvados.GroupList, error) {
+ as.appendCall(ctx, as.GroupList, options)
+ return arvados.GroupList{}, as.Error
+}
+func (as *APIStub) GroupContents(ctx context.Context, options arvados.GroupContentsOptions) (arvados.ObjectList, error) {
+ as.appendCall(ctx, as.GroupContents, options)
+ return arvados.ObjectList{}, as.Error
+}
+func (as *APIStub) GroupShared(ctx context.Context, options arvados.ListOptions) (arvados.GroupList, error) {
+ as.appendCall(ctx, as.GroupShared, options)
+ return arvados.GroupList{}, as.Error
+}
+func (as *APIStub) GroupDelete(ctx context.Context, options arvados.DeleteOptions) (arvados.Group, error) {
+ as.appendCall(ctx, as.GroupDelete, options)
+ return arvados.Group{}, as.Error
+}
+func (as *APIStub) GroupTrash(ctx context.Context, options arvados.DeleteOptions) (arvados.Group, error) {
+ as.appendCall(ctx, as.GroupTrash, options)
+ return arvados.Group{}, as.Error
+}
+func (as *APIStub) GroupUntrash(ctx context.Context, options arvados.UntrashOptions) (arvados.Group, error) {
+ as.appendCall(ctx, as.GroupUntrash, options)
+ return arvados.Group{}, as.Error
+}
+func (as *APIStub) LinkCreate(ctx context.Context, options arvados.CreateOptions) (arvados.Link, error) {
+ as.appendCall(ctx, as.LinkCreate, options)
+ return arvados.Link{}, as.Error
+}
+func (as *APIStub) LinkUpdate(ctx context.Context, options arvados.UpdateOptions) (arvados.Link, error) {
+ as.appendCall(ctx, as.LinkUpdate, options)
+ return arvados.Link{}, as.Error
+}
+func (as *APIStub) LinkGet(ctx context.Context, options arvados.GetOptions) (arvados.Link, error) {
+ as.appendCall(ctx, as.LinkGet, options)
+ return arvados.Link{}, as.Error
+}
+func (as *APIStub) LinkList(ctx context.Context, options arvados.ListOptions) (arvados.LinkList, error) {
+ as.appendCall(ctx, as.LinkList, options)
+ return arvados.LinkList{}, as.Error
+}
+func (as *APIStub) LinkDelete(ctx context.Context, options arvados.DeleteOptions) (arvados.Link, error) {
+ as.appendCall(ctx, as.LinkDelete, options)
+ return arvados.Link{}, as.Error
+}
func (as *APIStub) SpecimenCreate(ctx context.Context, options arvados.CreateOptions) (arvados.Specimen, error) {
as.appendCall(ctx, as.SpecimenCreate, options)
return arvados.Specimen{}, as.Error
as.appendCall(ctx, as.SpecimenDelete, options)
return arvados.Specimen{}, as.Error
}
+func (as *APIStub) SysTrashSweep(ctx context.Context, options struct{}) (struct{}, error) {
+ as.appendCall(ctx, as.SysTrashSweep, options)
+ return struct{}{}, as.Error
+}
func (as *APIStub) UserCreate(ctx context.Context, options arvados.CreateOptions) (arvados.User, error) {
as.appendCall(ctx, as.UserCreate, options)
return arvados.User{}, as.Error
as.appendCall(ctx, as.UserUpdate, options)
return arvados.User{}, as.Error
}
-func (as *APIStub) UserUpdateUUID(ctx context.Context, options arvados.UpdateUUIDOptions) (arvados.User, error) {
- as.appendCall(ctx, as.UserUpdateUUID, options)
- return arvados.User{}, as.Error
-}
func (as *APIStub) UserActivate(ctx context.Context, options arvados.UserActivateOptions) (arvados.User, error) {
as.appendCall(ctx, as.UserActivate, options)
return arvados.User{}, as.Error
as.appendCall(ctx, as.APIClientAuthorizationCurrent, options)
return arvados.APIClientAuthorization{}, as.Error
}
+func (as *APIStub) APIClientAuthorizationCreate(ctx context.Context, options arvados.CreateOptions) (arvados.APIClientAuthorization, error) {
+ as.appendCall(ctx, as.APIClientAuthorizationCreate, options)
+ return arvados.APIClientAuthorization{}, as.Error
+}
+func (as *APIStub) APIClientAuthorizationUpdate(ctx context.Context, options arvados.UpdateOptions) (arvados.APIClientAuthorization, error) {
+ as.appendCall(ctx, as.APIClientAuthorizationUpdate, options)
+ return arvados.APIClientAuthorization{}, as.Error
+}
+func (as *APIStub) APIClientAuthorizationDelete(ctx context.Context, options arvados.DeleteOptions) (arvados.APIClientAuthorization, error) {
+ as.appendCall(ctx, as.APIClientAuthorizationDelete, options)
+ return arvados.APIClientAuthorization{}, as.Error
+}
+func (as *APIStub) APIClientAuthorizationList(ctx context.Context, options arvados.ListOptions) (arvados.APIClientAuthorizationList, error) {
+ as.appendCall(ctx, as.APIClientAuthorizationList, options)
+ return arvados.APIClientAuthorizationList{}, as.Error
+}
+func (as *APIStub) APIClientAuthorizationGet(ctx context.Context, options arvados.GetOptions) (arvados.APIClientAuthorization, error) {
+ as.appendCall(ctx, as.APIClientAuthorizationGet, options)
+ return arvados.APIClientAuthorization{}, as.Error
+}
func (as *APIStub) appendCall(ctx context.Context, method interface{}, options interface{}) {
as.mtx.Lock()
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: Apache-2.0
+
+package arvadostest
+
+import "git.arvados.org/arvados.git/sdk/go/arvados"
+
+// Test that *APIStub implements arvados.API
+var _ arvados.API = &APIStub{}
ActiveToken = "3kg6k6lzmp9kj5cpkcoxie963cmvjahbt2fod9zru30k1jqdmi"
ActiveTokenUUID = "zzzzz-gj3su-077z32aux8dg2s1"
ActiveTokenV2 = "v2/zzzzz-gj3su-077z32aux8dg2s1/3kg6k6lzmp9kj5cpkcoxie963cmvjahbt2fod9zru30k1jqdmi"
+ AdminUserUUID = "zzzzz-tpzed-d9tiejq69daie8f"
AdminToken = "4axaw8zxe0qm22wa6urpp5nskcne8z88cvbupv653y1njyi05h"
AdminTokenUUID = "zzzzz-gj3su-027z32aux8dg2s1"
AnonymousToken = "4kg6k6lzmp9kj4cpkcoxie964cmvjahbt4fod9zru44k4jqdmi"
UserAgreementPDH = "b519d9cb706a29fc7ea24dbea2f05851+93"
HelloWorldPdh = "55713e6a34081eb03609e7ad5fcad129+62"
+ MultilevelCollection1 = "zzzzz-4zz18-pyw8yp9g3pr7irn"
+ StorageClassesDesiredDefaultConfirmedDefault = "zzzzz-4zz18-3t236wr12769tga"
+ StorageClassesDesiredArchiveConfirmedDefault = "zzzzz-4zz18-3t236wr12769qqa"
+ EmptyCollectionUUID = "zzzzz-4zz18-gs9ooj1h9sd5mde"
+
AProjectUUID = "zzzzz-j7d0g-v955i6s2oi1cbso"
ASubprojectUUID = "zzzzz-j7d0g-axqo7eu9pwvna1x"
QueuedContainerRequestUUID = "zzzzz-xvhdp-cr4queuedcontnr"
QueuedContainerUUID = "zzzzz-dz642-queuedcontainer"
+ LockedContainerUUID = "zzzzz-dz642-lockedcontainer"
+
RunningContainerUUID = "zzzzz-dz642-runningcontainr"
CompletedContainerUUID = "zzzzz-dz642-compltcontainer"
CompletedDiagnosticsHasher2ContainerUUID = "zzzzz-dz642-diagcomphasher2"
CompletedDiagnosticsHasher3ContainerUUID = "zzzzz-dz642-diagcomphasher3"
+ UncommittedContainerRequestUUID = "zzzzz-xvhdp-cr4uncommittedc"
+
Hasher1LogCollectionUUID = "zzzzz-4zz18-dlogcollhash001"
Hasher2LogCollectionUUID = "zzzzz-4zz18-dlogcollhash002"
Hasher3LogCollectionUUID = "zzzzz-4zz18-dlogcollhash003"
LogCollectionUUID = "zzzzz-4zz18-logcollection01"
LogCollectionUUID2 = "zzzzz-4zz18-logcollection02"
+
+ DockerImage112PDH = "d740a57097711e08eb9b2a93518f20ab+174"
+ DockerImage112Filename = "sha256:d8309758b8fe2c81034ffc8a10c36460b77db7bc5e7b448c4e5b684f9d95a678.tar"
)
// PathologicalManifest : A valid manifest designed to test
"gopkg.in/check.v1"
"gopkg.in/square/go-jose.v2"
+ "gopkg.in/square/go-jose.v2/jwt"
)
type OIDCProvider struct {
ValidClientID string
ValidClientSecret string
// desired response from token endpoint
- AuthEmail string
- AuthEmailVerified bool
- AuthName string
+ AuthEmail string
+ AuthEmailVerified bool
+ AuthName string
+ AuthGivenName string
+ AuthFamilyName string
+ AccessTokenPayload map[string]interface{}
PeopleAPIResponse map[string]interface{}
c.Assert(err, check.IsNil)
p.Issuer = httptest.NewServer(http.HandlerFunc(p.serveOIDC))
p.PeopleAPI = httptest.NewServer(http.HandlerFunc(p.servePeopleAPI))
+ p.AccessTokenPayload = map[string]interface{}{"sub": "example"}
return p
}
func (p *OIDCProvider) ValidAccessToken() string {
- return p.fakeToken([]byte("fake access token"))
+ buf, _ := json.Marshal(p.AccessTokenPayload)
+ return p.fakeToken(buf)
}
func (p *OIDCProvider) serveOIDC(w http.ResponseWriter, req *http.Request) {
"email": p.AuthEmail,
"email_verified": p.AuthEmailVerified,
"name": p.AuthName,
+ "given_name": p.AuthGivenName,
+ "family_name": p.AuthFamilyName,
"alt_verified": true, // for custom claim tests
"alt_email": "alt_email@example.com", // for custom claim tests
"alt_username": "desired-username", // for custom claim tests
case "/auth":
w.WriteHeader(http.StatusInternalServerError)
case "/userinfo":
- if authhdr := req.Header.Get("Authorization"); strings.TrimPrefix(authhdr, "Bearer ") != p.ValidAccessToken() {
+ authhdr := req.Header.Get("Authorization")
+ if _, err := jwt.ParseSigned(strings.TrimPrefix(authhdr, "Bearer ")); err != nil {
p.c.Logf("OIDCProvider: bad auth %q", authhdr)
w.WriteHeader(http.StatusUnauthorized)
return
json.NewEncoder(w).Encode(map[string]interface{}{
"sub": "fake-user-id",
"name": p.AuthName,
- "given_name": p.AuthName,
- "family_name": "",
+ "given_name": p.AuthGivenName,
+ "family_name": p.AuthFamilyName,
"alt_username": "desired-username",
"email": p.AuthEmail,
"email_verified": p.AuthEmailVerified,
package arvadostest
import (
- "bufio"
- "bytes"
+ "crypto/tls"
"fmt"
"io/ioutil"
"log"
+ "net/http"
"os"
"os/exec"
"path"
"strconv"
"strings"
+
+ "gopkg.in/check.v1"
)
var authSettings = make(map[string]string)
-// ResetEnv resets test env
+// ResetEnv resets ARVADOS_* env vars to whatever they were the first
+// time this func was called.
+//
+// Call it from your SetUpTest or SetUpSuite func if your tests modify
+// env vars.
func ResetEnv() {
- for k, v := range authSettings {
- os.Setenv(k, v)
- }
-}
-
-// APIHost returns the address:port of the current test server.
-func APIHost() string {
- h := authSettings["ARVADOS_API_HOST"]
- if h == "" {
- log.Fatal("arvadostest.APIHost() was called but authSettings is not populated")
- }
- return h
-}
-
-// ParseAuthSettings parses auth settings from given input
-func ParseAuthSettings(authScript []byte) {
- scanner := bufio.NewScanner(bytes.NewReader(authScript))
- for scanner.Scan() {
- line := scanner.Text()
- if 0 != strings.Index(line, "export ") {
- log.Printf("Ignoring: %v", line)
- continue
+ if len(authSettings) == 0 {
+ for _, e := range os.Environ() {
+ e := strings.SplitN(e, "=", 2)
+ if len(e) == 2 {
+ authSettings[e[0]] = e[1]
+ }
}
- toks := strings.SplitN(strings.Replace(line, "export ", "", 1), "=", 2)
- if len(toks) == 2 {
- authSettings[toks[0]] = toks[1]
- } else {
- log.Fatalf("Could not parse: %v", line)
+ } else {
+ for k, v := range authSettings {
+ os.Setenv(k, v)
}
}
- log.Printf("authSettings: %v", authSettings)
}
var pythonTestDir string
}
}
-// StartAPI starts test API server
-func StartAPI() {
- cwd, _ := os.Getwd()
- defer os.Chdir(cwd)
- chdirToPythonTests()
-
- cmd := exec.Command("python", "run_test_server.py", "start", "--auth", "admin")
- cmd.Stdin = nil
- cmd.Stderr = os.Stderr
-
- authScript, err := cmd.Output()
- if err != nil {
- log.Fatalf("%+v: %s", cmd.Args, err)
- }
- ParseAuthSettings(authScript)
- ResetEnv()
-}
-
-// StopAPI stops test API server
-func StopAPI() {
- cwd, _ := os.Getwd()
- defer os.Chdir(cwd)
- chdirToPythonTests()
-
- cmd := exec.Command("python", "run_test_server.py", "stop")
- bgRun(cmd)
- // Without Wait, "go test" in go1.10.1 tends to hang. https://github.com/golang/go/issues/24050
- cmd.Wait()
+// ResetDB resets the test cluster's database to its fixture state by
+// calling the database/reset API endpoint with the admin token.
+func ResetDB(c *check.C) {
+ hc := http.Client{Transport: &http.Transport{
+ TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
+ }}
+ req, err := http.NewRequest("POST", "https://"+os.Getenv("ARVADOS_TEST_API_HOST")+"/database/reset", nil)
+ c.Assert(err, check.IsNil)
+ req.Header.Set("Authorization", "Bearer "+AdminToken)
+ resp, err := hc.Do(req)
+ c.Assert(err, check.IsNil)
+ defer resp.Body.Close()
+ c.Check(resp.StatusCode, check.Equals, http.StatusOK)
}
// StartKeep starts the given number of keep servers,
package dispatch
import (
+ "bytes"
"context"
"fmt"
"sync"
"time"
+ "git.arvados.org/arvados.git/lib/dispatchcloud"
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/arvadosclient"
"github.com/sirupsen/logrus"
// running, and return.
//
// The DispatchFunc should not return until the container is
// finished. If it returns a non-nil error, the dispatcher logs the
// error and unlocks the container (cancelling it first if its
// scheduling constraints cannot be satisfied).
-type DispatchFunc func(*Dispatcher, arvados.Container, <-chan arvados.Container)
+type DispatchFunc func(*Dispatcher, arvados.Container, <-chan arvados.Container) error
// Run watches the API server's queue for containers that are either
// ready to run and available to lock, or are already locked by this
}
tracker.updates <- c
go func() {
- d.RunContainer(d, c, tracker.updates)
- // RunContainer blocks for the lifetime of the container. When
- // it returns, the tracker should delete itself.
+ err := d.RunContainer(d, c, tracker.updates)
+ if err != nil {
+ text := fmt.Sprintf("Error running container %s: %s", c.UUID, err)
+ if err, ok := err.(dispatchcloud.ConstraintsNotSatisfiableError); ok {
+ var logBuf bytes.Buffer
+ fmt.Fprintf(&logBuf, "cannot run container %s: %s\n", c.UUID, err)
+ if len(err.AvailableTypes) == 0 {
+ fmt.Fprint(&logBuf, "No instance types are configured.\n")
+ } else {
+ fmt.Fprint(&logBuf, "Available instance types:\n")
+ for _, t := range err.AvailableTypes {
+ fmt.Fprintf(&logBuf,
+ "Type %q: %d VCPUs, %d RAM, %d Scratch, %f Price\n",
+ t.Name, t.VCPUs, t.RAM, t.Scratch, t.Price)
+ }
+ }
+ text = logBuf.String()
+ d.UpdateState(c.UUID, Cancelled)
+ }
+ d.Logger.Printf("%s", text)
+ lr := arvadosclient.Dict{"log": arvadosclient.Dict{
+ "object_uuid": c.UUID,
+ "event_type": "dispatch",
+ "properties": map[string]string{"text": text}}}
+ d.Arv.Create("logs", lr, nil)
+ d.Unlock(c.UUID)
+ }
+
d.mtx.Lock()
delete(d.trackers, c.UUID)
d.mtx.Unlock()
type suite struct{}
-func (s *suite) SetUpSuite(c *C) {
- arvadostest.StartAPI()
-}
-
-func (s *suite) TearDownSuite(c *C) {
- arvadostest.StopAPI()
-}
-
func (s *suite) TestTrackContainer(c *C) {
arv, err := arvadosclient.MakeArvadosClient()
c.Assert(err, Equals, nil)
time.AfterFunc(10*time.Second, func() { done <- false })
d := &Dispatcher{
Arv: arv,
- RunContainer: func(dsp *Dispatcher, ctr arvados.Container, status <-chan arvados.Container) {
+ RunContainer: func(dsp *Dispatcher, ctr arvados.Container, status <-chan arvados.Container) error {
for ctr := range status {
c.Logf("%#v", ctr)
}
done <- true
+ return nil
},
}
d.TrackContainer(arvadostest.QueuedContainerUUID)
for _, svc := range []*arvados.Service{
&svcs.Controller,
&svcs.DispatchCloud,
+ &svcs.DispatchLSF,
&svcs.Keepbalance,
&svcs.Keepproxy,
&svcs.Keepstore,
}
req.Header.Set(HeaderRequestID, gen.Next())
}
+		w.Header().Set(HeaderRequestID, req.Header.Get(HeaderRequestID))
h.ServeHTTP(w, req)
})
}
// Copyright (C) The Arvados Authors. All rights reserved.
//
-// SPDX-License-Identifier: AGPL-3.0
+// SPDX-License-Identifier: Apache-2.0
package httpserver
import (
+ "bufio"
"context"
+ "net"
"net/http"
"time"
requestTimeContextKey = contextKey{"requestTime"}
)
-// HandlerWithContext returns an http.Handler that changes the request
-// context to ctx (replacing http.Server's default
-// context.Background()), then calls next.
-func HandlerWithContext(ctx context.Context, next http.Handler) http.Handler {
+type hijacker interface {
+ http.ResponseWriter
+ http.Hijacker
+}
+
+// hijackNotifier wraps a ResponseWriter, calling the provided
+// Notify() func if/when the wrapped Hijacker is hijacked.
+type hijackNotifier struct {
+ hijacker
+ hijacked chan<- bool
+}
+
+func (hn hijackNotifier) Hijack() (net.Conn, *bufio.ReadWriter, error) {
+ close(hn.hijacked)
+ return hn.hijacker.Hijack()
+}
+
+// HandlerWithDeadline cancels the request context if the request
+// takes longer than the specified timeout without having its
+// connection hijacked.
+func HandlerWithDeadline(timeout time.Duration, next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ ctx, cancel := context.WithCancel(r.Context())
+ defer cancel()
+ nodeadline := make(chan bool)
+ go func() {
+ select {
+ case <-nodeadline:
+ case <-ctx.Done():
+ case <-time.After(timeout):
+ cancel()
+ }
+ }()
+ if hj, ok := w.(hijacker); ok {
+ w = hijackNotifier{hj, nodeadline}
+ }
next.ServeHTTP(w, r.WithContext(ctx))
})
}
// Copyright (C) The Arvados Authors. All rights reserved.
//
-// SPDX-License-Identifier: AGPL-3.0
+// SPDX-License-Identifier: Apache-2.0
package httpserver
"context"
"encoding/json"
"fmt"
+ "io/ioutil"
+ "net"
"net/http"
"net/http/httptest"
"testing"
s.ctx = ctxlog.Context(context.Background(), s.log)
}
+func (s *Suite) TestWithDeadline(c *check.C) {
+ req, err := http.NewRequest("GET", "https://foo.example/bar", nil)
+ c.Assert(err, check.IsNil)
+
+ // Short timeout cancels context in <1s
+ resp := httptest.NewRecorder()
+ HandlerWithDeadline(time.Millisecond, http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
+ select {
+ case <-req.Context().Done():
+ w.Write([]byte("ok"))
+ case <-time.After(time.Second):
+ c.Error("timed out")
+ }
+ })).ServeHTTP(resp, req.WithContext(s.ctx))
+ c.Check(resp.Body.String(), check.Equals, "ok")
+
+ // Long timeout does not cancel context in <1ms
+ resp = httptest.NewRecorder()
+ HandlerWithDeadline(time.Second, http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
+ select {
+ case <-req.Context().Done():
+ c.Error("request context done too soon")
+ case <-time.After(time.Millisecond):
+ w.Write([]byte("ok"))
+ }
+ })).ServeHTTP(resp, req.WithContext(s.ctx))
+ c.Check(resp.Body.String(), check.Equals, "ok")
+}
+
+func (s *Suite) TestNoDeadlineAfterHijacked(c *check.C) {
+ srv := Server{
+ Addr: ":",
+ Server: http.Server{
+ Handler: HandlerWithDeadline(time.Millisecond, http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
+ conn, _, err := w.(http.Hijacker).Hijack()
+ c.Assert(err, check.IsNil)
+ defer conn.Close()
+ select {
+ case <-req.Context().Done():
+ c.Error("request context done too soon")
+ case <-time.After(time.Second / 10):
+ conn.Write([]byte("HTTP/1.1 200 OK\r\n\r\nok"))
+ }
+ })),
+ BaseContext: func(net.Listener) context.Context { return s.ctx },
+ },
+ }
+ srv.Start()
+ defer srv.Close()
+ resp, err := http.Get("http://" + srv.Addr)
+ c.Assert(err, check.IsNil)
+	body, err := ioutil.ReadAll(resp.Body)
+	c.Assert(err, check.IsNil)
+	c.Check(string(body), check.Equals, "ok")
+}
+
func (s *Suite) TestLogRequests(c *check.C) {
h := AddRequestIDs(LogRequests(
http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
c.Assert(err, check.IsNil)
resp := httptest.NewRecorder()
- HandlerWithContext(s.ctx, h).ServeHTTP(resp, req)
+ h.ServeHTTP(resp, req.WithContext(s.ctx))
dec := json.NewDecoder(s.logdata)
c.Assert(err, check.IsNil)
resp := httptest.NewRecorder()
- HandlerWithContext(s.ctx, LogRequests(
+ LogRequests(
http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
w.WriteHeader(trial.statusCode)
w.Write([]byte(trial.sentBody))
}),
- )).ServeHTTP(resp, req)
+ ).ServeHTTP(resp, req.WithContext(s.ctx))
gotReq := make(map[string]interface{})
err = dec.Decode(&gotReq)
return nil
}
+ if kc.Arvados.ApiServer == "" {
+		return fmt.Errorf("Arvados client is not configured (target API host is not set); try setting the ARVADOS_API_HOST environment variable")
+ }
+
svcListCacheMtx.Lock()
cacheEnt, ok := svcListCache[kc.Arvados.ApiServer]
if !ok {
import (
"bytes"
+ "context"
"crypto/md5"
"errors"
"fmt"
"sync"
"time"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/arvadosclient"
- "git.arvados.org/arvados.git/sdk/go/asyncbuf"
"git.arvados.org/arvados.git/sdk/go/httpserver"
)
multipleResponseError
}
-type InsufficientReplicasError error
+type InsufficientReplicasError struct{ error }
-type OversizeBlockError error
+type OversizeBlockError struct{ error }
-var ErrOversizeBlock = OversizeBlockError(errors.New("Exceeded maximum block size (" + strconv.Itoa(BLOCKSIZE) + ")"))
+var ErrOversizeBlock = OversizeBlockError{error: errors.New("Exceeded maximum block size (" + strconv.Itoa(BLOCKSIZE) + ")")}
var MissingArvadosApiHost = errors.New("Missing required environment variable ARVADOS_API_HOST")
var MissingArvadosApiToken = errors.New("Missing required environment variable ARVADOS_API_TOKEN")
var InvalidLocatorError = errors.New("Invalid locator")
// ErrIncompleteIndex is returned when the Index response does not end with a new empty line
var ErrIncompleteIndex = errors.New("Got incomplete index")
-const XKeepDesiredReplicas = "X-Keep-Desired-Replicas"
-const XKeepReplicasStored = "X-Keep-Replicas-Stored"
+const (
+ XKeepDesiredReplicas = "X-Keep-Desired-Replicas"
+ XKeepReplicasStored = "X-Keep-Replicas-Stored"
+ XKeepStorageClasses = "X-Keep-Storage-Classes"
+ XKeepStorageClassesConfirmed = "X-Keep-Storage-Classes-Confirmed"
+)
type HTTPClient interface {
Do(*http.Request) (*http.Response, error)
// KeepClient holds information about Arvados and Keep servers.
type KeepClient struct {
- Arvados *arvadosclient.ArvadosClient
- Want_replicas int
- localRoots map[string]string
- writableLocalRoots map[string]string
- gatewayRoots map[string]string
- lock sync.RWMutex
- HTTPClient HTTPClient
- Retries int
- BlockCache *BlockCache
- RequestID string
- StorageClasses []string
+ Arvados *arvadosclient.ArvadosClient
+ Want_replicas int
+ localRoots map[string]string
+ writableLocalRoots map[string]string
+ gatewayRoots map[string]string
+ lock sync.RWMutex
+ HTTPClient HTTPClient
+ Retries int
+ BlockCache *BlockCache
+ RequestID string
+ StorageClasses []string
+ DefaultStorageClasses []string // Set by cluster's exported config
// set to 1 if all writable services are of disk type, otherwise 0
replicasPerService int
disableDiscovery bool
}
-// MakeKeepClient creates a new KeepClient, calls
+func (kc *KeepClient) loadDefaultClasses() error {
+ scData, err := kc.Arvados.ClusterConfig("StorageClasses")
+ if err != nil {
+ return err
+ }
+	classes, ok := scData.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected type %T for StorageClasses cluster config", scData)
+	}
+ for scName := range classes {
+ scConf, _ := classes[scName].(map[string]interface{})
+ isDefault, ok := scConf["Default"].(bool)
+ if ok && isDefault {
+ kc.DefaultStorageClasses = append(kc.DefaultStorageClasses, scName)
+ }
+ }
+ return nil
+}
+
+// MakeKeepClient creates a new KeepClient, loads default storage classes, calls
// DiscoverKeepServices(), and returns when the client is ready to
// use.
func MakeKeepClient(arv *arvadosclient.ArvadosClient) (*KeepClient, error) {
defaultReplicationLevel = int(v)
}
}
- return &KeepClient{
+ kc := &KeepClient{
Arvados: arv,
Want_replicas: defaultReplicationLevel,
Retries: 2,
}
+ err = kc.loadDefaultClasses()
+ if err != nil {
+		DebugPrintf("DEBUG: Unable to load the default storage classes cluster config: %s", err)
+ }
+ return kc
}
// PutHR puts a block given the block hash, a reader, and the number of bytes
// Returns an InsufficientReplicasError if 0 <= replicas <
// kc.Want_replicas.
func (kc *KeepClient) PutHR(hash string, r io.Reader, dataBytes int64) (string, int, error) {
- // Buffer for reads from 'r'
- var bufsize int
- if dataBytes > 0 {
- if dataBytes > BLOCKSIZE {
- return "", 0, ErrOversizeBlock
- }
- bufsize = int(dataBytes)
- } else {
- bufsize = BLOCKSIZE
- }
-
- buf := asyncbuf.NewBuffer(make([]byte, 0, bufsize))
- go func() {
- _, err := io.Copy(buf, HashCheckingReader{r, md5.New(), hash})
- buf.CloseWithError(err)
- }()
- return kc.putReplicas(hash, buf.NewReader, dataBytes)
+ resp, err := kc.BlockWrite(context.Background(), arvados.BlockWriteOptions{
+ Hash: hash,
+ Reader: r,
+ DataSize: int(dataBytes),
+ })
+ return resp.Locator, resp.Replicas, err
}
// PutHB writes a block to Keep. The hash of the bytes is given in
//
// Return values are the same as for PutHR.
func (kc *KeepClient) PutHB(hash string, buf []byte) (string, int, error) {
- newReader := func() io.Reader { return bytes.NewBuffer(buf) }
- return kc.putReplicas(hash, newReader, int64(len(buf)))
+ resp, err := kc.BlockWrite(context.Background(), arvados.BlockWriteOptions{
+ Hash: hash,
+ Data: buf,
+ })
+ return resp.Locator, resp.Replicas, err
}
// PutB writes a block to Keep. It computes the hash itself.
//
// Return values are the same as for PutHR.
func (kc *KeepClient) PutB(buffer []byte) (string, int, error) {
- hash := fmt.Sprintf("%x", md5.Sum(buffer))
- return kc.PutHB(hash, buffer)
+ resp, err := kc.BlockWrite(context.Background(), arvados.BlockWriteOptions{
+ Data: buffer,
+ })
+ return resp.Locator, resp.Replicas, err
}
// PutR writes a block to Keep. It first reads all data from r into a buffer
kc.cache().Clear()
}
+func (kc *KeepClient) SetStorageClasses(sc []string) {
+ // make a copy so the caller can't mess with it.
+ kc.StorageClasses = append([]string{}, sc...)
+}
+
var (
// There are four global http.Client objects for the four
// possible permutations of TLS behavior (verify/skip-verify)
import (
"bytes"
+ "context"
"crypto/md5"
- "errors"
"fmt"
"io"
"io/ioutil"
"net/http"
"os"
"strings"
+ "sync"
"testing"
"time"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/arvadosclient"
"git.arvados.org/arvados.git/sdk/go/arvadostest"
. "gopkg.in/check.v1"
}
func (s *ServerRequiredSuite) SetUpSuite(c *C) {
- arvadostest.StartAPI()
arvadostest.StartKeep(2, false)
}
func (s *ServerRequiredSuite) TearDownSuite(c *C) {
arvadostest.StopKeep(2)
- arvadostest.StopAPI()
}
func (s *ServerRequiredSuite) SetUpTest(c *C) {
}
}
+func (s *ServerRequiredSuite) TestDefaultStorageClasses(c *C) {
+ arv, err := arvadosclient.MakeArvadosClient()
+ c.Assert(err, IsNil)
+
+ cc, err := arv.ClusterConfig("StorageClasses")
+ c.Assert(err, IsNil)
+ c.Assert(cc, NotNil)
+ c.Assert(cc.(map[string]interface{})["default"], NotNil)
+
+ kc := New(arv)
+ c.Assert(kc.DefaultStorageClasses, DeepEquals, []string{"default"})
+}
+
func (s *ServerRequiredSuite) TestDefaultReplications(c *C) {
arv, err := arvadosclient.MakeArvadosClient()
- c.Assert(err, Equals, nil)
+ c.Assert(err, IsNil)
kc, err := MakeKeepClient(arv)
c.Check(err, IsNil)
}
type StubPutHandler struct {
- c *C
- expectPath string
- expectAPIToken string
- expectBody string
- expectStorageClass string
- handled chan string
+ c *C
+ expectPath string
+ expectAPIToken string
+ expectBody string
+ expectStorageClass string
+ returnStorageClasses string
+ handled chan string
+ requests []*http.Request
+ mtx sync.Mutex
}
-func (sph StubPutHandler) ServeHTTP(resp http.ResponseWriter, req *http.Request) {
+func (sph *StubPutHandler) ServeHTTP(resp http.ResponseWriter, req *http.Request) {
+ sph.mtx.Lock()
+ sph.requests = append(sph.requests, req)
+ sph.mtx.Unlock()
sph.c.Check(req.URL.Path, Equals, "/"+sph.expectPath)
sph.c.Check(req.Header.Get("Authorization"), Equals, fmt.Sprintf("OAuth2 %s", sph.expectAPIToken))
- sph.c.Check(req.Header.Get("X-Keep-Storage-Classes"), Equals, sph.expectStorageClass)
+ if sph.expectStorageClass != "*" {
+ sph.c.Check(req.Header.Get("X-Keep-Storage-Classes"), Equals, sph.expectStorageClass)
+ }
body, err := ioutil.ReadAll(req.Body)
sph.c.Check(err, Equals, nil)
sph.c.Check(body, DeepEquals, []byte(sph.expectBody))
+ resp.Header().Set("X-Keep-Replicas-Stored", "1")
+ if sph.returnStorageClasses != "" {
+ resp.Header().Set("X-Keep-Storage-Classes-Confirmed", sph.returnStorageClasses)
+ }
resp.WriteHeader(200)
sph.handled <- fmt.Sprintf("http://%s", req.Host)
}
// bind to 0.0.0.0 or [::] which is not a valid address for Dial()
ks.listener, err = net.ListenTCP("tcp", &net.TCPAddr{IP: []byte{127, 0, 0, 1}, Port: 0})
if err != nil {
- panic(fmt.Sprintf("Could not listen on any port"))
+ panic("Could not listen on any port")
}
ks.url = fmt.Sprintf("http://%s", ks.listener.Addr().String())
go http.Serve(ks.listener, st)
func (s *StandaloneSuite) TestUploadToStubKeepServer(c *C) {
log.Printf("TestUploadToStubKeepServer")
- st := StubPutHandler{
- c,
- "acbd18db4cc2f85cedef654fccc4a4d8",
- "abc123",
- "foo",
- "hot",
- make(chan string)}
+ st := &StubPutHandler{
+ c: c,
+ expectPath: "acbd18db4cc2f85cedef654fccc4a4d8",
+ expectAPIToken: "abc123",
+ expectBody: "foo",
+ expectStorageClass: "",
+ returnStorageClasses: "default=1",
+ handled: make(chan string),
+ }
UploadToStubHelper(c, st,
func(kc *KeepClient, url string, reader io.ReadCloser, writer io.WriteCloser, uploadStatusChan chan uploadStatus) {
- kc.StorageClasses = []string{"hot"}
- go kc.uploadToKeepServer(url, st.expectPath, reader, uploadStatusChan, int64(len("foo")), kc.getRequestID())
+ go kc.uploadToKeepServer(url, st.expectPath, nil, reader, uploadStatusChan, len("foo"), kc.getRequestID())
writer.Write([]byte("foo"))
writer.Close()
<-st.handled
status := <-uploadStatusChan
- c.Check(status, DeepEquals, uploadStatus{nil, fmt.Sprintf("%s/%s", url, st.expectPath), 200, 1, ""})
+ c.Check(status, DeepEquals, uploadStatus{nil, fmt.Sprintf("%s/%s", url, st.expectPath), 200, 1, map[string]int{"default": 1}, ""})
})
}
func (s *StandaloneSuite) TestUploadToStubKeepServerBufferReader(c *C) {
- st := StubPutHandler{
- c,
- "acbd18db4cc2f85cedef654fccc4a4d8",
- "abc123",
- "foo",
- "",
- make(chan string)}
+ st := &StubPutHandler{
+ c: c,
+ expectPath: "acbd18db4cc2f85cedef654fccc4a4d8",
+ expectAPIToken: "abc123",
+ expectBody: "foo",
+ expectStorageClass: "",
+ returnStorageClasses: "default=1",
+ handled: make(chan string),
+ }
UploadToStubHelper(c, st,
func(kc *KeepClient, url string, _ io.ReadCloser, _ io.WriteCloser, uploadStatusChan chan uploadStatus) {
- go kc.uploadToKeepServer(url, st.expectPath, bytes.NewBuffer([]byte("foo")), uploadStatusChan, 3, kc.getRequestID())
+ go kc.uploadToKeepServer(url, st.expectPath, nil, bytes.NewBuffer([]byte("foo")), uploadStatusChan, 3, kc.getRequestID())
<-st.handled
status := <-uploadStatusChan
- c.Check(status, DeepEquals, uploadStatus{nil, fmt.Sprintf("%s/%s", url, st.expectPath), 200, 1, ""})
+ c.Check(status, DeepEquals, uploadStatus{nil, fmt.Sprintf("%s/%s", url, st.expectPath), 200, 1, map[string]int{"default": 1}, ""})
})
}
+func (s *StandaloneSuite) TestUploadWithStorageClasses(c *C) {
+ for _, trial := range []struct {
+ respHeader string
+ expectMap map[string]int
+ }{
+ {"", nil},
+ {"foo=1", map[string]int{"foo": 1}},
+ {" foo=1 , bar=2 ", map[string]int{"foo": 1, "bar": 2}},
+ {" =foo=1 ", nil},
+ {"foo", nil},
+ } {
+ st := &StubPutHandler{
+ c: c,
+ expectPath: "acbd18db4cc2f85cedef654fccc4a4d8",
+ expectAPIToken: "abc123",
+ expectBody: "foo",
+ expectStorageClass: "",
+ returnStorageClasses: trial.respHeader,
+ handled: make(chan string),
+ }
+
+ UploadToStubHelper(c, st,
+ func(kc *KeepClient, url string, reader io.ReadCloser, writer io.WriteCloser, uploadStatusChan chan uploadStatus) {
+ go kc.uploadToKeepServer(url, st.expectPath, nil, reader, uploadStatusChan, len("foo"), kc.getRequestID())
+
+ writer.Write([]byte("foo"))
+ writer.Close()
+
+ <-st.handled
+ status := <-uploadStatusChan
+ c.Check(status, DeepEquals, uploadStatus{nil, fmt.Sprintf("%s/%s", url, st.expectPath), 200, 1, trial.expectMap, ""})
+ })
+ }
+}
+
+func (s *StandaloneSuite) TestPutWithoutStorageClassesClusterSupport(c *C) {
+ nServers := 5
+ for _, trial := range []struct {
+ replicas int
+ clientClasses []string
+ putClasses []string
+ minRequests int
+ maxRequests int
+ success bool
+ }{
+ // Talking to an older cluster (no default storage classes exported
+ // config) and no other additional storage classes requirements.
+ {1, nil, nil, 1, 1, true},
+ {2, nil, nil, 2, 2, true},
+ {3, nil, nil, 3, 3, true},
+ {nServers*2 + 1, nil, nil, nServers, nServers, false},
+
+ {1, []string{"class1"}, nil, 1, 1, true},
+ {2, []string{"class1"}, nil, 2, 2, true},
+ {3, []string{"class1"}, nil, 3, 3, true},
+ {1, []string{"class1", "class2"}, nil, 1, 1, true},
+ {nServers*2 + 1, []string{"class1"}, nil, nServers, nServers, false},
+
+ {1, nil, []string{"class1"}, 1, 1, true},
+ {2, nil, []string{"class1"}, 2, 2, true},
+ {3, nil, []string{"class1"}, 3, 3, true},
+ {1, nil, []string{"class1", "class2"}, 1, 1, true},
+ {nServers*2 + 1, nil, []string{"class1"}, nServers, nServers, false},
+ } {
+ c.Logf("%+v", trial)
+ st := &StubPutHandler{
+ c: c,
+ expectPath: "acbd18db4cc2f85cedef654fccc4a4d8",
+ expectAPIToken: "abc123",
+ expectBody: "foo",
+ expectStorageClass: "*",
+ returnStorageClasses: "", // Simulate old cluster without SC keep support
+ handled: make(chan string, 100),
+ }
+ ks := RunSomeFakeKeepServers(st, nServers)
+ arv, _ := arvadosclient.MakeArvadosClient()
+ kc, _ := MakeKeepClient(arv)
+ kc.Want_replicas = trial.replicas
+ kc.StorageClasses = trial.clientClasses
+ kc.DefaultStorageClasses = nil // Simulate an old cluster without SC defaults
+ arv.ApiToken = "abc123"
+ localRoots := make(map[string]string)
+ writableLocalRoots := make(map[string]string)
+ for i, k := range ks {
+ localRoots[fmt.Sprintf("zzzzz-bi6l4-fakefakefake%03d", i)] = k.url
+ writableLocalRoots[fmt.Sprintf("zzzzz-bi6l4-fakefakefake%03d", i)] = k.url
+ defer k.listener.Close()
+ }
+ kc.SetServiceRoots(localRoots, writableLocalRoots, nil)
+
+ _, err := kc.BlockWrite(context.Background(), arvados.BlockWriteOptions{
+ Data: []byte("foo"),
+ StorageClasses: trial.putClasses,
+ })
+ if trial.success {
+ c.Check(err, IsNil)
+ } else {
+ c.Check(err, NotNil)
+ }
+ c.Check(len(st.handled) >= trial.minRequests, Equals, true, Commentf("len(st.handled)==%d, trial.minRequests==%d", len(st.handled), trial.minRequests))
+ c.Check(len(st.handled) <= trial.maxRequests, Equals, true, Commentf("len(st.handled)==%d, trial.maxRequests==%d", len(st.handled), trial.maxRequests))
+ if trial.clientClasses == nil && trial.putClasses == nil {
+ c.Check(st.requests[0].Header.Get("X-Keep-Storage-Classes"), Equals, "")
+ }
+ }
+}
+
+func (s *StandaloneSuite) TestPutWithStorageClasses(c *C) {
+ nServers := 5
+ for _, trial := range []struct {
+ replicas int
+ defaultClasses []string
+ clientClasses []string // clientClasses takes precedence over defaultClasses
+ putClasses []string // putClasses takes precedence over clientClasses
+ minRequests int
+ maxRequests int
+ success bool
+ }{
+ {1, []string{"class1"}, nil, nil, 1, 1, true},
+ {2, []string{"class1"}, nil, nil, 1, 2, true},
+ {3, []string{"class1"}, nil, nil, 2, 3, true},
+ {1, []string{"class1", "class2"}, nil, nil, 1, 1, true},
+
+ // defaultClasses doesn't matter when any of the others is specified.
+ {1, []string{"class1"}, []string{"class1"}, nil, 1, 1, true},
+ {2, []string{"class1"}, []string{"class1"}, nil, 1, 2, true},
+ {3, []string{"class1"}, []string{"class1"}, nil, 2, 3, true},
+ {1, []string{"class1"}, []string{"class1", "class2"}, nil, 1, 1, true},
+ {3, []string{"class1"}, nil, []string{"class1"}, 2, 3, true},
+ {1, []string{"class1"}, nil, []string{"class1", "class2"}, 1, 1, true},
+ {1, []string{"class1"}, []string{"class404"}, []string{"class1", "class2"}, 1, 1, true},
+ {1, []string{"class1"}, []string{"class1"}, []string{"class404", "class2"}, nServers, nServers, false},
+ {nServers*2 + 1, []string{}, []string{"class1"}, nil, nServers, nServers, false},
+ {1, []string{"class1"}, []string{"class404"}, nil, nServers, nServers, false},
+ {1, []string{"class1"}, []string{"class1", "class404"}, nil, nServers, nServers, false},
+ {1, []string{"class1"}, nil, []string{"class1", "class404"}, nServers, nServers, false},
+ } {
+ c.Logf("%+v", trial)
+ st := &StubPutHandler{
+ c: c,
+ expectPath: "acbd18db4cc2f85cedef654fccc4a4d8",
+ expectAPIToken: "abc123",
+ expectBody: "foo",
+ expectStorageClass: "*",
+ returnStorageClasses: "class1=2, class2=2",
+ handled: make(chan string, 100),
+ }
+ ks := RunSomeFakeKeepServers(st, nServers)
+ arv, _ := arvadosclient.MakeArvadosClient()
+ kc, _ := MakeKeepClient(arv)
+ kc.Want_replicas = trial.replicas
+ kc.StorageClasses = trial.clientClasses
+ kc.DefaultStorageClasses = trial.defaultClasses
+ arv.ApiToken = "abc123"
+ localRoots := make(map[string]string)
+ writableLocalRoots := make(map[string]string)
+ for i, k := range ks {
+ localRoots[fmt.Sprintf("zzzzz-bi6l4-fakefakefake%03d", i)] = k.url
+ writableLocalRoots[fmt.Sprintf("zzzzz-bi6l4-fakefakefake%03d", i)] = k.url
+ defer k.listener.Close()
+ }
+ kc.SetServiceRoots(localRoots, writableLocalRoots, nil)
+
+ _, err := kc.BlockWrite(context.Background(), arvados.BlockWriteOptions{
+ Data: []byte("foo"),
+ StorageClasses: trial.putClasses,
+ })
+ if trial.success {
+ c.Check(err, IsNil)
+ } else {
+ c.Check(err, NotNil)
+ }
+ c.Check(len(st.handled) >= trial.minRequests, Equals, true, Commentf("len(st.handled)==%d, trial.minRequests==%d", len(st.handled), trial.minRequests))
+ c.Check(len(st.handled) <= trial.maxRequests, Equals, true, Commentf("len(st.handled)==%d, trial.maxRequests==%d", len(st.handled), trial.maxRequests))
+ if !trial.success && trial.replicas == 1 && c.Check(len(st.requests) >= 2, Equals, true) {
+ // Max concurrency should be 1. First request
+ // should have succeeded for class1. Second
+ // request should only ask for class404.
+ c.Check(st.requests[1].Header.Get("X-Keep-Storage-Classes"), Equals, "class404")
+ }
+ }
+}
+
type FailHandler struct {
handled chan string
}
func(kc *KeepClient, url string, reader io.ReadCloser,
writer io.WriteCloser, uploadStatusChan chan uploadStatus) {
- go kc.uploadToKeepServer(url, hash, reader, uploadStatusChan, 3, kc.getRequestID())
+ go kc.uploadToKeepServer(url, hash, nil, reader, uploadStatusChan, 3, kc.getRequestID())
writer.Write([]byte("foo"))
writer.Close()
func (s *StandaloneSuite) TestPutB(c *C) {
hash := Md5String("foo")
- st := StubPutHandler{
- c,
- hash,
- "abc123",
- "foo",
- "",
- make(chan string, 5)}
+ st := &StubPutHandler{
+ c: c,
+ expectPath: hash,
+ expectAPIToken: "abc123",
+ expectBody: "foo",
+ expectStorageClass: "default",
+ returnStorageClasses: "",
+ handled: make(chan string, 5),
+ }
arv, _ := arvadosclient.MakeArvadosClient()
kc, _ := MakeKeepClient(arv)
func (s *StandaloneSuite) TestPutHR(c *C) {
hash := fmt.Sprintf("%x", md5.Sum([]byte("foo")))
- st := StubPutHandler{
- c,
- hash,
- "abc123",
- "foo",
- "",
- make(chan string, 5)}
+ st := &StubPutHandler{
+ c: c,
+ expectPath: hash,
+ expectAPIToken: "abc123",
+ expectBody: "foo",
+ expectStorageClass: "default",
+ returnStorageClasses: "",
+ handled: make(chan string, 5),
+ }
arv, _ := arvadosclient.MakeArvadosClient()
kc, _ := MakeKeepClient(arv)
func (s *StandaloneSuite) TestPutWithFail(c *C) {
hash := fmt.Sprintf("%x", md5.Sum([]byte("foo")))
- st := StubPutHandler{
- c,
- hash,
- "abc123",
- "foo",
- "",
- make(chan string, 4)}
+ st := &StubPutHandler{
+ c: c,
+ expectPath: hash,
+ expectAPIToken: "abc123",
+ expectBody: "foo",
+ expectStorageClass: "default",
+ returnStorageClasses: "",
+ handled: make(chan string, 4),
+ }
fh := FailHandler{
make(chan string, 1)}
func (s *StandaloneSuite) TestPutWithTooManyFail(c *C) {
hash := fmt.Sprintf("%x", md5.Sum([]byte("foo")))
- st := StubPutHandler{
- c,
- hash,
- "abc123",
- "foo",
- "",
- make(chan string, 1)}
+ st := &StubPutHandler{
+ c: c,
+ expectPath: hash,
+ expectAPIToken: "abc123",
+ expectBody: "foo",
+ expectStorageClass: "default",
+ returnStorageClasses: "",
+ handled: make(chan string, 1),
+ }
fh := FailHandler{
make(chan string, 4)}
_, replicas, err := kc.PutB([]byte("foo"))
- c.Check(err, FitsTypeOf, InsufficientReplicasError(errors.New("")))
+ c.Check(err, FitsTypeOf, InsufficientReplicasError{})
c.Check(replicas, Equals, 1)
c.Check(<-st.handled, Equals, ks1[0].url)
}
_, replicas, err := kc.PutB([]byte("foo"))
<-st.handled
- c.Check(err, FitsTypeOf, InsufficientReplicasError(errors.New("")))
+ c.Check(err, FitsTypeOf, InsufficientReplicasError{})
c.Check(replicas, Equals, 2)
}
func (s *StandaloneSuite) TestPutBWant2ReplicasWithOnlyOneWritableLocalRoot(c *C) {
hash := Md5String("foo")
- st := StubPutHandler{
- c,
- hash,
- "abc123",
- "foo",
- "",
- make(chan string, 5)}
+ st := &StubPutHandler{
+ c: c,
+ expectPath: hash,
+ expectAPIToken: "abc123",
+ expectBody: "foo",
+ expectStorageClass: "default",
+ returnStorageClasses: "",
+ handled: make(chan string, 5),
+ }
arv, _ := arvadosclient.MakeArvadosClient()
kc, _ := MakeKeepClient(arv)
_, replicas, err := kc.PutB([]byte("foo"))
- c.Check(err, FitsTypeOf, InsufficientReplicasError(errors.New("")))
+ c.Check(err, FitsTypeOf, InsufficientReplicasError{})
c.Check(replicas, Equals, 1)
c.Check(<-st.handled, Equals, localRoots[fmt.Sprintf("zzzzz-bi6l4-fakefakefake%03d", 0)])
func (s *StandaloneSuite) TestPutBWithNoWritableLocalRoots(c *C) {
hash := Md5String("foo")
- st := StubPutHandler{
- c,
- hash,
- "abc123",
- "foo",
- "",
- make(chan string, 5)}
+ st := &StubPutHandler{
+ c: c,
+ expectPath: hash,
+ expectAPIToken: "abc123",
+ expectBody: "foo",
+ expectStorageClass: "",
+ returnStorageClasses: "",
+ handled: make(chan string, 5),
+ }
arv, _ := arvadosclient.MakeArvadosClient()
kc, _ := MakeKeepClient(arv)
_, replicas, err := kc.PutB([]byte("foo"))
- c.Check(err, FitsTypeOf, InsufficientReplicasError(errors.New("")))
+ c.Check(err, FitsTypeOf, InsufficientReplicasError{})
c.Check(replicas, Equals, 0)
}
func (s *StandaloneSuite) TestPutBRetry(c *C) {
st := &FailThenSucceedHandler{
handled: make(chan string, 1),
- successhandler: StubPutHandler{
- c,
- Md5String("foo"),
- "abc123",
- "foo",
- "",
- make(chan string, 5)}}
+ successhandler: &StubPutHandler{
+ c: c,
+ expectPath: Md5String("foo"),
+ expectAPIToken: "abc123",
+ expectBody: "foo",
+ expectStorageClass: "default",
+ returnStorageClasses: "",
+ handled: make(chan string, 5),
+ },
+ }
arv, _ := arvadosclient.MakeArvadosClient()
kc, _ := MakeKeepClient(arv)
package keepclient
import (
+ "bytes"
+ "context"
"crypto/md5"
"errors"
"fmt"
"log"
"net/http"
"os"
+ "strconv"
"strings"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/asyncbuf"
)
// DebugPrintf emits debug messages. The easiest way to enable
url string
statusCode int
replicasStored int
+ classesStored map[string]int
response string
}
-func (kc *KeepClient) uploadToKeepServer(host string, hash string, body io.Reader,
- uploadStatusChan chan<- uploadStatus, expectedLength int64, reqid string) {
+func (kc *KeepClient) uploadToKeepServer(host string, hash string, classesTodo []string, body io.Reader,
+ uploadStatusChan chan<- uploadStatus, expectedLength int, reqid string) {
var req *http.Request
var err error
var url = fmt.Sprintf("%s/%s", host, hash)
if req, err = http.NewRequest("PUT", url, nil); err != nil {
DebugPrintf("DEBUG: [%s] Error creating request PUT %v error: %v", reqid, url, err.Error())
- uploadStatusChan <- uploadStatus{err, url, 0, 0, ""}
+ uploadStatusChan <- uploadStatus{err, url, 0, 0, nil, ""}
return
}
- req.ContentLength = expectedLength
+ req.ContentLength = int64(expectedLength)
if expectedLength > 0 {
req.Body = ioutil.NopCloser(body)
} else {
req.Header.Add("Authorization", "OAuth2 "+kc.Arvados.ApiToken)
req.Header.Add("Content-Type", "application/octet-stream")
req.Header.Add(XKeepDesiredReplicas, fmt.Sprint(kc.Want_replicas))
- if len(kc.StorageClasses) > 0 {
- req.Header.Add("X-Keep-Storage-Classes", strings.Join(kc.StorageClasses, ", "))
+ if len(classesTodo) > 0 {
+ req.Header.Add(XKeepStorageClasses, strings.Join(classesTodo, ", "))
}
var resp *http.Response
if resp, err = kc.httpClient().Do(req); err != nil {
DebugPrintf("DEBUG: [%s] Upload failed %v error: %v", reqid, url, err.Error())
- uploadStatusChan <- uploadStatus{err, url, 0, 0, err.Error()}
+ uploadStatusChan <- uploadStatus{err, url, 0, 0, nil, err.Error()}
return
}
if xr := resp.Header.Get(XKeepReplicasStored); xr != "" {
fmt.Sscanf(xr, "%d", &rep)
}
+ scc := resp.Header.Get(XKeepStorageClassesConfirmed)
+ classesStored, err := parseStorageClassesConfirmedHeader(scc)
+ if err != nil {
+ DebugPrintf("DEBUG: [%s] Ignoring invalid %s header %q: %s", reqid, XKeepStorageClassesConfirmed, scc, err)
+ }
defer resp.Body.Close()
defer io.Copy(ioutil.Discard, resp.Body)
response := strings.TrimSpace(string(respbody))
if err2 != nil && err2 != io.EOF {
DebugPrintf("DEBUG: [%s] Upload %v error: %v response: %v", reqid, url, err2.Error(), response)
- uploadStatusChan <- uploadStatus{err2, url, resp.StatusCode, rep, response}
+ uploadStatusChan <- uploadStatus{err2, url, resp.StatusCode, rep, classesStored, response}
} else if resp.StatusCode == http.StatusOK {
DebugPrintf("DEBUG: [%s] Upload %v success", reqid, url)
- uploadStatusChan <- uploadStatus{nil, url, resp.StatusCode, rep, response}
+ uploadStatusChan <- uploadStatus{nil, url, resp.StatusCode, rep, classesStored, response}
} else {
if resp.StatusCode >= 300 && response == "" {
response = resp.Status
}
DebugPrintf("DEBUG: [%s] Upload %v error: %v response: %v", reqid, url, resp.StatusCode, response)
- uploadStatusChan <- uploadStatus{errors.New(resp.Status), url, resp.StatusCode, rep, response}
+ uploadStatusChan <- uploadStatus{errors.New(resp.Status), url, resp.StatusCode, rep, classesStored, response}
}
}
-func (kc *KeepClient) putReplicas(
- hash string,
- getReader func() io.Reader,
- expectedLength int64) (locator string, replicas int, err error) {
-
- reqid := kc.getRequestID()
+func (kc *KeepClient) BlockWrite(ctx context.Context, req arvados.BlockWriteOptions) (arvados.BlockWriteResponse, error) {
+ var resp arvados.BlockWriteResponse
+ var getReader func() io.Reader
+ if req.Data == nil && req.Reader == nil {
+ return resp, errors.New("invalid BlockWriteOptions: Data and Reader are both nil")
+ }
+ if req.DataSize < 0 {
+ return resp, fmt.Errorf("invalid BlockWriteOptions: negative DataSize %d", req.DataSize)
+ }
+ if req.DataSize > BLOCKSIZE || len(req.Data) > BLOCKSIZE {
+ return resp, ErrOversizeBlock
+ }
+ if req.Data != nil {
+ if req.DataSize > len(req.Data) {
+ return resp, errors.New("invalid BlockWriteOptions: DataSize > len(Data)")
+ }
+ if req.DataSize == 0 {
+ req.DataSize = len(req.Data)
+ }
+ getReader = func() io.Reader { return bytes.NewReader(req.Data[:req.DataSize]) }
+ } else {
+ buf := asyncbuf.NewBuffer(make([]byte, 0, req.DataSize))
+ go func() {
+ _, err := io.Copy(buf, HashCheckingReader{req.Reader, md5.New(), req.Hash})
+ buf.CloseWithError(err)
+ }()
+ getReader = buf.NewReader
+ }
+ if req.Hash == "" {
+ m := md5.New()
+ _, err := io.Copy(m, getReader())
+ if err != nil {
+ return resp, err
+ }
+ req.Hash = fmt.Sprintf("%x", m.Sum(nil))
+ }
+ if req.StorageClasses == nil {
+ if len(kc.StorageClasses) > 0 {
+ req.StorageClasses = kc.StorageClasses
+ } else {
+ req.StorageClasses = kc.DefaultStorageClasses
+ }
+ }
+ if req.Replicas == 0 {
+ req.Replicas = kc.Want_replicas
+ }
+ if req.RequestID == "" {
+ req.RequestID = kc.getRequestID()
+ }
+ if req.Attempts == 0 {
+ req.Attempts = 1 + kc.Retries
+ }
// Calculate the ordering for uploading to servers
- sv := NewRootSorter(kc.WritableLocalRoots(), hash).GetSortedRoots()
+ sv := NewRootSorter(kc.WritableLocalRoots(), req.Hash).GetSortedRoots()
// The next server to try contacting
nextServer := 0
}()
}()
- replicasDone := 0
- replicasTodo := kc.Want_replicas
+ replicasTodo := map[string]int{}
+ for _, c := range req.StorageClasses {
+ replicasTodo[c] = req.Replicas
+ }
replicasPerThread := kc.replicasPerService
if replicasPerThread < 1 {
// unlimited or unknown
- replicasPerThread = replicasTodo
+ replicasPerThread = req.Replicas
}
- retriesRemaining := 1 + kc.Retries
+ retriesRemaining := req.Attempts
var retryServers []string
lastError := make(map[string]string)
+ trackingClasses := len(replicasTodo) > 0
for retriesRemaining > 0 {
retriesRemaining--
nextServer = 0
retryServers = []string{}
- for replicasTodo > 0 {
- for active*replicasPerThread < replicasTodo {
+ for {
+ var classesTodo []string
+ var maxConcurrency int
+ for sc, r := range replicasTodo {
+ classesTodo = append(classesTodo, sc)
+ if maxConcurrency == 0 || maxConcurrency > r {
+ // Having more than r
+ // writes in flight
+ // would overreplicate
+ // class sc.
+ maxConcurrency = r
+ }
+ }
+ if !trackingClasses {
+ maxConcurrency = req.Replicas - resp.Replicas
+ }
+ if maxConcurrency < 1 {
+ // If there are no non-zero entries in
+ // replicasTodo, we're done.
+ break
+ }
+ for active*replicasPerThread < maxConcurrency {
// Start some upload requests
if nextServer < len(sv) {
- DebugPrintf("DEBUG: [%s] Begin upload %s to %s", reqid, hash, sv[nextServer])
- go kc.uploadToKeepServer(sv[nextServer], hash, getReader(), uploadStatusChan, expectedLength, reqid)
+ DebugPrintf("DEBUG: [%s] Begin upload %s to %s", req.RequestID, req.Hash, sv[nextServer])
+ go kc.uploadToKeepServer(sv[nextServer], req.Hash, classesTodo, getReader(), uploadStatusChan, req.DataSize, req.RequestID)
nextServer++
active++
} else {
msg += resp + "; "
}
msg = msg[:len(msg)-2]
- return locator, replicasDone, InsufficientReplicasError(errors.New(msg))
+ return resp, InsufficientReplicasError{error: errors.New(msg)}
}
break
}
}
- DebugPrintf("DEBUG: [%s] Replicas remaining to write: %v active uploads: %v",
- reqid, replicasTodo, active)
-
- // Now wait for something to happen.
- if active > 0 {
- status := <-uploadStatusChan
- active--
-
- if status.statusCode == 200 {
- // good news!
- replicasDone += status.replicasStored
- replicasTodo -= status.replicasStored
- locator = status.response
- delete(lastError, status.url)
- } else {
- msg := fmt.Sprintf("[%d] %s", status.statusCode, status.response)
- if len(msg) > 100 {
- msg = msg[:100]
- }
- lastError[status.url] = msg
- }
- if status.statusCode == 0 || status.statusCode == 408 || status.statusCode == 429 ||
- (status.statusCode >= 500 && status.statusCode != 503) {
- // Timeout, too many requests, or other server side failure
- // Do not retry when status code is 503, which means the keep server is full
- retryServers = append(retryServers, status.url[0:strings.LastIndex(status.url, "/")])
+ DebugPrintf("DEBUG: [%s] Replicas remaining to write: %v active uploads: %v", req.RequestID, replicasTodo, active)
+ if active < 1 {
+ break
+ }
+
+ // Wait for something to happen.
+ status := <-uploadStatusChan
+ active--
+
+ if status.statusCode == http.StatusOK {
+ delete(lastError, status.url)
+ resp.Replicas += status.replicasStored
+ if len(status.classesStored) == 0 {
+ // Server doesn't report
+ // storage classes. Give up
+ // trying to track which ones
+ // are satisfied; just rely on
+ // total # replicas.
+ trackingClasses = false
}
+ for className, replicas := range status.classesStored {
+ if replicasTodo[className] > replicas {
+ replicasTodo[className] -= replicas
+ } else {
+ delete(replicasTodo, className)
+ }
+ }
+ resp.Locator = status.response
} else {
- break
+ msg := fmt.Sprintf("[%d] %s", status.statusCode, status.response)
+ if len(msg) > 100 {
+ msg = msg[:100]
+ }
+ lastError[status.url] = msg
+ }
+
+ if status.statusCode == 0 || status.statusCode == 408 || status.statusCode == 429 ||
+ (status.statusCode >= 500 && status.statusCode != 503) {
+ // Timeout, too many requests, or other server side failure
+ // Do not retry when status code is 503, which means the keep server is full
+ retryServers = append(retryServers, status.url[0:strings.LastIndex(status.url, "/")])
}
}
sv = retryServers
}
- return locator, replicasDone, nil
+ return resp, nil
+}
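The per-class bookkeeping that `BlockWrite` performs can be sketched in isolation: each successful write decrements the outstanding count for every storage class the server confirms, and writing stops once no class still has a positive count. This is a minimal standalone sketch (class names and counts here are hypothetical, not taken from the patch):

```go
package main

import "fmt"

// applyConfirmed decrements replicasTodo by the per-class replica counts a
// server reported, deleting classes that are fully satisfied.
func applyConfirmed(replicasTodo, confirmed map[string]int) {
	for class, n := range confirmed {
		if replicasTodo[class] > n {
			replicasTodo[class] -= n
		} else {
			delete(replicasTodo, class)
		}
	}
}

func main() {
	// Want 2 replicas in each of two (hypothetical) classes.
	todo := map[string]int{"default": 2, "archive": 2}
	// First write confirms 2 "default" replicas but only 1 "archive".
	applyConfirmed(todo, map[string]int{"default": 2, "archive": 1})
	fmt.Println(todo, len(todo) == 0) // map[archive:1] false -> keep writing
	// Second write satisfies the remaining "archive" replica.
	applyConfirmed(todo, map[string]int{"archive": 1})
	fmt.Println(len(todo) == 0) // true -> done
}
```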
+
+func parseStorageClassesConfirmedHeader(hdr string) (map[string]int, error) {
+ if hdr == "" {
+ return nil, nil
+ }
+ classesStored := map[string]int{}
+ for _, cr := range strings.Split(hdr, ",") {
+ cr = strings.TrimSpace(cr)
+ if cr == "" {
+ continue
+ }
+ fields := strings.SplitN(cr, "=", 2)
+ if len(fields) != 2 {
+ return nil, fmt.Errorf("expected exactly one '=' char in entry %q", cr)
+ }
+ className := fields[0]
+ if className == "" {
+ return nil, fmt.Errorf("empty class name in entry %q", cr)
+ }
+ replicas, err := strconv.Atoi(fields[1])
+ if err != nil || replicas < 1 {
+ return nil, fmt.Errorf("invalid replica count %q", fields[1])
+ }
+ classesStored[className] = replicas
+ }
+ return classesStored, nil
}
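For reference, the confirmed-classes header is a comma-separated list of `class=replicas` entries. A self-contained sketch of the same parsing rules (mirroring `parseStorageClassesConfirmedHeader` above; the header values shown are hypothetical):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseConfirmed parses a comma-separated list of class=replicas entries,
// skipping empty entries and rejecting malformed ones.
func parseConfirmed(hdr string) (map[string]int, error) {
	if hdr == "" {
		return nil, nil
	}
	out := map[string]int{}
	for _, cr := range strings.Split(hdr, ",") {
		cr = strings.TrimSpace(cr)
		if cr == "" {
			continue
		}
		fields := strings.SplitN(cr, "=", 2)
		if len(fields) != 2 || fields[0] == "" {
			return nil, fmt.Errorf("malformed entry %q", cr)
		}
		n, err := strconv.Atoi(fields[1])
		if err != nil || n < 1 {
			return nil, fmt.Errorf("invalid replica count %q", fields[1])
		}
		out[fields[0]] = n
	}
	return out, nil
}

func main() {
	m, err := parseConfirmed("default=2, archive=1")
	fmt.Println(m, err) // map[archive:1 default:2] <nil>
}
```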
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.util.Objects;
+import java.util.concurrent.TimeUnit;
abstract class BaseApiClient {
BaseApiClient(ConfigProvider config) {
this.config = config;
- this.client = OkHttpClientFactory.INSTANCE.create(config.isApiHostInsecure());
+ this.client = OkHttpClientFactory.INSTANCE.create(config.isApiHostInsecure())
+ .newBuilder()
+ .connectTimeout(config.getConnectTimeout(), TimeUnit.MILLISECONDS)
+ .readTimeout(config.getReadTimeout(), TimeUnit.MILLISECONDS)
+ .writeTimeout(config.getWriteTimeout(), TimeUnit.MILLISECONDS)
+ .build();
}
Request.Builder getRequestBuilder() {
private String groupClass;
@JsonProperty("description")
private String description;
- @JsonProperty("writable_by")
+ @JsonProperty(value = "writable_by", access = JsonProperty.Access.WRITE_ONLY)
private List<String> writableBy;
@JsonProperty("delete_at")
private LocalDateTime deleteAt;
@JsonInclude(JsonInclude.Include.NON_NULL)
@JsonIgnoreProperties(ignoreUnknown = true)
-@JsonPropertyOrder({ "name", "head_kind", "head_uuid", "link_class" })
+@JsonPropertyOrder({"name", "head_kind", "head_uuid", "link_class"})
public class Link extends Item {
@JsonProperty("name")
private String name;
- @JsonProperty("head_kind")
+ @JsonProperty(value = "head_kind", access = JsonProperty.Access.WRITE_ONLY)
private String headKind;
@JsonProperty("head_uuid")
private String headUuid;
+ @JsonProperty("tail_uuid")
+ private String tailUuid;
+ @JsonProperty(value = "tail_kind", access = JsonProperty.Access.WRITE_ONLY)
+ private String tailKind;
@JsonProperty("link_class")
private String linkClass;
return headUuid;
}
+ public String getTailUuid() {
+ return tailUuid;
+ }
+
+ public String getTailKind() {
+ return tailKind;
+ }
+
public String getLinkClass() {
return linkClass;
}
this.headUuid = headUuid;
}
+ public void setTailUuid(String tailUuid) {
+ this.tailUuid = tailUuid;
+ }
+
+ public void setTailKind(String tailKind) {
+ this.tailKind = tailKind;
+ }
+
public void setLinkClass(String linkClass) {
this.linkClass = linkClass;
}
String getApiProtocol();
+ int getConnectTimeout();
+
+ int getReadTimeout();
+
+ int getWriteTimeout();
//FILE UPLOAD
int getFileSplitSize();
private File fileSplitDirectory;
private int numberOfCopies;
private int numberOfRetries;
+ private int connectTimeout;
+ private int readTimeout;
+ private int writeTimeout;
+
+    ExternalConfigProvider(boolean apiHostInsecure, String keepWebHost, int keepWebPort, String apiHost, int apiPort,
+                           String apiToken, String apiProtocol, int fileSplitSize, File fileSplitDirectory,
+                           int numberOfCopies, int numberOfRetries)
+    {
+        this(apiHostInsecure, keepWebHost, keepWebPort, apiHost, apiPort, apiToken, apiProtocol, fileSplitSize,
+                fileSplitDirectory, numberOfCopies, numberOfRetries, 60000, 60000, 60000);
+    }
- ExternalConfigProvider(boolean apiHostInsecure, String keepWebHost, int keepWebPort, String apiHost, int apiPort, String apiToken, String apiProtocol, int fileSplitSize, File fileSplitDirectory, int numberOfCopies, int numberOfRetries) {
+ ExternalConfigProvider(boolean apiHostInsecure, String keepWebHost, int keepWebPort, String apiHost, int apiPort,
+ String apiToken, String apiProtocol, int fileSplitSize, File fileSplitDirectory,
+ int numberOfCopies, int numberOfRetries,
+ int connectTimeout, int readTimeout, int writeTimeout)
+ {
this.apiHostInsecure = apiHostInsecure;
this.keepWebHost = keepWebHost;
this.keepWebPort = keepWebPort;
this.fileSplitDirectory = fileSplitDirectory;
this.numberOfCopies = numberOfCopies;
this.numberOfRetries = numberOfRetries;
+ this.connectTimeout = connectTimeout;
+ this.readTimeout = readTimeout;
+ this.writeTimeout = writeTimeout;
}
public static ExternalConfigProviderBuilder builder() {
return this.numberOfRetries;
}
+ public int getConnectTimeout() {
+ return this.connectTimeout;
+ }
+
+ public int getReadTimeout() {
+ return this.readTimeout;
+ }
+
+ public int getWriteTimeout() {
+ return this.writeTimeout;
+ }
+
public static class ExternalConfigProviderBuilder {
private boolean apiHostInsecure;
private String keepWebHost;
public String getIntegrationTestProjectUuid() {
return this.getString("integration-tests.project-uuid");
}
+
+ @Override
+ public int getConnectTimeout() {
+ return this.getInt("connectTimeout");
+ }
+
+ @Override
+ public int getReadTimeout() {
+ return this.getInt("readTimeout");
+ }
+
+ @Override
+ public int getWriteTimeout() {
+ return this.getInt("writeTimeout");
+ }
}
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0 OR Apache-2.0
+#
# Arvados client default configuration
#
# Remarks:
temp-dir = /tmp/file-split
copies = 2
retries = 0
+ connectTimeout = 60000
+ readTimeout = 60000
+ writeTimeout = 60000
}
--- /dev/null
+/*
+ * Copyright (C) The Arvados Authors. All rights reserved.
+ *
+ * SPDX-License-Identifier: AGPL-3.0 OR Apache-2.0
+ *
+ */
+
+package org.arvados.client.api.client;
+
+import okhttp3.mockwebserver.RecordedRequest;
+import org.arvados.client.api.model.Link;
+import org.arvados.client.api.model.LinkList;
+import org.arvados.client.test.utils.ArvadosClientMockedWebServerTest;
+import org.arvados.client.test.utils.RequestMethod;
+import org.junit.Test;
+
+import static org.arvados.client.test.utils.ApiClientTestUtils.assertAuthorizationHeader;
+import static org.arvados.client.test.utils.ApiClientTestUtils.assertRequestMethod;
+import static org.arvados.client.test.utils.ApiClientTestUtils.assertRequestPath;
+import static org.arvados.client.test.utils.ApiClientTestUtils.getResponse;
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.junit.Assert.assertEquals;
+
+public class LinkApiClientTest extends ArvadosClientMockedWebServerTest {
+
+ private static final String RESOURCE = "links";
+
+ private final LinksApiClient client = new LinksApiClient(CONFIG);
+
+ @Test
+ public void listLinks() throws Exception {
+ // given
+ server.enqueue(getResponse("links-list"));
+
+ // when
+ LinkList actual = client.list();
+
+ // then
+ RecordedRequest request = server.takeRequest();
+ assertAuthorizationHeader(request);
+ assertRequestPath(request, RESOURCE);
+ assertRequestMethod(request, RequestMethod.GET);
+ assertThat(actual.getItemsAvailable()).isEqualTo(2);
+ }
+
+ @Test
+ public void getLink() throws Exception {
+ // given
+ server.enqueue(getResponse("links-get"));
+
+ String uuid = "arkau-o0j2j-huxuaxbi46s1yml";
+
+ // when
+ Link actual = client.get(uuid);
+
+ // then
+ RecordedRequest request = server.takeRequest();
+ assertAuthorizationHeader(request);
+ assertRequestPath(request, RESOURCE + "/" + uuid);
+ assertRequestMethod(request, RequestMethod.GET);
+        assertEquals(uuid, actual.getUuid());
+        assertEquals("can_read", actual.getName());
+        assertEquals("arvados#group", actual.getHeadKind());
+        assertEquals("arkau-j7d0g-fcedae2076pw56h", actual.getHeadUuid());
+        assertEquals("ardev-tpzed-n3kzq4fvoks3uw4", actual.getTailUuid());
+        assertEquals("arvados#user", actual.getTailKind());
+        assertEquals("permission", actual.getLinkClass());
+ }
+
+ @Test
+ public void createLink() throws Exception {
+ // given
+ server.enqueue(getResponse("links-create"));
+
+ String name = "Star Link";
+
+        Link link = new Link();
+        link.setName(name);
+
+        // when
+        Link actual = client.create(link);
+
+ // then
+ RecordedRequest request = server.takeRequest();
+ assertAuthorizationHeader(request);
+ assertRequestPath(request, RESOURCE);
+ assertRequestMethod(request, RequestMethod.POST);
+        assertThat(actual.getName()).isEqualTo(name);
+        assertEquals("arkau-o0j2j-huxuaxbi46s1yml", actual.getUuid());
+        assertEquals("arvados#group", actual.getHeadKind());
+        assertEquals("arkau-j7d0g-fcedae2076pw56h", actual.getHeadUuid());
+        assertEquals("ardev-tpzed-n3kzq4fvoks3uw4", actual.getTailUuid());
+        assertEquals("arvados#user", actual.getTailKind());
+        assertEquals("star", actual.getLinkClass());
+ }
+}
\ No newline at end of file
--- /dev/null
+{
+ "href": "/links/arkau-o0j2j-huxuaxbi46s1yml",
+ "kind": "arvados#link",
+ "etag": "zw1rlnbig0kpm9btw8us3pn9",
+ "uuid": "arkau-o0j2j-huxuaxbi46s1yml",
+ "owner_uuid": "arkau-tpzed-000000000000000",
+ "created_at": "2021-11-30T08:45:04.373354745Z",
+ "modified_by_client_uuid": null,
+ "modified_by_user_uuid": "ardev-tpzed-n3kzq4fvoks3uw4",
+ "modified_at": "2021-11-30T08:45:04.374489000Z",
+ "tail_uuid": "ardev-tpzed-n3kzq4fvoks3uw4",
+ "link_class": "star",
+ "name": "Star Link",
+ "head_uuid": "arkau-j7d0g-fcedae2076pw56h",
+ "head_kind": "arvados#group",
+ "tail_kind": "arvados#user",
+ "properties": {}
+}
\ No newline at end of file
--- /dev/null
+{
+ "href": "/links/arkau-o0j2j-huxuaxbi46s1yml",
+ "kind": "arvados#link",
+ "etag": "zw1rlnbig0kpm9btw8us3pn9",
+ "uuid": "arkau-o0j2j-huxuaxbi46s1yml",
+ "owner_uuid": "arkau-tpzed-000000000000000",
+ "created_at": "2021-11-30T08:45:04.373354745Z",
+ "modified_by_client_uuid": null,
+ "modified_by_user_uuid": "ardev-tpzed-n3kzq4fvoks3uw4",
+ "modified_at": "2021-11-30T08:45:04.374489000Z",
+ "tail_uuid": "ardev-tpzed-n3kzq4fvoks3uw4",
+ "link_class": "permission",
+ "name": "can_read",
+ "head_uuid": "arkau-j7d0g-fcedae2076pw56h",
+ "head_kind": "arvados#group",
+ "tail_kind": "arvados#user",
+ "properties": {}
+}
\ No newline at end of file
--- /dev/null
+{
+ "kind": "arvados#linkList",
+ "etag": "",
+ "self_link": "",
+ "offset": 0,
+ "limit": 100,
+ "items": [
+ {
+ "href": "/links/arkau-o0j2j-x2b4rdadxs2fizn",
+ "kind": "arvados#link",
+ "etag": "dkhtr9tvp9zfy0d90xjn7w1t7",
+ "uuid": "arkau-o0j2j-x2b4rdadxs2fizn",
+ "owner_uuid": "arkau-j7d0g-publicfavorites",
+ "created_at": "2021-10-27T12:00:06.607794000Z",
+ "modified_by_client_uuid": null,
+ "modified_by_user_uuid": "arlog-tpzed-fyiau9qwo7ytntu",
+ "modified_at": "2021-10-27T12:00:06.609840000Z",
+ "tail_uuid": "arkau-j7d0g-publicfavorites",
+ "link_class": "star",
+ "name": "pRED Data Commons Service - Open access",
+ "head_uuid": "arkau-j7d0g-sfhw8b1uson0hwh",
+ "head_kind": "arvados#group",
+ "tail_kind": "arvados#group",
+ "properties": {}
+ },
+ {
+ "href": "/links/arkau-o0j2j-r5am4lz9gnu488k",
+ "kind": "arvados#link",
+ "etag": "9nt0c2xn5oz1jzjzawlycmehz",
+ "uuid": "arkau-o0j2j-r5am4lz9gnu488k",
+ "owner_uuid": "arkau-j7d0g-publicfavorites",
+ "created_at": "2021-06-23T14:58:06.189520000Z",
+ "modified_by_client_uuid": null,
+ "modified_by_user_uuid": "arlog-tpzed-xzjyeljl6co7vlz",
+ "modified_at": "2021-06-23T14:58:06.196208000Z",
+ "tail_uuid": "arkau-j7d0g-publicfavorites",
+ "link_class": "star",
+ "name": "Open Targets Genetics",
+ "head_uuid": "arkau-j7d0g-pj5wysmpy5wn8yo",
+ "head_kind": "arvados#group",
+ "tail_kind": "arvados#group",
+ "properties": {}
+ }
+ ],
+ "items_available": 2
+}
\ No newline at end of file
#
set -e
-format_last_commit_here() {
- local format="$1"; shift
- TZ=UTC git log -n1 --first-parent "--format=format:$format" .
+commit_at_dir() {
+ git log -n1 --format=%H .
}
-version_from_git() {
+build_version() {
# Output the version being built, or if we're building a
# dev/prerelease, output a version number based on the git log for
# the current working directory.
return
fi
- local git_ts git_hash prefix
- if [[ -n "$1" ]] ; then
- prefix="$1"
- else
- prefix="0.1"
- fi
-
- declare $(format_last_commit_here "git_ts=%ct git_hash=%h")
- ARVADOS_BUILDING_VERSION="$(git describe --abbrev=0).$(date -ud "@$git_ts" +%Y%m%d%H%M%S)"
- echo "$ARVADOS_BUILDING_VERSION"
-}
-
-nohash_version_from_git() {
- version_from_git $1 | cut -d. -f1-3
+ $WORKSPACE/build/version-at-commit.sh $(commit_at_dir)
}
-timestamp_from_git() {
- format_last_commit_here "%ct"
-}
-if [[ -n "$1" ]]; then
- build_version="$1"
-else
- build_version="$(version_from_git)"
-fi
-#UID=$(id -u) # UID is read-only on many systems
-exec docker run --rm --user $UID -v $PWD:$PWD -w $PWD gradle /bin/sh -c 'gradle clean && gradle test && gradle jar install '"$gradle_upload"
\ No newline at end of file
+exec docker run --rm --user $UID -v $PWD:$PWD -w $PWD gradle:5.3.1 /bin/sh -c 'gradle clean && gradle test && gradle jar install '"-Pversion=$(build_version) $gradle_upload"
+++ /dev/null
-<?xml version="1.0" encoding="UTF-8"?>
-<classpath>
- <classpathentry including="**/*.java" kind="src" output="target/test-classes" path="src/test/java"/>
- <classpathentry including="**/*.java" kind="src" path="src/main/java"/>
- <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
- <classpathentry kind="var" path="M2_REPO/com/google/apis/google-api-services-discovery/v1-rev42-1.18.0-rc/google-api-services-discovery-v1-rev42-1.18.0-rc.jar"/>
- <classpathentry kind="var" path="M2_REPO/com/google/api-client/google-api-client/1.18.0-rc/google-api-client-1.18.0-rc.jar"/>
- <classpathentry kind="var" path="M2_REPO/com/google/http-client/google-http-client/1.18.0-rc/google-http-client-1.18.0-rc.jar"/>
- <classpathentry kind="var" path="M2_REPO/com/google/code/findbugs/jsr305/1.3.9/jsr305-1.3.9.jar"/>
- <classpathentry kind="var" path="M2_REPO/org/apache/httpcomponents/httpclient/4.0.1/httpclient-4.0.1.jar"/>
- <classpathentry kind="var" path="M2_REPO/org/apache/httpcomponents/httpcore/4.0.1/httpcore-4.0.1.jar"/>
- <classpathentry kind="var" path="M2_REPO/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar"/>
- <classpathentry kind="var" path="M2_REPO/commons-codec/commons-codec/1.3/commons-codec-1.3.jar"/>
- <classpathentry kind="var" path="M2_REPO/com/google/http-client/google-http-client-jackson2/1.18.0-rc/google-http-client-jackson2-1.18.0-rc.jar"/>
- <classpathentry kind="var" path="M2_REPO/com/fasterxml/jackson/core/jackson-core/2.1.3/jackson-core-2.1.3.jar"/>
- <classpathentry kind="var" path="M2_REPO/com/google/guava/guava/r05/guava-r05.jar"/>
- <classpathentry kind="var" path="M2_REPO/log4j/log4j/1.2.16/log4j-1.2.16.jar"/>
- <classpathentry kind="var" path="M2_REPO/com/googlecode/json-simple/json-simple/1.1.1/json-simple-1.1.1.jar"/>
- <classpathentry kind="var" path="M2_REPO/junit/junit/4.8.1/junit-4.8.1.jar"/>
- <classpathentry kind="output" path="target/classes"/>
-</classpath>
+++ /dev/null
-<?xml version="1.0" encoding="UTF-8"?>
-<projectDescription>
- <name>java</name>
- <comment>NO_M2ECLIPSE_SUPPORT: Project files created with the maven-eclipse-plugin are not supported in M2Eclipse.</comment>
- <projects/>
- <buildSpec>
- <buildCommand>
- <name>org.eclipse.jdt.core.javabuilder</name>
- </buildCommand>
- </buildSpec>
- <natures>
- <nature>org.eclipse.jdt.core.javanature</nature>
- </natures>
-</projectDescription>
\ No newline at end of file
+++ /dev/null
-#Mon Apr 28 10:33:40 EDT 2014
-org.eclipse.jdt.core.compiler.codegen.targetPlatform=1.6
-eclipse.preferences.version=1
-org.eclipse.jdt.core.compiler.source=1.6
-org.eclipse.jdt.core.compiler.compliance=1.6
+++ /dev/null
-// Copyright (C) The Arvados Authors. All rights reserved.
-//
-// SPDX-License-Identifier: Apache-2.0
-
-/**
- * This Sample test program is useful in getting started with working with Arvados Java SDK.
- * @author radhika
- *
- */
-
-import org.arvados.sdk.Arvados;
-
-import java.io.File;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.Map.Entry;
-import java.util.Set;
-
-public class ArvadosSDKJavaExample {
- /** Make sure the following environment variables are set before using Arvados:
- * ARVADOS_API_TOKEN, ARVADOS_API_HOST and ARVADOS_API_HOST_INSECURE
- * Set ARVADOS_API_HOST_INSECURE to true if you are using self-singed
- * certificates in development and want to bypass certificate validations.
- *
- * If you are not using env variables, you can pass them to Arvados constructor.
- *
- * Please refer to http://doc.arvados.org/api/index.html for a complete list
- * of the available API methods.
- */
- public static void main(String[] args) throws Exception {
- String apiName = "arvados";
- String apiVersion = "v1";
-
- Arvados arv = new Arvados(apiName, apiVersion);
-
- // Make a users list call. Here list on users is the method being invoked.
- // Expect a Map containing the list of users as the response.
- System.out.println("Making an arvados users.list api call");
-
- Map<String, Object> params = new HashMap<String, Object>();
-
- Map response = arv.call("users", "list", params);
- System.out.println("Arvados users.list:\n");
- printResponse(response);
-
- // get uuid of the first user from the response
- List items = (List)response.get("items");
-
- Map firstUser = (Map)items.get(0);
- String userUuid = (String)firstUser.get("uuid");
-
- // Make a users get call on the uuid obtained above
- System.out.println("\n\n\nMaking a users.get call for " + userUuid);
- params = new HashMap<String, Object>();
- params.put("uuid", userUuid);
- response = arv.call("users", "get", params);
- System.out.println("Arvados users.get:\n");
- printResponse(response);
-
- // Make a pipeline_templates list call
- System.out.println("\n\n\nMaking a pipeline_templates.list call.");
-
- params = new HashMap<String, Object>();
- response = arv.call("pipeline_templates", "list", params);
-
- System.out.println("Arvados pipelinetempates.list:\n");
- printResponse(response);
- }
-
- private static void printResponse(Map response){
- Set<Entry<String,Object>> entrySet = (Set<Entry<String,Object>>)response.entrySet();
- for (Map.Entry<String, Object> entry : entrySet) {
- if ("items".equals(entry.getKey())) {
- List items = (List)entry.getValue();
- for (Object item : items) {
- System.out.println(" " + item);
- }
- } else {
- System.out.println(entry.getKey() + " = " + entry.getValue());
- }
- }
- }
-}
+++ /dev/null
-// Copyright (C) The Arvados Authors. All rights reserved.
-//
-// SPDX-License-Identifier: Apache-2.0
-
-/**
- * This Sample test program is useful in getting started with using Arvados Java SDK.
- * This program creates an Arvados instance using the configured environment variables.
- * It then provides a prompt to input method name and input parameters.
- * The program them invokes the API server to execute the specified method.
- *
- * @author radhika
- */
-
-import org.arvados.sdk.Arvados;
-
-import java.io.File;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.Map.Entry;
-import java.util.Set;
-import java.io.BufferedReader;
-import java.io.InputStreamReader;
-
-public class ArvadosSDKJavaExampleWithPrompt {
- /**
- * Make sure the following environment variables are set before using Arvados:
- * ARVADOS_API_TOKEN, ARVADOS_API_HOST and ARVADOS_API_HOST_INSECURE Set
- * ARVADOS_API_HOST_INSECURE to true if you are using self-singed certificates
- * in development and want to bypass certificate validations.
- *
- * Please refer to http://doc.arvados.org/api/index.html for a complete list
- * of the available API methods.
- */
- public static void main(String[] args) throws Exception {
- String apiName = "arvados";
- String apiVersion = "v1";
-
- System.out.print("Welcome to Arvados Java SDK.");
- System.out.println("\nYou can use this example to call API methods interactively.");
- System.out.println("\nPlease refer to http://doc.arvados.org/api/index.html for api documentation");
- System.out.println("\nTo make the calls, enter input data at the prompt.");
- System.out.println("When entering parameters, you may enter a simple string or a well-formed json.");
- System.out.println("For example to get a user you may enter: user, zzzzz-12345-67890");
- System.out.println("Or to filter links, you may enter: filters, [[ \"name\", \"=\", \"can_manage\"]]");
-
- System.out.println("\nEnter ^C when you want to quit");
-
- // use configured env variables for API TOKEN, HOST and HOST_INSECURE
- Arvados arv = new Arvados(apiName, apiVersion);
-
- while (true) {
- try {
- // prompt for resource
- System.out.println("\n\nEnter Resource name (for example users)");
- System.out.println("\nAvailable resources are: " + arv.getAvailableResourses());
- System.out.print("\n>>> ");
-
- // read resource name
- BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
- String resourceName = in.readLine().trim();
- if ("".equals(resourceName)) {
- throw (new Exception("No resource name entered"));
- }
- // read method name
- System.out.println("\nEnter method name (for example get)");
- System.out.println("\nAvailable methods are: " + arv.getAvailableMethodsForResourse(resourceName));
- System.out.print("\n>>> ");
- String methodName = in.readLine().trim();
- if ("".equals(methodName)) {
- throw (new Exception("No method name entered"));
- }
-
- // read method parameters
- System.out.println("\nEnter parameter name, value (for example uuid, uuid-value)");
- System.out.println("\nAvailable parameters are: " +
- arv.getAvailableParametersForMethod(resourceName, methodName));
-
- System.out.print("\n>>> ");
- Map paramsMap = new HashMap();
- String param = "";
- try {
- do {
- param = in.readLine();
- if (param.isEmpty())
- break;
- int index = param.indexOf(","); // first comma
- String paramName = param.substring(0, index);
- String paramValue = param.substring(index+1);
- paramsMap.put(paramName.trim(), paramValue.trim());
-
- System.out.println("\nEnter parameter name, value (for example uuid, uuid-value)");
- System.out.print("\n>>> ");
- } while (!param.isEmpty());
- } catch (Exception e) {
- System.out.println (e.getMessage());
- System.out.println ("\nSet up a new call");
- continue;
- }
-
- // Make a "call" for the given resource name and method name
- try {
- System.out.println ("Making a call for " + resourceName + " " + methodName);
- Map response = arv.call(resourceName, methodName, paramsMap);
-
- Set<Entry<String,Object>> entrySet = (Set<Entry<String,Object>>)response.entrySet();
- for (Map.Entry<String, Object> entry : entrySet) {
- if ("items".equals(entry.getKey())) {
- List items = (List)entry.getValue();
- for (Object item : items) {
- System.out.println(" " + item);
- }
- } else {
- System.out.println(entry.getKey() + " = " + entry.getValue());
- }
- }
- } catch (Exception e){
- System.out.println (e.getMessage());
- System.out.println ("\nSet up a new call");
- }
- } catch (Exception e) {
- System.out.println (e.getMessage());
- System.out.println ("\nSet up a new call");
- }
- }
- }
-}
+++ /dev/null
-Welcome to Arvados Java SDK.
-
-Please refer to http://doc.arvados.org/sdk/java/index.html to get started
- with Arvados Java SDK.
+++ /dev/null
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
- <modelVersion>4.0.0</modelVersion>
- <groupId>org.arvados.sdk</groupId>
- <artifactId>arvados</artifactId>
- <packaging>jar</packaging>
- <version>1.1</version>
- <name>arvados-sdk</name>
- <url>http://arvados.org</url>
-
- <dependencies>
- <dependency>
- <groupId>com.google.apis</groupId>
- <artifactId>google-api-services-discovery</artifactId>
- <version>v1-rev42-1.18.0-rc</version>
- </dependency>
- <dependency>
- <groupId>com.google.api-client</groupId>
- <artifactId>google-api-client</artifactId>
- <version>1.18.0-rc</version>
- </dependency>
- <dependency>
- <groupId>com.google.http-client</groupId>
- <artifactId>google-http-client-jackson2</artifactId>
- <version>1.18.0-rc</version>
- </dependency>
- <dependency>
- <groupId>com.google.guava</groupId>
- <artifactId>guava</artifactId>
- <version>r05</version>
- </dependency>
- <dependency>
- <groupId>log4j</groupId>
- <artifactId>log4j</artifactId>
- <version>1.2.16</version>
- </dependency>
- <dependency>
- <groupId>com.googlecode.json-simple</groupId>
- <artifactId>json-simple</artifactId>
- <version>1.1.1</version>
- </dependency>
-
- <dependency>
- <groupId>junit</groupId>
- <artifactId>junit</artifactId>
- <version>4.8.1</version>
- </dependency>
- </dependencies>
-
- <build>
- <finalName>arvados-sdk-1.1</finalName>
-
- <plugins>
- <plugin>
- <groupId>org.apache.maven.plugins</groupId>
- <artifactId>maven-compiler-plugin</artifactId>
- <version>3.1</version>
- <configuration>
- <source>1.6</source>
- <target>1.6</target>
- </configuration>
- </plugin>
- <plugin>
- <groupId>org.apache.maven.plugins</groupId>
- <artifactId>maven-assembly-plugin</artifactId>
- <executions>
- <execution>
- <goals>
- <goal>attached</goal>
- </goals>
- <phase>package</phase>
- <configuration>
- <descriptorRefs>
- <descriptorRef>jar-with-dependencies</descriptorRef>
- </descriptorRefs>
- <archive>
- <manifest>
- <mainClass>org.arvados.sdk.Arvados</mainClass>
- </manifest>
- <manifestEntries>
- <!--<Premain-Class>Your.agent.class</Premain-Class> <Agent-Class>Your.agent.class</Agent-Class> -->
- <Can-Redefine-Classes>true</Can-Redefine-Classes>
- <Can-Retransform-Classes>true</Can-Retransform-Classes>
- </manifestEntries>
- </archive>
- </configuration>
- </execution>
- </executions>
- </plugin>
- </plugins>
- <resources>
- <resource>
- <directory>src/main/resources</directory>
- <targetPath>${basedir}/target/classes</targetPath>
- <includes>
- <include>log4j.properties</include>
- </includes>
- <filtering>true</filtering>
- </resource>
- <resource>
- <directory>src/test/resources</directory>
- <filtering>true</filtering>
- </resource>
- </resources>
- </build>
-</project>
+++ /dev/null
-// Copyright (C) The Arvados Authors. All rights reserved.
-//
-// SPDX-License-Identifier: Apache-2.0
-
-package org.arvados.sdk;
-
-import com.google.api.client.http.javanet.*;
-import com.google.api.client.http.ByteArrayContent;
-import com.google.api.client.http.GenericUrl;
-import com.google.api.client.http.HttpBackOffIOExceptionHandler;
-import com.google.api.client.http.HttpContent;
-import com.google.api.client.http.HttpRequest;
-import com.google.api.client.http.HttpRequestFactory;
-import com.google.api.client.http.HttpTransport;
-import com.google.api.client.http.UriTemplate;
-import com.google.api.client.json.JsonFactory;
-import com.google.api.client.json.jackson2.JacksonFactory;
-import com.google.api.client.util.ExponentialBackOff;
-import com.google.api.client.util.Maps;
-import com.google.api.services.discovery.Discovery;
-import com.google.api.services.discovery.model.JsonSchema;
-import com.google.api.services.discovery.model.RestDescription;
-import com.google.api.services.discovery.model.RestMethod;
-import com.google.api.services.discovery.model.RestMethod.Request;
-import com.google.api.services.discovery.model.RestResource;
-
-import java.math.BigDecimal;
-import java.math.BigInteger;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-
-import org.apache.log4j.Logger;
-import org.json.simple.JSONArray;
-import org.json.simple.JSONObject;
-
-/**
- * This class provides a Java SDK interface to the Arvados API server.
- *
- * Please refer to http://doc.arvados.org/api/ to learn about the
- * various resources and methods exposed by the API server.
- *
- * @author radhika
- */
-public class Arvados {
- // HttpTransport and JsonFactory are thread-safe. So, use global instances.
- private HttpTransport httpTransport;
- private final JsonFactory jsonFactory = JacksonFactory.getDefaultInstance();
-
- private String arvadosApiToken;
- private String arvadosApiHost;
- private boolean arvadosApiHostInsecure;
-
- private String arvadosRootUrl;
-
- private static final Logger logger = Logger.getLogger(Arvados.class);
-
- // Get it once and reuse on the call requests
- RestDescription restDescription = null;
- String apiName = null;
- String apiVersion = null;
-
- public Arvados (String apiName, String apiVersion) throws Exception {
- this (apiName, apiVersion, null, null, null);
- }
-
- public Arvados (String apiName, String apiVersion, String token,
- String host, String hostInsecure) throws Exception {
- this.apiName = apiName;
- this.apiVersion = apiVersion;
-
- // Read needed environmental variables if they are not passed
- if (token != null) {
- arvadosApiToken = token;
- } else {
- arvadosApiToken = System.getenv().get("ARVADOS_API_TOKEN");
- if (arvadosApiToken == null) {
- throw new Exception("Missing environment variable: ARVADOS_API_TOKEN");
- }
- }
-
- if (host != null) {
- arvadosApiHost = host;
- } else {
- arvadosApiHost = System.getenv().get("ARVADOS_API_HOST");
- if (arvadosApiHost == null) {
- throw new Exception("Missing environment variable: ARVADOS_API_HOST");
- }
- }
- arvadosRootUrl = "https://" + arvadosApiHost;
-
- if (hostInsecure != null) {
- arvadosApiHostInsecure = Boolean.valueOf(hostInsecure);
- } else {
-      arvadosApiHostInsecure =
-          "true".equals(System.getenv().get("ARVADOS_API_HOST_INSECURE"));
- }
-
- // Create HTTP_TRANSPORT object
- NetHttpTransport.Builder builder = new NetHttpTransport.Builder();
- if (arvadosApiHostInsecure) {
- builder.doNotValidateCertificate();
- }
- httpTransport = builder.build();
-
- // initialize rest description
- restDescription = loadArvadosApi();
- }
-
- /**
-   * Make a call to the API server with the provided call information.
- * @param resourceName
- * @param methodName
- * @param paramsMap
- * @return Map
- * @throws Exception
- */
- public Map call(String resourceName, String methodName,
- Map<String, Object> paramsMap) throws Exception {
- RestMethod method = getMatchingMethod(resourceName, methodName);
-
- HashMap<String, Object> parameters = loadParameters(paramsMap, method);
-
- GenericUrl url = new GenericUrl(UriTemplate.expand(
- arvadosRootUrl + restDescription.getBasePath() + method.getPath(),
- parameters, true));
-
- try {
- // construct the request
- HttpRequestFactory requestFactory;
- requestFactory = httpTransport.createRequestFactory();
-
- // possibly required content
- HttpContent content = null;
-
- if (!method.getHttpMethod().equals("GET") &&
- !method.getHttpMethod().equals("DELETE")) {
- String objectName = resourceName.substring(0, resourceName.length()-1);
- Object requestBody = paramsMap.get(objectName);
- if (requestBody == null) {
- error("POST method requires content object " + objectName);
- }
-
- content = new ByteArrayContent("application/json",((String)requestBody).getBytes());
- }
-
- HttpRequest request =
- requestFactory.buildRequest(method.getHttpMethod(), url, content);
-
- // Set read timeout to 120 seconds (up from default of 20 seconds)
- request.setReadTimeout(120 * 1000);
-
- // Add retry behavior
- request.setIOExceptionHandler(new HttpBackOffIOExceptionHandler(new ExponentialBackOff()));
-
- // make the request
- List<String> authHeader = new ArrayList<String>();
- authHeader.add("OAuth2 " + arvadosApiToken);
- request.getHeaders().put("Authorization", authHeader);
- String response = request.execute().parseAsString();
-
- Map responseMap = jsonFactory.createJsonParser(response).parse(HashMap.class);
-
- logger.debug(responseMap);
-
- return responseMap;
- } catch (Exception e) {
- e.printStackTrace();
- throw e;
- }
- }
-
- /**
- * Get all supported resources by the API
- * @return Set
- */
- public Set<String> getAvailableResourses() {
- return (restDescription.getResources().keySet());
- }
-
- /**
- * Get all supported method names for the given resource
- * @param resourceName
- * @return Set
- * @throws Exception
- */
- public Set<String> getAvailableMethodsForResourse(String resourceName)
- throws Exception {
- Map<String, RestMethod> methodMap = getMatchingMethodMap (resourceName);
- return (methodMap.keySet());
- }
-
- /**
-   * Get the required and optional parameters for the given method of the given resource.
-   * @param resourceName
-   * @param methodName
-   * @return Map of "required" and "optional" parameter name lists
- * @throws Exception
- */
- public Map<String,List<String>> getAvailableParametersForMethod(String resourceName, String methodName)
- throws Exception {
- RestMethod method = getMatchingMethod(resourceName, methodName);
- Map<String, List<String>> parameters = new HashMap<String, List<String>>();
- List<String> requiredParameters = new ArrayList<String>();
- List<String> optionalParameters = new ArrayList<String>();
- parameters.put ("required", requiredParameters);
- parameters.put("optional", optionalParameters);
-
- try {
- // get any request parameters
- Request request = method.getRequest();
- if (request != null) {
- Object required = request.get("required");
- Object requestProperties = request.get("properties");
- if (requestProperties != null) {
- if (requestProperties instanceof Map) {
- Map properties = (Map)requestProperties;
- Set<String> propertyKeys = properties.keySet();
- for (String property : propertyKeys) {
- if (Boolean.TRUE.equals(required)) {
- requiredParameters.add(property);
- } else {
- optionalParameters.add(property);
- }
- }
- }
- }
- }
-
- // get other listed parameters
- Map<String,JsonSchema> methodParameters = method.getParameters();
- for (Map.Entry<String, JsonSchema> entry : methodParameters.entrySet()) {
- if (Boolean.TRUE.equals(entry.getValue().getRequired())) {
- requiredParameters.add(entry.getKey());
- } else {
- optionalParameters.add(entry.getKey());
- }
- }
- } catch (Exception e){
- logger.error(e);
- }
-
- return parameters;
- }
-
- private HashMap<String, Object> loadParameters(Map<String, Object> paramsMap,
- RestMethod method) throws Exception {
- HashMap<String, Object> parameters = Maps.newHashMap();
-
- // required parameters
- if (method.getParameterOrder() != null) {
- for (String parameterName : method.getParameterOrder()) {
- JsonSchema parameter = method.getParameters().get(parameterName);
- if (Boolean.TRUE.equals(parameter.getRequired())) {
- Object parameterValue = paramsMap.get(parameterName);
- if (parameterValue == null) {
-            error("missing required parameter: " + parameterName);
- } else {
- putParameter(null, parameters, parameterName, parameter, parameterValue);
- }
- }
- }
- }
-
- for (Map.Entry<String, Object> entry : paramsMap.entrySet()) {
- String parameterName = entry.getKey();
- Object parameterValue = entry.getValue();
-
- if (parameterName.equals("contentType")) {
- if (method.getHttpMethod().equals("GET") || method.getHttpMethod().equals("DELETE")) {
- error("HTTP content type cannot be specified for this method: " + parameterName);
- }
- } else {
- JsonSchema parameter = null;
- if (restDescription.getParameters() != null) {
- parameter = restDescription.getParameters().get(parameterName);
- }
- if (parameter == null && method.getParameters() != null) {
- parameter = method.getParameters().get(parameterName);
- }
- putParameter(parameterName, parameters, parameterName, parameter, parameterValue);
- }
- }
-
- return parameters;
- }
-
- private RestMethod getMatchingMethod(String resourceName, String methodName)
- throws Exception {
- Map<String, RestMethod> methodMap = getMatchingMethodMap(resourceName);
-
- if (methodName == null) {
- error("missing method name");
- }
-
- RestMethod method =
- methodMap == null ? null : methodMap.get(methodName);
- if (method == null) {
-      error("method not found: " + methodName);
- }
-
- return method;
- }
-
- private Map<String, RestMethod> getMatchingMethodMap(String resourceName)
- throws Exception {
- if (resourceName == null) {
- error("missing resource name");
- }
-
- Map<String, RestMethod> methodMap = null;
- Map<String, RestResource> resources = restDescription.getResources();
- RestResource resource = resources.get(resourceName);
- if (resource == null) {
- error("resource not found");
- }
- methodMap = resource.getMethods();
- return methodMap;
- }
-
- /**
-   * Load the REST discovery document for the configured apiName and apiVersion.
-   * The Discovery builder is not thread-safe, so a new one is created for each call.
-   * @return RestDescription
- * @throws Exception
- */
- private RestDescription loadArvadosApi()
- throws Exception {
- try {
- Discovery discovery;
-
- Discovery.Builder discoveryBuilder =
- new Discovery.Builder(httpTransport, jsonFactory, null);
-
- discoveryBuilder.setRootUrl(arvadosRootUrl);
- discoveryBuilder.setApplicationName(apiName);
-
- discovery = discoveryBuilder.build();
-
- return discovery.apis().getRest(apiName, apiVersion).execute();
- } catch (Exception e) {
- e.printStackTrace();
- throw e;
- }
- }
-
- /**
- * Convert the input parameter into its equivalent json string.
- * Add this json string value to the parameters map to be sent to server.
- * @param argName
- * @param parameters
- * @param parameterName
- * @param parameter
- * @param parameterValue
- * @throws Exception
- */
- private void putParameter(String argName, Map<String, Object> parameters,
- String parameterName, JsonSchema parameter, Object parameterValue)
- throws Exception {
- Object value = parameterValue;
- if (parameter != null) {
- if ("boolean".equals(parameter.getType())) {
- value = Boolean.valueOf(parameterValue.toString());
- } else if ("number".equals(parameter.getType())) {
- value = new BigDecimal(parameterValue.toString());
- } else if ("integer".equals(parameter.getType())) {
- value = new BigInteger(parameterValue.toString());
- } else if ("float".equals(parameter.getType())) {
- value = new BigDecimal(parameterValue.toString());
- } else if ("Java.util.Calendar".equals(parameter.getType())) {
- value = new BigDecimal(parameterValue.toString());
- } else if (("array".equals(parameter.getType())) ||
- ("Array".equals(parameter.getType()))) {
- if (parameterValue.getClass().isArray()){
- value = getJsonValueFromArrayType(parameterValue);
- } else if (List.class.isAssignableFrom(parameterValue.getClass())) {
- value = getJsonValueFromListType(parameterValue);
- }
- } else if (("Hash".equals(parameter.getType())) ||
- ("hash".equals(parameter.getType()))) {
- value = getJsonValueFromMapType(parameterValue);
- } else {
- if (parameterValue.getClass().isArray()){
- value = getJsonValueFromArrayType(parameterValue);
- } else if (List.class.isAssignableFrom(parameterValue.getClass())) {
- value = getJsonValueFromListType(parameterValue);
- } else if (Map.class.isAssignableFrom(parameterValue.getClass())) {
- value = getJsonValueFromMapType(parameterValue);
- }
- }
- }
-
- parameters.put(parameterName, value);
- }
-
- /**
- * Convert the given input array into json string before sending to server.
- * @param parameterValue
- * @return
- */
- private String getJsonValueFromArrayType (Object parameterValue) {
- String arrayStr = Arrays.deepToString((Object[])parameterValue);
-
- // we can expect either an array of array objects or an array of objects
- if (arrayStr.startsWith("[[") && arrayStr.endsWith("]]")) {
- arrayStr = arrayStr.substring(2, arrayStr.length()-2);
- String jsonStr = getJsonStringForArrayStr(arrayStr);
- String value = "[" + jsonStr + "]";
- return value;
- } else {
- arrayStr = arrayStr.substring(1, arrayStr.length()-1);
- return (getJsonStringForArrayStr(arrayStr));
- }
- }
-
- private String getJsonStringForArrayStr(String arrayStr) {
- Object[] array = arrayStr.split(",");
- Object[] trimmedArray = new Object[array.length];
- for (int i=0; i<array.length; i++){
- trimmedArray[i] = array[i].toString().trim();
- }
- String value = JSONArray.toJSONString(Arrays.asList(trimmedArray));
- return value;
- }
-
- /**
- * Convert the given input List into json string before sending to server.
- * @param parameterValue
- * @return
- */
- private String getJsonValueFromListType (Object parameterValue) {
- List paramList = (List)parameterValue;
- Object[] array = new Object[paramList.size()];
- Arrays.deepToString(paramList.toArray(array));
- return (getJsonValueFromArrayType(array));
- }
-
- /**
- * Convert the given input map into json string before sending to server.
- * @param parameterValue
- * @return
- */
- private String getJsonValueFromMapType (Object parameterValue) {
- JSONObject json = new JSONObject((Map)parameterValue);
- return json.toString();
- }
-
- private static void error(String detail) throws Exception {
- String errorDetail = "ERROR: " + detail;
-
- logger.debug(errorDetail);
- throw new Exception(errorDetail);
- }
-
- public static void main(String[] args){
- System.out.println("Welcome to Arvados Java SDK.");
- System.out.println("Please refer to http://doc.arvados.org/sdk/java/index.html to get started with the SDK.");
- }
-
-}
+++ /dev/null
-// Copyright (C) The Arvados Authors. All rights reserved.
-//
-// SPDX-License-Identifier: Apache-2.0
-
-package org.arvados.sdk;
-
-import com.google.api.client.util.Lists;
-import com.google.api.client.util.Sets;
-
-import java.util.ArrayList;
-import java.util.SortedSet;
-
-public class MethodDetails implements Comparable<MethodDetails> {
- String name;
- ArrayList<String> requiredParameters = Lists.newArrayList();
- SortedSet<String> optionalParameters = Sets.newTreeSet();
- boolean hasContent;
-
- @Override
- public int compareTo(MethodDetails o) {
- if (o == this) {
- return 0;
- }
- return name.compareTo(o.name);
- }
-}
+++ /dev/null
-# To change log location, change log4j.appender.fileAppender.File
-
-log4j.rootLogger=DEBUG, fileAppender
-
-log4j.appender.fileAppender=org.apache.log4j.RollingFileAppender
-log4j.appender.fileAppender.File=${basedir}/log/arvados_sdk_java.log
-log4j.appender.fileAppender.Append=true
-log4j.appender.fileAppender.MaxFileSize=10MB
-log4j.appender.fileAppender.MaxBackupIndex=10
-log4j.appender.fileAppender.layout=org.apache.log4j.PatternLayout
-log4j.appender.fileAppender.layout.ConversionPattern=[%d] %-5p %c %L %x - %m%n
+++ /dev/null
-// Copyright (C) The Arvados Authors. All rights reserved.
-//
-// SPDX-License-Identifier: Apache-2.0
-
-package org.arvados.sdk;
-
-import java.io.File;
-import java.io.FileInputStream;
-import java.math.BigDecimal;
-import java.util.ArrayList;
-import java.util.Calendar;
-import java.util.Date;
-import java.util.GregorianCalendar;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-
-import org.junit.Test;
-
-import static org.junit.Assert.*;
-
-/**
- * Unit test for Arvados.
- */
-public class ArvadosTest {
-
- /**
- * Test users.list api
- * @throws Exception
- */
- @Test
- public void testCallUsersList() throws Exception {
- Arvados arv = new Arvados("arvados", "v1");
-
- Map<String, Object> params = new HashMap<String, Object>();
-
- Map response = arv.call("users", "list", params);
- assertEquals("Expected kind to be users.list", "arvados#userList", response.get("kind"));
-
- List items = (List)response.get("items");
- assertNotNull("expected users list items", items);
- assertTrue("expected at least one item in users list", items.size()>0);
-
- Map firstUser = (Map)items.get(0);
-    assertNotNull ("Expected at least one user", firstUser);
-
- assertEquals("Expected kind to be user", "arvados#user", firstUser.get("kind"));
- assertNotNull("Expected uuid for first user", firstUser.get("uuid"));
- }
-
- /**
- * Test users.get <uuid> api
- * @throws Exception
- */
- @Test
- public void testCallUsersGet() throws Exception {
- Arvados arv = new Arvados("arvados", "v1");
-
-    // call users.list and get the uuid of the first user
- Map<String, Object> params = new HashMap<String, Object>();
-
- Map response = arv.call("users", "list", params);
-
- assertNotNull("expected users list", response);
- List items = (List)response.get("items");
- assertNotNull("expected users list items", items);
-
- Map firstUser = (Map)items.get(0);
- String userUuid = (String)firstUser.get("uuid");
-
- // invoke users.get with the system user uuid
- params = new HashMap<String, Object>();
- params.put("uuid", userUuid);
-
- response = arv.call("users", "get", params);
-
- assertNotNull("Expected uuid for first user", response.get("uuid"));
- assertEquals("Expected system user uuid", userUuid, response.get("uuid"));
- }
-
- /**
- * Test users.create api
- * @throws Exception
- */
- @Test
- public void testCreateUser() throws Exception {
- Arvados arv = new Arvados("arvados", "v1");
-
- Map<String, Object> params = new HashMap<String, Object>();
- params.put("user", "{}");
- Map response = arv.call("users", "create", params);
-
- assertEquals("Expected kind to be user", "arvados#user", response.get("kind"));
-
- Object uuid = response.get("uuid");
- assertNotNull("Expected uuid for first user", uuid);
-
- // delete the object
- params = new HashMap<String, Object>();
- params.put("uuid", uuid);
- response = arv.call("users", "delete", params);
-
- // invoke users.get with the system user uuid
- params = new HashMap<String, Object>();
- params.put("uuid", uuid);
-
- Exception caught = null;
- try {
- arv.call("users", "get", params);
- } catch (Exception e) {
- caught = e;
- }
-
- assertNotNull ("expected exception", caught);
- assertTrue ("Expected 404", caught.getMessage().contains("Path not found"));
- }
-
- @Test
- public void testCreateUserWithMissingRequiredParam() throws Exception {
- Arvados arv = new Arvados("arvados", "v1");
-
- Map<String, Object> params = new HashMap<String, Object>();
-
- Exception caught = null;
- try {
- arv.call("users", "create", params);
- } catch (Exception e) {
- caught = e;
- }
-
- assertNotNull ("expected exception", caught);
- assertTrue ("Expected POST method requires content object user",
- caught.getMessage().contains("ERROR: POST method requires content object user"));
- }
-
- /**
-   * Test users.create and users.update apis
- * @throws Exception
- */
- @Test
- public void testCreateAndUpdateUser() throws Exception {
- Arvados arv = new Arvados("arvados", "v1");
-
- Map<String, Object> params = new HashMap<String, Object>();
- params.put("user", "{}");
- Map response = arv.call("users", "create", params);
-
- assertEquals("Expected kind to be user", "arvados#user", response.get("kind"));
-
- Object uuid = response.get("uuid");
- assertNotNull("Expected uuid for first user", uuid);
-
- // update this user
- params = new HashMap<String, Object>();
- params.put("user", "{}");
- params.put("uuid", uuid);
- response = arv.call("users", "update", params);
-
- assertEquals("Expected kind to be user", "arvados#user", response.get("kind"));
-
- uuid = response.get("uuid");
- assertNotNull("Expected uuid for first user", uuid);
-
- // delete the object
- params = new HashMap<String, Object>();
- params.put("uuid", uuid);
- response = arv.call("users", "delete", params);
- }
-
- /**
-   * Test unsupported api name
- * @throws Exception
- */
- @Test
- public void testUnsupportedApiName() throws Exception {
- Exception caught = null;
- try {
- Arvados arv = new Arvados("not_arvados", "v1");
- } catch (Exception e) {
- caught = e;
- }
-
- assertNotNull ("expected exception", caught);
- assertTrue ("Expected 404 when unsupported api is used", caught.getMessage().contains("404 Not Found"));
- }
-
- /**
-   * Test unsupported api version
- * @throws Exception
- */
- @Test
- public void testUnsupportedVersion() throws Exception {
- Exception caught = null;
- try {
- Arvados arv = new Arvados("arvados", "v2");
- } catch (Exception e) {
- caught = e;
- }
-
- assertNotNull ("expected exception", caught);
- assertTrue ("Expected 404 when unsupported version is used", caught.getMessage().contains("404 Not Found"));
- }
-
- /**
-   * Test call with a nonexistent resource
- * @throws Exception
- */
- @Test
- public void testCallForNoSuchResrouce() throws Exception {
- Arvados arv = new Arvados("arvados", "v1");
-
- Exception caught = null;
- try {
- arv.call("abcd", "list", null);
- } catch (Exception e) {
- caught = e;
- }
-
- assertNotNull ("expected exception", caught);
-    assertTrue ("Expected ERROR: resource not found", caught.getMessage().contains("ERROR: resource not found"));
- }
-
- /**
-   * Test call with a nonexistent resource method
- * @throws Exception
- */
- @Test
- public void testCallForNoSuchResrouceMethod() throws Exception {
- Arvados arv = new Arvados("arvados", "v1");
-
- Exception caught = null;
- try {
- arv.call("users", "abcd", null);
- } catch (Exception e) {
- caught = e;
- }
-
- assertNotNull ("expected exception", caught);
-    assertTrue ("Expected ERROR: method not found", caught.getMessage().contains("ERROR: method not found"));
- }
-
- /**
-   * Test pipeline_templates.create api
- * @throws Exception
- */
- @Test
- public void testCreateAndGetPipelineTemplate() throws Exception {
- Arvados arv = new Arvados("arvados", "v1");
-
- File file = new File(getClass().getResource( "/first_pipeline.json" ).toURI());
- byte[] data = new byte[(int)file.length()];
- try {
- FileInputStream is = new FileInputStream(file);
- is.read(data);
- is.close();
- }catch(Exception e) {
- e.printStackTrace();
- }
-
- Map<String, Object> params = new HashMap<String, Object>();
- params.put("pipeline_template", new String(data));
- Map response = arv.call("pipeline_templates", "create", params);
-    assertEquals("Expected kind to be pipelineTemplate", "arvados#pipelineTemplate", response.get("kind"));
- String uuid = (String)response.get("uuid");
- assertNotNull("Expected uuid for pipeline template", uuid);
-
- // get the pipeline
- params = new HashMap<String, Object>();
- params.put("uuid", uuid);
- response = arv.call("pipeline_templates", "get", params);
-
-    assertEquals("Expected kind to be pipelineTemplate", "arvados#pipelineTemplate", response.get("kind"));
- assertEquals("Expected uuid for pipeline template", uuid, response.get("uuid"));
-
- // delete the object
- params = new HashMap<String, Object>();
- params.put("uuid", uuid);
- response = arv.call("pipeline_templates", "delete", params);
- }
-
- /**
-   * Test users.list api with explicitly passed token and host
- * @throws Exception
- */
- @Test
- public void testArvadosWithTokenPassed() throws Exception {
- String token = System.getenv().get("ARVADOS_API_TOKEN");
- String host = System.getenv().get("ARVADOS_API_HOST");
- String hostInsecure = System.getenv().get("ARVADOS_API_HOST_INSECURE");
-
- Arvados arv = new Arvados("arvados", "v1", token, host, hostInsecure);
-
- Map<String, Object> params = new HashMap<String, Object>();
-
- Map response = arv.call("users", "list", params);
- assertEquals("Expected kind to be users.list", "arvados#userList", response.get("kind"));
- }
-
- /**
-   * Test users.list api with limit
- * @throws Exception
- */
- @Test
- public void testCallUsersListWithLimit() throws Exception {
- Arvados arv = new Arvados("arvados", "v1");
-
- Map<String, Object> params = new HashMap<String, Object>();
-
- Map response = arv.call("users", "list", params);
- assertEquals("Expected users.list in response", "arvados#userList", response.get("kind"));
-
- List items = (List)response.get("items");
- assertNotNull("expected users list items", items);
- assertTrue("expected at least one item in users list", items.size()>0);
-
- int numUsersListItems = items.size();
-
- // make the request again with limit
- params = new HashMap<String, Object>();
- params.put("limit", numUsersListItems-1);
-
- response = arv.call("users", "list", params);
-
- assertEquals("Expected kind to be users.list", "arvados#userList", response.get("kind"));
-
- items = (List)response.get("items");
- assertNotNull("expected users list items", items);
- assertTrue("expected at least one item in users list", items.size()>0);
-
- int numUsersListItems2 = items.size();
- assertEquals ("Got more users than requested", numUsersListItems-1, numUsersListItems2);
- }
-
- @Test
- public void testGetLinksWithFilters() throws Exception {
- Arvados arv = new Arvados("arvados", "v1");
-
- Map<String, Object> params = new HashMap<String, Object>();
-
- Map response = arv.call("links", "list", params);
- assertEquals("Expected links.list in response", "arvados#linkList", response.get("kind"));
-
- String[][] filters = new String[1][];
- String[] condition = new String[3];
- condition[0] = "name";
- condition[1] = "=";
- condition[2] = "can_manage";
- filters[0] = condition;
- params.put("filters", filters);
-
- response = arv.call("links", "list", params);
-
- assertEquals("Expected links.list in response", "arvados#linkList", response.get("kind"));
- assertFalse("Expected no can_manage in response", response.toString().contains("\"name\":\"can_manage\""));
- }
-
- @Test
- public void testGetLinksWithFiltersAsList() throws Exception {
- Arvados arv = new Arvados("arvados", "v1");
-
- Map<String, Object> params = new HashMap<String, Object>();
-
- Map response = arv.call("links", "list", params);
- assertEquals("Expected links.list in response", "arvados#linkList", response.get("kind"));
-
- List<List> filters = new ArrayList<List>();
- List<String> condition = new ArrayList<String>();
- condition.add("name");
- condition.add("is_a");
- condition.add("can_manage");
- filters.add(condition);
- params.put("filters", filters);
-
- response = arv.call("links", "list", params);
-
- assertEquals("Expected links.list in response", "arvados#linkList", response.get("kind"));
- assertFalse("Expected no can_manage in response", response.toString().contains("\"name\":\"can_manage\""));
- }
-
- @Test
- public void testGetLinksWithTimestampFilters() throws Exception {
- Arvados arv = new Arvados("arvados", "v1");
-
- Map<String, Object> params = new HashMap<String, Object>();
-
- Map response = arv.call("links", "list", params);
- assertEquals("Expected links.list in response", "arvados#linkList", response.get("kind"));
-
- // get links created "tomorrow". Expect none in response
- Calendar calendar = new GregorianCalendar();
- calendar.setTime(new Date());
- calendar.add(Calendar.DAY_OF_MONTH, 1);
-
- Object[][] filters = new Object[1][];
- Object[] condition = new Object[3];
- condition[0] = "created_at";
- condition[1] = ">";
- condition[2] = calendar.get(Calendar.YEAR) + "-" + (calendar.get(Calendar.MONTH)+1) + "-" + calendar.get(Calendar.DAY_OF_MONTH);
- filters[0] = condition;
- params.put("filters", filters);
-
- response = arv.call("links", "list", params);
-
- assertEquals("Expected links.list in response", "arvados#linkList", response.get("kind"));
- int items_avail = ((BigDecimal)response.get("items_available")).intValue();
-    assertEquals("Expected zero links", 0, items_avail);
- }
-
- @Test
- public void testGetLinksWithWhereClause() throws Exception {
- Arvados arv = new Arvados("arvados", "v1");
-
- Map<String, Object> params = new HashMap<String, Object>();
-
- Map<String, String> where = new HashMap<String, String>();
- where.put("where", "updated_at > '2014-05-01'");
-
- params.put("where", where);
-
- Map response = arv.call("links", "list", params);
-
- assertEquals("Expected links.list in response", "arvados#linkList", response.get("kind"));
- }
-
- @Test
- public void testGetAvailableResources() throws Exception {
- Arvados arv = new Arvados("arvados", "v1");
- Set<String> resources = arv.getAvailableResourses();
- assertNotNull("Expected resources", resources);
-    assertTrue("Expected users in resources", resources.contains("users"));
- }
-
- @Test
- public void testGetAvailableMethodsResources() throws Exception {
- Arvados arv = new Arvados("arvados", "v1");
- Set<String> methods = arv.getAvailableMethodsForResourse("users");
- assertNotNull("Expected resources", methods);
-    assertTrue("Expected create method for users", methods.contains("create"));
- }
-
- @Test
- public void testGetAvailableParametersForUsersGetMethod() throws Exception {
- Arvados arv = new Arvados("arvados", "v1");
- Map<String,List<String>> parameters = arv.getAvailableParametersForMethod("users", "get");
- assertNotNull("Expected parameters", parameters);
-    assertTrue("Expected uuid parameter for get method for users", parameters.get("required").contains("uuid"));
- }
-
- @Test
- public void testGetAvailableParametersForUsersCreateMethod() throws Exception {
- Arvados arv = new Arvados("arvados", "v1");
- Map<String,List<String>> parameters = arv.getAvailableParametersForMethod("users", "create");
- assertNotNull("Expected parameters", parameters);
-    assertTrue("Expected user parameter for create method for users", parameters.get("required").contains("user"));
- }
-
- @Test
- public void testGetAvailableParametersForUsersListMethod() throws Exception {
- Arvados arv = new Arvados("arvados", "v1");
- Map<String,List<String>> parameters = arv.getAvailableParametersForMethod("users", "list");
- assertNotNull("Expected parameters", parameters);
- assertTrue("Excected no required parameter for list method for users", parameters.get("required").size() == 0);
- assertTrue("Excected some optional parameters for list method for users", parameters.get("optional").contains("filters"));
- }
-
-}
+++ /dev/null
-{
- "components":{
- "do_hash":{
- "script":"hash.py",
- "script_parameters":{
- "input":{
- "required": true,
- "dataclass": "Collection"
- }
- },
- "script_version":"master",
- "output_is_persistent":true
- }
- }
-}
import os
import re
import socket
+import ssl
import sys
import time
import types
def _intercept_http_request(self, uri, method="GET", headers={}, **kwargs):
- if (self.max_request_size and
- kwargs.get('body') and
- self.max_request_size < len(kwargs['body'])):
- raise apiclient_errors.MediaUploadSizeError("Request size %i bytes exceeds published limit of %i bytes" % (len(kwargs['body']), self.max_request_size))
-
- if config.get("ARVADOS_EXTERNAL_CLIENT", "") == "true":
- headers['X-External-Client'] = '1'
-
- headers['Authorization'] = 'OAuth2 %s' % self.arvados_api_token
if not headers.get('X-Request-Id'):
headers['X-Request-Id'] = self._request_id()
+ try:
+ if (self.max_request_size and
+ kwargs.get('body') and
+ self.max_request_size < len(kwargs['body'])):
+ raise apiclient_errors.MediaUploadSizeError("Request size %i bytes exceeds published limit of %i bytes" % (len(kwargs['body']), self.max_request_size))
- retryable = method in [
- 'DELETE', 'GET', 'HEAD', 'OPTIONS', 'PUT']
- retry_count = self._retry_count if retryable else 0
-
- if (not retryable and
- time.time() - self._last_request_time > self._max_keepalive_idle):
- # High probability of failure due to connection atrophy. Make
- # sure this request [re]opens a new connection by closing and
- # forgetting all cached connections first.
- for conn in self.connections.values():
- conn.close()
- self.connections.clear()
-
- delay = self._retry_delay_initial
- for _ in range(retry_count):
- self._last_request_time = time.time()
- try:
- return self.orig_http_request(uri, method, headers=headers, **kwargs)
- except http.client.HTTPException:
- _logger.debug("Retrying API request in %d s after HTTP error",
- delay, exc_info=True)
- except socket.error:
- # This is the one case where httplib2 doesn't close the
- # underlying connection first. Close all open
- # connections, expecting this object only has the one
- # connection to the API server. This is safe because
- # httplib2 reopens connections when needed.
- _logger.debug("Retrying API request in %d s after socket error",
- delay, exc_info=True)
+ if config.get("ARVADOS_EXTERNAL_CLIENT", "") == "true":
+ headers['X-External-Client'] = '1'
+
+ headers['Authorization'] = 'OAuth2 %s' % self.arvados_api_token
+
+ retryable = method in [
+ 'DELETE', 'GET', 'HEAD', 'OPTIONS', 'PUT']
+ retry_count = self._retry_count if retryable else 0
+
+ if (not retryable and
+ time.time() - self._last_request_time > self._max_keepalive_idle):
+ # High probability of failure due to connection atrophy. Make
+ # sure this request [re]opens a new connection by closing and
+ # forgetting all cached connections first.
for conn in self.connections.values():
conn.close()
- except httplib2.SSLHandshakeError as e:
- # Intercept and re-raise with a better error message.
- raise httplib2.SSLHandshakeError("Could not connect to %s\n%s\nPossible causes: remote SSL/TLS certificate expired, or was issued by an untrusted certificate authority." % (uri, e))
+ self.connections.clear()
+
+ delay = self._retry_delay_initial
+ for _ in range(retry_count):
+ self._last_request_time = time.time()
+ try:
+ return self.orig_http_request(uri, method, headers=headers, **kwargs)
+ except http.client.HTTPException:
+ _logger.debug("[%s] Retrying API request in %d s after HTTP error",
+ headers['X-Request-Id'], delay, exc_info=True)
+ except ssl.SSLCertVerificationError as e:
+ raise ssl.SSLCertVerificationError(e.args[0], "Could not connect to %s\n%s\nPossible causes: remote SSL/TLS certificate expired, or was issued by an untrusted certificate authority." % (uri, e)) from None
+ except socket.error:
+ # This is the one case where httplib2 doesn't close the
+ # underlying connection first. Close all open
+ # connections, expecting this object only has the one
+ # connection to the API server. This is safe because
+ # httplib2 reopens connections when needed.
+ _logger.debug("[%s] Retrying API request in %d s after socket error",
+ headers['X-Request-Id'], delay, exc_info=True)
+ for conn in self.connections.values():
+ conn.close()
+
+ time.sleep(delay)
+ delay = delay * self._retry_delay_backoff
- time.sleep(delay)
- delay = delay * self._retry_delay_backoff
-
- self._last_request_time = time.time()
- return self.orig_http_request(uri, method, headers=headers, **kwargs)
+ self._last_request_time = time.time()
+ return self.orig_http_request(uri, method, headers=headers, **kwargs)
+ except Exception as e:
+ # Prepend "[request_id] " to the error message, which we
+ # assume is the first string argument passed to the exception
+ # constructor.
+ for i in range(len(e.args or ())):
+ if type(e.args[i]) == type(""):
+ e.args = e.args[:i] + ("[{}] {}".format(headers['X-Request-Id'], e.args[i]),) + e.args[i+1:]
+ raise type(e)(*e.args)
+ raise
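The restructured retry loop above attempts idempotent methods up to `retry_count` times with an exponentially growing delay, then makes one final attempt outside the loop so the last failure propagates. A minimal standalone sketch of that schedule (names like `retry_with_backoff` are illustrative, not from the Arvados SDK):

```python
import time

def retry_with_backoff(request, retry_count=3, delay_initial=1.0,
                       backoff=2.0, sleep=time.sleep):
    """Try `request` up to retry_count times, sleeping with exponential
    backoff between failures, then make one final uncaught attempt."""
    delay = delay_initial
    for _ in range(retry_count):
        try:
            return request()
        except OSError:
            sleep(delay)
            delay *= backoff
    # Final attempt: any exception here propagates to the caller.
    return request()

attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise OSError("transient")
    return "ok"

print(retry_with_backoff(flaky, sleep=lambda d: None))  # prints "ok"
```

The sleep callable is injected so the schedule can be tested without real delays, mirroring how the patch keeps `_retry_delay_initial` and `_retry_delay_backoff` configurable.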
def _patch_http_request(http, api_token):
http.arvados_api_token = api_token
pass
elif not host and not token:
return api_from_config(
- version=version, cache=cache, request_id=request_id, **kwargs)
+ version=version, cache=cache, timeout=timeout,
+ request_id=request_id, **kwargs)
else:
# Caller provided one but not the other
if not host:
DEFAULT_PUT_THREADS = 2
DEFAULT_GET_THREADS = 2
- def __init__(self, keep, copies=None, put_threads=None, num_retries=None):
+ def __init__(self, keep, copies=None, put_threads=None, num_retries=None, storage_classes_func=None):
"""keep: KeepClient object to use"""
self._keep = keep
self._bufferblocks = collections.OrderedDict()
self._prefetch_threads = None
self.lock = threading.Lock()
self.prefetch_enabled = True
- if put_threads:
- self.num_put_threads = put_threads
- else:
- self.num_put_threads = _BlockManager.DEFAULT_PUT_THREADS
+ self.num_put_threads = put_threads or _BlockManager.DEFAULT_PUT_THREADS
self.num_get_threads = _BlockManager.DEFAULT_GET_THREADS
self.copies = copies
+ self.storage_classes = storage_classes_func or (lambda: [])
self._pending_write_size = 0
self.threads_lock = threading.Lock()
self.padding_block = None
return
if self.copies is None:
- loc = self._keep.put(bufferblock.buffer_view[0:bufferblock.write_pointer].tobytes(), num_retries=self.num_retries)
+ loc = self._keep.put(bufferblock.buffer_view[0:bufferblock.write_pointer].tobytes(), num_retries=self.num_retries, classes=self.storage_classes())
else:
- loc = self._keep.put(bufferblock.buffer_view[0:bufferblock.write_pointer].tobytes(), num_retries=self.num_retries, copies=self.copies)
+ loc = self._keep.put(bufferblock.buffer_view[0:bufferblock.write_pointer].tobytes(), num_retries=self.num_retries, copies=self.copies, classes=self.storage_classes())
bufferblock.set_state(_BufferBlock.COMMITTED, loc)
except Exception as e:
bufferblock.set_state(_BufferBlock.ERROR, e)
# If we don't limit the Queue size, the upload queue can quickly
# grow to take up gigabytes of RAM if the writing process is
- # generating data more quickly than it can be send to the Keep
+ # generating data more quickly than it can be sent to the Keep
# servers.
#
# With two upload threads and a queue size of 2, this means up to 4
if sync:
try:
if self.copies is None:
- loc = self._keep.put(block.buffer_view[0:block.write_pointer].tobytes(), num_retries=self.num_retries)
+ loc = self._keep.put(block.buffer_view[0:block.write_pointer].tobytes(), num_retries=self.num_retries, classes=self.storage_classes())
else:
- loc = self._keep.put(block.buffer_view[0:block.write_pointer].tobytes(), num_retries=self.num_retries, copies=self.copies)
+ loc = self._keep.put(block.buffer_view[0:block.write_pointer].tobytes(), num_retries=self.num_retries, copies=self.copies, classes=self.storage_classes())
block.set_state(_BufferBlock.COMMITTED, loc)
except Exception as e:
block.set_state(_BufferBlock.ERROR, e)
apiconfig=None,
block_manager=None,
replication_desired=None,
+ storage_classes_desired=None,
put_threads=None):
"""Collection constructor.
configuration applies. If not None, this value will also be used
for determining the number of block copies being written.
+ :storage_classes_desired:
+ A list of storage class names where to upload the data. If None,
+ the keep client is expected to store the data into the cluster's
+ default storage class(es).
+
"""
+
+ if storage_classes_desired and type(storage_classes_desired) is not list:
+ raise errors.ArgumentError("storage_classes_desired must be list type.")
+
super(Collection, self).__init__(parent)
self._api_client = api_client
self._keep_client = keep_client
self._block_manager = block_manager
self.replication_desired = replication_desired
+ self._storage_classes_desired = storage_classes_desired
self.put_threads = put_threads
if apiconfig:
try:
self._populate()
- except (IOError, errors.SyntaxError) as e:
- raise errors.ArgumentError("Error processing manifest text: %s", e)
+ except errors.SyntaxError as e:
+ raise errors.ArgumentError("Error processing manifest text: %s", str(e)) from None
+
+ def storage_classes_desired(self):
+ return self._storage_classes_desired or []
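The new `storage_classes_desired()` accessor normalizes a `None` internal value to an empty list, while the constructor rejects non-list values up front. A minimal stand-in class (hypothetical, for illustration only) showing the same pattern:

```python
class CollectionSketch:
    """Toy stand-in for the Collection changes above: validate the
    storage classes argument, and normalize a falsy value to []."""
    def __init__(self, storage_classes_desired=None):
        if storage_classes_desired and not isinstance(storage_classes_desired, list):
            raise ValueError("storage_classes_desired must be list type.")
        self._storage_classes_desired = storage_classes_desired

    def storage_classes_desired(self):
        # Callers always get a list, even when nothing was requested.
        return self._storage_classes_desired or []

print(CollectionSketch().storage_classes_desired())         # prints []
print(CollectionSketch(["hot"]).storage_classes_desired())  # prints ['hot']
```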
def root_collection(self):
return self
copies = (self.replication_desired or
self._my_api()._rootDesc.get('defaultCollectionReplication',
2))
- self._block_manager = _BlockManager(self._my_keep(), copies=copies, put_threads=self.put_threads, num_retries=self.num_retries)
+ self._block_manager = _BlockManager(self._my_keep(), copies=copies, put_threads=self.put_threads, num_retries=self.num_retries, storage_classes_func=self.storage_classes_desired)
return self._block_manager
def _remember_api_response(self, response):
self._manifest_text = self._api_response['manifest_text']
self._portable_data_hash = self._api_response['portable_data_hash']
# If not overriden via kwargs, we should try to load the
- # replication_desired from the API server
+ # replication_desired and storage_classes_desired from the API server
if self.replication_desired is None:
self.replication_desired = self._api_response.get('replication_desired', None)
+ if self._storage_classes_desired is None:
+ self._storage_classes_desired = self._api_response.get('storage_classes_desired', None)
def _populate(self):
if self._manifest_text is None:
storage_classes=None,
trash_at=None,
merge=True,
- num_retries=None):
+ num_retries=None,
+ preserve_version=False):
"""Save collection to an existing collection record.
Commit pending buffer blocks to Keep, merge with remote record (if
:num_retries:
Retry count on API calls (if None, use the collection default)
+ :preserve_version:
+ If True, indicate that the collection content being saved right now
+ should be preserved in a version snapshot if the collection record is
+          updated in the future. Requires that the API server has
+          Collections.CollectionVersioning enabled; if it does not, setting
+          this will raise an exception.
+
"""
if properties and type(properties) is not dict:
raise errors.ArgumentError("properties must be dictionary type.")
if storage_classes and type(storage_classes) is not list:
raise errors.ArgumentError("storage_classes must be list type.")
+ if storage_classes:
+ self._storage_classes_desired = storage_classes
if trash_at and type(trash_at) is not datetime.datetime:
raise errors.ArgumentError("trash_at must be datetime type.")
+ if preserve_version and not self._my_api().config()['Collections'].get('CollectionVersioning', False):
+ raise errors.ArgumentError("preserve_version is not supported when CollectionVersioning is not enabled.")
+
body={}
if properties:
body["properties"] = properties
- if storage_classes:
- body["storage_classes_desired"] = storage_classes
+ if self.storage_classes_desired():
+ body["storage_classes_desired"] = self.storage_classes_desired()
if trash_at:
t = trash_at.strftime("%Y-%m-%dT%H:%M:%S.%fZ")
body["trash_at"] = t
+ if preserve_version:
+ body["preserve_version"] = preserve_version
if not self.committed():
if self._has_remote_blocks:
storage_classes=None,
trash_at=None,
ensure_unique_name=False,
- num_retries=None):
+ num_retries=None,
+ preserve_version=False):
"""Save collection to a new collection record.
Commit pending buffer blocks to Keep and, when create_collection_record
:num_retries:
Retry count on API calls (if None, use the collection default)
+ :preserve_version:
+ If True, indicate that the collection content being saved right now
+ should be preserved in a version snapshot if the collection record is
+          updated in the future. Requires that the API server has
+          Collections.CollectionVersioning enabled; if it does not, setting
+          this will raise an exception.
+
"""
if properties and type(properties) is not dict:
raise errors.ArgumentError("properties must be dictionary type.")
if trash_at and type(trash_at) is not datetime.datetime:
raise errors.ArgumentError("trash_at must be datetime type.")
+ if preserve_version and not self._my_api().config()['Collections'].get('CollectionVersioning', False):
+ raise errors.ArgumentError("preserve_version is not supported when CollectionVersioning is not enabled.")
+
if self._has_remote_blocks:
# Copy any remote blocks to the local cluster.
self._copy_remote_blocks(remote_blocks={})
self._has_remote_blocks = False
+ if storage_classes:
+ self._storage_classes_desired = storage_classes
+
self._my_block_manager().commit_all()
text = self.manifest_text(strip=False)
body["owner_uuid"] = owner_uuid
if properties:
body["properties"] = properties
- if storage_classes:
- body["storage_classes_desired"] = storage_classes
+ if self.storage_classes_desired():
+ body["storage_classes_desired"] = self.storage_classes_desired()
if trash_at:
t = trash_at.strftime("%Y-%m-%dT%H:%M:%S.%fZ")
body["trash_at"] = t
+ if preserve_version:
+ body["preserve_version"] = preserve_version
self._remember_api_response(self._my_api().collections().create(ensure_unique_name=ensure_unique_name, body=body).execute(num_retries=num_retries))
text = self._api_response["manifest_text"]
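Both `save()` and `save_new()` above assemble the request `body` by including only the keys whose values were actually provided. A small sketch of that assembly (the helper name `build_save_body` is hypothetical; the key names match the diff):

```python
import datetime

def build_save_body(name=None, properties=None, storage_classes=None,
                    trash_at=None, preserve_version=False):
    """Build a collection create/update body: omit keys that were
    not supplied, and serialize trash_at the way save() does."""
    body = {}
    if name:
        body["name"] = name
    if properties:
        body["properties"] = properties
    if storage_classes:
        body["storage_classes_desired"] = storage_classes
    if trash_at:
        body["trash_at"] = trash_at.strftime("%Y-%m-%dT%H:%M:%S.%fZ")
    if preserve_version:
        body["preserve_version"] = preserve_version
    return body

print(build_save_body(storage_classes=["hot"], preserve_version=True))
```

Leaving unset keys out of the body matters: sending an explicit `None` or empty value would overwrite server-side state instead of leaving it untouched.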
self.find_or_create(os.path.join(stream_name, name[:-2]), COLLECTION)
else:
filepath = os.path.join(stream_name, name)
- afile = self.find_or_create(filepath, FILE)
+ try:
+ afile = self.find_or_create(filepath, FILE)
+ except IOError as e:
+ if e.errno == errno.ENOTDIR:
+ raise errors.SyntaxError("Dir part of %s conflicts with file of the same name.", filepath) from None
+ else:
+ raise e from None
if isinstance(afile, ArvadosFile):
afile.add_segment(blocks, pos, size)
else:
help='Perform copy even if the object appears to exist at the remote destination.')
copy_opts.add_argument(
'--src', dest='source_arvados',
- help='The name of the source Arvados instance (required) - points at an Arvados config file. May be either a pathname to a config file, or (for example) "foo" as shorthand for $HOME/.config/arvados/foo.conf.')
+    help='The source Arvados instance: either a pathname to a config file, or a cluster id (for example, "foo") as shorthand for $HOME/.config/arvados/foo.conf. If not provided, it will be inferred from the UUID of the object being copied.')
copy_opts.add_argument(
'--dst', dest='destination_arvados',
- help='The name of the destination Arvados instance (required) - points at an Arvados config file. May be either a pathname to a config file, or (for example) "foo" as shorthand for $HOME/.config/arvados/foo.conf.')
+    help='The destination Arvados instance: either a pathname to a config file, or a cluster id (for example, "foo") as shorthand for $HOME/.config/arvados/foo.conf. If not provided, ARVADOS_API_HOST from the environment will be used.')
copy_opts.add_argument(
'--recursive', dest='recursive', action='store_true',
help='Recursively copy any dependencies for this object, and subprojects. (default)')
copy_opts.add_argument(
'--project-uuid', dest='project_uuid',
help='The UUID of the project at the destination to which the collection or workflow should be copied.')
+ copy_opts.add_argument(
+ '--storage-classes', dest='storage_classes',
+        help='Comma separated list of storage classes to be used when saving data to the destination Arvados instance.')
copy_opts.add_argument(
'object_uuid',
copy_opts.set_defaults(recursive=True)
parser = argparse.ArgumentParser(
- description='Copy a workflow or collection from one Arvados instance to another.',
+ description='Copy a workflow, collection or project from one Arvados instance to another. On success, the uuid of the copied object is printed to stdout.',
parents=[copy_opts, arv_cmd.retry_opt])
args = parser.parse_args()
+ if args.storage_classes:
+ args.storage_classes = [x for x in args.storage_classes.strip().replace(' ', '').split(',') if x]
+
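The `--storage-classes` parsing above strips spaces, splits on commas, and drops empty entries, so inputs like `" hot, cold "` come out clean. A sketch of that normalization as a standalone helper (the function name is illustrative):

```python
def parse_storage_classes(arg):
    """Normalize a comma-separated --storage-classes value the way the
    argument handling above does: remove spaces, split, drop empties."""
    return [x for x in arg.strip().replace(' ', '').split(',') if x]

print(parse_storage_classes(" hot, cold,,archive "))  # ['hot', 'cold', 'archive']
```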
if args.verbose:
logger.setLevel(logging.DEBUG)
else:
logger.error("API server returned an error result: {}".format(result))
exit(1)
- logger.info("")
+ print(result['uuid'])
+
+ if result.get('partial_error'):
+ logger.warning("Warning: created copy with uuid {} but failed to copy some items: {}".format(result['uuid'], result['partial_error']))
+ exit(1)
+
logger.info("Success: created copy with uuid {}".format(result['uuid']))
exit(0)
# fetch the workflow from the source instance
wf = src.workflows().get(uuid=wf_uuid).execute(num_retries=args.retries)
+ if not wf["definition"]:
+ logger.warning("Workflow object {} has an empty or null definition, it won't do anything.".format(wf_uuid))
+
# copy collections and docker images
- if args.recursive:
+ if args.recursive and wf["definition"]:
wf_def = yaml.safe_load(wf["definition"])
if wf_def is not None:
locations = []
if not body["name"]:
body['name'] = "copied from " + collection_uuid
+ if args.storage_classes:
+ body['storage_classes_desired'] = args.storage_classes
+
body['owner_uuid'] = args.project_uuid
dst_collection = dst.collections().create(body=body, ensure_unique_name=True).execute(num_retries=args.retries)
if progress_writer:
progress_writer.report(obj_uuid, bytes_written, bytes_expected)
data = src_keep.get(word)
- dst_locator = dst_keep.put(data)
+ dst_locator = dst_keep.put(data, classes=(args.storage_classes or []))
dst_locators[blockhash] = dst_locator
bytes_written += loc.size
dst_manifest.write(' ')
logger.debug('Copying %s to %s', obj_uuid, project_record["uuid"])
+
+ partial_error = ""
+
# Copy collections
- copy_collections([col["uuid"] for col in arvados.util.list_all(src.collections().list, filters=[["owner_uuid", "=", obj_uuid]])],
- src, dst, args)
+ try:
+ copy_collections([col["uuid"] for col in arvados.util.list_all(src.collections().list, filters=[["owner_uuid", "=", obj_uuid]])],
+ src, dst, args)
+ except Exception as e:
+ partial_error += "\n" + str(e)
# Copy workflows
for w in arvados.util.list_all(src.workflows().list, filters=[["owner_uuid", "=", obj_uuid]]):
- copy_workflow(w["uuid"], src, dst, args)
+ try:
+ copy_workflow(w["uuid"], src, dst, args)
+ except Exception as e:
+ partial_error += "\n" + "Error while copying %s: %s" % (w["uuid"], e)
if args.recursive:
for g in arvados.util.list_all(src.groups().list, filters=[["owner_uuid", "=", obj_uuid]]):
- copy_project(g["uuid"], src, dst, project_record["uuid"], args)
+ try:
+ copy_project(g["uuid"], src, dst, project_record["uuid"], args)
+ except Exception as e:
+ partial_error += "\n" + "Error while copying %s: %s" % (g["uuid"], e)
+
+ project_record["partial_error"] = partial_error
return project_record
kwargs.setdefault('stdin', subprocess.PIPE)
kwargs.setdefault('stdout', sys.stderr)
try:
- docker_proc = subprocess.Popen(['docker.io'] + cmd, *args, **kwargs)
- except OSError: # No docker.io in $PATH
docker_proc = subprocess.Popen(['docker'] + cmd, *args, **kwargs)
+ except OSError: # No docker in $PATH, try docker.io
+ docker_proc = subprocess.Popen(['docker.io'] + cmd, *args, **kwargs)
if manage_stdin:
docker_proc.stdin.close()
return docker_proc
check_docker(list_proc, "images")
def find_image_hashes(image_search, image_tag=None):
- # Given one argument, search for Docker images with matching hashes,
- # and return their full hashes in a set.
- # Given two arguments, also search for a Docker image with the
- # same repository and tag. If one is found, return its hash in a
- # set; otherwise, fall back to the one-argument hash search.
- # Returns None if no match is found, or a hash search is ambiguous.
- hash_search = image_search.lower()
- hash_matches = set()
- for image in docker_images():
- if (image.repo == image_search) and (image.tag == image_tag):
- return set([image.hash])
- elif image.hash.startswith(hash_search):
- hash_matches.add(image.hash)
- return hash_matches
+    # Query for Docker images with the given repository and tag and
+    # return their image ids in a list. Returns an empty list if no
+    # match is found.
+
+ list_proc = popen_docker(['inspect', "%s%s" % (image_search, ":"+image_tag if image_tag else "")], stdout=subprocess.PIPE)
+
+ inspect = list_proc.stdout.read()
+ list_proc.stdout.close()
+
+ imageinfo = json.loads(inspect)
+
+ return [i["Id"] for i in imageinfo]
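The rewritten `find_image_hashes()` delegates matching to `docker inspect`, which prints a JSON array of image records, and extracts the `Id` field from each. The JSON parsing step can be sketched in isolation (the sample inspect output below is illustrative, not captured from a real daemon):

```python
import json

def image_ids_from_inspect(inspect_output):
    """Extract image ids from `docker inspect` JSON output, as
    find_image_hashes() above does after running the subprocess."""
    return [i["Id"] for i in json.loads(inspect_output)]

sample = json.dumps([
    {"Id": "sha256:abc123", "RepoTags": ["debian:stable"]},
])
print(image_ids_from_inspect(sample))  # ['sha256:abc123']
```

Note `docker inspect` exits non-zero and prints `[]` plus an error message when no image matches, which is why the new implementation can simply return an empty list.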
def find_one_image_hash(image_search, image_tag=None):
hashes = find_image_hashes(image_search, image_tag)
arguments = [i for i in arguments if i not in (args.image, args.tag, image_repo_tag)]
put_args = keepdocker_parser.parse_known_args(arguments)[1]
+ # Don't fail when cached manifest is invalid, just ignore the cache.
+ put_args += ['--batch']
+
if args.name is None:
put_args += ['--name', collection_name]
put_args + ['--filename', outfile_name, image_file.name], stdout=stdout,
install_sig_handlers=install_sig_handlers).strip()
- api.collections().update(uuid=coll_uuid, body={"properties": {"docker-image-repo-tag": image_repo_tag}}).execute(num_retries=args.retries)
+ # Managed properties could be already set
+ coll_properties = api.collections().get(uuid=coll_uuid).execute(num_retries=args.retries).get('properties', {})
+ coll_properties.update({"docker-image-repo-tag": image_repo_tag})
+
+ api.collections().update(uuid=coll_uuid, body={"properties": coll_properties}).execute(num_retries=args.retries)
# Read the image metadata and make Arvados links from it.
image_file.seek(0)
_group.add_argument('--stream', action='store_true',
help="""
Store the file content and display the resulting manifest on
-stdout. Do not write the manifest to Keep or save a Collection object
-in Arvados.
+stdout. Do not save a Collection object in Arvados.
""")
_group.add_argument('--as-manifest', action='store_true', dest='manifest',
""")
_group.add_argument('--no-follow-links', action='store_false', dest='follow_links',
help="""
-Do not follow file and directory symlinks.
+Ignore file and directory symlinks. Even paths given explicitly on the
+command line will be skipped if they are symlinks.
""")
still be displayed.)
""")
+run_opts.add_argument('--batch', action='store_true', default=False,
+ help="""
+Retries with '--no-resume --no-cache' if cached state contains invalid/expired
+block signatures.
+""")
+
_group = run_opts.add_mutually_exclusive_group()
_group.add_argument('--resume', action='store_true', default=True,
help="""
args.paths = ["-" if x == "/dev/stdin" else x for x in args.paths]
- if len(args.paths) != 1 or os.path.isdir(args.paths[0]):
- if args.filename:
- arg_parser.error("""
+ if args.filename and (len(args.paths) != 1 or os.path.isdir(args.paths[0])):
+ arg_parser.error("""
--filename argument cannot be used when storing a directory or
multiple files.
""")
}
def __init__(self, paths, resume=True, use_cache=True, reporter=None,
- name=None, owner_uuid=None, api_client=None,
+ name=None, owner_uuid=None, api_client=None, batch_mode=False,
ensure_unique_name=False, num_retries=None,
put_threads=None, replication_desired=None, filename=None,
update_time=60.0, update_collection=None, storage_classes=None,
self.paths = paths
self.resume = resume
self.use_cache = use_cache
+ self.batch_mode = batch_mode
self.update = False
self.reporter = reporter
# This will set to 0 before start counting, if no special files are going
self._write_stdin(self.filename or 'stdin')
elif not os.path.exists(path):
raise PathDoesNotExistError(u"file or directory '{}' does not exist.".format(path))
+ elif (not self.follow_links) and os.path.islink(path):
+ self.logger.warning("Skipping symlink '{}'".format(path))
+ continue
elif os.path.isdir(path):
# Use absolute paths on cache index so CWD doesn't interfere
# with the caching logic.
files.sort()
for f in files:
filepath = os.path.join(root, f)
+ if not os.path.isfile(filepath):
+ self.logger.warning("Skipping non-regular file '{}'".format(filepath))
+ continue
# Add its size to the total bytes count (if applicable)
if self.follow_links or (not os.path.islink(filepath)):
if self.bytes_expected is not None:
else:
# The file already exist on remote collection, skip it.
pass
- self._remote_collection.save(storage_classes=self.storage_classes,
- num_retries=self.num_retries,
+ self._remote_collection.save(num_retries=self.num_retries,
trash_at=self._collection_trash_at())
else:
- if self.storage_classes is None:
- self.storage_classes = ['default']
+ if len(self._local_collection) == 0:
+ self.logger.warning("No files were uploaded, skipping collection creation.")
+ return
self._local_collection.save_new(
name=self.name, owner_uuid=self.owner_uuid,
- storage_classes=self.storage_classes,
ensure_unique_name=self.ensure_unique_name,
num_retries=self.num_retries,
trash_at=self._collection_trash_at())
def _write_stdin(self, filename):
output = self._local_collection.open(filename, 'wb')
- self._write(sys.stdin, output)
+ self._write(sys.stdin.buffer, output)
output.close()
def _check_file(self, source, filename):
self._remote_collection = arvados.collection.Collection(
update_collection,
api_client=self._api_client,
+ storage_classes_desired=self.storage_classes,
num_retries=self.num_retries)
except arvados.errors.ApiError as error:
raise CollectionUpdateError("Cannot read collection {} ({})".format(update_collection, error))
# No cache file, set empty state
self._state = copy.deepcopy(self.EMPTY_STATE)
if not self._cached_manifest_valid():
- raise ResumeCacheInvalidError()
+ if not self.batch_mode:
+ raise ResumeCacheInvalidError()
+ else:
+                self.logger.info("Invalid signatures on cache file '{}' while being run in 'batch mode' -- continuing anyway.".format(self._cache_file.name))
+ self.use_cache = False # Don't overwrite preexisting cache file.
+ self._state = copy.deepcopy(self.EMPTY_STATE)
# Load the previous manifest so we can check if files were modified remotely.
self._local_collection = arvados.collection.Collection(
self._state['manifest'],
replication_desired=self.replication_desired,
+ storage_classes_desired=self.storage_classes,
put_threads=self.put_threads,
api_client=self._api_client,
num_retries=self.num_retries)
# Split storage-classes argument
storage_classes = None
if args.storage_classes:
- storage_classes = args.storage_classes.strip().split(',')
- if len(storage_classes) > 1:
- logger.error("Multiple storage classes are not supported currently.")
- sys.exit(1)
-
+ storage_classes = args.storage_classes.strip().replace(' ', '').split(',')
# Setup exclude regex from all the --exclude arguments provided
name_patterns = []
writer = ArvPutUploadJob(paths = args.paths,
resume = args.resume,
use_cache = args.use_cache,
+ batch_mode= args.batch,
filename = args.filename,
reporter = reporter,
api_client = api_client,
" or been created with another Arvados user's credentials.",
" Switch user or use one of the following options to restart upload:",
" --no-resume to start a new resume cache.",
- " --no-cache to disable resume cache."]))
+ " --no-cache to disable resume cache.",
+ " --batch to ignore the resume cache if invalid."]))
sys.exit(1)
except (CollectionUpdateError, PathDoesNotExistError) as error:
logger.error("\n".join([
output = None
try:
writer.start(save_collection=not(args.stream or args.raw))
- except arvados.errors.ApiError as error:
+ except (arvados.errors.ApiError, arvados.errors.KeepWriteError) as error:
logger.error("\n".join([
"arv-put: %s" % str(error)]))
sys.exit(1)
output = writer.manifest_text()
elif args.raw:
output = ','.join(writer.data_locators())
- else:
+ elif writer.manifest_locator() is not None:
try:
expiration_notice = ""
if writer.collection_trash_at() is not None:
"arv-put: Error creating Collection on project: {}.".format(
error))
status = 1
+ else:
+ status = 1
# Print the locator (uuid) of the new collection.
if output is None:
from __future__ import absolute_import
from __future__ import division
+import copy
from future import standard_library
from future.utils import native_str
standard_library.install_aliases()
return None
return self._result['body']
- def put(self, hash_s, body, timeout=None):
+ def put(self, hash_s, body, timeout=None, headers={}):
+ put_headers = copy.copy(self.put_headers)
+ put_headers.update(headers)
url = self.root + hash_s
_logger.debug("Request: PUT %s", url)
curl = self._get_user_agent()
curl.setopt(pycurl.INFILESIZE, len(body))
curl.setopt(pycurl.READFUNCTION, body_reader.read)
curl.setopt(pycurl.HTTPHEADER, [
- '{}: {}'.format(k,v) for k,v in self.put_headers.items()])
+ '{}: {}'.format(k,v) for k,v in put_headers.items()])
curl.setopt(pycurl.WRITEFUNCTION, response_body.write)
curl.setopt(pycurl.HEADERFUNCTION, self._headerfunction)
if self.insecure:
class KeepWriterQueue(queue.Queue):
- def __init__(self, copies):
+ def __init__(self, copies, classes=[]):
queue.Queue.__init__(self) # Old-style superclass
self.wanted_copies = copies
+ self.wanted_storage_classes = classes
self.successful_copies = 0
+ self.confirmed_storage_classes = {}
self.response = None
- self.successful_copies_lock = threading.Lock()
- self.pending_tries = copies
+ self.storage_classes_tracking = True
+ self.queue_data_lock = threading.RLock()
+ self.pending_tries = max(copies, len(classes))
self.pending_tries_notification = threading.Condition()
- def write_success(self, response, replicas_nr):
- with self.successful_copies_lock:
+ def write_success(self, response, replicas_nr, classes_confirmed):
+ with self.queue_data_lock:
self.successful_copies += replicas_nr
+ if classes_confirmed is None:
+ self.storage_classes_tracking = False
+ elif self.storage_classes_tracking:
+ for st_class, st_copies in classes_confirmed.items():
+ try:
+ self.confirmed_storage_classes[st_class] += st_copies
+ except KeyError:
+ self.confirmed_storage_classes[st_class] = st_copies
+ self.pending_tries = max(self.wanted_copies - self.successful_copies, len(self.pending_classes()))
self.response = response
with self.pending_tries_notification:
self.pending_tries_notification.notify_all()
self.pending_tries_notification.notify()
def pending_copies(self):
- with self.successful_copies_lock:
+ with self.queue_data_lock:
return self.wanted_copies - self.successful_copies
+ def satisfied_classes(self):
+ with self.queue_data_lock:
+ if not self.storage_classes_tracking:
+                # Tell the outer loop that storage class tracking
+                # is disabled.
+ return None
+ return list(set(self.wanted_storage_classes) - set(self.pending_classes()))
+
+ def pending_classes(self):
+ with self.queue_data_lock:
+ if (not self.storage_classes_tracking) or (self.wanted_storage_classes is None):
+ return []
+ unsatisfied_classes = copy.copy(self.wanted_storage_classes)
+ for st_class, st_copies in self.confirmed_storage_classes.items():
+ if st_class in unsatisfied_classes and st_copies >= self.wanted_copies:
+ unsatisfied_classes.remove(st_class)
+ return unsatisfied_classes
+
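`pending_classes()` above treats a storage class as satisfied once the confirmed copy count for that class reaches the wanted replica count. The core bookkeeping can be sketched as a pure function (the signature is hypothetical; the logic mirrors the method):

```python
def pending_classes(wanted_classes, wanted_copies, confirmed):
    """Return the storage classes that still need copies: a class is
    satisfied once its confirmed copies reach wanted_copies."""
    unsatisfied = list(wanted_classes)
    for st_class, st_copies in confirmed.items():
        if st_class in unsatisfied and st_copies >= wanted_copies:
            unsatisfied.remove(st_class)
    return unsatisfied

# Two copies wanted per class; only "hot" has enough confirmed so far.
print(pending_classes(["hot", "cold"], 2, {"hot": 2, "cold": 1}))  # ['cold']
```

This is why the queue's `pending_tries` is computed as `max(copies, len(classes))`: a write can be fully replicated yet still pending because one requested class has too few confirmed copies.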
def get_next_task(self):
with self.pending_tries_notification:
while True:
- if self.pending_copies() < 1:
+ if self.pending_copies() < 1 and len(self.pending_classes()) == 0:
# This notify_all() is unnecessary --
# write_success() already called notify_all()
# when pending<1 became true, so it's not
class KeepWriterThreadPool(object):
- def __init__(self, data, data_hash, copies, max_service_replicas, timeout=None):
+ def __init__(self, data, data_hash, copies, max_service_replicas, timeout=None, classes=[]):
self.total_task_nr = 0
- self.wanted_copies = copies
if (not max_service_replicas) or (max_service_replicas >= copies):
num_threads = 1
else:
num_threads = int(math.ceil(1.0*copies/max_service_replicas))
_logger.debug("Pool max threads is %d", num_threads)
self.workers = []
- self.queue = KeepClient.KeepWriterQueue(copies)
+ self.queue = KeepClient.KeepWriterQueue(copies, classes)
# Create workers
for _ in range(num_threads):
w = KeepClient.KeepWriterThread(self.queue, data, data_hash, timeout)
self.total_task_nr += 1
def done(self):
- return self.queue.successful_copies
+ return self.queue.successful_copies, self.queue.satisfied_classes()
def join(self):
# Start workers
except queue.Empty:
return
try:
- locator, copies = self.do_task(service, service_root)
+ locator, copies, classes = self.do_task(service, service_root)
except Exception as e:
if not isinstance(e, self.TaskFailed):
_logger.exception("Exception in KeepWriterThread")
self.queue.write_fail(service)
else:
- self.queue.write_success(locator, copies)
+ self.queue.write_success(locator, copies, classes)
finally:
self.queue.task_done()
def do_task(self, service, service_root):
+ classes = self.queue.pending_classes()
+ headers = {}
+ if len(classes) > 0:
+ classes.sort()
+ headers['X-Keep-Storage-Classes'] = ', '.join(classes)
success = bool(service.put(self.data_hash,
self.data,
- timeout=self.timeout))
+ timeout=self.timeout,
+ headers=headers))
result = service.last_result()
if not success:
- if result.get('status_code', None):
+ if result.get('status_code'):
_logger.debug("Request fail: PUT %s => %s %s",
self.data_hash,
- result['status_code'],
- result['body'])
+ result.get('status_code'),
+ result.get('body'))
raise self.TaskFailed()
_logger.debug("KeepWriterThread %s succeeded %s+%i %s",
except (KeyError, ValueError):
replicas_stored = 1
- return result['body'].strip(), replicas_stored
+ classes_confirmed = {}
+ try:
+ scch = result['headers']['x-keep-storage-classes-confirmed']
+ for confirmation in scch.replace(' ', '').split(','):
+ if '=' in confirmation:
+ stored_class, stored_copies = confirmation.split('=')[:2]
+ classes_confirmed[stored_class] = int(stored_copies)
+ except (KeyError, ValueError):
+ # Storage classes confirmed header missing or corrupt
+ classes_confirmed = None
+
+ return result['body'].strip(), replicas_stored, classes_confirmed
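The confirmed-classes header parsing in `do_task` can be isolated as follows. This is a sketch assuming the same `foo=1, bar=2` wire format; a missing or malformed header yields `None`, which the queue interprets as "disable class tracking":

```python
def parse_confirmed_classes(header_value):
    # Parse an X-Keep-Storage-Classes-Confirmed value such as
    # 'foo=1, bar=2' into {'foo': 1, 'bar': 2}.
    try:
        confirmed = {}
        for item in header_value.replace(' ', '').split(','):
            if '=' in item:
                cls, copies = item.split('=')[:2]
                confirmed[cls] = int(copies)
        return confirmed
    except (AttributeError, ValueError):
        # Header missing (None) or corrupt: signal "no tracking".
        return None

print(parse_confirmed_classes('foo=1, bar=2'))  # {'foo': 1, 'bar': 2}
print(parse_confirmed_classes(None))            # None
```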
def __init__(self, api_client=None, proxy=None,
self.get_counter = Counter()
self.hits_counter = Counter()
self.misses_counter = Counter()
+ self._storage_classes_unsupported_warning = False
+ self._default_classes = []
if local_store:
self.local_store = local_store
self._writable_services = None
self.using_proxy = None
self._static_services_list = False
+ try:
+ self._default_classes = [
+ k for k, v in self.api_client.config()['StorageClasses'].items() if v['Default']]
+ except KeyError:
+ # We're talking to an old cluster
+ pass
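Deriving the default classes from the cluster config, as the new `KeepClient.__init__` code does, reduces to a comprehension over `StorageClasses`. A sketch with a hypothetical config excerpt mirroring the shape the SDK reads:

```python
# Hypothetical cluster config excerpt; only the 'Default' flag matters here.
config = {
    'StorageClasses': {
        'default': {'Default': True},
        'archival': {'Default': False},
    }
}
default_classes = [k for k, v in config['StorageClasses'].items()
                   if v['Default']]
print(default_classes)  # ['default']
```

On an older cluster the `StorageClasses` key is absent, which is why the real code catches `KeyError` and falls back to an empty list.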
def current_timeout(self, attempt_number):
"""Return the appropriate timeout to use for this client.
self.get_counter.add(1)
+ request_id = (request_id or
+ (hasattr(self, 'api_client') and self.api_client.request_id) or
+ arvados.util.new_request_id())
+ if headers is None:
+ headers = {}
+ headers['X-Request-Id'] = request_id
+
slot = None
blob = None
try:
self.misses_counter.add(1)
- if headers is None:
- headers = {}
- headers['X-Request-Id'] = (request_id or
- (hasattr(self, 'api_client') and self.api_client.request_id) or
- arvados.util.new_request_id())
-
# If the locator has hints specifying a prefix (indicating a
# remote keepproxy) or the UUID of a local gateway service,
# read data from the indicated service(s) instead of the usual
for key in sorted_roots)
if not roots_map:
raise arvados.errors.KeepReadError(
- "failed to read {}: no Keep services available ({})".format(
- loc_s, loop.last_result()))
+ "[{}] failed to read {}: no Keep services available ({})".format(
+ request_id, loc_s, loop.last_result()))
elif not_founds == len(sorted_roots):
raise arvados.errors.NotFoundError(
- "{} not found".format(loc_s), service_errors)
+ "[{}] {} not found".format(request_id, loc_s), service_errors)
else:
raise arvados.errors.KeepReadError(
- "failed to read {} after {}".format(loc_s, loop.attempts_str()), service_errors, label="service")
+ "[{}] failed to read {} after {}".format(request_id, loc_s, loop.attempts_str()), service_errors, label="service")
@retry.retry_method
- def put(self, data, copies=2, num_retries=None, request_id=None):
+ def put(self, data, copies=2, num_retries=None, request_id=None, classes=None):
"""Save data in Keep.
This method will get a list of Keep services from the API server, and
*each* Keep server if it returns temporary failures, with
exponential backoff. The default value is set when the
KeepClient is initialized.
+ * classes: An optional list of storage class names where copies should
+ be written.
"""
+ classes = classes or self._default_classes
+
if not isinstance(data, bytes):
data = data.encode()
return loc_s
locator = KeepLocator(loc_s)
+ request_id = (request_id or
+ (hasattr(self, 'api_client') and self.api_client.request_id) or
+ arvados.util.new_request_id())
headers = {
- 'X-Request-Id': (request_id or
- (hasattr(self, 'api_client') and self.api_client.request_id) or
- arvados.util.new_request_id()),
+ 'X-Request-Id': request_id,
'X-Keep-Desired-Replicas': str(copies),
}
roots_map = {}
loop = retry.RetryLoop(num_retries, self._check_loop_result,
backoff_start=2)
- done = 0
+ done_copies = 0
+ done_classes = []
for tries_left in loop:
try:
sorted_roots = self.map_new_services(
loop.save_result(error)
continue
+ pending_classes = []
+ if done_classes is not None:
+ pending_classes = list(set(classes) - set(done_classes))
writer_pool = KeepClient.KeepWriterThreadPool(data=data,
data_hash=data_hash,
- copies=copies - done,
+ copies=copies - done_copies,
max_service_replicas=self.max_replicas_per_service,
- timeout=self.current_timeout(num_retries - tries_left))
+ timeout=self.current_timeout(num_retries - tries_left),
+ classes=pending_classes)
for service_root, ks in [(root, roots_map[root])
for root in sorted_roots]:
if ks.finished():
continue
writer_pool.add_task(ks, service_root)
writer_pool.join()
- done += writer_pool.done()
- loop.save_result((done >= copies, writer_pool.total_task_nr))
+ pool_copies, pool_classes = writer_pool.done()
+ done_copies += pool_copies
+ if (done_classes is not None) and (pool_classes is not None):
+ done_classes += pool_classes
+ loop.save_result(
+ (done_copies >= copies and set(done_classes) == set(classes),
+ writer_pool.total_task_nr))
+ else:
+ # Old keepstore contacted without storage classes support:
+ # success is determined only by successful copies.
+ #
+ # Disable storage classes tracking from this point forward.
+ if not self._storage_classes_unsupported_warning:
+ self._storage_classes_unsupported_warning = True
+ _logger.warning("X-Keep-Storage-Classes header not supported by the cluster")
+ done_classes = None
+ loop.save_result(
+ (done_copies >= copies, writer_pool.total_task_nr))
if loop.success():
return writer_pool.response()
if not roots_map:
raise arvados.errors.KeepWriteError(
- "failed to write {}: no Keep services available ({})".format(
- data_hash, loop.last_result()))
+ "[{}] failed to write {}: no Keep services available ({})".format(
+ request_id, data_hash, loop.last_result()))
else:
service_errors = ((key, roots_map[key].last_result()['error'])
for key in sorted_roots
if roots_map[key].last_result()['error'])
raise arvados.errors.KeepWriteError(
- "failed to write {} after {} (wanted {} copies but wrote {})".format(
- data_hash, loop.attempts_str(), copies, writer_pool.done()), service_errors, label="service")
+ "[{}] failed to write {} after {} (wanted {} copies but wrote {})".format(
+ request_id, data_hash, loop.attempts_str(), (copies, classes), writer_pool.done()), service_errors, label="service")
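The success condition the retry loop saves above combines the replica count with class confirmation. Stripped of the surrounding machinery it looks like this (a sketch; `write_succeeded` is an illustrative name, not SDK API):

```python
def write_succeeded(done_copies, wanted_copies, done_classes, wanted_classes):
    # done_classes is None once an old keepstore without storage class
    # support has been contacted; then only the replica count matters.
    if done_classes is None:
        return done_copies >= wanted_copies
    return (done_copies >= wanted_copies
            and set(done_classes) == set(wanted_classes))

print(write_succeeded(2, 2, ['foo'], ['foo']))         # True
print(write_succeeded(2, 2, ['foo'], ['foo', 'bar']))  # False
```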
- def local_store_put(self, data, copies=1, num_retries=None):
+ def local_store_put(self, data, copies=1, num_retries=None, classes=[]):
"""A stub for put().
This method is used in place of the real put() method when
install_requires=[
'ciso8601 >=2.0.0',
'future',
- 'google-api-python-client >=1.6.2, <1.7',
- 'httplib2 >=0.9.2',
+ 'google-api-python-client >=1.6.2, <2',
+ 'google-auth<2',
+ 'httplib2 >=0.9.2, <0.20.2',
'pycurl >=7.19.5.1',
- 'ruamel.yaml >=0.15.54, <=0.16.5',
+ 'ruamel.yaml >=0.15.54, <0.17.11',
'setuptools',
'ws4py >=0.4.2',
- 'rsa < 4.1'
],
- extras_require={
- ':os.name=="posix" and python_version<"3"': ['subprocess32 >= 3.5.1'],
- ':python_version<"3"': ['pytz'],
- },
classifiers=[
- 'Programming Language :: Python :: 2',
'Programming Language :: Python :: 3',
],
test_suite='tests',
class ApiClientMock(object):
def api_client_mock(self):
- return mock.MagicMock(name='api_client_mock')
+ api_mock = mock.MagicMock(name='api_client_mock')
+ api_mock.config.return_value = {
+ 'StorageClasses': {
+ 'default': {'Default': True}
+ }
+ }
+ return api_mock
def mock_keep_services(self, api_mock=None, status=200, count=12,
service_type='disk',
server_name controller ~.*;
ssl_certificate "{{SSLCERT}}";
ssl_certificate_key "{{SSLKEY}}";
+ client_max_body_size 0;
location / {
proxy_pass http://controller;
proxy_set_header Host $http_host;
confdata['Clusters']['zzzzz']['Collections']['BlobSigning'] = blob_signing
with open(conf, 'w') as f:
yaml.safe_dump(confdata, f)
- keep_cmd = ["keepstore", "-config", conf]
+ keep_cmd = ["arvados-server", "keepstore", "-config", conf]
with open(_logfilename('keep{}'.format(n)), WRITE_MODE) as logf:
with open('/dev/null') as _stdin:
"http://%s:%s"%(localhost, keep_web_dl_port): {},
},
},
- "SSO": {
- "ExternalURL": "http://localhost:3002",
- },
}
config = {
"RequestTimeout": "30s",
},
"Login": {
- "SSO": {
- "ProviderAppID": "arvados-server",
- "ProviderAppSecret": "608dbf356a327e2d0d4932b60161e212c2d8d8f5e25690d7b622f850a990cd33",
+ "Test": {
+ "Enable": True,
+ "Users": {
+ "alice": {
+ "Email": "alice@example.com",
+ "Password": "xyzzy"
+ }
+ }
},
},
"SystemLogs": {
"UserProfileNotificationAddress": "arvados@example.com",
},
"Collections": {
+ "CollectionVersioning": True,
"BlobSigningKey": "zfhgfenhffzltr9dixws36j1yhksjoll2grmku38mi7yxd66h5j4q9w4jzanezacp8s6q0ro3hxakfye02152hncy6zml2ed0uc",
"TrustAllContent": False,
"ForwardSlashNameSubstitution": "/",
for msg in ["Bad UUID format", "Bad output format"]:
self.assertIn(msg, err_s)
+ @mock.patch('time.sleep')
+ def test_exceptions_include_request_id(self, sleep):
+ api = arvados.api('v1')
+ api.request_id = 'fake-request-id'
+ api._http.orig_http_request = mock.MagicMock()
+ api._http.orig_http_request.side_effect = socket.error('mock error')
+ caught = None
+ try:
+ api.users().current().execute()
+ except Exception as e:
+ caught = e
+ self.assertRegex(str(caught), r'fake-request-id')
+
def test_exceptions_without_errors_have_basic_info(self):
mock_responses = {
'arvados.humans.delete': (
text = "X" * maxsize
arvados.api('v1').collections().create(body={"manifest_text": text}).execute()
+ def test_default_request_timeout(self):
+ api = arvados.api('v1')
+ self.assertEqual(api._http.timeout, 300,
+ "Default timeout value should be 300")
+
+ def test_custom_request_timeout(self):
+ api = arvados.api('v1', timeout=1234)
+ self.assertEqual(api._http.timeout, 1234,
+ "Requested timeout value was 1234")
+
def test_ordered_json_model(self):
mock_responses = {
'arvados.humans.get': (
with c.open('foo', 'wt') as f:
f.write('foo')
c.save_new("arv-copy foo collection", owner_uuid=src_proj)
+ coll_record = api.collections().get(uuid=c.manifest_locator()).execute()
+ assert coll_record['storage_classes_desired'] == ['default']
dest_proj = api.groups().create(body={"group": {"name": "arv-copy dest project", "group_class": "project"}}).execute()["uuid"]
contents = api.groups().list(filters=[["owner_uuid", "=", dest_proj]]).execute()
assert len(contents["items"]) == 0
- try:
- self.run_copy(["--project-uuid", dest_proj, src_proj])
- except SystemExit as e:
- assert e.code == 0
+ with tutil.redirected_streams(
+ stdout=tutil.StringIO, stderr=tutil.StringIO) as (out, err):
+ try:
+ self.run_copy(["--project-uuid", dest_proj, "--storage-classes", "foo", src_proj])
+ except SystemExit as e:
+ assert e.code == 0
+ copy_uuid_from_stdout = out.getvalue().strip()
contents = api.groups().list(filters=[["owner_uuid", "=", dest_proj]]).execute()
assert len(contents["items"]) == 1
assert contents["items"][0]["name"] == "arv-copy project"
copied_project = contents["items"][0]["uuid"]
+ assert copied_project == copy_uuid_from_stdout
+
contents = api.collections().list(filters=[["owner_uuid", "=", copied_project]]).execute()
assert len(contents["items"]) == 1
assert contents["items"][0]["uuid"] != c.manifest_locator()
assert contents["items"][0]["name"] == "arv-copy foo collection"
assert contents["items"][0]["portable_data_hash"] == c.portable_data_hash()
+ assert contents["items"][0]["storage_classes_desired"] == ["foo"]
finally:
os.environ['HOME'] = home_was
from __future__ import absolute_import
import arvados
+import collections
+import copy
import hashlib
import mock
import os
side_effect=StopTest) as find_image_mock:
self.run_arv_keepdocker(['[::1]/repo/img'], sys.stderr)
find_image_mock.assert_called_with('[::1]/repo/img', 'latest')
+
+ @mock.patch('arvados.commands.keepdocker.find_image_hashes',
+ return_value=['abc123'])
+ @mock.patch('arvados.commands.keepdocker.find_one_image_hash',
+ return_value='abc123')
+ def test_collection_property_update(self, _1, _2):
+ image_id = 'sha256:'+hashlib.sha256(b'image').hexdigest()
+ fakeDD = arvados.api('v1')._rootDesc
+ fakeDD['dockerImageFormats'] = ['v2']
+
+ err = tutil.StringIO()
+ out = tutil.StringIO()
+ File = collections.namedtuple('File', ['name'])
+ mocked_file = File(name='docker_image')
+ mocked_collection = {
+ 'uuid': 'new-collection-uuid',
+ 'properties': {
+ 'responsible_person_uuid': 'person_uuid',
+ }
+ }
+
+ with tutil.redirected_streams(stdout=out), \
+ mock.patch('arvados.api') as api, \
+ mock.patch('arvados.commands.keepdocker.popen_docker',
+ return_value=subprocess.Popen(
+ ['echo', image_id],
+ stdout=subprocess.PIPE)), \
+ mock.patch('arvados.commands.keepdocker.prep_image_file',
+ return_value=(mocked_file, False)), \
+ mock.patch('arvados.commands.put.main',
+ return_value='new-collection-uuid'), \
+ self.assertRaises(StopTest):
+
+ api()._rootDesc = fakeDD
+ api().collections().get().execute.return_value = copy.deepcopy(mocked_collection)
+ api().collections().update().execute.side_effect = StopTest
+ self.run_arv_keepdocker(['--force', 'testimage'], err)
+
+ updated_properties = mocked_collection['properties']
+ updated_properties.update({'docker-image-repo-tag': 'testimage:latest'})
+ api().collections().update.assert_called_with(
+ uuid=mocked_collection['uuid'],
+ body={'properties': updated_properties})
import apiclient
import ciso8601
import datetime
-import hashlib
import json
import logging
import mock
+import multiprocessing
import os
import pwd
import random
import time
import unittest
import uuid
-import yaml
import arvados
import arvados.commands.put as arv_put
shutil.rmtree(self.small_files_dir)
shutil.rmtree(self.tempdir_with_symlink)
+ def test_non_regular_files_are_ignored_except_symlinks_to_dirs(self):
+ def pfunc(x):
+ with open(x, 'w') as f:
+ f.write('test')
+ fifo_filename = 'fifo-file'
+ fifo_path = os.path.join(self.tempdir_with_symlink, fifo_filename)
+ self.assertTrue(os.path.islink(os.path.join(self.tempdir_with_symlink, 'linkeddir')))
+ os.mkfifo(fifo_path)
+ producer = multiprocessing.Process(target=pfunc, args=(fifo_path,))
+ producer.start()
+ cwriter = arv_put.ArvPutUploadJob([self.tempdir_with_symlink])
+ cwriter.start(save_collection=False)
+ if producer.exitcode is None:
+ # If the producer is still running, kill it. Do this before
+ # any assertion that may fail.
+ producer.terminate()
+ producer.join(1)
+ self.assertIn('linkeddir', cwriter.manifest_text())
+ self.assertNotIn(fifo_filename, cwriter.manifest_text())
+
def test_symlinks_are_followed_by_default(self):
+ self.assertTrue(os.path.islink(os.path.join(self.tempdir_with_symlink, 'linkeddir')))
+ self.assertTrue(os.path.islink(os.path.join(self.tempdir_with_symlink, 'linkedfile')))
cwriter = arv_put.ArvPutUploadJob([self.tempdir_with_symlink])
cwriter.start(save_collection=False)
self.assertIn('linkeddir', cwriter.manifest_text())
cwriter.destroy_cache()
def test_symlinks_are_not_followed_when_requested(self):
+ self.assertTrue(os.path.islink(os.path.join(self.tempdir_with_symlink, 'linkeddir')))
+ self.assertTrue(os.path.islink(os.path.join(self.tempdir_with_symlink, 'linkedfile')))
cwriter = arv_put.ArvPutUploadJob([self.tempdir_with_symlink],
follow_links=False)
cwriter.start(save_collection=False)
self.assertNotIn('linkeddir', cwriter.manifest_text())
self.assertNotIn('linkedfile', cwriter.manifest_text())
cwriter.destroy_cache()
+ # Check for bug #17800: passed symlinks should also be ignored.
+ linked_dir = os.path.join(self.tempdir_with_symlink, 'linkeddir')
+ cwriter = arv_put.ArvPutUploadJob([linked_dir], follow_links=False)
+ cwriter.start(save_collection=False)
+ self.assertNotIn('linkeddir', cwriter.manifest_text())
+ cwriter.destroy_cache()
+
+ def test_no_empty_collection_saved(self):
+ self.assertTrue(os.path.islink(os.path.join(self.tempdir_with_symlink, 'linkeddir')))
+ linked_dir = os.path.join(self.tempdir_with_symlink, 'linkeddir')
+ cwriter = arv_put.ArvPutUploadJob([linked_dir], follow_links=False)
+ cwriter.start(save_collection=True)
+ self.assertIsNone(cwriter.manifest_locator())
+ self.assertEqual('', cwriter.manifest_text())
+ cwriter.destroy_cache()
def test_passing_nonexistant_path_raise_exception(self):
uuid_str = str(uuid.uuid4())
self.call_main_with_args,
['--project-uuid', self.Z_UUID, '--stream'])
- def test_error_when_multiple_storage_classes_specified(self):
- self.assertRaises(SystemExit,
- self.call_main_with_args,
- ['--storage-classes', 'hot,cold'])
-
def test_error_when_excluding_absolute_path(self):
tmpdir = self.make_tmpdir()
self.assertRaises(SystemExit,
[sys.executable, arv_put.__file__, '--stream'],
stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.STDOUT, env=self.ENVIRON)
- pipe.stdin.write(b'stdin test\n')
+ pipe.stdin.write(b'stdin test\xa6\n')
pipe.stdin.close()
deadline = time.time() + 5
while (pipe.poll() is None) and (time.time() < deadline):
elif returncode != 0:
sys.stdout.write(pipe.stdout.read())
self.fail("arv-put returned exit code {}".format(returncode))
- self.assertIn('4a9c8b735dce4b5fa3acf221a0b13628+11',
+ self.assertIn('1cb671b355a0c23d5d1c61d59cdb1b2b+12',
pipe.stdout.read().decode())
def test_sigint_logs_request_id(self):
r'INFO: Cache expired, starting from scratch.*')
self.assertEqual(p.returncode, 0)
- def test_invalid_signature_invalidates_cache(self):
- self.authorize_with('active')
- tmpdir = self.make_tmpdir()
- with open(os.path.join(tmpdir, 'somefile.txt'), 'w') as f:
- f.write('foo')
- # Upload a directory and get the cache file name
- p = subprocess.Popen([sys.executable, arv_put.__file__, tmpdir],
- stdout=subprocess.PIPE,
- stderr=subprocess.PIPE,
- env=self.ENVIRON)
- (_, err) = p.communicate()
- self.assertRegex(err.decode(), r'INFO: Creating new cache file at ')
- self.assertEqual(p.returncode, 0)
- cache_filepath = re.search(r'INFO: Creating new cache file at (.*)',
- err.decode()).groups()[0]
- self.assertTrue(os.path.isfile(cache_filepath))
- # Load the cache file contents and modify the manifest to simulate
- # an invalid access token
- with open(cache_filepath, 'r') as c:
- cache = json.load(c)
- self.assertRegex(cache['manifest'], r'\+A\S+\@')
- cache['manifest'] = re.sub(
- r'\+A.*\@',
- "+Aabcdef0123456789abcdef0123456789abcdef01@",
- cache['manifest'])
- with open(cache_filepath, 'w') as c:
- c.write(json.dumps(cache))
- # Re-run the upload and expect to get an invalid cache message
- p = subprocess.Popen([sys.executable, arv_put.__file__, tmpdir],
- stdout=subprocess.PIPE,
- stderr=subprocess.PIPE,
- env=self.ENVIRON)
- (_, err) = p.communicate()
- self.assertRegex(
- err.decode(),
- r'ERROR: arv-put: Resume cache contains invalid signature.*')
- self.assertEqual(p.returncode, 1)
+ def test_invalid_signature_in_cache(self):
+ for batch_mode in [False, True]:
+ self.authorize_with('active')
+ tmpdir = self.make_tmpdir()
+ with open(os.path.join(tmpdir, 'somefile.txt'), 'w') as f:
+ f.write('foo')
+ # Upload a directory and get the cache file name
+ arv_put_args = [tmpdir]
+ if batch_mode:
+ arv_put_args = ['--batch'] + arv_put_args
+ p = subprocess.Popen([sys.executable, arv_put.__file__] + arv_put_args,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ env=self.ENVIRON)
+ (_, err) = p.communicate()
+ self.assertRegex(err.decode(), r'INFO: Creating new cache file at ')
+ self.assertEqual(p.returncode, 0)
+ cache_filepath = re.search(r'INFO: Creating new cache file at (.*)',
+ err.decode()).groups()[0]
+ self.assertTrue(os.path.isfile(cache_filepath))
+ # Load the cache file contents and modify the manifest to simulate
+ # an invalid access token
+ with open(cache_filepath, 'r') as c:
+ cache = json.load(c)
+ self.assertRegex(cache['manifest'], r'\+A\S+\@')
+ cache['manifest'] = re.sub(
+ r'\+A.*\@',
+ "+Aabcdef0123456789abcdef0123456789abcdef01@",
+ cache['manifest'])
+ with open(cache_filepath, 'w') as c:
+ c.write(json.dumps(cache))
+ # Re-run the upload and expect to get an invalid cache message
+ p = subprocess.Popen([sys.executable, arv_put.__file__] + arv_put_args,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ env=self.ENVIRON)
+ (_, err) = p.communicate()
+ if not batch_mode:
+ self.assertRegex(
+ err.decode(),
+ r'ERROR: arv-put: Resume cache contains invalid signature.*')
+ self.assertEqual(p.returncode, 1)
+ else:
+ self.assertRegex(
+ err.decode(),
+ r'Invalid signatures on cache file \'.*\' while being run in \'batch mode\' -- continuing anyways.*')
+ self.assertEqual(p.returncode, 0)
def test_single_expired_signature_reuploads_file(self):
self.authorize_with('active')
def test_put_collection_with_storage_classes_specified(self):
collection = self.run_and_find_collection("", ['--storage-classes', 'hot'])
-
self.assertEqual(len(collection['storage_classes_desired']), 1)
self.assertEqual(collection['storage_classes_desired'][0], 'hot')
+ def test_put_collection_with_multiple_storage_classes_specified(self):
+ collection = self.run_and_find_collection("", ['--storage-classes', ' foo, bar ,baz'])
+ self.assertEqual(len(collection['storage_classes_desired']), 3)
+ self.assertEqual(collection['storage_classes_desired'], ['foo', 'bar', 'baz'])
+
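The multiple-storage-classes test above expects `--storage-classes` to tolerate surrounding whitespace in `' foo, bar ,baz'`. The normalization it implies can be sketched standalone (an assumption about arv-put's argument parsing, shown for illustration):

```python
raw = ' foo, bar ,baz'
# Split on commas and strip whitespace, dropping any empty entries.
classes = [c.strip() for c in raw.split(',') if c.strip()]
print(classes)  # ['foo', 'bar', 'baz']
```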
def test_put_collection_without_storage_classes_specified(self):
collection = self.run_and_find_collection("")
-
self.assertEqual(len(collection['storage_classes_desired']), 1)
self.assertEqual(collection['storage_classes_desired'][0], 'default')
from builtins import str
from builtins import range
from builtins import object
-import bz2
import datetime
-import gzip
-import io
import mock
import os
import unittest
import arvados
from arvados._ranges import Range
from arvados.keep import KeepLocator
-from arvados.collection import Collection, CollectionReader
+from arvados.collection import Collection
from arvados.arvfile import ArvadosFile, ArvadosFileReader
from . import arvados_testutil as tutil
def get_from_cache(self, locator):
self.requests.append(locator)
return self.blocks.get(locator)
- def put(self, data, num_retries=None, copies=None):
+ def put(self, data, num_retries=None, copies=None, classes=[]):
pdh = tutil.str_keep_locator(data)
self.blocks[pdh] = bytes(data)
return pdh
self.assertEqual("zzzzz-4zz18-mockcollection0", c.manifest_locator())
self.assertFalse(c.modified())
-
-
def test_write_to_end(self):
keep = ArvadosFileWriterTestCase.MockKeep({
"781e5e245d69b566979b86e28d23f2c7+10": b"0123456789",
self.assertEqual("zzzzz-4zz18-mockcollection0", c.manifest_locator())
self.assertFalse(c.modified())
-
def test_large_write(self):
keep = ArvadosFileWriterTestCase.MockKeep({})
api = ArvadosFileWriterTestCase.MockApi({}, {})
self.assertEqual(c.manifest_text(), ". 7f614da9329cd3aebf59b91aadc30bf0+67108864 781e5e245d69b566979b86e28d23f2c7+10 0:2:count.txt 67108864:10:count.txt\n")
-
def test_sparse_write2(self):
keep = ArvadosFileWriterTestCase.MockKeep({})
api = ArvadosFileWriterTestCase.MockApi({}, {})
self.assertEqual(c.manifest_text(), ". 7f614da9329cd3aebf59b91aadc30bf0+67108864 781e5e245d69b566979b86e28d23f2c7+10 0:67108864:count.txt 0:67108864:count.txt 0:2:count.txt 67108864:10:count.txt\n")
-
def test_sparse_write3(self):
keep = ArvadosFileWriterTestCase.MockKeep({})
api = ArvadosFileWriterTestCase.MockApi({}, {})
writer.seek(0)
self.assertEqual(writer.read(), b"000000000011111111112222222222\x00\x00\x00\x00\x00\x00\x00\x00\x00\x004444444444")
-
def test_rewrite_on_empty_file(self):
keep = ArvadosFileWriterTestCase.MockKeep({})
with Collection('. ' + arvados.config.EMPTY_BLOCK_LOCATOR + ' 0:0:count.txt',
blockmanager.commit_bufferblock(bufferblock, True)
self.assertEqual(bufferblock.state(), arvados.arvfile._BufferBlock.COMMITTED)
-
def test_bufferblock_commit_with_error(self):
mockkeep = mock.MagicMock()
mockkeep.put.side_effect = arvados.errors.KeepWriteError("fail")
import copy
import mock
import os
-import pprint
import random
import re
import sys
-import tempfile
import datetime
import ciso8601
import time
self.assertEqual(c1.manifest_text, c2.manifest_text)
self.assertNotEqual(c1.replication_desired, c2.replication_desired)
+ def test_storage_classes_desired_kept_on_load(self):
+ m = '. 781e5e245d69b566979b86e28d23f2c7+10 0:10:count1.txt 0:10:count2.txt\n'
+ c1 = Collection(m, storage_classes_desired=['archival'])
+ c1.save_new()
+ loc = c1.manifest_locator()
+ c2 = Collection(loc)
+ self.assertEqual(c1.manifest_text, c2.manifest_text)
+ self.assertEqual(c1.storage_classes_desired(), c2.storage_classes_desired())
+
+ def test_storage_classes_change_after_save(self):
+ m = '. 781e5e245d69b566979b86e28d23f2c7+10 0:10:count1.txt 0:10:count2.txt\n'
+ c1 = Collection(m, storage_classes_desired=['archival'])
+ c1.save_new()
+ loc = c1.manifest_locator()
+ c2 = Collection(loc)
+ self.assertEqual(['archival'], c2.storage_classes_desired())
+ c2.save(storage_classes=['highIO'])
+ self.assertEqual(['highIO'], c2.storage_classes_desired())
+ c3 = Collection(loc)
+ self.assertEqual(c1.manifest_text, c3.manifest_text)
+ self.assertEqual(['highIO'], c3.storage_classes_desired())
+
+ def test_storage_classes_desired_not_loaded_if_provided(self):
+ m = '. 781e5e245d69b566979b86e28d23f2c7+10 0:10:count1.txt 0:10:count2.txt\n'
+ c1 = Collection(m, storage_classes_desired=['archival'])
+ c1.save_new()
+ loc = c1.manifest_locator()
+ c2 = Collection(loc, storage_classes_desired=['default'])
+ self.assertEqual(c1.manifest_text, c2.manifest_text)
+ self.assertNotEqual(c1.storage_classes_desired(), c2.storage_classes_desired())
+
def test_init_manifest(self):
m1 = """. 5348b82a029fd9e971a811ce1f71360b+43 0:43:md5sum.txt
. 085c37f02916da1cad16f93c54d899b7+41 0:41:md5sum.txt
def setUp(self):
self.keep_put = getattr(arvados.keep.KeepClient, 'put')
+ @mock.patch('arvados.keep.KeepClient.put', autospec=True)
+ def test_storage_classes_desired(self, put_mock):
+ put_mock.side_effect = self.keep_put
+ c = Collection(storage_classes_desired=['default'])
+ with c.open("file.txt", 'wb') as f:
+ f.write('content')
+ c.save_new()
+ _, kwargs = put_mock.call_args
+ self.assertEqual(['default'], kwargs['classes'])
+
@mock.patch('arvados.keep.KeepClient.put', autospec=True)
def test_repacked_block_submission_get_permission_token(self, mocked_put):
'''
class NewCollectionTestCaseWithServers(run_test_server.TestCaseWithServers):
+ def test_preserve_version_on_save(self):
+ c = Collection()
+ c.save_new(preserve_version=True)
+ coll_record = arvados.api().collections().get(uuid=c.manifest_locator()).execute()
+ self.assertEqual(coll_record['version'], 1)
+ self.assertEqual(coll_record['preserve_version'], True)
+ with c.open("foo.txt", "wb") as foo:
+ foo.write(b"foo")
+ c.save(preserve_version=True)
+ coll_record = arvados.api().collections().get(uuid=c.manifest_locator()).execute()
+ self.assertEqual(coll_record['version'], 2)
+ self.assertEqual(coll_record['preserve_version'], True)
+ with c.open("bar.txt", "wb") as foo:
+ foo.write(b"bar")
+ c.save(preserve_version=False)
+ coll_record = arvados.api().collections().get(uuid=c.manifest_locator()).execute()
+ self.assertEqual(coll_record['version'], 3)
+ self.assertEqual(coll_record['preserve_version'], False)
+
def test_get_manifest_text_only_committed(self):
c = Collection()
with c.open("count.txt", "wb") as f:
def test_KeepLongBinaryRWTest(self):
blob_data = b'\xff\xfe\xfd\xfc\x00\x01\x02\x03'
- for i in range(0,23):
+ for i in range(0, 23):
blob_data = blob_data + blob_data
blob_locator = self.keep_client.put(blob_data)
self.assertRegex(
# First response was not cached because it was from a HEAD request.
self.assertNotEqual(head_resp, get_resp)
+@tutil.skip_sleep
+class KeepStorageClassesTestCase(unittest.TestCase, tutil.ApiClientMock):
+ def setUp(self):
+ self.api_client = self.mock_keep_services(count=2)
+ self.keep_client = arvados.KeepClient(api_client=self.api_client)
+ self.data = b'xyzzy'
+ self.locator = '1271ed5ef305aadabc605b1609e24c52'
+
+ def test_multiple_default_storage_classes_req_header(self):
+ api_mock = self.api_client_mock()
+ api_mock.config.return_value = {
+ 'StorageClasses': {
+ 'foo': { 'Default': True },
+ 'bar': { 'Default': True },
+ 'baz': { 'Default': False }
+ }
+ }
+ api_client = self.mock_keep_services(api_mock=api_mock, count=2)
+ keep_client = arvados.KeepClient(api_client=api_client)
+ resp_hdr = {
+ 'x-keep-storage-classes-confirmed': 'foo=1, bar=1',
+ 'x-keep-replicas-stored': 1
+ }
+ with tutil.mock_keep_responses(self.locator, 200, **resp_hdr) as mock:
+ keep_client.put(self.data, copies=1)
+ req_hdr = mock.responses[0]
+ self.assertIn(
+ 'X-Keep-Storage-Classes: bar, foo', req_hdr.getopt(pycurl.HTTPHEADER))
+
+ def test_storage_classes_req_header(self):
+ self.assertEqual(
+ self.api_client.config()['StorageClasses'],
+ {'default': {'Default': True}})
+ cases = [
+ # requested, expected
+ [['foo'], 'X-Keep-Storage-Classes: foo'],
+ [['bar', 'foo'], 'X-Keep-Storage-Classes: bar, foo'],
+ [[], 'X-Keep-Storage-Classes: default'],
+ [None, 'X-Keep-Storage-Classes: default'],
+ ]
+ for req_classes, expected_header in cases:
+ headers = {'x-keep-replicas-stored': 1}
+ if req_classes is None or len(req_classes) == 0:
+ confirmed_hdr = 'default=1'
+ elif len(req_classes) > 0:
+ confirmed_hdr = ', '.join(["{}=1".format(cls) for cls in req_classes])
+ headers.update({'x-keep-storage-classes-confirmed': confirmed_hdr})
+ with tutil.mock_keep_responses(self.locator, 200, **headers) as mock:
+ self.keep_client.put(self.data, copies=1, classes=req_classes)
+ req_hdr = mock.responses[0]
+ self.assertIn(expected_header, req_hdr.getopt(pycurl.HTTPHEADER))
+
+ def test_partial_storage_classes_put(self):
+ headers = {
+ 'x-keep-replicas-stored': 1,
+ 'x-keep-storage-classes-confirmed': 'foo=1'}
+ with tutil.mock_keep_responses(self.locator, 200, 503, **headers) as mock:
+ with self.assertRaises(arvados.errors.KeepWriteError):
+ self.keep_client.put(self.data, copies=1, classes=['foo', 'bar'])
+ # 1st request, both classes pending
+ req1_headers = mock.responses[0].getopt(pycurl.HTTPHEADER)
+ self.assertIn('X-Keep-Storage-Classes: bar, foo', req1_headers)
+ # 2nd try, 'foo' class already satisfied
+ req2_headers = mock.responses[1].getopt(pycurl.HTTPHEADER)
+ self.assertIn('X-Keep-Storage-Classes: bar', req2_headers)
+
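The partial-success case above depends on the client dropping classes already confirmed before it retries, so the second request asks only for `bar`. A hedged sketch of that bookkeeping (illustrative only; the real client tracks this state internally):

```python
def remaining_classes(requested, confirmed):
    # Given the requested storage classes and a {class: copies} dict of
    # confirmed replicas, return the classes still needing at least one
    # copy, sorted the way the client formats the request header.
    return sorted(c for c in requested if confirmed.get(c, 0) < 1)
```

With `['foo', 'bar']` requested and only `foo` confirmed, this yields `['bar']`, matching the second request's `X-Keep-Storage-Classes: bar` header.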
+ def test_successful_storage_classes_put_requests(self):
+ cases = [
+ # wanted_copies, wanted_classes, confirmed_copies, confirmed_classes, expected_requests
+ [ 1, ['foo'], 1, 'foo=1', 1],
+ [ 1, ['foo'], 2, 'foo=2', 1],
+ [ 2, ['foo'], 2, 'foo=2', 1],
+ [ 2, ['foo'], 1, 'foo=1', 2],
+ [ 1, ['foo', 'bar'], 1, 'foo=1, bar=1', 1],
+ [ 1, ['foo', 'bar'], 2, 'foo=2, bar=2', 1],
+ [ 2, ['foo', 'bar'], 2, 'foo=2, bar=2', 1],
+ [ 2, ['foo', 'bar'], 1, 'foo=1, bar=1', 2],
+ [ 1, ['foo', 'bar'], 1, None, 1],
+ [ 1, ['foo'], 1, None, 1],
+ [ 2, ['foo'], 2, None, 1],
+ [ 2, ['foo'], 1, None, 2],
+ ]
+ for w_copies, w_classes, c_copies, c_classes, e_reqs in cases:
+ headers = {'x-keep-replicas-stored': c_copies}
+ if c_classes is not None:
+ headers.update({'x-keep-storage-classes-confirmed': c_classes})
+ with tutil.mock_keep_responses(self.locator, 200, 200, **headers) as mock:
+ case_desc = 'wanted_copies={}, wanted_classes="{}", confirmed_copies={}, confirmed_classes="{}", expected_requests={}'.format(w_copies, ', '.join(w_classes), c_copies, c_classes, e_reqs)
+ self.assertEqual(self.locator,
+ self.keep_client.put(self.data, copies=w_copies, classes=w_classes),
+ case_desc)
+ self.assertEqual(e_reqs, mock.call_count, case_desc)
+
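The `expected_requests` column in the cases table follows from how many replicas each mocked response confirms: the client keeps issuing PUTs until the confirmed total reaches the wanted copies. A sketch of that arithmetic, assuming every successful response confirms the same number of copies:

```python
import math

def expected_successful_requests(wanted_copies, copies_per_response):
    # Each successful response confirms copies_per_response replicas, so
    # the client needs ceil(wanted / per_response) successful requests.
    return math.ceil(wanted_copies / copies_per_response)
```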
+ def test_failed_storage_classes_put_requests(self):
+ cases = [
+ # wanted_copies, wanted_classes, confirmed_copies, confirmed_classes, return_code
+ [ 1, ['foo'], 1, 'bar=1', 200],
+ [ 1, ['foo'], 1, None, 503],
+ [ 2, ['foo'], 1, 'bar=1, foo=0', 200],
+ [ 3, ['foo'], 1, 'bar=1, foo=1', 200],
+ [ 3, ['foo', 'bar'], 1, 'bar=2, foo=1', 200],
+ ]
+ for w_copies, w_classes, c_copies, c_classes, return_code in cases:
+ headers = {'x-keep-replicas-stored': c_copies}
+ if c_classes is not None:
+ headers.update({'x-keep-storage-classes-confirmed': c_classes})
+ with tutil.mock_keep_responses(self.locator, return_code, return_code, **headers):
+ case_desc = 'wanted_copies={}, wanted_classes="{}", confirmed_copies={}, confirmed_classes="{}"'.format(w_copies, ', '.join(w_classes), c_copies, c_classes)
+ with self.assertRaises(arvados.errors.KeepWriteError, msg=case_desc):
+ self.keep_client.put(self.data, copies=w_copies, classes=w_classes)
@tutil.skip_sleep
class KeepXRequestIdTestCase(unittest.TestCase, tutil.ApiClientMock):
self.keep_client.head(self.locator)
self.assertAutomaticRequestId(mock.responses[0])
+ def test_request_id_in_exception(self):
+ with tutil.mock_keep_responses(b'', 400, 400, 400) as mock:
+ with self.assertRaisesRegex(arvados.errors.KeepReadError, self.test_id):
+ self.keep_client.head(self.locator, request_id=self.test_id)
+
+ with tutil.mock_keep_responses(b'', 400, 400, 400) as mock:
+ with self.assertRaisesRegex(arvados.errors.KeepReadError, r'req-[a-z0-9]{20}'):
+ self.keep_client.get(self.locator)
+
+ with tutil.mock_keep_responses(b'', 400, 400, 400) as mock:
+ with self.assertRaisesRegex(arvados.errors.KeepWriteError, self.test_id):
+ self.keep_client.put(self.data, request_id=self.test_id)
+
+ with tutil.mock_keep_responses(b'', 400, 400, 400) as mock:
+ with self.assertRaisesRegex(arvados.errors.KeepWriteError, r'req-[a-z0-9]{20}'):
+ self.keep_client.put(self.data)
+
def assertAutomaticRequestId(self, resp):
hdr = [x for x in resp.getopt(pycurl.HTTPHEADER)
if x.startswith('X-Request-Id: ')][0]
self._result = {}
self._result['headers'] = {}
self._result['headers']['x-keep-replicas-stored'] = str(replicas)
+ self._result['headers']['x-keep-storage-classes-confirmed'] = 'default={}'.format(replicas)
self._result['body'] = 'foobar'
- def put(self, data_hash, data, timeout):
+ def put(self, data_hash, data, timeout, headers):
time.sleep(self.delay)
if self.will_raise is not None:
raise self.will_raise
def last_result(self):
if self.will_succeed:
return self._result
+ else:
+ return {"status_code": 500, "body": "didn't succeed"}
def finished(self):
return False
ks = self.FakeKeepService(delay=i/10.0, will_succeed=True)
self.pool.add_task(ks, None)
self.pool.join()
- self.assertEqual(self.pool.done(), self.copies)
+ self.assertEqual(self.pool.done(), (self.copies, []))
def test_only_write_enough_on_partial_success(self):
for i in range(5):
ks = self.FakeKeepService(delay=i/10.0, will_succeed=True)
self.pool.add_task(ks, None)
self.pool.join()
- self.assertEqual(self.pool.done(), self.copies)
+ self.assertEqual(self.pool.done(), (self.copies, []))
def test_only_write_enough_when_some_crash(self):
for i in range(5):
ks = self.FakeKeepService(delay=i/10.0, will_succeed=True)
self.pool.add_task(ks, None)
self.pool.join()
- self.assertEqual(self.pool.done(), self.copies)
+ self.assertEqual(self.pool.done(), (self.copies, []))
def test_fail_when_too_many_crash(self):
for i in range(self.copies+1):
ks = self.FakeKeepService(delay=i/10.0, will_succeed=True)
self.pool.add_task(ks, None)
self.pool.join()
- self.assertEqual(self.pool.done(), self.copies-1)
+ self.assertEqual(self.pool.done(), (self.copies-1, []))
@tutil.skip_sleep
return "abc"
elif r == "insecure":
return False
+ elif r == "config":
+ return lambda: {}
else:
raise arvados.errors.KeepReadError()
keep_client = arvados.KeepClient(api_client=ApiMock(),
s.add_dependency('activesupport', '>= 3')
s.add_dependency('andand', '~> 1.3', '>= 1.3.3')
# Our google-api-client dependency used to be < 0.9, but that could be
- # satisfied by the buggy 0.9.pre*. https://dev.arvados.org/issues/9213
- s.add_dependency('arvados-google-api-client', '>= 0.7', '< 0.8.9')
+ # satisfied by the buggy 0.9.pre*, cf. https://dev.arvados.org/issues/9213
+ # We need at least version 0.8.7.3, cf. https://dev.arvados.org/issues/15673
+ s.add_dependency('arvados-google-api-client', '>= 0.8.7.3', '< 0.8.9')
# work around undeclared dependency on i18n in some activesupport 3.x.x:
s.add_dependency('i18n', '~> 0')
s.add_dependency('json', '>= 1.7.7', '<3')
- # arvados-google-api-client 0.8.7.2 is incompatible with faraday 0.16.2
- s.add_dependency('faraday', '< 0.16')
+ # Avoid warning on Ruby 2.7, cf. https://dev.arvados.org/issues/18247
+ s.add_dependency('faraday', '>= 0.17.4')
s.add_runtime_dependency('jwt', '<2', '>= 0.1.5')
s.homepage =
'https://arvados.org'
# SPDX-License-Identifier: Apache-2.0
require 'google/api_client'
-# Monkeypatch google-api-client gem to avoid sending newline characters
-# on headers to make ruby-2.3.7+ happy.
-# See: https://dev.arvados.org/issues/13920
-Google::APIClient::ENV::OS_VERSION.strip!
-
require 'json'
require 'tempfile'
gem 'listen'
end
-# Fast app boot times
-gem 'bootsnap', require: false
-
gem 'pg', '~> 1.0'
gem 'multi_json'
# Locking to 5.10.3 to workaround issue in 5.11.1 (https://github.com/seattlerb/minitest/issues/730)
gem 'minitest', '5.10.3'
-# Restricted because omniauth >= 1.5.0 requires Ruby >= 2.1.9:
-gem 'omniauth', '~> 1.4.0'
-gem 'omniauth-oauth2', '~> 1.1'
-
gem 'andand'
gem 'optimist'
gem 'themes_for_rails', git: 'https://github.com/arvados/themes_for_rails'
-# Import arvados gem. Note: actual git commit is pinned via Gemfile.lock
-gem 'arvados', git: 'https://github.com/arvados/arvados.git', glob: 'sdk/ruby/arvados.gemspec'
+# Import arvados gem.
+gem 'arvados', '~> 2.1.5'
gem 'httpclient'
gem 'sshkey'
-GIT
- remote: https://github.com/arvados/arvados.git
- revision: 81725af5d5d2e6cd18ba7099ba5fb1fc520f4f8c
- glob: sdk/ruby/arvados.gemspec
- specs:
- arvados (1.5.0.pre20200114202620)
- activesupport (>= 3)
- andand (~> 1.3, >= 1.3.3)
- arvados-google-api-client (>= 0.7, < 0.8.9)
- faraday (< 0.16)
- i18n (~> 0)
- json (>= 1.7.7, < 3)
- jwt (>= 0.1.5, < 2)
-
GIT
remote: https://github.com/arvados/themes_for_rails
revision: ddf6e592b3b6493ea0c2de7b5d3faa120ed35be0
GEM
remote: https://rubygems.org/
specs:
- actioncable (5.2.4.3)
- actionpack (= 5.2.4.3)
+ actioncable (5.2.6)
+ actionpack (= 5.2.6)
nio4r (~> 2.0)
websocket-driver (>= 0.6.1)
- actionmailer (5.2.4.3)
- actionpack (= 5.2.4.3)
- actionview (= 5.2.4.3)
- activejob (= 5.2.4.3)
+ actionmailer (5.2.6)
+ actionpack (= 5.2.6)
+ actionview (= 5.2.6)
+ activejob (= 5.2.6)
mail (~> 2.5, >= 2.5.4)
rails-dom-testing (~> 2.0)
- actionpack (5.2.4.3)
- actionview (= 5.2.4.3)
- activesupport (= 5.2.4.3)
+ actionpack (5.2.6)
+ actionview (= 5.2.6)
+ activesupport (= 5.2.6)
rack (~> 2.0, >= 2.0.8)
rack-test (>= 0.6.3)
rails-dom-testing (~> 2.0)
rails-html-sanitizer (~> 1.0, >= 1.0.2)
- actionview (5.2.4.3)
- activesupport (= 5.2.4.3)
+ actionview (5.2.6)
+ activesupport (= 5.2.6)
builder (~> 3.1)
erubi (~> 1.4)
rails-dom-testing (~> 2.0)
rails-html-sanitizer (~> 1.0, >= 1.0.3)
- activejob (5.2.4.3)
- activesupport (= 5.2.4.3)
+ activejob (5.2.6)
+ activesupport (= 5.2.6)
globalid (>= 0.3.6)
- activemodel (5.2.4.3)
- activesupport (= 5.2.4.3)
- activerecord (5.2.4.3)
- activemodel (= 5.2.4.3)
- activesupport (= 5.2.4.3)
+ activemodel (5.2.6)
+ activesupport (= 5.2.6)
+ activerecord (5.2.6)
+ activemodel (= 5.2.6)
+ activesupport (= 5.2.6)
arel (>= 9.0)
- activestorage (5.2.4.3)
- actionpack (= 5.2.4.3)
- activerecord (= 5.2.4.3)
- marcel (~> 0.3.1)
- activesupport (5.2.4.3)
+ activestorage (5.2.6)
+ actionpack (= 5.2.6)
+ activerecord (= 5.2.6)
+ marcel (~> 1.0.0)
+ activesupport (5.2.6)
concurrent-ruby (~> 1.0, >= 1.0.2)
i18n (>= 0.7, < 2)
minitest (~> 5.1)
activemodel (>= 3.0.0)
activesupport (>= 3.0.0)
rack (>= 1.1.0)
- addressable (2.7.0)
+ addressable (2.8.0)
public_suffix (>= 2.0.2, < 5.0)
andand (1.3.3)
arel (9.0.0)
+ arvados (2.1.5)
+ activesupport (>= 3)
+ andand (~> 1.3, >= 1.3.3)
+ arvados-google-api-client (>= 0.7, < 0.8.9)
+ faraday (< 0.16)
+ i18n (~> 0)
+ json (>= 1.7.7, < 3)
+ jwt (>= 0.1.5, < 2)
arvados-google-api-client (0.8.7.4)
activesupport (>= 3.2, < 5.3)
addressable (~> 2.3)
addressable (>= 2.3.1)
extlib (>= 0.9.15)
multi_json (>= 1.0.0)
- bootsnap (1.4.7)
- msgpack (~> 1.0)
builder (3.2.4)
byebug (11.0.1)
capistrano (2.15.9)
net-sftp (>= 2.0.0)
net-ssh (>= 2.0.14)
net-ssh-gateway (>= 1.1.0)
- concurrent-ruby (1.1.6)
+ concurrent-ruby (1.1.9)
crass (1.0.6)
- erubi (1.9.0)
+ erubi (1.10.0)
execjs (2.7.0)
extlib (0.9.16)
factory_bot (5.0.2)
multi_json (~> 1.11)
os (>= 0.9, < 2.0)
signet (~> 0.7)
- hashie (3.6.0)
highline (2.0.1)
httpclient (2.8.3)
i18n (0.9.5)
rails-dom-testing (>= 1, < 3)
railties (>= 4.2.0)
thor (>= 0.14, < 2.0)
- json (2.3.0)
+ json (2.5.1)
jwt (1.5.6)
- launchy (2.4.3)
- addressable (~> 2.3)
+ launchy (2.5.0)
+ addressable (~> 2.7)
libv8 (3.16.14.19)
listen (3.2.1)
rb-fsevent (~> 0.10, >= 0.10.3)
railties (>= 4)
request_store (~> 1.0)
logstash-event (1.2.02)
- loofah (2.6.0)
+ loofah (2.10.0)
crass (~> 1.0.2)
nokogiri (>= 1.5.9)
mail (2.7.1)
mini_mime (>= 0.1.1)
- marcel (0.3.3)
- mimemagic (~> 0.3.2)
+ marcel (1.0.1)
memoist (0.16.2)
metaclass (0.0.4)
method_source (1.0.0)
- mimemagic (0.3.5)
- mini_mime (1.0.2)
- mini_portile2 (2.4.0)
+ mini_mime (1.1.0)
+ mini_portile2 (2.6.1)
minitest (5.10.3)
mocha (1.8.0)
metaclass (~> 0.0.1)
- msgpack (1.3.3)
- multi_json (1.14.1)
- multi_xml (0.6.0)
+ multi_json (1.15.0)
multipart-post (2.1.1)
net-scp (2.0.0)
net-ssh (>= 2.6.5, < 6.0.0)
net-ssh (5.2.0)
net-ssh-gateway (2.0.0)
net-ssh (>= 4.0.0)
- nio4r (2.5.2)
- nokogiri (1.10.10)
- mini_portile2 (~> 2.4.0)
- oauth2 (1.4.1)
- faraday (>= 0.8, < 0.16.0)
- jwt (>= 1.0, < 3.0)
- multi_json (~> 1.3)
- multi_xml (~> 0.5)
- rack (>= 1.2, < 3)
+ nio4r (2.5.7)
+ nokogiri (1.12.5)
+ mini_portile2 (~> 2.6.1)
+ racc (~> 1.4)
oj (3.9.2)
- omniauth (1.4.3)
- hashie (>= 1.2, < 4)
- rack (>= 1.6.2, < 3)
- omniauth-oauth2 (1.5.0)
- oauth2 (~> 1.1)
- omniauth (~> 1.2)
optimist (3.0.0)
- os (1.0.1)
+ os (1.1.1)
passenger (6.0.2)
rack
rake (>= 0.8.1)
pg (1.1.4)
power_assert (1.1.4)
- public_suffix (4.0.3)
+ public_suffix (4.0.6)
+ racc (1.6.0)
rack (2.2.3)
rack-test (1.1.0)
rack (>= 1.0, < 3)
- rails (5.2.4.3)
- actioncable (= 5.2.4.3)
- actionmailer (= 5.2.4.3)
- actionpack (= 5.2.4.3)
- actionview (= 5.2.4.3)
- activejob (= 5.2.4.3)
- activemodel (= 5.2.4.3)
- activerecord (= 5.2.4.3)
- activestorage (= 5.2.4.3)
- activesupport (= 5.2.4.3)
+ rails (5.2.6)
+ actioncable (= 5.2.6)
+ actionmailer (= 5.2.6)
+ actionpack (= 5.2.6)
+ actionview (= 5.2.6)
+ activejob (= 5.2.6)
+ activemodel (= 5.2.6)
+ activerecord (= 5.2.6)
+ activestorage (= 5.2.6)
+ activesupport (= 5.2.6)
bundler (>= 1.3.0)
- railties (= 5.2.4.3)
+ railties (= 5.2.6)
sprockets-rails (>= 2.0.0)
rails-controller-testing (1.0.4)
actionpack (>= 5.0.1.x)
rails-observers (0.1.5)
activemodel (>= 4.0)
rails-perftest (0.0.7)
- railties (5.2.4.3)
- actionpack (= 5.2.4.3)
- activesupport (= 5.2.4.3)
+ railties (5.2.6)
+ actionpack (= 5.2.6)
+ activesupport (= 5.2.6)
method_source
rake (>= 0.8.7)
thor (>= 0.19.0, < 2.0)
- rake (13.0.1)
+ rake (13.0.3)
rb-fsevent (0.10.3)
rb-inotify (0.9.10)
ffi (>= 0.5.0, < 2)
sprockets (3.7.2)
concurrent-ruby (~> 1.0)
rack (> 1, < 3)
- sprockets-rails (3.2.1)
+ sprockets-rails (3.2.2)
actionpack (>= 4.0)
activesupport (>= 4.0)
sprockets (>= 3.0.0)
therubyracer (0.12.3)
libv8 (~> 3.16.14.15)
ref
- thor (1.0.1)
+ thor (1.1.0)
thread_safe (0.3.6)
tilt (2.0.8)
- tzinfo (1.2.7)
+ tzinfo (1.2.9)
thread_safe (~> 0.1)
uglifier (2.7.2)
execjs (>= 0.3.0)
json (>= 1.8.0)
- websocket-driver (0.7.3)
+ websocket-driver (0.7.4)
websocket-extensions (>= 0.1.0)
websocket-extensions (0.1.5)
DEPENDENCIES
acts_as_api
andand
- arvados!
- bootsnap
+ arvados (~> 2.1.5)
byebug
factory_bot_rails
httpclient
mocha
multi_json
oj
- omniauth (~> 1.4.0)
- omniauth-oauth2 (~> 1.1)
optimist
passenger
pg (~> 1.0)
uglifier (~> 2.0)
BUNDLED WITH
- 1.17.3
+ 2.2.19
border-bottom: 1px solid #fff;
font-size: 0.8em;
}
-img.curoverse-logo {
+img.arvados-logo {
height: 66px;
}
#intropage {
before_action :catch_redirect_hint
before_action :load_required_parameters
+ before_action :load_limit_offset_order_params, only: [:index, :contents]
+ before_action :load_select_param
before_action(:find_object_by_uuid,
except: [:index, :create] + ERROR_ACTIONS)
- before_action(:set_nullable_attrs_to_null, only: [:update, :create])
- before_action :load_limit_offset_order_params, only: [:index, :contents]
before_action :load_where_param, only: [:index, :contents]
before_action :load_filters_param, only: [:index, :contents]
before_action :find_objects_for_index, :only => :index
+ before_action(:set_nullable_attrs_to_null, only: [:update, :create])
before_action :reload_object_before_update, :only => :update
before_action(:render_404_if_no_object,
except: [:index, :create] + ERROR_ACTIONS)
end
err[:errors] ||= args
err[:errors].map! do |err|
- err += " (" + Thread.current[:request_id] + ")"
+ err += " (#{request.request_id})"
end
err[:error_token] = [Time.now.utc.to_i, "%08x" % rand(16 ** 8)].join("+")
status = err.delete(:status) || 422
@objects = @objects.order(@orders.join ", ") if @orders.any?
@objects = @objects.limit(@limit)
@objects = @objects.offset(@offset)
- @objects = @objects.distinct(@distinct) if not @distinct.nil?
+ @objects = @objects.distinct() if @distinct
end
# limit_database_read ensures @objects (which must be an
if not current_user
respond_to do |format|
format.json { send_error("Not logged in", status: 401) }
- format.html { redirect_to '/auth/joshid' }
+ format.html { redirect_to '/login' }
end
false
end
end
def set_current_request_id
- req_id = request.headers['X-Request-Id']
- if !req_id || req_id.length < 1 || req_id.length > 1024
- # Client-supplied ID is either missing or too long to be
- # considered friendly.
- req_id = "req-" + Random::DEFAULT.rand(2**128).to_s(36)[0..19]
- end
- response.headers['X-Request-Id'] = Thread.current[:request_id] = req_id
- Rails.logger.tagged(req_id) do
+ Rails.logger.tagged(request.request_id) do
yield
end
- Thread.current[:request_id] = nil
end
def append_info_to_payload(payload)
super
- payload[:request_id] = response.headers['X-Request-Id']
+ payload[:request_id] = request.request_id
payload[:client_ipaddr] = @remote_ip
payload[:client_auth] = current_api_client_authorization.andand.uuid || nil
end
def self._create_requires_parameters
{
+ select: {
+ type: 'array',
+ description: "Attributes of the new object to return in the response.",
+ required: false,
+ },
ensure_unique_name: {
type: "boolean",
description: "Adjust name to ensure uniqueness instead of returning an error on (owner_uuid, name) collision.",
end
def self._update_requires_parameters
- {}
+ {
+ select: {
+ type: 'array',
+ description: "Attributes of the updated object to return in the response.",
+ required: false,
+ },
+ }
+ end
+
+ def self._show_requires_parameters
+ {
+ select: {
+ type: 'array',
+ description: "Attributes of the object to return in the response.",
+ required: false,
+ },
+ }
end
def self._index_requires_parameters
filters: { type: 'array', required: false },
where: { type: 'object', required: false },
order: { type: 'array', required: false },
- select: { type: 'array', required: false },
- distinct: { type: 'boolean', required: false },
+ select: {
+ type: 'array',
+ description: "Attributes of each object to return in the response.",
+ required: false,
+ },
+ distinct: { type: 'boolean', required: false, default: false },
limit: { type: 'integer', required: false, default: DEFAULT_LIMIT },
offset: { type: 'integer', required: false, default: 0 },
count: { type: 'string', required: false, default: 'exact' },
scopes: {type: 'array', required: false}
}
end
+
def create_system_auth
@object = ApiClientAuthorization.
new(user_id: system_user.id,
end
def current
- @object = Thread.current[:api_client_authorization]
+ @object = Thread.current[:api_client_authorization].dup
+ if params[:remote]
+ # Client is validating a salted token. Don't return the unsalted
+ # secret!
+ @object.api_token = nil
+ end
show
end
include_old_versions: params[:include_old_versions],
}
- # It matters which Collection object we pick because we use it to get signed_manifest_text,
- # the value of which is affected by the value of trash_at.
+ # It matters which Collection object we pick because blob
+ # signatures depend on the value of trash_at.
#
- # From postgres doc: "By default, null values sort as if larger than any non-null
- # value; that is, NULLS FIRST is the default for DESC order, and
- # NULLS LAST otherwise."
+ # From postgres doc: "By default, null values sort as if larger
+ # than any non-null value; that is, NULLS FIRST is the default
+ # for DESC order, and NULLS LAST otherwise."
#
# "trash_at desc" sorts null first, then latest to earliest, so
# it will select the Collection object with the longest
# available lifetime.
- if c = Collection.readable_by(*@read_users, opts).where({ portable_data_hash: loc.to_s }).order("trash_at desc").limit(1).first
+ select_attrs = (@select || ["manifest_text"]) | ["portable_data_hash", "trash_at"]
+ if c = Collection.
+ readable_by(*@read_users, opts).
+ where({ portable_data_hash: loc.to_s }).
+ order("trash_at desc").
+ select(select_attrs.join(", ")).
+ limit(1).
+ first
@object = {
uuid: c.portable_data_hash,
portable_data_hash: c.portable_data_hash,
- manifest_text: c.signed_manifest_text,
+ trash_at: c.trash_at,
}
+ if select_attrs.index("manifest_text")
+ @object[:manifest_text] = c.manifest_text
+ end
end
else
super
protected
- def load_limit_offset_order_params *args
+ def load_select_param *args
super
if action_name == 'index'
# Omit manifest_text and unsigned_manifest_text from index results unless expressly selected.
skip_before_action :find_object_by_uuid, only: :shared
skip_before_action :render_404_if_no_object, only: :shared
+ TRASHABLE_CLASSES = ['project']
+
def self._index_requires_parameters
(super rescue {}).
merge({
params = _index_requires_parameters.
merge({
uuid: {
- type: 'string', required: false, default: nil,
+ type: 'string', required: false, default: '',
},
recursive: {
type: 'boolean', required: false, default: false, description: 'Include contents from child groups recursively.',
end
end
+ def destroy
+ if !TRASHABLE_CLASSES.include?(@object.group_class)
+            @object.destroy
+            show
+ else
+ super # Calls destroy from TrashableController module
+ end
+ end
+
def render_404_if_no_object
if params[:action] == 'contents'
if !params[:uuid]
:self_link => "",
:offset => @offset,
:limit => @limit,
- :items_available => @items_available,
:items => @objects.as_api_response(nil)
}
+ if params[:count] != 'none'
+ list[:items_available] = @items_available
+ end
if @extra_included
list[:included] = @extra_included.as_api_response(nil, {select: @select})
end
# apply to each table being searched, not "groups".
load_limit_offset_order_params(fill_table_names: false)
+ if params['count'] == 'none' and @offset != 0 and (params['last_object_class'].nil? or params['last_object_class'].empty?)
+ # can't use offset without getting counts, so
+ # fall back to count=exact behavior.
+ params['count'] = 'exact'
+ set_count_none = true
+ end
+
# Trick apply_where_limit_order_params into applying suitable
# per-table values. *_all are the real ones we'll apply to the
# aggregate set.
seen_last_class = false
klasses.each do |klass|
- @offset = 0 if seen_last_class # reset offset for the new next type being processed
-
- # if current klass is same as params['last_object_class'], mark that fact
+ # check if current klass is same as params['last_object_class']
seen_last_class = true if((params['count'].andand.==('none')) and
(params['last_object_class'].nil? or
params['last_object_class'].empty? or
# if klasses are specified, skip all other klass types
next if wanted_klasses.any? and !wanted_klasses.include?(klass.to_s)
- # don't reprocess klass types that were already seen
+ # if specified, and count=none, then only look at the klass in
+ # last_object_class.
+ # for whatever reason, this parameter exists separately from 'wanted_klasses'
next if params['count'] == 'none' and !seen_last_class
# don't process rest of object types if we already have needed number of objects
if klass == Collection
@select = klass.selectable_attributes - ["manifest_text", "unsigned_manifest_text"]
elsif klass == Group
- where_conds = where_conds.merge(group_class: "project")
+ where_conds = where_conds.merge(group_class: ["project","filter"])
end
@filters = request_filters.map do |col, op, val|
@objects = exclude_home @objects, klass
end
+ # Adjust the limit based on number of objects fetched so far
klass_limit = limit_all - all_objects.count
@limit = klass_limit
apply_where_limit_order_params klass
+
+ # This actually fetches the objects
klass_object_list = object_list(model_class: klass)
+
+ # If count=none, :items_available will be nil, and offset is
+ # required to be 0.
klass_items_available = klass_object_list[:items_available] || 0
@items_available += klass_items_available
@offset = [@offset - klass_items_available, 0].max
+
+ # Add objects to the list of objects to be returned.
all_objects += klass_object_list[:items]
if klass_object_list[:limit] < klass_limit
@extra_included = included_by_uuid.values
end
+ if set_count_none
+ params['count'] = 'none'
+ end
+
@objects = all_objects
@limit = limit_all
@offset = offset_all
end
- protected
-
def exclude_home objectlist, klass
# select records that are readable by current user AND
# the owner_uuid is a user (but not the current user) OR
skip_before_action :find_object_by_uuid
skip_before_action :load_filters_param
skip_before_action :load_limit_offset_order_params
+ skip_before_action :load_select_param
skip_before_action :load_read_auths
skip_before_action :load_where_param
skip_before_action :render_404_if_no_object
.first
else
super
- if @object.nil?
- # Normally group permission links are not readable_by users.
- # Make an exception for users with permission to manage the group.
- # FIXME: Solve this more generally - see the controller tests.
- link = Link.find_by_uuid(params[:uuid])
- if (not link.nil?) and
- (link.link_class == "permission") and
- (@read_users.any? { |u| u.can?(manage: link.head_uuid) })
- @object = link
- end
- end
end
end
skip_before_action :find_object_by_uuid
skip_before_action :load_filters_param
skip_before_action :load_limit_offset_order_params
+ skip_before_action :load_select_param
skip_before_action :load_read_auths
skip_before_action :load_where_param
skip_before_action :render_404_if_no_object
# format is YYYYMMDD, must be fixed width (needs to be lexically
# sortable), updated manually, may be used by clients to
# determine availability of API server features.
- revision: "20201210",
+ revision: "20210628",
source_version: AppVersion.hash,
sourceVersion: AppVersion.hash, # source_version should be deprecated in the future
packageVersion: AppVersion.package_version,
auth: {
oauth2: {
scopes: {
- "https://api.curoverse.com/auth/arvados" => {
+ "https://api.arvados.org/auth/arvados" => {
description: "View and manage objects"
},
- "https://api.curoverse.com/auth/arvados.readonly" => {
+ "https://api.arvados.org/auth/arvados.readonly" => {
description: "View objects"
}
}
"$ref" => k.to_s
},
scopes: [
- "https://api.curoverse.com/auth/arvados",
- "https://api.curoverse.com/auth/arvados.readonly"
+ "https://api.arvados.org/auth/arvados",
+ "https://api.arvados.org/auth/arvados.readonly"
]
},
index: {
"$ref" => "#{k.to_s}List"
},
scopes: [
- "https://api.curoverse.com/auth/arvados",
- "https://api.curoverse.com/auth/arvados.readonly"
+ "https://api.arvados.org/auth/arvados",
+ "https://api.arvados.org/auth/arvados.readonly"
]
},
create: {
"$ref" => k.to_s
},
scopes: [
- "https://api.curoverse.com/auth/arvados"
+ "https://api.arvados.org/auth/arvados"
]
},
update: {
"$ref" => k.to_s
},
scopes: [
- "https://api.curoverse.com/auth/arvados"
+ "https://api.arvados.org/auth/arvados"
]
},
delete: {
"$ref" => k.to_s
},
scopes: [
- "https://api.curoverse.com/auth/arvados"
+ "https://api.arvados.org/auth/arvados"
]
}
}
"$ref" => (action == 'index' ? "#{k.to_s}List" : k.to_s)
},
scopes: [
- "https://api.curoverse.com/auth/arvados"
+ "https://api.arvados.org/auth/arvados"
]
}
route.segment_keys.each do |key|
response: {
},
scopes: [
- "https://api.curoverse.com/auth/arvados",
- "https://api.curoverse.com/auth/arvados.readonly"
+ "https://api.arvados.org/auth/arvados",
+ "https://api.arvados.org/auth/arvados.readonly"
+ ]
+ },
+ }
+ }
+
+ discovery[:resources]['sys'] = {
+ methods: {
+ get: {
+ id: "arvados.sys.trash_sweep",
+ path: "sys/trash_sweep",
+ httpMethod: "POST",
+ description: "apply scheduled trash and delete operations",
+ parameters: {
+ },
+ parameterOrder: [
+ ],
+ response: {
+ },
+ scopes: [
+ "https://api.arvados.org/auth/arvados",
+ "https://api.arvados.org/auth/arvados.readonly"
]
},
}
[:activate, :current, :system, :setup, :merge, :batch_update]
skip_before_action :render_404_if_no_object, only:
[:activate, :current, :system, :setup, :merge, :batch_update]
- before_action :admin_required, only: [:setup, :unsetup, :update_uuid, :batch_update]
+ before_action :admin_required, only: [:setup, :unsetup, :batch_update]
# Internal API used by controller to update local cache of user
# records from LoginCluster.
end
end
if needupdate.length > 0
- u.update_attributes!(needupdate)
+ begin
+ u.update_attributes!(needupdate)
+ rescue ActiveRecord::RecordInvalid
+ loginCluster = Rails.configuration.Login.LoginCluster
+ if u.uuid[0..4] == loginCluster && !needupdate[:username].nil?
+ local_user = User.find_by_username(needupdate[:username])
+ # A cached user record from the LoginCluster is stale, reset its username
+ # and retry the update operation.
+ if local_user.andand.uuid[0..4] == loginCluster && local_user.uuid != u.uuid
+ new_username = "#{needupdate[:username]}conflict#{rand(99999999)}"
+ Rails.logger.warn("cached username '#{needupdate[:username]}' collision with user '#{local_user.uuid}' - renaming to '#{new_username}' before retrying")
+ local_user.update_attributes!({username: new_username})
+ retry
+ end
+ end
+ raise # Not the issue we're handling above
+ end
end
@objects << u
end
show
end
- # Change UUID to a new (unused) uuid and transfer all owned/linked
- # objects accordingly.
- def update_uuid
- @object.update_uuid(new_uuid: params[:new_uuid])
- show
- end
-
def merge
if (params[:old_user_uuid] || params[:new_user_uuid])
if !current_user.andand.is_admin
})
end
- def self._update_uuid_requires_parameters
- {
- new_uuid: {
- type: 'string', required: true,
- },
- }
- end
-
def apply_filters(model_class=nil)
return super if @read_users.any?(&:is_admin)
if params[:uuid] != current_user.andand.uuid
#
# SPDX-License-Identifier: AGPL-3.0
-require 'current_api_client'
+class SysController < ApplicationController
+ skip_before_action :find_object_by_uuid
+ skip_before_action :render_404_if_no_object
+ before_action :admin_required
-module SweepTrashedObjects
- extend CurrentApiClient
-
- def self.delete_project_and_contents(p_uuid)
- p = Group.find_by_uuid(p_uuid)
- if !p || p.group_class != 'project'
- raise "can't sweep group '#{p_uuid}', it may not exist or not be a project"
- end
- # First delete sub projects
- Group.where({group_class: 'project', owner_uuid: p_uuid}).each do |sub_project|
- delete_project_and_contents(sub_project.uuid)
- end
- # Next, iterate over all tables which have owner_uuid fields, with some
- # exceptions, and delete records owned by this project
- skipped_classes = ['Group', 'User']
- ActiveRecord::Base.descendants.reject(&:abstract_class?).each do |klass|
- if !skipped_classes.include?(klass.name) && klass.columns.collect(&:name).include?('owner_uuid')
- klass.where({owner_uuid: p_uuid}).destroy_all
- end
- end
- # Finally delete the project itself
- p.destroy
- end
-
- def self.sweep_now
+ def trash_sweep
act_as_system_user do
# Sweep trashed collections
Collection.
where('is_trashed = false and trash_at < statement_timestamp()').
update_all('is_trashed = true')
- # Sweep trashed projects and their contents
+ # Sweep trashed projects and their contents (as well as role
+ # groups that were trashed before #18340 when that was
+ # disallowed)
Group.
- where({group_class: 'project'}).
where('delete_at is not null and delete_at < statement_timestamp()').each do |project|
delete_project_and_contents(project.uuid)
end
Group.
- where({group_class: 'project'}).
where('is_trashed = false and trash_at < statement_timestamp()').
update_all('is_trashed = true')
# Sweep expired tokens
ActiveRecord::Base.connection.execute("DELETE from api_client_authorizations where expires_at <= statement_timestamp()")
end
+ head :no_content
end
- def self.sweep_if_stale
- return if Rails.configuration.Collections.TrashSweepInterval <= 0
- exp = Rails.configuration.Collections.TrashSweepInterval.seconds
- need = false
- Rails.cache.fetch('SweepTrashedObjects', expires_in: exp) do
- need = true
+ protected
+
+ def delete_project_and_contents(p_uuid)
+ p = Group.find_by_uuid(p_uuid)
+ if !p
+ raise "can't sweep group '#{p_uuid}', it may not exist"
+ end
+ # First delete sub projects
+ Group.where({group_class: 'project', owner_uuid: p_uuid}).each do |sub_project|
+ delete_project_and_contents(sub_project.uuid)
end
- if need
- Thread.new do
- Thread.current.abort_on_exception = false
- begin
- sweep_now
- rescue => e
- Rails.logger.error "#{e.class}: #{e}\n#{e.backtrace.join("\n\t")}"
- ensure
- # Rails 5.1+ makes test threads share a database connection, so we can't
- # close a connection shared with other threads.
- # https://github.com/rails/rails/commit/deba47799ff905f778e0c98a015789a1327d5087
- if Rails.env != "test"
- ActiveRecord::Base.connection.close
- end
- end
+ # Next, iterate over all tables which have owner_uuid fields, with some
+ # exceptions, and delete records owned by this project
+ skipped_classes = ['Group', 'User']
+ ActiveRecord::Base.descendants.reject(&:abstract_class?).each do |klass|
+ if !skipped_classes.include?(klass.name) && klass.columns.collect(&:name).include?('owner_uuid')
+ klass.where({owner_uuid: p_uuid}).destroy_all
end
end
+ # Finally delete the project itself
+ p.destroy
end
end
respond_to :html
- # omniauth callback method
+ # create a new session
def create
if !Rails.configuration.Login.LoginCluster.empty? and Rails.configuration.Login.LoginCluster != Rails.configuration.ClusterID
raise "Local login disabled when LoginCluster is set"
authinfo = SafeJSON.load(params[:auth_info])
max_expires_at = authinfo["expires_at"]
else
- # omniauth middleware verified the user and is passing auth_info
- # in request.env.
- authinfo = request.env['omniauth.auth']['info'].with_indifferent_access
+ return send_error "Legacy code path no longer supported", status: 404
end
if !authinfo['user_uuid'].blank?
flash[:notice] = params[:message]
end
- # logout - Clear our rack session BUT essentially redirect to the provider
- # to clean up the Devise session from there too !
+  # logout - this gets intercepted by the controller, so it is probably
+  # mostly dead code at this point.
def logout
session[:user_id] = nil
flash[:notice] = 'You have logged off'
return_to = params[:return_to] || root_url
- redirect_to "#{Rails.configuration.Services.SSO.ExternalURL}users/sign_out?redirect_uri=#{CGI.escape return_to}"
+ redirect_to return_to
end
- # login - Just bounce to /auth/joshid. The only purpose of this function is
- # to save the return_to parameter (if it exists; see the application
- # controller). /auth/joshid bypasses the application controller.
+ # login. Redirect to LoginCluster.
def login
if params[:remote] !~ /^[0-9a-z]{5}$/ && !params[:remote].nil?
return send_error 'Invalid remote cluster id', status: 400
p << "return_to=#{CGI.escape(params[:return_to])}" if params[:return_to]
redirect_to "#{login_cluster}/login?#{p.join('&')}"
else
- if params[:return_to]
- # Encode remote param inside callback's return_to, so that we'll get it on
- # create() after login.
- remote_param = params[:remote].nil? ? '' : params[:remote]
- p << "return_to=#{CGI.escape(remote_param + ',' + params[:return_to])}"
- end
- redirect_to "/auth/joshid?#{p.join('&')}"
+ return send_error "Legacy code path no longer supported", status: 404
end
end
end
if Rails.configuration.Login.TokenLifetime > 0
if token_expiration == nil
- token_expiration = Time.now + Rails.configuration.Login.TokenLifetime
+ token_expiration = db_current_time + Rails.configuration.Login.TokenLifetime
else
- token_expiration = [token_expiration, Time.now + Rails.configuration.Login.TokenLifetime].min
+ token_expiration = [token_expiration, db_current_time + Rails.configuration.Login.TokenLifetime].min
end
end
def account_is_setup(user)
@user = user
- mail(to: user.email, subject: 'Welcome to Arvados - account enabled')
+ if not Rails.configuration.Users.UserNotifierEmailBcc.empty? then
+ @bcc = Rails.configuration.Users.UserNotifierEmailBcc.keys
+ mail(to: user.email, subject: 'Welcome to Arvados - account enabled', bcc: @bcc)
+ else
+ mail(to: user.email, subject: 'Welcome to Arvados - account enabled')
+ end
end
end
reader_tokens = nil
if params["remote"] && request.get? && (
request.path.start_with?('/arvados/v1/groups') ||
+ request.path.start_with?('/arvados/v1/api_client_authorizations/current') ||
request.path.start_with?('/arvados/v1/users/current'))
# Request from a remote API server, asking to validate a salted
# token.
end
def is_trusted
- (from_trusted_url && Rails.configuration.Login.TokenLifetime == 0) || super
+ (from_trusted_url && Rails.configuration.Login.IssueTrustedTokens) || super
end
protected
include KindAndEtag
include CommonApiTemplate
extend CurrentApiClient
+ extend DbCurrentTime
belongs_to :api_client
belongs_to :user
after_initialize :assign_random_api_token
serialize :scopes, Array
+ before_validation :clamp_token_expiration
+
api_accessible :user, extend: :common do |t|
t.add :owner_uuid
t.add :user_id
remote_user_prefix = remote_user['uuid'][0..4]
- if token_uuid == ''
- # Use the same UUID as the remote when caching the token.
- begin
- remote_token = SafeJSON.load(
- clnt.get_content('https://' + host + '/arvados/v1/api_client_authorizations/current',
- {'remote' => Rails.configuration.ClusterID},
- {'Authorization' => 'Bearer ' + token}))
- token_uuid = remote_token['uuid']
- if !token_uuid.match(HasUuid::UUID_REGEX) || token_uuid[0..4] != upstream_cluster_id
- raise "remote cluster #{upstream_cluster_id} returned invalid token uuid #{token_uuid.inspect}"
- end
- rescue => e
- Rails.logger.warn "error getting remote token details for #{token.inspect}: #{e}"
- return nil
+ # Get token scope, and make sure we use the same UUID as the
+ # remote when caching the token.
+ remote_token = nil
+ begin
+ remote_token = SafeJSON.load(
+ clnt.get_content('https://' + host + '/arvados/v1/api_client_authorizations/current',
+ {'remote' => Rails.configuration.ClusterID},
+ {'Authorization' => 'Bearer ' + token}))
+ Rails.logger.debug "retrieved remote token #{remote_token.inspect}"
+ token_uuid = remote_token['uuid']
+ if !token_uuid.match(HasUuid::UUID_REGEX) || token_uuid[0..4] != upstream_cluster_id
+ raise "remote cluster #{upstream_cluster_id} returned invalid token uuid #{token_uuid.inspect}"
+ end
+ rescue HTTPClient::BadResponseError => e
+ if e.res.status != 401
+ raise
end
+ rev = SafeJSON.load(clnt.get_content('https://' + host + '/discovery/v1/apis/arvados/v1/rest'))['revision']
+ if rev >= '20010101' && rev < '20210503'
+ Rails.logger.warn "remote cluster #{upstream_cluster_id} at #{host} with api rev #{rev} does not provide token expiry and scopes; using scopes=['all']"
+ else
+ # remote server is new enough that it should have accepted
+ # this request if the token was valid
+ raise
+ end
+ rescue => e
+ Rails.logger.warn "error getting remote token details for #{token.inspect}: #{e}"
+ return nil
end
# Clusters can only authenticate for their own users.
user.last_name = "from cluster #{remote_user_prefix}"
end
- user.save!
+ begin
+ user.save!
+ rescue ActiveRecord::RecordInvalid, ActiveRecord::RecordNotUnique
+ Rails.logger.debug("remote user #{remote_user['uuid']} already exists, retrying...")
+ # Some other request won the race: retry fetching the user record.
+ user = User.find_by_uuid(remote_user['uuid'])
+ if !user
+ Rails.logger.warn("cannot find or create remote user #{remote_user['uuid']}")
+ return nil
+ end
+ end
if user.is_invited && !remote_user['is_invited']
      # Remote user is not in "invited" state; they should be unsetup, which
end
end
- # We will accept this token (and avoid reloading the user
- # record) for 'RemoteTokenRefresh' (default 5 minutes).
- # Possible todo:
- # Request the actual api_client_auth record from the remote
- # server in case it wants the token to expire sooner.
- auth = ApiClientAuthorization.find_or_create_by(uuid: token_uuid) do |auth|
- auth.user = user
- auth.api_client_id = 0
- end
# If stored_secret is set, we save stored_secret in the database
# but return the real secret to the caller. This way, if we end
# up returning the auth record to the client, they see the same
# secret they supplied, instead of the HMAC we saved in the
# database.
stored_secret = stored_secret || secret
+
+ # We will accept this token (and avoid reloading the user
+ # record) for 'RemoteTokenRefresh' (default 5 minutes).
+ exp = [db_current_time + Rails.configuration.Login.RemoteTokenRefresh,
+ remote_token.andand['expires_at']].compact.min
+ scopes = remote_token.andand['scopes'] || ['all']
+ begin
+ retries ||= 0
+ auth = ApiClientAuthorization.find_or_create_by(uuid: token_uuid) do |auth|
+ auth.user = user
+ auth.api_token = stored_secret
+ auth.api_client_id = 0
+ auth.scopes = scopes
+ auth.expires_at = exp
+ end
+ rescue ActiveRecord::RecordNotUnique
+ Rails.logger.debug("cached remote token #{token_uuid} already exists, retrying...")
+ # Some other request won the race: retry just once before erroring out
+ if (retries += 1) <= 1
+ retry
+ else
+ Rails.logger.warn("cannot find or create cached remote token #{token_uuid}")
+ return nil
+ end
+ end
auth.update_attributes!(user: user,
api_token: stored_secret,
api_client_id: 0,
- expires_at: Time.now + Rails.configuration.Login.RemoteTokenRefresh)
- Rails.logger.debug "cached remote token #{token_uuid} with secret #{stored_secret} in local db"
+ scopes: scopes,
+ expires_at: exp)
+ Rails.logger.debug "cached remote token #{token_uuid} with secret #{stored_secret} and scopes #{scopes} in local db"
auth.api_token = secret
return auth
end
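The cached entry's expiry above is the earlier of the local refresh deadline and the remote token's own `expires_at`, when the latter is known. A sketch of just that rule (hypothetical helper, integer timestamps standing in for `Time` values):

```ruby
# Pick the cached-token expiry: now + RemoteTokenRefresh, capped by the
# remote token's own expires_at if the remote server reported one.
def cached_token_expiry(now, refresh, remote_expires_at)
  [now + refresh, remote_expires_at].compact.min
end
```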
protected
+ def clamp_token_expiration
+ if Rails.configuration.API.MaxTokenLifetime > 0
+ max_token_expiration = db_current_time + Rails.configuration.API.MaxTokenLifetime
+ if (self.new_record? || self.expires_at_changed?) && (self.expires_at.nil? || (self.expires_at > max_token_expiration && !current_user.andand.is_admin))
+ self.expires_at = max_token_expiration
+ end
+ end
+ end
+
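The clamping rule added above can be sketched as a pure function (hypothetical `clamp_expiry`, integer timestamps, and omitting the new-record/changed-attribute guard of the real callback): a nil expiry is always clamped, and only admins may set an expiry beyond now + `MaxTokenLifetime`.

```ruby
# Clamp a token expiry to now + max_lifetime. A nil expiry is clamped for
# everyone; a later-than-max expiry is clamped only for non-admins.
def clamp_expiry(expires_at, now:, max_lifetime:, admin: false)
  return expires_at if max_lifetime <= 0  # feature disabled
  max = now + max_lifetime
  if expires_at.nil? || (expires_at > max && !admin)
    max
  else
    expires_at
  end
end
```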
def permission_to_create
current_user.andand.is_admin or (current_user.andand.id == self.user_id)
end
direct_check = " OR " + direct_check
end
+ if Rails.configuration.Users.RoleGroupsVisibleToAll &&
+ sql_table == "groups" &&
+ users_list.select { |u| u.is_active }.any?
+ # All role groups are readable (but we still need the other
+ # direct_check clauses to handle non-role groups).
+ direct_check += " OR #{sql_table}.group_class = 'role'"
+ end
+
links_cond = ""
if sql_table == "links"
- # Match any permission link that gives one of the authorized
- # users some permission _or_ gives anyone else permission to
- # view one of the authorized users.
+ # 1) Match permission links incoming or outgoing on the
+ # user, i.e. granting permission on the user, or granting
+ # permission to the user.
+ #
+ # 2) Match permission links which grant permission on an
+ # object that this user can_manage.
+ #
links_cond = "OR (#{sql_table}.link_class IN (:permission_link_classes) AND "+
- "(#{sql_table}.head_uuid IN (#{user_uuids_subquery}) OR #{sql_table}.tail_uuid IN (#{user_uuids_subquery})))"
+ " ((#{sql_table}.head_uuid IN (#{user_uuids_subquery}) OR #{sql_table}.tail_uuid IN (#{user_uuids_subquery})) OR " +
+ " #{sql_table}.head_uuid IN (SELECT target_uuid FROM #{PERMISSION_VIEW} "+
+ " WHERE user_uuid IN (#{user_uuids_subquery}) AND perm_level >= 3))) "
end
sql_conds = "(#{owner_check} #{direct_check} #{links_cond}) #{trashed_check.empty? ? "" : "AND"} #{trashed_check}"
self.where(sql_conds,
user_uuids: all_user_uuids.collect{|c| c["target_uuid"]},
- permission_link_classes: ['permission', 'resources'])
+ permission_link_classes: ['permission'])
end
def save_with_unique_name!
# SPDX-License-Identifier: AGPL-3.0
require 'arvados/keep'
-require 'sweep_trashed_objects'
require 'trashable'
class Collection < ArvadosModel
  # PostgreSQL JSONB columns should NOT be declared as serialized; Rails 5
  # already knows how to treat them properly.
attribute :properties, :jsonbHash, default: {}
- attribute :storage_classes_desired, :jsonbArray, default: ["default"]
+ attribute :storage_classes_desired, :jsonbArray, default: lambda { Rails.configuration.DefaultStorageClasses }
attribute :storage_classes_confirmed, :jsonbArray, default: []
before_validation :default_empty_manifest
t.add :description
t.add :properties
t.add :portable_data_hash
- t.add :signed_manifest_text, as: :manifest_text
t.add :manifest_text, as: :unsigned_manifest_text
+ t.add :manifest_text, as: :manifest_text
t.add :replication_desired
t.add :replication_confirmed
t.add :replication_confirmed_at
def self.attributes_required_columns
super.merge(
- # If we don't list manifest_text explicitly, the
- # params[:select] code gets confused by the way we
- # expose signed_manifest_text as manifest_text in the
- # API response, and never let clients select the
- # manifest_text column.
- #
- # We need trash_at and is_trashed to determine the
- # correct timestamp in signed_manifest_text.
- 'manifest_text' => ['manifest_text', 'trash_at', 'is_trashed'],
+ # If we don't list unsigned_manifest_text explicitly,
+ # the params[:select] code gets confused by the way we
+ # expose manifest_text as unsigned_manifest_text in
+ # the API response, and never let clients select the
+ # unsigned_manifest_text column.
'unsigned_manifest_text' => ['manifest_text'],
'name' => ['name'],
)
def strip_signatures_and_update_replication_confirmed
if self.manifest_text_changed?
in_old_manifest = {}
- if not self.replication_confirmed.nil?
+ # manifest_text_was could be nil when dealing with a freshly created snapshot,
+ # so we skip this case because there was no real manifest change. (Bug #18005)
+ if (not self.replication_confirmed.nil?) and (not self.manifest_text_was.nil?)
self.class.each_manifest_locator(manifest_text_was) do |match|
in_old_manifest[match[1]] = true
end
end
end
- def signed_manifest_text
+ def signed_manifest_text_only_for_tests
if !has_attribute? :manifest_text
return nil
elsif is_trashed
token = Thread.current[:token]
exp = [db_current_time.to_i + Rails.configuration.Collections.BlobSigningTTL.to_i,
trash_at].compact.map(&:to_i).min
- self.class.sign_manifest manifest_text, token, exp
+ self.class.sign_manifest_only_for_tests manifest_text, token, exp
end
end
- def self.sign_manifest manifest, token, exp=nil
+ def self.sign_manifest_only_for_tests manifest, token, exp=nil
if exp.nil?
exp = db_current_time.to_i + Rails.configuration.Collections.BlobSigningTTL.to_i
end
super - ["manifest_text", "storage_classes_desired", "storage_classes_confirmed", "current_version_uuid"]
end
- def self.where *args
- SweepTrashedObjects.sweep_if_stale
- super
- end
-
protected
# Although the defaults for these columns is already set up on the schema,
# validation on empty desired storage classes return an error.
def default_storage_classes
if self.storage_classes_desired.nil? || self.storage_classes_desired.empty?
- self.storage_classes_desired = ["default"]
+ self.storage_classes_desired = Rails.configuration.DefaultStorageClasses
end
self.storage_classes_confirmed ||= []
end
  # already knows how to treat them properly.
attribute :secret_mounts, :jsonbHash, default: {}
attribute :runtime_status, :jsonbHash, default: {}
- attribute :runtime_auth_scopes, :jsonbHash, default: {}
+ attribute :runtime_auth_scopes, :jsonbArray, default: []
+ attribute :output_storage_classes, :jsonbArray, default: lambda { Rails.configuration.DefaultStorageClasses }
serialize :environment, Hash
serialize :mounts, Hash
t.add :lock_count
t.add :gateway_address
t.add :interactive_session_started
+ t.add :output_storage_classes
end
# Supported states for a container
end
def self.full_text_searchable_columns
- super - ["secret_mounts", "secret_mounts_md5", "runtime_token", "gateway_address"]
+ super - ["secret_mounts", "secret_mounts_md5", "runtime_token", "gateway_address", "output_storage_classes"]
end
def self.searchable_columns *args
- super - ["secret_mounts_md5", "runtime_token", "gateway_address"]
+ super - ["secret_mounts_md5", "runtime_token", "gateway_address", "output_storage_classes"]
end
def logged_attributes
secret_mounts: req.secret_mounts,
runtime_token: req.runtime_token,
runtime_user_uuid: runtime_user.uuid,
- runtime_auth_scopes: runtime_auth_scopes
+ runtime_auth_scopes: runtime_auth_scopes,
+ output_storage_classes: req.output_storage_classes,
}
end
act_as_system_user do
:environment, :mounts, :output_path, :priority,
:runtime_constraints, :scheduling_parameters,
:secret_mounts, :runtime_token,
- :runtime_user_uuid, :runtime_auth_scopes)
+ :runtime_user_uuid, :runtime_auth_scopes,
+ :output_storage_classes)
end
case self.state
self.runtime_auth_scopes = ["all"]
end
- # generate a new token
+ # Generate a new token. This runs with admin credentials as it's done by a
+ # dispatcher user, so expires_at isn't enforced by API.MaxTokenLifetime.
self.auth = ApiClientAuthorization.
create!(user_id: User.find_by_uuid(self.runtime_user_uuid).id,
api_client_id: 0,
  # already knows how to treat them properly.
attribute :properties, :jsonbHash, default: {}
attribute :secret_mounts, :jsonbHash, default: {}
+ attribute :output_storage_classes, :jsonbArray, default: lambda { Rails.configuration.DefaultStorageClasses }
serialize :environment, Hash
serialize :mounts, Hash
t.add :scheduling_parameters
t.add :state
t.add :use_existing
+ t.add :output_storage_classes
end
# Supported states for a container request
:container_image, :cwd, :environment, :filters, :mounts,
:output_path, :priority, :runtime_token,
:runtime_constraints, :state, :container_uuid, :use_existing,
- :scheduling_parameters, :secret_mounts, :output_name, :output_ttl]
+ :scheduling_parameters, :secret_mounts, :output_name, :output_ttl,
+ :output_storage_classes]
def self.limit_index_columns_read
["mounts"]
'container_uuid' => container_uuid,
},
portable_data_hash: log_col.portable_data_hash,
- manifest_text: log_col.manifest_text)
+ manifest_text: log_col.manifest_text,
+ storage_classes_desired: self.output_storage_classes
+ )
completed_coll.save_with_unique_name!
end
end
owner_uuid: self.owner_uuid,
name: coll_name,
manifest_text: "",
+ storage_classes_desired: self.output_storage_classes,
properties: {
'type' => out_type,
'container_request' => uuid,
end
def self.full_text_searchable_columns
- super - ["mounts", "secret_mounts", "secret_mounts_md5", "runtime_token"]
+ super - ["mounts", "secret_mounts", "secret_mounts_md5", "runtime_token", "output_storage_classes"]
end
protected
log_coll = Collection.new(
owner_uuid: self.owner_uuid,
name: coll_name = "Container log for request #{uuid}",
- manifest_text: "")
+ manifest_text: "",
+ storage_classes_desired: self.output_storage_classes)
end
# copy logs from old container into CR's log collection
if self.new_record? || self.state_was == Uncommitted
# Allow create-and-commit in a single operation.
permitted.push(*AttrsPermittedBeforeCommit)
+ elsif mounts_changed? && mounts_was.keys.sort == mounts.keys.sort
+      # Ignore the updated mounts if the only changes are default/zero
+      # values added by the controller (see 17774)
+ only_defaults = true
+ mounts.each do |path, mount|
+ (mount.to_a - mounts_was[path].to_a).each do |k, v|
+ if ![0, "", false, nil].index(v)
+ only_defaults = false
+ end
+ end
+ end
+ if only_defaults
+ clear_attribute_change("mounts")
+ end
end
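The comparison above treats a mounts update as ignorable when the key sets match and every pair added relative to the old mounts is a zero/empty/false/nil default. A standalone sketch of that test (hypothetical helper, same hash-to-array difference trick as the diff):

```ruby
# Values the controller may fill in as defaults; an update consisting only
# of these is considered a no-op.
DEFAULTISH = [0, "", false, nil].freeze

def only_default_mount_changes?(old_mounts, new_mounts)
  return false unless old_mounts.keys.sort == new_mounts.keys.sort
  new_mounts.all? do |path, mount|
    # (mount.to_a - old.to_a) yields the key/value pairs that changed or
    # were added; all of them must be default-ish for the update to be a no-op.
    (mount.to_a - old_mounts[path].to_a).all? { |_k, v| DEFAULTISH.include?(v) }
  end
end
```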
case self.state
validate :ensure_filesystem_compatible_name
validate :check_group_class
+ validate :check_filter_group_filters
before_create :assign_name
after_create :after_ownership_change
after_create :update_trash
end
def ensure_filesystem_compatible_name
- # project groups need filesystem-compatible names, but others
+ # project and filter groups need filesystem-compatible names, but others
# don't.
- super if group_class == 'project'
+ super if group_class == 'project' || group_class == 'filter'
end
def check_group_class
- if group_class != 'project' && group_class != 'role'
- errors.add :group_class, "value must be one of 'project' or 'role', was '#{group_class}'"
+ if group_class != 'project' && group_class != 'role' && group_class != 'filter'
+ errors.add :group_class, "value must be one of 'project', 'role' or 'filter', was '#{group_class}'"
end
if group_class_changed? && !group_class_was.nil?
errors.add :group_class, "cannot be modified after record is created"
end
end
+ def check_filter_group_filters
+ if group_class == 'filter'
+ if !self.properties.key?("filters")
+ errors.add :properties, "filters property missing, it must be an array of arrays, each with 3 elements"
+ return
+ end
+ if !self.properties["filters"].is_a?(Array)
+ errors.add :properties, "filters property must be an array of arrays, each with 3 elements"
+ return
+ end
+ self.properties["filters"].each do |filter|
+ if !filter.is_a?(Array)
+ errors.add :properties, "filters property must be an array of arrays, each with 3 elements"
+ return
+ end
+ if filter.length() != 3
+ errors.add :properties, "filters property must be an array of arrays, each with 3 elements"
+ return
+ end
+ if !filter[0].include?(".") and filter[0].downcase != "uuid"
+ errors.add :properties, "filter attribute must be 'uuid' or contain a dot (e.g. groups.name)"
+ return
+ end
+ if (filter[0].downcase != "uuid" and filter[1].downcase == "is_a")
+ errors.add :properties, "when filter operator is 'is_a', attribute must be 'uuid'"
+ return
+ end
+ if ! ["=","<","<=",">",">=","!=","like","ilike","in","not in","is_a","exists","contains"].include?(filter[1].downcase)
+ errors.add :properties, "filter operator is not valid (must be =,<,<=,>,>=,!=,like,ilike,in,not in,is_a,exists,contains)"
+ return
+ end
+ end
+ end
+ end
+
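The validation above enforces: each filter is a 3-element array, the attribute is `uuid` or contains a dot, `is_a` is only valid on `uuid`, and the operator comes from a fixed set. A sketch of the same rules as a standalone function (hypothetical `filter_group_error`, returning the first error message or nil):

```ruby
ALLOWED_OPS = ["=", "<", "<=", ">", ">=", "!=", "like", "ilike",
               "in", "not in", "is_a", "exists", "contains"].freeze

# Validate a filter group's properties hash; return an error string on the
# first violation, or nil when all filters are acceptable.
def filter_group_error(properties)
  return "filters property missing" unless properties.key?("filters")
  filters = properties["filters"]
  return "filters must be an array of arrays" unless filters.is_a?(Array)
  filters.each do |f|
    return "each filter must be a 3-element array" unless f.is_a?(Array) && f.length == 3
    attr, op = f[0].to_s.downcase, f[1].to_s.downcase
    return "attribute must be 'uuid' or contain a dot" unless attr == "uuid" || attr.include?(".")
    return "'is_a' is only valid on attribute 'uuid'" if op == "is_a" && attr != "uuid"
    return "invalid operator #{op}" unless ALLOWED_OPS.include?(op)
  end
  nil
end
```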
def update_trash
if saved_change_to_trash_at? or saved_change_to_owner_uuid?
# The group was added or removed from the trash.
name: 'can_read').empty?
# Add can_read link from this user to "all users" which makes this
- # user "invited"
- group_perm = create_user_group_link
+ # user "invited", and (depending on config) a link in the opposite
+ # direction which makes this user visible to other users.
+ group_perms = add_to_all_users_group
# Add git repo
repo_perm = if (!repo_name.nil? || Rails.configuration.Users.AutoSetupNewUsersWithRepository) and !username.nil?
forget_cached_group_perms
- return [repo_perm, vm_login_perm, group_perm, self].compact
+ return [repo_perm, vm_login_perm, *group_perms, self].compact
end
# delete user signatures, login, repo, and vm perms, and mark as inactive
Link.where(link_class: 'signature',
tail_uuid: self.uuid).destroy_all
+ # delete tokens for this user
+ ApiClientAuthorization.where(user_id: self.id).destroy_all
+ # delete ssh keys for this user
+ AuthorizedKey.where(owner_uuid: self.uuid).destroy_all
+ AuthorizedKey.where(authorized_user_uuid: self.uuid).destroy_all
+
# delete user preferences (including profile)
self.prefs = {}
end
end
- def update_uuid(new_uuid:)
- if !current_user.andand.is_admin
- raise PermissionDeniedError
- end
- if uuid == system_user_uuid || uuid == anonymous_user_uuid
- raise "update_uuid cannot update system accounts"
- end
- if self.class != self.class.resource_class_for_uuid(new_uuid)
- raise "invalid new_uuid #{new_uuid.inspect}"
- end
- transaction(requires_new: true) do
- reload
- old_uuid = self.uuid
- self.uuid = new_uuid
- save!(validate: false)
- change_all_uuid_refs(old_uuid: old_uuid, new_uuid: new_uuid)
- ActiveRecord::Base.connection.exec_update %{
-update #{PERMISSION_VIEW} set user_uuid=$1 where user_uuid = $2
-},
- 'User.update_uuid.update_permissions_user_uuid',
- [[nil, new_uuid],
- [nil, old_uuid]]
- ActiveRecord::Base.connection.exec_update %{
-update #{PERMISSION_VIEW} set target_uuid=$1 where target_uuid = $2
-},
- 'User.update_uuid.update_permissions_target_uuid',
- [[nil, new_uuid],
- [nil, old_uuid]]
- end
- end
-
# Move this user's (i.e., self's) owned items to new_owner_uuid and
# new_user_uuid (for things normally owned directly by the user).
#
login_perm
end
- # add the user to the 'All users' group
- def create_user_group_link
- return (Link.where(tail_uuid: self.uuid,
+ def add_to_all_users_group
+ resp = [Link.where(tail_uuid: self.uuid,
head_uuid: all_users_group_uuid,
link_class: 'permission',
- name: 'can_read').first or
+ name: 'can_read').first ||
Link.create(tail_uuid: self.uuid,
head_uuid: all_users_group_uuid,
link_class: 'permission',
- name: 'can_read'))
+ name: 'can_read')]
+ if Rails.configuration.Users.ActivatedUsersAreVisibleToOthers
+ resp += [Link.where(tail_uuid: all_users_group_uuid,
+ head_uuid: self.uuid,
+ link_class: 'permission',
+ name: 'can_read').first ||
+ Link.create(tail_uuid: all_users_group_uuid,
+ head_uuid: self.uuid,
+ link_class: 'permission',
+ name: 'can_read')]
+ end
+ return resp
end
# Give the special "System group" permission to manage this user and
<% end %>
•
<a class="logout" href="/logout">Log out</a>
- <% else %>
- <!--<a class="logout" href="/auth/joshid">Log in</a>-->
<% end %>
<% if current_user and session[:real_uid] and session[:switch_back_to] and User.find(session[:real_uid].to_i).verify_userswitch_cookie(session[:switch_back_to]) %>
<% if current_user or session['invite_code'] %>
<div id="footer">
- <div style="float:right">Questions → <a href="mailto:arvados@curoverse.com">arvados@curoverse.com</a></div>
<div style="clear:both"></div>
</div>
<% end %>
});
<% end %>
<div id="intropage">
- <img class="curoverse-logo" src="<%= asset_path('logo.png') %>" style="display:block; margin:2em auto"/>
+ <img class="arvados-logo" src="<%= asset_path('logo.png') %>" style="display:block; margin:2em auto"/>
<div style="width:30em; margin:2em auto 0 auto">
<h1>Welcome</h1>
- <h4>Curoverse ARVADOS</h4>
+ <h4>ARVADOS</h4>
<% if !current_user and session['invite_code'] %>
- <p>Curoverse Arvados lets you manage and process human genomes and exomes. You can start using the private beta
- now with your Google account.</p>
+ <p>Arvados lets you manage and process biomedical data.</p>
<p style="float:right;margin-top:1em">
- <button class="login" href="/auth/joshid">Log in and get started</button>
+ <button class="login" href="/login">Log in and get started</button>
</p>
<% else %>
- <p>Curoverse ARVADOS is transforming how researchers and
- clinical geneticists use whole genome sequences. </p>
- <p>If you’re interested in learning more, we’d love to hear
- from you —
- contact <a href="mailto:arvados@curoverse.com">arvados@curoverse.com</a>.</p>
-
<% if !current_user %>
<p style="float:right;margin-top:1em">
- <a href="/auth/joshid">Log in here.</a>
+ <a href="/login">Log in here.</a>
</p>
<% end %>
<div id="intropage">
- <img class="curoverse-logo" src="<%= asset_path('logo.png') rescue '/logo.png' %>" style="display:block; margin:2em auto"/>
+ <img class="arvados-logo" src="<%= asset_path('logo.png') rescue '/logo.png' %>" style="display:block; margin:2em auto"/>
<div style="width:30em; margin:2em auto 0 auto">
<h1>Error</h1>
<%= notice %>
<br/>
-<a href="/auth/joshid">Retry Login</a>
+<a href="/login">Retry Login</a>
+++ /dev/null
-#!/usr/bin/env ruby
-
-# Copyright (C) The Arvados Authors. All rights reserved.
-#
-# SPDX-License-Identifier: AGPL-3.0
-
-APP_ROOT = File.expand_path('..', __dir__)
-Dir.chdir(APP_ROOT) do
- begin
- exec "yarnpkg", *ARGV
- rescue Errno::ENOENT
- $stderr.puts "Yarn executable was not detected in the system."
- $stderr.puts "Download Yarn at https://yarnpkg.com/en/docs/install"
- exit 1
- end
-end
# configured by application.yml (i.e., here!) instead.
end
-if (File.exist?(File.expand_path '../omniauth.rb', __FILE__) and
- not defined? WARNED_OMNIAUTH_CONFIG)
- Rails.logger.warn <<-EOS
-DEPRECATED CONFIGURATION:
- Please move your SSO provider config into config/application.yml
- and delete config/initializers/omniauth.rb.
-EOS
- # Real values will be copied from globals by omniauth_init.rb. For
- # now, assign some strings so the generic *.yml config loader
- # doesn't overwrite them or complain that they're missing.
- Rails.configuration.Login["SSO"]["ProviderAppID"] = 'xxx'
- Rails.configuration.Login["SSO"]["ProviderAppSecret"] = 'xxx'
- Rails.configuration.Services["SSO"]["ExternalURL"] = '//xxx'
- WARNED_OMNIAUTH_CONFIG = true
-end
-
# Load the defaults, used by config:migrate and fallback loading
# legacy application.yml
-Open3.popen2("arvados-server", "config-dump", "-config=-", "-skip-legacy") do |stdin, stdout, status_thread|
- stdin.write("Clusters: {xxxxx: {}}")
- stdin.close
- confs = YAML.load(stdout, deserialize_symbols: false)
- clusterID, clusterConfig = confs["Clusters"].first
- $arvados_config_defaults = clusterConfig
- $arvados_config_defaults["ClusterID"] = clusterID
+defaultYAML, stderr, status = Open3.capture3("arvados-server", "config-dump", "-config=-", "-skip-legacy", stdin_data: "Clusters: {xxxxx: {}}")
+if !status.success?
+ puts stderr
+ raise "error loading config: #{status}"
end
+confs = YAML.load(defaultYAML, deserialize_symbols: false)
+clusterID, clusterConfig = confs["Clusters"].first
+$arvados_config_defaults = clusterConfig
+$arvados_config_defaults["ClusterID"] = clusterID
-# Load the global config file
-Open3.popen2("arvados-server", "config-dump", "-skip-legacy") do |stdin, stdout, status_thread|
- confs = YAML.load(stdout, deserialize_symbols: false)
- if confs && !confs.empty?
- # config-dump merges defaults with user configuration, so every
- # key should be set.
- clusterID, clusterConfig = confs["Clusters"].first
- $arvados_config_global = clusterConfig
- $arvados_config_global["ClusterID"] = clusterID
- else
- # config-dump failed, assume we will be loading from legacy
- # application.yml, initialize with defaults.
- $arvados_config_global = $arvados_config_defaults.deep_dup
+if ENV["ARVADOS_CONFIG"] == "none"
+ # Don't load config. This magic value is set by packaging scripts so
+ # they can run "rake assets:precompile" without a real config.
+ $arvados_config_global = $arvados_config_defaults.deep_dup
+else
+ # Load the global config file
+ Open3.popen2("arvados-server", "config-dump", "-skip-legacy") do |stdin, stdout, status_thread|
+ confs = YAML.load(stdout, deserialize_symbols: false)
+ if confs && !confs.empty?
+ # config-dump merges defaults with user configuration, so every
+ # key should be set.
+ clusterID, clusterConfig = confs["Clusters"].first
+ $arvados_config_global = clusterConfig
+ $arvados_config_global["ClusterID"] = clusterID
+ else
+ # config-dump failed, assume we will be loading from legacy
+ # application.yml, initialize with defaults.
+ $arvados_config_global = $arvados_config_defaults.deep_dup
+ end
end
end
arvcfg.declare_config "API.MaxRequestSize", Integer, :max_request_size
arvcfg.declare_config "API.MaxIndexDatabaseRead", Integer, :max_index_database_read
arvcfg.declare_config "API.MaxItemsPerResponse", Integer, :max_items_per_response
+arvcfg.declare_config "API.MaxTokenLifetime", ActiveSupport::Duration
+arvcfg.declare_config "API.RequestTimeout", ActiveSupport::Duration
arvcfg.declare_config "API.AsyncPermissionsUpdateInterval", ActiveSupport::Duration, :async_permissions_update_interval
arvcfg.declare_config "Users.AutoSetupNewUsers", Boolean, :auto_setup_new_users
arvcfg.declare_config "Users.AutoSetupNewUsersWithVmUUID", String, :auto_setup_new_users_with_vm_uuid
arvcfg.declare_config "Users.AdminNotifierEmailFrom", String, :admin_notifier_email_from
arvcfg.declare_config "Users.EmailSubjectPrefix", String, :email_subject_prefix
arvcfg.declare_config "Users.UserNotifierEmailFrom", String, :user_notifier_email_from
+arvcfg.declare_config "Users.UserNotifierEmailBcc", Hash
arvcfg.declare_config "Users.NewUserNotificationRecipients", Hash, :new_user_notification_recipients, ->(cfg, k, v) { arrayToHash cfg, "Users.NewUserNotificationRecipients", v }
arvcfg.declare_config "Users.NewInactiveUserNotificationRecipients", Hash, :new_inactive_user_notification_recipients, method(:arrayToHash)
-arvcfg.declare_config "Login.SSO.ProviderAppSecret", String, :sso_app_secret
-arvcfg.declare_config "Login.SSO.ProviderAppID", String, :sso_app_id
+arvcfg.declare_config "Users.RoleGroupsVisibleToAll", Boolean
arvcfg.declare_config "Login.LoginCluster", String
arvcfg.declare_config "Login.TrustedClients", Hash
arvcfg.declare_config "Login.RemoteTokenRefresh", ActiveSupport::Duration
arvcfg.declare_config "Login.TokenLifetime", ActiveSupport::Duration
arvcfg.declare_config "TLS.Insecure", Boolean, :sso_insecure
-arvcfg.declare_config "Services.SSO.ExternalURL", String, :sso_provider_url
arvcfg.declare_config "AuditLogs.MaxAge", ActiveSupport::Duration, :max_audit_log_age
arvcfg.declare_config "AuditLogs.MaxDeleteBatch", Integer, :max_audit_log_delete_batch
arvcfg.declare_config "AuditLogs.UnloggedAttributes", Hash, :unlogged_attributes, ->(cfg, k, v) { arrayToHash cfg, "AuditLogs.UnloggedAttributes", v }
arvcfg.declare_config "Collections.CollectionVersioning", Boolean, :collection_versioning
arvcfg.declare_config "Collections.PreserveVersionIfIdle", ActiveSupport::Duration, :preserve_version_if_idle
arvcfg.declare_config "Collections.TrashSweepInterval", ActiveSupport::Duration, :trash_sweep_interval
-arvcfg.declare_config "Collections.BlobSigningKey", NonemptyString, :blob_signing_key
+arvcfg.declare_config "Collections.BlobSigningKey", String, :blob_signing_key
arvcfg.declare_config "Collections.BlobSigningTTL", ActiveSupport::Duration, :blob_signature_ttl
arvcfg.declare_config "Collections.BlobSigning", Boolean, :permit_create_collection_with_unsigned_manifest, ->(cfg, k, v) { ConfigLoader.set_cfg cfg, "Collections.BlobSigning", !v }
arvcfg.declare_config "Collections.ForwardSlashNameSubstitution", String
ConfigLoader.set_cfg cfg, "RemoteClusters", h
}
arvcfg.declare_config "RemoteClusters.*.Proxy", Boolean, :remote_hosts_via_dns
+arvcfg.declare_config "StorageClasses", Hash
dbcfg = ConfigLoader.new
raise "default_trash_lifetime is %d, must be at least 86400" % Rails.configuration.Collections.DefaultTrashLifetime
end
+default_storage_classes = []
+$arvados_config["StorageClasses"].each do |cls, cfg|
+ if cfg["Default"]
+ default_storage_classes << cls
+ end
+end
+if default_storage_classes.length == 0
+ default_storage_classes = ["default"]
+end
+$arvados_config["DefaultStorageClasses"] = default_storage_classes.sort
+
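The default-storage-class fallback above can be sketched in plain Ruby (the config hash shape here is an illustrative assumption, not copied from a real cluster config):

```ruby
# Collect the storage classes flagged Default; if none are flagged,
# fall back to a single "default" class, then sort for stability.
storage_classes = {
  "fast" => { "Default" => true },
  "cold" => { "Default" => false },
}
defaults = storage_classes.select { |_, cfg| cfg["Default"] }.keys
defaults = ["default"] if defaults.empty?
defaults.sort!
```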
#
# Special case for test database where there's no database.yml,
# because the Arvados config.yml doesn't have a concept of multiple
$arvados_config["PostgreSQL"]["Connection"]["collation"] = "en_US.UTF-8"
end
+if ENV["ARVADOS_CONFIG"] == "none"
+ # We need the postgresql connection URI to be valid, even if we
+ # don't use it.
+ $arvados_config["PostgreSQL"]["Connection"]["host"] = "localhost"
+ $arvados_config["PostgreSQL"]["Connection"]["user"] = "x"
+ $arvados_config["PostgreSQL"]["Connection"]["password"] = "x"
+ $arvados_config["PostgreSQL"]["Connection"]["dbname"] = "x"
+end
+
if $arvados_config["PostgreSQL"]["Connection"]["password"].empty?
raise "Database password is empty, PostgreSQL section is: #{$arvados_config["PostgreSQL"]}"
end
ENV['BUNDLE_GEMFILE'] ||= File.expand_path('../Gemfile', __dir__)
require 'bundler/setup' # Set up gems listed in the Gemfile.
-require 'bootsnap/setup' # Speed up boot time by caching expensive operations.
\ No newline at end of file
# Load the rails application
require_relative 'application'
-require 'josh_id'
# Initialize the rails application
Rails.application.initialize!
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+ActiveRecord::ConnectionAdapters::AbstractAdapter.set_callback :checkout, :before, ->(conn) do
+ ms = Rails.configuration.API.RequestTimeout.to_i * 1000
+ conn.execute("SET statement_timeout = #{ms}")
+ conn.execute("SET lock_timeout = #{ms}")
+end
Rails.application.configure do
begin
- if ActiveRecord::Base.connection.tables.include?('jobs')
+ if ENV["ARVADOS_CONFIG"] != "none" && ActiveRecord::Base.connection.tables.include?('jobs')
check_enable_legacy_jobs_api
end
rescue ActiveRecord::NoDatabaseError
config.lograge.formatter = Lograge::Formatters::Logstash.new
config.lograge.custom_options = lambda do |event|
payload = {
+ ClusterID: Rails.configuration.ClusterID,
request_id: event.payload[:request_id],
client_ipaddr: event.payload[:client_ipaddr],
client_auth: event.payload[:client_auth],
+++ /dev/null
-# Copyright (C) The Arvados Authors. All rights reserved.
-#
-# SPDX-License-Identifier: AGPL-3.0
-
-# This file is called omniauth_init.rb instead of omniauth.rb because
-# older versions had site configuration in omniauth.rb.
-#
-# It must come after omniauth.rb in (lexical) load order.
-
-if defined? CUSTOM_PROVIDER_URL
- Rails.logger.warn "Copying omniauth from globals in legacy config file."
- Rails.configuration.Login["SSO"]["ProviderAppID"] = APP_ID
- Rails.configuration.Login["SSO"]["ProviderAppSecret"] = APP_SECRET
- Rails.configuration.Services["SSO"]["ExternalURL"] = CUSTOM_PROVIDER_URL.sub(/\/$/, "") + "/"
-else
- Rails.application.config.middleware.use OmniAuth::Builder do
- provider(:josh_id,
- Rails.configuration.Login["SSO"]["ProviderAppID"],
- Rails.configuration.Login["SSO"]["ProviderAppSecret"],
- Rails.configuration.Services["SSO"]["ExternalURL"])
- end
- OmniAuth.config.on_failure = StaticController.action(:login_failure)
-end
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+module CustomRequestId
+ def make_request_id(req_id)
+ if !req_id || req_id.length < 1 || req_id.length > 1024
+ # Client-supplied ID is either missing or too long to be
+ # considered friendly.
+ internal_request_id
+ else
+ req_id
+ end
+ end
+
+ def internal_request_id
+ "req-" + Random::DEFAULT.rand(2**128).to_s(36)[0..19]
+ end
+end
+
+class ActionDispatch::RequestId
+ # Instead of using the default UUID-like format for X-Request-Id headers,
+ # use our own.
+ prepend CustomRequestId
+end
\ No newline at end of file
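A standalone sketch of the ID format produced by internal_request_id above (plain Ruby; Random.new stands in for Random::DEFAULT):

```ruby
# "req-" plus a random 128-bit number rendered in base 36 and
# truncated to at most 20 characters.
req_id = "req-" + Random.new.rand(2**128).to_s(36)[0..19]
```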
post 'activate', on: :member
post 'setup', on: :collection
post 'unsetup', on: :member
- post 'update_uuid', on: :member
post 'merge', on: :collection
patch 'batch_update', on: :collection
end
end
end
+ post '/sys/trash_sweep', to: 'sys#trash_sweep'
+
if Rails.env == 'test'
post '/database/reset', to: 'database#reset'
end
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+class AddContainerOutputStorageClass < ActiveRecord::Migration[5.2]
+ def change
+ add_column :container_requests, :output_storage_classes, :jsonb, :default => ["default"]
+ add_column :containers, :output_storage_classes, :jsonb, :default => ["default"]
+ end
+end
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+class DropFtsIndex < ActiveRecord::Migration[5.2]
+ def fts_indexes
+ {
+ "collections" => "collections_full_text_search_idx",
+ "container_requests" => "container_requests_full_text_search_idx",
+ "groups" => "groups_full_text_search_idx",
+ "jobs" => "jobs_full_text_search_idx",
+ "pipeline_instances" => "pipeline_instances_full_text_search_idx",
+ "pipeline_templates" => "pipeline_templates_full_text_search_idx",
+ "workflows" => "workflows_full_text_search_idx",
+ }
+ end
+
+ def up
+ fts_indexes.keys.each do |t|
+ i = fts_indexes[t]
+ execute "DROP INDEX IF EXISTS #{i}"
+ end
+ end
+
+ def down
+ fts_indexes.keys.each do |t|
+ i = fts_indexes[t]
+ execute "CREATE INDEX #{i} ON #{t} USING gin(#{t.classify.constantize.full_text_tsvector})"
+ end
+ end
+end
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+class DeleteDisabledUserTokensAndKeys < ActiveRecord::Migration[5.2]
+ def up
+ execute "delete from api_client_authorizations where user_id in (select id from users where is_active ='false' and uuid not like '%-tpzed-anonymouspublic' and uuid not like '%-tpzed-000000000000000')"
+ execute "delete from authorized_keys where owner_uuid in (select uuid from users where is_active ='false' and uuid not like '%-tpzed-anonymouspublic' and uuid not like '%-tpzed-000000000000000')"
+ execute "delete from authorized_keys where authorized_user_uuid in (select uuid from users where is_active ='false' and uuid not like '%-tpzed-anonymouspublic' and uuid not like '%-tpzed-000000000000000')"
+ end
+
+ def down
+ # This migration is not reversible.
+ end
+end
output_name character varying(255) DEFAULT NULL::character varying,
output_ttl integer DEFAULT 0 NOT NULL,
secret_mounts jsonb DEFAULT '{}'::jsonb,
- runtime_token text
+ runtime_token text,
+ output_storage_classes jsonb DEFAULT '["default"]'::jsonb
);
runtime_token text,
lock_count integer DEFAULT 0 NOT NULL,
gateway_address character varying,
- interactive_session_started boolean DEFAULT false NOT NULL
+ interactive_session_started boolean DEFAULT false NOT NULL,
+ output_storage_classes jsonb DEFAULT '["default"]'::jsonb
);
CREATE INDEX collection_index_on_properties ON public.collections USING gin (properties);
---
--- Name: collections_full_text_search_idx; Type: INDEX; Schema: public; Owner: -
---
-
-CREATE INDEX collections_full_text_search_idx ON public.collections USING gin (to_tsvector('english'::regconfig, substr((((((((((((((((((COALESCE(owner_uuid, ''::character varying))::text || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(portable_data_hash, ''::character varying))::text) || ' '::text) || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || (COALESCE(description, ''::character varying))::text) || ' '::text) || COALESCE((properties)::text, ''::text)) || ' '::text) || COALESCE(file_names, ''::text)), 0, 1000000)));
-
-
--
-- Name: collections_search_index; Type: INDEX; Schema: public; Owner: -
--
CREATE INDEX collections_trgm_text_search_idx ON public.collections USING gin (((((((((((((((((((COALESCE(owner_uuid, ''::character varying))::text || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(portable_data_hash, ''::character varying))::text) || ' '::text) || (COALESCE(uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || (COALESCE(description, ''::character varying))::text) || ' '::text) || COALESCE((properties)::text, ''::text)) || ' '::text) || COALESCE(file_names, ''::text))) public.gin_trgm_ops);
---
--- Name: container_requests_full_text_search_idx; Type: INDEX; Schema: public; Owner: -
---
-
-CREATE INDEX container_requests_full_text_search_idx ON public.container_requests USING gin (to_tsvector('english'::regconfig, substr((((((((((((((((((((((((((((((((((((((((((COALESCE(uuid, ''::character varying))::text || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(description, ''::text)) || ' '::text) || COALESCE((properties)::text, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || (COALESCE(requesting_container_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(container_uuid, ''::character varying))::text) || ' '::text) || COALESCE(runtime_constraints, ''::text)) || ' '::text) || (COALESCE(container_image, ''::character varying))::text) || ' '::text) || COALESCE(environment, ''::text)) || ' '::text) || (COALESCE(cwd, ''::character varying))::text) || ' '::text) || COALESCE(command, ''::text)) || ' '::text) || (COALESCE(output_path, ''::character varying))::text) || ' '::text) || COALESCE(filters, ''::text)) || ' '::text) || COALESCE(scheduling_parameters, ''::text)) || ' '::text) || (COALESCE(output_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(log_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(output_name, ''::character varying))::text), 0, 1000000)));
-
-
--
-- Name: container_requests_index_on_properties; Type: INDEX; Schema: public; Owner: -
--
CREATE INDEX group_index_on_properties ON public.groups USING gin (properties);
---
--- Name: groups_full_text_search_idx; Type: INDEX; Schema: public; Owner: -
---
-
-CREATE INDEX groups_full_text_search_idx ON public.groups USING gin (to_tsvector('english'::regconfig, substr((((((((((((((((COALESCE(uuid, ''::character varying))::text || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || (COALESCE(description, ''::character varying))::text) || ' '::text) || (COALESCE(group_class, ''::character varying))::text) || ' '::text) || COALESCE((properties)::text, ''::text)), 0, 1000000)));
-
-
--
-- Name: groups_search_index; Type: INDEX; Schema: public; Owner: -
--
CREATE INDEX job_tasks_search_index ON public.job_tasks USING btree (uuid, owner_uuid, modified_by_client_uuid, modified_by_user_uuid, job_uuid, created_by_job_task_uuid);
---
--- Name: jobs_full_text_search_idx; Type: INDEX; Schema: public; Owner: -
---
-
-CREATE INDEX jobs_full_text_search_idx ON public.jobs USING gin (to_tsvector('english'::regconfig, substr((((((((((((((((((((((((((((((((((((((((((((COALESCE(uuid, ''::character varying))::text || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(submit_id, ''::character varying))::text) || ' '::text) || (COALESCE(script, ''::character varying))::text) || ' '::text) || (COALESCE(script_version, ''::character varying))::text) || ' '::text) || COALESCE(script_parameters, ''::text)) || ' '::text) || (COALESCE(cancelled_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(cancelled_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(output, ''::character varying))::text) || ' '::text) || (COALESCE(is_locked_by_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(log, ''::character varying))::text) || ' '::text) || COALESCE(tasks_summary, ''::text)) || ' '::text) || COALESCE(runtime_constraints, ''::text)) || ' '::text) || (COALESCE(repository, ''::character varying))::text) || ' '::text) || (COALESCE(supplied_script_version, ''::character varying))::text) || ' '::text) || (COALESCE(docker_image_locator, ''::character varying))::text) || ' '::text) || (COALESCE(description, ''::character varying))::text) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || (COALESCE(arvados_sdk_version, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)), 0, 1000000)));
-
-
--
-- Name: jobs_search_index; Type: INDEX; Schema: public; Owner: -
--
CREATE UNIQUE INDEX permission_user_target ON public.materialized_permissions USING btree (user_uuid, target_uuid);
---
--- Name: pipeline_instances_full_text_search_idx; Type: INDEX; Schema: public; Owner: -
---
-
-CREATE INDEX pipeline_instances_full_text_search_idx ON public.pipeline_instances USING gin (to_tsvector('english'::regconfig, substr((((((((((((((((((((((COALESCE(uuid, ''::character varying))::text || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(pipeline_template_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || COALESCE(properties, ''::text)) || ' '::text) || (COALESCE(state, ''::character varying))::text) || ' '::text) || COALESCE(components_summary, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text), 0, 1000000)));
-
-
--
-- Name: pipeline_instances_search_index; Type: INDEX; Schema: public; Owner: -
--
CREATE UNIQUE INDEX pipeline_template_owner_uuid_name_unique ON public.pipeline_templates USING btree (owner_uuid, name);
---
--- Name: pipeline_templates_full_text_search_idx; Type: INDEX; Schema: public; Owner: -
---
-
-CREATE INDEX pipeline_templates_full_text_search_idx ON public.pipeline_templates USING gin (to_tsvector('english'::regconfig, substr((((((((((((((COALESCE(uuid, ''::character varying))::text || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(components, ''::text)) || ' '::text) || (COALESCE(description, ''::character varying))::text), 0, 1000000)));
-
-
--
-- Name: pipeline_templates_search_index; Type: INDEX; Schema: public; Owner: -
--
CREATE INDEX virtual_machines_search_index ON public.virtual_machines USING btree (uuid, owner_uuid, modified_by_client_uuid, modified_by_user_uuid, hostname);
---
--- Name: workflows_full_text_search_idx; Type: INDEX; Schema: public; Owner: -
---
-
-CREATE INDEX workflows_full_text_search_idx ON public.workflows USING gin (to_tsvector('english'::regconfig, substr((((((((((((COALESCE(uuid, ''::character varying))::text || ' '::text) || (COALESCE(owner_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_client_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(modified_by_user_uuid, ''::character varying))::text) || ' '::text) || (COALESCE(name, ''::character varying))::text) || ' '::text) || COALESCE(description, ''::text)), 0, 1000000)));
-
-
--
-- Name: workflows_search_idx; Type: INDEX; Schema: public; Owner: -
--
('20201105190435'),
('20201202174753'),
('20210108033940'),
-('20210126183521');
+('20210126183521'),
+('20210621204455'),
+('20210816191509'),
+('20211027154300');
case "$TARGET" in
centos*)
- fpm_depends+=(libcurl-devel postgresql-devel bison make automake gcc gcc-c++)
+ fpm_depends+=(libcurl-devel postgresql-devel bison make automake gcc gcc-c++ postgresql shared-mime-info)
+ ;;
+ ubuntu1804)
+ fpm_depends+=(libcurl-ssl-dev libpq-dev g++ bison zlib1g-dev make postgresql-client shared-mime-info)
+ fpm_conflicts+=(ruby-bundler)
;;
debian* | ubuntu*)
- fpm_depends+=(libcurl-ssl-dev libpq-dev g++ bison zlib1g-dev make)
+ fpm_depends+=(libcurl-ssl-dev libpq-dev g++ bison zlib1g-dev make postgresql-client shared-mime-info)
;;
esac
# shouldn't be anything to do at all.
act_as_system_user do
ActiveRecord::Base.transaction do
- Group.where("group_class != 'project' or group_class is null").each do |g|
- # 1) any group not group_class != project becomes a 'role' (both empty and invalid groups)
+ Group.where("(group_class != 'project' and group_class != 'filter') or group_class is null").each do |g|
+      # 1) any group whose group_class is neither 'project' nor 'filter' becomes a 'role' (this covers both empty and invalid group_class values)
old_owner = g.owner_uuid
g.owner_uuid = system_user_uuid
g.group_class = 'role'
+++ /dev/null
-# Copyright (C) The Arvados Authors. All rights reserved.
-#
-# SPDX-License-Identifier: AGPL-3.0
-
-require 'omniauth-oauth2'
-module OmniAuth
- module Strategies
- class JoshId < OmniAuth::Strategies::OAuth2
-
- args [:client_id, :client_secret, :custom_provider_url]
-
- option :custom_provider_url, ''
-
- uid { raw_info['id'] }
-
- option :client_options, {}
-
- info do
- {
- :first_name => raw_info['info']['first_name'],
- :last_name => raw_info['info']['last_name'],
- :email => raw_info['info']['email'],
- :identity_url => raw_info['info']['identity_url'],
- :username => raw_info['info']['username'],
- }
- end
-
- extra do
- {
- 'raw_info' => raw_info
- }
- end
-
- def authorize_params
- options.authorize_params[:auth_provider] = request.params['auth_provider']
- super
- end
-
- def client
- options.client_options[:site] = options[:custom_provider_url]
- options.client_options[:authorize_url] = "#{options[:custom_provider_url]}/auth/josh_id/authorize"
- options.client_options[:access_token_url] = "#{options[:custom_provider_url]}/auth/josh_id/access_token"
- if Rails.configuration.TLS.Insecure
- options.client_options[:ssl] = {verify_mode: OpenSSL::SSL::VERIFY_NONE}
- end
- ::OAuth2::Client.new(options.client_id, options.client_secret, deep_symbolize(options.client_options))
- end
-
- def callback_url
- full_host + script_name + callback_path + "?return_to=" + CGI.escape(request.params['return_to'] || '')
- end
-
- def raw_info
- @raw_info ||= access_token.get("/auth/josh_id/user.json?oauth_token=#{access_token.token}").parsed
- end
- end
- end
-end
end
end
+ @distinct = params[:distinct] && true
+ end
+
+ def load_select_param
case params[:select]
when Array
@select = params[:select]
end
end
- if @select
+ if @select && @orders
# Any ordering columns must be selected when doing select,
      # otherwise it is an SQL error, so filter out invalid orderings.
@orders.select! { |o|
@select.select { |s| col == "#{table_name}.#{s}" }.any?
}
end
-
- @distinct = true if (params[:distinct] == true || params[:distinct] == "true")
- @distinct = false if (params[:distinct] == false || params[:distinct] == "false")
end
end
model_table_name = model_class.table_name
filters.each do |filter|
attrs_in, operator, operand = filter
- if attrs_in == 'any' && operator != '@@'
+ if operator == '@@'
+ raise ArgumentError.new("Full text search operator is no longer supported")
+ end
+ if attrs_in == 'any'
attrs = model_class.searchable_columns(operator)
elsif attrs_in.is_a? Array
attrs = attrs_in
raise ArgumentError.new("Invalid operator '#{operator}' (#{operator.class}) in filter")
end
+ operator = operator.downcase
cond_out = []
- if attrs_in == 'any' && (operator.casecmp('ilike').zero? || operator.casecmp('like').zero?) && (operand.is_a? String) && operand.match('^[%].*[%]$')
+ if attrs_in == 'any' && (operator == 'ilike' || operator == 'like') && (operand.is_a? String) && operand.match('^[%].*[%]$')
# Trigram index search
cond_out << model_class.full_text_trgm + " #{operator} ?"
param_out << operand
attrs = []
end
- if operator == '@@'
- # Full-text search
- if attrs_in != 'any'
- raise ArgumentError.new("Full text search on individual columns is not supported")
- end
- if operand.is_a? Array
- raise ArgumentError.new("Full text search not supported for array operands")
- end
-
- # Skip the generic per-column operator loop below
- attrs = []
- # Use to_tsquery since plainto_tsquery does not support prefix
- # search. And, split operand and join the words with ' & '
- cond_out << model_class.full_text_tsvector+" @@ to_tsquery(?)"
- param_out << operand.split.join(' & ')
- end
attrs.each do |attr|
subproperty = attr.split(".", 2)
end
# jsonb search
- case operator.downcase
+ case operator
when '=', '!='
- not_in = if operator.downcase == "!=" then "NOT " else "" end
+ not_in = if operator == "!=" then "NOT " else "" end
cond_out << "#{not_in}(#{attr_table_name}.#{attr} @> ?::jsonb)"
param_out << SafeJSON.dump({proppath => operand})
when 'in'
else
raise ArgumentError.new("Invalid operator for subproperty search '#{operator}'")
end
- elsif operator.downcase == "exists"
+ elsif operator == "exists"
if col.type != :jsonb
raise ArgumentError.new("Invalid attribute '#{attr}' for operator '#{operator}' in filter")
end
cond_out << "jsonb_exists(#{attr_table_name}.#{attr}, ?)"
param_out << operand
+ elsif expr = /^ *\( *(\w+) *(<=?|>=?|=) *(\w+) *\) *$/.match(attr)
+ if operator != '=' || ![true,"true"].index(operand)
+ raise ArgumentError.new("Invalid expression filter '#{attr}': subsequent elements must be [\"=\", true]")
+ end
+ operator = expr[2]
+ attr1, attr2 = expr[1], expr[3]
+ allowed = attr_model_class.searchable_columns(operator)
+ [attr1, attr2].each do |tok|
+ if !allowed.index(tok)
+ raise ArgumentError.new("Invalid attribute in expression: '#{tok}'")
+ end
+ col = attr_model_class.columns.select { |c| c.name == tok }.first
+ if col.type != :integer
+ raise ArgumentError.new("Non-numeric attribute in expression: '#{tok}'")
+ end
+ end
+ cond_out << "#{attr1} #{operator} #{attr2}"
else
- if !attr_model_class.searchable_columns(operator).index attr
+ if !attr_model_class.searchable_columns(operator).index(attr) &&
+ !(col.andand.type == :jsonb && ['contains', '=', '<>', '!='].index(operator))
raise ArgumentError.new("Invalid attribute '#{attr}' in filter")
end
- case operator.downcase
+ case operator
when '=', '<', '<=', '>', '>=', '!=', 'like', 'ilike'
attr_type = attr_model_class.attribute_column(attr).type
operator = '<>' if operator == '!='
end
end
cond_out << cond.join(' OR ')
+ when 'contains'
+ if col.andand.type != :jsonb
+ raise ArgumentError.new("Invalid attribute '#{attr}' for '#{operator}' operator")
+ end
+ if operand == []
+ raise ArgumentError.new("Invalid operand '#{operand.inspect}' for '#{operator}' operator")
+ end
+ operand = [operand] unless operand.is_a? Array
+ operand.each do |op|
+ if !op.is_a?(String)
+ raise ArgumentError.new("Invalid element #{operand.inspect} in operand for #{operator.inspect} operator (operand must be a string or array of strings)")
+ end
+ end
+      # We use jsonb_exists_all(a,b) instead of "a ?& b" because
+      # the pg gem treats "?" as a bind variable. And we use string
+      # interpolation instead of param_out because the pg gem
+      # flattens param_out and does not support passing arrays as
+      # bind variables.
+ q = operand.map { |s| ActiveRecord::Base.connection.quote(s) }.join(',')
+ cond_out << "jsonb_exists_all(#{attr_table_name}.#{attr}, array[#{q}])"
else
raise ArgumentError.new("Invalid operator '#{operator}'")
end
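As a hypothetical client-side illustration (the attribute names here are examples, not taken from this change), the two filter forms handled above look like:

```ruby
# "contains" on a jsonb array column: every listed element must be present.
contains_filter = ["output_storage_classes", "contains", ["default", "fast"]]

# Column-comparison expression: the attribute is "(a <op> b)" and the
# remaining filter elements must be ["=", true], as enforced above.
expr_filter = ["(priority <= lock_count)", "=", true]

# The same regex used above accepts the expression form.
EXPR_RE = /^ *\( *(\w+) *(<=?|>=?|=) *(\w+) *\) *$/
m = EXPR_RE.match(expr_filter[0])
```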
namespace :db do
desc "Apply expiration policy on long lived tokens"
task fix_long_lived_tokens: :environment do
- if Rails.configuration.Login.TokenLifetime == 0
- puts("No expiration policy set on Login.TokenLifetime.")
- else
- exp_date = Time.now + Rails.configuration.Login.TokenLifetime
- puts("Setting token expiration to: #{exp_date}")
- token_count = 0
- ll_tokens.each do |auth|
- if (auth.user.uuid =~ /-tpzed-000000000000000/).nil?
- CurrentApiClientHelper.act_as_system_user do
- auth.update_attributes!(expires_at: exp_date)
- end
- token_count += 1
+ lifetime = Rails.configuration.API.MaxTokenLifetime
+ if lifetime.nil? or lifetime == 0
+ lifetime = Rails.configuration.Login.TokenLifetime
+ end
+ if lifetime.nil? or lifetime == 0
+    puts("No expiration policy set (neither API.MaxTokenLifetime nor Login.TokenLifetime is set), nothing to do.")
+ # abort the rake task
+ next
+ end
+ exp_date = Time.now + lifetime
+ puts("Setting token expiration to: #{exp_date}")
+ token_count = 0
+ ll_tokens(lifetime).each do |auth|
+ if auth.user.nil?
+ printf("*** WARNING, found ApiClientAuthorization with invalid user: auth id: %d, user id: %d\n", auth.id, auth.user_id)
+ # skip this token
+ next
+ end
+ if (auth.user.uuid =~ /-tpzed-000000000000000/).nil? and (auth.user.uuid =~ /-tpzed-anonymouspublic/).nil?
+ CurrentApiClientHelper.act_as_system_user do
+ auth.update_attributes!(expires_at: exp_date)
end
+ token_count += 1
end
- puts("#{token_count} tokens updated.")
end
+ puts("#{token_count} tokens updated.")
end
desc "Show users with long lived tokens"
task check_long_lived_tokens: :environment do
+ lifetime = Rails.configuration.API.MaxTokenLifetime
+ if lifetime.nil? or lifetime == 0
+ lifetime = Rails.configuration.Login.TokenLifetime
+ end
+ if lifetime.nil? or lifetime == 0
+    puts("No expiration policy set (neither API.MaxTokenLifetime nor Login.TokenLifetime is set), nothing to do.")
+ # abort the rake task
+ next
+ end
user_ids = Set.new()
token_count = 0
- ll_tokens.each do |auth|
- if (auth.user.uuid =~ /-tpzed-000000000000000/).nil?
+ ll_tokens(lifetime).each do |auth|
+ if auth.user.nil?
+ printf("*** WARNING, found ApiClientAuthorization with invalid user: auth id: %d, user id: %d\n", auth.id, auth.user_id)
+ # skip this token
+ next
+ end
+    if (auth.user.uuid =~ /-tpzed-000000000000000/).nil? and (auth.user.uuid =~ /-tpzed-anonymouspublic/).nil?
user_ids.add(auth.user_id)
token_count += 1
end
end
end
- def ll_tokens
+ def ll_tokens(lifetime)
query = ApiClientAuthorization.where(expires_at: nil)
- if Rails.configuration.Login.TokenLifetime > 0
- query = query.or(ApiClientAuthorization.where("expires_at > ?", Time.now + Rails.configuration.Login.TokenLifetime))
- end
+ query = query.or(ApiClientAuthorization.where("expires_at > ?", Time.now + lifetime))
query
end
end
ActiveRecord::Base.connection.exec_query "SET LOCAL enable_mergejoin to true;"
+ # Now that we have recomputed a set of permissions, delete any
+ # rows from the materialized_permissions table where (target_uuid,
+ # user_uuid) is not present or has perm_level=0 in the recomputed
+ # set.
ActiveRecord::Base.connection.exec_delete %{
delete from #{PERMISSION_VIEW} where
target_uuid in (select target_uuid from #{temptable_perms}) and
},
"update_permissions.delete"
+ # Now insert-or-update permissions in the recomputed set. The
+ # WHERE clause is important to avoid redundantly updating rows
+ # that haven't actually changed.
ActiveRecord::Base.connection.exec_query %{
insert into #{PERMISSION_VIEW} (user_uuid, target_uuid, perm_level, traverse_owned)
select user_uuid, target_uuid, val as perm_level, traverse_owned from #{temptable_perms} where val>0
-on conflict (user_uuid, target_uuid) do update set perm_level=EXCLUDED.perm_level, traverse_owned=EXCLUDED.traverse_owned;
+on conflict (user_uuid, target_uuid) do update
+set perm_level=EXCLUDED.perm_level, traverse_owned=EXCLUDED.traverse_owned
+where #{PERMISSION_VIEW}.user_uuid=EXCLUDED.user_uuid and
+ #{PERMISSION_VIEW}.target_uuid=EXCLUDED.target_uuid and
+ (#{PERMISSION_VIEW}.perm_level != EXCLUDED.perm_level or
+ #{PERMISSION_VIEW}.traverse_owned != EXCLUDED.traverse_owned);
},
"update_permissions.insert"
# This script does the actual gitolite config management on disk.
#
-# Ward Vandewege <ward@curoverse.com>
+# Ward Vandewege <ward@curii.com>
# Default is development
production = ARGV[0] == "production"
api_client_auth = ApiClientAuthorization.where(attr).first
if !api_client_auth
+ # The anonymous user token should never expire but we are not allowed to
+ # set :expires_at to nil, so we set it to 1000 years in the future.
+ attr[:expires_at] = Time.now + 1000.years
api_client_auth = ApiClientAuthorization.create!(attr)
end
api_client_auth
output_path: test
command: ["echo", "hello"]
container_uuid: zzzzz-dz642-runningcontainr
+ mounts:
+ /tmp:
+ kind: tmp
+ capacity: 24000000000
runtime_constraints:
vcpus: 1
ram: 123
- mounts: {}
requester_for_running:
uuid: zzzzz-xvhdp-req4runningcntr
command: ["echo", "hello"]
container_uuid: zzzzz-dz642-logscontainer03
requesting_container_uuid: zzzzz-dz642-runningcontainr
+ mounts:
+ /tmp:
+ kind: tmp
+ capacity: 24000000000
runtime_constraints:
vcpus: 1
ram: 123
- mounts: {}
running_older:
uuid: zzzzz-xvhdp-cr4runningcntn2
output_path: test
command: ["echo", "hello"]
container_uuid: zzzzz-dz642-runningcontain2
+ mounts:
+ /tmp:
+ kind: tmp
+ capacity: 24000000000
runtime_constraints:
vcpus: 1
ram: 123
- mounts: {}
completed:
uuid: zzzzz-xvhdp-cr4completedctr
output_path: test
command: ["echo", "hello"]
container_uuid: zzzzz-dz642-runningcontain2
+ mounts:
+ /tmp:
+ kind: tmp
+ capacity: 24000000000
runtime_constraints:
vcpus: 1
ram: 123
- mounts: {}
cr_for_failed:
uuid: zzzzz-xvhdp-cr4failedcontnr
output_path: test
command: ["echo", "hello"]
container_uuid: zzzzz-dz642-runningcontainr
+ mounts:
+ /tmp:
+ kind: tmp
+ capacity: 24000000000
runtime_constraints:
vcpus: 1
ram: 123
- mounts: {}
running_to_be_deleted:
uuid: zzzzz-xvhdp-cr5runningcntnr
cwd: test
output_path: test
command: ["echo", "hello"]
+ mounts:
+ /tmp:
+ kind: tmp
+ capacity: 24000000000
container_uuid: zzzzz-dz642-runnincntrtodel
runtime_constraints:
vcpus: 1
ram: 123
- mounts: {}
completed_with_input_mounts:
uuid: zzzzz-xvhdp-crwithinputmnts
updated_at: <%= 1.minute.ago.to_s(:db) %>
started_at: <%= 1.minute.ago.to_s(:db) %>
container_image: test
- cwd: test
- output_path: test
+ cwd: /tmp
+ output_path: /tmp
command: ["echo", "hello"]
runtime_constraints:
ram: 12000000000
vcpus: 4
+ mounts:
+ /tmp:
+ kind: tmp
+ capacity: 24000000000
secret_mounts:
/secret/6x9:
kind: text
updated_at: <%= 2.minute.ago.to_s(:db) %>
started_at: <%= 2.minute.ago.to_s(:db) %>
container_image: test
- cwd: test
- output_path: test
+ cwd: /tmp
+ output_path: /tmp
command: ["echo", "hello"]
+ mounts:
+ /tmp:
+ kind: tmp
+ capacity: 24000000000
runtime_constraints:
ram: 12000000000
vcpus: 4
cwd: test
output_path: test
command: ["echo", "hello"]
+ mounts:
+ /tmp:
+ kind: tmp
+ capacity: 24000000000
runtime_constraints:
ram: 12000000000
vcpus: 4
cwd: test
output_path: test
command: ["echo", "hello"]
+ mounts:
+ /tmp:
+ kind: tmp
+ capacity: 24000000000
runtime_constraints:
ram: 12000000000
vcpus: 4
description: "Test project belonging to active user's first test project"
group_class: project
+afiltergroup:
+ uuid: zzzzz-j7d0g-thisfiltergroup
+ owner_uuid: zzzzz-tpzed-xurymjxw79nv3jz
+ created_at: 2014-04-21 15:37:48 -0400
+ modified_by_client_uuid: zzzzz-ozdt8-brczlopd8u8d0jr
+ modified_by_user_uuid: zzzzz-tpzed-xurymjxw79nv3jz
+ modified_at: 2014-04-21 15:37:48 -0400
+ updated_at: 2014-04-21 15:37:48 -0400
+ name: This filter group
+ group_class: filter
+ properties:
+ filters: [[ "collections.name", "like", "baz%" ], [ "groups.name", "=", "A Subproject" ]]
+
+afiltergroup2:
+ uuid: zzzzz-j7d0g-afiltergrouptwo
+ owner_uuid: zzzzz-tpzed-xurymjxw79nv3jz
+ created_at: 2014-04-21 15:37:48 -0400
+ modified_by_client_uuid: zzzzz-ozdt8-brczlopd8u8d0jr
+ modified_by_user_uuid: zzzzz-tpzed-xurymjxw79nv3jz
+ modified_at: 2014-04-21 15:37:48 -0400
+ updated_at: 2014-04-21 15:37:48 -0400
+ name: A filter group without filters
+ group_class: filter
+ properties:
+ filters: []
+
+afiltergroup3:
+ uuid: zzzzz-j7d0g-filtergroupthre
+ owner_uuid: zzzzz-tpzed-xurymjxw79nv3jz
+ created_at: 2014-04-21 15:37:48 -0400
+ modified_by_client_uuid: zzzzz-ozdt8-brczlopd8u8d0jr
+ modified_by_user_uuid: zzzzz-tpzed-xurymjxw79nv3jz
+ modified_at: 2014-04-21 15:37:48 -0400
+ updated_at: 2014-04-21 15:37:48 -0400
+ name: A filter group with an is_a collection filter
+ group_class: filter
+ properties:
+ filters: [["uuid", "is_a", "arvados#collection"]]
+
+afiltergroup4:
+ uuid: zzzzz-j7d0g-filtergroupfour
+ owner_uuid: zzzzz-tpzed-xurymjxw79nv3jz
+ created_at: 2014-04-21 15:37:48 -0400
+ modified_by_client_uuid: zzzzz-ozdt8-brczlopd8u8d0jr
+ modified_by_user_uuid: zzzzz-tpzed-xurymjxw79nv3jz
+ modified_at: 2014-04-21 15:37:48 -0400
+ updated_at: 2014-04-21 15:37:48 -0400
+ name: A filter group with an exists collections filter
+ group_class: filter
+ properties:
+ filters: [["collections.properties.listprop","exists",true],["uuid", "is_a", "arvados#collection"]]
+
+afiltergroup5:
+ uuid: zzzzz-j7d0g-filtergroupfive
+ owner_uuid: zzzzz-tpzed-xurymjxw79nv3jz
+ created_at: 2014-04-21 15:37:48 -0400
+ modified_by_client_uuid: zzzzz-ozdt8-brczlopd8u8d0jr
+ modified_by_user_uuid: zzzzz-tpzed-xurymjxw79nv3jz
+ modified_at: 2014-04-21 15:37:48 -0400
+ updated_at: 2014-04-21 15:37:48 -0400
+ name: A filter group with a contains collections filter
+ group_class: filter
+ properties:
+ filters: [["collections.properties.listprop","contains","elem1"],["uuid", "is_a", "arvados#collection"]]
+
future_project_viewing_group:
uuid: zzzzz-j7d0g-futrprojviewgrp
owner_uuid: zzzzz-tpzed-000000000000000
script: hash
repository: active/foo
script_version: 7def43a4d3f20789dda4700f703b5514cc3ed250
- supplied_script_version: master
+ supplied_script_version: main
script_parameters:
input: 1f4b0bc7583c2a7f9102c395f4ffc5e3+45
created_at: <%= 3.minute.ago.to_s(:db) %>
uuid: zzzzz-8i9sb-n7omg50bvt0m1nf
owner_uuid: zzzzz-j7d0g-zhxawtyetzwc5f0
modified_by_user_uuid: zzzzz-tpzed-xurymjxw79nv3jz
- repository: active/foo
+ repository: active/bar
script: running_job_script
script_version: 4fe459abe02d9b365932b8f5dc419439ab4e2577
state: Running
components:
foo:
script: foo
- script_version: master
+ script_version: main
script_parameters: {}
has_component_with_empty_script_parameters:
components:
foo:
script: foo
- script_version: master
+ script_version: main
has_component_with_completed_jobs:
# Test that the job "started_at" and "finished_at" fields are parsed
components:
foo:
script: foo
- script_version: master
+ script_version: main
script_parameters: {}
job:
uuid: zzzzz-8i9sb-rft1xdewxkwgxnz
- script_version: master
+ script_version: main
created_at: <%= 10.minute.ago.to_s(:db) %>
started_at: <%= 10.minute.ago.to_s(:db) %>
finished_at: <%= 9.minute.ago.to_s(:db) %>
done: 1
bar:
script: bar
- script_version: master
+ script_version: main
script_parameters: {}
job:
uuid: zzzzz-8i9sb-r2dtbzr6bfread7
- script_version: master
+ script_version: main
created_at: <%= 9.minute.ago.to_s(:db) %>
started_at: <%= 9.minute.ago.to_s(:db) %>
state: Running
done: 3
baz:
script: baz
- script_version: master
+ script_version: main
script_parameters: {}
job:
uuid: zzzzz-8i9sb-c7408rni11o7r6s
- script_version: master
+ script_version: main
created_at: <%= 9.minute.ago.to_s(:db) %>
state: Queued
tasks_summary: {}
components:
foo:
script: foo
- script_version: master
+ script_version: main
script_parameters: {}
job: {
uuid: zzzzz-8i9sb-pshmckwoma9plh7,
- script_version: master
+ script_version: main
}
components_is_jobspec:
# Helps test that clients cope with funny-shaped components.
# For an example, see #3321.
- uuid: zzzzz-d1hrv-jobspeccomponts
- created_at: <%= 30.minute.ago.to_s(:db) %>
+ uuid: zzzzz-d1hrv-1yfj61234abcdk4
+ created_at: <%= 2.minute.ago.to_s(:db) %>
owner_uuid: zzzzz-tpzed-xurymjxw79nv3jz
- created_at: 2014-04-14 12:35:04 -0400
- updated_at: 2014-04-14 12:35:04 -0400
- modified_at: 2014-04-14 12:35:04 -0400
modified_by_client_uuid: zzzzz-ozdt8-brczlopd8u8d0jr
modified_by_user_uuid: zzzzz-tpzed-xurymjxw79nv3jz
state: RunningOnServer
components:
script: foo
- script_version: master
+ script_version: main
script_parameters:
input:
required: true
components:
foo:
script: foo
- script_version: master
+ script_version: main
script_parameters:
input:
required: true
components:
foo:
script: foo
- script_version: master
+ script_version: main
script_parameters:
input:
required: true
components:
foo:
script: foo
- script_version: master
+ script_version: main
script_parameters:
input:
required: true
components:
foo:
script: foo
- script_version: master
+ script_version: main
script_parameters:
input:
required: true
components:
foo:
script: foo
- script_version: master
+ script_version: main
script_parameters:
input:
required: true
components:
foo:
script: foo
- script_version: master
+ script_version: main
script_parameters:
input:
required: true
uuid: zzzzz-8i9sb-jyq01m7in1jlofj
repository: active/foo
script: foo
- script_version: master
+ script_version: main
script_parameters:
input: zzzzz-4zz18-4en62shvi99lxd4
log: zzzzz-4zz18-4en62shvi99lxd4
components:
foo:
script: foo
- script_version: master
+ script_version: main
script_parameters:
input:
required: true
uuid: zzzzz-8i9sb-aceg2bnq7jt7kon
repository: active/foo
script: foo
- script_version: master
+ script_version: main
script_parameters:
input: zzzzz-4zz18-bv31uwvy3neko21
log: zzzzz-4zz18-bv31uwvy3neko21
components:
foo:
script: foo
- script_version: master
+ script_version: main
script_parameters:
input:
required: true
components:
foo:
script: foo
- script_version: master
+ script_version: main
script_parameters:
input:
required: true
components:
foo:
script: foo
- script_version: master
+ script_version: main
script_parameters:
input:
required: true
components:
foo:
script: foo
- script_version: master
+ script_version: main
script_parameters: {}
job:
uuid: zzzzz-8i9sb-pshmckwoma9plh7
- script_version: master
+ script_version: main
running_pipeline_with_complete_job:
uuid: zzzzz-d1hrv-partdonepipelin
components:
foo:
script: foo
- script_version: master
+ script_version: main
script_parameters: {}
job:
uuid: zzzzz-8i9sb-job1atlevel3noc
- script_version: master
+ script_version: main
created_at: <%= 12.hour.ago.to_s(:db) %>
started_at: <%= 12.hour.ago.to_s(:db) %>
state: Running
done: 1
bar:
script: bar
- script_version: master
+ script_version: main
script_parameters: {}
job:
uuid: zzzzz-8i9sb-job2atlevel3noc
- script_version: master
+ script_version: main
created_at: <%= 12.hour.ago.to_s(:db) %>
started_at: <%= 12.hour.ago.to_s(:db) %>
state: Running
components:
foo:
script: foo
- script_version: master
+ script_version: main
script_parameters:
input:
required: true
components:
foo:
script: foo
- script_version: master
+ script_version: main
script_parameters:
input:
required: true
components:
foo:
script: foo
- script_version: master
+ script_version: main
script_parameters:
input:
required: true
components:
part-one:
script: foo
- script_version: master
+ script_version: main
script_parameters:
input:
required: true
title: "Foo/bar pair"
part-two:
script: bar
- script_version: master
+ script_version: main
script_parameters:
input:
output_of: part-one
name: Pipeline Template with Jobspec Components
components:
script: foo
- script_version: master
+ script_version: main
script_parameters:
input:
required: true
components:
with-search:
script: foo
- script_version: master
+ script_version: main
script_parameters:
input:
required: true
components:
foo:
script: foo
- script_version: master
+ script_version: main
script_parameters:
input:
required: true
title: foo template input
bar:
script: bar
- script_version: master
+ script_version: main
script_parameters:
input:
required: true
components:
foo_component:
script: foo
- script_version: master
+ script_version: main
script_parameters:
input:
required: true
components:
part-one:
script: foo
- script_version: master
+ script_version: main
script_parameters:
input:
required: true
description: "Provide an input file"
part-two:
script: bar
- script_version: master
+ script_version: main
script_parameters:
input:
output_of: part-one
components:
work:
script: foo
- script_version: master
+ script_version: main
script_parameters:
input:
required: true
components:
foo_component:
script: foo
- script_version: master
+ script_version: main
script_parameters:
input:
required: true
name: Template to test owner uuid and name unique key violation upon removal
components:
script: foo
- script_version: master
+ script_version: main
script_parameters:
input:
required: true
name: Template to test owner uuid and name unique key violation upon removal
components:
script: foo
- script_version: master
+ script_version: main
script_parameters:
input:
required: true
components:
part-one:
script: foo
- script_version: master
+ script_version: main
script_parameters:
ex_string:
required: true
ex_string_def:
required: true
dataclass: string
- default: hello-testing-123
\ No newline at end of file
+ default: hello-testing-123
token_time = token.split('+', 2).first.to_i
assert_operator(token_time, :>=, @start_stamp, "error token too old")
assert_operator(token_time, :<=, now_timestamp, "error token too new")
- json_response['errors'].each do |err|
- assert_match(/req-[a-z0-9]{20}/, err, "X-Request-Id value missing on error message")
- end
end
def check_404(errmsg="Path not found")
check_error_token
end
- test "X-Request-Id header" do
- authorize_with :spectator
- get(:index)
- assert_match /^req-[0-9a-zA-Z]{20}$/, response.headers['X-Request-Id']
- end
-
- # The response header is the one that gets logged, so this test also
- # ensures we log the ID supplied in the request, if any.
- test "X-Request-Id given by client" do
- authorize_with :spectator
- @request.headers['X-Request-Id'] = 'abcdefG'
- get(:index)
- assert_equal 'abcdefG', response.headers['X-Request-Id']
- end
-
- test "X-Request-Id given by client is ignored if too long" do
- authorize_with :spectator
- @request.headers['X-Request-Id'] = 'abcdefG' * 1000
- get(:index)
- assert_match /^req-[0-9a-zA-Z]{20}$/, response.headers['X-Request-Id']
- end
-
['foo', '', 'FALSE', 'TRUE', nil, [true], {a:true}, '"true"'].each do |bogus|
test "bogus boolean parameter #{bogus.inspect} returns error" do
@controller = Arvados::V1::GroupsController.new
end
end
+ [:admin, :active].each do |token|
+ test "using '#{token}', get token details via 'current'" do
+ authorize_with token
+ get :current
+ assert_response 200
+ assert_equal json_response['scopes'], ['all']
+ end
+ end
+
[# anyone can look up the token they're currently using
[:admin, :admin, 200, 200, 1],
[:active, :active, 200, 200, 1],
end
end
- def assert_unsigned_manifest resp, label=''
- txt = resp['unsigned_manifest_text']
+ def assert_unsigned_manifest txt, label=''
assert_not_nil(txt, "#{label} unsigned_manifest_text was nil")
locs = 0
txt.scan(/ [[:xdigit:]]{32}\S*/) do |tok|
"past version not included on index")
end
- test "collections.get returns signed locators, and no unsigned_manifest_text" do
+ test "collections.get returns unsigned locators, and no unsigned_manifest_text" do
permit_unsigned_manifests
authorize_with :active
get :show, params: {id: collections(:foo_file).uuid}
assert_response :success
- assert_signed_manifest json_response['manifest_text'], 'foo_file'
+ assert_unsigned_manifest json_response["manifest_text"], 'foo_file'
refute_includes json_response, 'unsigned_manifest_text'
end
['v1token', 'v2token'].each do |token_method|
- test "correct signatures are given for #{token_method}" do
- token = api_client_authorizations(:active).send(token_method)
- authorize_with_token token
- get :show, params: {id: collections(:foo_file).uuid}
- assert_response :success
- assert_signed_manifest json_response['manifest_text'], 'foo_file', token: token
- end
-
test "signatures with #{token_method} are accepted" do
token = api_client_authorizations(:active).send(token_method)
signed = Blob.sign_locator(
},
}
assert_response :success
- assert_signed_manifest json_response['manifest_text'], 'updated', token: token
+ assert_unsigned_manifest json_response['manifest_text'], 'updated'
end
end
- test "index with manifest_text selected returns signed locators" do
+ test "index with manifest_text selected returns unsigned locators" do
columns = %w(uuid owner_uuid manifest_text)
authorize_with :active
get :index, params: {select: columns}
json_response["items"].each do |coll|
assert_equal(coll.keys - ['kind'], columns,
"Collections index did not respect selected columns")
- assert_signed_manifest coll['manifest_text'], coll['uuid']
+ assert_unsigned_manifest coll['manifest_text'], coll['uuid']
end
end
json_response["items"].each do |coll|
assert_equal(coll.keys - ['kind'], ['unsigned_manifest_text'],
"Collections index did not respect selected columns")
- locs += assert_unsigned_manifest coll, coll['uuid']
+ assert_nil coll['manifest_text']
+ locs += assert_unsigned_manifest coll['unsigned_manifest_text'], coll['uuid']
end
assert_operator locs, :>, 0, "no locators found in any manifests"
end
assert_not_nil assigns(:object)
resp = assigns(:object)
assert_equal foo_collection[:portable_data_hash], resp[:portable_data_hash]
- assert_signed_manifest resp[:manifest_text]
+ assert_unsigned_manifest resp[:manifest_text]
# The manifest in the response will have had permission hints added.
# Remove any permission hints in the response before comparing it to the source.
authorize_with :active
manifest_text = ". acbd18db4cc2f85cedef654fccc4a4d8+3 0:0:foo.txt\n"
if !unsigned
- manifest_text = Collection.sign_manifest manifest_text, api_token(:active)
+ manifest_text = Collection.sign_manifest_only_for_tests manifest_text, api_token(:active)
end
post :create, params: {
collection: {
assert_not_nil assigns(:object)
resp = JSON.parse(@response.body)
assert_equal manifest_uuid, resp['portable_data_hash']
- # All of the locators in the output must be signed.
+ # All of the signatures in the output must be valid.
resp['manifest_text'].lines.each do |entry|
m = /([[:xdigit:]]{32}\+\S+)/.match(entry)
- if m
+ if m && m[0].index('+A')
assert Blob.verify_signature m[0], signing_opts
end
end
assert_not_nil assigns(:object)
resp = JSON.parse(@response.body)
assert_equal manifest_uuid, resp['portable_data_hash']
- # All of the locators in the output must be signed.
+ # All of the signatures in the output must be valid.
resp['manifest_text'].lines.each do |entry|
m = /([[:xdigit:]]{32}\+\S+)/.match(entry)
- if m
+ if m && m[0].index('+A')
assert Blob.verify_signature m[0], signing_opts
end
end
assert_equal manifest_text, stripped_manifest
end
- test "multiple signed locators per line" do
- permit_unsigned_manifests
- authorize_with :active
- locators = %w(
- d41d8cd98f00b204e9800998ecf8427e+0
- acbd18db4cc2f85cedef654fccc4a4d8+3
- ea10d51bcf88862dbcc36eb292017dfd+45)
-
- signing_opts = {
- key: Rails.configuration.Collections.BlobSigningKey,
- api_token: api_token(:active),
- }
-
- unsigned_manifest = [".", *locators, "0:0:foo.txt\n"].join(" ")
- manifest_uuid = Digest::MD5.hexdigest(unsigned_manifest) +
- '+' +
- unsigned_manifest.length.to_s
-
- signed_locators = locators.map { |loc| Blob.sign_locator loc, signing_opts }
- signed_manifest = [".", *signed_locators, "0:0:foo.txt\n"].join(" ")
-
- post :create, params: {
- collection: {
- manifest_text: signed_manifest,
- portable_data_hash: manifest_uuid,
- }
- }
- assert_response :success
- assert_not_nil assigns(:object)
- resp = JSON.parse(@response.body)
- assert_equal manifest_uuid, resp['portable_data_hash']
- # All of the locators in the output must be signed.
- # Each line is of the form "path locator locator ... 0:0:file.txt"
- # entry.split[1..-2] will yield just the tokens in the middle of the line
- returned_locator_count = 0
- resp['manifest_text'].lines.each do |entry|
- entry.split[1..-2].each do |tok|
- returned_locator_count += 1
- assert Blob.verify_signature tok, signing_opts
- end
- end
- assert_equal locators.count, returned_locator_count
- end
-
test 'Reject manifest with unsigned blob' do
permit_unsigned_manifests false
authorize_with :active
assert_response :success
assert_equal col.version, json_response['version'], 'Trashing a collection should not create a new version'
end
+
+ [['<', :<],
+ ['<=', :<=],
+ ['>', :>],
+ ['>=', :>=],
+ ['=', :==]].each do |op, rubyop|
+ test "filter collections by replication_desired #{op} replication_confirmed" do
+ authorize_with(:active)
+ get :index, params: {
+ filters: [["(replication_desired #{op} replication_confirmed)", "=", true]],
+ }
+ assert_response :success
+ json_response["items"].each do |c|
+ assert_operator(c["replication_desired"], rubyop, c["replication_confirmed"])
+ end
+ end
+ end
+
+ ["(replication_desired < bogus)",
+ "replication_desired < replication_confirmed",
+ "(replication_desired < replication_confirmed",
+ "(replication_desired ! replication_confirmed)",
+ "(replication_desired <)",
+ "(replication_desired < manifest_text)",
+ "(manifest_text < manifest_text)", # currently only numeric attrs are supported
+ "(replication_desired < 2)", # currently only attrs are supported, not literals
+ "(1 < 2)",
+ ].each do |expr|
+ test "invalid filter expression #{expr}" do
+ authorize_with(:active)
+ get :index, params: {
+ filters: [[expr, "=", true]],
+ }
+ assert_response 422
+ end
+ end
+
+ test "invalid op/arg with filter expression" do
+ authorize_with(:active)
+ get :index, params: {
+ filters: [["replication_desired < replication_confirmed", "!=", false]],
+ }
+ assert_response 422
+ end
+
+ ["storage_classes_desired", "storage_classes_confirmed"].each do |attr|
+ test "filter collections by #{attr}" do
+ authorize_with(:active)
+ get :index, params: {
+ filters: [[attr, "=", '["default"]']]
+ }
+ assert_response :success
+ assert_not_equal 0, json_response["items"].length
+ json_response["items"].each do |c|
+ assert_equal ["default"], c[attr]
+ end
+ end
+ end
+
+ test "select param is respected in 'show' response" do
+ authorize_with :active
+ get :show, params: {
+ id: collections(:collection_owned_by_active).uuid,
+ select: ["name"],
+ }
+ assert_response :success
+ assert_raises ActiveModel::MissingAttributeError do
+ assigns(:object).manifest_text
+ end
+ assert_nil json_response["manifest_text"]
+ assert_nil json_response["properties"]
+ assert_equal collections(:collection_owned_by_active).name, json_response["name"]
+ end
+
+ test "select param is respected in 'update' response" do
+ authorize_with :active
+ post :update, params: {
+ id: collections(:collection_owned_by_active).uuid,
+ collection: {
+ manifest_text: ". d41d8cd98f00b204e9800998ecf8427e+0 0:0:foobar.txt\n",
+ },
+ select: ["name"],
+ }
+ assert_response :success
+ assert_nil json_response["manifest_text"]
+ assert_nil json_response["properties"]
+ assert_equal collections(:collection_owned_by_active).name, json_response["name"]
+ end
+
+ [nil,
+ [],
+ ["is_trashed", "trash_at"],
+ ["is_trashed", "trash_at", "portable_data_hash"],
+ ["portable_data_hash"],
+ ["portable_data_hash", "manifest_text"],
+ ].each do |select|
+ test "select=#{select.inspect} param is respected in 'get by pdh' response" do
+ authorize_with :active
+ get :show, params: {
+ id: collections(:collection_owned_by_active).portable_data_hash,
+ select: select,
+ }
+ assert_response :success
+ if !select || select.index("manifest_text")
+ assert_not_nil json_response["manifest_text"]
+ else
+ assert_nil json_response["manifest_text"]
+ end
+ end
+ end
end
json_response['errors'].join(' '))
end
- test 'error message for full text search on a specific column' do
+ test 'error message for unsupported full text search' do
@controller = Arvados::V1::CollectionsController.new
authorize_with :active
get :index, params: {
filters: [['uuid', '@@', 'abcdef']],
}
assert_response 422
- assert_match(/not supported/, json_response['errors'].join(' '))
- end
-
- test 'difficult characters in full text search' do
- @controller = Arvados::V1::CollectionsController.new
- authorize_with :active
- get :index, params: {
- filters: [['any', '@@', 'a|b"c']],
- }
- assert_response :success
- # (Doesn't matter so much which results are returned.)
- end
-
- test 'array operand in full text search' do
- @controller = Arvados::V1::CollectionsController.new
- authorize_with :active
- get :index, params: {
- filters: [['any', '@@', ['abc', 'def']]],
- }
- assert_response 422
- assert_match(/not supported/, json_response['errors'].join(' '))
+ assert_match(/no longer supported/, json_response['errors'].join(' '))
end
test 'api responses provide timestamps with nanoseconds' do
end
end
- test "full text search with count='none'" do
- @controller = Arvados::V1::GroupsController.new
- authorize_with :admin
-
- get :contents, params: {
- format: :json,
- count: 'none',
- limit: 1000,
- filters: [['any', '@@', Rails.configuration.ClusterID]],
- }
-
- assert_response :success
-
- all_objects = Hash.new(0)
- json_response['items'].map{|o| o['kind']}.each{|t| all_objects[t] += 1}
-
- assert_equal true, all_objects['arvados#group']>0
- assert_equal true, all_objects['arvados#job']>0
- assert_equal true, all_objects['arvados#pipelineInstance']>0
- assert_equal true, all_objects['arvados#pipelineTemplate']>0
-
- # Perform test again mimicking a second page request with:
- # last_object_class = PipelineInstance
- # and hence groups and jobs should not be included in the response
- # offset = 5, which means first 5 pipeline instances were already received in page 1
- # and hence the remaining pipeline instances and all other object types should be included in the response
-
- @test_counter = 0 # Reset executed action counter
-
- @controller = Arvados::V1::GroupsController.new
-
- get :contents, params: {
- format: :json,
- count: 'none',
- limit: 1000,
- offset: '5',
- last_object_class: 'PipelineInstance',
- filters: [['any', '@@', Rails.configuration.ClusterID]],
- }
-
- assert_response :success
-
- second_page = Hash.new(0)
- json_response['items'].map{|o| o['kind']}.each{|t| second_page[t] += 1}
-
- assert_equal false, second_page.include?('arvados#group')
- assert_equal false, second_page.include?('arvados#job')
- assert_equal true, second_page['arvados#pipelineInstance']>0
- assert_equal all_objects['arvados#pipelineInstance'], second_page['arvados#pipelineInstance']+5
- assert_equal true, second_page['arvados#pipelineTemplate']>0
- end
-
[['prop1', '=', 'value1', [:collection_with_prop1_value1], [:collection_with_prop1_value2, :collection_with_prop2_1]],
['prop1', '!=', 'value1', [:collection_with_prop1_value2, :collection_with_prop2_1], [:collection_with_prop1_value1]],
['prop1', 'exists', true, [:collection_with_prop1_value1, :collection_with_prop1_value2, :collection_with_prop1_value3, :collection_with_prop1_other1], [:collection_with_prop2_1]],
assert_includes(found, collections(:replication_desired_2_unconfirmed).uuid)
assert_includes(found, collections(:replication_desired_2_confirmed_2).uuid)
end
+
+ [
+ [1, "foo"],
+ [1, ["foo"]],
+ [1, ["bar"]],
+ [1, ["bar", "foo"]],
+ [0, ["foo", "qux"]],
+ [0, ["qux"]],
+ [nil, []],
+ [nil, [[]]],
+ [nil, [["bogus"]]],
+ [nil, [{"foo" => "bar"}]],
+ [nil, {"foo" => "bar"}],
+ ].each do |results, operand|
+ test "storage_classes_desired contains #{operand.inspect}" do
+ @controller = Arvados::V1::CollectionsController.new
+ authorize_with(:active)
+ c = Collection.create!(
+ manifest_text: "",
+ storage_classes_desired: ["foo", "bar", "baz"])
+ get :index, params: {
+ filters: [["storage_classes_desired", "contains", operand]],
+ }
+ if results.nil?
+ assert_response 422
+ next
+ end
+ assert_response :success
+ assert_equal results, json_response["items"].length
+ if results > 0
+ assert_equal c.uuid, json_response["items"][0]["uuid"]
+ end
+ end
+ end
+
+ test "collections properties contains top level key" do
+ @controller = Arvados::V1::CollectionsController.new
+ authorize_with(:active)
+ get :index, params: {
+ filters: [["properties", "contains", "prop1"]],
+ }
+ assert_response :success
+ assert_not_empty json_response["items"]
+ json_response["items"].each do |c|
+ assert c["properties"].has_key?("prop1")
+ end
+ end
end
class Arvados::V1::GroupsControllerTest < ActionController::TestCase
- test "attempt to delete group without read or write access" do
+ test "attempt to delete group that cannot be seen" do
+ Rails.configuration.Users.RoleGroupsVisibleToAll = false
authorize_with :active
post :destroy, params: {id: groups(:empty_lonely_group).uuid}
assert_response 404
end
+ test "attempt to delete group without read or write access" do
+ authorize_with :active
+ post :destroy, params: {id: groups(:empty_lonely_group).uuid}
+ assert_response 403
+ end
+
test "attempt to delete group without write access" do
authorize_with :active
post :destroy, params: {id: groups(:all_users).uuid}
assert_includes(owners, groups(:asubproject).uuid)
end
+ [:afiltergroup, :private_role].each do |grp|
+ test "delete non-project group #{grp}" do
+ authorize_with :admin
+ assert_not_nil Group.find_by_uuid(groups(grp).uuid)
+ assert !Group.find_by_uuid(groups(grp).uuid).is_trashed
+ post :destroy, params: {
+ id: groups(grp).uuid,
+ format: :json,
+ }
+ assert_response :success
+      # Non-project groups are deleted outright rather than trashed
+ assert_nil Group.find_by_uuid(groups(grp).uuid)
+ end
+ end
+
+ [
+ [false, :inactive, :private_role, false],
+ [false, :spectator, :private_role, false],
+ [false, :admin, :private_role, true],
+ [true, :inactive, :private_role, false],
+ [true, :spectator, :private_role, true],
+ [true, :admin, :private_role, true],
+ # project (non-role) groups are invisible even when RoleGroupsVisibleToAll is true
+ [true, :inactive, :private, false],
+ [true, :spectator, :private, false],
+ [true, :admin, :private, true],
+ ].each do |visibleToAll, userFixture, groupFixture, visible|
+ test "with RoleGroupsVisibleToAll=#{visibleToAll}, #{groupFixture} group is #{visible ? '' : 'in'}visible to #{userFixture} user" do
+ Rails.configuration.Users.RoleGroupsVisibleToAll = visibleToAll
+ authorize_with userFixture
+ get :show, params: {id: groups(groupFixture).uuid, format: :json}
+ if visible
+ assert_response :success
+ else
+ assert_response 404
+ end
+ end
+ end
+
### trashed project tests ###
#
BASE_FILTERS = {
'repository' => ['=', 'active/foo'],
'script' => ['=', 'hash'],
- 'script_version' => ['in git', 'master'],
+ 'script_version' => ['in git', 'main'],
'docker_image_locator' => ['=', nil],
'arvados_sdk_version' => ['=', nil],
}
refute(json_response.has_key?('items_available'))
end
+ test 'do not count items_available if count=none for group contents endpoint' do
+ @controller = Arvados::V1::GroupsController.new
+ authorize_with :active
+ get :contents, params: {
+ count: 'none',
+ }
+ assert_response(:success)
+ refute(json_response.has_key?('items_available'))
+ end
+
[{}, {count: nil}, {count: ''}, {count: 'exact'}].each do |params|
test "count items_available if params=#{params.inspect}" do
@controller = Arvados::V1::LinksController.new
@initial_link_count = Link.count
@vm_uuid = virtual_machines(:testvm).uuid
ActionMailer::Base.deliveries = []
+ Rails.configuration.Users.ActivatedUsersAreVisibleToOthers = false
end
test "activate a user after signing UA" do
"user's writable_by should include its owner_uuid")
end
- [
- [:admin, true],
- [:active, false],
- ].each do |auth_user, expect_success|
- test "update_uuid as #{auth_user}" do
- authorize_with auth_user
- orig_uuid = users(:active).uuid
- post :update_uuid, params: {
- id: orig_uuid,
- new_uuid: 'zbbbb-tpzed-abcde12345abcde',
- }
- if expect_success
- assert_response :success
- assert_empty User.where(uuid: orig_uuid)
- else
- assert_response 403
- assert_not_empty User.where(uuid: orig_uuid)
- end
- end
- end
-
test "merge with redirect_to_user_uuid=false" do
authorize_with :project_viewer_trustedclient
tok = api_client_authorizations(:project_viewer).api_token
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+require 'test_helper'
+
+class SysControllerTest < ActionController::TestCase
+ include CurrentApiClient
+ include DbCurrentTime
+
+ test "trash_sweep - delete expired tokens" do
+ assert_not_empty ApiClientAuthorization.where(uuid: api_client_authorizations(:expired).uuid)
+ authorize_with :admin
+ post :trash_sweep
+ assert_response :success
+ assert_empty ApiClientAuthorization.where(uuid: api_client_authorizations(:expired).uuid)
+ end
+
+ test "trash_sweep - fail with non-admin token" do
+ authorize_with :active
+ post :trash_sweep
+ assert_response 403
+ end
+
+ test "trash_sweep - move collections to trash" do
+ c = collections(:trashed_on_next_sweep)
+ refute_empty Collection.where('uuid=? and is_trashed=false', c.uuid)
+ assert_raises(ActiveRecord::RecordNotUnique) do
+ act_as_user users(:active) do
+ Collection.create!(owner_uuid: c.owner_uuid,
+ name: c.name)
+ end
+ end
+ authorize_with :admin
+ post :trash_sweep
+ assert_response :success
+ c = Collection.where('uuid=? and is_trashed=true', c.uuid).first
+ assert c
+ act_as_user users(:active) do
+ assert Collection.create!(owner_uuid: c.owner_uuid,
+ name: c.name)
+ end
+ end
+
+ test "trash_sweep - delete collections" do
+ uuid = 'zzzzz-4zz18-3u1p5umicfpqszp' # deleted_on_next_sweep
+ assert_not_empty Collection.where(uuid: uuid)
+ authorize_with :admin
+ post :trash_sweep
+ assert_response :success
+ assert_empty Collection.where(uuid: uuid)
+ end
+
+ test "trash_sweep - delete referring links" do
+ uuid = collections(:trashed_on_next_sweep).uuid
+ act_as_system_user do
+ assert_raises ActiveRecord::RecordInvalid do
+ # Cannot create because :trashed_on_next_sweep is already trashed
+ Link.create!(head_uuid: uuid,
+ tail_uuid: system_user_uuid,
+ link_class: 'whatever',
+ name: 'something')
+ end
+
+ # Bump trash_at to now + 1 minute
+ Collection.where(uuid: uuid).
+ update(trash_at: db_current_time + (1).minute)
+
+ # Not considered trashed now
+ Link.create!(head_uuid: uuid,
+ tail_uuid: system_user_uuid,
+ link_class: 'whatever',
+ name: 'something')
+ end
+ past = db_current_time
+ Collection.where(uuid: uuid).
+ update_all(is_trashed: true, trash_at: past, delete_at: past)
+ assert_not_empty Collection.where(uuid: uuid)
+ authorize_with :admin
+ post :trash_sweep
+ assert_response :success
+ assert_empty Collection.where(uuid: uuid)
+ end
+
+ test "trash_sweep - move projects to trash" do
+ p = groups(:trashed_on_next_sweep)
+ assert_empty Group.where('uuid=? and is_trashed=true', p.uuid)
+ authorize_with :admin
+ post :trash_sweep
+ assert_response :success
+ assert_not_empty Group.where('uuid=? and is_trashed=true', p.uuid)
+ end
+
+ test "trash_sweep - delete projects and their contents" do
+ g_foo = groups(:trashed_project)
+ g_bar = groups(:trashed_subproject)
+ g_baz = groups(:trashed_subproject3)
+ col = collections(:collection_in_trashed_subproject)
+ job = jobs(:job_in_trashed_project)
+ cr = container_requests(:cr_in_trashed_project)
+ # Record how many objects there were before the sweep
+ user_nr_was = User.all.length
+ coll_nr_was = Collection.all.length
+ group_nr_was = Group.where('group_class<>?', 'project').length
+ project_nr_was = Group.where(group_class: 'project').length
+ cr_nr_was = ContainerRequest.all.length
+ job_nr_was = Job.all.length
+ assert_not_empty Group.where(uuid: g_foo.uuid)
+ assert_not_empty Group.where(uuid: g_bar.uuid)
+ assert_not_empty Group.where(uuid: g_baz.uuid)
+ assert_not_empty Collection.where(uuid: col.uuid)
+ assert_not_empty Job.where(uuid: job.uuid)
+ assert_not_empty ContainerRequest.where(uuid: cr.uuid)
+
+ authorize_with :admin
+ post :trash_sweep
+ assert_response :success
+
+ assert_empty Group.where(uuid: g_foo.uuid)
+ assert_empty Group.where(uuid: g_bar.uuid)
+ assert_empty Group.where(uuid: g_baz.uuid)
+ assert_empty Collection.where(uuid: col.uuid)
+ assert_empty Job.where(uuid: job.uuid)
+ assert_empty ContainerRequest.where(uuid: cr.uuid)
+ # No unwanted deletions should have happened
+ assert_equal user_nr_was, User.all.length
+ assert_equal coll_nr_was-2, # collection_in_trashed_subproject
+ Collection.all.length # & deleted_on_next_sweep collections
+ assert_equal group_nr_was, Group.where('group_class<>?', 'project').length
+ assert_equal project_nr_was-3, Group.where(group_class: 'project').length
+ assert_equal cr_nr_was-1, ContainerRequest.all.length
+ assert_equal job_nr_was-1, Job.all.length
+ end
+
+end
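
The trash-sweep tests above exercise a two-phase lifecycle. As a rough sketch (the helper below is hypothetical, not the actual sweep implementation): objects whose `trash_at` has passed are flagged `is_trashed`, and objects whose `delete_at` has passed are removed entirely.

```ruby
# Hypothetical sketch of the two-phase sweep exercised by the tests above.
# Phase 1: anything past its trash_at becomes trashed (but still exists).
# Phase 2: anything past its delete_at is removed outright.
def sweep(rows, now)
  rows.each { |r| r[:is_trashed] = true if r[:trash_at] && r[:trash_at] <= now }
  rows.reject { |r| r[:delete_at] && r[:delete_at] <= now }
end

now = Time.now
rows = [
  {uuid: 'a', trash_at: now - 60, delete_at: now + 3600, is_trashed: false},
  {uuid: 'b', trash_at: now - 60, delete_at: now - 1,    is_trashed: false},
  {uuid: 'c', trash_at: nil,      delete_at: nil,        is_trashed: false},
]
remaining = sweep(rows, now)
```

Here 'a' survives the sweep but is trashed, 'b' is deleted, and 'c' is untouched, mirroring the fixtures (`trashed_on_next_sweep`, `deleted_on_next_sweep`) used above.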
test "redirect to joshid" do
api_client_page = 'http://client.example.com/home'
get :login, params: {return_to: api_client_page}
- assert_response :redirect
- assert_equal("http://test.host/auth/joshid?return_to=%2Chttp%3A%2F%2Fclient.example.com%2Fhome", @response.redirect_url)
- assert_nil assigns(:api_client)
+ # Not supported any more
+ assert_response 404
end
test "send token when user is already logged in" do
authorize_with :inactive
api_client_page = 'http://client.example.com/home'
get :login, params: {return_to: api_client_page}
+ assert_response :redirect
assert_not_nil assigns(:api_client)
assert_nil assigns(:api_client_auth).expires_at
end
authorize_with :inactive
api_client_page = 'http://client.example.com/home'
get :login, params: {return_to: api_client_page}
+ assert_response :redirect
assert_not_nil assigns(:api_client)
api_client_auth = assigns(:api_client_auth)
assert_in_delta(api_client_auth.expires_at,
Rails.configuration.Login.LoginCluster = 'zzzzz'
api_client_page = 'http://client.example.com/home'
get :login, params: {return_to: api_client_page}
- assert_response :redirect
- assert_equal("http://test.host/auth/joshid?return_to=%2Chttp%3A%2F%2Fclient.example.com%2Fhome", @response.redirect_url)
- assert_nil assigns(:api_client)
+ # Doesn't redirect, just fails.
+ assert_response 404
end
test "controller cannot create session without SystemRootToken" do
require 'tmpdir'
# Commit log for "foo" repository in test.git.tar
-# master is the main branch
-# b1 is a branch off of master
+# main is the main branch
+# b1 is a branch off of main
# tag1 is a tag
#
# 1de84a8 * b1
-# 077ba2a * master
+# 077ba2a * main
# 4fe459a * tag1
# 31ce37f * foo
require 'test_helper'
class ApiClientAuthorizationsApiTest < ActionDispatch::IntegrationTest
+ include DbCurrentTime
+ extend DbCurrentTime
fixtures :all
test "create system auth" do
assert_response 403
end
+ [nil, db_current_time + 2.hours].each do |desired_expiration|
+ test "expires_at gets clamped on non-admins when API.MaxTokenLifetime is set and desired expires_at #{desired_expiration.nil? ? 'is not set' : 'exceeds the limit'}" do
+ Rails.configuration.API.MaxTokenLifetime = 1.hour
+
+ # Test token creation
+ start_t = db_current_time
+ post "/arvados/v1/api_client_authorizations",
+ params: {
+ :format => :json,
+ :api_client_authorization => {
+ :owner_uuid => users(:active).uuid,
+ :expires_at => desired_expiration,
+ }
+ },
+ headers: {'HTTP_AUTHORIZATION' => "OAuth2 #{api_client_authorizations(:active_trustedclient).api_token}"}
+ end_t = db_current_time
+ assert_response 200
+ expiration_t = json_response['expires_at'].to_time
+ assert_operator expiration_t.to_f, :>, (start_t + Rails.configuration.API.MaxTokenLifetime).to_f
+ if !desired_expiration.nil?
+ assert_operator expiration_t.to_f, :<, desired_expiration.to_f
+ else
+ assert_operator expiration_t.to_f, :<, (end_t + Rails.configuration.API.MaxTokenLifetime).to_f
+ end
+
+ # Test token update
+ previous_expiration = expiration_t
+ token_uuid = json_response["uuid"]
+ start_t = db_current_time
+ put "/arvados/v1/api_client_authorizations/#{token_uuid}",
+ params: {
+ :api_client_authorization => {
+ :expires_at => desired_expiration
+ }
+ },
+ headers: {'HTTP_AUTHORIZATION' => "OAuth2 #{api_client_authorizations(:active_trustedclient).api_token}"}
+ end_t = db_current_time
+ assert_response 200
+ expiration_t = json_response['expires_at'].to_time
+ assert_operator previous_expiration.to_f, :<, expiration_t.to_f
+ assert_operator expiration_t.to_f, :>, (start_t + Rails.configuration.API.MaxTokenLifetime).to_f
+ if !desired_expiration.nil?
+ assert_operator expiration_t.to_f, :<, desired_expiration.to_f
+ else
+ assert_operator expiration_t.to_f, :<, (end_t + Rails.configuration.API.MaxTokenLifetime).to_f
+ end
+ end
+
+ test "behavior when expires_at is set to #{desired_expiration.nil? ? 'nil' : 'exceed the limit'} by admins when API.MaxTokenLifetime is set" do
+ Rails.configuration.API.MaxTokenLifetime = 1.hour
+
+ # Test token creation
+ post "/arvados/v1/api_client_authorizations",
+ params: {
+ :format => :json,
+ :api_client_authorization => {
+ :owner_uuid => users(:admin).uuid,
+ :expires_at => desired_expiration,
+ }
+ },
+ headers: {'HTTP_AUTHORIZATION' => "OAuth2 #{api_client_authorizations(:admin_trustedclient).api_token}"}
+ assert_response 200
+ if desired_expiration.nil?
+ # When expires_at is nil, default to MaxTokenLifetime
+ assert_operator (json_response['expires_at'].to_time.to_i - (db_current_time + Rails.configuration.API.MaxTokenLifetime).to_i).abs, :<, 2
+ else
+ assert_equal json_response['expires_at'].to_time.to_i, desired_expiration.to_i
+ end
+
+ # Test token update (reverse the above behavior)
+ token_uuid = json_response['uuid']
+ if desired_expiration.nil?
+ submitted_updated_expiration = db_current_time + Rails.configuration.API.MaxTokenLifetime + 1.hour
+ else
+ submitted_updated_expiration = nil
+ end
+ put "/arvados/v1/api_client_authorizations/#{token_uuid}",
+ params: {
+ :api_client_authorization => {
+ :expires_at => submitted_updated_expiration,
+ }
+ },
+ headers: {'HTTP_AUTHORIZATION' => "OAuth2 #{api_client_authorizations(:admin_trustedclient).api_token}"}
+ assert_response 200
+ if submitted_updated_expiration.nil?
+ assert_operator (json_response['expires_at'].to_time.to_i - (db_current_time + Rails.configuration.API.MaxTokenLifetime).to_i).abs, :<, 2
+ else
+ assert_equal json_response['expires_at'].to_time.to_i, submitted_updated_expiration.to_i
+ end
+ end
+ end
+
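
The clamping rule these tests encode for non-admin callers can be summarized as follows (a sketch under stated assumptions, not the server's actual code): a requested `expires_at` that is nil or beyond `now + MaxTokenLifetime` collapses to that limit.

```ruby
# Illustrative helper (hypothetical) for the non-admin clamping rule the
# tests above assert: nil or too-distant expirations clamp to the limit.
def clamp_expires_at(requested, now, max_lifetime)
  limit = now + max_lifetime
  (requested.nil? || requested > limit) ? limit : requested
end

now = Time.now
one_hour = 3600 # seconds, matching API.MaxTokenLifetime = 1.hour above
```

Admins, per the second test above, bypass this clamping entirely.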
+ test "get current token using salted token" do
+ salted = salt_token(fixture: :active, remote: 'abcde')
+ get('/arvados/v1/api_client_authorizations/current',
+ params: {remote: 'abcde'},
+ headers: {'HTTP_AUTHORIZATION' => "Bearer #{salted}"})
+ assert_response :success
+ assert_equal(json_response['uuid'], api_client_authorizations(:active).uuid)
+ assert_equal(json_response['scopes'], ['all'])
+ assert_not_nil(json_response['expires_at'])
+ assert_nil(json_response['api_token'])
+ end
end
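
The `salt_token` helper used in the test above produces a "v2" token whose secret part is keyed to the remote cluster. A minimal sketch, assuming the salted secret is HMAC-SHA1 of the original secret over the remote cluster ID (the helper name and constants here are illustrative):

```ruby
require 'openssl'

# Illustrative sketch (assumption): a salted "v2" token replaces the
# secret with HMAC-SHA1(secret, remote_cluster_id), hex-encoded, so the
# bare secret never leaves the home cluster.
def salt_token_for(uuid, secret, remote)
  "v2/#{uuid}/#{OpenSSL::HMAC.hexdigest('sha1', secret, remote)}"
end

salted = salt_token_for('zzzzz-gj3su-xxxxxxxxxxxxxxx', 'itsasecret', 'abcde')
```

The salted form still carries the token UUID, which is how the `current` endpoint above can look it up.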
test "create collection, update manifest, and search with filename" do
# create collection
- signed_manifest = Collection.sign_manifest(". bad42fa702ae3ea7d888fef11b46f450+44 0:44:my_test_file.txt\n", api_token(:active))
+ signed_manifest = Collection.sign_manifest_only_for_tests(". bad42fa702ae3ea7d888fef11b46f450+44 0:44:my_test_file.txt\n", api_token(:active))
post "/arvados/v1/collections",
params: {
format: :json,
search_using_filter 'my_test_file.txt', 1
# update the collection's manifest text
- signed_manifest = Collection.sign_manifest(". bad42fa702ae3ea7d888fef11b46f450+44 0:44:my_updated_test_file.txt\n", api_token(:active))
+ signed_manifest = Collection.sign_manifest_only_for_tests(". bad42fa702ae3ea7d888fef11b46f450+44 0:44:my_updated_test_file.txt\n", api_token(:active))
put "/arvados/v1/collections/#{created['uuid']}",
params: {
format: :json,
end
end
- test "search collection using full text search" do
- # create collection to be searched for
- signed_manifest = Collection.sign_manifest(". 85877ca2d7e05498dd3d109baf2df106+95+A3a4e26a366ee7e4ed3e476ccf05354761be2e4ae@545a9920 0:95:file_in_subdir1\n./subdir2/subdir3 2bbc341c702df4d8f42ec31f16c10120+64+A315d7e7bad2ce937e711fc454fae2d1194d14d64@545a9920 0:32:file1_in_subdir3.txt 32:32:file2_in_subdir3.txt\n./subdir2/subdir3/subdir4 2bbc341c702df4d8f42ec31f16c10120+64+A315d7e7bad2ce937e711fc454fae2d1194d14d64@545a9920 0:32:file3_in_subdir4.txt 32:32:file4_in_subdir4.txt\n", api_token(:active))
- post "/arvados/v1/collections",
- params: {
- format: :json,
- collection: {description: 'specific collection description', manifest_text: signed_manifest}.to_json,
- },
- headers: auth(:active)
- assert_response :success
- assert_equal true, json_response['manifest_text'].include?('file4_in_subdir4.txt')
-
- # search using the filename
- search_using_full_text_search 'subdir2', 0
- search_using_full_text_search 'subdir2:*', 1
- search_using_full_text_search 'subdir2/subdir3/subdir4', 1
- search_using_full_text_search 'file4:*', 1
- search_using_full_text_search 'file4_in_subdir4.txt', 1
- search_using_full_text_search 'subdir2 file4:*', 0 # first word is incomplete
- search_using_full_text_search 'subdir2/subdir3/subdir4 file4:*', 1
- search_using_full_text_search 'subdir2/subdir3/subdir4 file4_in_subdir4.txt', 1
- search_using_full_text_search 'ile4', 0 # not a prefix match
- end
-
- def search_using_full_text_search search_filter, expected_items
- get '/arvados/v1/collections',
- params: {:filters => [['any', '@@', search_filter]].to_json},
- headers: auth(:active)
- assert_response :success
- response_items = json_response['items']
- assert_not_nil response_items
- if expected_items == 0
- assert_empty response_items
- else
- refute_empty response_items
- first_item = response_items.first
- assert_not_nil first_item
- end
- end
-
- # search for the filename in the file_names column and expect error
- test "full text search not supported for individual columns" do
- get '/arvados/v1/collections',
- params: {:filters => [['name', '@@', 'General']].to_json},
- headers: auth(:active)
- assert_response 422
- end
-
- [
- 'quick fox',
- 'quick_brown fox',
- 'brown_ fox',
- 'fox dogs',
- ].each do |search_filter|
- test "full text search ignores special characters and finds with filter #{search_filter}" do
- # description: The quick_brown_fox jumps over the lazy_dog
- # full text search treats '_' as space apparently
- get '/arvados/v1/collections',
- params: {:filters => [['any', '@@', search_filter]].to_json},
- headers: auth(:active)
- assert_response 200
- response_items = json_response['items']
- assert_not_nil response_items
- first_item = response_items.first
- refute_empty first_item
- assert_equal first_item['description'], 'The quick_brown_fox jumps over the lazy_dog'
- end
- end
-
test "create and get collection with properties" do
# create collection to be searched for
- signed_manifest = Collection.sign_manifest(". bad42fa702ae3ea7d888fef11b46f450+44 0:44:my_test_file.txt\n", api_token(:active))
+ signed_manifest = Collection.sign_manifest_only_for_tests(". bad42fa702ae3ea7d888fef11b46f450+44 0:44:my_test_file.txt\n", api_token(:active))
post "/arvados/v1/collections",
params: {
format: :json,
test "create collection and update it with json encoded hash properties" do
# create collection to be searched for
- signed_manifest = Collection.sign_manifest(". bad42fa702ae3ea7d888fef11b46f450+44 0:44:my_test_file.txt\n", api_token(:active))
+ signed_manifest = Collection.sign_manifest_only_for_tests(". bad42fa702ae3ea7d888fef11b46f450+44 0:44:my_test_file.txt\n", api_token(:active))
post "/arvados/v1/collections",
params: {
format: :json,
Rails.configuration.Collections.CollectionVersioning = true
Rails.configuration.Collections.PreserveVersionIfIdle = -1 # Disable auto versioning
- signed_manifest = Collection.sign_manifest(". bad42fa702ae3ea7d888fef11b46f450+44 0:44:my_test_file.txt\n", api_token(:active))
+ signed_manifest = Collection.sign_manifest_only_for_tests(". bad42fa702ae3ea7d888fef11b46f450+44 0:44:my_test_file.txt\n", api_token(:active))
post "/arvados/v1/collections",
params: {
format: :json,
get("/arvados/v1/api_client_authorizations/current",
headers: authheaders)
assert_response 200
- #assert_not_empty json_response['uuid']
system_auth_uuid = json_response['uuid']
post("/arvados/v1/containers/#{containers(:queued).uuid}/lock",
assert_nil assigns(:object)
assert_not_nil json_response['errors']
assert_response 404
+ assert_match /^req-[0-9a-zA-Z]{20}$/, response.headers['X-Request-Id']
end
end
# Generally, new routes should appear under /arvados/v1/. If
# they appear elsewhere, that might have been caused by default
# rails generator behavior that we don't want.
- assert_match(/^\/(|\*a|arvados\/v1\/.*|auth\/.*|login|logout|database\/reset|discovery\/.*|static\/.*|themes\/.*|assets|_health\/.*)(\(\.:format\))?$/,
+ assert_match(/^\/(|\*a|arvados\/v1\/.*|auth\/.*|login|logout|database\/reset|discovery\/.*|static\/.*|sys\/trash_sweep|themes\/.*|assets|_health\/.*)(\(\.:format\))?$/,
route.path.spec.to_s,
"Unexpected new route: #{route.path.spec}")
end
end
+
+ test "X-Request-Id header" do
+ get "/", headers: auth(:spectator)
+ assert_match /^req-[0-9a-zA-Z]{20}$/, response.headers['X-Request-Id']
+ end
+
+ test "X-Request-Id header on non-existent object URL" do
+ get "/arvados/v1/container_requests/invalid",
+ params: {:format => :json}, headers: auth(:active)
+ assert_response 404
+ assert_match /^req-[0-9a-zA-Z]{20}$/, response.headers['X-Request-Id']
+ end
+
+ # The response header is the one that gets logged, so this test also
+ # ensures we log the ID supplied in the request, if any.
+ test "X-Request-Id given by client" do
+ get "/", headers: auth(:spectator).merge({'X-Request-Id': 'abcdefG'})
+ assert_equal 'abcdefG', response.headers['X-Request-Id']
+ end
+
+ test "X-Request-Id given by client is ignored if too long" do
+ authorize_with :spectator
+ long_reqId = 'abcdefG' * 1000
+ get "/", headers: auth(:spectator).merge({'X-Request-Id': long_reqId})
+ assert_match /^req-[0-9a-zA-Z]{20}$/, response.headers['X-Request-Id']
+ end
end
end
end
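
The `req-…` identifiers asserted above are opaque; a purely illustrative generator matching the asserted `/^req-[0-9a-zA-Z]{20}$/` shape, including the too-long-client-ID fallback (the length cutoff here is an assumption, not the server's actual limit):

```ruby
require 'securerandom'

ALPHANUM = [*'0'..'9', *'a'..'z', *'A'..'Z'].freeze

# Illustrative (hypothetical) request-ID generator: honor a client-supplied
# ID unless it exceeds some maximum length, otherwise mint req- + 20
# random alphanumerics, matching the format the tests above assert.
def request_id(client_supplied = nil)
  if client_supplied && !client_supplied.empty? && client_supplied.length <= 1024
    client_supplied
  else
    'req-' + Array.new(20) { ALPHANUM[SecureRandom.random_number(ALPHANUM.size)] }.join
  end
end
```

Echoing the ID back in the response header, as the comment above notes, is what guarantees the logged ID matches the one the client sent.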
- [
- ['Collection_', true], # collections and pipelines templates
- ['hash', true], # pipeline templates
- ['fa7aeb5140e2848d39b', false], # script_parameter of pipeline instances
- ['fa7aeb5140e2848d39b:*', true], # script_parameter of pipeline instances
- ['project pipeline', true], # finds "Completed pipeline in A Project"
- ['project pipeli:*', true], # finds "Completed pipeline in A Project"
- ['proje pipeli:*', false], # first word is incomplete, so no prefix match
- ['no-such-thing', false], # script_parameter of pipeline instances
- ].each do |search_filter, expect_results|
- test "full text search of group-owned objects for #{search_filter}" do
- get "/arvados/v1/groups/contents",
- params: {
- id: groups(:aproject).uuid,
- limit: 5,
- :filters => [['any', '@@', search_filter]].to_json
- },
- headers: auth(:active)
- assert_response :success
- if expect_results
- refute_empty json_response['items']
- json_response['items'].each do |item|
- assert item['uuid']
- assert_equal groups(:aproject).uuid, item['owner_uuid']
- end
- else
- assert_empty json_response['items']
- end
- end
- end
-
- test "full text search is not supported for individual columns" do
- get "/arvados/v1/groups/contents",
- params: {
- :filters => [['name', '@@', 'Private']].to_json
- },
- headers: auth(:active)
- assert_response 422
- end
-
test "group contents with include trash collections" do
get "/arvados/v1/groups/contents",
params: {
end
assert_equal true, found_projects.include?(groups(:starred_and_shared_active_user_project).uuid)
end
+
+ test 'count none works with offset' do
+ first_results = nil
+ (0..10).each do |offset|
+ get "/arvados/v1/groups/contents", params: {
+ id: groups(:aproject).uuid,
+ offset: offset,
+ format: :json,
+ order: :uuid,
+ count: :none,
+ }, headers: auth(:active)
+ assert_response :success
+ assert_nil json_response['items_available']
+ if first_results.nil?
+ first_results = json_response['items']
+ else
+ assert_equal first_results[offset]['uuid'], json_response['items'][0]['uuid']
+ end
+ end
+ end
end
class NonTransactionalGroupsTest < ActionDispatch::IntegrationTest
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+require 'stringio'
+
+class LoggingTest < ActionDispatch::IntegrationTest
+ fixtures :collections
+
+ test "request_id" do
+ buf = StringIO.new
+ logcopy = ActiveSupport::Logger.new(buf)
+ logcopy.level = :info
+ begin
+ Rails.logger.extend(ActiveSupport::Logger.broadcast(logcopy))
+ get "/arvados/v1/collections/#{collections(:foo_file).uuid}",
+ params: {:format => :json},
+ headers: auth(:active).merge({ 'X-Request-Id' => 'req-aaaaaaaaaaaaaaaaaaaa' })
+ assert_response :success
+ assert_match /^{.*"request_id":"req-aaaaaaaaaaaaaaaaaaaa"/, buf.string
+ ensure
+ # We don't seem to have an "unbroadcast" option, so this is how
+ # we avoid filling buf with unlimited logs from subsequent
+ # tests.
+ logcopy.level = :fatal
+ end
+ end
+end
params: {specimen: {}},
headers: {'HTTP_ACCEPT' => 'text/html'})
assert_response 302
- assert_match(%r{/auth/joshid$}, @response.headers['Location'],
+ assert_match(%r{http://www.example.com/login$}, @response.headers['Location'],
"HTML login prompt did not include expected redirect")
end
end
headers: auth(:active)
assert_response 404
- # add some permissions, including can_manage
- # permission for user :active
+ get "/arvados/v1/links",
+ params: {
+ :filters => [["link_class", "=", "permission"], ["head_uuid", "=", groups(:public).uuid]].to_json
+ },
+ headers: auth(:active)
+ assert_response :success
+ assert_equal [], json_response['items']
+
+ ### add some permissions, including can_manage
+ ### permission for user :active
post "/arvados/v1/links",
params: {
:format => :json,
assert_response :success
can_write_uuid = json_response['uuid']
+ # Still should not be able to read these permission links
+ get "/arvados/v1/permissions/#{groups(:public).uuid}",
+ params: nil,
+ headers: auth(:active)
+ assert_response 404
+
+ get "/arvados/v1/links",
+ params: {
+ :filters => [["link_class", "=", "permission"], ["head_uuid", "=", groups(:public).uuid]].to_json
+ },
+ headers: auth(:active)
+ assert_response :success
+ assert_equal [], json_response['items']
+
+ # Shouldn't be able to read links directly either
+ get "/arvados/v1/links/#{can_read_uuid}",
+ params: {},
+ headers: auth(:active)
+ assert_response 404
+
+ ### Now add a can_manage link
post "/arvados/v1/links",
params: {
:format => :json,
assert_response :success
can_manage_uuid = json_response['uuid']
- # Now user :active should be able to retrieve permissions
- # on group :public.
+ # user :active should be able to retrieve permissions
+ # on group :public using get_permissions
get("/arvados/v1/permissions/#{groups(:public).uuid}",
params: { :format => :json },
headers: auth(:active))
assert_includes perm_uuids, can_read_uuid, "can_read_uuid not found"
assert_includes perm_uuids, can_write_uuid, "can_write_uuid not found"
assert_includes perm_uuids, can_manage_uuid, "can_manage_uuid not found"
+
+ # user :active should be able to retrieve permissions
+ # on group :public using link list
+ get "/arvados/v1/links",
+ params: {
+ :filters => [["link_class", "=", "permission"], ["head_uuid", "=", groups(:public).uuid]].to_json
+ },
+ headers: auth(:active)
+ assert_response :success
+
+ perm_uuids = json_response['items'].map { |item| item['uuid'] }
+ assert_includes perm_uuids, can_read_uuid, "can_read_uuid not found"
+ assert_includes perm_uuids, can_write_uuid, "can_write_uuid not found"
+ assert_includes perm_uuids, can_manage_uuid, "can_manage_uuid not found"
+
+ # Should be able to read links directly too
+ get "/arvados/v1/links/#{can_read_uuid}",
+ params: {},
+ headers: auth(:active)
+ assert_response :success
+
+ ### Now delete the can_manage link
+ delete "/arvados/v1/links/#{can_manage_uuid}",
+ params: nil,
+ headers: auth(:active)
+ assert_response :success
+
+ # Should not be able to read these permission links again
+ get "/arvados/v1/permissions/#{groups(:public).uuid}",
+ params: nil,
+ headers: auth(:active)
+ assert_response 404
+
+ get "/arvados/v1/links",
+ params: {
+ :filters => [["link_class", "=", "permission"], ["head_uuid", "=", groups(:public).uuid]].to_json
+ },
+ headers: auth(:active)
+ assert_response :success
+ assert_equal [], json_response['items']
+
+ # Should not be able to read links directly either
+ get "/arvados/v1/links/#{can_read_uuid}",
+ params: {},
+ headers: auth(:active)
+ assert_response 404
end
test "get_permissions returns 404 for nonexistent uuid" do
res.status = @stub_status
res.body = @stub_content.is_a?(String) ? @stub_content : @stub_content.to_json
end
+ srv.mount_proc '/arvados/v1/api_client_authorizations/current' do |req, res|
+ if clusterid == 'zbbbb' and req.header['authorization'][0][10..14] == 'zbork'
+ # asking zbbbb about zbork should yield an error, zbbbb doesn't trust zbork
+ res.status = 401
+ return
+ end
+ res.status = @stub_token_status
+ if res.status == 200
+ res.body = {
+ uuid: api_client_authorizations(:active).uuid.sub('zzzzz', clusterid),
+ scopes: @stub_token_scopes,
+ }.to_json
+ end
+ end
Thread.new do
srv.start
end
is_active: true,
is_invited: true,
}
+ @stub_token_status = 200
+ @stub_token_scopes = ["all"]
end
teardown do
end
end
+ test 'authenticate with remote token that has limited scope' do
+ get '/arvados/v1/collections',
+ params: {format: 'json'},
+ headers: auth(remote: 'zbbbb')
+ assert_response :success
+
+ @stub_token_scopes = ["GET /arvados/v1/users/current"]
+
+ # re-authorize before cache expires
+ get '/arvados/v1/collections',
+ params: {format: 'json'},
+ headers: auth(remote: 'zbbbb')
+ assert_response :success
+
+ # simulate cache expiry
+ ApiClientAuthorization.where('uuid like ?', 'zbbbb-%').
+ update_all(expires_at: db_current_time - 1.minute)
+
+ # re-authorize after cache expires
+ get '/arvados/v1/collections',
+ params: {format: 'json'},
+ headers: auth(remote: 'zbbbb')
+ assert_response 403
+ end
+
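
The limited-scope behavior above can be pictured with a simple membership check (a sketch only; the real server's scope handling may also support prefix forms, which are omitted here):

```ruby
# Illustrative scope check (assumption): "all" permits any request;
# otherwise the scope list must contain the exact "METHOD /path" string.
def scope_allows?(scopes, method, path)
  scopes.include?('all') || scopes.include?("#{method} #{path}")
end
```

This is why the test above still succeeds before the cache expires (the cached remote token retains its old "all" scope) and gets 403 afterwards, once the narrower scope list is fetched.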
test 'authenticate with remote token' do
get '/arvados/v1/users/current',
params: {format: 'json'},
assert_response :success
# simulate cache expiry
- ApiClientAuthorization.where(
- uuid: salted_active_token(remote: 'zbbbb').split('/')[1]).
+ ApiClientAuthorization.where('uuid like ?', 'zbbbb-%').
update_all(expires_at: db_current_time - 1.minute)
# re-authorize after cache expires
end
test "list readable groups with salted token" do
+ Rails.configuration.Users.RoleGroupsVisibleToAll = false
salted_token = salt_token(fixture: :active, remote: 'zbbbb')
get '/arvados/v1/groups',
params: {
end
test "fewer distinct than total count" do
+ get "/arvados/v1/links",
+ params: {:format => :json, :select => ['link_class']},
+ headers: auth(:active)
+ assert_response :success
+ distinct_unspecified = json_response['items']
+
get "/arvados/v1/links",
params: {:format => :json, :select => ['link_class'], :distinct => false},
headers: auth(:active)
assert_response :success
- links = json_response['items']
+ distinct_false = json_response['items']
get "/arvados/v1/links",
params: {:format => :json, :select => ['link_class'], :distinct => true},
assert_response :success
distinct = json_response['items']
- assert_operator(distinct.count, :<, links.count,
- "distinct count should be less than link count")
- assert_equal links.uniq.count, distinct.count
+ assert_operator(distinct.count, :<, distinct_false.count,
+ "distinct=true count should be less than distinct=false count")
+ assert_equal(distinct_unspecified.count, distinct_false.count,
+ "distinct=false should be the default")
+ assert_equal distinct_false.uniq.count, distinct.count
end
test "select with order" do
def mock_auth_with(email: nil, username: nil, identity_url: nil, remote: nil, expected_response: :redirect)
mock = {
- 'provider' => 'josh_id',
- 'uid' => 'https://edward.example.com',
- 'info' => {
'identity_url' => 'https://edward.example.com',
'name' => 'Edward Example',
'first_name' => 'Edward',
'last_name' => 'Example',
- },
}
- mock['info']['email'] = email unless email.nil?
- mock['info']['username'] = username unless username.nil?
- mock['info']['identity_url'] = identity_url unless identity_url.nil?
- post('/auth/josh_id/callback',
- params: {return_to: client_url(remote: remote)},
- headers: {'omniauth.auth' => mock})
+ mock['email'] = email unless email.nil?
+ mock['username'] = username unless username.nil?
+ mock['identity_url'] = identity_url unless identity_url.nil?
+ post('/auth/controller/callback',
+ params: {return_to: client_url(remote: remote), :auth_info => SafeJSON.dump(mock)},
+ headers: {'Authorization' => 'Bearer ' + Rails.configuration.SystemRootToken})
errors = {
:redirect => 'Did not redirect to client with token',
test 'existing user login' do
mock_auth_with(identity_url: "https://active-user.openid.local")
u = assigns(:user)
- assert_equal 'zzzzz-tpzed-xurymjxw79nv3jz', u.uuid
+ assert_equal users(:active).uuid, u.uuid
end
test 'user redirect_to_user_uuid' do
mock_auth_with(identity_url: "https://redirects-to-active-user.openid.local")
u = assigns(:user)
- assert_equal 'zzzzz-tpzed-xurymjxw79nv3jz', u.uuid
+ assert_equal users(:active).uuid, u.uuid
end
test 'user double redirect_to_user_uuid' do
mock_auth_with(identity_url: "https://double-redirects-to-active-user.openid.local")
u = assigns(:user)
- assert_equal 'zzzzz-tpzed-xurymjxw79nv3jz', u.uuid
+ assert_equal users(:active).uuid, u.uuid
end
test 'create new user during omniauth callback' do
verify_link_existence created['uuid'], created['email'], true, true, true, true, false
+ # create a token
+ token = act_as_system_user do
+ ApiClientAuthorization.create!(user: User.find_by_uuid(created['uuid']), api_client: ApiClient.all.first).api_token
+ end
+
+ assert_equal 1, ApiClientAuthorization.where(user_id: User.find_by_uuid(created['uuid']).id).size, 'expected token not found'
+
post "/arvados/v1/users/#{created['uuid']}/unsetup", params: {}, headers: auth(:admin)
assert_response :success
created2 = json_response
assert_not_nil created2['uuid'], 'expected uuid for the newly created user'
assert_equal created['uuid'], created2['uuid'], 'expected uuid not found'
+ assert_equal 0, ApiClientAuthorization.where(user_id: User.find_by_uuid(created['uuid']).id).size, 'token should have been deleted by user unsetup'
verify_link_existence created['uuid'], created['email'], false, false, false, false, false
end
params: {},
headers: {"HTTP_AUTHORIZATION" => "Bearer #{token}"})
assert_response(:success)
- user = json_response
- assert_equal true, user['is_active']
+ userJSON = json_response
+ assert_equal true, userJSON['is_active']
post("/arvados/v1/users/#{user['uuid']}/unsetup",
params: {},
headers: auth(:admin))
assert_response :success
+ # Need to get a new token; the old one was invalidated by the unsetup call
+ act_as_system_user do
+ ap = ApiClientAuthorization.create!(user: user, api_client_id: 0)
+ token = ap.api_token
+ end
+
get("/arvados/v1/users/#{user['uuid']}",
params: {},
headers: {"HTTP_AUTHORIZATION" => "Bearer #{token}"})
assert_response(:success)
- user = json_response
- assert_equal false, user['is_active']
+ userJSON = json_response
+ assert_equal false, userJSON['is_active']
post("/arvados/v1/users/#{user['uuid']}/activate",
params: {},
SimpleCov.start do
add_filter '/test/'
add_filter 'initializers/secret_token'
- add_filter 'initializers/omniauth'
end
rescue Exception => e
$stderr.puts "SimpleCov unavailable (#{e}). Proceeding without."
# SPDX-License-Identifier: AGPL-3.0
require 'test_helper'
-require 'sweep_trashed_objects'
class ApiClientAuthorizationTest < ActiveSupport::TestCase
include CurrentApiClient
end
end
- test "delete expired in SweepTrashedObjects" do
- assert_not_empty ApiClientAuthorization.where(uuid: api_client_authorizations(:expired).uuid)
- SweepTrashedObjects.sweep_now
- assert_empty ApiClientAuthorization.where(uuid: api_client_authorizations(:expired).uuid)
- end
-
test "accepts SystemRootToken" do
assert_nil ApiClientAuthorization.validate(token: "xxxSystemRootTokenxxx")
[true, false].each do |token_lifetime_enabled|
test "configured workbench is trusted when token lifetime is#{token_lifetime_enabled ? '': ' not'} enabled" do
Rails.configuration.Login.TokenLifetime = token_lifetime_enabled ? 8.hours : 0
+ Rails.configuration.Login.IssueTrustedTokens = !token_lifetime_enabled
Rails.configuration.Services.Workbench1.ExternalURL = URI("http://wb1.example.com")
Rails.configuration.Services.Workbench2.ExternalURL = URI("https://wb2.example.com:443")
Rails.configuration.Login.TrustedClients = ActiveSupport::OrderedOptions.new
end
end
- test "full text search index exists on models" do
- indexes = {}
- conn = ActiveRecord::Base.connection
- conn.exec_query("SELECT i.relname as indname,
- i.relowner as indowner,
- idx.indrelid::regclass::text as table,
- am.amname as indam,
- idx.indkey,
- ARRAY(
- SELECT pg_get_indexdef(idx.indexrelid, k + 1, true)
- FROM generate_subscripts(idx.indkey, 1) as k
- ORDER BY k
- ) as keys,
- idx.indexprs IS NOT NULL as indexprs,
- idx.indpred IS NOT NULL as indpred
- FROM pg_index as idx
- JOIN pg_class as i
- ON i.oid = idx.indexrelid
- JOIN pg_am as am
- ON i.relam = am.oid
- JOIN pg_namespace as ns
- ON ns.oid = i.relnamespace
- AND ns.nspname = ANY(current_schemas(false))").each do |idx|
- if idx['keys'].match(/to_tsvector/)
- indexes[idx['table']] ||= []
- indexes[idx['table']] << idx
- end
- end
- fts_tables = ["collections", "container_requests", "groups", "jobs",
- "pipeline_instances", "pipeline_templates", "workflows"]
- fts_tables.each do |table|
- table_class = table.classify.constantize
- if table_class.respond_to?('full_text_searchable_columns')
- expect = table_class.full_text_searchable_columns
- ok = false
- indexes[table].andand.each do |idx|
- if expect == idx['keys'].scan(/COALESCE\(([A-Za-z_]+)/).flatten
- ok = true
- end
- end
- assert ok, "#{table} has no full-text index\nexpect: #{expect.inspect}\nfound: #{indexes[table].inspect}"
- end
- end
- end
-
[
%w[collections collections_trgm_text_search_idx],
%w[container_requests container_requests_trgm_text_search_idx],
c = time_block 'read' do
Collection.find_by_uuid(c.uuid)
end
- time_block 'sign' do
- c.signed_manifest_text
- end
- time_block 'sign + render' do
+ time_block 'render' do
c.as_api_response(nil)
end
loc = Blob.sign_locator(Digest::MD5.hexdigest('foo') + '+3',
# SPDX-License-Identifier: AGPL-3.0
require 'test_helper'
-require 'sweep_trashed_objects'
require 'fix_collection_versions_timestamps'
class CollectionTest < ActiveSupport::TestCase
c.reload
assert_equal 'foobar', c.name
assert_equal 2, c.version
+ # Simulate a keep-balance run and trigger a new versionable update
+ # This tests bug #18005
+ assert_nil c.replication_confirmed
+ assert_nil c.replication_confirmed_at
+ # Updates without validations/callbacks
+ c.update_column('modified_at', fifteen_min_ago)
+ c.update_column('replication_confirmed_at', Time.now)
+ c.update_column('replication_confirmed', 2)
+ c.reload
+ assert_equal fifteen_min_ago.to_i, c.modified_at.to_i
+ assert_not_nil c.replication_confirmed_at
+ assert_not_nil c.replication_confirmed
+ # Make the versionable update
+ c.update_attributes!({'name' => 'foobarbaz'})
+ c.reload
+ assert_equal 'foobarbaz', c.name
+ assert_equal 3, c.version
end
end
end
end
+ test "storage_classes_desired default respects config" do
+ saved = Rails.configuration.DefaultStorageClasses
+ Rails.configuration.DefaultStorageClasses = ["foo"]
+ begin
+ act_as_user users(:active) do
+ c = Collection.create!
+ assert_equal ["foo"], c.storage_classes_desired
+ end
+ ensure
+ Rails.configuration.DefaultStorageClasses = saved
+ end
+ end
+
test "storage_classes_desired cannot be empty" do
act_as_user users(:active) do
c = collections(:collection_owned_by_active)
test "clear replication_confirmed* when introducing a new block in manifest" do
c = collections(:replication_desired_2_confirmed_2)
act_as_user users(:active) do
- assert c.update_attributes(manifest_text: collections(:user_agreement).signed_manifest_text)
+ assert c.update_attributes(manifest_text: collections(:user_agreement).signed_manifest_text_only_for_tests)
assert_nil c.replication_confirmed
assert_nil c.replication_confirmed_at
end
test "don't clear replication_confirmed* when just renaming a file" do
c = collections(:replication_desired_2_confirmed_2)
act_as_user users(:active) do
- new_manifest = c.signed_manifest_text.sub(':bar', ':foo')
+ new_manifest = c.signed_manifest_text_only_for_tests.sub(':bar', ':foo')
assert c.update_attributes(manifest_text: new_manifest)
assert_equal 2, c.replication_confirmed
assert_not_nil c.replication_confirmed_at
test "don't clear replication_confirmed* when just deleting a data block" do
c = collections(:replication_desired_2_confirmed_2)
act_as_user users(:active) do
- new_manifest = c.signed_manifest_text
+ new_manifest = c.signed_manifest_text_only_for_tests
new_manifest.sub!(/ \S+:bar/, '')
new_manifest.sub!(/ acbd\S+/, '')
# Confirm that we did just remove a block from the manifest (if
# not, this test would pass without testing the relevant case):
- assert_operator new_manifest.length+40, :<, c.signed_manifest_text.length
+ assert_operator new_manifest.length+40, :<, c.signed_manifest_text_only_for_tests.length
assert c.update_attributes(manifest_text: new_manifest)
assert_equal 2, c.replication_confirmed
c = Collection.create!(manifest_text: ". d41d8cd98f00b204e9800998ecf8427e+0 0:0:x\n", name: 'foo')
c.update_attributes! trash_at: (t0 + 1.hours)
c.reload
- sig_exp = /\+A[0-9a-f]{40}\@([0-9]+)/.match(c.signed_manifest_text)[1].to_i
+ sig_exp = /\+A[0-9a-f]{40}\@([0-9]+)/.match(c.signed_manifest_text_only_for_tests)[1].to_i
assert_operator sig_exp.to_i, :<=, (t0 + 1.hours).to_i
end
end
c = Collection.create!(manifest_text: ". d41d8cd98f00b204e9800998ecf8427e+0 0:0:x\n",
name: 'foo',
trash_at: db_current_time + 1.years)
- sig_exp = /\+A[0-9a-f]{40}\@([0-9]+)/.match(c.signed_manifest_text)[1].to_i
+ sig_exp = /\+A[0-9a-f]{40}\@([0-9]+)/.match(c.signed_manifest_text_only_for_tests)[1].to_i
expect_max_sig_exp = db_current_time.to_i + Rails.configuration.Collections.BlobSigningTTL.to_i
assert_operator c.trash_at.to_i, :>, expect_max_sig_exp
assert_operator sig_exp.to_i, :<=, expect_max_sig_exp
assert_includes(coll_uuids, collections(:docker_image).uuid)
end
- test "move collections to trash in SweepTrashedObjects" do
- c = collections(:trashed_on_next_sweep)
- refute_empty Collection.where('uuid=? and is_trashed=false', c.uuid)
- assert_raises(ActiveRecord::RecordNotUnique) do
- act_as_user users(:active) do
- Collection.create!(owner_uuid: c.owner_uuid,
- name: c.name)
- end
- end
- SweepTrashedObjects.sweep_now
- c = Collection.where('uuid=? and is_trashed=true', c.uuid).first
- assert c
- act_as_user users(:active) do
- assert Collection.create!(owner_uuid: c.owner_uuid,
- name: c.name)
- end
- end
-
- test "delete collections in SweepTrashedObjects" do
- uuid = 'zzzzz-4zz18-3u1p5umicfpqszp' # deleted_on_next_sweep
- assert_not_empty Collection.where(uuid: uuid)
- SweepTrashedObjects.sweep_now
- assert_empty Collection.where(uuid: uuid)
- end
-
- test "delete referring links in SweepTrashedObjects" do
- uuid = collections(:trashed_on_next_sweep).uuid
- act_as_system_user do
- assert_raises ActiveRecord::RecordInvalid do
- # Cannot create because :trashed_on_next_sweep is already trashed
- Link.create!(head_uuid: uuid,
- tail_uuid: system_user_uuid,
- link_class: 'whatever',
- name: 'something')
- end
-
- # Bump trash_at to now + 1 minute
- Collection.where(uuid: uuid).
- update(trash_at: db_current_time + (1).minute)
-
- # Not considered trashed now
- Link.create!(head_uuid: uuid,
- tail_uuid: system_user_uuid,
- link_class: 'whatever',
- name: 'something')
- end
- past = db_current_time
- Collection.where(uuid: uuid).
- update_all(is_trashed: true, trash_at: past, delete_at: past)
- assert_not_empty Collection.where(uuid: uuid)
- SweepTrashedObjects.sweep_now
- assert_empty Collection.where(uuid: uuid)
- end
-
test "empty names are exempt from name uniqueness" do
act_as_user users(:active) do
c1 = Collection.new(name: nil, manifest_text: '', owner_uuid: groups(:aproject).uuid)
test 'find_commit_range does not bypass permissions' do
authorize_with :inactive
assert_raises ArgumentError do
- CommitsHelper::find_commit_range 'foo', nil, 'master', []
+ CommitsHelper::find_commit_range 'foo', nil, 'main', []
end
end
fake_gitdir = repositories(:foo).server_path
CommitsHelper::expects(:cache_dir_for).once.with(url).returns fake_gitdir
CommitsHelper::expects(:fetch_remote_repository).once.with(fake_gitdir, url).returns true
- c = CommitsHelper::find_commit_range url, nil, 'master', []
+ c = CommitsHelper::find_commit_range url, nil, 'main', []
refute_empty c
end
end
test "find_commit_range skips fetch_remote_repository for #{url}" do
CommitsHelper::expects(:fetch_remote_repository).never
assert_raises ArgumentError do
- CommitsHelper::find_commit_range url, nil, 'master', []
+ CommitsHelper::find_commit_range url, nil, 'main', []
end
end
end
test 'fetch_remote_repository does not leak commits across repositories' do
url = "http://localhost:1/fake/fake.git"
fetch_remote_from_local_repo url, :foo
- c = CommitsHelper::find_commit_range url, nil, 'master', []
+ c = CommitsHelper::find_commit_range url, nil, 'main', []
assert_equal ['077ba2ad3ea24a929091a9e6ce545c93199b8e57'], c
url = "http://localhost:2/fake/fake.git"
def with_foo_repository
Dir.chdir("#{Rails.configuration.Git.Repositories}/#{repositories(:foo).uuid}") do
- must_pipe("git checkout master 2>&1")
+ must_pipe("git checkout main 2>&1")
yield
end
end
assert_equal ['31ce37fe365b3dc204300a3e4c396ad333ed0556'], a
#test "test_branch1" do
- a = CommitsHelper::find_commit_range('active/foo', nil, 'master', nil)
+ a = CommitsHelper::find_commit_range('active/foo', nil, 'main', nil)
assert_includes(a, '077ba2ad3ea24a929091a9e6ce545c93199b8e57')
#test "test_branch2" do
#test "test_tag" do
# complains "fatal: ambiguous argument 'tag1': unknown revision or path
# not in the working tree."
- a = CommitsHelper::find_commit_range('active/foo', 'tag1', 'master', nil)
+ a = CommitsHelper::find_commit_range('active/foo', 'tag1', 'main', nil)
assert_equal ['077ba2ad3ea24a929091a9e6ce545c93199b8e57', '4fe459abe02d9b365932b8f5dc419439ab4e2577'], a
#test "test_multi_revision_exclude" do
['Committed', false, {container_count: 2}],
['Committed', false, {container_count: 0}],
['Committed', false, {container_count: nil}],
+ ['Committed', true, {priority: 0, mounts: {"/out" => {"kind" => "tmp", "capacity" => 1000000}}}],
+ ['Committed', true, {priority: 0, mounts: {"/out" => {"capacity" => 1000000, "kind" => "tmp"}}}],
+ # Addition of default values for mounts / runtime_constraints /
+ # scheduling_parameters, as happens in a round-trip through the
+ # controller, has no real effect and should be
+ # accepted/ignored rather than causing an error when the CR state
+ # dictates that those attributes are not allowed to change.
+ ['Committed', true, {priority: 0, mounts: {"/out" => {"capacity" => 0, "kind" => "tmp"}}}, {mounts: {"/out" => {"kind" => "tmp"}}}],
+ ['Committed', true, {priority: 0, mounts: {"/out" => {"capacity" => 1000000, "kind" => "tmp", "exclude_from_output": false}}}],
+ ['Committed', true, {priority: 0, mounts: {"/out" => {"capacity" => 1000000, "kind" => "tmp", "repository_name": ""}}}],
+ ['Committed', true, {priority: 0, mounts: {"/out" => {"capacity" => 1000000, "kind" => "tmp", "content": nil}}}],
+ ['Committed', false, {priority: 0, mounts: {"/out" => {"capacity" => 1000000, "kind" => "tmp", "content": {}}}}],
+ ['Committed', false, {priority: 0, mounts: {"/out" => {"capacity" => 1000000, "kind" => "tmp", "repository_name": "foo"}}}],
+ ['Committed', false, {priority: 0, mounts: {"/out" => {"kind" => "tmp", "capacity" => 1234567}}}],
+ ['Committed', false, {priority: 0, mounts: {}}],
+ ['Committed', true, {priority: 0, runtime_constraints: {"vcpus" => 1, "ram" => 2}}],
+ ['Committed', true, {priority: 0, runtime_constraints: {"vcpus" => 1, "ram" => 2, "keep_cache_ram" => 0}}],
+ ['Committed', true, {priority: 0, runtime_constraints: {"vcpus" => 1, "ram" => 2, "API" => false}}],
+ ['Committed', false, {priority: 0, runtime_constraints: {"vcpus" => 1, "ram" => 2, "keep_cache_ram" => 1}}],
+ ['Committed', false, {priority: 0, runtime_constraints: {"vcpus" => 1, "ram" => 2, "API" => true}}],
+ ['Committed', true, {priority: 0, scheduling_parameters: {"preemptible" => false}}],
+ ['Committed', true, {priority: 0, scheduling_parameters: {"partitions" => []}}],
+ ['Committed', true, {priority: 0, scheduling_parameters: {"max_run_time" => 0}}],
+ ['Committed', false, {priority: 0, scheduling_parameters: {"preemptible" => true}}],
+ ['Committed', false, {priority: 0, scheduling_parameters: {"partitions" => ["foo"]}}],
+ ['Committed', false, {priority: 0, scheduling_parameters: {"max_run_time" => 1}}],
['Final', false, {state: ContainerRequest::Committed, name: "foobar"}],
['Final', false, {name: "foobar", priority: 123}],
['Final', false, {name: "foobar", output_uuid: "zzzzz-4zz18-znfnqtbbv4spc3w"}],
['Final', false, {container_count: 2}],
['Final', true, {name: "foobar"}],
['Final', true, {name: "foobar", description: "baz"}],
- ].each do |state, permitted, updates|
+ ].each do |state, permitted, updates, create_attrs|
test "state=#{state} can#{'not' if !permitted} update #{updates.inspect}" do
act_as_user users(:active) do
- cr = create_minimal_req!(priority: 1,
- state: "Committed",
- container_count_max: 1)
+ attrs = {
+ priority: 1,
+ state: "Committed",
+ container_count_max: 1
+ }
+ if !create_attrs.nil?
+ attrs.merge!(create_attrs)
+ end
+ cr = create_minimal_req!(attrs)
case state
when 'Committed'
# already done
cr.save!
end
end
+
+ test "default output_storage_classes" do
+ saved = Rails.configuration.DefaultStorageClasses
+ Rails.configuration.DefaultStorageClasses = ["foo"]
+ begin
+ act_as_user users(:active) do
+ cr = create_minimal_req!(priority: 1,
+ state: ContainerRequest::Committed,
+ output_name: 'foo')
+ run_container(cr)
+ cr.reload
+ output = Collection.find_by_uuid(cr.output_uuid)
+ assert_equal ["foo"], output.storage_classes_desired
+ end
+ ensure
+ Rails.configuration.DefaultStorageClasses = saved
+ end
+ end
+
+ test "setting output_storage_classes" do
+ act_as_user users(:active) do
+ cr = create_minimal_req!(priority: 1,
+ state: ContainerRequest::Committed,
+ output_name: 'foo',
+ output_storage_classes: ["foo_storage_class", "bar_storage_class"])
+ run_container(cr)
+ cr.reload
+ output = Collection.find_by_uuid(cr.output_uuid)
+ assert_equal ["foo_storage_class", "bar_storage_class"], output.storage_classes_desired
+ log = Collection.find_by_uuid(cr.log_uuid)
+ assert_equal ["foo_storage_class", "bar_storage_class"], log.storage_classes_desired
+ end
+ end
+
+ test "reusing container with different container_request.output_storage_classes" do
+ common_attrs = {cwd: "test",
+ priority: 1,
+ command: ["echo", "hello"],
+ output_path: "test",
+ runtime_constraints: {"vcpus" => 4,
+ "ram" => 12000000000},
+ mounts: {"test" => {"kind" => "json"}},
+ environment: {"var" => "value1"},
+ output_storage_classes: ["foo_storage_class"]}
+ set_user_from_auth :active
+ cr1 = create_minimal_req!(common_attrs.merge({state: ContainerRequest::Committed}))
+ cont1 = run_container(cr1)
+ cr1.reload
+
+ output1 = Collection.find_by_uuid(cr1.output_uuid)
+
+ # Testing with the default use_existing value
+ cr2 = create_minimal_req!(common_attrs.merge({state: ContainerRequest::Uncommitted,
+ output_storage_classes: ["bar_storage_class"]}))
+
+ assert_not_nil cr1.container_uuid
+ assert_nil cr2.container_uuid
+
+ # Update cr2 to committed state, check for reuse, then run it
+ cr2.update_attributes!({state: ContainerRequest::Committed})
+ assert_equal cr1.container_uuid, cr2.container_uuid
+
+ cr2.reload
+ output2 = Collection.find_by_uuid(cr2.output_uuid)
+
+ # The original CR output keeps the original storage class,
+ # but the second CR output gets the new storage class.
+ assert_equal ["foo_storage_class"], cont1.output_storage_classes
+ assert_equal ["foo_storage_class"], output1.storage_classes_desired
+ assert_equal ["bar_storage_class"], output2.storage_classes_desired
+ end
end
check_no_change_from_cancelled c
end
+ test "Container locked with non-expiring token" do
+ Rails.configuration.API.TokenMaxLifetime = 1.hour
+ set_user_from_auth :active
+ c, _ = minimal_new
+ set_user_from_auth :dispatch1
+ assert c.lock, show_errors(c)
+ refute c.auth.nil?
+ assert c.auth.expires_at.nil?
+ assert c.auth.user_id == User.find_by_uuid(users(:active).uuid).id
+ end
+
test "Container locked cancel with log" do
set_user_from_auth :active
c, _ = minimal_new
assert g_foo.errors.messages[:owner_uuid].join(" ").match(/ownership cycle/)
end
- test "cannot create a group that is not a 'role' or 'project'" do
+ test "cannot create a group that is not a 'role' or 'project' or 'filter'" do
set_user_from_auth :active_trustedclient
assert_raises(ActiveRecord::RecordInvalid) do
assert User.readable_by(users(:admin)).where(uuid: u_bar.uuid).any?
end
- test "move projects to trash in SweepTrashedObjects" do
- p = groups(:trashed_on_next_sweep)
- assert_empty Group.where('uuid=? and is_trashed=true', p.uuid)
- SweepTrashedObjects.sweep_now
- assert_not_empty Group.where('uuid=? and is_trashed=true', p.uuid)
- end
-
- test "delete projects and their contents in SweepTrashedObjects" do
- g_foo = groups(:trashed_project)
- g_bar = groups(:trashed_subproject)
- g_baz = groups(:trashed_subproject3)
- col = collections(:collection_in_trashed_subproject)
- job = jobs(:job_in_trashed_project)
- cr = container_requests(:cr_in_trashed_project)
- # Save how many objects were before the sweep
- user_nr_was = User.all.length
- coll_nr_was = Collection.all.length
- group_nr_was = Group.where('group_class<>?', 'project').length
- project_nr_was = Group.where(group_class: 'project').length
- cr_nr_was = ContainerRequest.all.length
- job_nr_was = Job.all.length
- assert_not_empty Group.where(uuid: g_foo.uuid)
- assert_not_empty Group.where(uuid: g_bar.uuid)
- assert_not_empty Group.where(uuid: g_baz.uuid)
- assert_not_empty Collection.where(uuid: col.uuid)
- assert_not_empty Job.where(uuid: job.uuid)
- assert_not_empty ContainerRequest.where(uuid: cr.uuid)
- SweepTrashedObjects.sweep_now
- assert_empty Group.where(uuid: g_foo.uuid)
- assert_empty Group.where(uuid: g_bar.uuid)
- assert_empty Group.where(uuid: g_baz.uuid)
- assert_empty Collection.where(uuid: col.uuid)
- assert_empty Job.where(uuid: job.uuid)
- assert_empty ContainerRequest.where(uuid: cr.uuid)
- # No unwanted deletions should have happened
- assert_equal user_nr_was, User.all.length
- assert_equal coll_nr_was-2, # collection_in_trashed_subproject
- Collection.all.length # & deleted_on_next_sweep collections
- assert_equal group_nr_was, Group.where('group_class<>?', 'project').length
- assert_equal project_nr_was-3, Group.where(group_class: 'project').length
- assert_equal cr_nr_was-1, ContainerRequest.all.length
- assert_equal job_nr_was-1, Job.all.length
- end
-
test "project names must be displayable in a filesystem" do
set_user_from_auth :active
["", "{SOLIDUS}"].each do |subst|
Rails.configuration.Collections.ForwardSlashNameSubstitution = subst
proj = Group.create group_class: "project"
role = Group.create group_class: "role"
+ filt = Group.create group_class: "filter", properties: {"filters":[]}
[[nil, true],
["", true],
[".", false],
role.name = name
assert_equal true, role.valid?
proj.name = name
- assert_equal valid, proj.valid?, "#{name.inspect} should be #{valid ? "valid" : "invalid"}"
+ assert_equal valid, proj.valid?, "project: #{name.inspect} should be #{valid ? "valid" : "invalid"}"
+ filt.name = name
+ assert_equal valid, filt.valid?, "filter: #{name.inspect} should be #{valid ? "valid" : "invalid"}"
end
end
end
# Default (valid) set of attributes, with given overrides
{
script: "hash",
- script_version: "master",
+ script_version: "main",
repository: "active/foo",
}.merge(merge_me)
end
"new #{o_class} should really be in DB")
old_uuid = o.uuid
new_uuid = o.uuid.sub(/..........$/, rand(2**256).to_s(36)[0..9])
- if o.respond_to? :update_uuid
- o.update_uuid(new_uuid: new_uuid)
- else
- assert(o.update_attributes(uuid: new_uuid),
- "should change #{o_class} uuid from #{old_uuid} to #{new_uuid}")
- end
+ assert(o.update_attributes(uuid: new_uuid),
+ "should change #{o_class} uuid from #{old_uuid} to #{new_uuid}")
assert_equal(false, o_class.where(uuid: old_uuid).any?,
"#{old_uuid} should disappear when renamed to #{new_uuid}")
end
check_permissions_against_full_refresh
end
- test "change uuid of User that owns self" do
- o = User.create!
- assert User.where(uuid: o.uuid).any?, "new User should really be in DB"
- assert_equal(true, o.update_attributes(owner_uuid: o.uuid),
- "setting owner to self should work")
- old_uuid = o.uuid
- new_uuid = o.uuid.sub(/..........$/, rand(2**256).to_s(36)[0..9])
- o.update_uuid(new_uuid: new_uuid)
- o = User.find_by_uuid(new_uuid)
- assert_equal(false, User.where(uuid: old_uuid).any?,
- "#{old_uuid} should not be in DB after deleting")
- assert_equal(true, User.where(uuid: new_uuid).any?,
- "#{new_uuid} should be in DB after renaming")
- assert_equal(new_uuid, User.where(uuid: new_uuid).first.owner_uuid,
- "#{new_uuid} should be its own owner in DB after renaming")
- end
-
end
end
test "manager user gets permission to minions' articles via can_manage link" do
+ Rails.configuration.Users.RoleGroupsVisibleToAll = false
+ Rails.configuration.Users.ActivatedUsersAreVisibleToOthers = false
manager = create :active_user, first_name: "Manage", last_name: "Er"
minion = create :active_user, first_name: "Min", last_name: "Ion"
minions_specimen = act_as_user minion do
end
test "users with bidirectional read permission in group can see each other, but cannot see each other's private articles" do
+ Rails.configuration.Users.ActivatedUsersAreVisibleToOthers = false
a = create :active_user, first_name: "A"
b = create :active_user, first_name: "B"
other = create :active_user, first_name: "OTHER"
test "account is setup" do
user = users :active
+ Rails.configuration.Users.UserNotifierEmailBcc = ConfigLoader.to_OrderedOptions({"bcc-notify@example.com"=>{},"bcc-notify2@example.com"=>{}})
Rails.configuration.Users.UserSetupMailText = %{
<% if not @user.full_name.empty? -%>
<%= @user.full_name %>,
# Test the body of the sent email contains what we expect it to
assert_equal Rails.configuration.Users.UserNotifierEmailFrom, email.from.first
+ assert_equal Rails.configuration.Users.UserNotifierEmailBcc.stringify_keys.keys, email.bcc
assert_equal user.email, email.to.first
assert_equal 'Welcome to Arvados - account enabled', email.subject
assert (email.body.to_s.include? 'Your Arvados shell account has been set up'),
assert_not_allowed { User.new.save }
end
- test "setup new user" do
- set_user_from_auth :admin
+ [true, false].each do |visible|
+ test "setup new user with ActivatedUsersAreVisibleToOthers=#{visible}" do
+ Rails.configuration.Users.ActivatedUsersAreVisibleToOthers = visible
+ set_user_from_auth :admin
- email = 'foo@example.com'
+ email = 'foo@example.com'
- user = User.create ({uuid: 'zzzzz-tpzed-abcdefghijklmno', email: email})
+ user = User.create ({uuid: 'zzzzz-tpzed-abcdefghijklmno', email: email})
- vm = VirtualMachine.create
+ vm = VirtualMachine.create
- response = user.setup(repo_name: 'foo/testrepo',
- vm_uuid: vm.uuid)
+ response = user.setup(repo_name: 'foo/testrepo',
+ vm_uuid: vm.uuid)
- resp_user = find_obj_in_resp response, 'User'
- verify_user resp_user, email
+ resp_user = find_obj_in_resp response, 'User'
+ verify_user resp_user, email
- group_perm = find_obj_in_resp response, 'Link', 'arvados#group'
- verify_link group_perm, 'permission', 'can_read', resp_user[:uuid], nil
+ group_perm = find_obj_in_resp response, 'Link', 'arvados#group'
+ verify_link group_perm, 'permission', 'can_read', resp_user[:uuid], nil
- repo_perm = find_obj_in_resp response, 'Link', 'arvados#repository'
- verify_link repo_perm, 'permission', 'can_manage', resp_user[:uuid], nil
+ group_perm2 = find_obj_in_resp response, 'Link', 'arvados#user'
+ if visible
+ verify_link group_perm2, 'permission', 'can_read', groups(:all_users).uuid, nil
+ else
+ assert_nil group_perm2
+ end
- vm_perm = find_obj_in_resp response, 'Link', 'arvados#virtualMachine'
- verify_link vm_perm, 'permission', 'can_login', resp_user[:uuid], vm.uuid
- assert_equal("foo", vm_perm.properties["username"])
+ repo_perm = find_obj_in_resp response, 'Link', 'arvados#repository'
+ verify_link repo_perm, 'permission', 'can_manage', resp_user[:uuid], nil
+
+ vm_perm = find_obj_in_resp response, 'Link', 'arvados#virtualMachine'
+ verify_link vm_perm, 'permission', 'can_login', resp_user[:uuid], vm.uuid
+ assert_equal("foo", vm_perm.properties["username"])
+ end
end
test "setup new user with junk in database" do
group_perm = find_obj_in_resp response, 'Link', 'arvados#group'
verify_link group_perm, 'permission', 'can_read', resp_user[:uuid], nil
+ group_perm2 = find_obj_in_resp response, 'Link', 'arvados#user'
+ verify_link group_perm2, 'permission', 'can_read', groups(:all_users).uuid, nil
+
# invoke setup again with repo_name
response = user.setup(repo_name: 'foo/testrepo')
resp_user = find_obj_in_resp response, 'User', nil
break
end
else # looking for a link
- if ArvadosModel::resource_class_for_uuid(x['head_uuid']).kind == head_kind
+ if ArvadosModel::resource_class_for_uuid(x['head_uuid']).andand.kind == head_kind
return_obj = x
break
end
end
end
- [
- [:active, 'zzzzz-borkd-abcde12345abcde'],
- [:active, 'zzzzz-j7d0g-abcde12345abcde'],
- [:active, 'zzzzz-tpzed-borkd'],
- [:system_user, 'zzzzz-tpzed-abcde12345abcde'],
- [:anonymous, 'zzzzz-tpzed-abcde12345abcde'],
- ].each do |fixture, new_uuid|
- test "disallow update_uuid #{fixture} -> #{new_uuid}" do
- u = users(fixture)
- orig_uuid = u.uuid
- act_as_system_user do
- assert_raises do
- u.update_uuid(new_uuid: new_uuid)
- end
- end
- # "Successfully aborted orig->new" outcome looks the same as
- # "successfully updated new->orig".
- assert_update_success(old_uuid: new_uuid,
- new_uuid: orig_uuid,
- expect_owned_objects: fixture == :active)
- end
- end
-
- [:active, :spectator, :admin].each do |target|
- test "update_uuid on #{target} as non-admin user" do
- act_as_user users(:active) do
- assert_raises(ArvadosModel::PermissionDeniedError) do
- users(target).update_uuid(new_uuid: 'zzzzz-tpzed-abcde12345abcde')
- end
- end
- end
- end
-
- test "update_uuid to existing uuid" do
- u = users(:active)
- orig_uuid = u.uuid
- new_uuid = users(:admin).uuid
- act_as_system_user do
- assert_raises do
- u.update_uuid(new_uuid: new_uuid)
- end
- end
- u.reload
- assert_equal u.uuid, orig_uuid
- assert_not_empty Collection.where(owner_uuid: orig_uuid)
- assert_not_empty Group.where(owner_uuid: orig_uuid)
- end
-
- [
- [:active, 'zbbbb-tpzed-abcde12345abcde'],
- [:active, 'zzzzz-tpzed-abcde12345abcde'],
- [:admin, 'zbbbb-tpzed-abcde12345abcde'],
- [:admin, 'zzzzz-tpzed-abcde12345abcde'],
- ].each do |fixture, new_uuid|
- test "update_uuid #{fixture} to unused uuid #{new_uuid}" do
- u = users(fixture)
- orig_uuid = u.uuid
- act_as_system_user do
- u.update_uuid(new_uuid: new_uuid)
- end
- assert_update_success(old_uuid: orig_uuid,
- new_uuid: new_uuid,
- expect_owned_objects: fixture == :active)
- end
- end
-
def assert_update_success(old_uuid:, new_uuid:, expect_owned_objects: true)
[[User, :uuid],
[Link, :head_uuid],
Documentation=https://doc.arvados.org/
After=network.target
-# systemd==229 (ubuntu:xenial) obeys StartLimitInterval in the [Unit] section
-StartLimitInterval=0
-
# systemd>=230 (debian:9) obeys StartLimitIntervalSec in the [Unit] section
StartLimitIntervalSec=0
var statusText string
var apiToken string
var repoName string
- var validApiToken bool
+ var validAPIToken bool
w := httpserver.WrapResponseWriter(wOrig)
// If the given password is a valid token, log the first 10 characters of the token.
// Otherwise: log the string <invalid> if a password is given, else an empty string.
passwordToLog := ""
- if !validApiToken {
+ if !validAPIToken {
if len(apiToken) > 0 {
passwordToLog = "<invalid>"
}
statusCode, statusText = http.StatusInternalServerError, err.Error()
return
}
- validApiToken = true
+ validAPIToken = true
if repoUUID == "" {
statusCode, statusText = http.StatusNotFound, "not found"
return
cluster *arvados.Cluster
}
-func (s *AuthHandlerSuite) SetUpSuite(c *check.C) {
- arvadostest.StartAPI()
-}
-
-func (s *AuthHandlerSuite) TearDownSuite(c *check.C) {
- arvadostest.StopAPI()
-}
-
func (s *AuthHandlerSuite) SetUpTest(c *check.C) {
arvadostest.ResetEnv()
repoRoot, err := filepath.Abs("../api/tmp/git/test")
}
func (s *GitoliteSuite) TestFetch(c *check.C) {
- err := s.RunGit(c, activeToken, "fetch", "active/foo.git")
+ err := s.RunGit(c, activeToken, "fetch", "active/foo.git", "refs/heads/main")
c.Check(err, check.Equals, nil)
}
}
func (s *GitoliteSuite) TestPush(c *check.C) {
- err := s.RunGit(c, activeToken, "push", "active/foo.git", "master:gitolite-push")
+ err := s.RunGit(c, activeToken, "push", "active/foo.git", "main:gitolite-push")
c.Check(err, check.Equals, nil)
// Check that the commit hash appears in the gitolite log, as
}
func (s *GitoliteSuite) TestPushUnwritable(c *check.C) {
- err := s.RunGit(c, spectatorToken, "push", "active/foo.git", "master:gitolite-push-fail")
+ err := s.RunGit(c, spectatorToken, "push", "active/foo.git", "main:gitolite-push-fail")
c.Check(err, check.ErrorMatches, `.*HTTP (code = )?403.*`)
}
cluster *arvados.Cluster
}
-func (s *IntegrationSuite) SetUpSuite(c *check.C) {
- arvadostest.StartAPI()
-}
-
-func (s *IntegrationSuite) TearDownSuite(c *check.C) {
- arvadostest.StopAPI()
-}
-
func (s *IntegrationSuite) SetUpTest(c *check.C) {
arvadostest.ResetEnv()
c.Assert(err, check.Equals, nil)
_, err = exec.Command("git", "init", "--bare", s.tmpRepoRoot+"/zzzzz-s0uqq-382brsig8rp3666.git").Output()
c.Assert(err, check.Equals, nil)
+ // We need git 2.28 to specify the initial branch with -b; Buster only has 2.20, so we do it in two steps.
_, err = exec.Command("git", "init", s.tmpWorkdir).Output()
c.Assert(err, check.Equals, nil)
+ _, err = exec.Command("sh", "-c", "cd "+s.tmpWorkdir+" && git checkout -b main").Output()
+ c.Assert(err, check.Equals, nil)
_, err = exec.Command("sh", "-c", "cd "+s.tmpWorkdir+" && echo initial >initial && git add initial && git -c user.name=Initial -c user.email=Initial commit -am 'foo: initial commit'").CombinedOutput()
c.Assert(err, check.Equals, nil)
- _, err = exec.Command("sh", "-c", "cd "+s.tmpWorkdir+" && git push "+s.tmpRepoRoot+"/zzzzz-s0uqq-382brsig8rp3666.git master:master").CombinedOutput()
+ _, err = exec.Command("sh", "-c", "cd "+s.tmpWorkdir+" && git push "+s.tmpRepoRoot+"/zzzzz-s0uqq-382brsig8rp3666.git main:main").CombinedOutput()
c.Assert(err, check.Equals, nil)
_, err = exec.Command("sh", "-c", "cd "+s.tmpWorkdir+" && echo work >work && git add work && git -c user.name=Foo -c user.email=Foo commit -am 'workdir: test'").CombinedOutput()
c.Assert(err, check.Equals, nil)
"fmt"
"os"
+ "git.arvados.org/arvados.git/lib/cmd"
"git.arvados.org/arvados.git/lib/config"
"github.com/coreos/go-systemd/daemon"
"github.com/ghodss/yaml"
TimestampFormat: "2006-01-02T15:04:05.000000000Z07:00",
})
- flags := flag.NewFlagSet(os.Args[0], flag.ExitOnError)
+ flags := flag.NewFlagSet(os.Args[0], flag.ContinueOnError)
loader := config.NewLoader(os.Stdin, logger)
loader.SetupFlags(flags)
getVersion := flags.Bool("version", false, "print version information and exit.")
args := loader.MungeLegacyConfigArgs(logger, os.Args[1:], "-legacy-git-httpd-config")
- flags.Parse(args)
-
- if *getVersion {
+ if ok, code := cmd.ParseFlags(flags, os.Args[0], args, "", os.Stderr); !ok {
+ os.Exit(code)
+ } else if *getVersion {
fmt.Printf("arv-git-httpd %s\n", version)
return
}
func (s *GitSuite) TestPathVariants(c *check.C) {
s.makeArvadosRepo(c)
for _, repo := range []string{"active/foo.git", "active/foo/.git", "arvados.git", "arvados/.git"} {
- err := s.RunGit(c, spectatorToken, "fetch", repo)
+ err := s.RunGit(c, spectatorToken, "fetch", repo, "refs/heads/main")
c.Assert(err, check.Equals, nil)
}
}
func (s *GitSuite) TestReadonly(c *check.C) {
- err := s.RunGit(c, spectatorToken, "fetch", "active/foo.git")
+ err := s.RunGit(c, spectatorToken, "fetch", "active/foo.git", "refs/heads/main")
c.Assert(err, check.Equals, nil)
- err = s.RunGit(c, spectatorToken, "push", "active/foo.git", "master:newbranchfail")
+ err = s.RunGit(c, spectatorToken, "push", "active/foo.git", "main:newbranchfail")
c.Assert(err, check.ErrorMatches, `.*HTTP (code = )?403.*`)
_, err = os.Stat(s.tmpRepoRoot + "/zzzzz-s0uqq-382brsig8rp3666.git/refs/heads/newbranchfail")
c.Assert(err, check.FitsTypeOf, &os.PathError{})
}
func (s *GitSuite) TestReadwrite(c *check.C) {
- err := s.RunGit(c, activeToken, "fetch", "active/foo.git")
+ err := s.RunGit(c, activeToken, "fetch", "active/foo.git", "refs/heads/main")
c.Assert(err, check.Equals, nil)
- err = s.RunGit(c, activeToken, "push", "active/foo.git", "master:newbranch")
+ err = s.RunGit(c, activeToken, "push", "active/foo.git", "main:newbranch")
c.Assert(err, check.Equals, nil)
_, err = os.Stat(s.tmpRepoRoot + "/zzzzz-s0uqq-382brsig8rp3666.git/refs/heads/newbranch")
c.Assert(err, check.Equals, nil)
}
func (s *GitSuite) TestNonexistent(c *check.C) {
- err := s.RunGit(c, spectatorToken, "fetch", "thisrepodoesnotexist.git")
+ err := s.RunGit(c, spectatorToken, "fetch", "thisrepodoesnotexist.git", "refs/heads/main")
c.Assert(err, check.ErrorMatches, `.* not found.*`)
}
func (s *GitSuite) TestMissingGitdirReadableRepository(c *check.C) {
- err := s.RunGit(c, activeToken, "fetch", "active/foo2.git")
+ err := s.RunGit(c, activeToken, "fetch", "active/foo2.git", "refs/heads/main")
c.Assert(err, check.ErrorMatches, `.* not found.*`)
}
func (s *GitSuite) TestNoPermission(c *check.C) {
for _, repo := range []string{"active/foo.git", "active/foo/.git"} {
- err := s.RunGit(c, anonymousToken, "fetch", repo)
+ err := s.RunGit(c, anonymousToken, "fetch", repo, "refs/heads/main")
c.Assert(err, check.ErrorMatches, `.* not found.*`)
}
}
func (s *GitSuite) TestExpiredToken(c *check.C) {
for _, repo := range []string{"active/foo.git", "active/foo/.git"} {
- err := s.RunGit(c, expiredToken, "fetch", repo)
+ err := s.RunGit(c, expiredToken, "fetch", repo, "refs/heads/main")
c.Assert(err, check.ErrorMatches, `.* (500 while accessing|requested URL returned error: 500).*`)
}
}
func (s *GitSuite) TestInvalidToken(c *check.C) {
for _, repo := range []string{"active/foo.git", "active/foo/.git"} {
- err := s.RunGit(c, "s3cr3tp@ssw0rd", "fetch", repo)
+ err := s.RunGit(c, "s3cr3tp@ssw0rd", "fetch", repo, "refs/heads/main")
c.Assert(err, check.ErrorMatches, `.* requested URL returned error.*`)
}
}
func (s *GitSuite) TestShortToken(c *check.C) {
for _, repo := range []string{"active/foo.git", "active/foo/.git"} {
- err := s.RunGit(c, "s3cr3t", "fetch", repo)
+ err := s.RunGit(c, "s3cr3t", "fetch", repo, "refs/heads/main")
c.Assert(err, check.ErrorMatches, `.* (500 while accessing|requested URL returned error: 500).*`)
}
}
func (s *GitSuite) TestShortTokenBadReq(c *check.C) {
for _, repo := range []string{"bogus"} {
- err := s.RunGit(c, "s3cr3t", "fetch", repo)
+ err := s.RunGit(c, "s3cr3t", "fetch", repo, "refs/heads/main")
c.Assert(err, check.ErrorMatches, `.*not found.*`)
}
}
msg, err := exec.Command("git", "init", "--bare", s.tmpRepoRoot+"/zzzzz-s0uqq-arvadosrepo0123.git").CombinedOutput()
c.Log(string(msg))
c.Assert(err, check.Equals, nil)
- msg, err = exec.Command("git", "--git-dir", s.tmpRepoRoot+"/zzzzz-s0uqq-arvadosrepo0123.git", "fetch", "../../.git", "HEAD:master").CombinedOutput()
+ msg, err = exec.Command("git", "--git-dir", s.tmpRepoRoot+"/zzzzz-s0uqq-arvadosrepo0123.git", "fetch", "../../.git", "HEAD:main").CombinedOutput()
c.Log(string(msg))
c.Assert(err, check.Equals, nil)
}
"syscall"
"time"
+ "git.arvados.org/arvados.git/lib/cmd"
+ "git.arvados.org/arvados.git/lib/config"
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/arvadosclient"
"git.arvados.org/arvados.git/sdk/go/dispatch"
var version = "dev"
-func main() {
- err := doMain()
- if err != nil {
- logrus.Fatalf("%q", err)
- }
-}
-
var (
runningCmds map[string]*exec.Cmd
runningCmdsMutex sync.Mutex
crunchRunCommand *string
)
-func doMain() error {
- logger := logrus.StandardLogger()
+func main() {
+ baseLogger := logrus.StandardLogger()
if os.Getenv("DEBUG") != "" {
- logger.SetLevel(logrus.DebugLevel)
+ baseLogger.SetLevel(logrus.DebugLevel)
}
- logger.Formatter = &logrus.JSONFormatter{
+ baseLogger.Formatter = &logrus.JSONFormatter{
TimestampFormat: "2006-01-02T15:04:05.000000000Z07:00",
}
false,
"Print version information and exit.")
- // Parse args; omit the first arg which is the command name
- flags.Parse(os.Args[1:])
+ if ok, code := cmd.ParseFlags(flags, os.Args[0], os.Args[1:], "", os.Stderr); !ok {
+ os.Exit(code)
+ }
// Print version information if requested
if *getVersion {
fmt.Printf("crunch-dispatch-local %s\n", version)
- return nil
+ return
+ }
+
+ loader := config.NewLoader(nil, baseLogger)
+ cfg, err := loader.Load()
+ if err != nil {
+ fmt.Fprintf(os.Stderr, "error loading config: %s\n", err)
+ os.Exit(1)
+ }
+ cluster, err := cfg.GetCluster("")
+ if err != nil {
+ fmt.Fprintf(os.Stderr, "config error: %s\n", err)
+ os.Exit(1)
}
+ logger := baseLogger.WithField("ClusterID", cluster.ClusterID)
logger.Printf("crunch-dispatch-local %s started", version)
runningCmds = make(map[string]*exec.Cmd)
+ var client arvados.Client
+ client.APIHost = cluster.Services.Controller.ExternalURL.Host
+ client.AuthToken = cluster.SystemRootToken
+ client.Insecure = cluster.TLS.Insecure
+
+ if client.APIHost != "" || client.AuthToken != "" {
+ // Copy real configs into env vars so [a]
+ // MakeArvadosClient() uses them, and [b] they get
+ // propagated to crunch-run child processes.
+ os.Setenv("ARVADOS_API_HOST", client.APIHost)
+ os.Setenv("ARVADOS_API_TOKEN", client.AuthToken)
+ os.Setenv("ARVADOS_API_HOST_INSECURE", "")
+ if client.Insecure {
+ os.Setenv("ARVADOS_API_HOST_INSECURE", "1")
+ }
+ os.Setenv("ARVADOS_EXTERNAL_CLIENT", "")
+ } else {
+ logger.Warnf("Client credentials missing from config, so falling back on environment variables (deprecated).")
+ }
+
arv, err := arvadosclient.MakeArvadosClient()
if err != nil {
logger.Errorf("error making Arvados client: %v", err)
- return err
+ os.Exit(1)
}
arv.Retries = 25
dispatcher := dispatch.Dispatcher{
Logger: logger,
Arv: arv,
- RunContainer: (&LocalRun{startFunc, make(chan bool, 8), ctx}).run,
+ RunContainer: (&LocalRun{startFunc, make(chan bool, 8), ctx, cluster}).run,
PollPeriod: time.Duration(*pollInterval) * time.Second,
}
err = dispatcher.Run(ctx)
if err != nil {
- return err
+ logger.Error(err)
+ return
}
c := make(chan os.Signal, 1)
// Wait for all running crunch jobs to complete / terminate
waitGroup.Wait()
-
- return nil
}
func startFunc(container arvados.Container, cmd *exec.Cmd) error {
startCmd func(container arvados.Container, cmd *exec.Cmd) error
concurrencyLimit chan bool
ctx context.Context
+ cluster *arvados.Cluster
}
// Run a container.
// crunch-run terminates, mark the container as Cancelled.
func (lr *LocalRun) run(dispatcher *dispatch.Dispatcher,
container arvados.Container,
- status <-chan arvados.Container) {
+ status <-chan arvados.Container) error {
uuid := container.UUID
case lr.concurrencyLimit <- true:
break
case <-lr.ctx.Done():
- return
+ return lr.ctx.Err()
}
defer func() { <-lr.concurrencyLimit }()
waitGroup.Add(1)
defer waitGroup.Done()
- cmd := exec.Command(*crunchRunCommand, uuid)
+ cmd := exec.Command(*crunchRunCommand, "--runtime-engine="+lr.cluster.Containers.RuntimeEngine, uuid)
cmd.Stdin = nil
cmd.Stderr = os.Stderr
cmd.Stdout = os.Stderr
}
dispatcher.Logger.Printf("finalized container %v", uuid)
+ return nil
}
Documentation=https://doc.arvados.org/
After=network.target
-# systemd==229 (ubuntu:xenial) obeys StartLimitInterval in the [Unit] section
-StartLimitInterval=0
-
# systemd>=230 (debian:9) obeys StartLimitIntervalSec in the [Unit] section
StartLimitIntervalSec=0
func (s *TestSuite) SetUpSuite(c *C) {
initialArgs = os.Args
- arvadostest.StartAPI()
runningCmds = make(map[string]*exec.Cmd)
logrus.SetFormatter(&logrus.TextFormatter{DisableColors: true})
}
-func (s *TestSuite) TearDownSuite(c *C) {
- arvadostest.StopAPI()
-}
-
func (s *TestSuite) SetUpTest(c *C) {
args := []string{"crunch-dispatch-local"}
os.Args = args
return cmd.Start()
}
- dispatcher.RunContainer = func(d *dispatch.Dispatcher, c arvados.Container, s <-chan arvados.Container) {
- (&LocalRun{startCmd, make(chan bool, 8), ctx}).run(d, c, s)
- cancel()
+ cl := arvados.Cluster{Containers: arvados.ContainersConfig{RuntimeEngine: "docker"}}
+
+ dispatcher.RunContainer = func(d *dispatch.Dispatcher, c arvados.Container, s <-chan arvados.Container) error {
+ defer cancel()
+ return (&LocalRun{startCmd, make(chan bool, 8), ctx, &cl}).run(d, c, s)
}
err = dispatcher.Run(ctx)
return cmd.Start()
}
- dispatcher.RunContainer = func(d *dispatch.Dispatcher, c arvados.Container, s <-chan arvados.Container) {
- (&LocalRun{startCmd, make(chan bool, 8), ctx}).run(d, c, s)
- cancel()
+ cl := arvados.Cluster{Containers: arvados.ContainersConfig{RuntimeEngine: "docker"}}
+
+ dispatcher.RunContainer = func(d *dispatch.Dispatcher, c arvados.Container, s <-chan arvados.Container) error {
+ defer cancel()
+ return (&LocalRun{startCmd, make(chan bool, 8), ctx, &cl}).run(d, c, s)
}
re := regexp.MustCompile(`(?ms).*` + expected + `.*`)
// Dispatcher service for Crunch that submits containers to the slurm queue.
import (
- "bytes"
"context"
"flag"
"fmt"
"strings"
"time"
+ "git.arvados.org/arvados.git/lib/cmd"
"git.arvados.org/arvados.git/lib/config"
"git.arvados.org/arvados.git/lib/dispatchcloud"
"git.arvados.org/arvados.git/sdk/go/arvados"
if disp.logger == nil {
disp.logger = logrus.StandardLogger()
}
- flags := flag.NewFlagSet(prog, flag.ExitOnError)
+ flags := flag.NewFlagSet(prog, flag.ContinueOnError)
flags.Usage = func() { usage(flags) }
loader := config.NewLoader(nil, disp.logger)
false,
"Print version information and exit.")
- args = loader.MungeLegacyConfigArgs(logrus.StandardLogger(), args, "-legacy-crunch-dispatch-slurm-config")
-
- // Parse args; omit the first arg which is the command name
- err := flags.Parse(args)
-
- if err == flag.ErrHelp {
- return nil
+ args = loader.MungeLegacyConfigArgs(disp.logger, args, "-legacy-crunch-dispatch-slurm-config")
+ if ok, code := cmd.ParseFlags(flags, prog, args, "", os.Stderr); !ok {
+ os.Exit(code)
}
// Print version information if requested
return fmt.Errorf("config error: %s", err)
}
+ disp.logger = disp.logger.WithField("ClusterID", disp.cluster.ClusterID)
+
disp.Client.APIHost = disp.cluster.Services.Controller.ExternalURL.Host
disp.Client.AuthToken = disp.cluster.SystemRootToken
disp.Client.Insecure = disp.cluster.TLS.Insecure
// append() here avoids modifying crunchRunCommand's
// underlying array, which is shared with other goroutines.
crArgs := append([]string(nil), crunchRunCommand...)
+ crArgs = append(crArgs, "--runtime-engine="+disp.cluster.Containers.RuntimeEngine)
crArgs = append(crArgs, container.UUID)
crScript := strings.NewReader(execScript(crArgs))
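The `append([]string(nil), crunchRunCommand...)` idiom above copies the slice before appending per-container arguments: appending directly to a shared slice that has spare capacity would let concurrent goroutines overwrite each other's arguments through the shared backing array. A minimal standalone sketch of the difference (the slice contents here are illustrative, not the dispatcher's real arguments):

```go
package main

import "fmt"

// demo shows why the dispatcher copies crunchRunCommand before appending:
// two appends onto the same shared slice reuse one backing array, while an
// append onto a fresh copy does not.
func demo() (aliased1, copied, aliased2 string) {
	shared := make([]string, 1, 4) // spare capacity, like a reused flag value
	shared[0] = "crunch-run"

	a := append(shared, "uuid-A") // writes into shared's backing array
	b := append(append([]string(nil), shared...), "uuid-B") // safe: fresh backing array
	c := append(shared, "uuid-C") // reuses the same backing array, overwriting a[1]

	return a[1], b[1], c[1]
}

func main() {
	fmt.Println(demo()) // uuid-C uuid-B uuid-C
}
```

The copied slice keeps "uuid-B" while both appends onto the shared slice end up seeing the last write, which is exactly the race the comment in the patch is guarding against.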
// already in the queue). Cancel the slurm job if the container's
// priority changes to zero or its state indicates it's no longer
// running.
-func (disp *Dispatcher) runContainer(_ *dispatch.Dispatcher, ctr arvados.Container, status <-chan arvados.Container) {
+func (disp *Dispatcher) runContainer(_ *dispatch.Dispatcher, ctr arvados.Container, status <-chan arvados.Container) error {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
log.Printf("Submitting container %s to slurm", ctr.UUID)
cmd := []string{disp.cluster.Containers.CrunchRunCommand}
cmd = append(cmd, disp.cluster.Containers.CrunchRunArgumentsList...)
- if err := disp.submit(ctr, cmd); err != nil {
- var text string
- switch err := err.(type) {
- case dispatchcloud.ConstraintsNotSatisfiableError:
- var logBuf bytes.Buffer
- fmt.Fprintf(&logBuf, "cannot run container %s: %s\n", ctr.UUID, err)
- if len(err.AvailableTypes) == 0 {
- fmt.Fprint(&logBuf, "No instance types are configured.\n")
- } else {
- fmt.Fprint(&logBuf, "Available instance types:\n")
- for _, t := range err.AvailableTypes {
- fmt.Fprintf(&logBuf,
- "Type %q: %d VCPUs, %d RAM, %d Scratch, %f Price\n",
- t.Name, t.VCPUs, t.RAM, t.Scratch, t.Price,
- )
- }
- }
- text = logBuf.String()
- disp.UpdateState(ctr.UUID, dispatch.Cancelled)
- default:
- text = fmt.Sprintf("Error submitting container %s to slurm: %s", ctr.UUID, err)
- }
- log.Print(text)
-
- lr := arvadosclient.Dict{"log": arvadosclient.Dict{
- "object_uuid": ctr.UUID,
- "event_type": "dispatch",
- "properties": map[string]string{"text": text}}}
- disp.Arv.Create("logs", lr, nil)
-
- disp.Unlock(ctr.UUID)
- return
+ err := disp.submit(ctr, cmd)
+ if err != nil {
+ return err
}
}
case dispatch.Locked:
disp.Unlock(ctr.UUID)
}
- return
+ return nil
case updated, ok := <-status:
if !ok {
log.Printf("container %s is done: cancel slurm job", ctr.UUID)
Documentation=https://doc.arvados.org/
After=network.target
-# systemd==229 (ubuntu:xenial) obeys StartLimitInterval in the [Unit] section
-StartLimitInterval=0
-
# systemd>=230 (debian:9) obeys StartLimitIntervalSec in the [Unit] section
StartLimitIntervalSec=0
}
func (s *IntegrationSuite) SetUpTest(c *C) {
- arvadostest.StartAPI()
+ arvadostest.ResetEnv()
+ arvadostest.ResetDB(c)
os.Setenv("ARVADOS_API_TOKEN", arvadostest.Dispatch1Token)
s.disp = Dispatcher{}
s.disp.cluster = &arvados.Cluster{}
func (s *IntegrationSuite) TearDownTest(c *C) {
arvadostest.ResetEnv()
- arvadostest.StopAPI()
+ arvadostest.ResetDB(c)
}
type slurmFake struct {
func (s *IntegrationSuite) integrationTest(c *C,
expectBatch [][]string,
- runContainer func(*dispatch.Dispatcher, arvados.Container)) arvados.Container {
+ runContainer func(*dispatch.Dispatcher, arvados.Container)) (arvados.Container, error) {
arvadostest.ResetEnv()
arv, err := arvadosclient.MakeArvadosClient()
ctx, cancel := context.WithCancel(context.Background())
doneRun := make(chan struct{})
+ doneDispatch := make(chan error)
s.disp.Dispatcher = &dispatch.Dispatcher{
Arv: arv,
PollPeriod: time.Second,
- RunContainer: func(disp *dispatch.Dispatcher, ctr arvados.Container, status <-chan arvados.Container) {
+ RunContainer: func(disp *dispatch.Dispatcher, ctr arvados.Container, status <-chan arvados.Container) error {
go func() {
runContainer(disp, ctr)
s.slurm.queue = ""
doneRun <- struct{}{}
}()
- s.disp.runContainer(disp, ctr, status)
+ err := s.disp.runContainer(disp, ctr, status)
cancel()
+ doneDispatch <- err
+ return nil
},
}
err = s.disp.Dispatcher.Run(ctx)
<-doneRun
c.Assert(err, Equals, context.Canceled)
+ errDispatch := <-doneDispatch
s.disp.sqCheck.Stop()
var container arvados.Container
err = arv.Get("containers", "zzzzz-dz642-queuedcontainer", nil, &container)
c.Check(err, IsNil)
- return container
+ return container, errDispatch
}
func (s *IntegrationSuite) TestNormal(c *C) {
s.slurm = slurmFake{queue: "zzzzz-dz642-queuedcontainer 10000 100 PENDING Resources\n"}
- container := s.integrationTest(c,
+ container, _ := s.integrationTest(c,
nil,
func(dispatcher *dispatch.Dispatcher, container arvados.Container) {
dispatcher.UpdateState(container.UUID, dispatch.Running)
s.slurm = slurmFake{queue: "zzzzz-dz642-queuedcontainer 10000 100 PENDING Resources\n"}
readyToCancel := make(chan bool)
s.slurm.onCancel = func() { <-readyToCancel }
- container := s.integrationTest(c,
+ container, _ := s.integrationTest(c,
nil,
func(dispatcher *dispatch.Dispatcher, container arvados.Container) {
dispatcher.UpdateState(container.UUID, dispatch.Running)
}
func (s *IntegrationSuite) TestMissingFromSqueue(c *C) {
- container := s.integrationTest(c,
+ container, _ := s.integrationTest(c,
[][]string{{
fmt.Sprintf("--job-name=%s", "zzzzz-dz642-queuedcontainer"),
fmt.Sprintf("--nice=%d", 10000),
func (s *IntegrationSuite) TestSbatchFail(c *C) {
s.slurm = slurmFake{errBatch: errors.New("something terrible happened")}
- container := s.integrationTest(c,
+ container, err := s.integrationTest(c,
[][]string{{"--job-name=zzzzz-dz642-queuedcontainer", "--nice=10000", "--no-requeue", "--mem=11445", "--cpus-per-task=4", "--tmp=45777"}},
func(dispatcher *dispatch.Dispatcher, container arvados.Container) {
dispatcher.UpdateState(container.UUID, dispatch.Running)
dispatcher.UpdateState(container.UUID, dispatch.Complete)
})
c.Check(container.State, Equals, arvados.ContainerStateComplete)
-
- arv, err := arvadosclient.MakeArvadosClient()
- c.Assert(err, IsNil)
-
- var ll arvados.LogList
- err = arv.List("logs", arvadosclient.Dict{"filters": [][]string{
- {"object_uuid", "=", container.UUID},
- {"event_type", "=", "dispatch"},
- }}, &ll)
- c.Assert(err, IsNil)
- c.Assert(len(ll.Items), Equals, 1)
+ c.Check(err, ErrorMatches, `something terrible happened`)
}
type StubbedSuite struct {
dispatcher := dispatch.Dispatcher{
Arv: arv,
PollPeriod: time.Second,
- RunContainer: func(disp *dispatch.Dispatcher, ctr arvados.Container, status <-chan arvados.Container) {
+ RunContainer: func(disp *dispatch.Dispatcher, ctr arvados.Container, status <-chan arvados.Container) error {
go func() {
time.Sleep(time.Second)
disp.UpdateState(ctr.UUID, dispatch.Running)
}()
s.disp.runContainer(disp, ctr, status)
cancel()
+ return nil
},
}
import (
"flag"
"fmt"
- "os"
)
func usage(fs *flag.FlagSet) {
- fmt.Fprintf(os.Stderr, `
+ fmt.Fprintf(fs.Output(), `
crunch-dispatch-slurm runs queued Arvados containers by submitting
SLURM batch jobs.
Options:
`)
fs.PrintDefaults()
- fmt.Fprintf(os.Stderr, `
+ fmt.Fprintf(fs.Output(), `
For configuration instructions see https://doc.arvados.org/install/crunch2-slurm/install-dispatch.html
`)
"syscall"
"time"
+ "git.arvados.org/arvados.git/lib/cmd"
"git.arvados.org/arvados.git/lib/crunchstat"
)
Logger: log.New(os.Stderr, "crunchstat: ", 0),
}
- flag.StringVar(&reporter.CgroupRoot, "cgroup-root", "", "Root of cgroup tree")
- flag.StringVar(&reporter.CgroupParent, "cgroup-parent", "", "Name of container parent under cgroup")
- flag.StringVar(&reporter.CIDFile, "cgroup-cid", "", "Path to container id file")
- flag.IntVar(&signalOnDeadPPID, "signal-on-dead-ppid", signalOnDeadPPID, "Signal to send child if crunchstat's parent process disappears (0 to disable)")
- flag.DurationVar(&ppidCheckInterval, "ppid-check-interval", ppidCheckInterval, "Time between checks for parent process disappearance")
- pollMsec := flag.Int64("poll", 1000, "Reporting interval, in milliseconds")
- getVersion := flag.Bool("version", false, "Print version information and exit.")
-
- flag.Parse()
-
- // Print version information if requested
- if *getVersion {
+ flags := flag.NewFlagSet(os.Args[0], flag.ExitOnError)
+ flags.StringVar(&reporter.CgroupRoot, "cgroup-root", "", "Root of cgroup tree")
+ flags.StringVar(&reporter.CgroupParent, "cgroup-parent", "", "Name of container parent under cgroup")
+ flags.StringVar(&reporter.CIDFile, "cgroup-cid", "", "Path to container id file")
+ flags.IntVar(&signalOnDeadPPID, "signal-on-dead-ppid", signalOnDeadPPID, "Signal to send child if crunchstat's parent process disappears (0 to disable)")
+ flags.DurationVar(&ppidCheckInterval, "ppid-check-interval", ppidCheckInterval, "Time between checks for parent process disappearance")
+ pollMsec := flags.Int64("poll", 1000, "Reporting interval, in milliseconds")
+ getVersion := flags.Bool("version", false, "Print version information and exit.")
+
+ if ok, code := cmd.ParseFlags(flags, os.Args[0], os.Args[1:], "program [args ...]", os.Stderr); !ok {
+ os.Exit(code)
+ } else if *getVersion {
fmt.Printf("crunchstat %s\n", version)
return
+ } else if flags.NArg() == 0 {
+ fmt.Fprintf(os.Stderr, "missing required argument: program (try -help)\n")
+ os.Exit(2)
}
reporter.Logger.Printf("crunchstat %s started", version)
reporter.PollPeriod = time.Duration(*pollMsec) * time.Millisecond
reporter.Start()
- err := runCommand(flag.Args(), reporter.Logger)
+ err := runCommand(flags.Args(), reporter.Logger)
reporter.Stop()
if err, ok := err.(*exec.ExitError); ok {
Documentation=https://doc.arvados.org/
After=network.target
-# systemd==229 (ubuntu:xenial) obeys StartLimitInterval in the [Unit] section
-StartLimitInterval=0
-
# systemd>=230 (debian:9) obeys StartLimitIntervalSec in the [Unit] section
StartLimitIntervalSec=0
# SPDX-License-Identifier: Apache-2.0
case "$TARGET" in
- ubuntu1604)
- fpm_depends+=()
- ;;
debian* | ubuntu*)
fpm_depends+=(python3-distutils)
;;
2. Update your package list.
-3. Install the ``python-arvados-fuse`` package.
+3. Install the ``python3-arvados-fuse`` package.
Configuration
-------------
from builtins import str
from builtins import object
import os
-import sys
import llfuse
import errno
import stat
import threading
import arvados
-import pprint
import arvados.events
-import re
-import apiclient
-import json
import logging
import time
-import _strptime
-import calendar
import threading
import itertools
-import ciso8601
import collections
import functools
import arvados.keep
self.add_argument('--read-only', action='store_false', help="Mount will be read only (default)", dest="enable_write", default=False)
self.add_argument('--read-write', action='store_true', help="Mount will be read-write", dest="enable_write", default=False)
+ self.add_argument('--storage-classes', type=str, metavar='CLASSES', help="Specify comma-separated list of storage classes to be used when saving data of new collections", default=None)
self.add_argument('--crunchstat-interval', type=float, help="Write stats to stderr every N seconds (default disabled)", default=0)
usr = self.api.users().current().execute(num_retries=self.args.retries)
now = time.time()
dir_class = None
- dir_args = [llfuse.ROOT_INODE, self.operations.inodes, self.api, self.args.retries]
+ dir_args = [llfuse.ROOT_INODE, self.operations.inodes, self.api, self.args.retries, self.args.enable_write]
mount_readme = False
+ storage_classes = None
+ if self.args.storage_classes is not None:
+ storage_classes = self.args.storage_classes.replace(' ', '').split(',')
+ self.logger.info("Storage classes requested for new collections: {}".format(', '.join(storage_classes)))
+
if self.args.collection is not None:
# Set up the request handler with the collection at the root
# First check that the collection is readable
mount_readme = True
if dir_class is not None:
- ent = dir_class(*dir_args)
+ if dir_class in [TagsDirectory, CollectionDirectory]:
+ ent = dir_class(*dir_args)
+ else:
+ ent = dir_class(*dir_args, storage_classes=storage_classes)
self.operations.inodes.add_entry(ent)
self.listen_for_events = ent.want_event_subscribe()
return
e = self.operations.inodes.add_entry(Directory(
- llfuse.ROOT_INODE, self.operations.inodes, self.api.config))
+ llfuse.ROOT_INODE, self.operations.inodes, self.api.config, self.args.enable_write))
dir_args[0] = e.inode
for name in self.args.mount_by_id:
- self._add_mount(e, name, MagicDirectory(*dir_args, pdh_only=False))
+ self._add_mount(e, name, MagicDirectory(*dir_args, pdh_only=False, storage_classes=storage_classes))
for name in self.args.mount_by_pdh:
self._add_mount(e, name, MagicDirectory(*dir_args, pdh_only=True))
for name in self.args.mount_by_tag:
self._add_mount(e, name, TagsDirectory(*dir_args))
for name in self.args.mount_home:
- self._add_mount(e, name, ProjectDirectory(*dir_args, project_object=usr, poll=True))
+ self._add_mount(e, name, ProjectDirectory(*dir_args, project_object=usr, poll=True, storage_classes=storage_classes))
for name in self.args.mount_shared:
- self._add_mount(e, name, SharedDirectory(*dir_args, exclude=usr, poll=True))
+ self._add_mount(e, name, SharedDirectory(*dir_args, exclude=usr, poll=True, storage_classes=storage_classes))
for name in self.args.mount_tmp:
- self._add_mount(e, name, TmpCollectionDirectory(*dir_args))
+ self._add_mount(e, name, TmpCollectionDirectory(*dir_args, storage_classes=storage_classes))
if mount_readme:
text = self._readme_text(
#
# SPDX-License-Identifier: AGPL-3.0
-from __future__ import absolute_import
-from __future__ import division
-from future.utils import viewitems
-from future.utils import itervalues
-from builtins import dict
import apiclient
import arvados
import errno
and the value referencing a File or Directory object.
"""
- def __init__(self, parent_inode, inodes, apiconfig):
+ def __init__(self, parent_inode, inodes, apiconfig, enable_write):
"""parent_inode is the integer inode number"""
super(Directory, self).__init__()
self.apiconfig = apiconfig
self._entries = {}
self._mtime = time.time()
+ self._enable_write = enable_write
def forward_slash_subst(self):
if not hasattr(self, '_fsns'):
def in_use(self):
if super(Directory, self).in_use():
return True
- for v in itervalues(self._entries):
+ for v in self._entries.values():
if v.in_use():
return True
return False
def has_ref(self, only_children):
if super(Directory, self).has_ref(only_children):
return True
- for v in itervalues(self._entries):
+ for v in self._entries.values():
if v.has_ref(False):
return True
return False
# Find self on the parent in order to invalidate this path.
# Calling the public items() method might trigger a refresh,
# which we definitely don't want, so read the internal dict directly.
- for k,v in viewitems(parent._entries):
+ for k,v in parent._entries.items():
if v is self:
self.inodes.invalidate_entry(parent, k)
break
"""
- def __init__(self, parent_inode, inodes, apiconfig, collection):
- super(CollectionDirectoryBase, self).__init__(parent_inode, inodes, apiconfig)
+ def __init__(self, parent_inode, inodes, apiconfig, enable_write, collection):
+ super(CollectionDirectoryBase, self).__init__(parent_inode, inodes, apiconfig, enable_write)
self.apiconfig = apiconfig
self.collection = collection
item.fuse_entry.dead = False
self._entries[name] = item.fuse_entry
elif isinstance(item, arvados.collection.RichCollectionBase):
- self._entries[name] = self.inodes.add_entry(CollectionDirectoryBase(self.inode, self.inodes, self.apiconfig, item))
+ self._entries[name] = self.inodes.add_entry(CollectionDirectoryBase(self.inode, self.inodes, self.apiconfig, self._enable_write, item))
self._entries[name].populate(mtime)
else:
- self._entries[name] = self.inodes.add_entry(FuseArvadosFile(self.inode, item, mtime))
+ self._entries[name] = self.inodes.add_entry(FuseArvadosFile(self.inode, item, mtime, self._enable_write))
item.fuse_entry = self._entries[name]
def on_event(self, event, collection, name, item):
if collection == self.collection:
name = self.sanitize_filename(name)
- _logger.debug("collection notify %s %s %s %s", event, collection, name, item)
- with llfuse.lock:
- if event == arvados.collection.ADD:
- self.new_entry(name, item, self.mtime())
- elif event == arvados.collection.DEL:
- ent = self._entries[name]
- del self._entries[name]
- self.inodes.invalidate_entry(self, name)
- self.inodes.del_entry(ent)
- elif event == arvados.collection.MOD:
- if hasattr(item, "fuse_entry") and item.fuse_entry is not None:
- self.inodes.invalidate_inode(item.fuse_entry)
- elif name in self._entries:
- self.inodes.invalidate_inode(self._entries[name])
+
+ #
+ # It's possible for another thread to have llfuse.lock and
+ # be waiting on collection.lock. Meanwhile, we released
+ # llfuse.lock earlier in the stack, but are still holding
+ # on to the collection lock, and now we need to re-acquire
+ # llfuse.lock. If we don't release the collection lock,
+ # we'll deadlock where we're holding the collection lock
+ # waiting for llfuse.lock and the other thread is holding
+ # llfuse.lock and waiting for the collection lock.
+ #
+ # The correct locking order here is to take llfuse.lock
+ # first, then the collection lock.
+ #
+ # Since collection.lock is an RLock, it might be locked
+ # multiple times, so we need to release it multiple times,
+ # keep a count, then re-lock it the correct number of
+ # times.
+ #
+ lockcount = 0
+ try:
+ while True:
+ self.collection.lock.release()
+ lockcount += 1
+ except RuntimeError:
+ pass
+
+ try:
+ with llfuse.lock:
+ with self.collection.lock:
+ if event == arvados.collection.ADD:
+ self.new_entry(name, item, self.mtime())
+ elif event == arvados.collection.DEL:
+ ent = self._entries[name]
+ del self._entries[name]
+ self.inodes.invalidate_entry(self, name)
+ self.inodes.del_entry(ent)
+ elif event == arvados.collection.MOD:
+ if hasattr(item, "fuse_entry") and item.fuse_entry is not None:
+ self.inodes.invalidate_inode(item.fuse_entry)
+ elif name in self._entries:
+ self.inodes.invalidate_inode(self._entries[name])
+ finally:
+ while lockcount > 0:
+ self.collection.lock.acquire()
+ lockcount -= 1
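The release-count dance above is hard to follow in diff form. The pattern is: fully release a re-entrant lock (counting how many times the current thread holds it) so that `llfuse.lock` can be taken first, then restore the original hold depth in a `finally` block. A standalone sketch of just that mechanism, with illustrative helper names, relying on `threading.RLock` raising `RuntimeError` on over-release:

```python
import threading

def release_fully(rlock):
    """Release rlock as many times as the current thread holds it.
    Returns the hold count so the caller can restore it later."""
    count = 0
    try:
        while True:
            rlock.release()
            count += 1
    except RuntimeError:
        # raised once this thread no longer holds the lock
        pass
    return count

def reacquire(rlock, count):
    """Re-take rlock count times, restoring the previous hold depth."""
    for _ in range(count):
        rlock.acquire()

lock = threading.RLock()
lock.acquire()
lock.acquire()  # re-entrant: now held twice by this thread
n = release_fully(lock)
print(n)  # 2
reacquire(lock, n)
```

Releasing completely before re-locking in the documented order (llfuse lock first, then collection lock) is what breaks the circular wait described in the comment.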
def populate(self, mtime):
self._mtime = mtime
- self.collection.subscribe(self.on_event)
- for entry, item in viewitems(self.collection):
- self.new_entry(entry, item, self.mtime())
+ with self.collection.lock:
+ self.collection.subscribe(self.on_event)
+ for entry, item in self.collection.items():
+ self.new_entry(entry, item, self.mtime())
def writable(self):
- return self.collection.writable()
+ return self._enable_write and self.collection.writable()
@use_counter
def flush(self):
+ if not self.writable():
+ return
with llfuse.lock_released:
self.collection.root_collection().save()
@use_counter
@check_update
def create(self, name):
+ if not self.writable():
+ raise llfuse.FUSEError(errno.EROFS)
with llfuse.lock_released:
self.collection.open(name, "w").close()
@use_counter
@check_update
def mkdir(self, name):
+ if not self.writable():
+ raise llfuse.FUSEError(errno.EROFS)
with llfuse.lock_released:
self.collection.mkdirs(name)
@use_counter
@check_update
def unlink(self, name):
+ if not self.writable():
+ raise llfuse.FUSEError(errno.EROFS)
with llfuse.lock_released:
self.collection.remove(name)
self.flush()
@use_counter
@check_update
def rmdir(self, name):
+ if not self.writable():
+ raise llfuse.FUSEError(errno.EROFS)
with llfuse.lock_released:
self.collection.remove(name)
self.flush()
@use_counter
@check_update
def rename(self, name_old, name_new, src):
+ if not self.writable():
+ raise llfuse.FUSEError(errno.EROFS)
+
if not isinstance(src, CollectionDirectoryBase):
raise llfuse.FUSEError(errno.EPERM)
class CollectionDirectory(CollectionDirectoryBase):
"""Represents the root of a directory tree representing a collection."""
- def __init__(self, parent_inode, inodes, api, num_retries, collection_record=None, explicit_collection=None):
- super(CollectionDirectory, self).__init__(parent_inode, inodes, api.config, None)
+ def __init__(self, parent_inode, inodes, api, num_retries, enable_write, collection_record=None, explicit_collection=None):
+ super(CollectionDirectory, self).__init__(parent_inode, inodes, api.config, enable_write, None)
self.api = api
self.num_retries = num_retries
self.collection_record_file = None
self._mtime = 0
self._manifest_size = 0
if self.collection_locator:
- self._writable = (uuid_pattern.match(self.collection_locator) is not None)
+ self._writable = (uuid_pattern.match(self.collection_locator) is not None) and enable_write
self._updating_lock = threading.Lock()
def same(self, i):
return i['uuid'] == self.collection_locator or i['portable_data_hash'] == self.collection_locator
def writable(self):
- return self.collection.writable() if self.collection is not None else self._writable
+ return self._enable_write and (self.collection.writable() if self.collection is not None else self._writable)
def want_event_subscribe(self):
return (uuid_pattern.match(self.collection_locator) is not None)
return
_logger.debug("Updating collection %s inode %s to record version %s", self.collection_locator, self.inode, to_record_version)
+ new_collection_record = None
if self.collection is not None:
if self.collection.known_past_version(to_record_version):
_logger.debug("%s already processed %s", self.collection_locator, to_record_version)
new_collection_record["portable_data_hash"] = new_collection_record["uuid"]
if 'manifest_text' not in new_collection_record:
new_collection_record['manifest_text'] = coll_reader.manifest_text()
+ if 'storage_classes_desired' not in new_collection_record:
+ new_collection_record['storage_classes_desired'] = coll_reader.storage_classes_desired()
- if self.collection_record is None or self.collection_record["portable_data_hash"] != new_collection_record.get("portable_data_hash"):
- self.new_collection(new_collection_record, coll_reader)
-
- self._manifest_size = len(coll_reader.manifest_text())
- _logger.debug("%s manifest_size %i", self, self._manifest_size)
# end with llfuse.lock_released, re-acquire lock
+ if (new_collection_record is not None and
+ (self.collection_record is None or
+ self.collection_record["portable_data_hash"] != new_collection_record.get("portable_data_hash"))):
+ self.new_collection(new_collection_record, coll_reader)
+ self._manifest_size = len(coll_reader.manifest_text())
+ _logger.debug("%s manifest_size %i", self, self._manifest_size)
self.fresh()
return True
def save_new(self):
pass
- def __init__(self, parent_inode, inodes, api_client, num_retries):
+ def __init__(self, parent_inode, inodes, api_client, num_retries, enable_write, storage_classes=None):
collection = self.UnsaveableCollection(
api_client=api_client,
keep_client=api_client.keep,
- num_retries=num_retries)
+ num_retries=num_retries,
+ storage_classes_desired=storage_classes)
+ # This is always enable_write=True because it never tries to
+ # save to the backend
super(TmpCollectionDirectory, self).__init__(
- parent_inode, inodes, api_client.config, collection)
+ parent_inode, inodes, api_client.config, True, collection)
self.collection_record_file = None
self.populate(self.mtime())
def on_event(self, *args, **kwargs):
super(TmpCollectionDirectory, self).on_event(*args, **kwargs)
if self.collection_record_file:
- with llfuse.lock:
- self.collection_record_file.invalidate()
- self.inodes.invalidate_inode(self.collection_record_file)
- _logger.debug("%s invalidated collection record", self)
+
+ # See discussion in CollectionDirectoryBase.on_event
+ lockcount = 0
+ try:
+ while True:
+ self.collection.lock.release()
+ lockcount += 1
+ except RuntimeError:
+ pass
+
+ try:
+ with llfuse.lock:
+ with self.collection.lock:
+ self.collection_record_file.invalidate()
+ self.inodes.invalidate_inode(self.collection_record_file)
+ _logger.debug("%s invalidated collection record", self)
+ finally:
+ while lockcount > 0:
+ self.collection.lock.acquire()
+ lockcount -= 1
def collection_record(self):
with llfuse.lock_released:
"uuid": None,
"manifest_text": self.collection.manifest_text(),
"portable_data_hash": self.collection.portable_data_hash(),
+ "storage_classes_desired": self.collection.storage_classes_desired(),
}
def __contains__(self, k):
""".lstrip()
- def __init__(self, parent_inode, inodes, api, num_retries, pdh_only=False):
- super(MagicDirectory, self).__init__(parent_inode, inodes, api.config)
+ def __init__(self, parent_inode, inodes, api, num_retries, enable_write, pdh_only=False, storage_classes=None):
+ super(MagicDirectory, self).__init__(parent_inode, inodes, api.config, enable_write)
self.api = api
self.num_retries = num_retries
self.pdh_only = pdh_only
+ self.storage_classes = storage_classes
def __setattr__(self, name, value):
super(MagicDirectory, self).__setattr__(name, value)
# If we're the root directory, add an identical by_id subdirectory.
if self.inode == llfuse.ROOT_INODE:
self._entries['by_id'] = self.inodes.add_entry(MagicDirectory(
- self.inode, self.inodes, self.api, self.num_retries, self.pdh_only))
+ self.inode, self.inodes, self.api, self.num_retries, self._enable_write,
+ self.pdh_only))
def __contains__(self, k):
if k in self._entries:
if group_uuid_pattern.match(k):
project = self.api.groups().list(
- filters=[['group_class', '=', 'project'], ["uuid", "=", k]]).execute(num_retries=self.num_retries)
+ filters=[['group_class', 'in', ['project','filter']], ["uuid", "=", k]]).execute(num_retries=self.num_retries)
if project[u'items_available'] == 0:
return False
e = self.inodes.add_entry(ProjectDirectory(
- self.inode, self.inodes, self.api, self.num_retries, project[u'items'][0]))
+ self.inode, self.inodes, self.api, self.num_retries, self._enable_write,
+ project[u'items'][0], storage_classes=self.storage_classes))
else:
e = self.inodes.add_entry(CollectionDirectory(
- self.inode, self.inodes, self.api, self.num_retries, k))
+ self.inode, self.inodes, self.api, self.num_retries, self._enable_write, k))
if e.update():
if k not in self._entries:
class TagsDirectory(Directory):
"""A special directory that contains as subdirectories all tags visible to the user."""
- def __init__(self, parent_inode, inodes, api, num_retries, poll_time=60):
- super(TagsDirectory, self).__init__(parent_inode, inodes, api.config)
+ def __init__(self, parent_inode, inodes, api, num_retries, enable_write, poll_time=60):
+ super(TagsDirectory, self).__init__(parent_inode, inodes, api.config, enable_write)
self.api = api
self.num_retries = num_retries
self._poll = True
self.merge(tags['items']+[{"name": n} for n in self._extra],
lambda i: i['name'],
lambda a, i: a.tag == i['name'],
- lambda i: TagDirectory(self.inode, self.inodes, self.api, self.num_retries, i['name'], poll=self._poll, poll_time=self._poll_time))
+ lambda i: TagDirectory(self.inode, self.inodes, self.api, self.num_retries, self._enable_write,
+ i['name'], poll=self._poll, poll_time=self._poll_time))
@use_counter
@check_update
to the user that are tagged with a particular tag.
"""
- def __init__(self, parent_inode, inodes, api, num_retries, tag,
+ def __init__(self, parent_inode, inodes, api, num_retries, enable_write, tag,
poll=False, poll_time=60):
- super(TagDirectory, self).__init__(parent_inode, inodes, api.config)
+ super(TagDirectory, self).__init__(parent_inode, inodes, api.config, enable_write)
self.api = api
self.num_retries = num_retries
self.tag = tag
self.merge(taggedcollections['items'],
lambda i: i['head_uuid'],
lambda a, i: a.collection_locator == i['head_uuid'],
- lambda i: CollectionDirectory(self.inode, self.inodes, self.api, self.num_retries, i['head_uuid']))
+ lambda i: CollectionDirectory(self.inode, self.inodes, self.api, self.num_retries, self._enable_write, i['head_uuid']))
class ProjectDirectory(Directory):
"""A special directory that contains the contents of a project."""
- def __init__(self, parent_inode, inodes, api, num_retries, project_object,
- poll=False, poll_time=60):
- super(ProjectDirectory, self).__init__(parent_inode, inodes, api.config)
+ def __init__(self, parent_inode, inodes, api, num_retries, enable_write, project_object,
+ poll=True, poll_time=3, storage_classes=None):
+ super(ProjectDirectory, self).__init__(parent_inode, inodes, api.config, enable_write)
self.api = api
self.num_retries = num_retries
self.project_object = project_object
self._updating_lock = threading.Lock()
self._current_user = None
self._full_listing = False
+ self.storage_classes = storage_classes
def want_event_subscribe(self):
return True
def createDirectory(self, i):
if collection_uuid_pattern.match(i['uuid']):
- return CollectionDirectory(self.inode, self.inodes, self.api, self.num_retries, i)
+ return CollectionDirectory(self.inode, self.inodes, self.api, self.num_retries, self._enable_write, i)
elif group_uuid_pattern.match(i['uuid']):
- return ProjectDirectory(self.inode, self.inodes, self.api, self.num_retries, i, self._poll, self._poll_time)
+ return ProjectDirectory(self.inode, self.inodes, self.api, self.num_retries, self._enable_write,
+ i, self._poll, self._poll_time, self.storage_classes)
elif link_uuid_pattern.match(i['uuid']):
if i['head_kind'] == 'arvados#collection' or portable_data_hash_pattern.match(i['head_uuid']):
- return CollectionDirectory(self.inode, self.inodes, self.api, self.num_retries, i['head_uuid'])
+ return CollectionDirectory(self.inode, self.inodes, self.api, self.num_retries, self._enable_write, i['head_uuid'])
else:
return None
elif uuid_pattern.match(i['uuid']):
elif user_uuid_pattern.match(self.project_uuid):
self.project_object = self.api.users().get(
uuid=self.project_uuid).execute(num_retries=self.num_retries)
-
- contents = arvados.util.list_all(self.api.groups().list,
- self.num_retries,
- filters=[["owner_uuid", "=", self.project_uuid],
- ["group_class", "=", "project"]])
- contents.extend(arvados.util.list_all(self.api.collections().list,
- self.num_retries,
- filters=[["owner_uuid", "=", self.project_uuid]]))
+ # do this in 2 steps until #17424 is fixed
+ contents = list(arvados.util.keyset_list_all(self.api.groups().contents,
+ order_key="uuid",
+ num_retries=self.num_retries,
+ uuid=self.project_uuid,
+ filters=[["uuid", "is_a", "arvados#group"],
+ ["groups.group_class", "in", ["project","filter"]]]))
+ contents.extend(arvados.util.keyset_list_all(self.api.groups().contents,
+ order_key="uuid",
+ num_retries=self.num_retries,
+ uuid=self.project_uuid,
+ filters=[["uuid", "is_a", "arvados#collection"]]))
# end with llfuse.lock_released, re-acquire lock
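The switch from `list_all` to `keyset_list_all` above replaces offset-based paging with keyset pagination. A generic sketch of the idea, assuming a hypothetical `fetch` callable and unique `order_key` values (the real `arvados.util.keyset_list_all` also handles ties):

```python
def keyset_list_all(fetch, order_key="uuid", page_size=1000, filters=None):
    # Keyset pagination: order by a unique key and filter on
    # "> last seen value" instead of using offsets, so rows added or
    # removed mid-scan cannot shift page boundaries.
    filters = list(filters or [])
    last = None
    while True:
        page_filters = filters + ([[order_key, ">", last]] if last is not None else [])
        items = fetch(filters=page_filters, order=order_key, limit=page_size)
        if not items:
            return
        for item in items:
            yield item
        last = items[-1][order_key]
```

With offset paging, a collection created between two page fetches shifts every later row, so items can be skipped or returned twice; the keyset filter makes each page independent of concurrent writes.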
else:
namefilter = ["name", "in", [k, k2]]
contents = self.api.groups().list(filters=[["owner_uuid", "=", self.project_uuid],
- ["group_class", "=", "project"],
+ ["group_class", "in", ["project","filter"]],
namefilter],
limit=2).execute(num_retries=self.num_retries)["items"]
if not contents:
@use_counter
@check_update
def writable(self):
+ if not self._enable_write:
+ return False
with llfuse.lock_released:
if not self._current_user:
self._current_user = self.api.users().current().execute(num_retries=self.num_retries)
@use_counter
@check_update
def mkdir(self, name):
+ if not self.writable():
+ raise llfuse.FUSEError(errno.EROFS)
+
try:
with llfuse.lock_released:
- self.api.collections().create(body={"owner_uuid": self.project_uuid,
- "name": name,
- "manifest_text": ""}).execute(num_retries=self.num_retries)
+            c = {
+                "owner_uuid": self.project_uuid,
+                "name": name,
+                "manifest_text": "",
+            }
+            if self.storage_classes is not None:
+                c["storage_classes_desired"] = self.storage_classes
+            self.api.collections().create(body=c).execute(num_retries=self.num_retries)
self.invalidate()
except apiclient_errors.Error as error:
_logger.error(error)
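The `mkdir` path above only includes `storage_classes_desired` in the create request when the mount was started with storage classes configured, so the server's default applies otherwise. That body construction, isolated as a hypothetical helper:

```python
def collection_body(owner_uuid, name, storage_classes=None):
    # Build the body for an arvados#collection create request.
    # storage_classes_desired is added only when explicitly requested,
    # leaving the cluster default in effect otherwise.
    body = {
        "owner_uuid": owner_uuid,
        "name": name,
        "manifest_text": "",
    }
    if storage_classes is not None:
        body["storage_classes_desired"] = storage_classes
    return body
```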
@use_counter
@check_update
def rmdir(self, name):
+ if not self.writable():
+ raise llfuse.FUSEError(errno.EROFS)
+
if name not in self:
raise llfuse.FUSEError(errno.ENOENT)
if not isinstance(self[name], CollectionDirectory):
@use_counter
@check_update
def rename(self, name_old, name_new, src):
+ if not self.writable():
+ raise llfuse.FUSEError(errno.EROFS)
+
if not isinstance(src, ProjectDirectory):
raise llfuse.FUSEError(errno.EPERM)
class SharedDirectory(Directory):
"""A special directory that represents users or groups who have shared projects with me."""
- def __init__(self, parent_inode, inodes, api, num_retries, exclude,
- poll=False, poll_time=60):
- super(SharedDirectory, self).__init__(parent_inode, inodes, api.config)
+ def __init__(self, parent_inode, inodes, api, num_retries, enable_write, exclude,
+ poll=False, poll_time=60, storage_classes=None):
+ super(SharedDirectory, self).__init__(parent_inode, inodes, api.config, enable_write)
self.api = api
self.num_retries = num_retries
self.current_user = api.users().current().execute(num_retries=num_retries)
self._poll = True
self._poll_time = poll_time
self._updating_lock = threading.Lock()
+ self.storage_classes = storage_classes
@use_counter
def update(self):
if 'httpMethod' in methods.get('shared', {}):
page = []
while True:
- resp = self.api.groups().shared(filters=[['group_class', '=', 'project']]+page,
+ resp = self.api.groups().shared(filters=[['group_class', 'in', ['project','filter']]]+page,
order="uuid",
limit=10000,
count="none",
objects[r["uuid"]] = r
root_owners.add(r["uuid"])
else:
- all_projects = arvados.util.list_all(
- self.api.groups().list, self.num_retries,
- filters=[['group_class','=','project']],
- select=["uuid", "owner_uuid"])
+ all_projects = list(arvados.util.keyset_list_all(
+ self.api.groups().list,
+ order_key="uuid",
+ num_retries=self.num_retries,
+ filters=[['group_class','in',['project','filter']]],
+ select=["uuid", "owner_uuid"]))
for ob in all_projects:
objects[ob['uuid']] = ob
roots.append(ob['uuid'])
root_owners.add(ob['owner_uuid'])
- lusers = arvados.util.list_all(
- self.api.users().list, self.num_retries,
+ lusers = arvados.util.keyset_list_all(
+ self.api.users().list,
+ order_key="uuid",
+ num_retries=self.num_retries,
filters=[['uuid','in', list(root_owners)]])
- lgroups = arvados.util.list_all(
- self.api.groups().list, self.num_retries,
+ lgroups = arvados.util.keyset_list_all(
+ self.api.groups().list,
+ order_key="uuid",
+ num_retries=self.num_retries,
filters=[['uuid','in', list(root_owners)+roots]])
for l in lusers:
obr = objects[r]
if obr.get("name"):
contents[obr["name"]] = obr
- #elif obr.get("username"):
- # contents[obr["username"]] = obr
elif "first_name" in obr:
contents[u"{} {}".format(obr["first_name"], obr["last_name"])] = obr
# end with llfuse.lock_released, re-acquire lock
- self.merge(viewitems(contents),
+ self.merge(contents.items(),
lambda i: i[0],
lambda a, i: a.uuid() == i[1]['uuid'],
- lambda i: ProjectDirectory(self.inode, self.inodes, self.api, self.num_retries, i[1], poll=self._poll, poll_time=self._poll_time))
+ lambda i: ProjectDirectory(self.inode, self.inodes, self.api, self.num_retries, self._enable_write,
+ i[1], poll=self._poll, poll_time=self._poll_time, storage_classes=self.storage_classes))
except Exception:
_logger.exception("arv-mount shared dir error")
finally:
class FuseArvadosFile(File):
    """Wraps an ArvadosFile."""
- __slots__ = ('arvfile',)
+ __slots__ = ('arvfile', '_enable_write')
- def __init__(self, parent_inode, arvfile, _mtime):
+ def __init__(self, parent_inode, arvfile, _mtime, enable_write):
super(FuseArvadosFile, self).__init__(parent_inode, _mtime)
self.arvfile = arvfile
+ self._enable_write = enable_write
def size(self):
with llfuse.lock_released:
return False
def writable(self):
- return self.arvfile.writable()
+ return self._enable_write and self.arvfile.writable()
def flush(self):
with llfuse.lock_released:
if attempted:
# Report buffered stderr from previous call to fusermount,
# now that we know it didn't succeed.
- sys.stderr.write(fusermount_output)
+ sys.stderr.buffer.write(fusermount_output)
delay = 1
if deadline:
llfuse.close()
def make_mount(self, root_class, **root_kwargs):
+        enable_write = root_kwargs.pop('enable_write', True)
self.operations = fuse.Operations(
os.getuid(), os.getgid(),
api_client=self.api,
- enable_write=True)
+ enable_write=enable_write)
self.operations.inodes.add_entry(root_class(
- llfuse.ROOT_INODE, self.operations.inodes, self.api, 0, **root_kwargs))
+ llfuse.ROOT_INODE, self.operations.inodes, self.api, 0, enable_write, **root_kwargs))
llfuse.init(self.operations, self.mounttmp, [])
self.llfuse_thread = threading.Thread(None, lambda: self._llfuse_main())
self.llfuse_thread.daemon = True
def try_exec(mnt, cmd):
try:
+ os.environ['KEEP_LOCAL_STORE'] = tempfile.mkdtemp()
arvados_fuse.command.Mount(
arvados_fuse.command.ArgumentParser().parse_args([
'--read-write',
import subprocess
import time
import unittest
+import tempfile
import arvados
import arvados_fuse as fuse
from .integration_test import IntegrationTest
from .mount_test_base import MountTestBase
+from .test_tmp_collection import storage_classes_desired
logger = logging.getLogger('arvados.arv-mount')
self.test_project = run_test_server.fixture('groups')['aproject']['uuid']
self.non_project_group = run_test_server.fixture('groups')['public_role']['uuid']
+ self.filter_group = run_test_server.fixture('groups')['afiltergroup']['uuid']
self.collection_in_test_project = run_test_server.fixture('collections')['foo_collection_in_aproject']['name']
+ self.collection_in_filter_group = run_test_server.fixture('collections')['baz_file']['name']
cw = arvados.CollectionWriter()
llfuse.listdir(os.path.join(self.mounttmp, self.test_project)))
self.assertIn(self.collection_in_test_project,
llfuse.listdir(os.path.join(self.mounttmp, 'by_id', self.test_project)))
+ self.assertIn(self.collection_in_filter_group,
+ llfuse.listdir(os.path.join(self.mounttmp, self.filter_group)))
+ self.assertIn(self.collection_in_filter_group,
+ llfuse.listdir(os.path.join(self.mounttmp, 'by_id', self.filter_group)))
+
mount_ls = llfuse.listdir(self.mounttmp)
self.assertIn('README', mount_ls)
self.assertIn(self.test_project, mount_ls)
self.assertIn(self.test_project,
llfuse.listdir(os.path.join(self.mounttmp, 'by_id')))
+ self.assertIn(self.filter_group,
+ llfuse.listdir(os.path.join(self.mounttmp, 'by_id')))
with self.assertRaises(OSError):
llfuse.listdir(os.path.join(self.mounttmp, 'by_id', self.non_project_group))
attempt(self.assertEqual, [], llfuse.listdir(os.path.join(self.mounttmp, "aproject")))
-def fuseFileConflictTestHelper(mounttmp):
+def fuseFileConflictTestHelper(mounttmp, uuid, keeptmp, settings):
class Test(unittest.TestCase):
def runTest(self):
+ os.environ['KEEP_LOCAL_STORE'] = keeptmp
+
with open(os.path.join(mounttmp, "file1.txt"), "w") as f:
+ with arvados.collection.Collection(uuid, api_client=arvados.api_from_config('v1', apiconfig=settings)) as collection2:
+ with collection2.open("file1.txt", "w") as f2:
+ f2.write("foo")
f.write("bar")
d1 = sorted(llfuse.listdir(os.path.join(mounttmp)))
d1 = llfuse.listdir(os.path.join(self.mounttmp))
self.assertEqual([], sorted(d1))
- with arvados.collection.Collection(collection.manifest_locator(), api_client=self.api) as collection2:
- with collection2.open("file1.txt", "w") as f:
- f.write("foo")
-
# See note in MountTestBase.setUp
- self.pool.apply(fuseFileConflictTestHelper, (self.mounttmp,))
+ self.pool.apply(fuseFileConflictTestHelper, (self.mounttmp, collection.manifest_locator(), self.keeptmp, arvados.config.settings()))
def fuseUnlinkOpenFileTest(mounttmp):
class SanitizeFilenameTest(MountTestBase):
def test_sanitize_filename(self):
- pdir = fuse.ProjectDirectory(1, {}, self.api, 0, project_object=self.api.users().current().execute())
+ pdir = fuse.ProjectDirectory(1, {}, self.api, 0, False, project_object=self.api.users().current().execute())
acceptable = [
"foo.txt",
".foo",
def _test_slash_substitution_conflict(self, tmpdir, fusename):
with open(os.path.join(tmpdir, fusename, 'waz'), 'w') as f:
f.write('foo')
+
+class StorageClassesTest(IntegrationTest):
+ mnt_args = [
+ '--read-write',
+ '--mount-home', 'homedir',
+ ]
+
+ def setUp(self):
+ super(StorageClassesTest, self).setUp()
+ self.api = arvados.safeapi.ThreadSafeApiCache(arvados.config.settings())
+
+ @IntegrationTest.mount(argv=mnt_args)
+ def test_collection_default_storage_classes(self):
+ coll_path = os.path.join(self.mnt, 'homedir', 'a_collection')
+ self.api.collections().create(body={'name':'a_collection'}).execute()
+ self.pool_test(coll_path)
+ @staticmethod
+ def _test_collection_default_storage_classes(self, coll):
+ self.assertEqual(storage_classes_desired(coll), ['default'])
+
+ @IntegrationTest.mount(argv=mnt_args+['--storage-classes', 'foo'])
+ def test_collection_custom_storage_classes(self):
+ coll_path = os.path.join(self.mnt, 'homedir', 'new_coll')
+ os.mkdir(coll_path)
+ self.pool_test(coll_path)
+ @staticmethod
+ def _test_collection_custom_storage_classes(self, coll):
+ self.assertEqual(storage_classes_desired(coll), ['foo'])
+
+def _readonlyCollectionTestHelper(mounttmp):
+ f = open(os.path.join(mounttmp, 'thing1.txt'), 'rt')
+ # Testing that close() doesn't raise an error.
+ f.close()
+
+class ReadonlyCollectionTest(MountTestBase):
+ def setUp(self):
+ super(ReadonlyCollectionTest, self).setUp()
+ cw = arvados.collection.Collection()
+ with cw.open('thing1.txt', 'wt') as f:
+ f.write("data 1")
+ cw.save_new(owner_uuid=run_test_server.fixture("groups")["aproject"]["uuid"])
+ self.testcollection = cw.api_response()
+
+ def runTest(self):
+ settings = arvados.config.settings().copy()
+ settings["ARVADOS_API_TOKEN"] = run_test_server.fixture("api_client_authorizations")["project_viewer"]["api_token"]
+ self.api = arvados.safeapi.ThreadSafeApiCache(settings)
+ self.make_mount(fuse.CollectionDirectory, collection_record=self.testcollection, enable_write=False)
+
+ self.pool.apply(_readonlyCollectionTestHelper, (self.mounttmp,))
with open(os.path.join(tmpdir, '.arvados#collection')) as tmp:
return json.load(tmp)['manifest_text']
+def storage_classes_desired(tmpdir):
+ with open(os.path.join(tmpdir, '.arvados#collection')) as tmp:
+ return json.load(tmp)['storage_classes_desired']
class TmpCollectionTest(IntegrationTest):
mnt_args = [
'--mount-tmp', 'zzz',
]
+ @IntegrationTest.mount(argv=mnt_args+['--storage-classes', 'foo, bar'])
+ def test_storage_classes(self):
+ self.pool_test(os.path.join(self.mnt, 'zzz'))
+ @staticmethod
+ def _test_storage_classes(self, zzz):
+ self.assertEqual(storage_classes_desired(zzz), ['foo', 'bar'])
+
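The test above passes `'foo, bar'` (with a space) and expects `['foo', 'bar']`, implying the option value is split on commas with surrounding whitespace stripped. A hypothetical helper mirroring that expected behavior:

```python
def parse_storage_classes(arg):
    # Split a --storage-classes value like "foo, bar" on commas,
    # stripping whitespace and dropping empty entries.
    return [c.strip() for c in arg.split(",") if c.strip()]
```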
@IntegrationTest.mount(argv=mnt_args+['--mount-tmp', 'yyy'])
def test_two_tmp(self):
self.pool_test(os.path.join(self.mnt, 'zzz'),
After=network.target
AssertPathExists=/etc/arvados/config.yml
-# systemd==229 (ubuntu:xenial) obeys StartLimitInterval in the [Unit] section
-StartLimitInterval=0
-
# systemd>=230 (debian:9) obeys StartLimitIntervalSec in the [Unit] section
StartLimitIntervalSec=0
"bytes"
"context"
"crypto/md5"
+ "errors"
"fmt"
"io"
"io/ioutil"
"sort"
"strings"
"sync"
+ "sync/atomic"
"syscall"
"time"
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/keepclient"
+ "github.com/jmoiron/sqlx"
"github.com/sirupsen/logrus"
)
// BlobSignatureTTL; and all N existing replicas of a given data block
// are in the N best positions in rendezvous probe order.
type Balancer struct {
+ DB *sqlx.DB
Logger logrus.FieldLogger
Dumper logrus.FieldLogger
Metrics *metrics
classes []string
mounts int
mountsByClass map[string]map[*KeepMount]bool
- collScanned int
+ collScanned int64
serviceRoots map[string]string
errors []error
stats balancerStats
}
if runOptions.CommitTrash {
err = bal.CommitTrash(ctx, client)
+ if err != nil {
+ return
+ }
+ }
+ if runOptions.CommitConfirmedFields {
+ err = bal.updateCollections(ctx, client, cluster)
+ if err != nil {
+ return
+ }
}
return
}
rwdev := map[string]*KeepService{}
for _, srv := range bal.KeepServices {
for _, mnt := range srv.mounts {
- if !mnt.ReadOnly && mnt.DeviceID != "" {
- rwdev[mnt.DeviceID] = srv
+ if !mnt.ReadOnly {
+ rwdev[mnt.UUID] = srv
}
}
}
for _, srv := range bal.KeepServices {
var dedup []*KeepMount
for _, mnt := range srv.mounts {
- if mnt.ReadOnly && rwdev[mnt.DeviceID] != nil {
- bal.logf("skipping srv %s readonly mount %q because same device %q is mounted read-write on srv %s", srv, mnt.UUID, mnt.DeviceID, rwdev[mnt.DeviceID])
+ if mnt.ReadOnly && rwdev[mnt.UUID] != nil {
+ bal.logf("skipping srv %s readonly mount %q because same volume is mounted read-write on srv %s", srv, mnt.UUID, rwdev[mnt.UUID])
} else {
dedup = append(dedup, mnt)
}
}
}
+ mountProblem := false
+ type deviceMount struct {
+ srv *KeepService
+ mnt *KeepMount
+ }
+ deviceMounted := map[string]deviceMount{} // DeviceID -> mount
+ for _, srv := range bal.KeepServices {
+ for _, mnt := range srv.mounts {
+ if first, dup := deviceMounted[mnt.DeviceID]; dup && first.mnt.UUID != mnt.UUID && mnt.DeviceID != "" {
+ bal.logf("config error: device %s is mounted with multiple volume UUIDs: %s on %s, and %s on %s",
+ mnt.DeviceID,
+ first.mnt.UUID, first.srv,
+ mnt.UUID, srv)
+ mountProblem = true
+ continue
+ }
+ deviceMounted[mnt.DeviceID] = deviceMount{srv, mnt}
+ }
+ }
+ if mountProblem {
+ return errors.New("cannot continue with config errors (see above)")
+ }
+
var checkPage arvados.CollectionList
if err = c.RequestAndDecode(&checkPage, "GET", "arvados/v1/collections", nil, arvados.ResourceListParams{
Limit: new(int),
deviceMount := map[string]*KeepMount{}
for _, srv := range bal.KeepServices {
for _, mnt := range srv.mounts {
- equiv := deviceMount[mnt.DeviceID]
+ equiv := deviceMount[mnt.UUID]
if equiv == nil {
equiv = mnt
- if mnt.DeviceID != "" {
- deviceMount[mnt.DeviceID] = equiv
- }
+ deviceMount[mnt.UUID] = equiv
}
equivMount[equiv] = append(equivMount[equiv], mnt)
}
}(mounts)
}
- // collQ buffers incoming collections so we can start fetching
- // the next page without waiting for the current page to
- // finish processing.
collQ := make(chan arvados.Collection, bufs)
- // Start a goroutine to process collections. (We could use a
- // worker pool here, but even with a single worker we already
- // process collections much faster than we can retrieve them.)
- wg.Add(1)
- go func() {
- defer wg.Done()
- for coll := range collQ {
- err := bal.addCollection(coll)
- if err != nil || len(errs) > 0 {
- select {
- case errs <- err:
- default:
- }
- for range collQ {
- }
- cancel()
- return
- }
- bal.collScanned++
- }
- }()
-
- // Start a goroutine to retrieve all collections from the
- // Arvados database and send them to collQ for processing.
+ // Retrieve all collections from the database and send them to
+ // collQ.
wg.Add(1)
go func() {
defer wg.Done()
- err = EachCollection(ctx, c, pageSize,
+ err = EachCollection(ctx, bal.DB, c,
func(coll arvados.Collection) error {
collQ <- coll
if len(errs) > 0 {
}
}()
+ // Parse manifests from collQ and pass the block hashes to
+ // BlockStateMap to track desired replication.
+ for i := 0; i < runtime.NumCPU(); i++ {
+ wg.Add(1)
+ go func() {
+ defer wg.Done()
+ for coll := range collQ {
+ err := bal.addCollection(coll)
+ if err != nil || len(errs) > 0 {
+ select {
+ case errs <- err:
+ default:
+ }
+ cancel()
+ continue
+ }
+ atomic.AddInt64(&bal.collScanned, 1)
+ }
+ }()
+ }
+
wg.Wait()
if len(errs) > 0 {
return <-errs
if coll.ReplicationDesired != nil {
repl = *coll.ReplicationDesired
}
- bal.Logger.Debugf("%v: %d block x%d", coll.UUID, len(blkids), repl)
+ bal.Logger.Debugf("%v: %d blocks x%d", coll.UUID, len(blkids), repl)
// Pass pdh to IncreaseDesired only if LostBlocksFile is being
// written -- otherwise it's just a waste of memory.
pdh := ""
// effectively read-only.
mnt.ReadOnly = mnt.ReadOnly || srv.ReadOnly
- if len(mnt.StorageClasses) == 0 {
- bal.mountsByClass["default"][mnt] = true
- continue
- }
for class := range mnt.StorageClasses {
if mbc := bal.mountsByClass[class]; mbc == nil {
bal.classes = append(bal.classes, class)
// new/remaining replicas uniformly
// across qualifying mounts on a given
// server.
- return rendezvousLess(si.mnt.DeviceID, sj.mnt.DeviceID, blkid)
+ return rendezvousLess(si.mnt.UUID, sj.mnt.UUID, blkid)
}
})
// and returns true if all requirements are met.
trySlot := func(i int) bool {
slot := slots[i]
- if wantMnt[slot.mnt] || wantDev[slot.mnt.DeviceID] {
+ if wantMnt[slot.mnt] || wantDev[slot.mnt.UUID] {
// Already allocated a replica to this
// backend device, possibly on a
// different server.
slots[i].want = true
wantSrv[slot.mnt.KeepService] = true
wantMnt[slot.mnt] = true
- if slot.mnt.DeviceID != "" {
- wantDev[slot.mnt.DeviceID] = true
- }
+ wantDev[slot.mnt.UUID] = true
replWant += slot.mnt.Replication
}
return replProt >= desired && replWant >= desired
// haven't already been added to unsafeToDelete
// because the servers report different Mtimes.
for _, slot := range slots {
- if slot.repl != nil && wantDev[slot.mnt.DeviceID] {
+ if slot.repl != nil && wantDev[slot.mnt.UUID] {
unsafeToDelete[slot.repl.Mtime] = true
}
}
if onlyCount != nil && !onlyCount[slot.mnt] {
continue
}
- if countedDev[slot.mnt.DeviceID] {
+ if countedDev[slot.mnt.UUID] {
continue
}
switch {
bbs.pulling++
repl += slot.mnt.Replication
}
- if slot.mnt.DeviceID != "" {
- countedDev[slot.mnt.DeviceID] = true
- }
+ countedDev[slot.mnt.UUID] = true
}
if repl < needRepl {
bbs.unachievable = true
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/arvadostest"
"git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "github.com/jmoiron/sqlx"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/expfmt"
check "gopkg.in/check.v1"
var stubMounts = map[string][]arvados.KeepMount{
"keep0.zzzzz.arvadosapi.com:25107": {{
- UUID: "zzzzz-ivpuk-000000000000000",
- DeviceID: "keep0-vol0",
+ UUID: "zzzzz-ivpuk-000000000000000",
+ DeviceID: "keep0-vol0",
+ StorageClasses: map[string]bool{"default": true},
}},
"keep1.zzzzz.arvadosapi.com:25107": {{
- UUID: "zzzzz-ivpuk-100000000000000",
- DeviceID: "keep1-vol0",
+ UUID: "zzzzz-ivpuk-100000000000000",
+ DeviceID: "keep1-vol0",
+ StorageClasses: map[string]bool{"default": true},
}},
"keep2.zzzzz.arvadosapi.com:25107": {{
- UUID: "zzzzz-ivpuk-200000000000000",
- DeviceID: "keep2-vol0",
+ UUID: "zzzzz-ivpuk-200000000000000",
+ DeviceID: "keep2-vol0",
+ StorageClasses: map[string]bool{"default": true},
}},
"keep3.zzzzz.arvadosapi.com:25107": {{
- UUID: "zzzzz-ivpuk-300000000000000",
- DeviceID: "keep3-vol0",
+ UUID: "zzzzz-ivpuk-300000000000000",
+ DeviceID: "keep3-vol0",
+ StorageClasses: map[string]bool{"default": true},
}},
}
type runSuite struct {
stub stubServer
config *arvados.Cluster
+ db *sqlx.DB
client *arvados.Client
}
Metrics: newMetrics(prometheus.NewRegistry()),
Logger: options.Logger,
Dumper: options.Dumper,
+ DB: s.db,
}
return srv
}
c.Assert(err, check.Equals, nil)
s.config, err = cfg.GetCluster("")
c.Assert(err, check.Equals, nil)
+ s.db, err = sqlx.Open("postgres", s.config.PostgreSQL.Connection.String())
+ c.Assert(err, check.IsNil)
s.config.Collections.BalancePeriod = arvados.Duration(time.Second)
arvadostest.SetServiceURL(&s.config.Services.Keepbalance, "http://localhost:/")
}
func (s *runSuite) TestRefuseZeroCollections(c *check.C) {
+ defer arvados.NewClientFromEnv().RequestAndDecode(nil, "POST", "database/reset", nil, nil)
+ _, err := s.db.Exec(`delete from collections`)
+ c.Assert(err, check.IsNil)
opts := RunOptions{
CommitPulls: true,
CommitTrash: true,
trashReqs := s.stub.serveKeepstoreTrash()
pullReqs := s.stub.serveKeepstorePull()
srv := s.newServer(&opts)
- _, err := srv.runOnce()
+ _, err = srv.runOnce()
c.Check(err, check.ErrorMatches, "received zero collections")
c.Check(trashReqs.Count(), check.Equals, 4)
c.Check(pullReqs.Count(), check.Equals, 0)
c.Check(pullReqs.Count(), check.Equals, 0)
}
-func (s *runSuite) TestDetectSkippedCollections(c *check.C) {
+func (s *runSuite) TestRefuseSameDeviceDifferentVolumes(c *check.C) {
opts := RunOptions{
CommitPulls: true,
CommitTrash: true,
Logger: ctxlog.TestLogger(c),
}
s.stub.serveCurrentUserAdmin()
- s.stub.serveCollectionsButSkipOne()
+ s.stub.serveZeroCollections()
s.stub.serveKeepServices(stubServices)
- s.stub.serveKeepstoreMounts()
- s.stub.serveKeepstoreIndexFoo4Bar1()
+ s.stub.mux.HandleFunc("/mounts", func(w http.ResponseWriter, r *http.Request) {
+ hostid := r.Host[:5] // "keep0.zzzzz.arvadosapi.com:25107" => "keep0"
+ json.NewEncoder(w).Encode([]arvados.KeepMount{{
+ UUID: "zzzzz-ivpuk-0000000000" + hostid,
+ DeviceID: "keep0-vol0",
+ StorageClasses: map[string]bool{"default": true},
+ }})
+ })
trashReqs := s.stub.serveKeepstoreTrash()
pullReqs := s.stub.serveKeepstorePull()
srv := s.newServer(&opts)
_, err := srv.runOnce()
- c.Check(err, check.ErrorMatches, `Retrieved 2 collections with modtime <= .* but server now reports there are 3 collections.*`)
- c.Check(trashReqs.Count(), check.Equals, 4)
+ c.Check(err, check.ErrorMatches, "cannot continue with config errors.*")
+ c.Check(trashReqs.Count(), check.Equals, 0)
c.Check(pullReqs.Count(), check.Equals, 0)
}
c.Check(err, check.IsNil)
lost, err := ioutil.ReadFile(lostf.Name())
c.Assert(err, check.IsNil)
- c.Check(string(lost), check.Equals, "37b51d194a7513e45b56f6524f2d51f2 fa7aeb5140e2848d39b416daeef4ffc5+45\n")
+ c.Check(string(lost), check.Matches, `(?ms).*37b51d194a7513e45b56f6524f2d51f2.* fa7aeb5140e2848d39b416daeef4ffc5\+45.*`)
}
func (s *runSuite) TestDryRun(c *check.C) {
}
func (s *runSuite) TestCommit(c *check.C) {
- lostf, err := ioutil.TempFile("", "keep-balance-lost-blocks-test-")
- c.Assert(err, check.IsNil)
- s.config.Collections.BlobMissingReport = lostf.Name()
- defer os.Remove(lostf.Name())
-
+ s.config.Collections.BlobMissingReport = c.MkDir() + "/keep-balance-lost-blocks-test-"
s.config.ManagementToken = "xyzzy"
opts := RunOptions{
CommitPulls: true,
// in a poor rendezvous position
c.Check(bal.stats.pulls, check.Equals, 2)
- lost, err := ioutil.ReadFile(lostf.Name())
+ lost, err := ioutil.ReadFile(s.config.Collections.BlobMissingReport)
c.Assert(err, check.IsNil)
- c.Check(string(lost), check.Equals, "")
+ c.Check(string(lost), check.Not(check.Matches), `(?ms).*acbd18db4cc2f85cedef654fccc4a4d8.*`)
buf, err := s.getMetrics(c, srv)
c.Check(err, check.IsNil)
- c.Check(buf, check.Matches, `(?ms).*\narvados_keep_total_bytes 15\n.*`)
- c.Check(buf, check.Matches, `(?ms).*\narvados_keepbalance_changeset_compute_seconds_sum [0-9\.]+\n.*`)
- c.Check(buf, check.Matches, `(?ms).*\narvados_keepbalance_changeset_compute_seconds_count 1\n.*`)
- c.Check(buf, check.Matches, `(?ms).*\narvados_keep_dedup_byte_ratio 1\.5\n.*`)
- c.Check(buf, check.Matches, `(?ms).*\narvados_keep_dedup_block_ratio 1\.5\n.*`)
+ bufstr := buf.String()
+ c.Check(bufstr, check.Matches, `(?ms).*\narvados_keep_total_bytes 15\n.*`)
+ c.Check(bufstr, check.Matches, `(?ms).*\narvados_keepbalance_changeset_compute_seconds_sum [0-9\.]+\n.*`)
+ c.Check(bufstr, check.Matches, `(?ms).*\narvados_keepbalance_changeset_compute_seconds_count 1\n.*`)
+ c.Check(bufstr, check.Matches, `(?ms).*\narvados_keep_dedup_byte_ratio [1-9].*`)
+ c.Check(bufstr, check.Matches, `(?ms).*\narvados_keep_dedup_block_ratio [1-9].*`)
}
func (s *runSuite) TestRunForever(c *check.C) {
}
srv.mounts = []*KeepMount{{
KeepMount: arvados.KeepMount{
- UUID: fmt.Sprintf("zzzzz-mount-%015x", i),
+ UUID: fmt.Sprintf("zzzzz-mount-%015x", i),
+ StorageClasses: map[string]bool{"default": true},
},
KeepService: srv,
}}
srv.mounts[0].KeepMount.DeviceID = fmt.Sprintf("writable-by-srv-%x", i)
srv.mounts = append(srv.mounts, &KeepMount{
KeepMount: arvados.KeepMount{
- DeviceID: fmt.Sprintf("writable-by-srv-%x", (i+1)%len(bal.srvs)),
- UUID: fmt.Sprintf("zzzzz-mount-%015x", i<<16),
- ReadOnly: readonly,
- Replication: 1,
+ DeviceID: bal.srvs[(i+1)%len(bal.srvs)].mounts[0].KeepMount.DeviceID,
+ UUID: bal.srvs[(i+1)%len(bal.srvs)].mounts[0].KeepMount.UUID,
+ ReadOnly: readonly,
+ Replication: 1,
+ StorageClasses: map[string]bool{"default": true},
},
KeepService: srv,
})
func (bal *balancerSuite) TestCleanupMounts(c *check.C) {
bal.srvs[3].mounts[0].KeepMount.ReadOnly = true
bal.srvs[3].mounts[0].KeepMount.DeviceID = "abcdef"
+ bal.srvs[14].mounts[0].KeepMount.UUID = bal.srvs[3].mounts[0].KeepMount.UUID
bal.srvs[14].mounts[0].KeepMount.DeviceID = "abcdef"
c.Check(len(bal.srvs[3].mounts), check.Equals, 1)
bal.cleanupMounts()
}
func (bal *balancerSuite) TestDeviceRWMountedByMultipleServers(c *check.C) {
- bal.srvs[0].mounts[0].KeepMount.DeviceID = "abcdef"
- bal.srvs[9].mounts[0].KeepMount.DeviceID = "abcdef"
- bal.srvs[14].mounts[0].KeepMount.DeviceID = "abcdef"
+ dupUUID := bal.srvs[0].mounts[0].KeepMount.UUID
+ bal.srvs[9].mounts[0].KeepMount.UUID = dupUUID
+ bal.srvs[14].mounts[0].KeepMount.UUID = dupUUID
// block 0 belongs on servers 3 and e, which have different
- // device IDs.
+ // UUIDs.
bal.try(c, tester{
known: 0,
desired: map[string]int{"default": 2},
current: slots{1},
shouldPull: slots{0}})
// block 1 belongs on servers 0 and 9, which both report
- // having a replica, but the replicas are on the same device
- // ID -- so we should pull to the third position (7).
+ // having a replica, but the replicas are on the same volume
+ // -- so we should pull to the third position (7).
bal.try(c, tester{
known: 1,
desired: map[string]int{"default": 2},
current: slots{0, 1},
shouldPull: slots{2}})
- // block 1 can be pulled to the doubly-mounted device, but the
+ // block 1 can be pulled to the doubly-mounted volume, but the
// pull should only be done on the first of the two servers.
bal.try(c, tester{
known: 1,
desired: map[string]int{"default": 2},
current: slots{2},
shouldPull: slots{0}})
- // block 0 has one replica on a single device mounted on two
+ // block 0 has one replica on a single volume mounted on two
// servers (e,9 at positions 1,9). Trashing the replica on 9
// would lose the block.
bal.try(c, tester{
pulling: 1,
}})
// block 0 is overreplicated, but the second and third
- // replicas are the same replica according to DeviceID
+ // replicas are the same replica according to volume UUID
// (despite different Mtimes). Don't trash the third replica.
bal.try(c, tester{
known: 0,
desired: map[string]int{"default": 2, "special": 1},
current: slots{0, 1},
shouldPull: slots{9},
- shouldPullMounts: []string{"zzzzz-mount-special00000009"}})
+ shouldPullMounts: []string{"zzzzz-mount-special20000009"}})
// If some storage classes are not satisfied, don't trash any
// excess replicas. (E.g., if someone desires repl=1 on
// class=durable, and we have two copies on class=volatile, we
desired: map[string]int{"special": 1},
current: slots{0, 1},
shouldPull: slots{9},
- shouldPullMounts: []string{"zzzzz-mount-special00000009"}})
+ shouldPullMounts: []string{"zzzzz-mount-special20000009"}})
// Once storage classes are satisfied, trash excess replicas
// that appear earlier in probe order but aren't needed to
// satisfy the desired classes.
bsm.get(blkid).increaseDesired(pdh, classes, n)
}
}
+
+// GetConfirmedReplication returns the replication level of the given
+// blocks, considering only the specified storage classes.
+//
+// If len(classes)==0, returns the replication level without regard to
+// storage classes.
+//
+// Safe to call concurrently with other calls to
+// GetConfirmedReplication, but not with other BlockStateMap methods.
+func (bsm *BlockStateMap) GetConfirmedReplication(blkids []arvados.SizedDigest, classes []string) int {
+ defaultClasses := map[string]bool{"default": true}
+ min := 0
+ for _, blkid := range blkids {
+ total := 0
+ perclass := make(map[string]int, len(classes))
+ for _, c := range classes {
+ perclass[c] = 0
+ }
+ for _, r := range bsm.get(blkid).Replicas {
+ total += r.KeepMount.Replication
+ mntclasses := r.KeepMount.StorageClasses
+ if len(mntclasses) == 0 {
+ mntclasses = defaultClasses
+ }
+ for c := range mntclasses {
+ n, ok := perclass[c]
+ if !ok {
+ // Don't care about this storage class
+ continue
+ }
+ perclass[c] = n + r.KeepMount.Replication
+ }
+ }
+ if total == 0 {
+ return 0
+ }
+ for _, n := range perclass {
+ if n == 0 {
+ return 0
+ }
+ if n < min || min == 0 {
+ min = n
+ }
+ }
+ if len(perclass) == 0 && (min == 0 || min > total) {
+ min = total
+ }
+ }
+ return min
+}
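The accounting in GetConfirmedReplication can be illustrated with a standalone sketch. The simplified `mount` type and `confirmedReplication` helper below are illustrative stand-ins, not the real BlockStateMap API: per-mount replication is summed per storage class, and the result is the minimum over the requested classes (or the plain total when no classes are given), with any zero short-circuiting to 0.

```go
package main

import "fmt"

// mount is a stand-in for the KeepMount fields the algorithm uses.
type mount struct {
	replication int
	classes     map[string]bool
}

// confirmedReplication mimics the per-block logic: sum each mount's
// replication per storage class, then return the minimum over the
// requested classes -- or the plain total if no classes are requested.
func confirmedReplication(replicas []mount, classes []string) int {
	total := 0
	perclass := make(map[string]int, len(classes))
	for _, c := range classes {
		perclass[c] = 0
	}
	for _, r := range replicas {
		total += r.replication
		for c := range r.classes {
			if _, ok := perclass[c]; ok {
				perclass[c] += r.replication
			}
		}
	}
	if len(perclass) == 0 {
		return total
	}
	min := 0
	for _, n := range perclass {
		if n == 0 {
			// A requested class with no replicas means the
			// collection is not confirmed at all.
			return 0
		}
		if min == 0 || n < min {
			min = n
		}
	}
	return min
}

func main() {
	replicas := []mount{
		{replication: 2, classes: map[string]bool{"default": true, "four": true}},
		{replication: 2, classes: map[string]bool{"four": true}},
	}
	fmt.Println(confirmedReplication(replicas, []string{"default"}))         // 2
	fmt.Println(confirmedReplication(replicas, []string{"four"}))            // 4
	fmt.Println(confirmedReplication(replicas, []string{"default", "four"})) // 2
	fmt.Println(confirmedReplication(replicas, nil))                         // 4
}
```

These values mirror the expectations in TestBlocksOnMultipleMounts above.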
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package main
+
+import (
+ "time"
+
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ check "gopkg.in/check.v1"
+)
+
+var _ = check.Suite(&confirmedReplicationSuite{})
+
+type confirmedReplicationSuite struct {
+ blockStateMap *BlockStateMap
+ mtime int64
+}
+
+func (s *confirmedReplicationSuite) SetUpTest(c *check.C) {
+	s.mtime = time.Now().UnixNano()
+ s.blockStateMap = NewBlockStateMap()
+ s.blockStateMap.AddReplicas(&KeepMount{KeepMount: arvados.KeepMount{
+ Replication: 1,
+ StorageClasses: map[string]bool{"default": true},
+ }}, []arvados.KeepServiceIndexEntry{
+ {SizedDigest: knownBlkid(10), Mtime: s.mtime},
+ })
+ s.blockStateMap.AddReplicas(&KeepMount{KeepMount: arvados.KeepMount{
+ Replication: 2,
+ StorageClasses: map[string]bool{"default": true},
+ }}, []arvados.KeepServiceIndexEntry{
+ {SizedDigest: knownBlkid(20), Mtime: s.mtime},
+ })
+}
+
+func (s *confirmedReplicationSuite) TestZeroReplication(c *check.C) {
+ n := s.blockStateMap.GetConfirmedReplication([]arvados.SizedDigest{knownBlkid(404), knownBlkid(409)}, []string{"default"})
+ c.Check(n, check.Equals, 0)
+ n = s.blockStateMap.GetConfirmedReplication([]arvados.SizedDigest{knownBlkid(10), knownBlkid(404)}, []string{"default"})
+ c.Check(n, check.Equals, 0)
+ n = s.blockStateMap.GetConfirmedReplication([]arvados.SizedDigest{knownBlkid(10), knownBlkid(404)}, nil)
+ c.Check(n, check.Equals, 0)
+}
+
+func (s *confirmedReplicationSuite) TestBlocksWithDifferentReplication(c *check.C) {
+ n := s.blockStateMap.GetConfirmedReplication([]arvados.SizedDigest{knownBlkid(10), knownBlkid(20)}, []string{"default"})
+ c.Check(n, check.Equals, 1)
+}
+
+func (s *confirmedReplicationSuite) TestBlocksInDifferentClasses(c *check.C) {
+ s.blockStateMap.AddReplicas(&KeepMount{KeepMount: arvados.KeepMount{
+ Replication: 3,
+ StorageClasses: map[string]bool{"three": true},
+ }}, []arvados.KeepServiceIndexEntry{
+ {SizedDigest: knownBlkid(30), Mtime: s.mtime},
+ })
+
+ n := s.blockStateMap.GetConfirmedReplication([]arvados.SizedDigest{knownBlkid(30)}, []string{"three"})
+ c.Check(n, check.Equals, 3)
+ n = s.blockStateMap.GetConfirmedReplication([]arvados.SizedDigest{knownBlkid(20), knownBlkid(30)}, []string{"default"})
+ c.Check(n, check.Equals, 0) // block 30 has repl 0 @ "default"
+ n = s.blockStateMap.GetConfirmedReplication([]arvados.SizedDigest{knownBlkid(20), knownBlkid(30)}, []string{"three"})
+ c.Check(n, check.Equals, 0) // block 20 has repl 0 @ "three"
+ n = s.blockStateMap.GetConfirmedReplication([]arvados.SizedDigest{knownBlkid(20), knownBlkid(30)}, nil)
+ c.Check(n, check.Equals, 2)
+}
+
+func (s *confirmedReplicationSuite) TestBlocksOnMultipleMounts(c *check.C) {
+ s.blockStateMap.AddReplicas(&KeepMount{KeepMount: arvados.KeepMount{
+ Replication: 2,
+ StorageClasses: map[string]bool{"default": true, "four": true},
+ }}, []arvados.KeepServiceIndexEntry{
+ {SizedDigest: knownBlkid(40), Mtime: s.mtime},
+ {SizedDigest: knownBlkid(41), Mtime: s.mtime},
+ })
+ s.blockStateMap.AddReplicas(&KeepMount{KeepMount: arvados.KeepMount{
+ Replication: 2,
+ StorageClasses: map[string]bool{"four": true},
+ }}, []arvados.KeepServiceIndexEntry{
+ {SizedDigest: knownBlkid(40), Mtime: s.mtime},
+ {SizedDigest: knownBlkid(41), Mtime: s.mtime},
+ })
+ n := s.blockStateMap.GetConfirmedReplication([]arvados.SizedDigest{knownBlkid(40), knownBlkid(41)}, []string{"default"})
+ c.Check(n, check.Equals, 2)
+ n = s.blockStateMap.GetConfirmedReplication([]arvados.SizedDigest{knownBlkid(40), knownBlkid(41)}, []string{"four"})
+ c.Check(n, check.Equals, 4)
+ n = s.blockStateMap.GetConfirmedReplication([]arvados.SizedDigest{knownBlkid(40), knownBlkid(41)}, []string{"default", "four"})
+ c.Check(n, check.Equals, 2)
+ n = s.blockStateMap.GetConfirmedReplication([]arvados.SizedDigest{knownBlkid(40), knownBlkid(41)}, nil)
+ c.Check(n, check.Equals, 4)
+}
import (
"context"
+ "encoding/json"
"fmt"
+ "runtime"
+ "sync"
+ "sync/atomic"
"time"
"git.arvados.org/arvados.git/sdk/go/arvados"
+ "github.com/jmoiron/sqlx"
)
func countCollections(c *arvados.Client, params arvados.ResourceListParams) (int, error) {
// The progress function is called periodically with done (number of
// times f has been called) and total (number of times f is expected
// to be called).
-//
-// If pageSize > 0 it is used as the maximum page size in each API
-// call; otherwise the maximum allowed page size is requested.
-func EachCollection(ctx context.Context, c *arvados.Client, pageSize int, f func(arvados.Collection) error, progress func(done, total int)) error {
+func EachCollection(ctx context.Context, db *sqlx.DB, c *arvados.Client, f func(arvados.Collection) error, progress func(done, total int)) error {
if progress == nil {
progress = func(_, _ int) {}
}
if err != nil {
return err
}
+ var newestModifiedAt time.Time
- // Note the obvious way to get all collections (sorting by
- // UUID) would be much easier, but would lose data: If a
- // client were to move files from collection with uuid="zzz"
- // to a collection with uuid="aaa" around the time when we
- // were fetching the "mmm" page, we would never see those
- // files' block IDs at all -- even if the client is careful to
- // save "aaa" before saving "zzz".
- //
- // Instead, we get pages in modified_at order. Collections
- // that are modified during the run will be re-fetched in a
- // subsequent page.
-
- limit := pageSize
- if limit <= 0 {
- // Use the maximum page size the server allows
- limit = 1<<31 - 1
- }
- params := arvados.ResourceListParams{
- Limit: &limit,
- Order: "modified_at, uuid",
- Count: "none",
- Select: []string{"uuid", "unsigned_manifest_text", "modified_at", "portable_data_hash", "replication_desired"},
- IncludeTrash: true,
- IncludeOldVersions: true,
+ rows, err := db.QueryxContext(ctx, `SELECT
+ uuid, manifest_text, modified_at, portable_data_hash,
+ replication_desired, replication_confirmed, replication_confirmed_at,
+ storage_classes_desired, storage_classes_confirmed, storage_classes_confirmed_at,
+ is_trashed
+ FROM collections`)
+ if err != nil {
+ return err
}
- var last arvados.Collection
- var filterTime time.Time
+ defer rows.Close()
+ progressTicker := time.NewTicker(10 * time.Second)
+ defer progressTicker.Stop()
callCount := 0
- gettingExactTimestamp := false
- for {
- progress(callCount, expectCount)
- var page arvados.CollectionList
- err := c.RequestAndDecodeContext(ctx, &page, "GET", "arvados/v1/collections", nil, params)
+ for rows.Next() {
+ var coll arvados.Collection
+ var classesDesired, classesConfirmed []byte
+ err = rows.Scan(&coll.UUID, &coll.ManifestText, &coll.ModifiedAt, &coll.PortableDataHash,
+ &coll.ReplicationDesired, &coll.ReplicationConfirmed, &coll.ReplicationConfirmedAt,
+ &classesDesired, &classesConfirmed, &coll.StorageClassesConfirmedAt,
+ &coll.IsTrashed)
if err != nil {
return err
}
- for _, coll := range page.Items {
- if last.ModifiedAt == coll.ModifiedAt && last.UUID >= coll.UUID {
- continue
- }
- callCount++
- err = f(coll)
- if err != nil {
- return err
- }
- last = coll
+
+ err = json.Unmarshal(classesDesired, &coll.StorageClassesDesired)
+ if err != nil && len(classesDesired) > 0 {
+ return err
}
- if len(page.Items) == 0 && !gettingExactTimestamp {
- break
- } else if last.ModifiedAt.IsZero() {
- return fmt.Errorf("BUG: Last collection on the page (%s) has no modified_at timestamp; cannot make progress", last.UUID)
- } else if len(page.Items) > 0 && last.ModifiedAt == filterTime {
- // If we requested time>=X and never got a
- // time>X then we might not have received all
- // items with time==X yet. Switch to
- // gettingExactTimestamp mode (if we're not
- // there already), advancing our UUID
- // threshold with each request, until we get
- // an empty page.
- gettingExactTimestamp = true
- params.Filters = []arvados.Filter{{
- Attr: "modified_at",
- Operator: "=",
- Operand: filterTime,
- }, {
- Attr: "uuid",
- Operator: ">",
- Operand: last.UUID,
- }}
- } else if gettingExactTimestamp {
- // This must be an empty page (in this mode,
- // an unequal timestamp is impossible) so we
- // can start getting pages of newer
- // collections.
- gettingExactTimestamp = false
- params.Filters = []arvados.Filter{{
- Attr: "modified_at",
- Operator: ">",
- Operand: filterTime,
- }}
- } else {
- // In the normal case, we know we have seen
- // all collections with modtime<filterTime,
- // but we might not have seen all that have
- // modtime=filterTime. Hence we use >= instead
- // of > and skip the obvious overlapping item,
- // i.e., the last item on the previous
- // page. In some edge cases this can return
- // collections we have already seen, but
- // avoiding that would add overhead in the
- // overwhelmingly common cases, so we don't
- // bother.
- filterTime = last.ModifiedAt
- params.Filters = []arvados.Filter{{
- Attr: "modified_at",
- Operator: ">=",
- Operand: filterTime,
- }, {
- Attr: "uuid",
- Operator: "!=",
- Operand: last.UUID,
- }}
+ err = json.Unmarshal(classesConfirmed, &coll.StorageClassesConfirmed)
+ if err != nil && len(classesConfirmed) > 0 {
+ return err
+ }
+ if newestModifiedAt.IsZero() || newestModifiedAt.Before(coll.ModifiedAt) {
+ newestModifiedAt = coll.ModifiedAt
+ }
+ callCount++
+ err = f(coll)
+ if err != nil {
+ return err
+ }
+ select {
+ case <-progressTicker.C:
+ progress(callCount, expectCount)
+ default:
}
}
progress(callCount, expectCount)
-
+	if err := rows.Err(); err != nil {
+		return err
+	}
+	err = rows.Close()
+	if err != nil {
+		return err
+	}
if checkCount, err := countCollections(c, arvados.ResourceListParams{
Filters: []arvados.Filter{{
Attr: "modified_at",
Operator: "<=",
- Operand: filterTime}},
+ Operand: newestModifiedAt}},
IncludeTrash: true,
IncludeOldVersions: true,
}); err != nil {
return err
} else if callCount < checkCount {
- return fmt.Errorf("Retrieved %d collections with modtime <= T=%q, but server now reports there are %d collections with modtime <= T", callCount, filterTime, checkCount)
+ return fmt.Errorf("Retrieved %d collections with modtime <= T=%q, but server now reports there are %d collections with modtime <= T", callCount, newestModifiedAt, checkCount)
}
return nil
}
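The row loop above reports progress with a non-blocking select on a ticker channel, so logging happens at most once per interval and never stalls iteration. A minimal standalone sketch of that pattern (the `processAll` helper is hypothetical):

```go
package main

import (
	"fmt"
	"time"
)

// processAll iterates over items, calling report at most once per
// tick: the non-blocking select skips reporting when the ticker
// hasn't fired, so progress output never slows down the loop. A
// final report is always made after the loop finishes.
func processAll(items []int, tick <-chan time.Time, report func(done int)) {
	done := 0
	for range items {
		done++
		select {
		case <-tick:
			report(done)
		default:
			// ticker hasn't fired; keep going without reporting
		}
	}
	report(done)
}

func main() {
	ticker := time.NewTicker(time.Millisecond)
	defer ticker.Stop()
	items := make([]int, 1000)
	reports := 0
	processAll(items, ticker.C, func(done int) {
		reports++
		fmt.Printf("progress: %d/%d\n", done, len(items))
	})
	fmt.Println("reported at least once:", reports >= 1)
}
```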
+
+func (bal *Balancer) updateCollections(ctx context.Context, c *arvados.Client, cluster *arvados.Cluster) error {
+ ctx, cancel := context.WithCancel(ctx)
+ defer cancel()
+
+ defer bal.time("update_collections", "wall clock time to update collections")()
+ threshold := time.Now()
+ thresholdStr := threshold.Format(time.RFC3339Nano)
+
+ updated := int64(0)
+
+ errs := make(chan error, 1)
+ collQ := make(chan arvados.Collection, cluster.Collections.BalanceCollectionBuffers)
+ go func() {
+ defer close(collQ)
+ err := EachCollection(ctx, bal.DB, c, func(coll arvados.Collection) error {
+ if atomic.LoadInt64(&updated) >= int64(cluster.Collections.BalanceUpdateLimit) {
+ bal.logf("reached BalanceUpdateLimit (%d)", cluster.Collections.BalanceUpdateLimit)
+ cancel()
+ return context.Canceled
+ }
+ collQ <- coll
+ return nil
+ }, func(done, total int) {
+ bal.logf("update collections: %d/%d (%d updated @ %.01f updates/s)", done, total, atomic.LoadInt64(&updated), float64(atomic.LoadInt64(&updated))/time.Since(threshold).Seconds())
+ })
+ if err != nil && err != context.Canceled {
+ select {
+ case errs <- err:
+ default:
+ }
+ }
+ }()
+
+ var wg sync.WaitGroup
+
+ // Use about 1 goroutine per 2 CPUs. Based on experiments with
+ // a 2-core host, using more concurrent database
+ // calls/transactions makes this process slower, not faster.
+	for i := 0; i < (runtime.NumCPU()+1)/2; i++ {
+ wg.Add(1)
+ goSendErr(errs, func() error {
+ defer wg.Done()
+ tx, err := bal.DB.Beginx()
+ if err != nil {
+ return err
+ }
+ txPending := 0
+ flush := func(final bool) error {
+ err := tx.Commit()
+ if err != nil && ctx.Err() == nil {
+ tx.Rollback()
+ return err
+ }
+ txPending = 0
+ if final {
+ return nil
+ }
+ tx, err = bal.DB.Beginx()
+ return err
+ }
+ txBatch := 100
+ for coll := range collQ {
+ if ctx.Err() != nil || len(errs) > 0 {
+ continue
+ }
+ blkids, err := coll.SizedDigests()
+ if err != nil {
+ bal.logf("%s: %s", coll.UUID, err)
+ continue
+ }
+ repl := bal.BlockStateMap.GetConfirmedReplication(blkids, coll.StorageClassesDesired)
+
+ desired := bal.DefaultReplication
+ if coll.ReplicationDesired != nil {
+ desired = *coll.ReplicationDesired
+ }
+ if repl > desired {
+ // If actual>desired, confirm
+ // the desired number rather
+ // than actual to avoid
+ // flapping updates when
+ // replication increases
+ // temporarily.
+ repl = desired
+ }
+ classes := emptyJSONArray
+ if repl > 0 {
+ classes, err = json.Marshal(coll.StorageClassesDesired)
+ if err != nil {
+	bal.logf("BUG? json.Marshal(%v) failed: %s", coll.StorageClassesDesired, err)
+ continue
+ }
+ }
+ needUpdate := coll.ReplicationConfirmed == nil || *coll.ReplicationConfirmed != repl || len(coll.StorageClassesConfirmed) != len(coll.StorageClassesDesired)
+ for i := range coll.StorageClassesDesired {
+ if !needUpdate && coll.StorageClassesDesired[i] != coll.StorageClassesConfirmed[i] {
+ needUpdate = true
+ }
+ }
+ if !needUpdate {
+ continue
+ }
+ _, err = tx.ExecContext(ctx, `update collections set
+ replication_confirmed=$1,
+ replication_confirmed_at=$2,
+ storage_classes_confirmed=$3,
+ storage_classes_confirmed_at=$2
+ where uuid=$4`,
+ repl, thresholdStr, classes, coll.UUID)
+ if err != nil {
+ if ctx.Err() == nil {
+ bal.logf("%s: update failed: %s", coll.UUID, err)
+ }
+ continue
+ }
+ atomic.AddInt64(&updated, 1)
+ if txPending++; txPending >= txBatch {
+ err = flush(false)
+ if err != nil {
+ return err
+ }
+ }
+ }
+ return flush(true)
+ })
+ }
+ wg.Wait()
+ bal.logf("updated %d collections", updated)
+ if len(errs) > 0 {
+ return fmt.Errorf("error updating collections: %s", <-errs)
+ }
+ return nil
+}
+
+// Call f in a new goroutine. If it returns a non-nil error, send the
+// error to the errs channel (unless the channel is already full with
+// another error).
+func goSendErr(errs chan<- error, f func() error) {
+ go func() {
+ err := f()
+ if err != nil {
+ select {
+ case errs <- err:
+ default:
+ }
+ }
+ }()
+}
+
+var emptyJSONArray = []byte("[]")
import (
"context"
- "sync"
- "time"
+ "git.arvados.org/arvados.git/lib/config"
"git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "github.com/jmoiron/sqlx"
check "gopkg.in/check.v1"
)
-// TestIdenticalTimestamps ensures EachCollection returns the same
-// set of collections for various page sizes -- even page sizes so
-// small that we get entire pages full of collections with identical
-// timestamps and exercise our gettingExactTimestamp cases.
-func (s *integrationSuite) TestIdenticalTimestamps(c *check.C) {
- // pageSize==0 uses the default (large) page size.
- pageSizes := []int{0, 2, 3, 4, 5}
- got := make([][]string, len(pageSizes))
- var wg sync.WaitGroup
- for trial, pageSize := range pageSizes {
- wg.Add(1)
- go func(trial, pageSize int) {
- defer wg.Done()
- streak := 0
- longestStreak := 0
- var lastMod time.Time
- sawUUID := make(map[string]bool)
- err := EachCollection(context.Background(), s.client, pageSize, func(c arvados.Collection) error {
- if c.ModifiedAt.IsZero() {
- return nil
- }
- if sawUUID[c.UUID] {
- // dup
- return nil
- }
- got[trial] = append(got[trial], c.UUID)
- sawUUID[c.UUID] = true
- if lastMod == c.ModifiedAt {
- streak++
- if streak > longestStreak {
- longestStreak = streak
- }
- } else {
- streak = 0
- lastMod = c.ModifiedAt
- }
- return nil
- }, nil)
- c.Check(err, check.IsNil)
- c.Check(longestStreak > 25, check.Equals, true)
- }(trial, pageSize)
- }
- wg.Wait()
- for trial := 1; trial < len(pageSizes); trial++ {
- c.Check(got[trial], check.DeepEquals, got[0])
- }
+// TestMissedCollections exercises EachCollection's sanity check:
+// #collections processed >= #old collections that exist in database
+// after processing.
+func (s *integrationSuite) TestMissedCollections(c *check.C) {
+ cfg, err := config.NewLoader(nil, ctxlog.TestLogger(c)).Load()
+ c.Assert(err, check.IsNil)
+ cluster, err := cfg.GetCluster("")
+ c.Assert(err, check.IsNil)
+ db, err := sqlx.Open("postgres", cluster.PostgreSQL.Connection.String())
+ c.Assert(err, check.IsNil)
+
+ defer db.Exec(`delete from collections where uuid = 'zzzzz-4zz18-404040404040404'`)
+ insertedOld := false
+ err = EachCollection(context.Background(), db, s.client, func(coll arvados.Collection) error {
+ if !insertedOld {
+ insertedOld = true
+ _, err := db.Exec(`insert into collections (uuid, created_at, updated_at, modified_at) values ('zzzzz-4zz18-404040404040404', '2002-02-02T02:02:02Z', '2002-02-02T02:02:02Z', '2002-02-02T02:02:02Z')`)
+ return err
+ }
+ return nil
+ }, nil)
+ c.Check(err, check.ErrorMatches, `Retrieved .* collections .* but server now reports .* collections.*`)
}
import (
"bytes"
+ "io"
"os"
"strings"
"testing"
"git.arvados.org/arvados.git/sdk/go/arvadostest"
"git.arvados.org/arvados.git/sdk/go/ctxlog"
"git.arvados.org/arvados.git/sdk/go/keepclient"
+ "github.com/jmoiron/sqlx"
"github.com/prometheus/client_golang/prometheus"
"github.com/sirupsen/logrus"
check "gopkg.in/check.v1"
type integrationSuite struct {
config *arvados.Cluster
+ db *sqlx.DB
client *arvados.Client
keepClient *keepclient.KeepClient
}
c.Skip("-short")
}
arvadostest.ResetEnv()
- arvadostest.StartAPI()
arvadostest.StartKeep(4, true)
arv, err := arvadosclient.MakeArvadosClient()
c.Skip("-short")
}
arvadostest.StopKeep(4)
- arvadostest.StopAPI()
}
func (s *integrationSuite) SetUpTest(c *check.C) {
c.Assert(err, check.Equals, nil)
s.config, err = cfg.GetCluster("")
c.Assert(err, check.Equals, nil)
+ s.db, err = sqlx.Open("postgres", s.config.PostgreSQL.Connection.String())
+ c.Assert(err, check.IsNil)
s.config.Collections.BalancePeriod = arvados.Duration(time.Second)
s.client = &arvados.Client{
for iter := 0; iter < 20; iter++ {
logBuf.Reset()
logger := logrus.New()
- logger.Out = &logBuf
+ logger.Out = io.MultiWriter(&logBuf, os.Stderr)
opts := RunOptions{
- CommitPulls: true,
- CommitTrash: true,
- Logger: logger,
+ CommitPulls: true,
+ CommitTrash: true,
+ CommitConfirmedFields: true,
+ Logger: logger,
}
bal := &Balancer{
+ DB: s.db,
Logger: logger,
Metrics: newMetrics(prometheus.NewRegistry()),
}
time.Sleep(200 * time.Millisecond)
}
c.Check(logBuf.String(), check.Not(check.Matches), `(?ms).*0 replicas (0 blocks, 0 bytes) underreplicated.*`)
+
+ for _, trial := range []struct {
+ uuid string
+ repl int
+ classes []string
+ }{
+ {arvadostest.EmptyCollectionUUID, 0, []string{}},
+ {arvadostest.FooCollection, 2, []string{"default"}}, // "foo" blk
+ {arvadostest.StorageClassesDesiredDefaultConfirmedDefault, 2, []string{"default"}}, // "bar" blk
+ {arvadostest.StorageClassesDesiredArchiveConfirmedDefault, 0, []string{}}, // "bar" blk
+ } {
+ c.Logf("%#v", trial)
+ var coll arvados.Collection
+ s.client.RequestAndDecode(&coll, "GET", "arvados/v1/collections/"+trial.uuid, nil, nil)
+ if c.Check(coll.ReplicationConfirmed, check.NotNil) {
+ c.Check(*coll.ReplicationConfirmed, check.Equals, trial.repl)
+ }
+ c.Check(coll.StorageClassesConfirmed, check.DeepEquals, trial.classes)
+ }
}
Documentation=https://doc.arvados.org/
After=network.target
-# systemd==229 (ubuntu:xenial) obeys StartLimitInterval in the [Unit] section
-StartLimitInterval=0
-
# systemd>=230 (debian:9) obeys StartLimitIntervalSec in the [Unit] section
StartLimitIntervalSec=0
"flag"
"fmt"
"io"
+ "net/http"
+ _ "net/http/pprof"
"os"
+ "git.arvados.org/arvados.git/lib/cmd"
"git.arvados.org/arvados.git/lib/config"
"git.arvados.org/arvados.git/lib/service"
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/ctxlog"
"git.arvados.org/arvados.git/sdk/go/health"
+ "github.com/jmoiron/sqlx"
+ _ "github.com/lib/pq"
"github.com/prometheus/client_golang/prometheus"
"github.com/sirupsen/logrus"
)
logger := ctxlog.FromContext(context.Background())
var options RunOptions
- flags := flag.NewFlagSet(prog, flag.ExitOnError)
+ flags := flag.NewFlagSet(prog, flag.ContinueOnError)
flags.BoolVar(&options.Once, "once", false,
"balance once and then exit")
flags.BoolVar(&options.CommitPulls, "commit-pulls", false,
"send pull requests (make more replicas of blocks that are underreplicated or are not in optimal rendezvous probe order)")
flags.BoolVar(&options.CommitTrash, "commit-trash", false,
"send trash requests (delete unreferenced old blocks, and excess replicas of overreplicated blocks)")
- flags.Bool("version", false, "Write version information to stdout and exit 0")
+ flags.BoolVar(&options.CommitConfirmedFields, "commit-confirmed-fields", true,
+ "update collection fields (replicas_confirmed, storage_classes_confirmed, etc.)")
dumpFlag := flags.Bool("dump", false, "dump details for each block to stdout")
+ pprofAddr := flags.String("pprof", "", "serve Go profile data at `[addr]:port`")
+ // "show version" is implemented by service.Command, so we
+ // don't need the var here -- we just need the -version flag
+ // to pass flags.Parse().
+ flags.Bool("version", false, "Write version information to stdout and exit 0")
+
loader := config.NewLoader(os.Stdin, logger)
loader.SetupFlags(flags)
munged := loader.MungeLegacyConfigArgs(logger, args, "-legacy-keepbalance-config")
- flags.Parse(munged)
+ if ok, code := cmd.ParseFlags(flags, prog, munged, "", stderr); !ok {
+ return code
+ }
+
+ // Flag values are only meaningful after parsing, so start the
+ // pprof server here rather than before Parse.
+ if *pprofAddr != "" {
+ go func() {
+ logrus.Println(http.ListenAndServe(*pprofAddr, nil))
+ }()
+ }
if *dumpFlag {
dumper := logrus.New()
// service.Command
args = nil
dropFlag := map[string]bool{
- "once": true,
- "commit-pulls": true,
- "commit-trash": true,
- "dump": true,
+ "once": true,
+ "commit-pulls": true,
+ "commit-trash": true,
+ "commit-confirmed-fields": true,
+ "dump": true,
}
flags.Visit(func(f *flag.Flag) {
if !dropFlag[f.Name] {
- args = append(args, "-"+f.Name, f.Value.String())
+ args = append(args, "-"+f.Name+"="+f.Value.String())
}
})
return service.ErrorHandler(ctx, cluster, fmt.Errorf("error initializing client from cluster config: %s", err))
}
+ db, err := sqlx.Open("postgres", cluster.PostgreSQL.Connection.String())
+ if err != nil {
+ return service.ErrorHandler(ctx, cluster, fmt.Errorf("postgresql connection failed: %s", err))
+ }
+ if p := cluster.PostgreSQL.ConnectionPool; p > 0 {
+ db.SetMaxOpenConns(p)
+ }
+ err = db.Ping()
+ if err != nil {
+ return service.ErrorHandler(ctx, cluster, fmt.Errorf("postgresql connection succeeded but ping failed: %s", err))
+ }
+
if options.Logger == nil {
options.Logger = ctxlog.FromContext(ctx)
}
Metrics: newMetrics(registry),
Logger: options.Logger,
Dumper: options.Dumper,
+ DB: db,
}
srv.Handler = &health.Handler{
Token: cluster.ManagementToken,
"net/http"
"time"
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "github.com/ghodss/yaml"
check "gopkg.in/check.v1"
)
runCommand("keep-balance", []string{"-version"}, nil, &stdout, &stderr)
c.Check(stderr.String(), check.Equals, "")
c.Log(stdout.String())
+ c.Check(stdout.String(), check.Matches, `keep-balance.*\(go1.*\)\n`)
}
func (s *mainSuite) TestHTTPServer(c *check.C) {
+ arvadostest.StartKeep(2, true)
+
ln, err := net.Listen("tcp", ":0")
if err != nil {
c.Fatal(err)
_, p, err := net.SplitHostPort(ln.Addr().String())
c.Check(err, check.IsNil)
ln.Close()
- config := "Clusters:\n zzzzz:\n ManagementToken: abcdefg\n Services: {Keepbalance: {InternalURLs: {'http://localhost:" + p + "/': {}}}}\n"
+ cfg, err := config.NewLoader(nil, ctxlog.TestLogger(c)).Load()
+ c.Assert(err, check.IsNil)
+ cluster, err := cfg.GetCluster("")
+ c.Assert(err, check.IsNil)
+ cluster.Services.Keepbalance.InternalURLs[arvados.URL{Host: "localhost:" + p, Path: "/"}] = arvados.ServiceInstance{}
+ cfg.Clusters[cluster.ClusterID] = *cluster
+ config, err := yaml.Marshal(cfg)
+ c.Assert(err, check.IsNil)
var stdout bytes.Buffer
- go runCommand("keep-balance", []string{"-config", "-"}, bytes.NewBufferString(config), &stdout, &stdout)
+ go runCommand("keep-balance", []string{"-config", "-"}, bytes.NewBuffer(config), &stdout, &stdout)
done := make(chan struct{})
go func() {
defer close(done)
c.Fatal(err)
return
}
- req.Header.Set("Authorization", "Bearer abcdefg")
+ req.Header.Set("Authorization", "Bearer "+cluster.ManagementToken)
resp, err := http.DefaultClient.Do(req)
if err != nil {
c.Logf("error %s", err)
c.Log(stdout.String())
c.Fatal("timeout")
}
+ c.Log(stdout.String())
// Check non-metrics URL that gets passed through to us from
// service.Command
"time"
"git.arvados.org/arvados.git/sdk/go/arvados"
+ "github.com/jmoiron/sqlx"
"github.com/sirupsen/logrus"
)
//
// RunOptions fields are controlled by command line flags.
type RunOptions struct {
- Once bool
- CommitPulls bool
- CommitTrash bool
- Logger logrus.FieldLogger
- Dumper logrus.FieldLogger
+ Once bool
+ CommitPulls bool
+ CommitTrash bool
+ CommitConfirmedFields bool
+ Logger logrus.FieldLogger
+ Dumper logrus.FieldLogger
// SafeRendezvousState from the most recent balance operation,
// or "" if unknown. If this changes from one run to the next,
Logger logrus.FieldLogger
Dumper logrus.FieldLogger
+
+ DB *sqlx.DB
}
// CheckHealth implements service.Handler.
func (srv *Server) CheckHealth() error {
- return nil
+ return srv.DB.Ping()
}
// Done implements service.Handler.
func (srv *Server) runOnce() (*Balancer, error) {
bal := &Balancer{
+ DB: srv.DB,
Logger: srv.Logger,
Dumper: srv.Dumper,
Metrics: srv.Metrics,
import (
"sync"
+ "sync/atomic"
"time"
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/arvadosclient"
- "github.com/hashicorp/golang-lru"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
+ lru "github.com/hashicorp/golang-lru"
"github.com/prometheus/client_golang/prometheus"
+ "github.com/sirupsen/logrus"
)
const metricsUpdateInterval = time.Second / 10
type cache struct {
- config *arvados.WebDAVCacheConfig
+ cluster *arvados.Cluster
+ config *arvados.WebDAVCacheConfig // TODO: use cluster.Collections.WebDAV instead
+ logger logrus.FieldLogger
registry *prometheus.Registry
metrics cacheMetrics
pdhs *lru.TwoQueueCache
collections *lru.TwoQueueCache
- permissions *lru.TwoQueueCache
+ sessions *lru.TwoQueueCache
setupOnce sync.Once
+
+ chPruneSessions chan struct{}
+ chPruneCollections chan struct{}
}
type cacheMetrics struct {
requests prometheus.Counter
collectionBytes prometheus.Gauge
collectionEntries prometheus.Gauge
+ sessionEntries prometheus.Gauge
collectionHits prometheus.Counter
pdhHits prometheus.Counter
- permissionHits prometheus.Counter
+ sessionHits prometheus.Counter
+ sessionMisses prometheus.Counter
apiCalls prometheus.Counter
}
Help: "Number of uuid-to-pdh cache hits.",
})
reg.MustRegister(m.pdhHits)
- m.permissionHits = prometheus.NewCounter(prometheus.CounterOpts{
- Namespace: "arvados",
- Subsystem: "keepweb_collectioncache",
- Name: "permission_hits",
- Help: "Number of targetID-to-permission cache hits.",
- })
- reg.MustRegister(m.permissionHits)
m.apiCalls = prometheus.NewCounter(prometheus.CounterOpts{
Namespace: "arvados",
Subsystem: "keepweb_collectioncache",
reg.MustRegister(m.apiCalls)
m.collectionBytes = prometheus.NewGauge(prometheus.GaugeOpts{
Namespace: "arvados",
- Subsystem: "keepweb_collectioncache",
- Name: "cached_manifest_bytes",
- Help: "Total size of all manifests in cache.",
+ Subsystem: "keepweb_sessions",
+ Name: "cached_collection_bytes",
+ Help: "Total size of all cached manifests and sessions.",
})
reg.MustRegister(m.collectionBytes)
m.collectionEntries = prometheus.NewGauge(prometheus.GaugeOpts{
Help: "Number of manifests in cache.",
})
reg.MustRegister(m.collectionEntries)
+ m.sessionEntries = prometheus.NewGauge(prometheus.GaugeOpts{
+ Namespace: "arvados",
+ Subsystem: "keepweb_sessions",
+ Name: "active",
+ Help: "Number of active token sessions.",
+ })
+ reg.MustRegister(m.sessionEntries)
+ m.sessionHits = prometheus.NewCounter(prometheus.CounterOpts{
+ Namespace: "arvados",
+ Subsystem: "keepweb_sessions",
+ Name: "hits",
+ Help: "Number of token session cache hits.",
+ })
+ reg.MustRegister(m.sessionHits)
+ m.sessionMisses = prometheus.NewCounter(prometheus.CounterOpts{
+ Namespace: "arvados",
+ Subsystem: "keepweb_sessions",
+ Name: "misses",
+ Help: "Number of token session cache misses.",
+ })
+ reg.MustRegister(m.sessionMisses)
}
type cachedPDH struct {
- expire time.Time
- pdh string
+ expire time.Time
+ refresh time.Time
+ pdh string
}
type cachedCollection struct {
expire time.Time
}
+type cachedSession struct {
+ expire time.Time
+ fs atomic.Value
+ client *arvados.Client
+ arvadosclient *arvadosclient.ArvadosClient
+ keepclient *keepclient.KeepClient
+ user atomic.Value
+}
+
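The `cachedSession` fields above use `atomic.Value` so a session's filesystem and user record can be read lock-free by many concurrent requests and swapped atomically when rebuilt. A minimal standalone sketch of that pattern, with a plain string standing in for the `CustomFileSystem` (names here are illustrative, not from the patch):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// session holds a lazily built value in an atomic.Value, as
// cachedSession does for its filesystem: readers Load without locking,
// and a writer can Store a replacement at any time.
type session struct {
	fs atomic.Value
}

// get returns the cached value, building and publishing it on first use.
func (s *session) get(build func() string) string {
	if v, ok := s.fs.Load().(string); ok {
		return v // fast path: no lock, no rebuild
	}
	v := build()
	s.fs.Store(v)
	return v
}

func main() {
	s := &session{}
	fmt.Println(s.get(func() string { return "built" }))   // builds
	fmt.Println(s.get(func() string { return "rebuilt" })) // served from cache
}
```

As in the real code, two goroutines racing on the first access may both build; the last `Store` wins, which is acceptable because either value is valid.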
func (c *cache) setup() {
var err error
c.pdhs, err = lru.New2Q(c.config.MaxUUIDEntries)
if err != nil {
panic(err)
}
- c.permissions, err = lru.New2Q(c.config.MaxPermissionEntries)
+ c.sessions, err = lru.New2Q(c.config.MaxSessions)
if err != nil {
panic(err)
}
c.updateGauges()
}
}()
+ c.chPruneCollections = make(chan struct{}, 1)
+ go func() {
+ for range c.chPruneCollections {
+ c.pruneCollections()
+ }
+ }()
+ c.chPruneSessions = make(chan struct{}, 1)
+ go func() {
+ for range c.chPruneSessions {
+ c.pruneSessions()
+ }
+ }()
}
func (c *cache) updateGauges() {
c.metrics.collectionBytes.Set(float64(c.collectionBytes()))
c.metrics.collectionEntries.Set(float64(c.collections.Len()))
+ c.metrics.sessionEntries.Set(float64(c.sessions.Len()))
}
var selectPDH = map[string]interface{}{
}
coll.ManifestText = m
var updated arvados.Collection
- defer c.pdhs.Remove(coll.UUID)
err = client.RequestAndDecode(&updated, "PATCH", "arvados/v1/collections/"+coll.UUID, nil, map[string]interface{}{
"collection": map[string]string{
"manifest_text": coll.ManifestText,
},
})
- if err == nil {
- c.collections.Add(client.AuthToken+"\000"+coll.PortableDataHash, &cachedCollection{
- expire: time.Now().Add(time.Duration(c.config.TTL)),
- collection: &updated,
- })
+ if err != nil {
+ c.pdhs.Remove(coll.UUID)
+ return err
}
- return err
+ c.collections.Add(client.AuthToken+"\000"+updated.PortableDataHash, &cachedCollection{
+ expire: time.Now().Add(time.Duration(c.config.TTL)),
+ collection: &updated,
+ })
+ c.pdhs.Add(coll.UUID, &cachedPDH{
+ expire: time.Now().Add(time.Duration(c.config.TTL)),
+ refresh: time.Now().Add(time.Duration(c.config.UUIDTTL)),
+ pdh: updated.PortableDataHash,
+ })
+ return nil
}
-func (c *cache) Get(arv *arvadosclient.ArvadosClient, targetID string, forceReload bool) (*arvados.Collection, error) {
+// ResetSession unloads any potentially stale state. Should be called
+// after write operations, so subsequent reads don't return stale
+// data.
+func (c *cache) ResetSession(token string) {
c.setupOnce.Do(c.setup)
- c.metrics.requests.Inc()
+ c.sessions.Remove(token)
+}
- permOK := false
- permKey := arv.ApiToken + "\000" + targetID
- if forceReload {
- } else if ent, cached := c.permissions.Get(permKey); cached {
- ent := ent.(*cachedPermission)
- if ent.expire.Before(time.Now()) {
- c.permissions.Remove(permKey)
- } else {
- permOK = true
- c.metrics.permissionHits.Inc()
+// GetSession returns a long-lived CustomFileSystem suitable for doing
+// a read operation with the given token.
+func (c *cache) GetSession(token string) (arvados.CustomFileSystem, *cachedSession, error) {
+ c.setupOnce.Do(c.setup)
+ now := time.Now()
+ ent, _ := c.sessions.Get(token)
+ sess, _ := ent.(*cachedSession)
+ expired := false
+ if sess == nil {
+ c.metrics.sessionMisses.Inc()
+ sess = &cachedSession{
+ expire: now.Add(c.config.TTL.Duration()),
+ }
+ var err error
+ sess.client, err = arvados.NewClientFromConfig(c.cluster)
+ if err != nil {
+ return nil, nil, err
+ }
+ sess.client.AuthToken = token
+ sess.arvadosclient, err = arvadosclient.New(sess.client)
+ if err != nil {
+ return nil, nil, err
+ }
+ sess.keepclient = keepclient.New(sess.arvadosclient)
+ c.sessions.Add(token, sess)
+ } else if sess.expire.Before(now) {
+ c.metrics.sessionMisses.Inc()
+ expired = true
+ } else {
+ c.metrics.sessionHits.Inc()
+ }
+ select {
+ case c.chPruneSessions <- struct{}{}:
+ default:
+ }
+ fs, _ := sess.fs.Load().(arvados.CustomFileSystem)
+ if fs != nil && !expired {
+ return fs, sess, nil
+ }
+ fs = sess.client.SiteFileSystem(sess.keepclient)
+ fs.ForwardSlashNameSubstitution(c.cluster.Collections.ForwardSlashNameSubstitution)
+ sess.fs.Store(fs)
+ return fs, sess, nil
+}
+
+// pruneSessions removes all expired session cache entries, then
+// removes further entries until the approximate remaining size is at
+// most MaxCollectionBytes/2.
+func (c *cache) pruneSessions() {
+ now := time.Now()
+ var size int64
+ keys := c.sessions.Keys()
+ for _, token := range keys {
+ ent, ok := c.sessions.Peek(token)
+ if !ok {
+ continue
+ }
+ s := ent.(*cachedSession)
+ if s.expire.Before(now) {
+ c.sessions.Remove(token)
+ continue
+ }
+ if fs, ok := s.fs.Load().(arvados.CustomFileSystem); ok {
+ size += fs.MemorySize()
}
}
+ // Remove tokens until the cache is at most half its size limit,
+ // starting with the least frequently used entries (which Keys()
+ // returns last).
+ for i := len(keys) - 1; i >= 0; i-- {
+ token := keys[i]
+ if size <= c.cluster.Collections.WebDAVCache.MaxCollectionBytes/2 {
+ break
+ }
+ ent, ok := c.sessions.Peek(token)
+ if !ok {
+ continue
+ }
+ s := ent.(*cachedSession)
+ fs, _ := s.fs.Load().(arvados.CustomFileSystem)
+ if fs == nil {
+ continue
+ }
+ c.sessions.Remove(token)
+ size -= fs.MemorySize()
+ }
+}
+
+func (c *cache) Get(arv *arvadosclient.ArvadosClient, targetID string, forceReload bool) (*arvados.Collection, error) {
+ c.setupOnce.Do(c.setup)
+ c.metrics.requests.Inc()
+ var pdhRefresh bool
var pdh string
if arvadosclient.PDHMatch(targetID) {
pdh = targetID
c.pdhs.Remove(targetID)
} else {
pdh = ent.pdh
+ pdhRefresh = forceReload || time.Now().After(ent.refresh)
c.metrics.pdhHits.Inc()
}
}
- var collection *arvados.Collection
- if pdh != "" {
- collection = c.lookupCollection(arv.ApiToken + "\000" + pdh)
- }
-
- if collection != nil && permOK {
- return collection, nil
- } else if collection != nil {
- // Ask API for current PDH for this targetID. Most
- // likely, the cached PDH is still correct; if so,
- // _and_ the current token has permission, we can
- // use our cached manifest.
+ if pdh == "" {
+ // UUID->PDH mapping is not cached, might as well get
+ // the whole collection record and be done (below).
+ c.logger.Debugf("cache(%s): have no pdh", targetID)
+ } else if cached := c.lookupCollection(arv.ApiToken + "\000" + pdh); cached == nil {
+ // PDH->manifest is not cached, might as well get the
+ // whole collection record (below).
+ c.logger.Debugf("cache(%s): have pdh %s but manifest is not cached", targetID, pdh)
+ } else if !pdhRefresh {
+ // We looked up UUID->PDH very recently, and we still
+ // have the manifest for that PDH.
+ c.logger.Debugf("cache(%s): have pdh %s and refresh not needed", targetID, pdh)
+ return cached, nil
+ } else {
+ // Get current PDH for this UUID (and confirm we still
+ // have read permission). Most likely, the cached PDH
+ // is still correct, in which case we can use our
+ // cached manifest.
c.metrics.apiCalls.Inc()
var current arvados.Collection
err := arv.Get("collections", targetID, selectPDH, &current)
return nil, err
}
if current.PortableDataHash == pdh {
- c.permissions.Add(permKey, &cachedPermission{
- expire: time.Now().Add(time.Duration(c.config.TTL)),
- })
- if pdh != targetID {
- c.pdhs.Add(targetID, &cachedPDH{
- expire: time.Now().Add(time.Duration(c.config.UUIDTTL)),
- pdh: pdh,
- })
- }
- return collection, err
+ // PDH has not changed, cached manifest is
+ // correct.
+ c.logger.Debugf("cache(%s): verified cached pdh %s is still correct", targetID, pdh)
+ return cached, nil
}
- // PDH changed, but now we know we have
- // permission -- and maybe we already have the
- // new PDH in the cache.
- if coll := c.lookupCollection(arv.ApiToken + "\000" + current.PortableDataHash); coll != nil {
- return coll, nil
+ if cached := c.lookupCollection(arv.ApiToken + "\000" + current.PortableDataHash); cached != nil {
+ // PDH changed, and we already have the
+ // manifest for that new PDH.
+ c.logger.Debugf("cache(%s): cached pdh %s was stale, new pdh is %s and manifest is already in cache", targetID, pdh, current.PortableDataHash)
+ return cached, nil
}
}
- // Collection manifest is not cached.
+ // Either UUID->PDH is not cached, or PDH->manifest is not
+ // cached.
+ var retrieved arvados.Collection
c.metrics.apiCalls.Inc()
- err := arv.Get("collections", targetID, nil, &collection)
+ err := arv.Get("collections", targetID, nil, &retrieved)
if err != nil {
return nil, err
}
+ c.logger.Debugf("cache(%s): retrieved manifest, caching with pdh %s", targetID, retrieved.PortableDataHash)
exp := time.Now().Add(time.Duration(c.config.TTL))
- c.permissions.Add(permKey, &cachedPermission{
- expire: exp,
- })
- c.pdhs.Add(targetID, &cachedPDH{
- expire: time.Now().Add(time.Duration(c.config.UUIDTTL)),
- pdh: collection.PortableDataHash,
- })
- c.collections.Add(arv.ApiToken+"\000"+collection.PortableDataHash, &cachedCollection{
+ if targetID != retrieved.PortableDataHash {
+ c.pdhs.Add(targetID, &cachedPDH{
+ expire: exp,
+ refresh: time.Now().Add(time.Duration(c.config.UUIDTTL)),
+ pdh: retrieved.PortableDataHash,
+ })
+ }
+ c.collections.Add(arv.ApiToken+"\000"+retrieved.PortableDataHash, &cachedCollection{
expire: exp,
- collection: collection,
+ collection: &retrieved,
})
- if int64(len(collection.ManifestText)) > c.config.MaxCollectionBytes/int64(c.config.MaxCollectionEntries) {
- go c.pruneCollections()
+ if int64(len(retrieved.ManifestText)) > c.config.MaxCollectionBytes/int64(c.config.MaxCollectionEntries) {
+ select {
+ case c.chPruneCollections <- struct{}{}:
+ default:
+ }
}
- return collection, nil
+ return &retrieved, nil
}
// pruneCollections checks the total bytes occupied by manifest_text
}
}
for i, k := range keys {
- if size <= c.config.MaxCollectionBytes {
+ if size <= c.config.MaxCollectionBytes/2 {
break
}
if expired[i] {
}
}
-// collectionBytes returns the approximate memory size of the
-// collection cache.
+// collectionBytes returns the approximate combined memory size of the
+// collection cache and session filesystem cache.
func (c *cache) collectionBytes() uint64 {
var size uint64
for _, k := range c.collections.Keys() {
}
size += uint64(len(v.(*cachedCollection).collection.ManifestText))
}
+ for _, token := range c.sessions.Keys() {
+ ent, ok := c.sessions.Peek(token)
+ if !ok {
+ continue
+ }
+ if fs, ok := ent.(*cachedSession).fs.Load().(arvados.CustomFileSystem); ok {
+ size += uint64(fs.MemorySize())
+ }
+ }
return size
}
c.metrics.collectionHits.Inc()
return ent.collection
}
+
+func (c *cache) GetTokenUser(token string) (*arvados.User, error) {
+ // Get and cache user record associated with this
+ // token. We need to know their UUID for logging, and
+ // whether they are an admin or not for certain
+ // permission checks.
+
+ // Get/create session entry
+ _, sess, err := c.GetSession(token)
+ if err != nil {
+ return nil, err
+ }
+
+ // See if the user is already set, and if so, return it
+ user, _ := sess.user.Load().(*arvados.User)
+ if user != nil {
+ return user, nil
+ }
+
+ // Fetch the user record
+ c.metrics.apiCalls.Inc()
+ var current arvados.User
+
+ err = sess.client.RequestAndDecode(&current, "GET", "/arvados/v1/users/current", nil, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ // Stash the user record for next time
+ sess.user.Store(&current)
+ return &current, nil
+}
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/arvadosclient"
"git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/expfmt"
"gopkg.in/check.v1"
arv, err := arvadosclient.MakeArvadosClient()
c.Assert(err, check.Equals, nil)
- cache := newConfig(s.Config).Cache
+ cache := newConfig(ctxlog.TestLogger(c), s.Config).Cache
cache.registry = prometheus.NewRegistry()
// Hit the same collection 5 times using the same token. Only
s.checkCacheMetrics(c, cache.registry,
"requests 5",
"hits 4",
- "permission_hits 4",
"pdh_hits 4",
"api_calls 1")
s.checkCacheMetrics(c, cache.registry,
"requests 6",
"hits 4",
- "permission_hits 4",
"pdh_hits 4",
"api_calls 2")
s.checkCacheMetrics(c, cache.registry,
"requests 7",
"hits 5",
- "permission_hits 5",
"pdh_hits 4",
"api_calls 2")
s.checkCacheMetrics(c, cache.registry,
"requests 27",
"hits 23",
- "permission_hits 23",
"pdh_hits 22",
"api_calls 4")
}
arv, err := arvadosclient.MakeArvadosClient()
c.Assert(err, check.Equals, nil)
- cache := newConfig(s.Config).Cache
+ cache := newConfig(ctxlog.TestLogger(c), s.Config).Cache
cache.registry = prometheus.NewRegistry()
for _, forceReload := range []bool{false, true, false, true} {
s.checkCacheMetrics(c, cache.registry,
"requests 4",
"hits 3",
- "permission_hits 1",
"pdh_hits 0",
- "api_calls 3")
+ "api_calls 1")
}
func (s *UnitSuite) TestCacheForceReloadByUUID(c *check.C) {
arv, err := arvadosclient.MakeArvadosClient()
c.Assert(err, check.Equals, nil)
- cache := newConfig(s.Config).Cache
+ cache := newConfig(ctxlog.TestLogger(c), s.Config).Cache
cache.registry = prometheus.NewRegistry()
for _, forceReload := range []bool{false, true, false, true} {
s.checkCacheMetrics(c, cache.registry,
"requests 4",
"hits 3",
- "permission_hits 1",
"pdh_hits 3",
"api_calls 3")
}
match: `(?ms).*succeeded.*`,
data: testdata,
},
+ {
+ path: writePath,
+ cmd: "move testfile \"test &#!%20 file\"\n",
+ match: `(?ms).*Moving .* succeeded.*`,
+ },
+ {
+ path: writePath,
+ cmd: "move \"test &#!%20 file\" testfile\n",
+ match: `(?ms).*Moving .* succeeded.*`,
+ },
{
path: writePath,
cmd: "move testfile newdir0/\n",
import (
"encoding/json"
+ "fmt"
"html"
"html/template"
"io"
}
)
+func stripDefaultPort(host string) string {
+ // Treat ports 80 and 443 as equivalent to no explicit port, so the default http/https vhosts compare equal.
+ u := &url.URL{Host: host}
+ if p := u.Port(); p == "80" || p == "443" {
+ return strings.ToLower(u.Hostname())
+ } else {
+ return strings.ToLower(host)
+ }
+}
+
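The normalization above can be exercised in isolation. This standalone sketch mirrors `stripDefaultPort` and shows that default ports are dropped and hostnames lowercased, while non-default ports are preserved:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// stripDefaultPort mirrors the handler helper above: build a url.URL
// from the bare host so Port()/Hostname() do the parsing, drop ports
// 80 and 443, and lowercase the result for comparison.
func stripDefaultPort(host string) string {
	u := &url.URL{Host: host}
	if p := u.Port(); p == "80" || p == "443" {
		return strings.ToLower(u.Hostname())
	}
	return strings.ToLower(host)
}

func main() {
	fmt.Println(stripDefaultPort("Download.Example.com:443")) // download.example.com
	fmt.Println(stripDefaultPort("download.example.com:8000")) // non-default port kept
}
```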
// ServeHTTP implements http.Handler.
func (h *handler) ServeHTTP(wOrig http.ResponseWriter, r *http.Request) {
h.setupOnce.Do(h.setup)
var attachment bool
var useSiteFS bool
credentialsOK := h.Config.cluster.Collections.TrustAllContent
+ reasonNotAcceptingCredentials := ""
- if r.Host != "" && r.Host == h.Config.cluster.Services.WebDAVDownload.ExternalURL.Host {
+ if r.Host != "" && stripDefaultPort(r.Host) == stripDefaultPort(h.Config.cluster.Services.WebDAVDownload.ExternalURL.Host) {
credentialsOK = true
attachment = true
} else if r.FormValue("disposition") == "attachment" {
attachment = true
}
+ if !credentialsOK {
+ reasonNotAcceptingCredentials = fmt.Sprintf("vhost %q does not specify a single collection ID or match Services.WebDAVDownload.ExternalURL %q, and Collections.TrustAllContent is false",
+ r.Host, h.Config.cluster.Services.WebDAVDownload.ExternalURL)
+ }
+
if collectionID = parseCollectionIDFromDNSName(r.Host); collectionID != "" {
// http://ID.collections.example/PATH...
credentialsOK = true
// data. Tokens provided with the request are
// ignored.
credentialsOK = false
+ reasonNotAcceptingCredentials = "the '/collections/UUID/PATH' form only works for public data"
}
}
}
if tokens == nil {
- tokens = append(reqTokens, h.Config.cluster.Users.AnonymousUserToken)
+ tokens = reqTokens
+ if h.Config.cluster.Users.AnonymousUserToken != "" {
+ tokens = append(tokens, h.Config.cluster.Users.AnonymousUserToken)
+ }
+ }
+
+ if tokens == nil {
+ if !credentialsOK {
+ http.Error(w, fmt.Sprintf("Authorization tokens are not accepted here: %v, and no anonymous user token is configured.", reasonNotAcceptingCredentials), http.StatusUnauthorized)
+ } else {
+ http.Error(w, "No authorization token in request, and no anonymous user token is configured.", http.StatusUnauthorized)
+ }
+ return
}
if len(targetPath) > 0 && targetPath[0] == "_" {
defer h.clientPool.Put(arv)
var collection *arvados.Collection
+ var tokenUser *arvados.User
tokenResult := make(map[string]int)
for _, arv.ApiToken = range tokens {
var err error
return
}
+ // Check configured permission
+ _, sess, err := h.Config.Cache.GetSession(arv.ApiToken)
+ tokenUser, err = h.Config.Cache.GetTokenUser(arv.ApiToken)
+
if webdavMethod[r.Method] {
+ if !h.userPermittedToUploadOrDownload(r.Method, tokenUser) {
+ http.Error(w, "Not permitted", http.StatusForbidden)
+ return
+ }
+ h.logUploadOrDownload(r, sess.arvadosclient, nil, strings.Join(targetPath, "/"), collection, tokenUser)
+
if writeMethod[r.Method] {
// Save the collection only if/when all
// webdav->filesystem operations succeed --
}
openPath := "/" + strings.Join(targetPath, "/")
- if f, err := fs.Open(openPath); os.IsNotExist(err) {
+ f, err := fs.Open(openPath)
+ if os.IsNotExist(err) {
// Requested non-existent path
http.Error(w, notFoundMessage, http.StatusNotFound)
+ return
} else if err != nil {
// Some other (unexpected) error
http.Error(w, "open: "+err.Error(), http.StatusInternalServerError)
- } else if stat, err := f.Stat(); err != nil {
+ return
+ }
+ defer f.Close()
+ if stat, err := f.Stat(); err != nil {
// Can't get Size/IsDir (shouldn't happen with a collectionFS!)
http.Error(w, "stat: "+err.Error(), http.StatusInternalServerError)
} else if stat.IsDir() && !strings.HasSuffix(r.URL.Path, "/") {
} else if stat.IsDir() {
h.serveDirectory(w, r, collection.Name, fs, openPath, true)
} else {
+ if !h.userPermittedToUploadOrDownload(r.Method, tokenUser) {
+ http.Error(w, "Not permitted", http.StatusForbidden)
+ return
+ }
+ h.logUploadOrDownload(r, sess.arvadosclient, nil, strings.Join(targetPath, "/"), collection, tokenUser)
+
http.ServeContent(w, r, basename, stat.ModTime(), f)
- if wrote := int64(w.WroteBodyBytes()); wrote != stat.Size() && r.Header.Get("Range") == "" {
+ if wrote := int64(w.WroteBodyBytes()); wrote != stat.Size() && w.WroteStatus() == http.StatusOK {
// If we wrote fewer bytes than expected, it's
// too late to change the real response code
// or send an error message to the client, but
// at least we can try to put some useful
// debugging info in the logs.
n, err := f.Read(make([]byte, 1024))
- ctxlog.FromContext(r.Context()).Errorf("stat.Size()==%d but only wrote %d bytes; read(1024) returns %d, %s", stat.Size(), wrote, n, err)
-
+ ctxlog.FromContext(r.Context()).Errorf("stat.Size()==%d but only wrote %d bytes; read(1024) returns %d, %v", stat.Size(), wrote, n, err)
}
}
}
func (h *handler) getClients(reqID, token string) (arv *arvadosclient.ArvadosClient, kc *keepclient.KeepClient, client *arvados.Client, release func(), err error) {
arv = h.clientPool.Get()
if arv == nil {
- return nil, nil, nil, nil, err
+ err = h.clientPool.Err()
+ return
}
release = func() { h.clientPool.Put(arv) }
arv.ApiToken = token
http.Error(w, errReadOnly.Error(), http.StatusMethodNotAllowed)
return
}
- _, kc, client, release, err := h.getClients(r.Header.Get("X-Request-Id"), tokens[0])
+
+ fs, sess, err := h.Config.Cache.GetSession(tokens[0])
if err != nil {
- http.Error(w, "Pool failed: "+h.clientPool.Err().Error(), http.StatusInternalServerError)
+ http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
- defer release()
-
- fs := client.SiteFileSystem(kc)
fs.ForwardSlashNameSubstitution(h.Config.cluster.Collections.ForwardSlashNameSubstitution)
f, err := fs.Open(r.URL.Path)
if os.IsNotExist(err) {
}
return
}
+
+ tokenUser, err := h.Config.Cache.GetTokenUser(tokens[0])
+ if !h.userPermittedToUploadOrDownload(r.Method, tokenUser) {
+ http.Error(w, "Not permitted", http.StatusForbidden)
+ return
+ }
+ h.logUploadOrDownload(r, sess.arvadosclient, fs, r.URL.Path, nil, tokenUser)
+
if r.Method == "GET" {
_, basename := filepath.Split(r.URL.Path)
applyContentDispositionHdr(w, r, basename, attachment)
io.WriteString(w, html.EscapeString(redir))
io.WriteString(w, `">Continue</A>`)
}
+
+func (h *handler) userPermittedToUploadOrDownload(method string, tokenUser *arvados.User) bool {
+ var permitDownload bool
+ var permitUpload bool
+ if tokenUser != nil && tokenUser.IsAdmin {
+ permitUpload = h.Config.cluster.Collections.WebDAVPermission.Admin.Upload
+ permitDownload = h.Config.cluster.Collections.WebDAVPermission.Admin.Download
+ } else {
+ permitUpload = h.Config.cluster.Collections.WebDAVPermission.User.Upload
+ permitDownload = h.Config.cluster.Collections.WebDAVPermission.User.Download
+ }
+ if (method == "PUT" || method == "POST") && !permitUpload {
+ // Disallow operations that upload new files.
+ // Permit webdav operations that move existing files around.
+ return false
+ } else if method == "GET" && !permitDownload {
+ // Disallow downloading file contents.
+ // Permit webdav operations like PROPFIND that retrieve metadata
+ // but not file contents.
+ return false
+ }
+ return true
+}
+
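The gate above consults per-role upload/download switches from cluster config: PUT/POST need the upload switch, GET needs the download switch, and other WebDAV methods (PROPFIND, MOVE, ...) always pass. A self-contained sketch of that decision table, with a local `permission` struct standing in for the WebDAVPermission config:

```go
package main

import "fmt"

// permission stands in for the Admin/User sections of the
// WebDAVPermission cluster config above.
type permission struct{ Upload, Download bool }

// permitted reproduces the decision in userPermittedToUploadOrDownload.
func permitted(method string, isAdmin bool, admin, user permission) bool {
	p := user
	if isAdmin {
		p = admin
	}
	switch {
	case (method == "PUT" || method == "POST") && !p.Upload:
		return false // uploads of new files are disabled for this role
	case method == "GET" && !p.Download:
		return false // downloading file contents is disabled for this role
	}
	return true // metadata-only operations are always allowed
}

func main() {
	user := permission{Upload: false, Download: true}
	admin := permission{Upload: true, Download: true}
	fmt.Println(permitted("PUT", false, admin, user))      // false: users may not upload
	fmt.Println(permitted("PUT", true, admin, user))       // true: admins may
	fmt.Println(permitted("PROPFIND", false, admin, user)) // true: metadata op
}
```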
+func (h *handler) logUploadOrDownload(
+ r *http.Request,
+ client *arvadosclient.ArvadosClient,
+ fs arvados.CustomFileSystem,
+ filepath string,
+ collection *arvados.Collection,
+ user *arvados.User) {
+
+ log := ctxlog.FromContext(r.Context())
+ props := make(map[string]string)
+ props["reqPath"] = r.URL.Path
+ var useruuid string
+ if user != nil {
+ log = log.WithField("user_uuid", user.UUID).
+ WithField("user_full_name", user.FullName)
+ useruuid = user.UUID
+ } else {
+ useruuid = fmt.Sprintf("%s-tpzed-anonymouspublic", h.Config.cluster.ClusterID)
+ }
+ if collection == nil && fs != nil {
+ collection, filepath = h.determineCollection(fs, filepath)
+ }
+ if collection != nil {
+ log = log.WithField("collection_uuid", collection.UUID).
+ WithField("collection_file_path", filepath)
+ props["collection_uuid"] = collection.UUID
+ props["collection_file_path"] = filepath
+ }
+ if r.Method == "PUT" || r.Method == "POST" {
+ log.Info("File upload")
+ if h.Config.cluster.Collections.WebDAVLogEvents {
+ go func() {
+ lr := arvadosclient.Dict{"log": arvadosclient.Dict{
+ "object_uuid": useruuid,
+ "event_type": "file_upload",
+ "properties": props}}
+ err := client.Create("logs", lr, nil)
+ if err != nil {
+ log.WithError(err).Error("Failed to create upload log event on API server")
+ }
+ }()
+ }
+ } else if r.Method == "GET" {
+ if collection != nil && collection.PortableDataHash != "" {
+ log = log.WithField("portable_data_hash", collection.PortableDataHash)
+ props["portable_data_hash"] = collection.PortableDataHash
+ }
+ log.Info("File download")
+ if h.Config.cluster.Collections.WebDAVLogEvents {
+ go func() {
+ lr := arvadosclient.Dict{"log": arvadosclient.Dict{
+ "object_uuid": useruuid,
+ "event_type": "file_download",
+ "properties": props}}
+ err := client.Create("logs", lr, nil)
+ if err != nil {
+ log.WithError(err).Error("Failed to create download log event on API server")
+ }
+ }()
+ }
+ }
+}
+
+func (h *handler) determineCollection(fs arvados.CustomFileSystem, path string) (*arvados.Collection, string) {
+ segments := strings.Split(path, "/")
+ var i int
+ for i = 0; i < len(segments); i++ {
+ dir := append([]string{}, segments[0:i]...)
+ dir = append(dir, ".arvados#collection")
+ f, err := fs.OpenFile(strings.Join(dir, "/"), os.O_RDONLY, 0)
+ if f != nil {
+ defer f.Close()
+ }
+ if err != nil {
+ if !os.IsNotExist(err) {
+ return nil, ""
+ }
+ continue
+ }
+ // err is nil so we found it.
+ decoder := json.NewDecoder(f)
+ var collection arvados.Collection
+ err = decoder.Decode(&collection)
+ if err != nil {
+ return nil, ""
+ }
+ return &collection, strings.Join(segments[i:], "/")
+ }
+ return nil, ""
+}
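`determineCollection` above probes each path prefix, root first, for a `.arvados#collection` marker file and returns the matching prefix plus the remainder of the path. The same walk can be sketched against a plain map standing in for the filesystem (`findCollection` and `exists` are illustrative names, not part of the patch):

```go
package main

import (
	"fmt"
	"strings"
)

// findCollection probes successively longer prefixes of path for a
// ".arvados#collection" marker; exists stands in for fs.OpenFile
// succeeding. On a hit it returns that prefix and the rest of the path.
func findCollection(exists map[string]bool, path string) (string, string) {
	segments := strings.Split(path, "/")
	for i := 0; i < len(segments); i++ {
		probe := strings.Join(append(append([]string{}, segments[:i]...), ".arvados#collection"), "/")
		if exists[probe] {
			return strings.Join(segments[:i], "/"), strings.Join(segments[i:], "/")
		}
	}
	return "", ""
}

func main() {
	exists := map[string]bool{"users/active/coll/.arvados#collection": true}
	fmt.Println(findCollection(exists, "users/active/coll/dir/file.txt"))
}
```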
import (
"bytes"
+ "context"
"fmt"
"html"
+ "io"
"io/ioutil"
"net/http"
"net/http/httptest"
"path/filepath"
"regexp"
"strings"
+ "time"
"git.arvados.org/arvados.git/lib/config"
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/auth"
"git.arvados.org/arvados.git/sdk/go/ctxlog"
"git.arvados.org/arvados.git/sdk/go/keepclient"
+ "github.com/sirupsen/logrus"
check "gopkg.in/check.v1"
)
var _ = check.Suite(&UnitSuite{})
+func init() {
+ arvados.DebugLocksPanicMode = true
+}
+
type UnitSuite struct {
Config *arvados.Config
}
}
func (s *UnitSuite) TestCORSPreflight(c *check.C) {
- h := handler{Config: newConfig(s.Config)}
+ h := handler{Config: newConfig(ctxlog.TestLogger(c), s.Config)}
u := mustParseURL("http://keep-web.example/c=" + arvadostest.FooCollection + "/foo")
req := &http.Request{
Method: "OPTIONS",
c.Check(resp.Code, check.Equals, http.StatusMethodNotAllowed)
}
+func (s *UnitSuite) TestEmptyResponse(c *check.C) {
+ for _, trial := range []struct {
+ dataExists bool
+ sendIMSHeader bool
+ expectStatus int
+ logRegexp string
+ }{
+ // If we return no content due to a Keep read error,
+ // we should emit a log message.
+ {false, false, http.StatusOK, `(?ms).*only wrote 0 bytes.*`},
+
+ // If we return no content because the client sent an
+ // If-Modified-Since header, our response should be
+ // 304. We still expect a "File download" log since it
+ // counts as a file access for auditing.
+ {true, true, http.StatusNotModified, `(?ms).*msg="File download".*`},
+ } {
+ c.Logf("trial: %+v", trial)
+ arvadostest.StartKeep(2, true)
+ if trial.dataExists {
+ arv, err := arvadosclient.MakeArvadosClient()
+ c.Assert(err, check.IsNil)
+ arv.ApiToken = arvadostest.ActiveToken
+ kc, err := keepclient.MakeKeepClient(arv)
+ c.Assert(err, check.IsNil)
+ _, _, err = kc.PutB([]byte("foo"))
+ c.Assert(err, check.IsNil)
+ }
+
+ h := handler{Config: newConfig(ctxlog.TestLogger(c), s.Config)}
+ u := mustParseURL("http://" + arvadostest.FooCollection + ".keep-web.example/foo")
+ req := &http.Request{
+ Method: "GET",
+ Host: u.Host,
+ URL: u,
+ RequestURI: u.RequestURI(),
+ Header: http.Header{
+ "Authorization": {"Bearer " + arvadostest.ActiveToken},
+ },
+ }
+ if trial.sendIMSHeader {
+ req.Header.Set("If-Modified-Since", strings.Replace(time.Now().UTC().Format(time.RFC1123), "UTC", "GMT", -1))
+ }
+
+ var logbuf bytes.Buffer
+ logger := logrus.New()
+ logger.Out = &logbuf
+ req = req.WithContext(ctxlog.Context(context.Background(), logger))
+
+ resp := httptest.NewRecorder()
+ h.ServeHTTP(resp, req)
+ c.Check(resp.Code, check.Equals, trial.expectStatus)
+ c.Check(resp.Body.String(), check.Equals, "")
+
+ c.Log(logbuf.String())
+ c.Check(logbuf.String(), check.Matches, trial.logRegexp)
+ }
+}
+
func (s *UnitSuite) TestInvalidUUID(c *check.C) {
bogusID := strings.Replace(arvadostest.FooCollectionPDH, "+", "-", 1) + "-"
token := arvadostest.ActiveToken
RequestURI: u.RequestURI(),
}
resp := httptest.NewRecorder()
- cfg := newConfig(s.Config)
+ cfg := newConfig(ctxlog.TestLogger(c), s.Config)
cfg.cluster.Users.AnonymousUserToken = arvadostest.AnonymousToken
h := handler{Config: cfg}
h.ServeHTTP(resp, req)
// the token is invalid.
type authorizer func(*http.Request, string) int
-func (s *IntegrationSuite) TestVhostViaAuthzHeader(c *check.C) {
- s.doVhostRequests(c, authzViaAuthzHeader)
+func (s *IntegrationSuite) TestVhostViaAuthzHeaderOAuth2(c *check.C) {
+ s.doVhostRequests(c, authzViaAuthzHeaderOAuth2)
}
-func authzViaAuthzHeader(r *http.Request, tok string) int {
- r.Header.Add("Authorization", "OAuth2 "+tok)
+func authzViaAuthzHeaderOAuth2(r *http.Request, tok string) int {
+ r.Header.Add("Authorization", "OAuth2 "+tok)
+ return http.StatusUnauthorized
+}
+func (s *IntegrationSuite) TestVhostViaAuthzHeaderBearer(c *check.C) {
+ s.doVhostRequests(c, authzViaAuthzHeaderBearer)
+}
+func authzViaAuthzHeaderBearer(r *http.Request, tok string) int {
+ r.Header.Add("Authorization", "Bearer "+tok)
return http.StatusUnauthorized
}
if tok == arvadostest.ActiveToken {
c.Check(code, check.Equals, http.StatusOK)
c.Check(body, check.Equals, "foo")
-
} else {
c.Check(code >= 400, check.Equals, true)
c.Check(code < 500, check.Equals, true)
}
}
+func (s *IntegrationSuite) TestVhostPortMatch(c *check.C) {
+ for _, host := range []string{"download.example.com", "DOWNLOAD.EXAMPLE.COM"} {
+ for _, port := range []string{"80", "443", "8000"} {
+ s.testServer.Config.cluster.Services.WebDAVDownload.ExternalURL.Host = fmt.Sprintf("download.example.com:%v", port)
+ u := mustParseURL(fmt.Sprintf("http://%v/by_id/%v/foo", host, arvadostest.FooCollection))
+ req := &http.Request{
+ Method: "GET",
+ Host: u.Host,
+ URL: u,
+ RequestURI: u.RequestURI(),
+ Header: http.Header{"Authorization": []string{"Bearer " + arvadostest.ActiveToken}},
+ }
+ req, resp := s.doReq(req)
+ code, _ := resp.Code, resp.Body.String()
+
+ if port == "8000" {
+ c.Check(code, check.Equals, 401)
+ } else {
+ c.Check(code, check.Equals, 200)
+ }
+ }
+ }
+}
+
func (s *IntegrationSuite) doReq(req *http.Request) (*http.Request, *httptest.ResponseRecorder) {
resp := httptest.NewRecorder()
s.testServer.Handler.ServeHTTP(resp, req)
{
// URLs of this form ignore authHeader, and
// FooAndBarFilesInDirUUID isn't public, so
- // this returns 404.
+ // this returns 401.
uri: "download.example.com/collections/" + arvadostest.FooAndBarFilesInDirUUID + "/",
header: authHeader,
expect: nil,
c.Check(req.URL.Path, check.Equals, trial.redirect, comment)
}
if trial.expect == nil {
- c.Check(resp.Code, check.Equals, http.StatusNotFound, comment)
+ if s.testServer.Config.cluster.Users.AnonymousUserToken == "" {
+ c.Check(resp.Code, check.Equals, http.StatusUnauthorized, comment)
+ } else {
+ c.Check(resp.Code, check.Equals, http.StatusNotFound, comment)
+ }
} else {
c.Check(resp.Code, check.Equals, http.StatusOK, comment)
for _, e := range trial.expect {
resp = httptest.NewRecorder()
s.testServer.Handler.ServeHTTP(resp, req)
if trial.expect == nil {
- c.Check(resp.Code, check.Equals, http.StatusNotFound, comment)
+ if s.testServer.Config.cluster.Users.AnonymousUserToken == "" {
+ c.Check(resp.Code, check.Equals, http.StatusUnauthorized, comment)
+ } else {
+ c.Check(resp.Code, check.Equals, http.StatusNotFound, comment)
+ }
} else {
c.Check(resp.Code, check.Equals, http.StatusOK, comment)
}
resp = httptest.NewRecorder()
s.testServer.Handler.ServeHTTP(resp, req)
if trial.expect == nil {
- c.Check(resp.Code, check.Equals, http.StatusNotFound, comment)
+ if s.testServer.Config.cluster.Users.AnonymousUserToken == "" {
+ c.Check(resp.Code, check.Equals, http.StatusUnauthorized, comment)
+ } else {
+ c.Check(resp.Code, check.Equals, http.StatusNotFound, comment)
+ }
} else {
c.Check(resp.Code, check.Equals, http.StatusMultiStatus, comment)
for _, e := range trial.expect {
contentType string
}{
{"picture.txt", "BMX bikes are small this year\n", "text/plain; charset=utf-8"},
- {"picture.bmp", "BMX bikes are small this year\n", "image/x-ms-bmp"},
+ {"picture.bmp", "BMX bikes are small this year\n", "image/(x-ms-)?bmp"},
{"picture.jpg", "BMX bikes are small this year\n", "image/jpeg"},
{"picture1", "BMX bikes are small this year\n", "image/bmp"}, // content sniff; "BM" is the magic signature for .bmp
{"picture2", "Cars are small this year\n", "text/plain; charset=utf-8"}, // content sniff
resp := httptest.NewRecorder()
s.testServer.Handler.ServeHTTP(resp, req)
c.Check(resp.Code, check.Equals, http.StatusOK)
- c.Check(resp.Header().Get("Content-Type"), check.Equals, trial.contentType)
+ c.Check(resp.Header().Get("Content-Type"), check.Matches, trial.contentType)
c.Check(resp.Body.String(), check.Equals, trial.content)
}
}
c.Check(keepclient.DefaultBlockCache.MaxBlocks, check.Equals, 42)
}
+// Writing to a collection shouldn't affect its entry in the
+// PDH-to-manifest cache.
+func (s *IntegrationSuite) TestCacheWriteCollectionSamePDH(c *check.C) {
+ arv, err := arvadosclient.MakeArvadosClient()
+ c.Assert(err, check.Equals, nil)
+ arv.ApiToken = arvadostest.ActiveToken
+
+ u := mustParseURL("http://x.example/testfile")
+ req := &http.Request{
+ Method: "GET",
+ Host: u.Host,
+ URL: u,
+ RequestURI: u.RequestURI(),
+ Header: http.Header{"Authorization": {"Bearer " + arv.ApiToken}},
+ }
+
+ checkWithID := func(id string, status int) {
+ req.URL.Host = strings.Replace(id, "+", "-", -1) + ".example"
+ req.Host = req.URL.Host
+ resp := httptest.NewRecorder()
+ s.testServer.Handler.ServeHTTP(resp, req)
+ c.Check(resp.Code, check.Equals, status)
+ }
+
+ var colls [2]arvados.Collection
+ for i := range colls {
+ err := arv.Create("collections",
+ map[string]interface{}{
+ "ensure_unique_name": true,
+ "collection": map[string]interface{}{
+ "name": "test collection",
+ },
+ }, &colls[i])
+ c.Assert(err, check.Equals, nil)
+ }
+
+ // Populate cache with empty collection
+ checkWithID(colls[0].PortableDataHash, http.StatusNotFound)
+
+ // write a file to colls[0]
+ reqPut := *req
+ reqPut.Method = "PUT"
+ reqPut.URL.Host = colls[0].UUID + ".example"
+ reqPut.Host = req.URL.Host
+ reqPut.Body = ioutil.NopCloser(bytes.NewBufferString("testdata"))
+ resp := httptest.NewRecorder()
+ s.testServer.Handler.ServeHTTP(resp, &reqPut)
+ c.Check(resp.Code, check.Equals, http.StatusCreated)
+
+ // new file should not appear in colls[1]
+ checkWithID(colls[1].PortableDataHash, http.StatusNotFound)
+ checkWithID(colls[1].UUID, http.StatusNotFound)
+
+ checkWithID(colls[0].UUID, http.StatusOK)
+}
+
func copyHeader(h http.Header) http.Header {
hc := http.Header{}
for k, v := range h {
}
return hc
}
+
+func (s *IntegrationSuite) checkUploadDownloadRequest(c *check.C, h *handler, req *http.Request,
+ successCode int, direction string, perm bool, userUuid string, collectionUuid string, filepath string) {
+
+ client := s.testServer.Config.Client
+ client.AuthToken = arvadostest.AdminToken
+ var logentries arvados.LogList
+ limit1 := 1
+ err := client.RequestAndDecode(&logentries, "GET", "arvados/v1/logs", nil,
+ arvados.ResourceListParams{
+ Limit: &limit1,
+ Order: "created_at desc"})
+ c.Check(err, check.IsNil)
+ c.Check(logentries.Items, check.HasLen, 1)
+ lastLogId := logentries.Items[0].ID
+
+ var logbuf bytes.Buffer
+ logger := logrus.New()
+ logger.Out = &logbuf
+ resp := httptest.NewRecorder()
+ req = req.WithContext(ctxlog.Context(context.Background(), logger))
+ h.ServeHTTP(resp, req)
+
+ if perm {
+ c.Check(resp.Result().StatusCode, check.Equals, successCode)
+ c.Check(logbuf.String(), check.Matches, `(?ms).*msg="File `+direction+`".*`)
+ c.Check(logbuf.String(), check.Not(check.Matches), `(?ms).*level=error.*`)
+
+ deadline := time.Now().Add(time.Second)
+ for {
+ c.Assert(time.Now().After(deadline), check.Equals, false, check.Commentf("timed out waiting for log entry"))
+ err = client.RequestAndDecode(&logentries, "GET", "arvados/v1/logs", nil,
+ arvados.ResourceListParams{
+ Filters: []arvados.Filter{
+ {Attr: "event_type", Operator: "=", Operand: "file_" + direction},
+ {Attr: "object_uuid", Operator: "=", Operand: userUuid},
+ },
+ Limit: &limit1,
+ Order: "created_at desc",
+ })
+ c.Assert(err, check.IsNil)
+ if len(logentries.Items) > 0 &&
+ logentries.Items[0].ID > lastLogId &&
+ logentries.Items[0].ObjectUUID == userUuid &&
+ logentries.Items[0].Properties["collection_uuid"] == collectionUuid &&
+ logentries.Items[0].Properties["collection_file_path"] == filepath {
+ break
+ }
+ c.Logf("logentries.Items: %+v", logentries.Items)
+ time.Sleep(50 * time.Millisecond)
+ }
+ } else {
+ c.Check(resp.Result().StatusCode, check.Equals, http.StatusForbidden)
+ c.Check(logbuf.String(), check.Equals, "")
+ }
+}
+
+func (s *IntegrationSuite) TestDownloadLoggingPermission(c *check.C) {
+ config := newConfig(ctxlog.TestLogger(c), s.ArvConfig)
+ h := handler{Config: config}
+ u := mustParseURL("http://" + arvadostest.FooCollection + ".keep-web.example/foo")
+
+ config.cluster.Collections.TrustAllContent = true
+
+ for _, adminperm := range []bool{true, false} {
+ for _, userperm := range []bool{true, false} {
+ config.cluster.Collections.WebDAVPermission.Admin.Download = adminperm
+ config.cluster.Collections.WebDAVPermission.User.Download = userperm
+
+ // Test admin permission
+ req := &http.Request{
+ Method: "GET",
+ Host: u.Host,
+ URL: u,
+ RequestURI: u.RequestURI(),
+ Header: http.Header{
+ "Authorization": {"Bearer " + arvadostest.AdminToken},
+ },
+ }
+ s.checkUploadDownloadRequest(c, &h, req, http.StatusOK, "download", adminperm,
+ arvadostest.AdminUserUUID, arvadostest.FooCollection, "foo")
+
+ // Test user permission
+ req = &http.Request{
+ Method: "GET",
+ Host: u.Host,
+ URL: u,
+ RequestURI: u.RequestURI(),
+ Header: http.Header{
+ "Authorization": {"Bearer " + arvadostest.ActiveToken},
+ },
+ }
+ s.checkUploadDownloadRequest(c, &h, req, http.StatusOK, "download", userperm,
+ arvadostest.ActiveUserUUID, arvadostest.FooCollection, "foo")
+ }
+ }
+
+ config.cluster.Collections.WebDAVPermission.User.Download = true
+
+ for _, tryurl := range []string{"http://" + arvadostest.MultilevelCollection1 + ".keep-web.example/dir1/subdir/file1",
+ "http://keep-web/users/active/multilevel_collection_1/dir1/subdir/file1"} {
+
+ u = mustParseURL(tryurl)
+ req := &http.Request{
+ Method: "GET",
+ Host: u.Host,
+ URL: u,
+ RequestURI: u.RequestURI(),
+ Header: http.Header{
+ "Authorization": {"Bearer " + arvadostest.ActiveToken},
+ },
+ }
+ s.checkUploadDownloadRequest(c, &h, req, http.StatusOK, "download", true,
+ arvadostest.ActiveUserUUID, arvadostest.MultilevelCollection1, "dir1/subdir/file1")
+ }
+
+ u = mustParseURL("http://" + strings.Replace(arvadostest.FooCollectionPDH, "+", "-", 1) + ".keep-web.example/foo")
+ req := &http.Request{
+ Method: "GET",
+ Host: u.Host,
+ URL: u,
+ RequestURI: u.RequestURI(),
+ Header: http.Header{
+ "Authorization": {"Bearer " + arvadostest.ActiveToken},
+ },
+ }
+ s.checkUploadDownloadRequest(c, &h, req, http.StatusOK, "download", true,
+ arvadostest.ActiveUserUUID, arvadostest.FooCollection, "foo")
+}
+
+func (s *IntegrationSuite) TestUploadLoggingPermission(c *check.C) {
+ config := newConfig(ctxlog.TestLogger(c), s.ArvConfig)
+ h := handler{Config: config}
+
+ for _, adminperm := range []bool{true, false} {
+ for _, userperm := range []bool{true, false} {
+
+ arv := s.testServer.Config.Client
+ arv.AuthToken = arvadostest.ActiveToken
+
+ var coll arvados.Collection
+ err := arv.RequestAndDecode(&coll,
+ "POST",
+ "/arvados/v1/collections",
+ nil,
+ map[string]interface{}{
+ "ensure_unique_name": true,
+ "collection": map[string]interface{}{
+ "name": "test collection",
+ },
+ })
+ c.Assert(err, check.Equals, nil)
+
+ u := mustParseURL("http://" + coll.UUID + ".keep-web.example/bar")
+
+ config.cluster.Collections.WebDAVPermission.Admin.Upload = adminperm
+ config.cluster.Collections.WebDAVPermission.User.Upload = userperm
+
+ // Test admin permission
+ req := &http.Request{
+ Method: "PUT",
+ Host: u.Host,
+ URL: u,
+ RequestURI: u.RequestURI(),
+ Header: http.Header{
+ "Authorization": {"Bearer " + arvadostest.AdminToken},
+ },
+ Body: io.NopCloser(bytes.NewReader([]byte("bar"))),
+ }
+ s.checkUploadDownloadRequest(c, &h, req, http.StatusCreated, "upload", adminperm,
+ arvadostest.AdminUserUUID, coll.UUID, "bar")
+
+ // Test user permission
+ req = &http.Request{
+ Method: "PUT",
+ Host: u.Host,
+ URL: u,
+ RequestURI: u.RequestURI(),
+ Header: http.Header{
+ "Authorization": {"Bearer " + arvadostest.ActiveToken},
+ },
+ Body: io.NopCloser(bytes.NewReader([]byte("bar"))),
+ }
+ s.checkUploadDownloadRequest(c, &h, req, http.StatusCreated, "upload", userperm,
+ arvadostest.ActiveUserUUID, coll.UUID, "bar")
+ }
+ }
+}
Documentation=https://doc.arvados.org/
After=network.target
-# systemd==229 (ubuntu:xenial) obeys StartLimitInterval in the [Unit] section
-StartLimitInterval=0
-
# systemd>=230 (debian:9) obeys StartLimitIntervalSec in the [Unit] section
StartLimitIntervalSec=0
package main
import (
+ "context"
"flag"
"fmt"
"mime"
"os"
+ "git.arvados.org/arvados.git/lib/cmd"
"git.arvados.org/arvados.git/lib/config"
"git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/coreos/go-systemd/daemon"
"github.com/ghodss/yaml"
"github.com/sirupsen/logrus"
cluster *arvados.Cluster
}
-func newConfig(arvCfg *arvados.Config) *Config {
+func newConfig(logger logrus.FieldLogger, arvCfg *arvados.Config) *Config {
cfg := Config{}
var cls *arvados.Cluster
var err error
}
cfg.cluster = cls
cfg.Cache.config = &cfg.cluster.Collections.WebDAVCache
+ cfg.Cache.cluster = cls
+ cfg.Cache.logger = logger
return &cfg
}
})
}
-func configure(logger log.FieldLogger, args []string) *Config {
- flags := flag.NewFlagSet(args[0], flag.ExitOnError)
+func configure(logger log.FieldLogger, args []string) (*Config, error) {
+ flags := flag.NewFlagSet(args[0], flag.ContinueOnError)
loader := config.NewLoader(os.Stdin, logger)
loader.SetupFlags(flags)
getVersion := flags.Bool("version", false,
"print version information and exit.")
+ prog := args[0]
args = loader.MungeLegacyConfigArgs(logger, args[1:], "-legacy-keepweb-config")
- flags.Parse(args)
-
- // Print version information if requested
- if *getVersion {
- fmt.Printf("keep-web %s\n", version)
- return nil
+ if ok, code := cmd.ParseFlags(flags, prog, args, "", os.Stderr); !ok {
+ os.Exit(code)
+ } else if *getVersion {
+		fmt.Printf("%s %s\n", prog, version)
+ return nil, nil
}
arvCfg, err := loader.Load()
if err != nil {
- log.Fatal(err)
+ return nil, err
}
- cfg := newConfig(arvCfg)
+ cfg := newConfig(logger, arvCfg)
if *dumpConfig {
out, err := yaml.Marshal(cfg)
if err != nil {
- log.Fatal(err)
+ return nil, err
}
_, err = os.Stdout.Write(out)
- if err != nil {
- log.Fatal(err)
- }
- return nil
+ return nil, err
}
- return cfg
+ return cfg, nil
}
func main() {
- logger := log.New()
-
- cfg := configure(logger, os.Args)
- if cfg == nil {
+ initLogger := log.StandardLogger()
+ logger := initLogger.WithField("PID", os.Getpid())
+ cfg, err := configure(logger, os.Args)
+ if err != nil {
+ log.Fatal(err)
+ } else if cfg == nil {
return
}
-
- log.Printf("keep-web %s started", version)
+ logger = logger.WithField("ClusterID", cfg.cluster.ClusterID)
+ logger.Printf("keep-web %s started", version)
+ ctx := ctxlog.Context(context.Background(), logger)
if ext := ".txt"; mime.TypeByExtension(ext) == "" {
log.Warnf("cannot look up MIME type for %q -- this probably means /etc/mime.types is missing -- clients will see incorrect content types", ext)
os.Setenv("ARVADOS_API_HOST", cfg.cluster.Services.Controller.ExternalURL.Host)
srv := &server{Config: cfg}
- if err := srv.Start(logrus.StandardLogger()); err != nil {
- log.Fatal(err)
+ if err := srv.Start(ctx, initLogger); err != nil {
+ logger.Fatal(err)
}
if _, err := daemon.SdNotify(false, "READY=1"); err != nil {
- log.Printf("Error notifying init daemon: %v", err)
+ logger.Printf("Error notifying init daemon: %v", err)
}
- log.Println("Listening at", srv.Addr)
+ logger.Println("Listening at", srv.Addr)
if err := srv.Wait(); err != nil {
- log.Fatal(err)
+ logger.Fatal(err)
}
}
import (
"crypto/hmac"
"crypto/sha256"
+ "encoding/base64"
"encoding/xml"
"errors"
"fmt"
"time"
"git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
"git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
"github.com/AdRoll/goamz/s3"
)
s3MaxClockSkew = 5 * time.Minute
)
+type commonPrefix struct {
+ Prefix string
+}
+
+type listV1Resp struct {
+ XMLName string `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListBucketResult"`
+ s3.ListResp
+ // s3.ListResp marshals an empty tag when
+ // CommonPrefixes is nil, which confuses some clients.
+ // Fix by using this nested struct instead.
+ CommonPrefixes []commonPrefix
+ // Similarly, we need omitempty here, because an empty
+ // tag confuses some clients (e.g.,
+ // github.com/aws/aws-sdk-net never terminates its
+ // paging loop).
+ NextMarker string `xml:"NextMarker,omitempty"`
+ // ListObjectsV2 has a KeyCount response field.
+ KeyCount int
+}
+
+type listV2Resp struct {
+ XMLName string `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListBucketResult"`
+ IsTruncated bool
+ Contents []s3.Key
+ Name string
+ Prefix string
+ Delimiter string
+ MaxKeys int
+ CommonPrefixes []commonPrefix
+ EncodingType string `xml:",omitempty"`
+ KeyCount int
+ ContinuationToken string `xml:",omitempty"`
+ NextContinuationToken string `xml:",omitempty"`
+ StartAfter string `xml:",omitempty"`
+}
+
func hmacstring(msg string, key []byte) []byte {
h := hmac.New(sha256.New, key)
io.WriteString(h, msg)
}
}
- normalizedURL := *r.URL
- normalizedURL.RawPath = ""
- normalizedURL.Path = reMultipleSlashChars.ReplaceAllString(normalizedURL.Path, "/")
- canonicalRequest := fmt.Sprintf("%s\n%s\n%s\n%s\n%s\n%s", r.Method, normalizedURL.EscapedPath(), s3querystring(r.URL), canonicalHeaders, signedHeaders, r.Header.Get("X-Amz-Content-Sha256"))
+ normalizedPath := normalizePath(r.URL.Path)
+ ctxlog.FromContext(r.Context()).Debugf("normalizedPath %q", normalizedPath)
+ canonicalRequest := fmt.Sprintf("%s\n%s\n%s\n%s\n%s\n%s", r.Method, normalizedPath, s3querystring(r.URL), canonicalHeaders, signedHeaders, r.Header.Get("X-Amz-Content-Sha256"))
ctxlog.FromContext(r.Context()).Debugf("s3stringToSign: canonicalRequest %s", canonicalRequest)
return fmt.Sprintf("%s\n%s\n%s\n%s", alg, r.Header.Get("X-Amz-Date"), scope, hashdigest(sha256.New(), canonicalRequest)), nil
}
+func normalizePath(s string) string {
+ // (url.URL).EscapedPath() would be incorrect here. AWS
+ // documentation specifies the URL path should be normalized
+ // according to RFC 3986, i.e., unescaping ALPHA / DIGIT / "-"
+ // / "." / "_" / "~". The implication is that everything other
+ // than those chars (and "/") _must_ be percent-encoded --
+ // even chars like ";" and "," that are not normally
+ // percent-encoded in paths.
+ out := ""
+ for _, c := range []byte(reMultipleSlashChars.ReplaceAllString(s, "/")) {
+ if (c >= 'a' && c <= 'z') ||
+ (c >= 'A' && c <= 'Z') ||
+ (c >= '0' && c <= '9') ||
+ c == '-' ||
+ c == '.' ||
+ c == '_' ||
+ c == '~' ||
+ c == '/' {
+ out += string(c)
+ } else {
+ out += fmt.Sprintf("%%%02X", c)
+ }
+ }
+ return out
+}
+
func s3signature(secretKey, scope, signedHeaders, stringToSign string) (string, error) {
// scope is {datestamp}/{region}/{service}/aws4_request
drs := strings.Split(scope, "/")
var InvalidRequest = "InvalidRequest"
var SignatureDoesNotMatch = "SignatureDoesNotMatch"
+var reRawQueryIndicatesAPI = regexp.MustCompile(`^[a-z]+(&|$)`)
+
// serveS3 handles r and returns true if r is a request from an S3
// client, otherwise it returns false.
func (h *handler) serveS3(w http.ResponseWriter, r *http.Request) bool {
return false
}
- _, kc, client, release, err := h.getClients(r.Header.Get("X-Request-Id"), token)
- if err != nil {
- s3ErrorResponse(w, InternalError, "Pool failed: "+h.clientPool.Err().Error(), r.URL.Path, http.StatusInternalServerError)
- return true
+ var err error
+ var fs arvados.CustomFileSystem
+ var arvclient *arvadosclient.ArvadosClient
+ if r.Method == http.MethodGet || r.Method == http.MethodHead {
+ // Use a single session (cached FileSystem) across
+ // multiple read requests.
+ var sess *cachedSession
+ fs, sess, err = h.Config.Cache.GetSession(token)
+ if err != nil {
+ s3ErrorResponse(w, InternalError, err.Error(), r.URL.Path, http.StatusInternalServerError)
+ return true
+ }
+ arvclient = sess.arvadosclient
+ } else {
+ // Create a FileSystem for this request, to avoid
+ // exposing incomplete write operations to concurrent
+ // requests.
+ var kc *keepclient.KeepClient
+ var release func()
+ var client *arvados.Client
+ arvclient, kc, client, release, err = h.getClients(r.Header.Get("X-Request-Id"), token)
+ if err != nil {
+ s3ErrorResponse(w, InternalError, err.Error(), r.URL.Path, http.StatusInternalServerError)
+ return true
+ }
+ defer release()
+ fs = client.SiteFileSystem(kc)
+ fs.ForwardSlashNameSubstitution(h.Config.cluster.Collections.ForwardSlashNameSubstitution)
}
- defer release()
-
- fs := client.SiteFileSystem(kc)
- fs.ForwardSlashNameSubstitution(h.Config.cluster.Collections.ForwardSlashNameSubstitution)
var objectNameGiven bool
var bucketName string
w.Header().Set("Content-Type", "application/xml")
io.WriteString(w, xml.Header)
fmt.Fprintln(w, `<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"/>`)
+ } else if _, ok = r.URL.Query()["location"]; ok {
+ // GetBucketLocation
+ w.Header().Set("Content-Type", "application/xml")
+ io.WriteString(w, xml.Header)
+ fmt.Fprintln(w, `<LocationConstraint><LocationConstraint xmlns="http://s3.amazonaws.com/doc/2006-03-01/">`+
+ h.Config.cluster.ClusterID+
+ `</LocationConstraint></LocationConstraint>`)
+ } else if reRawQueryIndicatesAPI.MatchString(r.URL.RawQuery) {
+ // GetBucketWebsite ("GET /bucketid/?website"), GetBucketTagging, etc.
+ s3ErrorResponse(w, InvalidRequest, "API not supported", r.URL.Path+"?"+r.URL.RawQuery, http.StatusBadRequest)
} else {
// ListObjects
h.s3list(bucketName, w, r, fs)
}
return true
case r.Method == http.MethodGet || r.Method == http.MethodHead:
+ if reRawQueryIndicatesAPI.MatchString(r.URL.RawQuery) {
+ // GetObjectRetention ("GET /bucketid/objectid?retention&versionID=..."), etc.
+ s3ErrorResponse(w, InvalidRequest, "API not supported", r.URL.Path+"?"+r.URL.RawQuery, http.StatusBadRequest)
+ return true
+ }
fi, err := fs.Stat(fspath)
if r.Method == "HEAD" && !objectNameGiven {
// HeadBucket
s3ErrorResponse(w, NoSuchKey, "The specified key does not exist.", r.URL.Path, http.StatusNotFound)
return true
}
+
+ tokenUser, err := h.Config.Cache.GetTokenUser(token)
+ if !h.userPermittedToUploadOrDownload(r.Method, tokenUser) {
+ http.Error(w, "Not permitted", http.StatusForbidden)
+ return true
+ }
+ h.logUploadOrDownload(r, arvclient, fs, fspath, nil, tokenUser)
+
// shallow copy r, and change URL path
r := *r
r.URL.Path = fspath
http.FileServer(fs).ServeHTTP(w, &r)
return true
case r.Method == http.MethodPut:
+ if reRawQueryIndicatesAPI.MatchString(r.URL.RawQuery) {
+ // PutObjectAcl ("PUT /bucketid/objectid?acl&versionID=..."), etc.
+ s3ErrorResponse(w, InvalidRequest, "API not supported", r.URL.Path+"?"+r.URL.RawQuery, http.StatusBadRequest)
+ return true
+ }
if !objectNameGiven {
s3ErrorResponse(w, InvalidArgument, "Missing object name in PUT request.", r.URL.Path, http.StatusBadRequest)
return true
return true
}
defer f.Close()
+
+ tokenUser, err := h.Config.Cache.GetTokenUser(token)
+ if !h.userPermittedToUploadOrDownload(r.Method, tokenUser) {
+ http.Error(w, "Not permitted", http.StatusForbidden)
+ return true
+ }
+ h.logUploadOrDownload(r, arvclient, fs, fspath, nil, tokenUser)
+
_, err = io.Copy(f, r.Body)
if err != nil {
err = fmt.Errorf("write to %q failed: %w", r.URL.Path, err)
s3ErrorResponse(w, InternalError, err.Error(), r.URL.Path, http.StatusInternalServerError)
return true
}
+ // Ensure a subsequent read operation will see the changes.
+ h.Config.Cache.ResetSession(token)
w.WriteHeader(http.StatusOK)
return true
case r.Method == http.MethodDelete:
+ if reRawQueryIndicatesAPI.MatchString(r.URL.RawQuery) {
+ // DeleteObjectTagging ("DELETE /bucketid/objectid?tagging&versionID=..."), etc.
+ s3ErrorResponse(w, InvalidRequest, "API not supported", r.URL.Path+"?"+r.URL.RawQuery, http.StatusBadRequest)
+ return true
+ }
if !objectNameGiven || r.URL.Path == "/" {
s3ErrorResponse(w, InvalidArgument, "missing object name in DELETE request", r.URL.Path, http.StatusBadRequest)
return true
s3ErrorResponse(w, InternalError, err.Error(), r.URL.Path, http.StatusInternalServerError)
return true
}
+ // Ensure a subsequent read operation will see the changes.
+ h.Config.Cache.ResetSession(token)
w.WriteHeader(http.StatusNoContent)
return true
default:
s3ErrorResponse(w, InvalidRequest, "method not allowed", r.URL.Path, http.StatusMethodNotAllowed)
-
return true
}
}
func (h *handler) s3list(bucket string, w http.ResponseWriter, r *http.Request, fs arvados.CustomFileSystem) {
var params struct {
- delimiter string
- marker string
- maxKeys int
- prefix string
+ v2 bool
+ delimiter string
+ maxKeys int
+ prefix string
+ marker string // decoded continuationToken (v2) or provided by client (v1)
+ startAfter string // v2
+ continuationToken string // v2
+ encodingTypeURL bool // v2
}
params.delimiter = r.FormValue("delimiter")
- params.marker = r.FormValue("marker")
if mk, _ := strconv.ParseInt(r.FormValue("max-keys"), 10, 64); mk > 0 && mk < s3MaxKeys {
params.maxKeys = int(mk)
} else {
params.maxKeys = s3MaxKeys
}
params.prefix = r.FormValue("prefix")
+ switch r.FormValue("list-type") {
+ case "":
+ case "2":
+ params.v2 = true
+ default:
+ http.Error(w, "invalid list-type parameter", http.StatusBadRequest)
+ return
+ }
+ if params.v2 {
+ params.continuationToken = r.FormValue("continuation-token")
+ marker, err := base64.StdEncoding.DecodeString(params.continuationToken)
+ if err != nil {
+ http.Error(w, "invalid continuation token", http.StatusBadRequest)
+ return
+ }
+ params.marker = string(marker)
+ params.startAfter = r.FormValue("start-after")
+ switch r.FormValue("encoding-type") {
+ case "":
+ case "url":
+ params.encodingTypeURL = true
+ default:
+ http.Error(w, "invalid encoding-type parameter", http.StatusBadRequest)
+ return
+ }
+ } else {
+ params.marker = r.FormValue("marker")
+ }
bucketdir := "by_id/" + bucket
// walkpath is the directory (relative to bucketdir) we need
walkpath = ""
}
- type commonPrefix struct {
- Prefix string
- }
- type listResp struct {
- XMLName string `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListBucketResult"`
- s3.ListResp
- // s3.ListResp marshals an empty tag when
- // CommonPrefixes is nil, which confuses some clients.
- // Fix by using this nested struct instead.
- CommonPrefixes []commonPrefix
- // Similarly, we need omitempty here, because an empty
- // tag confuses some clients (e.g.,
- // github.com/aws/aws-sdk-net never terminates its
- // paging loop).
- NextMarker string `xml:"NextMarker,omitempty"`
- // ListObjectsV2 has a KeyCount response field.
- KeyCount int
- }
- resp := listResp{
- ListResp: s3.ListResp{
- Name: bucket,
- Prefix: params.prefix,
- Delimiter: params.delimiter,
- Marker: params.marker,
- MaxKeys: params.maxKeys,
- },
- }
+ resp := listV2Resp{
+ Name: bucket,
+ Prefix: params.prefix,
+ Delimiter: params.delimiter,
+ MaxKeys: params.maxKeys,
+ ContinuationToken: r.FormValue("continuation-token"),
+ StartAfter: params.startAfter,
+ }
+ nextMarker := ""
+
commonPrefixes := map[string]bool{}
err := walkFS(fs, strings.TrimSuffix(bucketdir+"/"+walkpath, "/"), true, func(path string, fi os.FileInfo) error {
if path == bucketdir {
return errDone
}
}
- if path < params.marker || path < params.prefix {
+ if path < params.marker || path < params.prefix || path <= params.startAfter {
return nil
}
if fi.IsDir() && !h.Config.cluster.Collections.S3FolderObjects {
// finding a regular file inside it.
return nil
}
+ if len(resp.Contents)+len(commonPrefixes) >= params.maxKeys {
+ resp.IsTruncated = true
+ if params.delimiter != "" || params.v2 {
+ nextMarker = path
+ }
+ return errDone
+ }
if params.delimiter != "" {
idx := strings.Index(path[len(params.prefix):], params.delimiter)
if idx >= 0 {
return filepath.SkipDir
}
}
- if len(resp.Contents)+len(commonPrefixes) >= params.maxKeys {
- resp.IsTruncated = true
- if params.delimiter != "" {
- resp.NextMarker = path
- }
- return errDone
- }
resp.Contents = append(resp.Contents, s3.Key{
Key: path,
LastModified: fi.ModTime().UTC().Format("2006-01-02T15:04:05.999") + "Z",
sort.Slice(resp.CommonPrefixes, func(i, j int) bool { return resp.CommonPrefixes[i].Prefix < resp.CommonPrefixes[j].Prefix })
}
resp.KeyCount = len(resp.Contents)
+ var respV1orV2 interface{}
+
+ if params.encodingTypeURL {
+ // https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html
+ // "If you specify the encoding-type request
+ // parameter, Amazon S3 includes this element in the
+ // response, and returns encoded key name values in
+ // the following response elements:
+ //
+ // Delimiter, Prefix, Key, and StartAfter.
+ //
+ // Type: String
+ //
+ // Valid Values: url"
+ //
+ // This is somewhat vague but in practice it appears
+ // to mean x-www-form-urlencoded as in RFC1866 8.2.1
+ // para 1 (encode space as "+") rather than straight
+ // percent-encoding as in RFC1738 2.2. Presumably,
+ // the intent is to allow the client to decode XML and
+ // then paste the strings directly into another URI
+ // query or POST form like "https://host/path?foo=" +
+ // foo + "&bar=" + bar.
+ resp.EncodingType = "url"
+ resp.Delimiter = url.QueryEscape(resp.Delimiter)
+ resp.Prefix = url.QueryEscape(resp.Prefix)
+ resp.StartAfter = url.QueryEscape(resp.StartAfter)
+ for i, ent := range resp.Contents {
+ ent.Key = url.QueryEscape(ent.Key)
+ resp.Contents[i] = ent
+ }
+ for i, ent := range resp.CommonPrefixes {
+ ent.Prefix = url.QueryEscape(ent.Prefix)
+ resp.CommonPrefixes[i] = ent
+ }
+ }
+
+ if params.v2 {
+ resp.NextContinuationToken = base64.StdEncoding.EncodeToString([]byte(nextMarker))
+ respV1orV2 = resp
+ } else {
+ respV1orV2 = listV1Resp{
+ CommonPrefixes: resp.CommonPrefixes,
+ NextMarker: nextMarker,
+ KeyCount: resp.KeyCount,
+ ListResp: s3.ListResp{
+ IsTruncated: resp.IsTruncated,
+ Name: bucket,
+ Prefix: params.prefix,
+ Delimiter: params.delimiter,
+ Marker: params.marker,
+ MaxKeys: params.maxKeys,
+ Contents: resp.Contents,
+ },
+ }
+ }
+
w.Header().Set("Content-Type", "application/xml")
io.WriteString(w, xml.Header)
- if err := xml.NewEncoder(w).Encode(resp); err != nil {
+ if err := xml.NewEncoder(w).Encode(respV1orV2); err != nil {
ctxlog.FromContext(r.Context()).WithError(err).Error("error writing xml response")
}
}
import (
"bytes"
+ "context"
"crypto/rand"
"crypto/sha256"
"fmt"
"git.arvados.org/arvados.git/sdk/go/keepclient"
"github.com/AdRoll/goamz/aws"
"github.com/AdRoll/goamz/s3"
+ aws_aws "github.com/aws/aws-sdk-go/aws"
+ aws_credentials "github.com/aws/aws-sdk-go/aws/credentials"
+ aws_session "github.com/aws/aws-sdk-go/aws/session"
+ aws_s3 "github.com/aws/aws-sdk-go/service/s3"
check "gopkg.in/check.v1"
)
auth := aws.NewAuth(arvadostest.ActiveTokenUUID, arvadostest.ActiveToken, "", time.Now().Add(time.Hour))
region := aws.Region{
- Name: s.testServer.Addr,
+ Name: "zzzzz",
S3Endpoint: "http://" + s.testServer.Addr,
}
client := s3.New(*auth, region)
}
func (s *IntegrationSuite) sign(c *check.C, req *http.Request, key, secret string) {
- scope := "20200202/region/service/aws4_request"
+ scope := "20200202/zzzzz/service/aws4_request"
signedHeaders := "date"
req.Header.Set("Date", time.Now().UTC().Format(time.RFC1123))
stringToSign, err := s3stringToSign(s3SignAlgorithm, scope, signedHeaders, req)
rawPath string
normalizedPath string
}{
- {"/foo", "/foo"}, // boring case
- {"/foo%5fbar", "/foo_bar"}, // _ must not be escaped
- {"/foo%2fbar", "/foo/bar"}, // / must not be escaped
- {"/(foo)", "/%28foo%29"}, // () must be escaped
- {"/foo%5bbar", "/foo%5Bbar"}, // %XX must be uppercase
+ {"/foo", "/foo"}, // boring case
+ {"/foo%5fbar", "/foo_bar"}, // _ must not be escaped
+ {"/foo%2fbar", "/foo/bar"}, // / must not be escaped
+ {"/(foo)/[];,", "/%28foo%29/%5B%5D%3B%2C"}, // ()[];, must be escaped
+ {"/foo%5bbar", "/foo%5Bbar"}, // %XX must be uppercase
+ {"//foo///.bar", "/foo/.bar"}, // "//" and "///" must be squashed to "/"
} {
+ c.Logf("trial %q", trial)
+
date := time.Now().UTC().Format("20060102T150405Z")
- scope := "20200202/fakeregion/S3/aws4_request"
+ scope := "20200202/zzzzz/S3/aws4_request"
canonicalRequest := fmt.Sprintf("%s\n%s\n%s\n%s\n%s\n%s", "GET", trial.normalizedPath, "", "host:host.example.com\n", "host", "")
c.Logf("canonicalRequest %q", canonicalRequest)
expect := fmt.Sprintf("%s\n%s\n%s\n%s", s3SignAlgorithm, date, scope, hashdigest(sha256.New(), canonicalRequest))
}
}
+func (s *IntegrationSuite) TestS3GetBucketLocation(c *check.C) {
+ stage := s.s3setup(c)
+ defer stage.teardown(c)
+ for _, bucket := range []*s3.Bucket{stage.collbucket, stage.projbucket} {
+ req, err := http.NewRequest("GET", bucket.URL("/"), nil)
+ c.Check(err, check.IsNil)
+ req.Header.Set("Authorization", "AWS "+arvadostest.ActiveTokenV2+":none")
+ req.URL.RawQuery = "location"
+ resp, err := http.DefaultClient.Do(req)
+ c.Assert(err, check.IsNil)
+ c.Check(resp.Header.Get("Content-Type"), check.Equals, "application/xml")
+ buf, err := ioutil.ReadAll(resp.Body)
+ c.Assert(err, check.IsNil)
+ c.Check(string(buf), check.Equals, "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<LocationConstraint><LocationConstraint xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">zzzzz</LocationConstraint></LocationConstraint>\n")
+ }
+}
+
func (s *IntegrationSuite) TestS3GetBucketVersioning(c *check.C) {
stage := s.s3setup(c)
defer stage.teardown(c)
}
}
+func (s *IntegrationSuite) TestS3UnsupportedAPIs(c *check.C) {
+ stage := s.s3setup(c)
+ defer stage.teardown(c)
+ for _, trial := range []struct {
+ method string
+ path string
+ rawquery string
+ }{
+ {"GET", "/", "acl&versionId=1234"}, // GetBucketAcl
+ {"GET", "/foo", "acl&versionId=1234"}, // GetObjectAcl
+ {"PUT", "/", "acl"}, // PutBucketAcl
+ {"PUT", "/foo", "acl"}, // PutObjectAcl
+ {"DELETE", "/", "tagging"}, // DeleteBucketTagging
+ {"DELETE", "/foo", "tagging"}, // DeleteObjectTagging
+ } {
+ for _, bucket := range []*s3.Bucket{stage.collbucket, stage.projbucket} {
+ c.Logf("trial %v bucket %v", trial, bucket)
+ req, err := http.NewRequest(trial.method, bucket.URL(trial.path), nil)
+ c.Check(err, check.IsNil)
+ req.Header.Set("Authorization", "AWS "+arvadostest.ActiveTokenV2+":none")
+ req.URL.RawQuery = trial.rawquery
+ resp, err := http.DefaultClient.Do(req)
+ c.Assert(err, check.IsNil)
+ c.Check(resp.Header.Get("Content-Type"), check.Equals, "application/xml")
+ buf, err := ioutil.ReadAll(resp.Body)
+ c.Assert(err, check.IsNil)
+ c.Check(string(buf), check.Matches, "(?ms).*InvalidRequest.*API not supported.*")
+ }
+ }
+}
+
// If there are no CommonPrefixes entries, the CommonPrefixes XML tag
// should not appear at all.
func (s *IntegrationSuite) TestS3ListNoCommonPrefixes(c *check.C) {
}
}
+func (s *IntegrationSuite) TestS3ListObjectsV2(c *check.C) {
+ stage := s.s3setup(c)
+ defer stage.teardown(c)
+ dirs := 2
+ filesPerDir := 40
+ stage.writeBigDirs(c, dirs, filesPerDir)
+
+ sess := aws_session.Must(aws_session.NewSession(&aws_aws.Config{
+ Region: aws_aws.String("auto"),
+ Endpoint: aws_aws.String("http://" + s.testServer.Addr),
+ Credentials: aws_credentials.NewStaticCredentials(url.QueryEscape(arvadostest.ActiveTokenV2), url.QueryEscape(arvadostest.ActiveTokenV2), ""),
+ S3ForcePathStyle: aws_aws.Bool(true),
+ }))
+
+ stringOrNil := func(s string) *string {
+ if s == "" {
+ return nil
+ } else {
+ return &s
+ }
+ }
+
+ client := aws_s3.New(sess)
+ ctx := context.Background()
+
+ for _, trial := range []struct {
+ prefix string
+ delimiter string
+ startAfter string
+ maxKeys int
+ expectKeys int
+ expectCommonPrefixes map[string]bool
+ }{
+ {
+ // Expect {filesPerDir plus the dir itself}
+ // for each dir, plus emptydir, emptyfile, and
+ // sailboat.txt.
+ expectKeys: (filesPerDir+1)*dirs + 3,
+ },
+ {
+ maxKeys: 15,
+ expectKeys: (filesPerDir+1)*dirs + 3,
+ },
+ {
+ startAfter: "dir0/z",
+ maxKeys: 15,
+ // Expect {filesPerDir plus the dir itself}
+ // for each dir except dir0, plus emptydir,
+ // emptyfile, and sailboat.txt.
+ expectKeys: (filesPerDir+1)*(dirs-1) + 3,
+ },
+ {
+ maxKeys: 1,
+ delimiter: "/",
+ expectKeys: 2, // emptyfile, sailboat.txt
+ expectCommonPrefixes: map[string]bool{"dir0/": true, "dir1/": true, "emptydir/": true},
+ },
+ {
+ startAfter: "dir0/z",
+ maxKeys: 15,
+ delimiter: "/",
+ expectKeys: 2, // emptyfile, sailboat.txt
+ expectCommonPrefixes: map[string]bool{"dir1/": true, "emptydir/": true},
+ },
+ {
+ startAfter: "dir0/file10.txt",
+ maxKeys: 15,
+ delimiter: "/",
+ expectKeys: 2,
+ expectCommonPrefixes: map[string]bool{"dir0/": true, "dir1/": true, "emptydir/": true},
+ },
+ {
+ startAfter: "dir0/file10.txt",
+ maxKeys: 15,
+ prefix: "d",
+ delimiter: "/",
+ expectKeys: 0,
+ expectCommonPrefixes: map[string]bool{"dir0/": true, "dir1/": true},
+ },
+ } {
+ c.Logf("[trial %+v]", trial)
+ params := aws_s3.ListObjectsV2Input{
+ Bucket: aws_aws.String(stage.collbucket.Name),
+ Prefix: stringOrNil(trial.prefix),
+ Delimiter: stringOrNil(trial.delimiter),
+ StartAfter: stringOrNil(trial.startAfter),
+ MaxKeys: aws_aws.Int64(int64(trial.maxKeys)),
+ }
+ keySeen := map[string]bool{}
+ prefixSeen := map[string]bool{}
+ for {
+			result, err := client.ListObjectsV2WithContext(ctx, &params)
+ if !c.Check(err, check.IsNil) {
+ break
+ }
+ c.Check(result.Name, check.DeepEquals, aws_aws.String(stage.collbucket.Name))
+ c.Check(result.Prefix, check.DeepEquals, aws_aws.String(trial.prefix))
+ c.Check(result.Delimiter, check.DeepEquals, aws_aws.String(trial.delimiter))
+ // The following two fields are expected to be
+ // nil (i.e., no tag in XML response) rather
+ // than "" when the corresponding request
+ // field was empty or nil.
+ c.Check(result.StartAfter, check.DeepEquals, stringOrNil(trial.startAfter))
+ c.Check(result.ContinuationToken, check.DeepEquals, params.ContinuationToken)
+
+ if trial.maxKeys > 0 {
+ c.Check(result.MaxKeys, check.DeepEquals, aws_aws.Int64(int64(trial.maxKeys)))
+ c.Check(len(result.Contents)+len(result.CommonPrefixes) <= trial.maxKeys, check.Equals, true)
+ } else {
+ c.Check(result.MaxKeys, check.DeepEquals, aws_aws.Int64(int64(s3MaxKeys)))
+ }
+
+ for _, ent := range result.Contents {
+ c.Assert(ent.Key, check.NotNil)
+ c.Check(*ent.Key > trial.startAfter, check.Equals, true)
+ c.Check(keySeen[*ent.Key], check.Equals, false, check.Commentf("dup key %q", *ent.Key))
+ keySeen[*ent.Key] = true
+ }
+ for _, ent := range result.CommonPrefixes {
+ c.Assert(ent.Prefix, check.NotNil)
+ c.Check(strings.HasSuffix(*ent.Prefix, trial.delimiter), check.Equals, true, check.Commentf("bad CommonPrefix %q", *ent.Prefix))
+ if strings.HasPrefix(trial.startAfter, *ent.Prefix) {
+ // If we asked for
+ // startAfter=dir0/file10.txt,
+ // we expect dir0/ to be
+ // returned as a common prefix
+ } else {
+ c.Check(*ent.Prefix > trial.startAfter, check.Equals, true)
+ }
+ c.Check(prefixSeen[*ent.Prefix], check.Equals, false, check.Commentf("dup common prefix %q", *ent.Prefix))
+ prefixSeen[*ent.Prefix] = true
+ }
+ if *result.IsTruncated && c.Check(result.NextContinuationToken, check.Not(check.Equals), "") {
+ params.ContinuationToken = aws_aws.String(*result.NextContinuationToken)
+ } else {
+ break
+ }
+ }
+ c.Check(keySeen, check.HasLen, trial.expectKeys)
+ c.Check(prefixSeen, check.HasLen, len(trial.expectCommonPrefixes))
+ if len(trial.expectCommonPrefixes) > 0 {
+ c.Check(prefixSeen, check.DeepEquals, trial.expectCommonPrefixes)
+ }
+ }
+}
+
+func (s *IntegrationSuite) TestS3ListObjectsV2EncodingTypeURL(c *check.C) {
+ stage := s.s3setup(c)
+ defer stage.teardown(c)
+ dirs := 2
+ filesPerDir := 40
+ stage.writeBigDirs(c, dirs, filesPerDir)
+
+ sess := aws_session.Must(aws_session.NewSession(&aws_aws.Config{
+ Region: aws_aws.String("auto"),
+ Endpoint: aws_aws.String("http://" + s.testServer.Addr),
+ Credentials: aws_credentials.NewStaticCredentials(url.QueryEscape(arvadostest.ActiveTokenV2), url.QueryEscape(arvadostest.ActiveTokenV2), ""),
+ S3ForcePathStyle: aws_aws.Bool(true),
+ }))
+
+ client := aws_s3.New(sess)
+ ctx := context.Background()
+
+ result, err := client.ListObjectsV2WithContext(ctx, &aws_s3.ListObjectsV2Input{
+ Bucket: aws_aws.String(stage.collbucket.Name),
+ Prefix: aws_aws.String("dir0/"),
+ Delimiter: aws_aws.String("/"),
+ StartAfter: aws_aws.String("dir0/"),
+ EncodingType: aws_aws.String("url"),
+ })
+ c.Assert(err, check.IsNil)
+ c.Check(*result.Prefix, check.Equals, "dir0%2F")
+ c.Check(*result.Delimiter, check.Equals, "%2F")
+ c.Check(*result.StartAfter, check.Equals, "dir0%2F")
+ for _, ent := range result.Contents {
+ c.Check(*ent.Key, check.Matches, "dir0%2F.*")
+ }
+ result, err = client.ListObjectsV2WithContext(ctx, &aws_s3.ListObjectsV2Input{
+ Bucket: aws_aws.String(stage.collbucket.Name),
+ Delimiter: aws_aws.String("/"),
+ EncodingType: aws_aws.String("url"),
+ })
+ c.Assert(err, check.IsNil)
+ c.Check(*result.Delimiter, check.Equals, "%2F")
+ c.Check(result.CommonPrefixes, check.HasLen, dirs+1)
+ for _, ent := range result.CommonPrefixes {
+ c.Check(*ent.Prefix, check.Matches, ".*%2F")
+ }
+}
+
// TestS3cmd checks compatibility with the s3cmd command line tool, if
// it's installed. As of Debian buster, s3cmd is only in backports, so
// `arvados-server install` doesn't install it, and this test skips if
buf, err := cmd.CombinedOutput()
c.Check(err, check.IsNil)
c.Check(string(buf), check.Matches, `.* 3 +s3://`+arvadostest.FooCollection+`/foo\n`)
+
+ // This tests whether s3cmd's path normalization agrees with
+ // keep-web's signature verification wrt chars like "|"
+ // (neither reserved nor unreserved) and "," (not normally
+ // percent-encoded in a path).
+ cmd = exec.Command("s3cmd", "--no-ssl", "--host="+s.testServer.Addr, "--host-bucket="+s.testServer.Addr, "--access_key="+arvadostest.ActiveTokenUUID, "--secret_key="+arvadostest.ActiveToken, "get", "s3://"+arvadostest.FooCollection+"/foo,;$[|]bar")
+ buf, err = cmd.CombinedOutput()
+ c.Check(err, check.NotNil)
+ c.Check(string(buf), check.Matches, `(?ms).*NoSuchKey.*\n`)
}
func (s *IntegrationSuite) TestS3BucketInHost(c *check.C) {
import (
"context"
+ "net"
"net/http"
"git.arvados.org/arvados.git/sdk/go/arvados"
- "git.arvados.org/arvados.git/sdk/go/ctxlog"
"git.arvados.org/arvados.git/sdk/go/httpserver"
"github.com/prometheus/client_golang/prometheus"
"github.com/sirupsen/logrus"
Config *Config
}
-func (srv *server) Start(logger *logrus.Logger) error {
+func (srv *server) Start(ctx context.Context, logger *logrus.Logger) error {
h := &handler{Config: srv.Config}
reg := prometheus.NewRegistry()
h.Config.Cache.registry = reg
- ctx := ctxlog.Context(context.Background(), logger)
- mh := httpserver.Instrument(reg, logger, httpserver.HandlerWithContext(ctx, httpserver.AddRequestIDs(httpserver.LogRequests(h))))
+ // Warning: when updating this to use Command() from
+ // lib/service, make sure to implement an exemption in
+ // httpserver.HandlerWithDeadline() so large file uploads are
+ // allowed to take longer than the usual API.RequestTimeout.
+ // See #13697.
+ mh := httpserver.Instrument(reg, logger, httpserver.AddRequestIDs(httpserver.LogRequests(h)))
h.MetricsAPI = mh.ServeAPI(h.Config.cluster.ManagementToken, http.NotFoundHandler())
srv.Handler = mh
+ srv.BaseContext = func(net.Listener) context.Context { return ctx }
var listen arvados.URL
for listen = range srv.Config.cluster.Services.WebDAV.InternalURLs {
break
import (
"bytes"
+ "context"
"crypto/md5"
"encoding/json"
"fmt"
// IntegrationSuite tests need an API server and a keep-web server
type IntegrationSuite struct {
testServer *server
+ ArvConfig *arvados.Config
}
func (s *IntegrationSuite) TestNoToken(c *check.C) {
c.Check(summaries["request_duration_seconds/get/404"].SampleCount, check.Equals, "1")
c.Check(summaries["time_to_status_seconds/get/404"].SampleCount, check.Equals, "1")
c.Check(counters["arvados_keepweb_collectioncache_requests//"].Value, check.Equals, int64(2))
- c.Check(counters["arvados_keepweb_collectioncache_api_calls//"].Value, check.Equals, int64(1))
+ c.Check(counters["arvados_keepweb_collectioncache_api_calls//"].Value, check.Equals, int64(2))
c.Check(counters["arvados_keepweb_collectioncache_hits//"].Value, check.Equals, int64(1))
c.Check(counters["arvados_keepweb_collectioncache_pdh_hits//"].Value, check.Equals, int64(1))
- c.Check(counters["arvados_keepweb_collectioncache_permission_hits//"].Value, check.Equals, int64(1))
c.Check(gauges["arvados_keepweb_collectioncache_cached_manifests//"].Value, check.Equals, float64(1))
// FooCollection's cached manifest size is 45 ("1f4b0....+45") plus one 51-byte blob signature
- c.Check(gauges["arvados_keepweb_collectioncache_cached_manifest_bytes//"].Value, check.Equals, float64(45+51))
+ c.Check(gauges["arvados_keepweb_sessions_cached_collection_bytes//"].Value, check.Equals, float64(45+51))
// If the Host header indicates a collection, /metrics.json
// refers to a file in the collection -- the metrics handler
}
func (s *IntegrationSuite) SetUpSuite(c *check.C) {
- arvadostest.StartAPI()
+ arvadostest.ResetDB(c)
arvadostest.StartKeep(2, true)
arv, err := arvadosclient.MakeArvadosClient()
func (s *IntegrationSuite) TearDownSuite(c *check.C) {
arvadostest.StopKeep(2)
- arvadostest.StopAPI()
}
func (s *IntegrationSuite) SetUpTest(c *check.C) {
ldr.Path = "-"
arvCfg, err := ldr.Load()
c.Check(err, check.IsNil)
- cfg := newConfig(arvCfg)
+ cfg := newConfig(ctxlog.TestLogger(c), arvCfg)
c.Assert(err, check.IsNil)
cfg.Client = arvados.Client{
APIHost: testAPIHost,
cfg.cluster.ManagementToken = arvadostest.ManagementToken
cfg.cluster.SystemRootToken = arvadostest.SystemRootToken
cfg.cluster.Users.AnonymousUserToken = arvadostest.AnonymousToken
+ s.ArvConfig = arvCfg
s.testServer = &server{Config: cfg}
- err = s.testServer.Start(ctxlog.TestLogger(c))
+ logger := ctxlog.TestLogger(c)
+ ctx := ctxlog.Context(context.Background(), logger)
+ err = s.testServer.Start(ctx, logger)
c.Assert(err, check.Equals, nil)
}
"net/url"
"git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"gopkg.in/check.v1"
)
func (s *UnitSuite) TestStatus(c *check.C) {
- h := handler{Config: newConfig(s.Config)}
+ h := handler{Config: newConfig(ctxlog.TestLogger(c), s.Config)}
u, _ := url.Parse("http://keep-web.example/status.json")
req := &http.Request{
Method: "GET",
package main
import (
+ "context"
"errors"
"flag"
"fmt"
"os/signal"
"regexp"
"strings"
- "sync"
"syscall"
"time"
+ "git.arvados.org/arvados.git/lib/cmd"
"git.arvados.org/arvados.git/lib/config"
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"git.arvados.org/arvados.git/sdk/go/health"
"git.arvados.org/arvados.git/sdk/go/httpserver"
"git.arvados.org/arvados.git/sdk/go/keepclient"
"github.com/coreos/go-systemd/daemon"
"github.com/ghodss/yaml"
"github.com/gorilla/mux"
- log "github.com/sirupsen/logrus"
+ lru "github.com/hashicorp/golang-lru"
+ "github.com/sirupsen/logrus"
)
var version = "dev"
const rfc3339NanoFixed = "2006-01-02T15:04:05.000000000Z07:00"
-func configure(logger log.FieldLogger, args []string) (*arvados.Cluster, error) {
- flags := flag.NewFlagSet(args[0], flag.ExitOnError)
+func configure(args []string) (*arvados.Cluster, logrus.FieldLogger, error) {
+ prog := args[0]
+ flags := flag.NewFlagSet(prog, flag.ContinueOnError)
dumpConfig := flags.Bool("dump-config", false, "write current configuration to stdout and exit")
getVersion := flags.Bool("version", false, "Print version information and exit.")
+ initLogger := logrus.New()
+ initLogger.Formatter = &logrus.JSONFormatter{
+ TimestampFormat: rfc3339NanoFixed,
+ }
+ var logger logrus.FieldLogger = initLogger
+
loader := config.NewLoader(os.Stdin, logger)
loader.SetupFlags(flags)
-
args = loader.MungeLegacyConfigArgs(logger, args[1:], "-legacy-keepproxy-config")
- flags.Parse(args)
- // Print version information if requested
- if *getVersion {
+ if ok, code := cmd.ParseFlags(flags, prog, args, "", os.Stderr); !ok {
+ os.Exit(code)
+ } else if *getVersion {
fmt.Printf("keepproxy %s\n", version)
- return nil, nil
+ return nil, logger, nil
}
cfg, err := loader.Load()
if err != nil {
- return nil, err
+ return nil, logger, err
}
cluster, err := cfg.GetCluster("")
if err != nil {
- return nil, err
+ return nil, logger, err
}
+ logger = ctxlog.New(os.Stderr, cluster.SystemLogs.Format, cluster.SystemLogs.LogLevel).WithFields(logrus.Fields{
+ "ClusterID": cluster.ClusterID,
+ "PID": os.Getpid(),
+ })
+
if *dumpConfig {
out, err := yaml.Marshal(cfg)
if err != nil {
- return nil, err
+ return nil, logger, err
}
if _, err := os.Stdout.Write(out); err != nil {
- return nil, err
+ return nil, logger, err
}
- return nil, nil
+ return nil, logger, nil
}
- return cluster, nil
+
+ return cluster, logger, nil
}
func main() {
- logger := log.New()
- logger.Formatter = &log.JSONFormatter{
- TimestampFormat: rfc3339NanoFixed,
- }
-
- cluster, err := configure(logger, os.Args)
+ cluster, logger, err := configure(os.Args)
if err != nil {
- log.Fatal(err)
+ logger.Fatal(err)
}
if cluster == nil {
return
}
- log.Printf("keepproxy %s started", version)
+ logger.Printf("keepproxy %s started", version)
if err := run(logger, cluster); err != nil {
- log.Fatal(err)
+ logger.Fatal(err)
}
- log.Println("shutting down")
+ logger.Println("shutting down")
}
-func run(logger log.FieldLogger, cluster *arvados.Cluster) error {
+func run(logger logrus.FieldLogger, cluster *arvados.Cluster) error {
client, err := arvados.NewClientFromConfig(cluster)
if err != nil {
return err
}
if cluster.SystemLogs.LogLevel == "debug" {
- keepclient.DebugPrintf = log.Printf
+ keepclient.DebugPrintf = logger.Printf
}
kc, err := keepclient.MakeKeepClient(arv)
if err != nil {
}
if _, err := daemon.SdNotify(false, "READY=1"); err != nil {
- log.Printf("Error notifying init daemon: %v", err)
+ logger.Printf("Error notifying init daemon: %v", err)
}
- log.Println("listening at", listener.Addr())
+ logger.Println("listening at", listener.Addr())
// Shut down the server gracefully (by closing the listener)
// if SIGTERM is received.
term := make(chan os.Signal, 1)
go func(sig <-chan os.Signal) {
s := <-sig
- log.Println("caught signal:", s)
+ logger.Println("caught signal:", s)
listener.Close()
}(term)
signal.Notify(term, syscall.SIGTERM)
signal.Notify(term, syscall.SIGINT)
// Start serving requests.
- router = MakeRESTRouter(kc, time.Duration(keepclient.DefaultProxyRequestTimeout), cluster.ManagementToken)
- return http.Serve(listener, httpserver.AddRequestIDs(httpserver.LogRequests(router)))
+ router, err = MakeRESTRouter(kc, time.Duration(keepclient.DefaultProxyRequestTimeout), cluster, logger)
+ if err != nil {
+ return err
+ }
+ server := http.Server{
+ Handler: httpserver.AddRequestIDs(httpserver.LogRequests(router)),
+ BaseContext: func(net.Listener) context.Context {
+ return ctxlog.Context(context.Background(), logger)
+ },
+ }
+ return server.Serve(listener)
+}
+
+type TokenCacheEntry struct {
+ expire int64
+ user *arvados.User
}
type APITokenCache struct {
- tokens map[string]int64
- lock sync.Mutex
+ tokens *lru.TwoQueueCache
expireTime int64
}
-// RememberToken caches the token and set an expire time. If we already have
-// an expire time on the token, it is not updated.
-func (cache *APITokenCache) RememberToken(token string) {
- cache.lock.Lock()
- defer cache.lock.Unlock()
-
+// RememberToken caches the token and sets an expire time. If the
+// token is already in the cache, it is not updated.
+func (cache *APITokenCache) RememberToken(token string, user *arvados.User) {
now := time.Now().Unix()
- if cache.tokens[token] == 0 {
- cache.tokens[token] = now + cache.expireTime
+ _, ok := cache.tokens.Get(token)
+ if !ok {
+ cache.tokens.Add(token, TokenCacheEntry{
+ expire: now + cache.expireTime,
+ user: user,
+ })
}
}
// RecallToken checks if the cached token is known and still believed to be
// valid.
-func (cache *APITokenCache) RecallToken(token string) bool {
- cache.lock.Lock()
- defer cache.lock.Unlock()
+func (cache *APITokenCache) RecallToken(token string) (bool, *arvados.User) {
+ val, ok := cache.tokens.Get(token)
+ if !ok {
+ return false, nil
+ }
+ cacheEntry := val.(TokenCacheEntry)
now := time.Now().Unix()
- if cache.tokens[token] == 0 {
- // Unknown token
- return false
- } else if now < cache.tokens[token] {
+ if now < cacheEntry.expire {
// Token is known and still valid
- return true
+ return true, cacheEntry.user
} else {
// Token is expired
- cache.tokens[token] = 0
- return false
+ cache.tokens.Remove(token)
+ return false, nil
}
}
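The patch swaps the hand-rolled map for hashicorp/golang-lru's TwoQueueCache while keeping the same Unix-seconds expiry rules. A minimal stdlib-only sketch of that expiry logic (illustrative only; the names here are hypothetical, and unlike the real 2Q cache this version does not bound its size):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// entry mirrors TokenCacheEntry: an absolute expiry plus cached user data.
type entry struct {
	expire int64
	user   string
}

// tokenCache is a simplified, size-unbounded stand-in for APITokenCache.
type tokenCache struct {
	mu         sync.Mutex
	tokens     map[string]entry
	expireTime int64 // TTL in seconds
}

// remember caches a token unless it is already present, matching
// RememberToken, which never refreshes an existing entry.
func (c *tokenCache) remember(tok, user string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if _, ok := c.tokens[tok]; !ok {
		c.tokens[tok] = entry{expire: time.Now().Unix() + c.expireTime, user: user}
	}
}

// recall returns the cached user if the token is present and unexpired,
// evicting an expired entry, matching RecallToken.
func (c *tokenCache) recall(tok string) (bool, string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	e, ok := c.tokens[tok]
	if !ok {
		return false, ""
	}
	if time.Now().Unix() < e.expire {
		return true, e.user
	}
	delete(c.tokens, tok)
	return false, ""
}

func main() {
	c := &tokenCache{tokens: map[string]entry{}, expireTime: 300}
	c.remember("read:abc", "alice")
	ok, user := c.recall("read:abc")
	fmt.Println(ok, user) // true alice
	ok, _ = c.recall("read:zzz")
	fmt.Println(ok) // false
}
```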
return req.RemoteAddr
}
-func CheckAuthorizationHeader(kc *keepclient.KeepClient, cache *APITokenCache, req *http.Request) (pass bool, tok string) {
+func (h *proxyHandler) CheckAuthorizationHeader(req *http.Request) (pass bool, tok string, user *arvados.User) {
parts := strings.SplitN(req.Header.Get("Authorization"), " ", 2)
if len(parts) < 2 || !(parts[0] == "OAuth2" || parts[0] == "Bearer") || len(parts[1]) == 0 {
- return false, ""
+ return false, "", nil
}
tok = parts[1]
op = "write"
}
- if cache.RecallToken(op + ":" + tok) {
+ if ok, user := h.APITokenCache.RecallToken(op + ":" + tok); ok {
// Valid in the cache, short circuit
- return true, tok
+ return true, tok, user
}
var err error
- arv := *kc.Arvados
+ arv := *h.KeepClient.Arvados
arv.ApiToken = tok
arv.RequestID = req.Header.Get("X-Request-Id")
- if op == "read" {
- err = arv.Call("HEAD", "keep_services", "", "accessible", nil, nil)
- } else {
- err = arv.Call("HEAD", "users", "", "current", nil, nil)
+ user = &arvados.User{}
+ userCurrentError := arv.Call("GET", "users", "", "current", nil, user)
+ err = userCurrentError
+ if err != nil && op == "read" {
+ apiError, ok := err.(arvadosclient.APIServerError)
+ if ok && apiError.HttpStatusCode == http.StatusForbidden {
+ // If it was a scoped "sharing" token it will
+ // return 403 instead of 401 for the current
+ // user check. If it is a download operation
+ // and they have permission to read the
+ // keep_services table, we can allow it.
+ err = arv.Call("HEAD", "keep_services", "", "accessible", nil, nil)
+ }
}
if err != nil {
- log.Printf("%s: CheckAuthorizationHeader error: %v", GetRemoteAddress(req), err)
- return false, ""
+ ctxlog.FromContext(req.Context()).Printf("%s: CheckAuthorizationHeader error: %v", GetRemoteAddress(req), err)
+ return false, "", nil
+ }
+
+ if userCurrentError == nil && user.IsAdmin {
+ // checking userCurrentError is probably redundant,
+ // IsAdmin would be false anyway. But can't hurt.
+ if op == "read" && !h.cluster.Collections.KeepproxyPermission.Admin.Download {
+ return false, "", nil
+ }
+ if op == "write" && !h.cluster.Collections.KeepproxyPermission.Admin.Upload {
+ return false, "", nil
+ }
+ } else {
+ if op == "read" && !h.cluster.Collections.KeepproxyPermission.User.Download {
+ return false, "", nil
+ }
+ if op == "write" && !h.cluster.Collections.KeepproxyPermission.User.Upload {
+ return false, "", nil
+ }
}
// Success! Update cache
- cache.RememberToken(op + ":" + tok)
+ h.APITokenCache.RememberToken(op+":"+tok, user)
- return true, tok
+ return true, tok, user
}
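The new permission checks above reduce to a small decision table: pick the Admin or User permission set from `Collections.KeepproxyPermission`, then consult the flag matching the operation. A self-contained sketch of that gating (the `perm`/`allowed` names are hypothetical; the real code reads `arvados.UploadDownloadRolePermissions` from cluster config):

```go
package main

import "fmt"

// perm mirrors arvados.UploadDownloadPermission (illustrative only).
type perm struct{ Upload, Download bool }

// allowed reproduces the keepproxy gating: select the admin or user
// permission set, then check the flag for the requested operation.
func allowed(isAdmin bool, op string, admin, user perm) bool {
	p := user
	if isAdmin {
		p = admin
	}
	if op == "read" {
		return p.Download
	}
	return p.Upload
}

func main() {
	admin := perm{Upload: true, Download: false}
	user := perm{Upload: true, Download: true}
	fmt.Println(allowed(true, "read", admin, user))  // false: admin downloads disabled
	fmt.Println(allowed(false, "read", admin, user)) // true
}
```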
// We need to make a private copy of the default http transport early
*APITokenCache
timeout time.Duration
transport *http.Transport
+ logger logrus.FieldLogger
+ cluster *arvados.Cluster
}
// MakeRESTRouter returns an http.Handler that passes GET and PUT
// requests to the appropriate handlers.
-func MakeRESTRouter(kc *keepclient.KeepClient, timeout time.Duration, mgmtToken string) http.Handler {
+func MakeRESTRouter(kc *keepclient.KeepClient, timeout time.Duration, cluster *arvados.Cluster, logger logrus.FieldLogger) (http.Handler, error) {
rest := mux.NewRouter()
transport := defaultTransport
transport.TLSClientConfig = arvadosclient.MakeTLSConfig(kc.Arvados.ApiInsecure)
transport.TLSHandshakeTimeout = keepclient.DefaultTLSHandshakeTimeout
+ cacheQ, err := lru.New2Q(500)
+ if err != nil {
+ return nil, fmt.Errorf("Error from lru.New2Q: %v", err)
+ }
+
h := &proxyHandler{
Handler: rest,
KeepClient: kc,
timeout: timeout,
transport: &transport,
APITokenCache: &APITokenCache{
- tokens: make(map[string]int64),
+ tokens: cacheQ,
expireTime: 300,
},
+ logger: logger,
+ cluster: cluster,
}
rest.HandleFunc(`/{locator:[0-9a-f]{32}\+.*}`, h.Get).Methods("GET", "HEAD")
rest.HandleFunc(`/`, h.Options).Methods("OPTIONS")
rest.Handle("/_health/{check}", &health.Handler{
- Token: mgmtToken,
+ Token: cluster.ManagementToken,
Prefix: "/_health/",
}).Methods("GET")
rest.NotFoundHandler = InvalidPathHandler{}
- return h
+ return h, nil
}
var errLoopDetected = errors.New("loop detected")
-func (*proxyHandler) checkLoop(resp http.ResponseWriter, req *http.Request) error {
+func (h *proxyHandler) checkLoop(resp http.ResponseWriter, req *http.Request) error {
if via := req.Header.Get("Via"); strings.Index(via, " "+viaAlias) >= 0 {
- log.Printf("proxy loop detected (request has Via: %q): perhaps keepproxy is misidentified by gateway config as an external client, or its keep_services record does not have service_type=proxy?", via)
+ h.logger.Printf("proxy loop detected (request has Via: %q): perhaps keepproxy is misidentified by gateway config as an external client, or its keep_services record does not have service_type=proxy?", via)
http.Error(resp, errLoopDetected.Error(), http.StatusInternalServerError)
return errLoopDetected
}
type InvalidPathHandler struct{}
func (InvalidPathHandler) ServeHTTP(resp http.ResponseWriter, req *http.Request) {
- log.Printf("%s: %s %s unroutable", GetRemoteAddress(req), req.Method, req.URL.Path)
+ ctxlog.FromContext(req.Context()).Printf("%s: %s %s unroutable", GetRemoteAddress(req), req.Method, req.URL.Path)
http.Error(resp, "Bad request", http.StatusBadRequest)
}
func (h *proxyHandler) Options(resp http.ResponseWriter, req *http.Request) {
- log.Printf("%s: %s %s", GetRemoteAddress(req), req.Method, req.URL.Path)
+ ctxlog.FromContext(req.Context()).Printf("%s: %s %s", GetRemoteAddress(req), req.Method, req.URL.Path)
SetCorsHeaders(resp)
}
-var errBadAuthorizationHeader = errors.New("Missing or invalid Authorization header")
+var errBadAuthorizationHeader = errors.New("Missing or invalid Authorization header, or method not allowed")
var errContentLengthMismatch = errors.New("Actual length != expected content length")
var errMethodNotSupported = errors.New("Method not supported")
var proxiedURI = "-"
defer func() {
- log.Println(GetRemoteAddress(req), req.Method, req.URL.Path, status, expectLength, responseLength, proxiedURI, err)
+ h.logger.Println(GetRemoteAddress(req), req.Method, req.URL.Path, status, expectLength, responseLength, proxiedURI, err)
if status != http.StatusOK {
http.Error(resp, err.Error(), status)
}
var pass bool
var tok string
- if pass, tok = CheckAuthorizationHeader(kc, h.APITokenCache, req); !pass {
+ var user *arvados.User
+ if pass, tok, user = h.CheckAuthorizationHeader(req); !pass {
status, err = http.StatusForbidden, errBadAuthorizationHeader
return
}
locator = removeHint.ReplaceAllString(locator, "$1")
+ if locator != "" {
+ parts := strings.SplitN(locator, "+", 3)
+ if len(parts) >= 2 {
+ logger := h.logger
+ if user != nil {
+ logger = logger.WithField("user_uuid", user.UUID).
+ WithField("user_full_name", user.FullName)
+ }
+ logger.WithField("locator", fmt.Sprintf("%s+%s", parts[0], parts[1])).Infof("Block download")
+ }
+ }
+
switch req.Method {
case "HEAD":
expectLength, proxiedURI, err = kc.Ask(locator)
}
if expectLength == -1 {
- log.Println("Warning:", GetRemoteAddress(req), req.Method, proxiedURI, "Content-Length not provided")
+ h.logger.Println("Warning:", GetRemoteAddress(req), req.Method, proxiedURI, "Content-Length not provided")
}
switch respErr := err.(type) {
var locatorOut string = "-"
defer func() {
- log.Println(GetRemoteAddress(req), req.Method, req.URL.Path, status, expectLength, kc.Want_replicas, wroteReplicas, locatorOut, err)
+ h.logger.Println(GetRemoteAddress(req), req.Method, req.URL.Path, status, expectLength, kc.Want_replicas, wroteReplicas, locatorOut, err)
if status != http.StatusOK {
http.Error(resp, err.Error(), status)
}
for _, sc := range strings.Split(req.Header.Get("X-Keep-Storage-Classes"), ",") {
scl = append(scl, strings.Trim(sc, " "))
}
- kc.StorageClasses = scl
+ kc.SetStorageClasses(scl)
}
_, err = fmt.Sscanf(req.Header.Get("Content-Length"), "%d", &expectLength)
var pass bool
var tok string
- if pass, tok = CheckAuthorizationHeader(kc, h.APITokenCache, req); !pass {
+ var user *arvados.User
+ if pass, tok, user = h.CheckAuthorizationHeader(req); !pass {
err = errBadAuthorizationHeader
status = http.StatusForbidden
return
kc.Arvados = &arvclient
// Check if the client specified the number of replicas
- if req.Header.Get("X-Keep-Desired-Replicas") != "" {
+ if desiredReplicas := req.Header.Get(keepclient.XKeepDesiredReplicas); desiredReplicas != "" {
var r int
- _, err := fmt.Sscanf(req.Header.Get(keepclient.XKeepDesiredReplicas), "%d", &r)
+ _, err := fmt.Sscanf(desiredReplicas, "%d", &r)
if err == nil {
kc.Want_replicas = r
}
locatorOut, wroteReplicas, err = kc.PutHR(locatorIn, req.Body, expectLength)
}
+ if locatorOut != "" {
+ parts := strings.SplitN(locatorOut, "+", 3)
+ if len(parts) >= 2 {
+ logger := h.logger
+ if user != nil {
+ logger = logger.WithField("user_uuid", user.UUID).
+ WithField("user_full_name", user.FullName)
+ }
+ logger.WithField("locator", fmt.Sprintf("%s+%s", parts[0], parts[1])).Infof("Block upload")
+ }
+ }
+
// Tell the client how many successful PUTs we accomplished
resp.Header().Set(keepclient.XKeepReplicasStored, fmt.Sprintf("%d", wroteReplicas))
switch err.(type) {
case nil:
status = http.StatusOK
+ if len(kc.StorageClasses) > 0 {
+ // A successful PUT request with storage classes means that all
+ // storage classes were fulfilled, so the client will get a
+ // confirmation via the X-Storage-Classes-Confirmed header.
+ hdr := ""
+ isFirst := true
+ for _, sc := range kc.StorageClasses {
+ if isFirst {
+ hdr = fmt.Sprintf("%s=%d", sc, wroteReplicas)
+ isFirst = false
+ } else {
+ hdr += fmt.Sprintf(", %s=%d", sc, wroteReplicas)
+ }
+ }
+ resp.Header().Set(keepclient.XKeepStorageClassesConfirmed, hdr)
+ }
_, err = io.WriteString(resp, locatorOut)
-
case keepclient.OversizeBlockError:
// Too much data
status = http.StatusRequestEntityTooLarge
-
case keepclient.InsufficientReplicasError:
- if wroteReplicas > 0 {
- // At least one write is considered success. The
- // client can decide if getting less than the number of
- // replications it asked for is a fatal error.
- status = http.StatusOK
- _, err = io.WriteString(resp, locatorOut)
- } else {
- status = http.StatusServiceUnavailable
- }
-
+ status = http.StatusServiceUnavailable
default:
status = http.StatusBadGateway
}
}()
kc := h.makeKeepClient(req)
- ok, token := CheckAuthorizationHeader(kc, h.APITokenCache, req)
+ ok, token, _ := h.CheckAuthorizationHeader(req)
if !ok {
status, err = http.StatusForbidden, errBadAuthorizationHeader
return
Documentation=https://doc.arvados.org/
After=network.target
-# systemd==229 (ubuntu:xenial) obeys StartLimitInterval in the [Unit] section
-StartLimitInterval=0
-
# systemd>=230 (debian:9) obeys StartLimitIntervalSec in the [Unit] section
StartLimitIntervalSec=0
import (
"bytes"
"crypto/md5"
- "errors"
"fmt"
"io/ioutil"
"math/rand"
"git.arvados.org/arvados.git/sdk/go/keepclient"
log "github.com/sirupsen/logrus"
+ "gopkg.in/check.v1"
. "gopkg.in/check.v1"
)
}
func (s *ServerRequiredSuite) SetUpSuite(c *C) {
- arvadostest.StartAPI()
arvadostest.StartKeep(2, false)
}
func (s *ServerRequiredSuite) TearDownSuite(c *C) {
arvadostest.StopKeep(2)
- arvadostest.StopAPI()
}
func (s *ServerRequiredConfigYmlSuite) SetUpSuite(c *C) {
- arvadostest.StartAPI()
// config.yml defines 4 keepstores
arvadostest.StartKeep(4, false)
}
func (s *ServerRequiredConfigYmlSuite) TearDownSuite(c *C) {
arvadostest.StopKeep(4)
- arvadostest.StopAPI()
}
func (s *NoKeepServerSuite) SetUpSuite(c *C) {
- arvadostest.StartAPI()
// We need API to have some keep services listed, but the
// services themselves should be unresponsive.
arvadostest.StartKeep(2, false)
arvadostest.ResetEnv()
}
-func (s *NoKeepServerSuite) TearDownSuite(c *C) {
- arvadostest.StopAPI()
-}
-
-func runProxy(c *C, bogusClientToken bool, loadKeepstoresFromConfig bool) *keepclient.KeepClient {
+func runProxy(c *C, bogusClientToken bool, loadKeepstoresFromConfig bool, kp *arvados.UploadDownloadRolePermissions) (*keepclient.KeepClient, *bytes.Buffer) {
cfg, err := config.NewLoader(nil, ctxlog.TestLogger(c)).Load()
c.Assert(err, Equals, nil)
cluster, err := cfg.GetCluster("")
cluster.Services.Keepproxy.InternalURLs = map[arvados.URL]arvados.ServiceInstance{{Host: ":0"}: {}}
+ if kp != nil {
+ cluster.Collections.KeepproxyPermission = *kp
+ }
+
listener = nil
+ logbuf := &bytes.Buffer{}
+ logger := log.New()
+ logger.Out = logbuf
go func() {
- run(log.New(), cluster)
+ run(logger, cluster)
defer closeListener()
}()
waitForListener()
kc.SetServiceRoots(sr, sr, sr)
kc.Arvados.External = true
- return kc
+ return kc, logbuf
}
func (s *ServerRequiredSuite) TestResponseViaHeader(c *C) {
- runProxy(c, false, false)
+ runProxy(c, false, false, nil)
defer closeListener()
req, err := http.NewRequest("POST",
}
func (s *ServerRequiredSuite) TestLoopDetection(c *C) {
- kc := runProxy(c, false, false)
+ kc, _ := runProxy(c, false, false, nil)
defer closeListener()
sr := map[string]string{
}
func (s *ServerRequiredSuite) TestStorageClassesHeader(c *C) {
- kc := runProxy(c, false, false)
+ kc, _ := runProxy(c, false, false, nil)
defer closeListener()
// Set up fake keepstore to record request headers
c.Check(hdr.Get("X-Keep-Storage-Classes"), Equals, "secure")
}
+func (s *ServerRequiredSuite) TestStorageClassesConfirmedHeader(c *C) {
+ runProxy(c, false, false, nil)
+ defer closeListener()
+
+ content := []byte("foo")
+ hash := fmt.Sprintf("%x", md5.Sum(content))
+ client := &http.Client{}
+
+ req, err := http.NewRequest("PUT",
+ fmt.Sprintf("http://%s/%s", listener.Addr().String(), hash),
+ bytes.NewReader(content))
+ c.Assert(err, IsNil)
+ req.Header.Set("X-Keep-Storage-Classes", "default")
+ req.Header.Set("Authorization", "OAuth2 "+arvadostest.ActiveToken)
+ req.Header.Set("Content-Type", "application/octet-stream")
+
+ resp, err := client.Do(req)
+ c.Assert(err, IsNil)
+ c.Assert(resp.StatusCode, Equals, http.StatusOK)
+ c.Assert(resp.Header.Get("X-Keep-Storage-Classes-Confirmed"), Equals, "default=2")
+}
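The "default=2" value asserted above follows the X-Keep-Storage-Classes-Confirmed format the handler builds with its `isFirst` loop: comma-separated `class=replicas` pairs. A sketch of composing the same value with `strings.Join` (the `confirmedHeader` helper name is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// confirmedHeader produces the same "class1=N, class2=N" value the
// handler assembles one piece at a time.
func confirmedHeader(classes []string, replicas int) string {
	parts := make([]string, 0, len(classes))
	for _, sc := range classes {
		parts = append(parts, fmt.Sprintf("%s=%d", sc, replicas))
	}
	return strings.Join(parts, ", ")
}

func main() {
	fmt.Println(confirmedHeader([]string{"default"}, 2))            // default=2
	fmt.Println(confirmedHeader([]string{"default", "archive"}, 2)) // default=2, archive=2
}
```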
+
func (s *ServerRequiredSuite) TestDesiredReplicas(c *C) {
- kc := runProxy(c, false, false)
+ kc, _ := runProxy(c, false, false, nil)
defer closeListener()
content := []byte("TestDesiredReplicas")
hash := fmt.Sprintf("%x", md5.Sum(content))
- for _, kc.Want_replicas = range []int{0, 1, 2} {
+ for _, kc.Want_replicas = range []int{0, 1, 2, 3} {
locator, rep, err := kc.PutB(content)
- c.Check(err, Equals, nil)
- c.Check(rep, Equals, kc.Want_replicas)
- if rep > 0 {
- c.Check(locator, Matches, fmt.Sprintf(`^%s\+%d(\+.+)?$`, hash, len(content)))
+ if kc.Want_replicas < 3 {
+ c.Check(err, Equals, nil)
+ c.Check(rep, Equals, kc.Want_replicas)
+ if rep > 0 {
+ c.Check(locator, Matches, fmt.Sprintf(`^%s\+%d(\+.+)?$`, hash, len(content)))
+ }
+ } else {
+ c.Check(err, ErrorMatches, ".*503.*")
}
}
}
func (s *ServerRequiredSuite) TestPutWrongContentLength(c *C) {
- kc := runProxy(c, false, false)
+ kc, _ := runProxy(c, false, false, nil)
defer closeListener()
content := []byte("TestPutWrongContentLength")
// fixes the invalid Content-Length header. In order to test
// our server behavior, we have to call the handler directly
// using an httptest.ResponseRecorder.
- rtr := MakeRESTRouter(kc, 10*time.Second, "")
+ rtr, err := MakeRESTRouter(kc, 10*time.Second, &arvados.Cluster{}, log.New())
+ c.Assert(err, check.IsNil)
type testcase struct {
sendLength string
}
func (s *ServerRequiredSuite) TestManyFailedPuts(c *C) {
- kc := runProxy(c, false, false)
+ kc, _ := runProxy(c, false, false, nil)
defer closeListener()
router.(*proxyHandler).timeout = time.Nanosecond
}
func (s *ServerRequiredSuite) TestPutAskGet(c *C) {
- kc := runProxy(c, false, false)
+ kc, logbuf := runProxy(c, false, false, nil)
defer closeListener()
hash := fmt.Sprintf("%x", md5.Sum([]byte("foo")))
c.Check(rep, Equals, 2)
c.Check(err, Equals, nil)
c.Log("Finished PutB (expected success)")
+
+ c.Check(logbuf.String(), Matches, `(?ms).*msg="Block upload" locator=acbd18db4cc2f85cedef654fccc4a4d8\+3 user_full_name="TestCase Administrator" user_uuid=zzzzz-tpzed-d9tiejq69daie8f.*`)
+ logbuf.Reset()
}
{
c.Assert(err, Equals, nil)
c.Check(blocklen, Equals, int64(3))
c.Log("Finished Ask (expected success)")
+ c.Check(logbuf.String(), Matches, `(?ms).*msg="Block download" locator=acbd18db4cc2f85cedef654fccc4a4d8\+3 user_full_name="TestCase Administrator" user_uuid=zzzzz-tpzed-d9tiejq69daie8f.*`)
+ logbuf.Reset()
}
{
c.Check(all, DeepEquals, []byte("foo"))
c.Check(blocklen, Equals, int64(3))
c.Log("Finished Get (expected success)")
+ c.Check(logbuf.String(), Matches, `(?ms).*msg="Block download" locator=acbd18db4cc2f85cedef654fccc4a4d8\+3 user_full_name="TestCase Administrator" user_uuid=zzzzz-tpzed-d9tiejq69daie8f.*`)
+ logbuf.Reset()
}
{
}
func (s *ServerRequiredSuite) TestPutAskGetForbidden(c *C) {
- kc := runProxy(c, true, false)
+ kc, _ := runProxy(c, true, false, nil)
defer closeListener()
hash := fmt.Sprintf("%x+3", md5.Sum([]byte("bar")))
hash2, rep, err := kc.PutB([]byte("bar"))
c.Check(hash2, Equals, "")
c.Check(rep, Equals, 0)
- c.Check(err, FitsTypeOf, keepclient.InsufficientReplicasError(errors.New("")))
+ c.Check(err, FitsTypeOf, keepclient.InsufficientReplicasError{})
blocklen, _, err := kc.Ask(hash)
c.Check(err, FitsTypeOf, &keepclient.ErrNotFound{})
- c.Check(err, ErrorMatches, ".*not found.*")
+ c.Check(err, ErrorMatches, ".*HTTP 403.*")
c.Check(blocklen, Equals, int64(0))
_, blocklen, _, err = kc.Get(hash)
c.Check(err, FitsTypeOf, &keepclient.ErrNotFound{})
- c.Check(err, ErrorMatches, ".*not found.*")
+ c.Check(err, ErrorMatches, ".*HTTP 403.*")
c.Check(blocklen, Equals, int64(0))
+}
+func testPermission(c *C, admin bool, perm arvados.UploadDownloadPermission) {
+ kp := arvados.UploadDownloadRolePermissions{}
+ if admin {
+ kp.Admin = perm
+ kp.User = arvados.UploadDownloadPermission{Upload: true, Download: true}
+ } else {
+ kp.Admin = arvados.UploadDownloadPermission{Upload: true, Download: true}
+ kp.User = perm
+ }
+
+ kc, logbuf := runProxy(c, false, false, &kp)
+ defer closeListener()
+ if admin {
+ kc.Arvados.ApiToken = arvadostest.AdminToken
+ } else {
+ kc.Arvados.ApiToken = arvadostest.ActiveToken
+ }
+
+ hash := fmt.Sprintf("%x", md5.Sum([]byte("foo")))
+ var hash2 string
+
+ {
+ var rep int
+ var err error
+ hash2, rep, err = kc.PutB([]byte("foo"))
+
+ if perm.Upload {
+ c.Check(hash2, Matches, fmt.Sprintf(`^%s\+3(\+.+)?$`, hash))
+ c.Check(rep, Equals, 2)
+ c.Check(err, Equals, nil)
+ c.Log("Finished PutB (expected success)")
+ if admin {
+ c.Check(logbuf.String(), Matches, `(?ms).*msg="Block upload" locator=acbd18db4cc2f85cedef654fccc4a4d8\+3 user_full_name="TestCase Administrator" user_uuid=zzzzz-tpzed-d9tiejq69daie8f.*`)
+ } else {
+ c.Check(logbuf.String(), Matches, `(?ms).*msg="Block upload" locator=acbd18db4cc2f85cedef654fccc4a4d8\+3 user_full_name="Active User" user_uuid=zzzzz-tpzed-xurymjxw79nv3jz.*`)
+ }
+ } else {
+ c.Check(hash2, Equals, "")
+ c.Check(rep, Equals, 0)
+ c.Check(err, FitsTypeOf, keepclient.InsufficientReplicasError{})
+ }
+ logbuf.Reset()
+ }
+ if perm.Upload {
+ // can't test download without upload.
+
+ reader, blocklen, _, err := kc.Get(hash2)
+ if perm.Download {
+ c.Assert(err, Equals, nil)
+ all, err := ioutil.ReadAll(reader)
+ c.Check(err, IsNil)
+ c.Check(all, DeepEquals, []byte("foo"))
+ c.Check(blocklen, Equals, int64(3))
+ c.Log("Finished Get (expected success)")
+ if admin {
+ c.Check(logbuf.String(), Matches, `(?ms).*msg="Block download" locator=acbd18db4cc2f85cedef654fccc4a4d8\+3 user_full_name="TestCase Administrator" user_uuid=zzzzz-tpzed-d9tiejq69daie8f.*`)
+ } else {
+ c.Check(logbuf.String(), Matches, `(?ms).*msg="Block download" locator=acbd18db4cc2f85cedef654fccc4a4d8\+3 user_full_name="Active User" user_uuid=zzzzz-tpzed-xurymjxw79nv3jz.*`)
+ }
+ } else {
+ c.Check(err, FitsTypeOf, &keepclient.ErrNotFound{})
+ c.Check(err, ErrorMatches, ".*Missing or invalid Authorization header, or method not allowed.*")
+ c.Check(blocklen, Equals, int64(0))
+ }
+ logbuf.Reset()
+ }
+
+}
+
+func (s *ServerRequiredSuite) TestPutGetPermission(c *C) {
+
+ for _, adminperm := range []bool{true, false} {
+ for _, userperm := range []bool{true, false} {
+
+ testPermission(c, true,
+ arvados.UploadDownloadPermission{
+ Upload: adminperm,
+ Download: true,
+ })
+ testPermission(c, true,
+ arvados.UploadDownloadPermission{
+ Upload: true,
+ Download: adminperm,
+ })
+		testPermission(c, false,
+			arvados.UploadDownloadPermission{
+				Upload:   userperm,
+				Download: true,
+			})
+ testPermission(c, false,
+ arvados.UploadDownloadPermission{
+ Upload: true,
+ Download: userperm,
+ })
+ }
+ }
}
func (s *ServerRequiredSuite) TestCorsHeaders(c *C) {
- runProxy(c, false, false)
+ runProxy(c, false, false, nil)
defer closeListener()
{
}
func (s *ServerRequiredSuite) TestPostWithoutHash(c *C) {
- runProxy(c, false, false)
+ runProxy(c, false, false, nil)
defer closeListener()
{
}
func getIndexWorker(c *C, useConfig bool) {
- kc := runProxy(c, false, useConfig)
+ kc, _ := runProxy(c, false, useConfig, nil)
defer closeListener()
// Put "index-data" blocks
}
func (s *ServerRequiredSuite) TestCollectionSharingToken(c *C) {
- kc := runProxy(c, false, false)
+ kc, _ := runProxy(c, false, false, nil)
defer closeListener()
hash, _, err := kc.PutB([]byte("shareddata"))
c.Check(err, IsNil)
}
func (s *ServerRequiredSuite) TestPutAskGetInvalidToken(c *C) {
- kc := runProxy(c, false, false)
+ kc, _ := runProxy(c, false, false, nil)
defer closeListener()
// Put a test block
_, _, _, err = kc.Get(hash)
c.Assert(err, FitsTypeOf, &keepclient.ErrNotFound{})
c.Check(err.(*keepclient.ErrNotFound).Temporary(), Equals, false)
- c.Check(err, ErrorMatches, ".*HTTP 403 \"Missing or invalid Authorization header\".*")
+ c.Check(err, ErrorMatches, ".*HTTP 403 \"Missing or invalid Authorization header, or method not allowed\".*")
}
_, _, err = kc.PutB([]byte("foo"))
- c.Check(err, ErrorMatches, ".*403.*Missing or invalid Authorization header")
+ c.Check(err, ErrorMatches, ".*403.*Missing or invalid Authorization header, or method not allowed")
}
}
func (s *ServerRequiredSuite) TestAskGetKeepProxyConnectionError(c *C) {
- kc := runProxy(c, false, false)
+ kc, _ := runProxy(c, false, false, nil)
defer closeListener()
// Point keepproxy at a non-existent keepstore
}
func (s *NoKeepServerSuite) TestAskGetNoKeepServerError(c *C) {
- kc := runProxy(c, false, false)
+ kc, _ := runProxy(c, false, false, nil)
defer closeListener()
hash := fmt.Sprintf("%x", md5.Sum([]byte("foo")))
}
func (s *ServerRequiredSuite) TestPing(c *C) {
- kc := runProxy(c, false, false)
+ kc, _ := runProxy(c, false, false, nil)
defer closeListener()
- rtr := MakeRESTRouter(kc, 10*time.Second, arvadostest.ManagementToken)
+ rtr, err := MakeRESTRouter(kc, 10*time.Second, &arvados.Cluster{ManagementToken: arvadostest.ManagementToken}, log.New())
+ c.Assert(err, check.IsNil)
req, err := http.NewRequest("GET",
"http://"+listener.Addr().String()+"/_health/ping",
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"bytes"
case strings.Contains(err.Error(), "Not Found"):
// "storage: service returned without a response body (404 Not Found)"
return os.ErrNotExist
+ case strings.Contains(err.Error(), "ErrorCode=BlobNotFound"):
+ // "storage: service returned error: StatusCode=404, ErrorCode=BlobNotFound, ErrorMessage=The specified blob does not exist.\n..."
+ return os.ErrNotExist
default:
return err
}
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"bytes"
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"sync"
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"context"
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"bytes"
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"bytes"
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"context"
"os"
"sync"
+ "git.arvados.org/arvados.git/lib/cmd"
"git.arvados.org/arvados.git/lib/config"
"git.arvados.org/arvados.git/lib/service"
"git.arvados.org/arvados.git/sdk/go/arvados"
)
var (
- version = "dev"
Command = service.Command(arvados.ServiceNameKeepstore, newHandlerOrErrorHandler)
)
-func main() {
- os.Exit(runCommand(os.Args[0], os.Args[1:], os.Stdin, os.Stdout, os.Stderr))
-}
-
func runCommand(prog string, args []string, stdin io.Reader, stdout, stderr io.Writer) int {
- args, ok := convertKeepstoreFlagsToServiceFlags(args, ctxlog.FromContext(context.Background()))
+ args, ok, code := convertKeepstoreFlagsToServiceFlags(prog, args, ctxlog.FromContext(context.Background()), stderr)
if !ok {
- return 2
+ return code
}
return Command.RunCommand(prog, args, stdin, stdout, stderr)
}
// Parse keepstore command line flags, and return equivalent
-// service.Command flags. The second return value ("ok") is true if
-// all provided flags were successfully converted.
-func convertKeepstoreFlagsToServiceFlags(args []string, lgr logrus.FieldLogger) ([]string, bool) {
+// service.Command flags. If the second return value ("ok") is false,
+// the program should exit, and the third return value is a suitable
+// exit code.
+func convertKeepstoreFlagsToServiceFlags(prog string, args []string, lgr logrus.FieldLogger, stderr io.Writer) ([]string, bool, int) {
flags := flag.NewFlagSet("", flag.ContinueOnError)
flags.String("listen", "", "Services.Keepstore.InternalURLs")
flags.Int("max-buffers", 0, "API.MaxKeepBlobBuffers")
flags.String("s3-bucket-volume", "", "Volumes.*.DriverParameters.Bucket")
flags.String("s3-region", "", "Volumes.*.DriverParameters.Region")
flags.String("s3-endpoint", "", "Volumes.*.DriverParameters.Endpoint")
- flags.String("s3-access-key-file", "", "Volumes.*.DriverParameters.AccessKey")
- flags.String("s3-secret-key-file", "", "Volumes.*.DriverParameters.SecretKey")
+ flags.String("s3-access-key-file", "", "Volumes.*.DriverParameters.AccessKeyID")
+ flags.String("s3-secret-key-file", "", "Volumes.*.DriverParameters.SecretAccessKey")
flags.String("s3-race-window", "", "Volumes.*.DriverParameters.RaceWindow")
flags.String("s3-replication", "", "Volumes.*.Replication")
flags.String("s3-unsafe-delete", "", "Volumes.*.DriverParameters.UnsafeDelete")
flags.String("config", "", "")
flags.String("legacy-keepstore-config", "", "")
- err := flags.Parse(args)
- if err == flag.ErrHelp {
- return []string{"-help"}, true
- } else if err != nil {
- return nil, false
+ if ok, code := cmd.ParseFlags(flags, prog, args, "", stderr); !ok {
+ return nil, false, code
}
args = nil
}
})
if !ok {
- return nil, false
+ return nil, false, 2
}
- flags = flag.NewFlagSet("", flag.ExitOnError)
+ flags = flag.NewFlagSet("", flag.ContinueOnError)
loader := config.NewLoader(nil, lgr)
loader.SetupFlags(flags)
- return loader.MungeLegacyConfigArgs(lgr, args, "-legacy-keepstore-config"), true
+ return loader.MungeLegacyConfigArgs(lgr, args, "-legacy-keepstore-config"), true, 0
}
type handler struct {
return errors.New("no volumes configured")
}
- h.Logger.Printf("keepstore %s starting, pid %d", version, os.Getpid())
+ h.Logger.Printf("keepstore %s starting, pid %d", cmd.Version.String(), os.Getpid())
// Start a round-robin VolumeManager with the configured volumes.
vm, err := makeRRVolumeManager(h.Logger, h.Cluster, serviceURL, newVolumeMetricsVecs(reg))
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"bytes"
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"io"
}
}
+func NewCountingReaderAtSeeker(r readerAtSeeker, f func(uint64)) *countingReaderAtSeeker {
+ return &countingReaderAtSeeker{readerAtSeeker: r, counter: f}
+}
+
type countingReadWriter struct {
reader io.Reader
writer io.Writer
}
return nil
}
+
+type readerAtSeeker interface {
+ io.ReadSeeker
+ io.ReaderAt
+}
+
+type countingReaderAtSeeker struct {
+ readerAtSeeker
+ counter func(uint64)
+}
+
+func (crw *countingReaderAtSeeker) Read(buf []byte) (int, error) {
+ n, err := crw.readerAtSeeker.Read(buf)
+ crw.counter(uint64(n))
+ return n, err
+}
+
+func (crw *countingReaderAtSeeker) ReadAt(buf []byte, off int64) (int, error) {
+ n, err := crw.readerAtSeeker.ReadAt(buf, off)
+ crw.counter(uint64(n))
+ return n, err
+}
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"gopkg.in/check.v1"
// The HTTP handlers are responsible for enforcing permission policy,
// so these tests must exercise all possible permission permutations.
-package main
+package keepstore
import (
"bytes"
"net/http"
"net/http/httptest"
"os"
- "regexp"
+ "sort"
"strings"
+ "sync/atomic"
"time"
"git.arvados.org/arvados.git/lib/config"
// A RequestTester represents the parameters for an HTTP request to
// be issued on behalf of a unit test.
type RequestTester struct {
- uri string
- apiToken string
- method string
- requestBody []byte
+ uri string
+ apiToken string
+ method string
+ requestBody []byte
+ storageClasses string
}
// Test GetBlockHandler on the following situations:
}
}
+func (s *HandlerSuite) TestReadsOrderedByStorageClassPriority(c *check.C) {
+ s.cluster.Volumes = map[string]arvados.Volume{
+ "zzzzz-nyw5e-111111111111111": {
+ Driver: "mock",
+ Replication: 1,
+ StorageClasses: map[string]bool{"class1": true}},
+ "zzzzz-nyw5e-222222222222222": {
+ Driver: "mock",
+ Replication: 1,
+ StorageClasses: map[string]bool{"class2": true, "class3": true}},
+ }
+
+ for _, trial := range []struct {
+ priority1 int // priority of class1, thus vol1
+ priority2 int // priority of class2
+ priority3 int // priority of class3 (vol2 priority will be max(priority2, priority3))
+ get1 int // expected number of "get" ops on vol1
+ get2 int // expected number of "get" ops on vol2
+ }{
+ {100, 50, 50, 1, 0}, // class1 has higher priority => try vol1 first, no need to try vol2
+ {100, 100, 100, 1, 0}, // same priority, vol1 is first lexicographically => try vol1 first and succeed
+ {66, 99, 33, 1, 1}, // class2 has higher priority => try vol2 first, then try vol1
+ {66, 33, 99, 1, 1}, // class3 has highest priority => vol2 has highest => try vol2 first, then try vol1
+ } {
+ c.Logf("%+v", trial)
+ s.cluster.StorageClasses = map[string]arvados.StorageClassConfig{
+ "class1": {Priority: trial.priority1},
+ "class2": {Priority: trial.priority2},
+ "class3": {Priority: trial.priority3},
+ }
+ c.Assert(s.handler.setup(context.Background(), s.cluster, "", prometheus.NewRegistry(), testServiceURL), check.IsNil)
+ IssueRequest(s.handler,
+ &RequestTester{
+ method: "PUT",
+ uri: "/" + TestHash,
+ requestBody: TestBlock,
+ storageClasses: "class1",
+ })
+ IssueRequest(s.handler,
+ &RequestTester{
+ method: "GET",
+ uri: "/" + TestHash,
+ })
+ c.Check(s.handler.volmgr.mountMap["zzzzz-nyw5e-111111111111111"].Volume.(*MockVolume).CallCount("Get"), check.Equals, trial.get1)
+ c.Check(s.handler.volmgr.mountMap["zzzzz-nyw5e-222222222222222"].Volume.(*MockVolume).CallCount("Get"), check.Equals, trial.get2)
+ }
+}
+
+func (s *HandlerSuite) TestPutWithNoWritableVolumes(c *check.C) {
+ s.cluster.Volumes = map[string]arvados.Volume{
+ "zzzzz-nyw5e-111111111111111": {
+ Driver: "mock",
+ Replication: 1,
+ ReadOnly: true,
+ StorageClasses: map[string]bool{"class1": true}},
+ }
+ c.Assert(s.handler.setup(context.Background(), s.cluster, "", prometheus.NewRegistry(), testServiceURL), check.IsNil)
+ resp := IssueRequest(s.handler,
+ &RequestTester{
+ method: "PUT",
+ uri: "/" + TestHash,
+ requestBody: TestBlock,
+ storageClasses: "class1",
+ })
+ c.Check(resp.Code, check.Equals, FullError.HTTPCode)
+ c.Check(s.handler.volmgr.mountMap["zzzzz-nyw5e-111111111111111"].Volume.(*MockVolume).CallCount("Put"), check.Equals, 0)
+}
+
+func (s *HandlerSuite) TestConcurrentWritesToMultipleStorageClasses(c *check.C) {
+ s.cluster.Volumes = map[string]arvados.Volume{
+ "zzzzz-nyw5e-111111111111111": {
+ Driver: "mock",
+ Replication: 1,
+ StorageClasses: map[string]bool{"class1": true}},
+ "zzzzz-nyw5e-121212121212121": {
+ Driver: "mock",
+ Replication: 1,
+ StorageClasses: map[string]bool{"class1": true, "class2": true}},
+ "zzzzz-nyw5e-222222222222222": {
+ Driver: "mock",
+ Replication: 1,
+ StorageClasses: map[string]bool{"class2": true}},
+ }
+
+ for _, trial := range []struct {
+ setCounter uint32 // value to stuff vm.counter, to control offset
+ classes string // desired classes
+ put111 int // expected number of "put" ops on 11111... after 2x put reqs
+ put121 int // expected number of "put" ops on 12121...
+ put222 int // expected number of "put" ops on 22222...
+ cmp111 int // expected number of "compare" ops on 11111... after 2x put reqs
+ cmp121 int // expected number of "compare" ops on 12121...
+ cmp222 int // expected number of "compare" ops on 22222...
+ }{
+ {0, "class1",
+ 1, 0, 0,
+			2, 1, 0}, // first put compares on all vols with class1; second put succeeds after checking 111
+ {0, "class2",
+ 0, 1, 0,
+ 0, 2, 1}, // first put compares on all vols with class2; second put succeeds after checking 121
+ {0, "class1,class2",
+ 1, 1, 0,
+ 2, 2, 1}, // first put compares on all vols; second put succeeds after checking 111 and 121
+ {1, "class1,class2",
+ 0, 1, 0, // vm.counter offset is 1 so the first volume attempted is 121
+ 2, 2, 1}, // first put compares on all vols; second put succeeds after checking 111 and 121
+ {0, "class1,class2,class404",
+ 1, 1, 0,
+ 2, 2, 1}, // first put compares on all vols; second put doesn't compare on 222 because it already satisfied class2 on 121
+ } {
+ c.Logf("%+v", trial)
+ s.cluster.StorageClasses = map[string]arvados.StorageClassConfig{
+ "class1": {},
+ "class2": {},
+ "class3": {},
+ }
+ c.Assert(s.handler.setup(context.Background(), s.cluster, "", prometheus.NewRegistry(), testServiceURL), check.IsNil)
+ atomic.StoreUint32(&s.handler.volmgr.counter, trial.setCounter)
+ for i := 0; i < 2; i++ {
+ IssueRequest(s.handler,
+ &RequestTester{
+ method: "PUT",
+ uri: "/" + TestHash,
+ requestBody: TestBlock,
+ storageClasses: trial.classes,
+ })
+ }
+ c.Check(s.handler.volmgr.mountMap["zzzzz-nyw5e-111111111111111"].Volume.(*MockVolume).CallCount("Put"), check.Equals, trial.put111)
+ c.Check(s.handler.volmgr.mountMap["zzzzz-nyw5e-121212121212121"].Volume.(*MockVolume).CallCount("Put"), check.Equals, trial.put121)
+ c.Check(s.handler.volmgr.mountMap["zzzzz-nyw5e-222222222222222"].Volume.(*MockVolume).CallCount("Put"), check.Equals, trial.put222)
+ c.Check(s.handler.volmgr.mountMap["zzzzz-nyw5e-111111111111111"].Volume.(*MockVolume).CallCount("Compare"), check.Equals, trial.cmp111)
+ c.Check(s.handler.volmgr.mountMap["zzzzz-nyw5e-121212121212121"].Volume.(*MockVolume).CallCount("Compare"), check.Equals, trial.cmp121)
+ c.Check(s.handler.volmgr.mountMap["zzzzz-nyw5e-222222222222222"].Volume.(*MockVolume).CallCount("Compare"), check.Equals, trial.cmp222)
+ }
+}
+
// Test TOUCH requests.
func (s *HandlerSuite) TestTouchHandler(c *check.C) {
c.Assert(s.handler.setup(context.Background(), s.cluster, "", prometheus.NewRegistry(), testServiceURL), check.IsNil)
expected := `^` + TestHash + `\+\d+ \d+\n` +
TestHash2 + `\+\d+ \d+\n\n$`
- match, _ := regexp.MatchString(expected, response.Body.String())
- if !match {
- c.Errorf(
- "permissions on, superuser request: expected %s, got:\n%s",
- expected, response.Body.String())
- }
+ c.Check(response.Body.String(), check.Matches, expected, check.Commentf(
+ "permissions on, superuser request"))
// superuser /index/prefix request
// => OK
response)
expected = `^` + TestHash + `\+\d+ \d+\n\n$`
- match, _ = regexp.MatchString(expected, response.Body.String())
- if !match {
- c.Errorf(
- "permissions on, superuser /index/prefix request: expected %s, got:\n%s",
- expected, response.Body.String())
- }
+ c.Check(response.Body.String(), check.Matches, expected, check.Commentf(
+ "permissions on, superuser /index/prefix request"))
// superuser /index/{no-such-prefix} request
// => OK
var testcases = []pullTest{
{
"Valid pull list from an ordinary user",
- RequestTester{"/pull", userToken, "PUT", goodJSON},
+ RequestTester{"/pull", userToken, "PUT", goodJSON, ""},
http.StatusUnauthorized,
"Unauthorized\n",
},
{
"Invalid pull request from an ordinary user",
- RequestTester{"/pull", userToken, "PUT", badJSON},
+ RequestTester{"/pull", userToken, "PUT", badJSON, ""},
http.StatusUnauthorized,
"Unauthorized\n",
},
{
"Valid pull request from the data manager",
- RequestTester{"/pull", s.cluster.SystemRootToken, "PUT", goodJSON},
+ RequestTester{"/pull", s.cluster.SystemRootToken, "PUT", goodJSON, ""},
http.StatusOK,
"Received 3 pull requests\n",
},
{
"Invalid pull request from the data manager",
- RequestTester{"/pull", s.cluster.SystemRootToken, "PUT", badJSON},
+ RequestTester{"/pull", s.cluster.SystemRootToken, "PUT", badJSON, ""},
http.StatusBadRequest,
"",
},
var testcases = []trashTest{
{
"Valid trash list from an ordinary user",
- RequestTester{"/trash", userToken, "PUT", goodJSON},
+ RequestTester{"/trash", userToken, "PUT", goodJSON, ""},
http.StatusUnauthorized,
"Unauthorized\n",
},
{
"Invalid trash list from an ordinary user",
- RequestTester{"/trash", userToken, "PUT", badJSON},
+ RequestTester{"/trash", userToken, "PUT", badJSON, ""},
http.StatusUnauthorized,
"Unauthorized\n",
},
{
"Valid trash list from the data manager",
- RequestTester{"/trash", s.cluster.SystemRootToken, "PUT", goodJSON},
+ RequestTester{"/trash", s.cluster.SystemRootToken, "PUT", goodJSON, ""},
http.StatusOK,
"Received 3 trash requests\n",
},
{
"Invalid trash list from the data manager",
- RequestTester{"/trash", s.cluster.SystemRootToken, "PUT", badJSON},
+ RequestTester{"/trash", s.cluster.SystemRootToken, "PUT", badJSON, ""},
http.StatusBadRequest,
"",
},
if rt.apiToken != "" {
req.Header.Set("Authorization", "OAuth2 "+rt.apiToken)
}
+ if rt.storageClasses != "" {
+ req.Header.Set("X-Keep-Storage-Classes", rt.storageClasses)
+ }
handler.ServeHTTP(response, req)
return response
}
}
}
-type notifyingResponseRecorder struct {
- *httptest.ResponseRecorder
- closer chan bool
-}
-
-func (r *notifyingResponseRecorder) CloseNotify() <-chan bool {
- return r.closer
-}
-
func (s *HandlerSuite) TestGetHandlerClientDisconnect(c *check.C) {
s.cluster.Collections.BlobSigning = false
c.Assert(s.handler.setup(context.Background(), s.cluster, "", prometheus.NewRegistry(), testServiceURL), check.IsNil)
bufs = newBufferPool(ctxlog.TestLogger(c), 1, BlockSize)
defer bufs.Put(bufs.Get(BlockSize))
- if err := s.handler.volmgr.AllWritable()[0].Put(context.Background(), TestHash, TestBlock); err != nil {
- c.Error(err)
- }
-
- resp := ¬ifyingResponseRecorder{
- ResponseRecorder: httptest.NewRecorder(),
- closer: make(chan bool, 1),
- }
- if _, ok := http.ResponseWriter(resp).(http.CloseNotifier); !ok {
- c.Fatal("notifyingResponseRecorder is broken")
- }
- // If anyone asks, the client has disconnected.
- resp.closer <- true
+ err := s.handler.volmgr.AllWritable()[0].Put(context.Background(), TestHash, TestBlock)
+ c.Assert(err, check.IsNil)
+ resp := httptest.NewRecorder()
ok := make(chan struct{})
go func() {
- req, _ := http.NewRequest("GET", fmt.Sprintf("/%s+%d", TestHash, len(TestBlock)), nil)
+ ctx, cancel := context.WithCancel(context.Background())
+ req, _ := http.NewRequestWithContext(ctx, "GET", fmt.Sprintf("/%s+%d", TestHash, len(TestBlock)), nil)
+ cancel()
s.handler.ServeHTTP(resp, req)
ok <- struct{}{}
}()
case <-ok:
}
- ExpectStatusCode(c, "client disconnect", http.StatusServiceUnavailable, resp.ResponseRecorder)
+ ExpectStatusCode(c, "client disconnect", http.StatusServiceUnavailable, resp)
for i, v := range s.handler.volmgr.AllWritable() {
if calls := v.Volume.(*MockVolume).called["GET"]; calls != 0 {
c.Errorf("volume %d got %d calls, expected 0", i, calls)
}
}
-func (s *HandlerSuite) TestPutReplicationHeader(c *check.C) {
+func (s *HandlerSuite) TestPutStorageClasses(c *check.C) {
+ s.cluster.Volumes = map[string]arvados.Volume{
+ "zzzzz-nyw5e-000000000000000": {Replication: 1, Driver: "mock"}, // "default" is implicit
+ "zzzzz-nyw5e-111111111111111": {Replication: 1, Driver: "mock", StorageClasses: map[string]bool{"special": true, "extra": true}},
+ "zzzzz-nyw5e-222222222222222": {Replication: 1, Driver: "mock", StorageClasses: map[string]bool{"readonly": true}, ReadOnly: true},
+ }
+ c.Assert(s.handler.setup(context.Background(), s.cluster, "", prometheus.NewRegistry(), testServiceURL), check.IsNil)
+ rt := RequestTester{
+ method: "PUT",
+ uri: "/" + TestHash,
+ requestBody: TestBlock,
+ }
+
+ for _, trial := range []struct {
+ ask string
+ expect string
+ }{
+ {"", ""},
+ {"default", "default=1"},
+ {" , default , default , ", "default=1"},
+ {"special", "extra=1, special=1"},
+ {"special, readonly", "extra=1, special=1"},
+ {"special, nonexistent", "extra=1, special=1"},
+ {"extra, special", "extra=1, special=1"},
+ {"default, special", "default=1, extra=1, special=1"},
+ } {
+ c.Logf("success case %#v", trial)
+ rt.storageClasses = trial.ask
+ resp := IssueRequest(s.handler, &rt)
+ if trial.expect == "" {
+ // any non-empty value is correct
+ c.Check(resp.Header().Get("X-Keep-Storage-Classes-Confirmed"), check.Not(check.Equals), "")
+ } else {
+ c.Check(sortCommaSeparated(resp.Header().Get("X-Keep-Storage-Classes-Confirmed")), check.Equals, trial.expect)
+ }
+ }
+
+ for _, trial := range []struct {
+ ask string
+ }{
+ {"doesnotexist"},
+ {"doesnotexist, readonly"},
+ {"readonly"},
+ } {
+ c.Logf("failure case %#v", trial)
+ rt.storageClasses = trial.ask
+ resp := IssueRequest(s.handler, &rt)
+ c.Check(resp.Code, check.Equals, http.StatusServiceUnavailable)
+ }
+}
+
+func sortCommaSeparated(s string) string {
+ slice := strings.Split(s, ", ")
+ sort.Strings(slice)
+ return strings.Join(slice, ", ")
+}
+
+func (s *HandlerSuite) TestPutResponseHeader(c *check.C) {
c.Assert(s.handler.setup(context.Background(), s.cluster, "", prometheus.NewRegistry(), testServiceURL), check.IsNil)
resp := IssueRequest(s.handler, &RequestTester{
uri: "/" + TestHash,
requestBody: TestBlock,
})
- if r := resp.Header().Get("X-Keep-Replicas-Stored"); r != "1" {
- c.Logf("%#v", resp)
- c.Errorf("Got X-Keep-Replicas-Stored: %q, expected %q", r, "1")
- }
+ c.Logf("%#v", resp)
+ c.Check(resp.Header().Get("X-Keep-Replicas-Stored"), check.Equals, "1")
+ c.Check(resp.Header().Get("X-Keep-Storage-Classes-Confirmed"), check.Equals, "default=1")
}
func (s *HandlerSuite) TestUntrashHandler(c *check.C) {
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"container/list"
"strconv"
"strings"
"sync"
+ "sync/atomic"
"time"
+ "git.arvados.org/arvados.git/lib/cmd"
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/ctxlog"
"git.arvados.org/arvados.git/sdk/go/health"
}
func (rtr *router) handleGET(resp http.ResponseWriter, req *http.Request) {
- ctx, cancel := contextForResponse(context.TODO(), resp)
- defer cancel()
-
locator := req.URL.Path[1:]
if strings.Contains(locator, "+R") && !strings.Contains(locator, "+A") {
- rtr.remoteProxy.Get(ctx, resp, req, rtr.cluster, rtr.volmgr)
+ rtr.remoteProxy.Get(req.Context(), resp, req, rtr.cluster, rtr.volmgr)
return
}
// isn't here, we can return 404 now instead of waiting for a
// buffer.
- buf, err := getBufferWithContext(ctx, bufs, BlockSize)
+ buf, err := getBufferWithContext(req.Context(), bufs, BlockSize)
if err != nil {
http.Error(resp, err.Error(), http.StatusServiceUnavailable)
return
}
defer bufs.Put(buf)
- size, err := GetBlock(ctx, rtr.volmgr, mux.Vars(req)["hash"], buf, resp)
+ size, err := GetBlock(req.Context(), rtr.volmgr, mux.Vars(req)["hash"], buf, resp)
if err != nil {
code := http.StatusInternalServerError
if err, ok := err.(*KeepError); ok {
resp.Write(buf[:size])
}
-// Return a new context that gets cancelled by resp's CloseNotifier.
-func contextForResponse(parent context.Context, resp http.ResponseWriter) (context.Context, context.CancelFunc) {
- ctx, cancel := context.WithCancel(parent)
- if cn, ok := resp.(http.CloseNotifier); ok {
- go func(c <-chan bool) {
- select {
- case <-c:
- cancel()
- case <-ctx.Done():
- }
- }(cn.CloseNotify())
- }
- return ctx, cancel
-}
-
// Get a buffer from the pool -- but give up and return a non-nil
// error if ctx ends before we get a buffer.
func getBufferWithContext(ctx context.Context, bufs *bufferPool, bufSize int) ([]byte, error) {
}
func (rtr *router) handlePUT(resp http.ResponseWriter, req *http.Request) {
- ctx, cancel := contextForResponse(context.TODO(), resp)
- defer cancel()
-
hash := mux.Vars(req)["hash"]
// Detect as many error conditions as possible before reading
return
}
- buf, err := getBufferWithContext(ctx, bufs, int(req.ContentLength))
+ var wantStorageClasses []string
+ if hdr := req.Header.Get("X-Keep-Storage-Classes"); hdr != "" {
+ wantStorageClasses = strings.Split(hdr, ",")
+ for i, sc := range wantStorageClasses {
+ wantStorageClasses[i] = strings.TrimSpace(sc)
+ }
+ } else {
+ // none specified -- use configured default
+ for class, cfg := range rtr.cluster.StorageClasses {
+ if cfg.Default {
+ wantStorageClasses = append(wantStorageClasses, class)
+ }
+ }
+ }
+
+ buf, err := getBufferWithContext(req.Context(), bufs, int(req.ContentLength))
if err != nil {
http.Error(resp, err.Error(), http.StatusServiceUnavailable)
return
return
}
- replication, err := PutBlock(ctx, rtr.volmgr, buf, hash)
+ result, err := PutBlock(req.Context(), rtr.volmgr, buf, hash, wantStorageClasses)
bufs.Put(buf)
if err != nil {
expiry := time.Now().Add(rtr.cluster.Collections.BlobSigningTTL.Duration())
returnHash = SignLocator(rtr.cluster, returnHash, apiToken, expiry)
}
- resp.Header().Set("X-Keep-Replicas-Stored", strconv.Itoa(replication))
+ resp.Header().Set("X-Keep-Replicas-Stored", result.TotalReplication())
+ resp.Header().Set("X-Keep-Storage-Classes-Confirmed", result.ClassReplication())
resp.Write([]byte(returnHash + "\n"))
}
// populate the given NodeStatus struct with current values.
func (rtr *router) readNodeStatus(st *NodeStatus) {
- st.Version = version
+ st.Version = strings.SplitN(cmd.Version.String(), " ", 2)[0]
vols := rtr.volmgr.AllReadable()
if cap(st.Volumes) < len(vols) {
st.Volumes = make([]*volumeStatusEnt, len(vols))
if filehash != hash {
// TODO: Try harder to tell a sysadmin about
// this.
- log.Error("checksum mismatch for block %s (actual %s) on %s", hash, filehash, vol)
+ log.Errorf("checksum mismatch for block %s (actual %s), size %d on %s", hash, filehash, size, vol)
errorToCaller = DiskHashError
continue
}
return 0, errorToCaller
}
-// PutBlock Stores the BLOCK (identified by the content id HASH) in Keep.
-//
-// PutBlock(ctx, block, hash)
-// Stores the BLOCK (identified by the content id HASH) in Keep.
-//
-// The MD5 checksum of the block must be identical to the content id HASH.
-// If not, an error is returned.
+type putProgress struct {
+ classNeeded map[string]bool
+ classTodo map[string]bool
+ mountUsed map[*VolumeMount]bool
+ totalReplication int
+ classDone map[string]int
+}
+
+// Number of distinct replicas stored. "2" can mean the block was
+// stored on 2 different volumes with replication 1, or on 1 volume
+// with replication 2.
+func (pr putProgress) TotalReplication() string {
+ return strconv.Itoa(pr.totalReplication)
+}
+
+// Number of replicas satisfying each storage class, formatted like
+// "default=2, special=1".
+func (pr putProgress) ClassReplication() string {
+ s := ""
+ for k, v := range pr.classDone {
+ if len(s) > 0 {
+ s += ", "
+ }
+ s += k + "=" + strconv.Itoa(v)
+ }
+ return s
+}
+
+func (pr *putProgress) Add(mnt *VolumeMount) {
+ if pr.mountUsed[mnt] {
+ logrus.Warnf("BUG? superfluous extra write to mount %s", mnt.UUID)
+ return
+ }
+ pr.mountUsed[mnt] = true
+ pr.totalReplication += mnt.Replication
+ for class := range mnt.StorageClasses {
+ pr.classDone[class] += mnt.Replication
+ delete(pr.classTodo, class)
+ }
+}
+
+func (pr *putProgress) Sub(mnt *VolumeMount) {
+ if !pr.mountUsed[mnt] {
+ logrus.Warnf("BUG? Sub called with no prior matching Add: %s", mnt.UUID)
+ return
+ }
+ pr.mountUsed[mnt] = false
+ pr.totalReplication -= mnt.Replication
+ for class := range mnt.StorageClasses {
+ pr.classDone[class] -= mnt.Replication
+ if pr.classNeeded[class] {
+ pr.classTodo[class] = true
+ }
+ }
+}
+
+func (pr *putProgress) Done() bool {
+ return len(pr.classTodo) == 0 && pr.totalReplication > 0
+}
+
+func (pr *putProgress) Want(mnt *VolumeMount) bool {
+ if pr.Done() || pr.mountUsed[mnt] {
+ return false
+ }
+ if len(pr.classTodo) == 0 {
+ // none specified == "any"
+ return true
+ }
+ for class := range mnt.StorageClasses {
+ if pr.classTodo[class] {
+ return true
+ }
+ }
+ return false
+}
+
+func (pr *putProgress) Copy() *putProgress {
+ cp := putProgress{
+ classNeeded: pr.classNeeded,
+ classTodo: make(map[string]bool, len(pr.classTodo)),
+ classDone: make(map[string]int, len(pr.classDone)),
+ mountUsed: make(map[*VolumeMount]bool, len(pr.mountUsed)),
+ totalReplication: pr.totalReplication,
+ }
+ for k, v := range pr.classTodo {
+ cp.classTodo[k] = v
+ }
+ for k, v := range pr.classDone {
+ cp.classDone[k] = v
+ }
+ for k, v := range pr.mountUsed {
+ cp.mountUsed[k] = v
+ }
+ return &cp
+}
+
+// newPutProgress returns a putProgress tracking the given storage
+// classes; empty class names are ignored.
+func newPutProgress(classes []string) putProgress {
+ pr := putProgress{
+ classNeeded: make(map[string]bool, len(classes)),
+ classTodo: make(map[string]bool, len(classes)),
+ classDone: map[string]int{},
+ mountUsed: map[*VolumeMount]bool{},
+ }
+ for _, c := range classes {
+ if c != "" {
+ pr.classNeeded[c] = true
+ pr.classTodo[c] = true
+ }
+ }
+ return pr
+}
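The putProgress bookkeeping introduced above can be illustrated in isolation. The sketch below uses hypothetical names (`classProgress`, `add`, `done`) and deliberately ignores mounts and deduplication; it shows only the core idea: the todo set empties as classes are satisfied, and completion requires both an empty todo set and at least one stored replica.

```go
package main

import "fmt"

// classProgress is a minimal standalone sketch (hypothetical names) of
// the putProgress bookkeeping: classTodo tracks storage classes still
// waiting for a replica, classDone counts replicas per class.
type classProgress struct {
	classTodo map[string]bool
	classDone map[string]int
	total     int
}

func newClassProgress(classes []string) *classProgress {
	p := &classProgress{classTodo: map[string]bool{}, classDone: map[string]int{}}
	for _, c := range classes {
		if c != "" {
			p.classTodo[c] = true
		}
	}
	return p
}

// add records a successful write to a volume with the given
// replication level and storage classes.
func (p *classProgress) add(replication int, classes []string) {
	p.total += replication
	for _, c := range classes {
		p.classDone[c] += replication
		delete(p.classTodo, c)
	}
}

// done reports whether every requested class has at least one replica.
func (p *classProgress) done() bool {
	return len(p.classTodo) == 0 && p.total > 0
}

func main() {
	p := newClassProgress([]string{"default", "special"})
	p.add(2, []string{"default"})
	fmt.Println(p.done()) // false: still waiting for "special"
	p.add(1, []string{"special"})
	fmt.Println(p.done()) // true
}
```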
+
+// PutBlock stores the given block on one or more volumes.
//
-// PutBlock stores the BLOCK on the first Keep volume with free space.
-// A failure code is returned to the user only if all volumes fail.
+// The MD5 checksum of the block must match the given hash.
//
-// On success, PutBlock returns nil.
-// On failure, it returns a KeepError with one of the following codes:
+// The block is written to each writable volume (ordered by priority
+// and then UUID, see volume.go) until at least one replica has been
+// stored in each of the requested storage classes.
//
-// 500 Collision
-// A different block with the same hash already exists on this
-// Keep server.
-// 422 MD5Fail
-// The MD5 hash of the BLOCK does not match the argument HASH.
-// 503 Full
-// There was not enough space left in any Keep volume to store
-// the object.
-// 500 Fail
-// The object could not be stored for some other reason (e.g.
-// all writes failed). The text of the error message should
-// provide as much detail as possible.
+// The returned error, if any, is a KeepError with one of the
+// following codes:
//
-func PutBlock(ctx context.Context, volmgr *RRVolumeManager, block []byte, hash string) (int, error) {
+// 500 Collision
+// A different block with the same hash already exists on this
+// Keep server.
+// 422 MD5Fail
+// The MD5 hash of the BLOCK does not match the argument HASH.
+// 503 Full
+// There was not enough space left in any Keep volume to store
+// the object.
+// 500 Fail
+// The object could not be stored for some other reason (e.g.
+// all writes failed). The text of the error message should
+// provide as much detail as possible.
+func PutBlock(ctx context.Context, volmgr *RRVolumeManager, block []byte, hash string, wantStorageClasses []string) (putProgress, error) {
log := ctxlog.FromContext(ctx)
// Check that BLOCK's checksum matches HASH.
blockhash := fmt.Sprintf("%x", md5.Sum(block))
if blockhash != hash {
log.Printf("%s: MD5 checksum %s did not match request", hash, blockhash)
- return 0, RequestHashError
+ return putProgress{}, RequestHashError
}
+ result := newPutProgress(wantStorageClasses)
+
// If we already have this data, it's intact on disk, and we
// can update its timestamp, return success. If we have
// different data with the same hash, return failure.
- if n, err := CompareAndTouch(ctx, volmgr, hash, block); err == nil || err == CollisionError {
- return n, err
- } else if ctx.Err() != nil {
- return 0, ErrClientDisconnect
- }
-
- // Choose a Keep volume to write to.
- // If this volume fails, try all of the volumes in order.
- if mnt := volmgr.NextWritable(); mnt != nil {
- if err := mnt.Put(ctx, hash, block); err != nil {
- log.WithError(err).Errorf("%s: Put(%s) failed", mnt.Volume, hash)
- } else {
- return mnt.Replication, nil // success!
- }
+ if err := CompareAndTouch(ctx, volmgr, hash, block, &result); err != nil || result.Done() {
+ return result, err
}
if ctx.Err() != nil {
- return 0, ErrClientDisconnect
+ return result, ErrClientDisconnect
}
- writables := volmgr.AllWritable()
+ writables := volmgr.NextWritable()
if len(writables) == 0 {
log.Error("no writable volumes")
- return 0, FullError
+ return result, FullError
}
- allFull := true
- for _, vol := range writables {
- err := vol.Put(ctx, hash, block)
- if ctx.Err() != nil {
- return 0, ErrClientDisconnect
+ var wg sync.WaitGroup
+ var mtx sync.Mutex
+ cond := sync.Cond{L: &mtx}
+ // pending predicts what result will be if all pending writes
+ // succeed.
+ pending := result.Copy()
+ var allFull atomic.Value
+ allFull.Store(true)
+
+ // We hold the lock for the duration of the "each volume" loop
+ // below, except when it is released during cond.Wait().
+ mtx.Lock()
+
+ for _, mnt := range writables {
+ // Wait until our decision to use this mount does not
+ // depend on the outcome of pending writes.
+ for result.Want(mnt) && !pending.Want(mnt) {
+ cond.Wait()
}
- switch err {
- case nil:
- return vol.Replication, nil // success!
- case FullError:
+ if !result.Want(mnt) {
continue
- default:
- // The volume is not full but the
- // write did not succeed. Report the
- // error and continue trying.
- allFull = false
- log.WithError(err).Errorf("%s: Put(%s) failed", vol, hash)
}
+ mnt := mnt
+ pending.Add(mnt)
+ wg.Add(1)
+ go func() {
+ log.Debugf("PutBlock: start write to %s", mnt.UUID)
+ defer wg.Done()
+ err := mnt.Put(ctx, hash, block)
+
+ mtx.Lock()
+ if err != nil {
+ log.Debugf("PutBlock: write to %s failed", mnt.UUID)
+ pending.Sub(mnt)
+ } else {
+ log.Debugf("PutBlock: write to %s succeeded", mnt.UUID)
+ result.Add(mnt)
+ }
+ cond.Broadcast()
+ mtx.Unlock()
+
+ if err != nil && err != FullError && ctx.Err() == nil {
+ // The volume is not full but the
+ // write did not succeed. Report the
+ // error and continue trying.
+ allFull.Store(false)
+ log.WithError(err).Errorf("%s: Put(%s) failed", mnt.Volume, hash)
+ }
+ }()
+ }
+ mtx.Unlock()
+ wg.Wait()
+ if ctx.Err() != nil {
+ return result, ErrClientDisconnect
+ }
+ if result.Done() {
+ return result, nil
}
- if allFull {
- log.Error("all volumes are full")
- return 0, FullError
+ if result.totalReplication > 0 {
+ // Some, but not all, of the storage classes were
+ // satisfied. This qualifies as success.
+ return result, nil
+ } else if allFull.Load().(bool) {
+ log.Error("all volumes with qualifying storage classes are full")
+ return putProgress{}, FullError
+ } else {
+ // Already logged the non-full errors.
+ return putProgress{}, GenericError
}
- // Already logged the non-full errors.
- return 0, GenericError
}
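The wait loop in PutBlock above starts a write only once that decision no longer depends on writes still in flight: if the pending writes could still satisfy the goal, it blocks on the condition variable until one of them resolves. A minimal standalone sketch of that coordination pattern, with a hypothetical name (`writeUntilReplicated`) and simplified state (success/failure counts instead of storage classes):

```go
package main

import (
	"fmt"
	"sync"
)

// writeUntilReplicated sketches the PutBlock coordination pattern:
// before starting each write, wait until its necessity no longer
// depends on the outcome of writes still in flight.
func writeUntilReplicated(outcomes []bool, want int) int {
	var (
		mtx     sync.Mutex
		cond    = sync.Cond{L: &mtx}
		pending = 0 // writes in flight
		done    = 0 // writes that succeeded
		wg      sync.WaitGroup
	)
	mtx.Lock()
	for _, ok := range outcomes {
		// Wait while the goal is unmet but pending writes might
		// still meet it (mirrors result.Want && !pending.Want).
		for done < want && done+pending >= want {
			cond.Wait()
		}
		if done >= want {
			break
		}
		pending++
		wg.Add(1)
		ok := ok
		go func() {
			defer wg.Done()
			mtx.Lock()
			if ok {
				done++
			}
			pending--
			cond.Broadcast()
			mtx.Unlock()
		}()
	}
	mtx.Unlock()
	wg.Wait()
	return done
}

func main() {
	// One failing write followed by successes: still reaches 2 replicas
	// without starting more writes than needed.
	fmt.Println(writeUntilReplicated([]bool{false, true, true, true}, 2)) // 2
}
```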
-// CompareAndTouch returns the current replication level if one of the
-// volumes already has the given content and it successfully updates
-// the relevant block's modification time in order to protect it from
-// premature garbage collection. Otherwise, it returns a non-nil
-// error.
-func CompareAndTouch(ctx context.Context, volmgr *RRVolumeManager, hash string, buf []byte) (int, error) {
+// CompareAndTouch looks for volumes where the given content already
+// exists and its modification time can be updated (i.e., it is
+// protected from garbage collection), and updates result accordingly.
+// It returns when the result is Done() or all volumes have been
+// checked.
+func CompareAndTouch(ctx context.Context, volmgr *RRVolumeManager, hash string, buf []byte, result *putProgress) error {
log := ctxlog.FromContext(ctx)
- var bestErr error = NotFoundError
for _, mnt := range volmgr.AllWritable() {
+ if !result.Want(mnt) {
+ continue
+ }
err := mnt.Compare(ctx, hash, buf)
if ctx.Err() != nil {
- return 0, ctx.Err()
+ return nil
} else if err == CollisionError {
// Stop if we have a block with same hash but
// different content. (It will be impossible
// to tell which one is wanted if we have
// both, so there's no point writing it even
// on a different volume.)
- log.Error("collision in Compare(%s) on volume %s", hash, mnt.Volume)
- return 0, err
+ log.Errorf("collision in Compare(%s) on volume %s", hash, mnt.Volume)
+ return CollisionError
} else if os.IsNotExist(err) {
// Block does not exist. This is the only
// "normal" error: we don't log anything.
}
if err := mnt.Touch(hash); err != nil {
log.WithError(err).Errorf("error in Touch(%s) on volume %s", hash, mnt.Volume)
- bestErr = err
continue
}
// Compare and Touch both worked --> done.
- return mnt.Replication, nil
+ result.Add(mnt)
+ if result.Done() {
+ return nil
+ }
}
- return 0, bestErr
+ return nil
}
var validLocatorRe = regexp.MustCompile(`^[0-9a-f]{32}$`)
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"time"
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"fmt"
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
type MockMutex struct {
AllowLock chan struct{}
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"bytes"
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"time"
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"strconv"
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"bytes"
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"context"
rrc.ResponseWriter.Write(rrc.Buffer)
return nil
}
- _, err := PutBlock(rrc.Context, rrc.VolumeManager, rrc.Buffer, rrc.Locator[:32])
+ _, err := PutBlock(rrc.Context, rrc.VolumeManager, rrc.Buffer, rrc.Locator[:32], nil)
if rrc.Context.Err() != nil {
// If caller hung up, log that instead of subsequent/misleading errors.
http.Error(rrc.ResponseWriter, rrc.Context.Err().Error(), http.StatusGatewayTimeout)
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"context"
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"context"
if volume != nil {
return volume.Put(context.Background(), locator, data)
}
- _, err := PutBlock(context.Background(), volmgr, data, locator)
+ _, err := PutBlock(context.Background(), volmgr, data, locator, nil)
return err
}
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"bytes"
}
pullRequest := s.setupPullWorkerIntegrationTest(c, testData, false)
- defer arvadostest.StopAPI()
defer arvadostest.StopKeep(2)
s.performPullWorkerIntegrationTest(testData, pullRequest, c)
}
pullRequest := s.setupPullWorkerIntegrationTest(c, testData, true)
- defer arvadostest.StopAPI()
defer arvadostest.StopKeep(2)
s.performPullWorkerIntegrationTest(testData, pullRequest, c)
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"bytes"
func (s *PullWorkerTestSuite) TestPullWorkerPullList_with_two_locators(c *C) {
testData := PullWorkerTestData{
name: "TestPullWorkerPullList_with_two_locators",
- req: RequestTester{"/pull", s.cluster.SystemRootToken, "PUT", firstPullList},
+ req: RequestTester{"/pull", s.cluster.SystemRootToken, "PUT", firstPullList, ""},
responseCode: http.StatusOK,
responseBody: "Received 2 pull requests\n",
readContent: "hello",
func (s *PullWorkerTestSuite) TestPullWorkerPullList_with_one_locator(c *C) {
testData := PullWorkerTestData{
name: "TestPullWorkerPullList_with_one_locator",
- req: RequestTester{"/pull", s.cluster.SystemRootToken, "PUT", secondPullList},
+ req: RequestTester{"/pull", s.cluster.SystemRootToken, "PUT", secondPullList, ""},
responseCode: http.StatusOK,
responseBody: "Received 1 pull requests\n",
readContent: "hola",
func (s *PullWorkerTestSuite) TestPullWorker_error_on_get_one_locator(c *C) {
testData := PullWorkerTestData{
name: "TestPullWorker_error_on_get_one_locator",
- req: RequestTester{"/pull", s.cluster.SystemRootToken, "PUT", secondPullList},
+ req: RequestTester{"/pull", s.cluster.SystemRootToken, "PUT", secondPullList, ""},
responseCode: http.StatusOK,
responseBody: "Received 1 pull requests\n",
readContent: "unused",
func (s *PullWorkerTestSuite) TestPullWorker_error_on_get_two_locators(c *C) {
testData := PullWorkerTestData{
name: "TestPullWorker_error_on_get_two_locators",
- req: RequestTester{"/pull", s.cluster.SystemRootToken, "PUT", firstPullList},
+ req: RequestTester{"/pull", s.cluster.SystemRootToken, "PUT", firstPullList, ""},
responseCode: http.StatusOK,
responseBody: "Received 2 pull requests\n",
readContent: "unused",
func (s *PullWorkerTestSuite) TestPullWorker_error_on_put_one_locator(c *C) {
testData := PullWorkerTestData{
name: "TestPullWorker_error_on_put_one_locator",
- req: RequestTester{"/pull", s.cluster.SystemRootToken, "PUT", secondPullList},
+ req: RequestTester{"/pull", s.cluster.SystemRootToken, "PUT", secondPullList, ""},
responseCode: http.StatusOK,
responseBody: "Received 1 pull requests\n",
readContent: "hello hello",
func (s *PullWorkerTestSuite) TestPullWorker_error_on_put_two_locators(c *C) {
testData := PullWorkerTestData{
name: "TestPullWorker_error_on_put_two_locators",
- req: RequestTester{"/pull", s.cluster.SystemRootToken, "PUT", firstPullList},
+ req: RequestTester{"/pull", s.cluster.SystemRootToken, "PUT", firstPullList, ""},
responseCode: http.StatusOK,
responseBody: "Received 2 pull requests\n",
readContent: "hello again",
func (s *PullWorkerTestSuite) TestPullWorker_invalidToken(c *C) {
testData := PullWorkerTestData{
name: "TestPullWorkerPullList_with_two_locators",
- req: RequestTester{"/pull", "invalidToken", "PUT", firstPullList},
+ req: RequestTester{"/pull", "invalidToken", "PUT", firstPullList, ""},
responseCode: http.StatusUnauthorized,
responseBody: "Unauthorized\n",
readContent: "hello",
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"bufio"
}
func (v *S3Volume) bootstrapIAMCredentials() error {
- if v.AccessKey != "" || v.SecretKey != "" {
+ if v.AccessKeyID != "" || v.SecretAccessKey != "" {
if v.IAMRole != "" {
- return errors.New("invalid DriverParameters: AccessKey and SecretKey must be blank if IAMRole is specified")
+ return errors.New("invalid DriverParameters: AccessKeyID and SecretAccessKey must be blank if IAMRole is specified")
}
return nil
}
}
func (v *S3Volume) newS3Client() *s3.S3 {
- auth := aws.NewAuth(v.AccessKey, v.SecretKey, v.AuthToken, v.AuthExpiration)
+ auth := aws.NewAuth(v.AccessKeyID, v.SecretAccessKey, v.AuthToken, v.AuthExpiration)
client := s3.New(*auth, v.region)
if !v.V2Signature {
client.Signature = aws.V4Signature
}
defer resp.Body.Close()
if resp.StatusCode == http.StatusNotFound {
- return 0, fmt.Errorf("this instance does not have an IAM role assigned -- either assign a role, or configure AccessKey and SecretKey explicitly in DriverParameters (error getting %s: HTTP status %s)", url, resp.Status)
+ return 0, fmt.Errorf("this instance does not have an IAM role assigned -- either assign a role, or configure AccessKeyID and SecretAccessKey explicitly in DriverParameters (error getting %s: HTTP status %s)", url, resp.Status)
} else if resp.StatusCode != http.StatusOK {
return 0, fmt.Errorf("error getting %s: HTTP status %s", url, resp.Status)
}
if err != nil {
return 0, fmt.Errorf("error decoding credentials from %s: %s", url, err)
}
- v.AccessKey, v.SecretKey, v.AuthToken, v.AuthExpiration = cred.AccessKeyID, cred.SecretAccessKey, cred.Token, cred.Expiration
+ v.AccessKeyID, v.SecretAccessKey, v.AuthToken, v.AuthExpiration = cred.AccessKeyID, cred.SecretAccessKey, cred.Token, cred.Expiration
v.bucket.SetBucket(&s3.Bucket{
S3: v.newS3Client(),
Name: v.Bucket,
return ttl, nil
}
-func (v *S3Volume) getReaderWithContext(ctx context.Context, loc string) (rdr io.ReadCloser, err error) {
+func (v *S3Volume) getReaderWithContext(ctx context.Context, key string) (rdr io.ReadCloser, err error) {
ready := make(chan bool)
go func() {
- rdr, err = v.getReader(loc)
+ rdr, err = v.getReader(key)
close(ready)
}()
select {
case <-ready:
return
case <-ctx.Done():
- v.logger.Debugf("s3: abandoning getReader(): %s", ctx.Err())
+ v.logger.Debugf("s3: abandoning getReader(%s): %s", key, ctx.Err())
go func() {
<-ready
if err == nil {
// In situations where (Bucket)GetReader would fail because the block
// disappeared in a Trash race, getReader calls fixRace to recover the
// data, and tries again.
-func (v *S3Volume) getReader(loc string) (rdr io.ReadCloser, err error) {
- rdr, err = v.bucket.GetReader(loc)
+func (v *S3Volume) getReader(key string) (rdr io.ReadCloser, err error) {
+ rdr, err = v.bucket.GetReader(key)
err = v.translateError(err)
if err == nil || !os.IsNotExist(err) {
return
}
- _, err = v.bucket.Head("recent/"+loc, nil)
+ _, err = v.bucket.Head("recent/"+key, nil)
err = v.translateError(err)
if err != nil {
// If we can't read recent/X, there's no point in
// trying fixRace. Give up.
return
}
- if !v.fixRace(loc) {
+ if !v.fixRace(key) {
err = os.ErrNotExist
return
}
- rdr, err = v.bucket.GetReader(loc)
+ rdr, err = v.bucket.GetReader(key)
if err != nil {
- v.logger.Warnf("reading %s after successful fixRace: %s", loc, err)
+ v.logger.Warnf("reading %s after successful fixRace: %s", key, err)
err = v.translateError(err)
}
return
// Get a block: copy the block data into buf, and return the number of
// bytes copied.
func (v *S3Volume) Get(ctx context.Context, loc string, buf []byte) (int, error) {
- rdr, err := v.getReaderWithContext(ctx, loc)
+ key := v.key(loc)
+ rdr, err := v.getReaderWithContext(ctx, key)
if err != nil {
return 0, err
}
// Compare the given data with the stored data.
func (v *S3Volume) Compare(ctx context.Context, loc string, expect []byte) error {
+ key := v.key(loc)
errChan := make(chan error, 1)
go func() {
- _, err := v.bucket.Head("recent/"+loc, nil)
+ _, err := v.bucket.Head("recent/"+key, nil)
errChan <- err
}()
var err error
// problem on to our clients.
return v.translateError(err)
}
- rdr, err := v.getReaderWithContext(ctx, loc)
+ rdr, err := v.getReaderWithContext(ctx, key)
if err != nil {
return err
}
opts.ContentSHA256 = fmt.Sprintf("%x", sha256.Sum256(block))
}
+ key := v.key(loc)
+
// Send the block data through a pipe, so that (if we need to)
// we can close the pipe early and abandon our PutReader()
// goroutine, without worrying about PutReader() accessing our
}
}()
defer close(ready)
- err = v.bucket.PutReader(loc, bufr, int64(size), "application/octet-stream", s3ACL, opts)
+ err = v.bucket.PutReader(key, bufr, int64(size), "application/octet-stream", s3ACL, opts)
if err != nil {
return
}
- err = v.bucket.PutReader("recent/"+loc, nil, 0, "application/octet-stream", s3ACL, s3.Options{})
+ err = v.bucket.PutReader("recent/"+key, nil, 0, "application/octet-stream", s3ACL, s3.Options{})
}()
select {
case <-ctx.Done():
if v.volume.ReadOnly {
return MethodDisabledError
}
- _, err := v.bucket.Head(loc, nil)
+ key := v.key(loc)
+ _, err := v.bucket.Head(key, nil)
err = v.translateError(err)
- if os.IsNotExist(err) && v.fixRace(loc) {
+ if os.IsNotExist(err) && v.fixRace(key) {
// The data object got trashed in a race, but fixRace
// rescued it.
} else if err != nil {
return err
}
- err = v.bucket.PutReader("recent/"+loc, nil, 0, "application/octet-stream", s3ACL, s3.Options{})
+ err = v.bucket.PutReader("recent/"+key, nil, 0, "application/octet-stream", s3ACL, s3.Options{})
return v.translateError(err)
}
// Mtime returns the stored timestamp for the given locator.
func (v *S3Volume) Mtime(loc string) (time.Time, error) {
- _, err := v.bucket.Head(loc, nil)
+ key := v.key(loc)
+ _, err := v.bucket.Head(key, nil)
if err != nil {
return zeroTime, v.translateError(err)
}
- resp, err := v.bucket.Head("recent/"+loc, nil)
+ resp, err := v.bucket.Head("recent/"+key, nil)
err = v.translateError(err)
if os.IsNotExist(err) {
// The data object X exists, but recent/X is missing.
- err = v.bucket.PutReader("recent/"+loc, nil, 0, "application/octet-stream", s3ACL, s3.Options{})
+ err = v.bucket.PutReader("recent/"+key, nil, 0, "application/octet-stream", s3ACL, s3.Options{})
if err != nil {
- v.logger.WithError(err).Errorf("error creating %q", "recent/"+loc)
+ v.logger.WithError(err).Errorf("error creating %q", "recent/"+key)
return zeroTime, v.translateError(err)
}
- v.logger.Infof("created %q to migrate existing block to new storage scheme", "recent/"+loc)
- resp, err = v.bucket.Head("recent/"+loc, nil)
+ v.logger.Infof("created %q to migrate existing block to new storage scheme", "recent/"+key)
+ resp, err = v.bucket.Head("recent/"+key, nil)
if err != nil {
- v.logger.WithError(err).Errorf("HEAD failed after creating %q", "recent/"+loc)
+ v.logger.WithError(err).Errorf("HEAD failed after creating %q", "recent/"+key)
return zeroTime, v.translateError(err)
}
} else if err != nil {
dataL := s3Lister{
Logger: v.logger,
Bucket: v.bucket.Bucket(),
- Prefix: prefix,
+ Prefix: v.key(prefix),
PageSize: v.IndexPageSize,
Stats: &v.bucket.stats,
}
recentL := s3Lister{
Logger: v.logger,
Bucket: v.bucket.Bucket(),
- Prefix: "recent/" + prefix,
+ Prefix: "recent/" + v.key(prefix),
PageSize: v.IndexPageSize,
Stats: &v.bucket.stats,
}
// over all of them needlessly with dataL.
break
}
- if !v.isKeepBlock(data.Key) {
+ loc, isBlk := v.isKeepBlock(data.Key)
+ if !isBlk {
continue
}
// We truncate sub-second precision here. Otherwise
// timestamps will never match the RFC1123-formatted
// Last-Modified values parsed by Mtime().
- fmt.Fprintf(writer, "%s+%d %d\n", data.Key, data.Size, t.Unix()*1000000000)
+ fmt.Fprintf(writer, "%s+%d %d\n", loc, data.Size, t.Unix()*1000000000)
}
return dataL.Error()
}
} else if time.Since(t) < v.cluster.Collections.BlobSigningTTL.Duration() {
return nil
}
+ key := v.key(loc)
if v.cluster.Collections.BlobTrashLifetime == 0 {
if !v.UnsafeDelete {
return ErrS3TrashDisabled
}
- return v.translateError(v.bucket.Del(loc))
+ return v.translateError(v.bucket.Del(key))
}
- err := v.checkRaceWindow(loc)
+ err := v.checkRaceWindow(key)
if err != nil {
return err
}
- err = v.safeCopy("trash/"+loc, loc)
+ err = v.safeCopy("trash/"+key, key)
if err != nil {
return err
}
- return v.translateError(v.bucket.Del(loc))
+ return v.translateError(v.bucket.Del(key))
}
-// checkRaceWindow returns a non-nil error if trash/loc is, or might
-// be, in the race window (i.e., it's not safe to trash loc).
-func (v *S3Volume) checkRaceWindow(loc string) error {
- resp, err := v.bucket.Head("trash/"+loc, nil)
+// checkRaceWindow returns a non-nil error if trash/key is, or might
+// be, in the race window (i.e., it's not safe to trash key).
+func (v *S3Volume) checkRaceWindow(key string) error {
+ resp, err := v.bucket.Head("trash/"+key, nil)
err = v.translateError(err)
if os.IsNotExist(err) {
// OK, trash/X doesn't exist so we're not in the race
// trash/X's lifetime. The new timestamp might not
// become visible until now+raceWindow, and EmptyTrash
// is allowed to delete trash/X before then.
- return fmt.Errorf("same block is already in trash, and safe window ended %s ago", -safeWindow)
+ return fmt.Errorf("%s: same block is already in trash, and safe window ended %s ago", key, -safeWindow)
}
// trash/X exists, but it won't be eligible for deletion until
// after now+raceWindow, so it's safe to overwrite it.
// Untrash moves block from trash back into store
func (v *S3Volume) Untrash(loc string) error {
- err := v.safeCopy(loc, "trash/"+loc)
+ key := v.key(loc)
+ err := v.safeCopy(key, "trash/"+key)
if err != nil {
return err
}
- err = v.bucket.PutReader("recent/"+loc, nil, 0, "application/octet-stream", s3ACL, s3.Options{})
+ err = v.bucket.PutReader("recent/"+key, nil, 0, "application/octet-stream", s3ACL, s3.Options{})
return v.translateError(err)
}
var s3KeepBlockRegexp = regexp.MustCompile(`^[0-9a-f]{32}$`)
-func (v *S3Volume) isKeepBlock(s string) bool {
- return s3KeepBlockRegexp.MatchString(s)
+func (v *S3Volume) isKeepBlock(s string) (string, bool) {
+ if v.PrefixLength > 0 && len(s) == v.PrefixLength+33 && s[:v.PrefixLength] == s[v.PrefixLength+1:v.PrefixLength*2+1] {
+ s = s[v.PrefixLength+1:]
+ }
+ return s, s3KeepBlockRegexp.MatchString(s)
+}
+
+// key returns the S3 object key for a given loc. If PrefixLength==0
+// then key("abcdef0123") is "abcdef0123"; if PrefixLength==3 it is
+// "abc/abcdef0123"; etc.
+func (v *S3Volume) key(loc string) string {
+ if v.PrefixLength > 0 && v.PrefixLength < len(loc)-1 {
+ return loc[:v.PrefixLength] + "/" + loc
+ } else {
+ return loc
+ }
}
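As a quick illustration of the mapping above: with a nonzero prefix length, block locators are spread across key "directories" named after their first few hex digits. A standalone sketch (`s3Key` is a hypothetical name) of the same logic:

```go
package main

import "fmt"

// s3Key sketches the key() mapping: with prefixLength > 0, the first
// prefixLength characters of the locator become a key prefix, e.g.
// "abcdef0123" -> "abc/abcdef0123" for prefixLength 3.
func s3Key(loc string, prefixLength int) string {
	if prefixLength > 0 && prefixLength < len(loc)-1 {
		return loc[:prefixLength] + "/" + loc
	}
	return loc
}

func main() {
	fmt.Println(s3Key("abcdef0123", 0)) // abcdef0123
	fmt.Println(s3Key("abcdef0123", 3)) // abc/abcdef0123
}
```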
// fixRace(X) is called when "recent/X" exists but "X" doesn't
-// exist. If the timestamps on "recent/"+loc and "trash/"+loc indicate
-// there was a race between Put and Trash, fixRace recovers from the
-// race by Untrashing the block.
-func (v *S3Volume) fixRace(loc string) bool {
- trash, err := v.bucket.Head("trash/"+loc, nil)
+// exist. If the timestamps on "recent/X" and "trash/X" indicate there
+// was a race between Put and Trash, fixRace recovers from the race by
+// Untrashing the block.
+func (v *S3Volume) fixRace(key string) bool {
+ trash, err := v.bucket.Head("trash/"+key, nil)
if err != nil {
if !os.IsNotExist(v.translateError(err)) {
- v.logger.WithError(err).Errorf("fixRace: HEAD %q failed", "trash/"+loc)
+ v.logger.WithError(err).Errorf("fixRace: HEAD %q failed", "trash/"+key)
}
return false
}
return false
}
- recent, err := v.bucket.Head("recent/"+loc, nil)
+ recent, err := v.bucket.Head("recent/"+key, nil)
if err != nil {
- v.logger.WithError(err).Errorf("fixRace: HEAD %q failed", "recent/"+loc)
+ v.logger.WithError(err).Errorf("fixRace: HEAD %q failed", "recent/"+key)
return false
}
recentTime, err := v.lastModified(recent)
return false
}
- v.logger.Infof("fixRace: %q: trashed at %s but touched at %s (age when trashed = %s < %s)", loc, trashTime, recentTime, ageWhenTrashed, v.cluster.Collections.BlobSigningTTL)
- v.logger.Infof("fixRace: copying %q to %q to recover from race between Put/Touch and Trash", "recent/"+loc, loc)
- err = v.safeCopy(loc, "trash/"+loc)
+ v.logger.Infof("fixRace: %q: trashed at %s but touched at %s (age when trashed = %s < %s)", key, trashTime, recentTime, ageWhenTrashed, v.cluster.Collections.BlobSigningTTL)
+ v.logger.Infof("fixRace: copying %q to %q to recover from race between Put/Touch and Trash", "recent/"+key, key)
+ err = v.safeCopy(key, "trash/"+key)
if err != nil {
v.logger.WithError(err).Error("fixRace: copy failed")
return false
startT := time.Now()
emptyOneKey := func(trash *s3.Key) {
- loc := trash.Key[6:]
- if !v.isKeepBlock(loc) {
+ key := trash.Key[6:]
+ loc, isBlk := v.isKeepBlock(key)
+ if !isBlk {
return
}
atomic.AddInt64(&bytesInTrash, trash.Size)
v.logger.Warnf("EmptyTrash: %q: parse %q: %s", trash.Key, trash.LastModified, err)
return
}
- recent, err := v.bucket.Head("recent/"+loc, nil)
+ recent, err := v.bucket.Head("recent/"+key, nil)
if err != nil && os.IsNotExist(v.translateError(err)) {
-			v.logger.Warnf("EmptyTrash: found trash marker %q but no %q (%s); calling Untrash", trash.Key, "recent/"+loc, err)
+			v.logger.Warnf("EmptyTrash: found trash marker %q but no %q (%s); calling Untrash", trash.Key, "recent/"+key, err)
err = v.Untrash(loc)
}
return
} else if err != nil {
- v.logger.WithError(err).Warnf("EmptyTrash: HEAD %q failed", "recent/"+loc)
+ v.logger.WithError(err).Warnf("EmptyTrash: HEAD %q failed", "recent/"+key)
return
}
recentT, err := v.lastModified(recent)
if err != nil {
- v.logger.WithError(err).Warnf("EmptyTrash: %q: error parsing %q", "recent/"+loc, recent.Header.Get("Last-Modified"))
+ v.logger.WithError(err).Warnf("EmptyTrash: %q: error parsing %q", "recent/"+key, recent.Header.Get("Last-Modified"))
return
}
if trashT.Sub(recentT) < v.cluster.Collections.BlobSigningTTL.Duration() {
// < BlobSigningTTL - raceWindow) is
// necessary to avoid starvation.
v.logger.Infof("EmptyTrash: detected old race for %q, calling fixRace + Touch", loc)
- v.fixRace(loc)
+ v.fixRace(key)
v.Touch(loc)
return
}
- _, err := v.bucket.Head(loc, nil)
+ _, err := v.bucket.Head(key, nil)
if os.IsNotExist(err) {
v.logger.Infof("EmptyTrash: detected recent race for %q, calling fixRace", loc)
- v.fixRace(loc)
+ v.fixRace(key)
return
} else if err != nil {
-			v.logger.WithError(err).Warnf("EmptyTrash: HEAD %q failed", loc)
+			v.logger.WithError(err).Warnf("EmptyTrash: HEAD %q failed", key)
atomic.AddInt64(&bytesDeleted, trash.Size)
atomic.AddInt64(&blocksDeleted, 1)
- _, err = v.bucket.Head(loc, nil)
+ _, err = v.bucket.Head(key, nil)
if err == nil {
- v.logger.Warnf("EmptyTrash: HEAD %q succeeded immediately after deleting %q", loc, loc)
+ v.logger.Warnf("EmptyTrash: HEAD %q succeeded immediately after deleting %q", key, key)
return
}
if !os.IsNotExist(v.translateError(err)) {
- v.logger.WithError(err).Warnf("EmptyTrash: HEAD %q failed", loc)
+ v.logger.WithError(err).Warnf("EmptyTrash: HEAD %q failed", key)
return
}
- err = v.bucket.Del("recent/" + loc)
+ err = v.bucket.Del("recent/" + key)
if err != nil {
- v.logger.WithError(err).Warnf("EmptyTrash: error deleting %q", "recent/"+loc)
+ v.logger.WithError(err).Warnf("EmptyTrash: error deleting %q", "recent/"+key)
}
}
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"bytes"
})
}
+func (s *StubbedS3Suite) TestGenericWithPrefix(c *check.C) {
+ DoGenericVolumeTests(c, false, func(t TB, cluster *arvados.Cluster, volume arvados.Volume, logger logrus.FieldLogger, metrics *volumeMetricsVecs) TestableVolume {
+ v := s.newTestableVolume(c, cluster, volume, metrics, -2*time.Second)
+ v.PrefixLength = 3
+ return v
+ })
+}
+
func (s *StubbedS3Suite) TestIndex(c *check.C) {
v := s.newTestableVolume(c, s.cluster, arvados.Volume{Replication: 2}, newVolumeMetricsVecs(prometheus.NewRegistry()), 0)
v.IndexPageSize = 3
// Default V4 signature
vol := S3Volume{
S3VolumeDriverParameters: arvados.S3VolumeDriverParameters{
- AccessKey: "xxx",
- SecretKey: "xxx",
- Endpoint: stub.URL,
- Region: "test-region-1",
- Bucket: "test-bucket-name",
+ AccessKeyID: "xxx",
+ SecretAccessKey: "xxx",
+ Endpoint: stub.URL,
+ Region: "test-region-1",
+ Bucket: "test-bucket-name",
},
cluster: s.cluster,
logger: ctxlog.TestLogger(c),
// Force V2 signature
vol = S3Volume{
S3VolumeDriverParameters: arvados.S3VolumeDriverParameters{
- AccessKey: "xxx",
- SecretKey: "xxx",
- Endpoint: stub.URL,
- Region: "test-region-1",
- Bucket: "test-bucket-name",
- V2Signature: true,
+ AccessKeyID: "xxx",
+ SecretAccessKey: "xxx",
+ Endpoint: stub.URL,
+ Region: "test-region-1",
+ Bucket: "test-bucket-name",
+ V2Signature: true,
},
cluster: s.cluster,
logger: ctxlog.TestLogger(c),
defer s.metadata.Close()
v := s.newTestableVolume(c, s.cluster, arvados.Volume{Replication: 2}, newVolumeMetricsVecs(prometheus.NewRegistry()), 5*time.Minute)
- c.Check(v.AccessKey, check.Equals, "ASIAIOSFODNN7EXAMPLE")
- c.Check(v.SecretKey, check.Equals, "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY")
+ c.Check(v.AccessKeyID, check.Equals, "ASIAIOSFODNN7EXAMPLE")
+ c.Check(v.SecretAccessKey, check.Equals, "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY")
c.Check(v.bucket.bucket.S3.Auth.AccessKey, check.Equals, "ASIAIOSFODNN7EXAMPLE")
c.Check(v.bucket.bucket.S3.Auth.SecretKey, check.Equals, "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY")
false, false, false, true, true, true,
},
} {
- c.Log("Scenario: ", scenario.label)
-
- // We have a few tests to run for each scenario, and
- // the tests are expected to change state. By calling
- // this setup func between tests, we (re)create the
- // scenario as specified, using a new unique block
- // locator to prevent interference from previous
- // tests.
-
- setupScenario := func() (string, []byte) {
- nextKey++
- blk := []byte(fmt.Sprintf("%d", nextKey))
- loc := fmt.Sprintf("%x", md5.Sum(blk))
- c.Log("\t", loc)
- putS3Obj(scenario.dataT, loc, blk)
- putS3Obj(scenario.recentT, "recent/"+loc, nil)
- putS3Obj(scenario.trashT, "trash/"+loc, blk)
- v.serverClock.now = &t0
- return loc, blk
- }
-
- // Check canGet
- loc, blk := setupScenario()
- buf := make([]byte, len(blk))
- _, err := v.Get(context.Background(), loc, buf)
- c.Check(err == nil, check.Equals, scenario.canGet)
- if err != nil {
- c.Check(os.IsNotExist(err), check.Equals, true)
- }
-
- // Call Trash, then check canTrash and canGetAfterTrash
- loc, _ = setupScenario()
- err = v.Trash(loc)
- c.Check(err == nil, check.Equals, scenario.canTrash)
- _, err = v.Get(context.Background(), loc, buf)
- c.Check(err == nil, check.Equals, scenario.canGetAfterTrash)
- if err != nil {
- c.Check(os.IsNotExist(err), check.Equals, true)
- }
-
- // Call Untrash, then check canUntrash
- loc, _ = setupScenario()
- err = v.Untrash(loc)
- c.Check(err == nil, check.Equals, scenario.canUntrash)
- if scenario.dataT != none || scenario.trashT != none {
- // In all scenarios where the data exists, we
- // should be able to Get after Untrash --
- // regardless of timestamps, errors, race
- // conditions, etc.
+ for _, prefixLength := range []int{0, 3} {
+ v.PrefixLength = prefixLength
+ c.Logf("Scenario: %q (prefixLength=%d)", scenario.label, prefixLength)
+
+ // We have a few tests to run for each scenario, and
+ // the tests are expected to change state. By calling
+ // this setup func between tests, we (re)create the
+ // scenario as specified, using a new unique block
+ // locator to prevent interference from previous
+ // tests.
+
+ setupScenario := func() (string, []byte) {
+ nextKey++
+ blk := []byte(fmt.Sprintf("%d", nextKey))
+ loc := fmt.Sprintf("%x", md5.Sum(blk))
+ key := loc
+ if prefixLength > 0 {
+ key = loc[:prefixLength] + "/" + loc
+ }
+ c.Log("\t", loc)
+ putS3Obj(scenario.dataT, key, blk)
+ putS3Obj(scenario.recentT, "recent/"+key, nil)
+ putS3Obj(scenario.trashT, "trash/"+key, blk)
+ v.serverClock.now = &t0
+ return loc, blk
+ }
+
+ // Check canGet
+ loc, blk := setupScenario()
+ buf := make([]byte, len(blk))
+ _, err := v.Get(context.Background(), loc, buf)
+ c.Check(err == nil, check.Equals, scenario.canGet)
+ if err != nil {
+ c.Check(os.IsNotExist(err), check.Equals, true)
+ }
+
+ // Call Trash, then check canTrash and canGetAfterTrash
+ loc, _ = setupScenario()
+ err = v.Trash(loc)
+ c.Check(err == nil, check.Equals, scenario.canTrash)
_, err = v.Get(context.Background(), loc, buf)
+ c.Check(err == nil, check.Equals, scenario.canGetAfterTrash)
+ if err != nil {
+ c.Check(os.IsNotExist(err), check.Equals, true)
+ }
+
+ // Call Untrash, then check canUntrash
+ loc, _ = setupScenario()
+ err = v.Untrash(loc)
+ c.Check(err == nil, check.Equals, scenario.canUntrash)
+ if scenario.dataT != none || scenario.trashT != none {
+ // In all scenarios where the data exists, we
+ // should be able to Get after Untrash --
+ // regardless of timestamps, errors, race
+ // conditions, etc.
+ _, err = v.Get(context.Background(), loc, buf)
+ c.Check(err, check.IsNil)
+ }
+
+ // Call EmptyTrash, then check haveTrashAfterEmpty and
+ // freshAfterEmpty
+ loc, _ = setupScenario()
+ v.EmptyTrash()
+ _, err = v.bucket.Head("trash/"+v.key(loc), nil)
+ c.Check(err == nil, check.Equals, scenario.haveTrashAfterEmpty)
+ if scenario.freshAfterEmpty {
+ t, err := v.Mtime(loc)
+ c.Check(err, check.IsNil)
+ // new mtime must be current (with an
+ // allowance for 1s timestamp precision)
+ c.Check(t.After(t0.Add(-time.Second)), check.Equals, true)
+ }
+
+ // Check for current Mtime after Put (applies to all
+ // scenarios)
+ loc, blk = setupScenario()
+ err = v.Put(context.Background(), loc, blk)
c.Check(err, check.IsNil)
- }
-
- // Call EmptyTrash, then check haveTrashAfterEmpty and
- // freshAfterEmpty
- loc, _ = setupScenario()
- v.EmptyTrash()
- _, err = v.bucket.Head("trash/"+loc, nil)
- c.Check(err == nil, check.Equals, scenario.haveTrashAfterEmpty)
- if scenario.freshAfterEmpty {
t, err := v.Mtime(loc)
c.Check(err, check.IsNil)
- // new mtime must be current (with an
- // allowance for 1s timestamp precision)
c.Check(t.After(t0.Add(-time.Second)), check.Equals, true)
}
-
- // Check for current Mtime after Put (applies to all
- // scenarios)
- loc, blk = setupScenario()
- err = v.Put(context.Background(), loc, blk)
- c.Check(err, check.IsNil)
- t, err := v.Mtime(loc)
- c.Check(err, check.IsNil)
- c.Check(t.After(t0.Add(-time.Second)), check.Equals, true)
}
}
S3Volume: &S3Volume{
S3VolumeDriverParameters: arvados.S3VolumeDriverParameters{
IAMRole: iamRole,
- AccessKey: accessKey,
- SecretKey: secretKey,
+ AccessKeyID: accessKey,
+ SecretAccessKey: secretKey,
Bucket: TestBucketName,
Endpoint: endpoint,
Region: "test-region-1",
// PutRaw skips the ContentMD5 test
func (v *TestableS3Volume) PutRaw(loc string, block []byte) {
- err := v.bucket.Bucket().Put(loc, block, "application/octet-stream", s3ACL, s3.Options{})
+ key := v.key(loc)
+ err := v.bucket.Bucket().Put(key, block, "application/octet-stream", s3ACL, s3.Options{})
if err != nil {
v.logger.Printf("PutRaw: %s: %+v", loc, err)
}
- err = v.bucket.Bucket().Put("recent/"+loc, nil, "application/octet-stream", s3ACL, s3.Options{})
+ err = v.bucket.Bucket().Put("recent/"+key, nil, "application/octet-stream", s3ACL, s3.Options{})
if err != nil {
- v.logger.Printf("PutRaw: recent/%s: %+v", loc, err)
+ v.logger.Printf("PutRaw: recent/%s: %+v", key, err)
}
}
// while we do this.
func (v *TestableS3Volume) TouchWithDate(locator string, lastPut time.Time) {
v.serverClock.now = &lastPut
- err := v.bucket.Bucket().Put("recent/"+locator, nil, "application/octet-stream", s3ACL, s3.Options{})
+ err := v.bucket.Bucket().Put("recent/"+v.key(locator), nil, "application/octet-stream", s3ACL, s3.Options{})
if err != nil {
panic(err)
}
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"bytes"
var s3AWSKeepBlockRegexp = regexp.MustCompile(`^[0-9a-f]{32}$`)
var s3AWSZeroTime time.Time
-func (v *S3AWSVolume) isKeepBlock(s string) bool {
- return s3AWSKeepBlockRegexp.MatchString(s)
+func (v *S3AWSVolume) isKeepBlock(s string) (string, bool) {
+ if v.PrefixLength > 0 && len(s) == v.PrefixLength+33 && s[:v.PrefixLength] == s[v.PrefixLength+1:v.PrefixLength*2+1] {
+ s = s[v.PrefixLength+1:]
+ }
+ return s, s3AWSKeepBlockRegexp.MatchString(s)
+}
+
+// key returns the S3 key for the given locator. If PrefixLength==0,
+// key("abcdef0123") is "abcdef0123"; if PrefixLength==3, it is
+// "abc/abcdef0123", etc.
+func (v *S3AWSVolume) key(loc string) string {
+ if v.PrefixLength > 0 && v.PrefixLength < len(loc)-1 {
+ return loc[:v.PrefixLength] + "/" + loc
+ } else {
+ return loc
+ }
}
func newS3AWSVolume(cluster *arvados.Cluster, volume arvados.Volume, logger logrus.FieldLogger, metrics *volumeMetricsVecs) (Volume, error) {
}
func (v *S3AWSVolume) translateError(err error) error {
- if aerr, ok := err.(awserr.Error); ok {
- switch aerr.Code() {
- case "NotFound":
+ if _, ok := err.(*aws.RequestCanceledError); ok {
+ return context.Canceled
+ } else if aerr, ok := err.(awserr.Error); ok {
+ if aerr.Code() == "NotFound" {
return os.ErrNotExist
- case "NoSuchKey":
+ } else if aerr.Code() == "NoSuchKey" {
return os.ErrNotExist
}
}
return err
}
-// safeCopy calls CopyObjectRequest, and checks the response to make sure the
-// copy succeeded and updated the timestamp on the destination object
+// safeCopy calls CopyObjectRequest, and checks the response to make
+// sure the copy succeeded and updated the timestamp on the
+// destination object
//
-// (If something goes wrong during the copy, the error will be embedded in the
-// 200 OK response)
+// (If something goes wrong during the copy, the error will be
+// embedded in the 200 OK response)
func (v *S3AWSVolume) safeCopy(dst, src string) error {
input := &s3.CopyObjectInput{
Bucket: aws.String(v.bucket.bucket),
creds := aws.NewChainProvider(
[]aws.CredentialsProvider{
- aws.NewStaticCredentialsProvider(v.AccessKey, v.SecretKey, v.AuthToken),
+ aws.NewStaticCredentialsProvider(v.AccessKeyID, v.SecretAccessKey, v.AuthToken),
ec2rolecreds.New(ec2metadata.New(cfg)),
})
// Compare the given data with the stored data.
func (v *S3AWSVolume) Compare(ctx context.Context, loc string, expect []byte) error {
+ key := v.key(loc)
errChan := make(chan error, 1)
go func() {
- _, err := v.Head("recent/" + loc)
+ _, err := v.head("recent/" + key)
errChan <- err
}()
var err error
case err = <-errChan:
}
if err != nil {
- // Checking for "loc" itself here would interfere with
- // future GET requests.
+ // Checking for the key itself here would interfere
+ // with future GET requests.
//
// On AWS, if X doesn't exist, a HEAD or GET request
// for X causes X's non-existence to be cached. Thus,
input := &s3.GetObjectInput{
Bucket: aws.String(v.bucket.bucket),
- Key: aws.String(loc),
+ Key: aws.String(key),
}
req := v.bucket.svc.GetObjectRequest(input)
startT := time.Now()
emptyOneKey := func(trash *s3.Object) {
- loc := strings.TrimPrefix(*trash.Key, "trash/")
- if !v.isKeepBlock(loc) {
+ key := strings.TrimPrefix(*trash.Key, "trash/")
+ loc, isblk := v.isKeepBlock(key)
+ if !isblk {
return
}
atomic.AddInt64(&bytesInTrash, *trash.Size)
atomic.AddInt64(&blocksInTrash, 1)
- trashT := *(trash.LastModified)
- recent, err := v.Head("recent/" + loc)
+ trashT := *trash.LastModified
+ recent, err := v.head("recent/" + key)
if err != nil && os.IsNotExist(v.translateError(err)) {
- v.logger.Warnf("EmptyTrash: found trash marker %q but no %q (%s); calling Untrash", trash.Key, "recent/"+loc, err)
+ v.logger.Warnf("EmptyTrash: found trash marker %q but no %q (%s); calling Untrash", *trash.Key, "recent/"+key, err)
err = v.Untrash(loc)
if err != nil {
v.logger.WithError(err).Errorf("EmptyTrash: Untrash(%q) failed", loc)
}
return
} else if err != nil {
- v.logger.WithError(err).Warnf("EmptyTrash: HEAD %q failed", "recent/"+loc)
+ v.logger.WithError(err).Warnf("EmptyTrash: HEAD %q failed", "recent/"+key)
return
}
if trashT.Sub(*recent.LastModified) < v.cluster.Collections.BlobSigningTTL.Duration() {
if age := startT.Sub(*recent.LastModified); age >= v.cluster.Collections.BlobSigningTTL.Duration()-time.Duration(v.RaceWindow) {
- // recent/loc is too old to protect
+ // recent/key is too old to protect
// loc from being Trashed again during
// the raceWindow that starts if we
// delete trash/X now.
// < BlobSigningTTL - raceWindow) is
// necessary to avoid starvation.
v.logger.Infof("EmptyTrash: detected old race for %q, calling fixRace + Touch", loc)
- v.fixRace(loc)
+ v.fixRace(key)
v.Touch(loc)
return
}
- _, err := v.Head(loc)
+ _, err := v.head(key)
if os.IsNotExist(err) {
v.logger.Infof("EmptyTrash: detected recent race for %q, calling fixRace", loc)
- v.fixRace(loc)
+ v.fixRace(key)
return
} else if err != nil {
v.logger.WithError(err).Warnf("EmptyTrash: HEAD %q failed", loc)
atomic.AddInt64(&bytesDeleted, *trash.Size)
atomic.AddInt64(&blocksDeleted, 1)
- _, err = v.Head(loc)
+ _, err = v.head(*trash.Key)
if err == nil {
v.logger.Warnf("EmptyTrash: HEAD %q succeeded immediately after deleting %q", loc, loc)
return
}
if !os.IsNotExist(v.translateError(err)) {
- v.logger.WithError(err).Warnf("EmptyTrash: HEAD %q failed", loc)
+ v.logger.WithError(err).Warnf("EmptyTrash: HEAD %q failed", key)
return
}
- err = v.bucket.Del("recent/" + loc)
+ err = v.bucket.Del("recent/" + key)
if err != nil {
- v.logger.WithError(err).Warnf("EmptyTrash: error deleting %q", "recent/"+loc)
+ v.logger.WithError(err).Warnf("EmptyTrash: error deleting %q", "recent/"+key)
}
}
}
// fixRace(X) is called when "recent/X" exists but "X" doesn't
-// exist. If the timestamps on "recent/"+loc and "trash/"+loc indicate
-// there was a race between Put and Trash, fixRace recovers from the
-// race by Untrashing the block.
-func (v *S3AWSVolume) fixRace(loc string) bool {
- trash, err := v.Head("trash/" + loc)
+// exist. If the timestamps on "recent/X" and "trash/X" indicate there
+// was a race between Put and Trash, fixRace recovers from the race by
+// Untrashing the block.
+func (v *S3AWSVolume) fixRace(key string) bool {
+ trash, err := v.head("trash/" + key)
if err != nil {
if !os.IsNotExist(v.translateError(err)) {
- v.logger.WithError(err).Errorf("fixRace: HEAD %q failed", "trash/"+loc)
+ v.logger.WithError(err).Errorf("fixRace: HEAD %q failed", "trash/"+key)
}
return false
}
- recent, err := v.Head("recent/" + loc)
+ recent, err := v.head("recent/" + key)
if err != nil {
- v.logger.WithError(err).Errorf("fixRace: HEAD %q failed", "recent/"+loc)
+ v.logger.WithError(err).Errorf("fixRace: HEAD %q failed", "recent/"+key)
return false
}
return false
}
- v.logger.Infof("fixRace: %q: trashed at %s but touched at %s (age when trashed = %s < %s)", loc, trashTime, recentTime, ageWhenTrashed, v.cluster.Collections.BlobSigningTTL)
- v.logger.Infof("fixRace: copying %q to %q to recover from race between Put/Touch and Trash", "recent/"+loc, loc)
- err = v.safeCopy(loc, "trash/"+loc)
+ v.logger.Infof("fixRace: %q: trashed at %s but touched at %s (age when trashed = %s < %s)", key, trashTime, recentTime, ageWhenTrashed, v.cluster.Collections.BlobSigningTTL)
+ v.logger.Infof("fixRace: copying %q to %q to recover from race between Put/Touch and Trash", "recent/"+key, key)
+ err = v.safeCopy(key, "trash/"+key)
if err != nil {
v.logger.WithError(err).Error("fixRace: copy failed")
return false
return true
}
-func (v *S3AWSVolume) Head(loc string) (result *s3.HeadObjectOutput, err error) {
+func (v *S3AWSVolume) head(key string) (result *s3.HeadObjectOutput, err error) {
input := &s3.HeadObjectInput{
Bucket: aws.String(v.bucket.bucket),
- Key: aws.String(loc),
+ Key: aws.String(key),
}
req := v.bucket.svc.HeadObjectRequest(input)
// Get a block: copy the block data into buf, and return the number of
// bytes copied.
func (v *S3AWSVolume) Get(ctx context.Context, loc string, buf []byte) (int, error) {
- return getWithPipe(ctx, loc, buf, v)
-}
-
-func (v *S3AWSVolume) readWorker(ctx context.Context, loc string) (rdr io.ReadCloser, err error) {
- buf := make([]byte, 0, 67108864)
- awsBuf := aws.NewWriteAtBuffer(buf)
-
- downloader := s3manager.NewDownloaderWithClient(v.bucket.svc, func(u *s3manager.Downloader) {
- u.PartSize = PartSize
- u.Concurrency = ReadConcurrency
- })
-
- v.logger.Debugf("Partsize: %d; Concurrency: %d\n", downloader.PartSize, downloader.Concurrency)
-
- _, err = downloader.DownloadWithContext(ctx, awsBuf, &s3.GetObjectInput{
- Bucket: aws.String(v.bucket.bucket),
- Key: aws.String(loc),
- })
- v.bucket.stats.TickOps("get")
- v.bucket.stats.Tick(&v.bucket.stats.Ops, &v.bucket.stats.GetOps)
- v.bucket.stats.TickErr(err)
- if err != nil {
- return nil, v.translateError(err)
- }
- buf = awsBuf.Bytes()
-
- rdr = NewCountingReader(bytes.NewReader(buf), v.bucket.stats.TickInBytes)
- return
-}
-
-// ReadBlock implements BlockReader.
-func (v *S3AWSVolume) ReadBlock(ctx context.Context, loc string, w io.Writer) error {
- rdr, err := v.readWorker(ctx, loc)
-
+ // Do not use getWithPipe here: the BlockReader interface does not pass
+ // through 'buf []byte', and we don't want to allocate two buffers for each
+ // read request. Instead, use a version of ReadBlock that accepts 'buf []byte'
+ // as an input.
+ key := v.key(loc)
+ count, err := v.readWorker(ctx, key, buf)
if err == nil {
- _, err2 := io.Copy(w, rdr)
- if err2 != nil {
- return err2
- }
- return err
+ return count, err
}
err = v.translateError(err)
if !os.IsNotExist(err) {
- return err
+ return 0, err
}
- _, err = v.Head("recent/" + loc)
+ _, err = v.head("recent/" + key)
err = v.translateError(err)
if err != nil {
// If we can't read recent/X, there's no point in
// trying fixRace. Give up.
- return err
+ return 0, err
}
- if !v.fixRace(loc) {
+ if !v.fixRace(key) {
err = os.ErrNotExist
- return err
+ return 0, err
}
- rdr, err = v.readWorker(ctx, loc)
+ count, err = v.readWorker(ctx, key, buf)
if err != nil {
v.logger.Warnf("reading %s after successful fixRace: %s", loc, err)
err = v.translateError(err)
- return err
+ return 0, err
}
+ return count, err
+}
- _, err = io.Copy(w, rdr)
+func (v *S3AWSVolume) readWorker(ctx context.Context, key string, buf []byte) (int, error) {
+ awsBuf := aws.NewWriteAtBuffer(buf)
+ downloader := s3manager.NewDownloaderWithClient(v.bucket.svc, func(u *s3manager.Downloader) {
+ u.PartSize = PartSize
+ u.Concurrency = ReadConcurrency
+ })
- return err
+	v.logger.Debugf("PartSize: %d; Concurrency: %d", downloader.PartSize, downloader.Concurrency)
+
+ count, err := downloader.DownloadWithContext(ctx, awsBuf, &s3.GetObjectInput{
+ Bucket: aws.String(v.bucket.bucket),
+ Key: aws.String(key),
+ })
+ v.bucket.stats.TickOps("get")
+ v.bucket.stats.Tick(&v.bucket.stats.Ops, &v.bucket.stats.GetOps)
+ v.bucket.stats.TickErr(err)
+ v.bucket.stats.TickInBytes(uint64(count))
+ return int(count), v.translateError(err)
}
-func (v *S3AWSVolume) writeObject(ctx context.Context, name string, r io.Reader) error {
+func (v *S3AWSVolume) writeObject(ctx context.Context, key string, r io.Reader) error {
if r == nil {
// r == nil leads to a memory violation in func readFillBuf in
// aws-sdk-go-v2@v0.23.0/service/s3/s3manager/upload.go
uploadInput := s3manager.UploadInput{
Bucket: aws.String(v.bucket.bucket),
- Key: aws.String(name),
+ Key: aws.String(key),
Body: r,
}
- if len(name) == 32 {
+ if loc, ok := v.isKeepBlock(key); ok {
var contentMD5 string
- md5, err := hex.DecodeString(name)
+ md5, err := hex.DecodeString(loc)
if err != nil {
- return err
+ return v.translateError(err)
}
contentMD5 = base64.StdEncoding.EncodeToString(md5)
uploadInput.ContentMD5 = &contentMD5
v.bucket.stats.Tick(&v.bucket.stats.Ops, &v.bucket.stats.PutOps)
v.bucket.stats.TickErr(err)
- return err
+ return v.translateError(err)
}
// Put writes a block.
func (v *S3AWSVolume) Put(ctx context.Context, loc string, block []byte) error {
- return putWithPipe(ctx, loc, block, v)
-}
-
-// WriteBlock implements BlockWriter.
-func (v *S3AWSVolume) WriteBlock(ctx context.Context, loc string, rdr io.Reader) error {
+ // Do not use putWithPipe here; we want to pass an io.ReadSeeker to the S3
+ // sdk to avoid memory allocation there. See #17339 for more information.
if v.volume.ReadOnly {
return MethodDisabledError
}
- r := NewCountingReader(rdr, v.bucket.stats.TickOutBytes)
- err := v.writeObject(ctx, loc, r)
+ rdr := bytes.NewReader(block)
+ r := NewCountingReaderAtSeeker(rdr, v.bucket.stats.TickOutBytes)
+ key := v.key(loc)
+ err := v.writeObject(ctx, key, r)
if err != nil {
return err
}
- return v.writeObject(ctx, "recent/"+loc, nil)
+ return v.writeObject(ctx, "recent/"+key, nil)
}
type s3awsLister struct {
// IndexTo writes a complete list of locators with the given prefix
// for which Get() can retrieve data.
func (v *S3AWSVolume) IndexTo(prefix string, writer io.Writer) error {
+ prefix = v.key(prefix)
// Use a merge sort to find matching sets of X and recent/X.
dataL := s3awsLister{
Logger: v.logger,
// over all of them needlessly with dataL.
break
}
- if !v.isKeepBlock(*data.Key) {
+ loc, isblk := v.isKeepBlock(*data.Key)
+ if !isblk {
continue
}
// We truncate sub-second precision here. Otherwise
// timestamps will never match the RFC1123-formatted
// Last-Modified values parsed by Mtime().
- fmt.Fprintf(writer, "%s+%d %d\n", *data.Key, *data.Size, stamp.LastModified.Unix()*1000000000)
+ fmt.Fprintf(writer, "%s+%d %d\n", loc, *data.Size, stamp.LastModified.Unix()*1000000000)
}
return dataL.Error()
}
// Mtime returns the stored timestamp for the given locator.
func (v *S3AWSVolume) Mtime(loc string) (time.Time, error) {
- _, err := v.Head(loc)
+ key := v.key(loc)
+ _, err := v.head(key)
if err != nil {
return s3AWSZeroTime, v.translateError(err)
}
- resp, err := v.Head("recent/" + loc)
+ resp, err := v.head("recent/" + key)
err = v.translateError(err)
if os.IsNotExist(err) {
// The data object X exists, but recent/X is missing.
- err = v.writeObject(context.Background(), "recent/"+loc, nil)
+ err = v.writeObject(context.Background(), "recent/"+key, nil)
if err != nil {
- v.logger.WithError(err).Errorf("error creating %q", "recent/"+loc)
+ v.logger.WithError(err).Errorf("error creating %q", "recent/"+key)
return s3AWSZeroTime, v.translateError(err)
}
- v.logger.Infof("Mtime: created %q to migrate existing block to new storage scheme", "recent/"+loc)
- resp, err = v.Head("recent/" + loc)
+ v.logger.Infof("Mtime: created %q to migrate existing block to new storage scheme", "recent/"+key)
+ resp, err = v.head("recent/" + key)
if err != nil {
- v.logger.WithError(err).Errorf("HEAD failed after creating %q", "recent/"+loc)
+ v.logger.WithError(err).Errorf("HEAD failed after creating %q", "recent/"+key)
return s3AWSZeroTime, v.translateError(err)
}
} else if err != nil {
if v.volume.ReadOnly {
return MethodDisabledError
}
- _, err := v.Head(loc)
+ key := v.key(loc)
+ _, err := v.head(key)
err = v.translateError(err)
- if os.IsNotExist(err) && v.fixRace(loc) {
+ if os.IsNotExist(err) && v.fixRace(key) {
// The data object got trashed in a race, but fixRace
// rescued it.
} else if err != nil {
return err
}
- err = v.writeObject(context.Background(), "recent/"+loc, nil)
+ err = v.writeObject(context.Background(), "recent/"+key, nil)
return v.translateError(err)
}
-// checkRaceWindow returns a non-nil error if trash/loc is, or might
-// be, in the race window (i.e., it's not safe to trash loc).
-func (v *S3AWSVolume) checkRaceWindow(loc string) error {
- resp, err := v.Head("trash/" + loc)
+// checkRaceWindow returns a non-nil error if trash/key is, or might
+// be, in the race window (i.e., it's not safe to trash key).
+func (v *S3AWSVolume) checkRaceWindow(key string) error {
+ resp, err := v.head("trash/" + key)
err = v.translateError(err)
if os.IsNotExist(err) {
// OK, trash/X doesn't exist so we're not in the race
// trash/X's lifetime. The new timestamp might not
// become visible until now+raceWindow, and EmptyTrash
// is allowed to delete trash/X before then.
- return fmt.Errorf("same block is already in trash, and safe window ended %s ago", -safeWindow)
+ return fmt.Errorf("%s: same block is already in trash, and safe window ended %s ago", key, -safeWindow)
}
// trash/X exists, but it won't be eligible for deletion until
// after now+raceWindow, so it's safe to overwrite it.
}
req := b.svc.DeleteObjectRequest(input)
_, err := req.Send(context.Background())
- //err := b.Bucket().Del(path)
b.stats.TickOps("delete")
b.stats.Tick(&b.stats.Ops, &b.stats.DelOps)
b.stats.TickErr(err)
} else if time.Since(t) < v.cluster.Collections.BlobSigningTTL.Duration() {
return nil
}
+ key := v.key(loc)
if v.cluster.Collections.BlobTrashLifetime == 0 {
if !v.UnsafeDelete {
return ErrS3TrashDisabled
}
- return v.translateError(v.bucket.Del(loc))
+ return v.translateError(v.bucket.Del(key))
}
- err := v.checkRaceWindow(loc)
+ err := v.checkRaceWindow(key)
if err != nil {
return err
}
- err = v.safeCopy("trash/"+loc, loc)
+ err = v.safeCopy("trash/"+key, key)
if err != nil {
return err
}
- return v.translateError(v.bucket.Del(loc))
+ return v.translateError(v.bucket.Del(key))
}
// Untrash moves block from trash back into store
func (v *S3AWSVolume) Untrash(loc string) error {
- err := v.safeCopy(loc, "trash/"+loc)
+ key := v.key(loc)
+ err := v.safeCopy(key, "trash/"+key)
if err != nil {
return err
}
- err = v.writeObject(context.Background(), "recent/"+loc, nil)
+ err = v.writeObject(context.Background(), "recent/"+key, nil)
return v.translateError(err)
}
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"bytes"
})
}
+func (s *StubbedS3AWSSuite) TestGenericWithPrefix(c *check.C) {
+ DoGenericVolumeTests(c, false, func(t TB, cluster *arvados.Cluster, volume arvados.Volume, logger logrus.FieldLogger, metrics *volumeMetricsVecs) TestableVolume {
+ v := s.newTestableVolume(c, cluster, volume, metrics, -2*time.Second)
+ v.PrefixLength = 3
+ return v
+ })
+}
+
func (s *StubbedS3AWSSuite) TestIndex(c *check.C) {
v := s.newTestableVolume(c, s.cluster, arvados.Volume{Replication: 2}, newVolumeMetricsVecs(prometheus.NewRegistry()), 0)
v.IndexPageSize = 3
// as of June 24, 2020. Cf. https://forums.aws.amazon.com/ann.jspa?annID=5816
vol := S3AWSVolume{
S3VolumeDriverParameters: arvados.S3VolumeDriverParameters{
- AccessKey: "xxx",
- SecretKey: "xxx",
- Endpoint: stub.URL,
- Region: "test-region-1",
- Bucket: "test-bucket-name",
+ AccessKeyID: "xxx",
+ SecretAccessKey: "xxx",
+ Endpoint: stub.URL,
+ Region: "test-region-1",
+ Bucket: "test-bucket-name",
},
cluster: s.cluster,
logger: ctxlog.TestLogger(c),
panic(err)
}
v.serverClock.now = nil
- _, err = v.Head(key)
+ _, err = v.head(key)
if err != nil {
panic(err)
}
false, false, false, true, true, true,
},
} {
- c.Log("Scenario: ", scenario.label)
-
- // We have a few tests to run for each scenario, and
- // the tests are expected to change state. By calling
- // this setup func between tests, we (re)create the
- // scenario as specified, using a new unique block
- // locator to prevent interference from previous
- // tests.
-
- setupScenario := func() (string, []byte) {
- nextKey++
- blk := []byte(fmt.Sprintf("%d", nextKey))
- loc := fmt.Sprintf("%x", md5.Sum(blk))
- c.Log("\t", loc)
- putS3Obj(scenario.dataT, loc, blk)
- putS3Obj(scenario.recentT, "recent/"+loc, nil)
- putS3Obj(scenario.trashT, "trash/"+loc, blk)
- v.serverClock.now = &t0
- return loc, blk
- }
-
- // Check canGet
- loc, blk := setupScenario()
- buf := make([]byte, len(blk))
- _, err := v.Get(context.Background(), loc, buf)
- c.Check(err == nil, check.Equals, scenario.canGet)
- if err != nil {
- c.Check(os.IsNotExist(err), check.Equals, true)
- }
-
- // Call Trash, then check canTrash and canGetAfterTrash
- loc, _ = setupScenario()
- err = v.Trash(loc)
- c.Check(err == nil, check.Equals, scenario.canTrash)
- _, err = v.Get(context.Background(), loc, buf)
- c.Check(err == nil, check.Equals, scenario.canGetAfterTrash)
- if err != nil {
- c.Check(os.IsNotExist(err), check.Equals, true)
- }
-
- // Call Untrash, then check canUntrash
- loc, _ = setupScenario()
- err = v.Untrash(loc)
- c.Check(err == nil, check.Equals, scenario.canUntrash)
- if scenario.dataT != none || scenario.trashT != none {
- // In all scenarios where the data exists, we
- // should be able to Get after Untrash --
- // regardless of timestamps, errors, race
- // conditions, etc.
+ for _, prefixLength := range []int{0, 3} {
+ v.PrefixLength = prefixLength
+ c.Logf("Scenario: %q (prefixLength=%d)", scenario.label, prefixLength)
+
+ // We have a few tests to run for each scenario, and
+ // the tests are expected to change state. By calling
+ // this setup func between tests, we (re)create the
+ // scenario as specified, using a new unique block
+ // locator to prevent interference from previous
+ // tests.
+
+ setupScenario := func() (string, []byte) {
+ nextKey++
+ blk := []byte(fmt.Sprintf("%d", nextKey))
+ loc := fmt.Sprintf("%x", md5.Sum(blk))
+ key := loc
+ if prefixLength > 0 {
+ key = loc[:prefixLength] + "/" + loc
+ }
+ c.Log("\t", loc, "\t", key)
+ putS3Obj(scenario.dataT, key, blk)
+ putS3Obj(scenario.recentT, "recent/"+key, nil)
+ putS3Obj(scenario.trashT, "trash/"+key, blk)
+ v.serverClock.now = &t0
+ return loc, blk
+ }
+
+ // Check canGet
+ loc, blk := setupScenario()
+ buf := make([]byte, len(blk))
+ _, err := v.Get(context.Background(), loc, buf)
+ c.Check(err == nil, check.Equals, scenario.canGet)
+ if err != nil {
+ c.Check(os.IsNotExist(err), check.Equals, true)
+ }
+
+ // Call Trash, then check canTrash and canGetAfterTrash
+ loc, _ = setupScenario()
+ err = v.Trash(loc)
+ c.Check(err == nil, check.Equals, scenario.canTrash)
_, err = v.Get(context.Background(), loc, buf)
+ c.Check(err == nil, check.Equals, scenario.canGetAfterTrash)
+ if err != nil {
+ c.Check(os.IsNotExist(err), check.Equals, true)
+ }
+
+ // Call Untrash, then check canUntrash
+ loc, _ = setupScenario()
+ err = v.Untrash(loc)
+ c.Check(err == nil, check.Equals, scenario.canUntrash)
+ if scenario.dataT != none || scenario.trashT != none {
+ // In all scenarios where the data exists, we
+ // should be able to Get after Untrash --
+ // regardless of timestamps, errors, race
+ // conditions, etc.
+ _, err = v.Get(context.Background(), loc, buf)
+ c.Check(err, check.IsNil)
+ }
+
+ // Call EmptyTrash, then check haveTrashAfterEmpty and
+ // freshAfterEmpty
+ loc, _ = setupScenario()
+ v.EmptyTrash()
+ _, err = v.head("trash/" + v.key(loc))
+ c.Check(err == nil, check.Equals, scenario.haveTrashAfterEmpty)
+ if scenario.freshAfterEmpty {
+ t, err := v.Mtime(loc)
+ c.Check(err, check.IsNil)
+ // new mtime must be current (with an
+ // allowance for 1s timestamp precision)
+ c.Check(t.After(t0.Add(-time.Second)), check.Equals, true)
+ }
+
+ // Check for current Mtime after Put (applies to all
+ // scenarios)
+ loc, blk = setupScenario()
+ err = v.Put(context.Background(), loc, blk)
c.Check(err, check.IsNil)
- }
-
- // Call EmptyTrash, then check haveTrashAfterEmpty and
- // freshAfterEmpty
- loc, _ = setupScenario()
- v.EmptyTrash()
- _, err = v.Head("trash/" + loc)
- c.Check(err == nil, check.Equals, scenario.haveTrashAfterEmpty)
- if scenario.freshAfterEmpty {
t, err := v.Mtime(loc)
c.Check(err, check.IsNil)
- // new mtime must be current (with an
- // allowance for 1s timestamp precision)
c.Check(t.After(t0.Add(-time.Second)), check.Equals, true)
}
-
- // Check for current Mtime after Put (applies to all
- // scenarios)
- loc, blk = setupScenario()
- err = v.Put(context.Background(), loc, blk)
- c.Check(err, check.IsNil)
- t, err := v.Mtime(loc)
- c.Check(err, check.IsNil)
- c.Check(t.After(t0.Add(-time.Second)), check.Equals, true)
}
}
S3AWSVolume: &S3AWSVolume{
S3VolumeDriverParameters: arvados.S3VolumeDriverParameters{
IAMRole: iamRole,
- AccessKey: accessKey,
- SecretKey: secretKey,
+ AccessKeyID: accessKey,
+ SecretAccessKey: secretKey,
Bucket: S3AWSTestBucketName,
Endpoint: endpoint,
Region: "test-region-1",
// PutRaw skips the ContentMD5 test
func (v *TestableS3AWSVolume) PutRaw(loc string, block []byte) {
-
+ key := v.key(loc)
r := NewCountingReader(bytes.NewReader(block), v.bucket.stats.TickOutBytes)
uploader := s3manager.NewUploaderWithClient(v.bucket.svc, func(u *s3manager.Uploader) {
_, err := uploader.Upload(&s3manager.UploadInput{
Bucket: aws.String(v.bucket.bucket),
- Key: aws.String(loc),
+ Key: aws.String(key),
Body: r,
})
if err != nil {
- v.logger.Printf("PutRaw: %s: %+v", loc, err)
+ v.logger.Printf("PutRaw: %s: %+v", key, err)
}
empty := bytes.NewReader([]byte{})
_, err = uploader.Upload(&s3manager.UploadInput{
Bucket: aws.String(v.bucket.bucket),
- Key: aws.String("recent/" + loc),
+ Key: aws.String("recent/" + key),
Body: empty,
})
if err != nil {
- v.logger.Printf("PutRaw: recent/%s: %+v", loc, err)
+ v.logger.Printf("PutRaw: recent/%s: %+v", key, err)
}
}
// TouchWithDate turns back the clock while doing a Touch(). We assume
// there are no other operations happening on the same s3test server
// while we do this.
-func (v *TestableS3AWSVolume) TouchWithDate(locator string, lastPut time.Time) {
+func (v *TestableS3AWSVolume) TouchWithDate(loc string, lastPut time.Time) {
v.serverClock.now = &lastPut
uploader := s3manager.NewUploaderWithClient(v.bucket.svc)
empty := bytes.NewReader([]byte{})
_, err := uploader.UploadWithContext(context.Background(), &s3manager.UploadInput{
Bucket: aws.String(v.bucket.bucket),
- Key: aws.String("recent/" + locator),
+ Key: aws.String("recent/" + v.key(loc)),
Body: empty,
})
if err != nil {
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"sync"
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"encoding/json"
// getStatusItem("foo","bar","baz") retrieves /status.json, decodes
// the response body into resp, and returns resp["foo"]["bar"]["baz"].
func getStatusItem(h *handler, keys ...string) interface{} {
- resp := IssueRequest(h, &RequestTester{"/status.json", "", "GET", nil})
+ resp := IssueRequest(h, &RequestTester{"/status.json", "", "GET", nil, ""})
var s interface{}
json.NewDecoder(resp.Body).Decode(&s)
for _, k := range keys {
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"errors"
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"container/list"
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"context"
if err != nil {
return giveup("opening %q: %s", udir, err)
}
+ defer d.Close()
uuids, err := d.Readdirnames(0)
if err != nil {
return giveup("reading %q: %s", udir, err)
return fmt.Errorf("error creating directory %s: %s", bdir, err)
}
- tmpfile, tmperr := v.os.TempFile(bdir, "tmp"+loc)
- if tmperr != nil {
- return fmt.Errorf("TempFile(%s, tmp%s) failed: %s", bdir, loc, tmperr)
- }
-
bpath := v.blockPath(loc)
+ tmpfile, err := v.os.TempFile(bdir, "tmp"+loc)
+ if err != nil {
+ return fmt.Errorf("TempFile(%s, tmp%s) failed: %s", bdir, loc, err)
+ }
+ defer v.os.Remove(tmpfile.Name())
+ defer tmpfile.Close()
- if err := v.lock(ctx); err != nil {
+ if err = v.lock(ctx); err != nil {
return err
}
defer v.unlock()
n, err := io.Copy(tmpfile, rdr)
v.os.stats.TickOutBytes(uint64(n))
if err != nil {
- err = fmt.Errorf("error writing %s: %s", bpath, err)
- tmpfile.Close()
- v.os.Remove(tmpfile.Name())
- return err
+ return fmt.Errorf("error writing %s: %s", bpath, err)
}
- if err := tmpfile.Close(); err != nil {
- err = fmt.Errorf("error closing %s: %s", tmpfile.Name(), err)
- v.os.Remove(tmpfile.Name())
- return err
+ if err = tmpfile.Close(); err != nil {
+ return fmt.Errorf("error closing %s: %s", tmpfile.Name(), err)
}
// ext4 uses a low-precision clock and effectively backdates
// files by up to 10 ms, sometimes across a 1-second boundary,
v.os.stats.TickOps("utimes")
v.os.stats.Tick(&v.os.stats.UtimesOps)
if err = os.Chtimes(tmpfile.Name(), ts, ts); err != nil {
- err = fmt.Errorf("error setting timestamps on %s: %s", tmpfile.Name(), err)
- v.os.Remove(tmpfile.Name())
- return err
+ return fmt.Errorf("error setting timestamps on %s: %s", tmpfile.Name(), err)
}
- if err := v.os.Rename(tmpfile.Name(), bpath); err != nil {
- err = fmt.Errorf("error renaming %s to %s: %s", tmpfile.Name(), bpath, err)
- v.os.Remove(tmpfile.Name())
- return err
+ if err = v.os.Rename(tmpfile.Name(), bpath); err != nil {
+ return fmt.Errorf("error renaming %s to %s: %s", tmpfile.Name(), bpath, err)
}
return nil
}
// e4de7a2810f5554cd39b36d8ddb132ff+67108864 1388701136
//
func (v *UnixVolume) IndexTo(prefix string, w io.Writer) error {
- var lastErr error
rootdir, err := v.os.Open(v.Root)
if err != nil {
return err
}
- defer rootdir.Close()
v.os.stats.TickOps("readdir")
v.os.stats.Tick(&v.os.stats.ReaddirOps)
- for {
- names, err := rootdir.Readdirnames(1)
- if err == io.EOF {
- return lastErr
- } else if err != nil {
- return err
- }
- if !strings.HasPrefix(names[0], prefix) && !strings.HasPrefix(prefix, names[0]) {
+ subdirs, err := rootdir.Readdirnames(-1)
+ rootdir.Close()
+ if err != nil {
+ return err
+ }
+ for _, subdir := range subdirs {
+ if !strings.HasPrefix(subdir, prefix) && !strings.HasPrefix(prefix, subdir) {
// prefix excludes all blocks stored in this dir
continue
}
- if !blockDirRe.MatchString(names[0]) {
+ if !blockDirRe.MatchString(subdir) {
continue
}
- blockdirpath := filepath.Join(v.Root, names[0])
- blockdir, err := v.os.Open(blockdirpath)
- if err != nil {
- v.logger.WithError(err).Errorf("error reading %q", blockdirpath)
- lastErr = fmt.Errorf("error reading %q: %s", blockdirpath, err)
- continue
- }
- v.os.stats.TickOps("readdir")
- v.os.stats.Tick(&v.os.stats.ReaddirOps)
- for {
- fileInfo, err := blockdir.Readdir(1)
- if err == io.EOF {
+ blockdirpath := filepath.Join(v.Root, subdir)
+
+ var dirents []os.DirEntry
+ for attempt := 0; ; attempt++ {
+ v.os.stats.TickOps("readdir")
+ v.os.stats.Tick(&v.os.stats.ReaddirOps)
+ dirents, err = os.ReadDir(blockdirpath)
+ if err == nil {
break
+ } else if attempt < 5 && strings.Contains(err.Error(), "errno 523") {
+ // EBADCOOKIE (NFS stopped accepting
+ // our readdirent cookie) -- retry a
+ // few times before giving up
+ v.logger.WithError(err).Printf("retry after error reading %s", blockdirpath)
+ continue
+ } else {
+ return err
+ }
+ }
+
+ for _, dirent := range dirents {
+ fileInfo, err := dirent.Info()
+ if os.IsNotExist(err) {
+ // File disappeared between ReadDir() and now
+ continue
} else if err != nil {
- v.logger.WithError(err).Errorf("error reading %q", blockdirpath)
- lastErr = fmt.Errorf("error reading %q: %s", blockdirpath, err)
- break
+ v.logger.WithError(err).Errorf("error getting FileInfo for %q in %q", dirent.Name(), blockdirpath)
+ return err
}
- name := fileInfo[0].Name()
+ name := fileInfo.Name()
if !strings.HasPrefix(name, prefix) {
continue
}
}
_, err = fmt.Fprint(w,
name,
- "+", fileInfo[0].Size(),
- " ", fileInfo[0].ModTime().UnixNano(),
+ "+", fileInfo.Size(),
+ " ", fileInfo.ModTime().UnixNano(),
"\n")
if err != nil {
- blockdir.Close()
return fmt.Errorf("error writing: %s", err)
}
}
- blockdir.Close()
}
+ return nil
}
// Trash trashes the block data from the unix storage
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"bytes"
c.Check(err, check.NotNil)
c.Check(stats(), check.Matches, `.*"StatOps":[^0],.*`)
c.Check(stats(), check.Matches, `.*"Errors":[^0],.*`)
- c.Check(stats(), check.Matches, `.*"\*os\.PathError":[^0].*`)
+ c.Check(stats(), check.Matches, `.*"\*(fs|os)\.PathError":[^0].*`) // os.PathError changed to fs.PathError in Go 1.16
c.Check(stats(), check.Matches, `.*"InBytes":0,.*`)
c.Check(stats(), check.Matches, `.*"OpenOps":0,.*`)
c.Check(stats(), check.Matches, `.*"CreateOps":0,.*`)
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"context"
"fmt"
"io"
"math/big"
+ "sort"
"sync/atomic"
"time"
vm.writables = append(vm.writables, mnt)
}
}
+ // pri(mnt): return highest priority of any storage class
+ // offered by mnt
+ pri := func(mnt *VolumeMount) int {
+ any, best := false, 0
+ for class := range mnt.KeepMount.StorageClasses {
+ if p := cluster.StorageClasses[class].Priority; !any || best < p {
+ best = p
+ any = true
+ }
+ }
+ return best
+ }
+ // less(a,b): sort first by highest priority of any offered
+ // storage class (highest->lowest), then by volume UUID
+ less := func(a, b *VolumeMount) bool {
+ if pa, pb := pri(a), pri(b); pa != pb {
+ return pa > pb
+ } else {
+ return a.KeepMount.UUID < b.KeepMount.UUID
+ }
+ }
+ sort.Slice(vm.readables, func(i, j int) bool {
+ return less(vm.readables[i], vm.readables[j])
+ })
+ sort.Slice(vm.writables, func(i, j int) bool {
+ return less(vm.writables[i], vm.writables[j])
+ })
+ sort.Slice(vm.mounts, func(i, j int) bool {
+ return less(vm.mounts[i], vm.mounts[j])
+ })
return vm, nil
}
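The sorting introduced above orders mounts by the highest priority of any storage class they offer (descending), breaking ties by UUID so the order is deterministic across restarts. A stripped-down sketch of that two-level `sort.Slice` comparison, using a stand-in struct rather than Arvados' `VolumeMount`:

```go
package main

import (
	"fmt"
	"sort"
)

// mount is a stand-in for a volume mount with a precomputed
// best storage-class priority.
type mount struct {
	UUID     string
	Priority int
}

// sortMounts orders highest priority first; ties fall back to UUID
// so equal-priority mounts always appear in a stable order.
func sortMounts(ms []mount) {
	sort.Slice(ms, func(i, j int) bool {
		if ms[i].Priority != ms[j].Priority {
			return ms[i].Priority > ms[j].Priority
		}
		return ms[i].UUID < ms[j].UUID
	})
}

func main() {
	ms := []mount{{"zzzzz-c", 1}, {"zzzzz-a", 2}, {"zzzzz-b", 2}}
	sortMounts(ms)
	fmt.Println(ms) // highest priority first; ties broken by UUID
}
```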
return vm.readables
}
-// AllWritable returns an array of all writable volumes
+// AllWritable returns writable volumes, sorted by priority/uuid. Used
+// by CompareAndTouch to ensure higher-priority volumes are checked
+// first.
func (vm *RRVolumeManager) AllWritable() []*VolumeMount {
return vm.writables
}
-// NextWritable returns the next writable
-func (vm *RRVolumeManager) NextWritable() *VolumeMount {
+// NextWritable returns writable volumes, rotated by vm.counter so
+// each volume gets a turn to be first. Used by PutBlock to distribute
+// new data across available volumes.
+func (vm *RRVolumeManager) NextWritable() []*VolumeMount {
if len(vm.writables) == 0 {
return nil
}
- i := atomic.AddUint32(&vm.counter, 1)
- return vm.writables[i%uint32(len(vm.writables))]
+ offset := (int(atomic.AddUint32(&vm.counter, 1)) - 1) % len(vm.writables)
+ return append(append([]*VolumeMount(nil), vm.writables[offset:]...), vm.writables[:offset]...)
}
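The new NextWritable returns the whole writable list rotated by an atomic counter, so the caller can fall through to lower-priority volumes on failure while each volume still periodically gets first shot at new data. A minimal sketch of that rotation, with strings standing in for `*VolumeMount`:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

var counter uint32

// nextWritable returns vols rotated so that successive calls start
// at successive offsets, distributing writes round-robin while still
// exposing every volume as a fallback.
func nextWritable(vols []string) []string {
	if len(vols) == 0 {
		return nil
	}
	offset := (int(atomic.AddUint32(&counter, 1)) - 1) % len(vols)
	// Build a fresh slice so callers can't mutate the shared order.
	return append(append([]string(nil), vols[offset:]...), vols[:offset]...)
}

func main() {
	vols := []string{"a", "b", "c"}
	fmt.Println(nextWritable(vols)) // [a b c]
	fmt.Println(nextWritable(vols)) // [b c a]
	fmt.Println(nextWritable(vols)) // [c a b]
}
```

The atomic increment makes the rotation safe under concurrent callers; two goroutines may occasionally observe the same offset ordering, but never a torn counter.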
// VolumeStats returns an ioStats for the given volume.
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"bytes"
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"bytes"
}
return nil
} else {
- return NotFoundError
+ return os.ErrNotExist
}
}
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
/* A WorkQueue is an asynchronous thread-safe queue manager. It
provides a channel from which items can be read off the queue, and
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package keepstore
import (
"container/list"
s.files = ["bin/arvados-login-sync", "agpl-3.0.txt"]
s.executables << "arvados-login-sync"
s.required_ruby_version = '>= 2.1.0'
- s.add_runtime_dependency 'arvados', '~> 1.3.0', '>= 1.3.0'
+ s.add_runtime_dependency 'arvados', '>= 1.3.3.20190320201707'
s.add_runtime_dependency 'launchy', '< 2.5'
- # arvados-google-api-client 0.8.7.2 is incompatible with faraday 0.16.2
- s.add_dependency('faraday', '< 0.16')
+ # We need at least version 0.8.7.3, cf. https://dev.arvados.org/issues/15673
+ s.add_dependency('arvados-google-api-client', '>= 0.8.7.3', '< 0.8.9')
# arvados-google-api-client (and thus arvados) gems
# depend on signet, but signet 0.12 is incompatible with ruby 2.3.
s.add_dependency('signet', '< 0.12')
require 'etc'
require 'fileutils'
require 'yaml'
+require 'optparse'
req_envs = %w(ARVADOS_API_HOST ARVADOS_API_TOKEN ARVADOS_VIRTUAL_MACHINE_UUID)
req_envs.each do |k|
end
end
-exclusive_mode = ARGV.index("--exclusive")
+options = {}
+OptionParser.new do |parser|
+ parser.on('--exclusive', 'Manage SSH keys file exclusively.')
+ parser.on('--rotate-tokens', 'Force a rotation of all user tokens.')
+ parser.on('--skip-missing-users', "Don't try to create any local accounts.")
+ parser.on('--token-lifetime SECONDS', 'Create user tokens that expire after SECONDS.', Integer)
+ parser.on('--debug', 'Enable debug output')
+end.parse!(into: options)
+
exclusive_banner = "#######################################################################################
# THIS FILE IS MANAGED BY #{$0} -- CHANGES WILL BE OVERWRITTEN #
#######################################################################################\n\n"
start_banner = "### BEGIN Arvados-managed keys -- changes between markers will be overwritten\n"
end_banner = "### END Arvados-managed keys -- changes between markers will be overwritten\n"
-# Don't try to create any local accounts
-skip_missing_users = ARGV.index("--skip-missing-users")
-
keys = ''
begin
+ debug = false
+ if options[:"debug"]
+ debug = true
+ end
arv = Arvados.new({ :suppress_ssl_warnings => false })
logincluster_arv = Arvados.new({ :api_host => (ENV['LOGINCLUSTER_ARVADOS_API_HOST'] || ENV['ARVADOS_API_HOST']),
:api_token => (ENV['LOGINCLUSTER_ARVADOS_API_TOKEN'] || ENV['ARVADOS_API_TOKEN']),
begin
pwnam[l[:username]] = Etc.getpwnam(l[:username])
rescue
- if skip_missing_users
+ if options[:"skip-missing-users"]
STDERR.puts "Account #{l[:username]} not found. Skipping"
true
end
else
if pwnam[l[:username]].uid < uid_min
- STDERR.puts "Account #{l[:username]} uid #{pwnam[l[:username]].uid} < uid_min #{uid_min}. Skipping"
+ STDERR.puts "Account #{l[:username]} uid #{pwnam[l[:username]].uid} < uid_min #{uid_min}. Skipping" if debug
true
end
end
# Collect all keys
logins.each do |l|
+ STDERR.puts("Considering #{l[:username]} ...") if debug
keys[l[:username]] = Array.new() if not keys.has_key?(l[:username])
key = l[:public_key]
if !key.nil?
if existing_groups.index(addgroup).nil?
# User should be in group, but isn't, so add them.
STDERR.puts "Add user #{username} to #{addgroup} group"
- system("adduser", username, addgroup)
+ system("usermod", "-aG", addgroup, username)
end
end
if groups.index(removegroup).nil?
# User is in a group, but shouldn't be, so remove them.
STDERR.puts "Remove user #{username} from #{removegroup} group"
- system("deluser", username, removegroup)
+ system("gpasswd", "-d", username, removegroup)
end
end
oldkeys = ""
end
- if exclusive_mode
+ if options[:exclusive]
newkeys = exclusive_banner + newkeys
elsif oldkeys.start_with?(exclusive_banner)
newkeys = start_banner + newkeys + end_banner
tokenfile = File.join(configarvados, "settings.conf")
begin
- if !File.exist?(tokenfile)
- user_token = logincluster_arv.api_client_authorization.create(api_client_authorization: {owner_uuid: l[:user_uuid], api_client_id: 0})
+ STDERR.puts "Processing #{tokenfile} ..." if debug
+ newToken = false
+      if File.exist?(tokenfile) && !options[:"rotate-tokens"]
+ # check if the token is still valid
+ myToken = ENV["ARVADOS_API_TOKEN"]
+ userEnv = IO::read(tokenfile)
+ if (m = /^ARVADOS_API_TOKEN=(.*?\n)/m.match(userEnv))
+ begin
+ tmp_arv = Arvados.new({ :api_host => (ENV['LOGINCLUSTER_ARVADOS_API_HOST'] || ENV['ARVADOS_API_HOST']),
+ :api_token => (m[1]),
+ :suppress_ssl_warnings => false })
+ tmp_arv.user.current
+ rescue Arvados::TransactionFailedError => e
+ if e.to_s =~ /401 Unauthorized/
+ STDERR.puts "Account #{l[:username]} token not valid, creating new token."
+ newToken = true
+ else
+ raise
+ end
+ end
+ end
+      else
+        STDERR.puts "Account #{l[:username]} token file not found or rotation requested, creating new token."
+ newToken = true
+ end
+ if newToken
+ aca_params = {owner_uuid: l[:user_uuid], api_client_id: 0}
+ if options[:"token-lifetime"] && options[:"token-lifetime"] > 0
+ aca_params.merge!(expires_at: (Time.now + options[:"token-lifetime"]))
+ end
+ user_token = logincluster_arv.api_client_authorization.create(api_client_authorization: aca_params)
f = File.new(tokenfile, 'w')
f.write("ARVADOS_API_HOST=#{ENV['ARVADOS_API_HOST']}\n")
f.write("ARVADOS_API_TOKEN=v2/#{user_token[:uuid]}/#{user_token[:api_token]}\n")
With no arguments, list available arvboxes.
arvopen:
- Open an Arvados uuid in web browser (http://curover.se)
+ Open an Arvados uuid in web browser (http://arvadosapi.com)
arvissue
Open an Arvados ticket in web browser (http://dev.arvados.org)
arvopen() {
if [[ -n "$1" ]] ; then
- xdg-open https://curover.se/$1
+ xdg-open https://arvadosapi.com/$1
else
echo "Open Arvados uuid in browser"
echo "Usage: arvopen uuid"
WORKBENCH2_ROOT="$ARVBOX_DATA/workbench2"
fi
+if test -z "$ARVADOS_BRANCH" ; then
+ ARVADOS_BRANCH=main
+fi
+
+if test -z "$WORKBENCH2_BRANCH" ; then
+ WORKBENCH2_BRANCH=main
+fi
+
+# On release branches, update this to the docker tag for the released version.
+DEFAULT_TAG=
+
PG_DATA="$ARVBOX_DATA/postgres"
VAR_DATA="$ARVBOX_DATA/var"
PASSENGER="$ARVBOX_DATA/passenger"
GOSTUFF="$ARVBOX_DATA/gopath"
RLIBS="$ARVBOX_DATA/Rlibs"
ARVADOS_CONTAINER_PATH="/var/lib/arvados-arvbox"
-GEM_HOME="/var/lib/arvados/lib/ruby/gems/2.5.0"
+GEM_HOME="/var/lib/arvados/lib/ruby/gems/2.7.0"
getip() {
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $ARVBOX_CONTAINER
fi
fi
+ if test -z "$TAG" -a -n "$DEFAULT_TAG"; then
+ TAG=":$DEFAULT_TAG"
+ fi
+
if [[ "$CONFIG" =~ ^public ]] ; then
if test -n "$ARVBOX_PUBLISH_IP" ; then
localip=$ARVBOX_PUBLISH_IP
if ! test -d "$ARVADOS_ROOT" ; then
git clone https://git.arvados.org/arvados.git "$ARVADOS_ROOT"
+ git -C "$ARVADOS_ROOT" checkout $ARVADOS_BRANCH
fi
if ! test -d "$COMPOSER_ROOT" ; then
git clone https://github.com/arvados/composer.git "$COMPOSER_ROOT"
git -C "$COMPOSER_ROOT" checkout arvados-fork
- git -C "$COMPOSER_ROOT" pull
fi
if ! test -d "$WORKBENCH2_ROOT" ; then
git clone https://git.arvados.org/arvados-workbench2.git "$WORKBENCH2_ROOT"
+    git -C "$WORKBENCH2_ROOT" checkout $WORKBENCH2_BRANCH
fi
if [[ "$CONFIG" = test ]] ; then
fi
set -e
+ # Get the go version we should use for bootstrapping
+ GO_VERSION=`grep 'const goversion =' $LOCAL_ARVADOS_ROOT/lib/install/deps.go |awk -F'"' '{print $2}'`
+
if test "$1" = localdemo -o "$1" = publicdemo ; then
BUILDTYPE=demo
else
BUILDTYPE=dev
fi
- docker build --build-arg=BUILDTYPE=$BUILDTYPE $NO_CACHE --build-arg=arvados_version=$GITHEAD --build-arg=workdir=/tools/arvbox/lib/arvbox/docker -t arvados/arvbox-base:$GITHEAD -f "$ARVBOX_DOCKER/Dockerfile.base" "$LOCAL_ARVADOS_ROOT"
+ if test "$ARVADOS_BRANCH" = "main" ; then
+ ARVADOS_BRANCH=$GITHEAD
+ fi
+
+ docker build --build-arg=BUILDTYPE=$BUILDTYPE $NO_CACHE \
+ --build-arg=go_version=$GO_VERSION \
+ --build-arg=arvados_version=$ARVADOS_BRANCH \
+ --build-arg=workbench2_version=$WORKBENCH2_BRANCH \
+ --build-arg=workdir=/tools/arvbox/lib/arvbox/docker \
+ -t arvados/arvbox-base:$GITHEAD \
+ -f "$ARVBOX_DOCKER/Dockerfile.base" \
+ "$LOCAL_ARVADOS_ROOT"
docker tag $FORCE arvados/arvbox-base:$GITHEAD arvados/arvbox-base:latest
- docker build $NO_CACHE -t arvados/arvbox-$BUILDTYPE:$GITHEAD -f "$ARVBOX_DOCKER/Dockerfile.$BUILDTYPE" "$ARVBOX_DOCKER"
+ docker build $NO_CACHE \
+ --build-arg=go_version=$GO_VERSION \
+ --build-arg=arvados_version=$ARVADOS_BRANCH \
+ --build-arg=workbench2_version=$WORKBENCH2_BRANCH \
+ -t arvados/arvbox-$BUILDTYPE:$GITHEAD \
+ -f "$ARVBOX_DOCKER/Dockerfile.$BUILDTYPE" \
+ "$ARVBOX_DOCKER"
docker tag $FORCE arvados/arvbox-$BUILDTYPE:$GITHEAD arvados/arvbox-$BUILDTYPE:latest
}
echo "Status: running"
echo "Container IP: $(getip)"
echo "Published host: $(gethost)"
+ echo "Workbench: https://$(gethost)"
else
echo "Status: not running"
fi
else
echo "Usage: $0 $subcmd <start|stop|restart> <service>"
echo "Available services:"
- exec docker execa $ARVBOX_CONTAINER ls /etc/service
+ exec docker exec $ARVBOX_CONTAINER ls /etc/service
fi
;;
cd /usr/src/arvados/services/api
export DISABLE_DATABASE_ENVIRONMENT_CHECK=1
export RAILS_ENV=development
-flock $GEM_HOME/gems.lock bundle exec rake db:drop
+flock $GEM_HOME/gems.lock bin/bundle exec rake db:drop
rm $ARVADOS_CONTAINER_PATH/api_database_setup
rm $ARVADOS_CONTAINER_PATH/superuser_token
sv start api
FROM debian:10-slim as dev
ENV DEBIAN_FRONTEND noninteractive
-RUN echo "deb http://deb.debian.org/debian buster-backports main" > /etc/apt/sources.list.d/backports.list
-
RUN apt-get update && \
apt-get -yq --no-install-recommends -o Acquire::Retries=6 install \
- golang -t buster-backports
-
-RUN apt-get -yq --no-install-recommends -o Acquire::Retries=6 install \
- build-essential ca-certificates git libpam0g-dev
+ build-essential ca-certificates git libpam0g-dev wget
ENV GOPATH /var/lib/gopath
+ARG go_version
+
+# Get Go
+RUN cd /usr/src && \
+ wget https://golang.org/dl/go${go_version}.linux-amd64.tar.gz && \
+ tar xzf go${go_version}.linux-amd64.tar.gz && \
+ ln -s /usr/src/go/bin/go /usr/local/bin/go-${go_version} && \
+ ln -s /usr/src/go/bin/gofmt /usr/local/bin/gofmt-${go_version} && \
+ ln -s /usr/local/bin/go-${go_version} /usr/local/bin/go && \
+ ln -s /usr/local/bin/gofmt-${go_version} /usr/local/bin/gofmt
# the --mount option requires the experimental syntax enabled (enables
# buildkit) on the first line of this file. This Dockerfile must also be built
FROM debian:10-slim as demo
ENV DEBIAN_FRONTEND noninteractive
-RUN echo "deb http://deb.debian.org/debian buster-backports main" > /etc/apt/sources.list.d/backports.list
-
RUN apt-get update && \
apt-get -yq --no-install-recommends -o Acquire::Retries=6 install \
- golang -t buster-backports
-
-RUN apt-get -yq --no-install-recommends -o Acquire::Retries=6 install \
- build-essential ca-certificates git libpam0g-dev
+ build-essential ca-certificates git libpam0g-dev wget
ENV GOPATH /var/lib/gopath
+ARG go_version
+
+RUN cd /usr/src && \
+ wget https://golang.org/dl/go${go_version}.linux-amd64.tar.gz && \
+ tar xzf go${go_version}.linux-amd64.tar.gz && \
+ ln -s /usr/src/go/bin/go /usr/local/bin/go-${go_version} && \
+ ln -s /usr/src/go/bin/gofmt /usr/local/bin/gofmt-${go_version} && \
+ ln -s /usr/local/bin/go-${go_version} /usr/local/bin/go && \
+ ln -s /usr/local/bin/gofmt-${go_version} /usr/local/bin/gofmt
ARG arvados_version
RUN echo arvados_version is git commit $arvados_version
# gnupg2 runit python3-pip python3-setuptools python3-yaml shellinabox netcat less
RUN apt-get update && \
apt-get -yq --no-install-recommends -o Acquire::Retries=6 install \
- gnupg2 runit python3-pip python3-setuptools python3-yaml shellinabox netcat less && \
+ gnupg2 runit python3-pip python3-setuptools python3-yaml shellinabox netcat less vim-tiny && \
apt-get clean
ENV GOPATH /var/lib/gopath
RUN mkdir -p /etc/apt/sources.list.d && \
echo deb https://download.docker.com/linux/debian/ buster stable > /etc/apt/sources.list.d/docker.list && \
apt-get update && \
- apt-get -yq --no-install-recommends install docker-ce=5:19.03.13~3-0~debian-buster && \
+ apt-get -yq --no-install-recommends install docker-ce=5:20.10.6~3-0~debian-buster && \
apt-get clean
# Set UTF-8 locale
FROM arvados/arvbox-base
ARG arvados_version
ARG composer_version=arvados-fork
-ARG workbench2_version=master
+ARG workbench2_version=main
RUN cd /usr/src && \
git clone --no-checkout https://git.arvados.org/arvados.git && \
git -C arvados checkout ${arvados_version} && \
- git -C arvados pull && \
git clone --no-checkout https://github.com/arvados/composer.git && \
git -C composer checkout ${composer_version} && \
- git -C composer pull && \
git clone --no-checkout https://git.arvados.org/arvados-workbench2.git workbench2 && \
git -C workbench2 checkout ${workbench2_version} && \
- git -C workbench2 pull && \
chown -R 1000:1000 /usr/src
# avoid rebuilding arvados-server, it's already been built as part of the base image
RUN sudo -u arvbox /var/lib/arvbox/service/vm/run-service --only-deps
RUN sudo -u arvbox /var/lib/arvbox/service/keepproxy/run-service --only-deps
RUN sudo -u arvbox /var/lib/arvbox/service/arv-git-httpd/run-service --only-deps
-RUN sudo -u arvbox /var/lib/arvbox/service/crunch-dispatch-local/run-service --only-deps
RUN sudo -u arvbox /var/lib/arvbox/service/websockets/run --only-deps
RUN sudo -u arvbox /usr/local/lib/arvbox/keep-setup.sh --only-deps
RUN sudo -u arvbox /var/lib/arvbox/service/sdk/run-service
fi
if ! test -f $ARVADOS_CONTAINER_PATH/api_database_setup ; then
- flock $GEM_HOME/gems.lock bundle exec rake db:setup
+ flock $GEM_HOME/gems.lock bin/bundle exec rake db:setup
touch $ARVADOS_CONTAINER_PATH/api_database_setup
fi
if ! test -s $ARVADOS_CONTAINER_PATH/superuser_token ; then
- superuser_tok=$(flock $GEM_HOME/gems.lock bundle exec ./script/create_superuser_token.rb)
+ superuser_tok=$(flock $GEM_HOME/gems.lock bin/bundle exec ./script/create_superuser_token.rb)
echo "$superuser_tok" > $ARVADOS_CONTAINER_PATH/superuser_token
fi
rm -rf tmp
mkdir -p tmp/cache
-flock $GEM_HOME/gems.lock bundle exec rake db:migrate
+flock $GEM_HOME/gems.lock bin/bundle exec rake db:migrate
run_bundler() {
if test -f Gemfile.lock ; then
- # The 'gem install bundler line below' is cf.
- # https://bundler.io/blog/2019/05/14/solutions-for-cant-find-gem-bundler-with-executable-bundle.html,
- # until we get bundler 2.7.10/3.0.0 or higher
- flock $GEM_HOME/gems.lock gem install bundler --no-document -v "$(grep -A 1 "BUNDLED WITH" Gemfile.lock | tail -n 1|tr -d ' ')"
frozen=--frozen
else
frozen=""
fi
- # if ! test -x $GEM_HOME/bin/bundler ; then
- # bundleversion=2.0.2
- # bundlergem=$(ls -r $GEM_HOME/cache/bundler-${bundleversion}.gem 2>/dev/null | head -n1 || true)
- # if test -n "$bundlergem" ; then
- # flock $GEM_HOME/gems.lock gem install --verbose --local --no-document $bundlergem
- # else
- # flock $GEM_HOME/gems.lock gem install --verbose --no-document bundler --version ${bundleversion}
- # fi
- # fi
- # Make sure to put the gem binaries in the right place
- flock /var/lib/arvados/lib/ruby/gems/2.5.0/gems.lock bundler config bin $GEM_HOME/bin
- if ! flock $GEM_HOME/gems.lock bundler install --verbose --local --no-deployment $frozen "$@" ; then
- flock $GEM_HOME/gems.lock bundler install --verbose --no-deployment $frozen "$@"
+ BUNDLER=bundler
+ if test -x $PWD/bin/bundler ; then
+ # If present, use the one associated with rails workbench or API
+ BUNDLER=$PWD/bin/bundler
+ fi
+ if ! flock $GEM_HOME/gems.lock $BUNDLER install --verbose --local --no-deployment $frozen "$@" ; then
+ flock $GEM_HOME/gems.lock $BUNDLER install --verbose --no-deployment $frozen "$@"
fi
}
mkdir -p /tmp/crunch0 /tmp/crunch1
chown crunch:crunch -R /tmp/crunch0 /tmp/crunch1
+ # singularity needs to be owned by root and suid
+ chown root /var/lib/arvados/bin/singularity \
+ /var/lib/arvados/etc/singularity/singularity.conf \
+ /var/lib/arvados/etc/singularity/capability.json \
+ /var/lib/arvados/etc/singularity/ecl.toml
+ chmod u+s /var/lib/arvados/bin/singularity
+
echo "arvbox ALL=(crunch) NOPASSWD: ALL" >> /etc/sudoers
cat <<EOF > /etc/profile.d/paths.sh
. /usr/local/lib/arvbox/common.sh
. /usr/local/lib/arvbox/go-setup.sh
-flock /var/lib/gopath/gopath.lock go install "git.arvados.org/arvados.git/services/keepstore"
-install $GOPATH/bin/keepstore /usr/local/bin
+(cd /usr/local/bin && ln -sf arvados-server keepstore)
if test "$1" = "--only-deps" ; then
exit
fi
run_bundler --without=development
-flock $GEM_HOME/gems.lock bundle exec passenger-config build-native-support
-flock $GEM_HOME/gems.lock bundle exec passenger-config install-standalone-runtime
+flock $GEM_HOME/gems.lock bin/bundle exec passenger-config build-native-support
+flock $GEM_HOME/gems.lock bin/bundle exec passenger-config install-standalone-runtime
if test "$1" = "--only-deps" ; then
exit
touch $ARVADOS_CONTAINER_PATH/api.ready
-exec bundle exec passenger start --port=${services[api]}
+exec bin/bundle exec passenger start --port=${services[api]}
if ! test -d $ARVADOS_CONTAINER_PATH/git/repositories/$repo_uuid.git ; then
git clone --bare /usr/src/arvados $ARVADOS_CONTAINER_PATH/git/repositories/$repo_uuid.git
else
- git --git-dir=$ARVADOS_CONTAINER_PATH/git/repositories/$repo_uuid.git fetch -f /usr/src/arvados master:master
+ git --git-dir=$ARVADOS_CONTAINER_PATH/git/repositories/$repo_uuid.git fetch -f /usr/src/arvados main:main
fi
cd /usr/src/arvados/services/api
fi
done
-if ! (ps x | grep -v grep | grep "crunch-dispatch") > /dev/null ; then
+if ! (ps ax | grep -v grep | grep "crunch-dispatch") > /dev/null ; then
waiting="$waiting crunch-dispatch"
fi
if test "$1" != "--only-deps" ; then
openssl verify -CAfile $root_cert $server_cert
- exec bundle exec passenger start --port=${services[workbench]} \
+ exec bin/bundle exec passenger start --port=${services[workbench]} \
--ssl --ssl-certificate=$ARVADOS_CONTAINER_PATH/server-cert-${localip}.pem \
--ssl-certificate-key=$ARVADOS_CONTAINER_PATH/server-cert-${localip}.key \
--user arvbox
fi
run_bundler --without=development
-flock $GEM_HOME/gems.lock bundle exec passenger-config build-native-support
-flock $GEM_HOME/gems.lock bundle exec passenger-config install-standalone-runtime
+flock $GEM_HOME/gems.lock bin/bundle exec passenger-config build-native-support
+flock $GEM_HOME/gems.lock bin/bundle exec passenger-config install-standalone-runtime
mkdir -p /usr/src/arvados/apps/workbench/tmp
if test "$1" = "--only-deps" ; then
$RAILS_ENV:
keep_web_url: https://example.com/c=%{uuid_or_pdh}
EOF
- RAILS_GROUPS=assets flock $GEM_HOME/gems.lock bundle exec rake npm:install
+ RAILS_GROUPS=assets flock $GEM_HOME/gems.lock bin/bundle exec rake npm:install
rm config/application.yml
exit
fi
secret_token=$(cat $ARVADOS_CONTAINER_PATH/workbench_secret_token)
-RAILS_GROUPS=assets flock $GEM_HOME/gems.lock bundle exec rake npm:install
-flock $GEM_HOME/gems.lock bundle exec rake assets:precompile
+RAILS_GROUPS=assets flock $GEM_HOME/gems.lock bin/bundle exec rake npm:install
+flock $GEM_HOME/gems.lock bin/bundle exec rake assets:precompile
arv api_client create --api-client "$apiclient"
fi
-export HTTPS=false
# Can't use "yarn start", need to run the dev server script
# directly so that the TERM signal from "sv restart" gets to the
# right process.
export VERSION=$(./version-at-commit.sh)
-exec node node_modules/react-scripts-ts/scripts/start.js
+export BROWSER=none
+export CI=true
+export HTTPS=false
+node --version
+exec node node_modules/react-scripts/scripts/start.js
{
"variables": {
+ "arvados_cluster": "",
+ "associate_public_ip_address": "true",
"aws_access_key": "",
- "aws_secret_key": "",
"aws_profile": "",
+ "aws_secret_key": "",
+ "aws_source_ami": "ami-031283ff8a43b021c",
"build_environment": "aws",
- "arvados_cluster": "",
- "aws_source_ami": "ami-04d70e069399af2e9",
+ "public_key_file": "",
+ "mksquashfs_mem": "",
+ "nvidia_gpu_support": "",
+ "reposuffix": "",
+ "resolver": "",
"ssh_user": "admin",
- "vpc_id": "",
"subnet_id": "",
- "public_key_file": "",
- "associate_public_ip_address": "true"
+ "vpc_id": ""
},
"builders": [{
"type": "amazon-ebs",
"associate_public_ip_address": "{{user `associate_public_ip_address`}}",
"ssh_username": "{{user `ssh_user`}}",
"ami_name": "arvados-{{user `arvados_cluster`}}-compute-{{isotime \"20060102150405\"}}",
+ "launch_block_device_mappings": [{
+ "device_name": "/dev/xvda",
+ "volume_size": 20,
+ "volume_type": "gp2",
+ "delete_on_termination": true
+ }],
"ami_block_device_mappings": [
{
"device_name": "/dev/xvdb",
"type": "shell",
"execute_command": "sudo -S env {{ .Vars }} /bin/bash '{{ .Path }}'",
"script": "scripts/base.sh",
- "environment_vars": ["RESOLVER={{user `resolver`}}","REPOSUFFIX={{user `reposuffix`}}"]
+ "environment_vars": ["RESOLVER={{user `resolver`}}","REPOSUFFIX={{user `reposuffix`}}","MKSQUASHFS_MEM={{user `mksquashfs_mem`}}","NVIDIA_GPU_SUPPORT={{user `nvidia_gpu_support`}}","CLOUD=aws"]
}]
}
{
"variables": {
- "resource_group": null,
+ "account_file": "",
+ "arvados_cluster": "",
+ "build_environment": "azure-arm",
"client_id": "{{env `ARM_CLIENT_ID`}}",
"client_secret": "{{env `ARM_CLIENT_SECRET`}}",
- "subscription_id": "{{env `ARM_SUBSCRIPTION_ID`}}",
- "tenant_id": "{{env `ARM_TENANT_ID`}}",
- "build_environment": "azure-arm",
"cloud_environment_name": "Public",
- "location": "centralus",
- "ssh_user": "packer",
- "ssh_private_key_file": "{{env `PACKERPRIVKEY`}}",
"image_sku": "",
- "arvados_cluster": "",
+ "location": "centralus",
"project_id": "",
- "account_file": "",
- "resolver": "",
+ "public_key_file": "",
+ "mksquashfs_mem": "",
+ "nvidia_gpu_support": "",
"reposuffix": "",
- "public_key_file": ""
+ "resolver": "",
+ "resource_group": null,
+ "ssh_private_key_file": "{{env `PACKERPRIVKEY`}}",
+ "ssh_user": "packer",
+ "subscription_id": "{{env `ARM_SUBSCRIPTION_ID`}}",
+ "tenant_id": "{{env `ARM_TENANT_ID`}}"
},
"builders": [
{
"type": "shell",
"execute_command": "sudo -S env {{ .Vars }} /bin/bash '{{ .Path }}'",
"script": "scripts/base.sh",
- "environment_vars": ["RESOLVER={{user `resolver`}}","REPOSUFFIX={{user `reposuffix`}}"]
+ "environment_vars": ["RESOLVER={{user `resolver`}}","REPOSUFFIX={{user `reposuffix`}}","MKSQUASHFS_MEM={{user `mksquashfs_mem`}}","NVIDIA_GPU_SUPPORT={{user `nvidia_gpu_support`}}","CLOUD=azure"]
}]
}
Azure SKU image to use
--ssh_user (default: packer)
The user packer will use to log into the image
- --resolver (default: 8.8.8.8)
+  --resolver (default: the host's network-provided resolver)
The dns resolver for the machine
--reposuffix (default: unset)
Set this to "-dev" to track the unstable/dev Arvados repositories
--public-key-file (required)
Path to the public key file that a-d-c will use to log into the compute node
- --debug
- Output debug information (default: false)
+ --mksquashfs-mem (default: 256M)
+ Only relevant when using Singularity. This is the amount of memory mksquashfs is allowed to use.
+ --nvidia-gpu-support (default: false)
+ Install all the necessary tooling for Nvidia GPU support
+ --debug (default: false)
+ Output debug information
EOF
SSH_USER=
AWS_DEFAULT_REGION=us-east-1
PUBLIC_KEY_FILE=
+MKSQUASHFS_MEM=256M
+NVIDIA_GPU_SUPPORT=
PARSEDOPTS=$(getopt --name "$0" --longoptions \
- help,json-file:,arvados-cluster-id:,aws-source-ami:,aws-profile:,aws-secrets-file:,aws-region:,aws-vpc-id:,aws-subnet-id:,gcp-project-id:,gcp-account-file:,gcp-zone:,azure-secrets-file:,azure-resource-group:,azure-location:,azure-sku:,azure-cloud-environment:,ssh_user:,resolver:,reposuffix:,public-key-file:,debug \
+ help,json-file:,arvados-cluster-id:,aws-source-ami:,aws-profile:,aws-secrets-file:,aws-region:,aws-vpc-id:,aws-subnet-id:,gcp-project-id:,gcp-account-file:,gcp-zone:,azure-secrets-file:,azure-resource-group:,azure-location:,azure-sku:,azure-cloud-environment:,ssh_user:,resolver:,reposuffix:,public-key-file:,mksquashfs-mem:,nvidia-gpu-support,debug \
-- "" "$@")
if [ $? -ne 0 ]; then
exit 1
--public-key-file)
PUBLIC_KEY_FILE="$2"; shift
;;
+ --mksquashfs-mem)
+ MKSQUASHFS_MEM="$2"; shift
+ ;;
+ --nvidia-gpu-support)
+ NVIDIA_GPU_SUPPORT=1
+ ;;
--debug)
# If you want to debug a build issue, add the -debug flag to the build
# command in question.
if [[ "$PUBLIC_KEY_FILE" != "" ]]; then
EXTRA2+=" -var public_key_file=$PUBLIC_KEY_FILE"
fi
+if [[ "$MKSQUASHFS_MEM" != "" ]]; then
+ EXTRA2+=" -var mksquashfs_mem=$MKSQUASHFS_MEM"
+fi
+if [[ "$NVIDIA_GPU_SUPPORT" != "" ]]; then
+ EXTRA2+=" -var nvidia_gpu_support=$NVIDIA_GPU_SUPPORT"
+fi
+
+
+echo
+packer version
+echo
echo packer build$EXTRA -var "arvados_cluster=$ARVADOS_CLUSTER_ID"$EXTRA2 $JSON_FILE
packer build$EXTRA -var "arvados_cluster=$ARVADOS_CLUSTER_ID"$EXTRA2 $JSON_FILE
SUDO=sudo
+wait_for_apt_locks() {
+ while $SUDO fuser /var/{lib/{dpkg,apt/lists},cache/apt/archives}/lock >/dev/null 2>&1; do
+ echo "APT: Waiting for apt/dpkg locks to be released..."
+ sleep 1
+ done
+}
+
# Run apt-get update
$SUDO DEBIAN_FRONTEND=noninteractive apt-get --yes update
# Install gnupg and dirmgr or gpg key checks will fail
-$SUDO DEBIAN_FRONTEND=noninteractive apt-get -qq --yes install \
+wait_for_apt_locks && $SUDO DEBIAN_FRONTEND=noninteractive apt-get -qq --yes install \
gnupg \
dirmngr \
lsb-release
# For good measure, apt-get upgrade
-$SUDO DEBIAN_FRONTEND=noninteractive apt-get -qq --yes upgrade
+wait_for_apt_locks && $SUDO DEBIAN_FRONTEND=noninteractive apt-get -qq --yes upgrade
# Make sure cloud-init is installed
-$SUDO DEBIAN_FRONTEND=noninteractive apt-get -qq --yes install cloud-init
+wait_for_apt_locks && $SUDO DEBIAN_FRONTEND=noninteractive apt-get -qq --yes install cloud-init
if [[ ! -d /var/lib/cloud/scripts/per-boot ]]; then
mkdir -p /var/lib/cloud/scripts/per-boot
fi
# Add the arvados signing key
cat /tmp/1078ECD7.asc | $SUDO apt-key add -
# Add the debian keys
-$SUDO DEBIAN_FRONTEND=noninteractive apt-get install --yes debian-keyring debian-archive-keyring
+wait_for_apt_locks && $SUDO DEBIAN_FRONTEND=noninteractive apt-get install --yes debian-keyring debian-archive-keyring
# Fix locale
$SUDO /bin/sed -ri 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen
$SUDO /usr/sbin/locale-gen
# Install some packages we always need
-$SUDO DEBIAN_FRONTEND=noninteractive apt-get --yes update
-$SUDO DEBIAN_FRONTEND=noninteractive apt-get -qq --yes install \
+wait_for_apt_locks && $SUDO DEBIAN_FRONTEND=noninteractive apt-get --yes update
+wait_for_apt_locks && $SUDO DEBIAN_FRONTEND=noninteractive apt-get -qq --yes install \
openssh-server \
apt-utils \
git \
libcurl4-openssl-dev \
lvm2 \
cryptsetup \
- xfsprogs
-
-# See if python3-distutils is installable, and if so install it. This is a
-# temporary workaround for an Arvados packaging bug and should be removed once
-# Arvados 2.0.4 or 2.1.0 is released, whichever comes first.
-# See https://dev.arvados.org/issues/16611 for more information
-if apt-cache -qq show python3-distutils >/dev/null 2>&1; then
- $SUDO DEBIAN_FRONTEND=noninteractive apt-get -qq --yes install python3-distutils
-fi
+ xfsprogs \
+ squashfs-tools
# Install the Arvados packages we need
-$SUDO DEBIAN_FRONTEND=noninteractive apt-get -qq --yes install \
- python-arvados-fuse \
+wait_for_apt_locks && $SUDO DEBIAN_FRONTEND=noninteractive apt-get -qq --yes install \
+ python3-arvados-fuse \
crunch-run \
arvados-docker-cleaner \
docker.io
+# Get Go and build singularity
+goversion=1.17.1
+mkdir -p /var/lib/arvados
+rm -rf /var/lib/arvados/go/
+curl -s https://storage.googleapis.com/golang/go${goversion}.linux-amd64.tar.gz | tar -C /var/lib/arvados -xzf -
+ln -sf /var/lib/arvados/go/bin/* /usr/local/bin/
+
+singularityversion=3.7.4
+curl -Ls https://github.com/sylabs/singularity/archive/refs/tags/v${singularityversion}.tar.gz | tar -C /var/lib/arvados -xzf -
+cd /var/lib/arvados/singularity-${singularityversion}
+
+# build dependencies for singularity
+wait_for_apt_locks && $SUDO DEBIAN_FRONTEND=noninteractive apt-get -qq --yes install \
+ make build-essential libssl-dev uuid-dev cryptsetup
+
+echo $singularityversion > VERSION
+./mconfig --prefix=/var/lib/arvados
+make -C ./builddir
+make -C ./builddir install
+ln -sf /var/lib/arvados/bin/* /usr/local/bin/
+
+# Set `mksquashfs mem` in the Singularity config file if a value was provided
+if [ "$MKSQUASHFS_MEM" != "" ]; then
+ echo "mksquashfs mem = ${MKSQUASHFS_MEM}" >> /var/lib/arvados/etc/singularity/singularity.conf
+fi
+
+# Print singularity version installed
+singularity --version
+
# Remove unattended-upgrades if it is installed
-$SUDO DEBIAN_FRONTEND=noninteractive apt-get -qq --yes remove unattended-upgrades --purge
+wait_for_apt_locks && $SUDO DEBIAN_FRONTEND=noninteractive apt-get -qq --yes remove unattended-upgrades --purge
# Configure arvados-docker-cleaner
$SUDO mkdir -p /etc/arvados/docker-cleaner
$SUDO sed -i 's/GRUB_CMDLINE_LINUX=""/GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"/g' /etc/default/grub
$SUDO update-grub
-# Set a higher ulimit for docker
-$SUDO sed -i "s/ExecStart=\(.*\)/ExecStart=\1 --default-ulimit nofile=10000:10000 --dns ${RESOLVER}/g" /lib/systemd/system/docker.service
+# Set a higher ulimit and the resolver (if set) for docker
+if [ "x$RESOLVER" != "x" ]; then
+ SET_RESOLVER="--dns ${RESOLVER}"
+fi
+
+$SUDO sed "s/ExecStart=\(.*\)/ExecStart=\1 --default-ulimit nofile=10000:10000 ${SET_RESOLVER}/g" \
+ /lib/systemd/system/docker.service \
+ > /etc/systemd/system/docker.service
+
$SUDO systemctl daemon-reload
+# docker should not start on boot: we restart it inside /usr/local/bin/ensure-encrypted-partitions.sh,
+# and the BootProbeCommand might be "docker ps -q"
+$SUDO systemctl disable docker
+
# Make sure user_allow_other is set in fuse.conf
$SUDO sed -i 's/#user_allow_other/user_allow_other/g' /etc/fuse.conf
$SUDO chmod 600 /home/crunch/.ssh/authorized_keys
$SUDO chmod 700 /home/crunch/.ssh/
-# Make sure we resolve via the provided resolver IP. Prepending is good enough because
+# Make sure we resolve via the provided resolver IP if set. Prepending is good enough because
# unless 'rotate' is set, the nameservers are queried in order (cf. man resolv.conf)
-$SUDO sed -i "s/#prepend domain-name-servers 127.0.0.1;/prepend domain-name-servers ${RESOLVER};/" /etc/dhcp/dhclient.conf
-
+if [ "x$RESOLVER" != "x" ]; then
+ $SUDO sed -i "s/#prepend domain-name-servers 127.0.0.1;/prepend domain-name-servers ${RESOLVER};/" /etc/dhcp/dhclient.conf
+fi
# Set up the cloud-init script that will ensure encrypted disks
$SUDO mv /tmp/usr-local-bin-ensure-encrypted-partitions.sh /usr/local/bin/ensure-encrypted-partitions.sh
$SUDO chmod 755 /usr/local/bin/ensure-encrypted-partitions.sh
$SUDO chown root:root /usr/local/bin/ensure-encrypted-partitions.sh
$SUDO mv /tmp/etc-cloud-cloud.cfg.d-07_compute_arvados_dispatch_cloud.cfg /etc/cloud/cloud.cfg.d/07_compute_arvados_dispatch_cloud.cfg
$SUDO chown root:root /etc/cloud/cloud.cfg.d/07_compute_arvados_dispatch_cloud.cfg
+
+if [ "$NVIDIA_GPU_SUPPORT" == "1" ]; then
+ # $DIST must not include the dot that may appear in /etc/os-release's VERSION_ID (e.g. 18.04 -> 1804)
+ DIST=$(. /etc/os-release; echo $ID$VERSION_ID | tr -d '.')
+ # We need a kernel and matching headers
+ if [[ "$DIST" =~ ^debian ]]; then
+ $SUDO apt-get -y install linux-image-cloud-amd64 linux-headers-cloud-amd64
+ elif [ "$CLOUD" == "azure" ]; then
+ $SUDO apt-get -y install linux-image-azure linux-headers-azure
+ elif [ "$CLOUD" == "aws" ]; then
+ $SUDO apt-get -y install linux-image-aws linux-headers-aws
+ fi
+
+ # Install CUDA
+ $SUDO apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/$DIST/x86_64/7fa2af80.pub
+ $SUDO apt-get -y install software-properties-common
+ $SUDO add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/$DIST/x86_64/ /"
+ $SUDO add-apt-repository contrib
+ $SUDO apt-get update
+ $SUDO apt-get -y install cuda
+
+ # Install libnvidia-container, the tooling for Docker/Singularity
+ curl -s -L https://nvidia.github.io/libnvidia-container/gpgkey | \
+ $SUDO apt-key add -
+ if [ "$DIST" == "debian11" ]; then
+ # As of 2021-12-16 libnvidia-container and friends are only available for
+ # Debian 10, not yet Debian 11. Install experimental rc1 package as per this
+ # workaround:
+ # https://github.com/NVIDIA/nvidia-docker/issues/1549#issuecomment-989670662
+ curl -s -L https://nvidia.github.io/libnvidia-container/debian10/libnvidia-container.list | \
+ $SUDO tee /etc/apt/sources.list.d/libnvidia-container.list
+ $SUDO sed -i -e '/experimental/ s/^#//g' /etc/apt/sources.list.d/libnvidia-container.list
+ else
+ # here, $DIST keeps the dot from /etc/os-release's VERSION_ID, if there is one (e.g. 18.04)...
+ DIST=$(. /etc/os-release; echo $ID$VERSION_ID)
+ curl -s -L https://nvidia.github.io/libnvidia-container/$DIST/libnvidia-container.list | \
+ $SUDO tee /etc/apt/sources.list.d/libnvidia-container.list
+ fi
+
+ if [ "$DIST" == "debian10" ]; then
+ # Debian 10 comes with Docker 18.xx, we need 19.03 or later
+ curl -fsSL https://download.docker.com/linux/debian/gpg | $SUDO gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
+ echo deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian/ buster stable | \
+ $SUDO tee /etc/apt/sources.list.d/docker.list
+ $SUDO apt-get update
+ $SUDO apt-get -yq --no-install-recommends install docker-ce=5:19.03.15~3-0~debian-buster
+
+ $SUDO sed "s/ExecStart=\(.*\)/ExecStart=\1 --default-ulimit nofile=10000:10000 ${SET_RESOLVER}/g" \
+ /lib/systemd/system/docker.service \
+ > /etc/systemd/system/docker.service
+
+ $SUDO systemctl daemon-reload
+
+ # docker should not start on boot: we restart it inside /usr/local/bin/ensure-encrypted-partitions.sh,
+ # and the BootProbeCommand might be "docker ps -q"
+ $SUDO systemctl disable docker
+ fi
+ $SUDO apt-get update
+ $SUDO apt-get -y install libnvidia-container1 libnvidia-container-tools nvidia-container-toolkit
+fi
+
+$SUDO apt-get clean
echo YES | cryptsetup luksFormat "$LVPATH" "$KEYPATH"
cryptsetup --key-file "$KEYPATH" luksOpen "$LVPATH" "$(basename "$CRYPTPATH")"
shred -u "$KEYPATH"
-mkfs.xfs "$CRYPTPATH"
+mkfs.xfs -f "$CRYPTPATH"
# First make sure docker is not using /tmp, then unmount everything under it.
if [ -d /etc/sv/docker.io ]
# SPDX-License-Identifier: AGPL-3.0
head=$(git log --first-parent --max-count=1 --format=%H)
-curl -X POST https://ci.curoverse.com/job/developer-run-tests/build \
- --user $(cat ~/.jenkins.ci.curoverse.com) \
+curl -X POST https://ci.arvados.org/job/developer-run-tests/build \
+ --user $(cat ~/.jenkins.ci.arvados.org) \
--data-urlencode json='{"parameter": [{"name":"git_hash", "value":"'$head'"}]}'
"errors"
"flag"
"fmt"
+ "io"
"io/ioutil"
"log"
"net/http"
"strings"
"time"
+ "git.arvados.org/arvados.git/lib/cmd"
"git.arvados.org/arvados.git/sdk/go/arvadosclient"
"git.arvados.org/arvados.git/sdk/go/keepclient"
)
var version = "dev"
func main() {
- err := doMain(os.Args[1:])
- if err != nil {
- log.Fatalf("%v", err)
- }
+ os.Exit(doMain(os.Args[1:], os.Stderr))
}
-func doMain(args []string) error {
+func doMain(args []string, stderr io.Writer) int {
flags := flag.NewFlagSet("keep-block-check", flag.ExitOnError)
configFile := flags.String(
false,
"Print version information and exit.")
- // Parse args; omit the first arg which is the command name
- flags.Parse(args)
-
- // Print version information if requested
- if *getVersion {
- fmt.Printf("keep-block-check %s\n", version)
- os.Exit(0)
+ if ok, code := cmd.ParseFlags(flags, os.Args[0], args, "", stderr); !ok {
+ return code
+ } else if *getVersion {
+ fmt.Printf("%s %s\n", os.Args[0], version)
+ return 0
}
config, blobSigningKey, err := loadConfig(*configFile)
if err != nil {
- return fmt.Errorf("Error loading configuration from file: %s", err.Error())
+ fmt.Fprintf(stderr, "Error loading configuration from file: %s\n", err)
+ return 1
}
// get list of block locators to be checked
blockLocators, err := getBlockLocators(*locatorFile, *prefix)
if err != nil {
- return fmt.Errorf("Error reading block hashes to be checked from file: %s", err.Error())
+ fmt.Fprintf(stderr, "Error reading block hashes to be checked from file: %s\n", err)
+ return 1
}
// setup keepclient
kc, blobSignatureTTL, err := setupKeepClient(config, *keepServicesJSON, *blobSignatureTTLFlag)
if err != nil {
- return fmt.Errorf("Error configuring keepclient: %s", err.Error())
+ fmt.Fprintf(stderr, "Error configuring keepclient: %s\n", err)
+ return 1
+ }
+
+ err = performKeepBlockCheck(kc, blobSignatureTTL, blobSigningKey, blockLocators, *verbose)
+ if err != nil {
+ fmt.Fprintln(stderr, err)
+ return 1
}
- return performKeepBlockCheck(kc, blobSignatureTTL, blobSigningKey, blockLocators, *verbose)
+ return 0
}
type apiConfig struct {
var blobSignatureTTL = time.Duration(2*7*24) * time.Hour
-func (s *ServerRequiredSuite) SetUpSuite(c *C) {
- arvadostest.StartAPI()
-}
-
func (s *ServerRequiredSuite) TearDownSuite(c *C) {
- arvadostest.StopAPI()
arvadostest.ResetEnv()
}
func (s *DoMainTestSuite) Test_doMain_WithNoConfig(c *C) {
args := []string{"-prefix", "a"}
- err := doMain(args)
- c.Check(err, NotNil)
- c.Assert(strings.Contains(err.Error(), "config file not specified"), Equals, true)
+ var stderr bytes.Buffer
+ code := doMain(args, &stderr)
+ c.Check(code, Equals, 1)
+ c.Check(stderr.String(), Matches, ".*config file not specified\n")
}
func (s *DoMainTestSuite) Test_doMain_WithNoSuchConfigFile(c *C) {
args := []string{"-config", "no-such-file"}
- err := doMain(args)
- c.Check(err, NotNil)
- c.Assert(strings.Contains(err.Error(), "no such file or directory"), Equals, true)
+ var stderr bytes.Buffer
+ code := doMain(args, &stderr)
+ c.Check(code, Equals, 1)
+ c.Check(stderr.String(), Matches, ".*no such file or directory\n")
}
func (s *DoMainTestSuite) Test_doMain_WithNoBlockHashFile(c *C) {
defer arvadostest.StopKeep(2)
args := []string{"-config", config}
- err := doMain(args)
- c.Assert(strings.Contains(err.Error(), "block-hash-file not specified"), Equals, true)
+ var stderr bytes.Buffer
+ code := doMain(args, &stderr)
+ c.Check(code, Equals, 1)
+ c.Check(stderr.String(), Matches, ".*block-hash-file not specified\n")
}
func (s *DoMainTestSuite) Test_doMain_WithNoSuchBlockHashFile(c *C) {
defer arvadostest.StopKeep(2)
args := []string{"-config", config, "-block-hash-file", "no-such-file"}
- err := doMain(args)
- c.Assert(strings.Contains(err.Error(), "no such file or directory"), Equals, true)
+ var stderr bytes.Buffer
+ code := doMain(args, &stderr)
+ c.Check(code, Equals, 1)
+ c.Check(stderr.String(), Matches, ".*no such file or directory\n")
}
func (s *DoMainTestSuite) Test_doMain(c *C) {
defer os.Remove(locatorFile)
args := []string{"-config", config, "-block-hash-file", locatorFile, "-v"}
- err := doMain(args)
- c.Check(err, NotNil)
- c.Assert(err.Error(), Equals, "Block verification failed for 2 out of 2 blocks with matching prefix")
+ var stderr bytes.Buffer
+ code := doMain(args, &stderr)
+ c.Check(code, Equals, 1)
+ c.Assert(stderr.String(), Matches, "Block verification failed for 2 out of 2 blocks with matching prefix\n")
checkErrorLog(c, []string{TestHash, TestHash2}, "Error verifying block", "Block not found")
c.Assert(strings.Contains(logBuffer.String(), "Verifying block 1 of 2"), Equals, true)
}
"syscall"
"time"
+ "git.arvados.org/arvados.git/lib/cmd"
"git.arvados.org/arvados.git/lib/config"
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/arvadosclient"
}
func main() {
- flag.Parse()
-
- // Print version information if requested
- if *getVersion {
- fmt.Printf("keep-exercise %s\n", version)
- os.Exit(0)
+ if ok, code := cmd.ParseFlags(flag.CommandLine, os.Args[0], os.Args[1:], "", os.Stderr); !ok {
+ os.Exit(code)
+ } else if *getVersion {
+ fmt.Printf("%s %s\n", os.Args[0], version)
+ return
}
lgr := log.New(os.Stderr, "", log.LstdFlags)
"strings"
"time"
+ "git.arvados.org/arvados.git/lib/cmd"
"git.arvados.org/arvados.git/sdk/go/arvadosclient"
"git.arvados.org/arvados.git/sdk/go/keepclient"
)
false,
"Print version information and exit.")
- // Parse args; omit the first arg which is the command name
- flags.Parse(os.Args[1:])
-
- // Print version information if requested
- if *getVersion {
- fmt.Printf("keep-rsync %s\n", version)
+ if ok, code := cmd.ParseFlags(flags, os.Args[0], os.Args[1:], "", os.Stderr); !ok {
+ os.Exit(code)
+ } else if *getVersion {
+ fmt.Printf("%s %s\n", os.Args[0], version)
os.Exit(0)
}
type ServerRequiredSuite struct{}
-func (s *ServerRequiredSuite) SetUpSuite(c *C) {
- arvadostest.StartAPI()
-}
-
func (s *ServerRequiredSuite) TearDownSuite(c *C) {
- arvadostest.StopAPI()
arvadostest.ResetEnv()
}
--- /dev/null
+local_config_dir
+local.params
+*pem
##### About
-This directory holds a small script to install Arvados on a single node, using the
-[Saltstack arvados-formula](https://github.com/saltstack-formulas/arvados-formula)
+This directory holds a small script to help you get Arvados up and running, using the
+[Saltstack arvados-formula](https://git.arvados.org/arvados-formula.git)
in master-less mode.
-The fastest way to get it running is to modify the first lines in the `provision.sh`
-script to suit your needs, copy it in the host where you want to install Arvados
-and run it as root.
+There are a few preset examples that you can use:
-There's an example `Vagrantfile` also, to install it in a vagrant box if you want
+* `single_host`: Installs all of the Arvados components on a single host. Suitable for testing
+  or demoing, but not recommended for production use.
+* `multi_host/aws`: Lets you install different Arvados components on different hosts on AWS.
+
+The fastest way to get it running is to copy the `local.params.example` file to `local.params`,
+edit it to suit your needs, copy it along with the `provision.sh` script to the host where you
+want to install Arvados, and run the `provision.sh` script there as root.
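A minimal sketch of that customization step (the `FIXME` placeholder names come from the example files in this directory; `xarv1` and `example.com` are made-up values):

```shell
# Sketch only: create a stand-in local.params and fill in its placeholders.
# A real install starts from the local.params.example file shipped here.
cat > local.params <<'EOF'
CLUSTER=cluster_fixme_or_this_wont_work
DOMAIN=domain_fixme_or_this_wont_work
EOF

# Replace the FIXME markers with your cluster id and domain:
sed -i -e 's/cluster_fixme_or_this_wont_work/xarv1/g' \
       -e 's/domain_fixme_or_this_wont_work/example.com/g' local.params

cat local.params
```

Once the real file is filled in, running `provision.sh --config local.params` as root on the target host picks up those values.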
+
+There's also an example `Vagrantfile`, to install Arvados in a Vagrant box if you want
+to try it locally.
For more information, please read https://doc.arvados.org/main/install/salt-single-host.html
config.ssh.insert_key = false
config.ssh.forward_x11 = true
- config.vm.define "arvados" do |arv|
- arv.vm.box = "bento/debian-10"
- arv.vm.hostname = "vagrant.local"
- # CPU/RAM
- config.vm.provider :virtualbox do |v|
- v.memory = 2048
- v.cpus = 2
- end
+ # A single_host multiple_hostnames example
+ config.vm.define "arvados-sh-mn" do |arv|
+ arv.vm.box = "bento/debian-10"
+ arv.vm.hostname = "harpo"
+ # CPU/RAM
+ config.vm.provider :virtualbox do |v|
+ v.memory = 2048
+ v.cpus = 2
+ end
- # Networking
- arv.vm.network "forwarded_port", guest: 8443, host: 8443
- arv.vm.network "forwarded_port", guest: 25100, host: 25100
- arv.vm.network "forwarded_port", guest: 9002, host: 9002
- arv.vm.network "forwarded_port", guest: 9000, host: 9000
- arv.vm.network "forwarded_port", guest: 8900, host: 8900
- arv.vm.network "forwarded_port", guest: 8002, host: 8002
- arv.vm.network "forwarded_port", guest: 8001, host: 8001
- arv.vm.network "forwarded_port", guest: 8000, host: 8000
- arv.vm.network "forwarded_port", guest: 3001, host: 3001
- arv.vm.provision "shell",
- path: "provision.sh",
- args: [
- # "--debug",
- "--test",
- "--vagrant",
- "--ssl-port=8443"
- ].join(" ")
- end
+ # Networking
+ # WEBUI PORT
+ arv.vm.network "forwarded_port", guest: 8443, host: 8443
+ # KEEPPROXY
+ arv.vm.network "forwarded_port", guest: 25101, host: 25101
+ # KEEPWEB
+ arv.vm.network "forwarded_port", guest: 9002, host: 9002
+ # WEBSOCKET
+ arv.vm.network "forwarded_port", guest: 8002, host: 8002
+ arv.vm.provision "shell",
+ inline: "cp -vr /vagrant/config_examples/single_host/multiple_hostnames /home/vagrant/local_config_dir;
+ cp -vr /vagrant/tests /home/vagrant/tests;
+ sed 's#cluster_fixme_or_this_wont_work#harpo#g;
+ s#domain_fixme_or_this_wont_work#local#g;
+ s#CONTROLLER_EXT_SSL_PORT=443#CONTROLLER_EXT_SSL_PORT=8443#g;
+ s#RELEASE=\"production\"#RELEASE=\"development\"#g;
+ s/# VERSION=.*$/VERSION=\"latest\"/g;
+ s/#\ BRANCH=\"main\"/\ BRANCH=\"main\"/g' \
+ /vagrant/local.params.example.single_host_multiple_hostnames > /tmp/local.params.single_host_multiple_hostnames"
+
+ arv.vm.provision "shell",
+ path: "provision.sh",
+ args: [
+ # "--debug",
+ "--config /tmp/local.params.single_host_multiple_hostnames",
+ "--development",
+ "--test",
+ "--vagrant"
+ ].join(" ")
+ end
+
+ # A single_host single_hostname example
+ config.vm.define "arvados-sh-sn" do |arv|
+ arv.vm.box = "bento/debian-10"
+ arv.vm.hostname = "zeppo"
+ # CPU/RAM
+ config.vm.provider :virtualbox do |v|
+ v.memory = 2048
+ v.cpus = 2
+ end
+
+ # Networking
+ # WEBUI PORT
+ arv.vm.network "forwarded_port", guest: 9443, host: 9443
+ # WORKBENCH1
+ arv.vm.network "forwarded_port", guest: 9444, host: 9444
+ # WORKBENCH2
+ arv.vm.network "forwarded_port", guest: 9445, host: 9445
+ # KEEPPROXY
+ arv.vm.network "forwarded_port", guest: 35101, host: 35101
+ # KEEPWEB
+ arv.vm.network "forwarded_port", guest: 11002, host: 11002
+ # WEBSHELL
+ arv.vm.network "forwarded_port", guest: 14202, host: 14202
+ # WEBSOCKET
+ arv.vm.network "forwarded_port", guest: 18002, host: 18002
+ arv.vm.provision "shell",
+ inline: "cp -vr /vagrant/config_examples/single_host/single_hostname /home/vagrant/local_config_dir;
+ cp -vr /vagrant/tests /home/vagrant/tests;
+ sed 's#HOSTNAME_EXT=\"\"#HOSTNAME_EXT=\"zeppo.local\"#g;
+ s#cluster_fixme_or_this_wont_work#zeppo#g;
+ s/#\ BRANCH=\"main\"/\ BRANCH=\"main\"/g;
+ s#domain_fixme_or_this_wont_work#local#g;' \
+ /vagrant/local.params.example.single_host_single_hostname > /tmp/local.params.single_host_single_hostname"
+ arv.vm.provision "shell",
+ path: "provision.sh",
+ args: [
+ # "--debug",
+ "--config /tmp/local.params.single_host_single_hostname",
+ "--test",
+ "--vagrant"
+ ].join(" ")
+ end
end
--- /dev/null
+Arvados installation using multiple instances
+=============================================
+
+These files let you set up Arvados on multiple AWS instances. This setup assumes the
+instances are deployed in an isolated VPC, created/managed with
+[the Arvados terraform code](https://github.com/arvados/arvados/tree/terraform/tools/terraform)
+in our repo.
+
+Please check [the Arvados installation documentation](https://doc.arvados.org/install/salt-multi-host.html) for more details.
--- /dev/null
+SSL Certificates
+================
+
+Add the certificates for your hosts in this directory.
+
+The nodes requiring certificates are:
+
+* CLUSTER.DOMAIN
+* collections.CLUSTER.DOMAIN
+* \*.collections.CLUSTER.DOMAIN
+* download.CLUSTER.DOMAIN
+* keep.CLUSTER.DOMAIN
+* workbench.CLUSTER.DOMAIN
+* workbench2.CLUSTER.DOMAIN
+* ws.CLUSTER.DOMAIN
+
+These can be individual certificates, or a single wildcard certificate covering all of them.
+
+Please remember to modify the *nginx\_\** salt pillars accordingly.
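For a quick test setup, one way to satisfy the list above is a single self-signed certificate with all the hostnames as subject alternative names. A sketch, assuming OpenSSL ≥ 1.1.1 (for `-addext`); `xarv1` and `example.com` are placeholder values, and production clusters should use CA-signed certificates instead:

```shell
# Self-signed certificate covering all the hostnames listed above.
# For testing only; replace with CA-signed certificates in production.
CLUSTER=xarv1
DOMAIN=example.com
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout "${CLUSTER}.${DOMAIN}.key" -out "${CLUSTER}.${DOMAIN}.pem" \
  -subj "/CN=${CLUSTER}.${DOMAIN}" \
  -addext "subjectAltName=DNS:${CLUSTER}.${DOMAIN},DNS:collections.${CLUSTER}.${DOMAIN},DNS:*.collections.${CLUSTER}.${DOMAIN},DNS:download.${CLUSTER}.${DOMAIN},DNS:keep.${CLUSTER}.${DOMAIN},DNS:workbench.${CLUSTER}.${DOMAIN},DNS:workbench2.${CLUSTER}.${DOMAIN},DNS:ws.${CLUSTER}.${DOMAIN}"

# Inspect the SAN list that ended up in the certificate:
openssl x509 -in "${CLUSTER}.${DOMAIN}.pem" -noout -ext subjectAltName
```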
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+# The variables commented out are the default values that the formula uses.
+# The uncommented values are REQUIRED values. If you don't set them, running
+# this formula will fail.
+arvados:
+ ### GENERAL CONFIG
+ version: '__VERSION__'
+ ## It makes little sense to disable this flag, but you can, if you want :)
+ # use_upstream_repo: true
+
+ ## Repo URL is built with grains values. If desired, it can be completely
+ ## overridden with the pillar parameter 'repo_url'
+ # repo:
+ # humanname: Arvados Official Repository
+
+ release: __RELEASE__
+
+ ## IMPORTANT!!!!!
+ ## api, workbench and shell require some gems, so you need to make sure ruby
+ ## and deps are installed in order to install and compile the gems.
+ ## We default to `false` in these two variables as it's expected you already
+ ## manage OS packages with some other tool and you don't want us messing up
+ ## your setup.
+ ruby:
+ ## We set these to `true` here for testing purposes.
+ ## They both default to `false`.
+ manage_ruby: true
+ manage_gems_deps: true
+ # pkg: ruby
+ # gems_deps:
+ # - curl
+ # - g++
+ # - gcc
+ # - git
+ # - libcurl4
+ # - libcurl4-gnutls-dev
+ # - libpq-dev
+ # - libxml2
+ # - libxml2-dev
+ # - make
+ # - python3-dev
+ # - ruby-dev
+ # - zlib1g-dev
+
+ # config:
+ # file: /etc/arvados/config.yml
+ # user: root
+ ## IMPORTANT!!!!!
+ ## If you're installing any of the rails apps (api, workbench), the group
+ ## should be set to that of the web server, usually `www-data`
+ # group: root
+ # mode: 640
+ dispatcher:
+ pkg:
+ name: arvados-dispatch-cloud
+ service:
+ name: arvados-dispatch-cloud
+
+ ### ARVADOS CLUSTER CONFIG
+ cluster:
+ name: __CLUSTER__
+ domain: __DOMAIN__
+
+ database:
+ # max concurrent connections per arvados server daemon
+ # connection_pool_max: 32
+ name: __CLUSTER___arvados
+ host: __DATABASE_INT_IP__
+ password: "__DATABASE_PASSWORD__"
+ user: __CLUSTER___arvados
+ encoding: en_US.utf8
+ client_encoding: UTF8
+
+ tls:
+ # certificate: ''
+ # key: ''
+ # required to test with arvados-snakeoil certs
+ insecure: false
+
+ ### TOKENS
+ tokens:
+ system_root: __SYSTEM_ROOT_TOKEN__
+ management: __MANAGEMENT_TOKEN__
+ anonymous_user: __ANONYMOUS_USER_TOKEN__
+
+ ### KEYS
+ secrets:
+ blob_signing_key: __BLOB_SIGNING_KEY__
+ workbench_secret_key: __WORKBENCH_SECRET_KEY__
+
+ Login:
+ Test:
+ Enable: true
+ Users:
+ __INITIAL_USER__:
+ Email: __INITIAL_USER_EMAIL__
+ Password: __INITIAL_USER_PASSWORD__
+
+ ### CONTAINERS
+ Containers:
+ MaxRetryAttempts: 10
+ CloudVMs:
+ ResourceTags:
+ Name: __CLUSTER__-compute-node
+ BootProbeCommand: 'systemctl is-system-running'
+ ImageID: ami-FIXMEFIXMEFIXMEFI
+ Driver: ec2
+ DriverParameters:
+ Region: FIXME
+ EBSVolumeType: gp2
+ AdminUsername: FIXME
+ ### This SG should allow SSH from the dispatcher to the compute nodes
+ SecurityGroupIDs: ['sg-FIXMEFIXMEFIXMEFI']
+ SubnetID: subnet-FIXMEFIXMEFIXMEFI
+ DispatchPrivateKey: |
+ -----BEGIN OPENSSH PRIVATE KEY-----
+ Read https://doc.arvados.org/v2.0/install/install-dispatch-cloud.html
+ for details on how to create it and where to place the key
+ FIXMEFIXMEFIXMEFI
+ -----END OPENSSH PRIVATE KEY-----
+
+ ### VOLUMES
+ ## This should usually match all your `keepstore` instances
+ Volumes:
+ # the volume name is composed as
+ # <cluster>-nyw5e-<volume>
+ __CLUSTER__-nyw5e-0000000000000000:
+ AccessViaHosts:
+ 'http://__KEEPSTORE0_INT_IP__:25107':
+ ReadOnly: false
+ Replication: 2
+ Driver: S3
+ DriverParameters:
+ Bucket: __CLUSTER__-nyw5e-0000000000000000-volume
+ IAMRole: __CLUSTER__-keepstore-00-iam-role
+ Region: FIXME
+ __CLUSTER__-nyw5e-0000000000000001:
+ AccessViaHosts:
+ 'http://__KEEPSTORE1_INT_IP__:25107':
+ ReadOnly: false
+ Replication: 2
+ Driver: S3
+ DriverParameters:
+ Bucket: __CLUSTER__-nyw5e-0000000000000001-volume
+ IAMRole: __CLUSTER__-keepstore-01-iam-role
+ Region: FIXME
+
+ Users:
+ NewUsersAreActive: true
+ AutoAdminFirstUser: true
+ AutoSetupNewUsers: true
+ AutoSetupNewUsersWithRepository: true
+
+ Services:
+ Controller:
+ ExternalURL: 'https://__CLUSTER__.__DOMAIN__:__CONTROLLER_EXT_SSL_PORT__'
+ InternalURLs:
+ 'http://localhost:8003': {}
+ DispatchCloud:
+ InternalURLs:
+ 'http://__CONTROLLER_INT_IP__:9006': {}
+ Keepproxy:
+ ExternalURL: 'https://keep.__CLUSTER__.__DOMAIN__:__KEEP_EXT_SSL_PORT__'
+ InternalURLs:
+ 'http://localhost:25107': {}
+ Keepstore:
+ InternalURLs:
+ 'http://__KEEPSTORE0_INT_IP__:25107': {}
+ 'http://__KEEPSTORE1_INT_IP__:25107': {}
+ RailsAPI:
+ InternalURLs:
+ 'http://localhost:8004': {}
+ WebDAV:
+ ExternalURL: 'https://*.collections.__CLUSTER__.__DOMAIN__:__KEEPWEB_EXT_SSL_PORT__/'
+ InternalURLs:
+ 'http://localhost:9002': {}
+ WebDAVDownload:
+ ExternalURL: 'https://download.__CLUSTER__.__DOMAIN__:__KEEPWEB_EXT_SSL_PORT__'
+ WebShell:
+ ExternalURL: 'https://webshell.__CLUSTER__.__DOMAIN__:__KEEPWEB_EXT_SSL_PORT__'
+ Websocket:
+ ExternalURL: 'wss://ws.__CLUSTER__.__DOMAIN__/websocket'
+ InternalURLs:
+ 'http://localhost:8005': {}
+ Workbench1:
+ ExternalURL: 'https://workbench.__CLUSTER__.__DOMAIN__:__WORKBENCH1_EXT_SSL_PORT__'
+ Workbench2:
+ ExternalURL: 'https://workbench2.__CLUSTER__.__DOMAIN__:__WORKBENCH2_EXT_SSL_PORT__'
+
+ InstanceTypes:
+ t3small:
+ ProviderType: t3.small
+ VCPUs: 2
+ RAM: 2GiB
+ AddedScratch: 50GB
+ Price: 0.0208
+ c5large:
+ ProviderType: c5.large
+ VCPUs: 2
+ RAM: 4GiB
+ AddedScratch: 50GB
+ Price: 0.085
+ m5large:
+ ProviderType: m5.large
+ VCPUs: 2
+ RAM: 8GiB
+ AddedScratch: 50GB
+ Price: 0.096
+ c5xlarge:
+ ProviderType: c5.xlarge
+ VCPUs: 4
+ RAM: 8GiB
+ AddedScratch: 100GB
+ Price: 0.17
+ m5xlarge:
+ ProviderType: m5.xlarge
+ VCPUs: 4
+ RAM: 16GiB
+ AddedScratch: 100GB
+ Price: 0.192
+ m5xlarge_extradisk:
+ ProviderType: m5.xlarge
+ VCPUs: 4
+ RAM: 16GiB
+ AddedScratch: 400GB
+ Price: 0.193
+ c52xlarge:
+ ProviderType: c5.2xlarge
+ VCPUs: 8
+ RAM: 16GiB
+ AddedScratch: 200GB
+ Price: 0.34
+ m52xlarge:
+ ProviderType: m5.2xlarge
+ VCPUs: 8
+ RAM: 32GiB
+ AddedScratch: 200GB
+ Price: 0.384
+ c54xlarge:
+ ProviderType: c5.4xlarge
+ VCPUs: 16
+ RAM: 32GiB
+ AddedScratch: 400GB
+ Price: 0.68
+ m54xlarge:
+ ProviderType: m5.4xlarge
+ VCPUs: 16
+ RAM: 64GiB
+ AddedScratch: 400GB
+ Price: 0.768
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+aws_credentials:
+ region: __LE_AWS_REGION__
+ access_key_id: __LE_AWS_ACCESS_KEY_ID__
+ secret_access_key: __LE_AWS_SECRET_ACCESS_KEY__
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### LETSENCRYPT
+letsencrypt:
+ use_package: true
+ pkgs:
+ - certbot: latest
+ - python3-certbot-dns-route53
+ config:
+ server: https://acme-v02.api.letsencrypt.org/directory
+ email: __INITIAL_USER_EMAIL__
+ authenticator: dns-route53
+ agree-tos: true
+ keep-until-expiring: true
+ expand: true
+ max-log-backups: 0
+ deploy-hook: systemctl reload nginx
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### LETSENCRYPT
+letsencrypt:
+ domainsets:
+ controller.__CLUSTER__.__DOMAIN__:
+ - __CLUSTER__.__DOMAIN__
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### LETSENCRYPT
+letsencrypt:
+ domainsets:
+ keepproxy.__CLUSTER__.__DOMAIN__:
+ - keep.__CLUSTER__.__DOMAIN__
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### LETSENCRYPT
+letsencrypt:
+ domainsets:
+ download.__CLUSTER__.__DOMAIN__:
+ - download.__CLUSTER__.__DOMAIN__
+ collections.__CLUSTER__.__DOMAIN__:
+ - collections.__CLUSTER__.__DOMAIN__
+ - '*.collections.__CLUSTER__.__DOMAIN__'
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### LETSENCRYPT
+letsencrypt:
+ domainsets:
+ webshell.__CLUSTER__.__DOMAIN__:
+ - webshell.__CLUSTER__.__DOMAIN__
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### LETSENCRYPT
+letsencrypt:
+ domainsets:
+ websocket.__CLUSTER__.__DOMAIN__:
+ - ws.__CLUSTER__.__DOMAIN__
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### LETSENCRYPT
+letsencrypt:
+ domainsets:
+ workbench2.__CLUSTER__.__DOMAIN__:
+ - workbench2.__CLUSTER__.__DOMAIN__
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### LETSENCRYPT
+letsencrypt:
+ domainsets:
+ workbench.__CLUSTER__.__DOMAIN__:
+ - workbench.__CLUSTER__.__DOMAIN__
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### ARVADOS
+arvados:
+ config:
+ group: www-data
+
+### NGINX
+nginx:
+ ### SITES
+ servers:
+ managed:
+ arvados_api.conf:
+ enabled: true
+ overwrite: true
+ config:
+ - server:
+ - listen: 'localhost:8004'
+ - server_name: api
+ - root: /var/www/arvados-api/current/public
+ - index: index.html index.htm
+ - access_log: /var/log/nginx/api.__CLUSTER__.__DOMAIN__-upstream.access.log combined
+ - error_log: /var/log/nginx/api.__CLUSTER__.__DOMAIN__-upstream.error.log
+ - passenger_enabled: 'on'
+ - client_max_body_size: 128m
### NGINX
nginx:
- ### SERVER
- server:
- config:
- ### STREAMS
- http:
- upstream collections_downloads_upstream:
- - server: 'collections.internal:9002 fail_timeout=10s'
-
servers:
managed:
### DEFAULT
- arvados_collections_download_default:
+ arvados_collections_default.conf:
enabled: true
overwrite: true
config:
- server:
- - server_name: collections.__CLUSTER__.__DOMAIN__ download.__CLUSTER__.__DOMAIN__
+ - server_name: '~^(.*\.)?collections\.__CLUSTER__\.__DOMAIN__'
- listen:
- 80
- - location /.well-known:
- - root: /var/www
- location /:
- return: '301 https://$host$request_uri'
- ### COLLECTIONS / DOWNLOAD
- arvados_collections_download_ssl:
+ ### COLLECTIONS
+ arvados_collections_ssl.conf:
enabled: true
overwrite: true
+ requires:
+ __CERT_REQUIRES__
config:
- server:
- - server_name: collections.__CLUSTER__.__DOMAIN__ download.__CLUSTER__.__DOMAIN__
+ - server_name: '~^(.*\.)?collections\.__CLUSTER__\.__DOMAIN__'
- listen:
- - __HOST_SSL_PORT__ http2 ssl
+ - __KEEPWEB_EXT_SSL_PORT__ http2 ssl
- index: index.html index.htm
- location /:
- proxy_pass: 'http://collections_downloads_upstream'
- client_max_body_size: 0
- proxy_http_version: '1.1'
- proxy_request_buffering: 'off'
- - include: 'snippets/arvados-snakeoil.conf'
+ - include: snippets/ssl_hardening_default.conf
+ - ssl_certificate: __CERT_PEM__
+ - ssl_certificate_key: __CERT_KEY__
- access_log: /var/log/nginx/collections.__CLUSTER__.__DOMAIN__.access.log combined
- error_log: /var/log/nginx/collections.__CLUSTER__.__DOMAIN__.error.log
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### NGINX
+nginx:
+ ### SERVER
+ server:
+ config:
+ ### STREAMS
+ http:
+ 'geo $external_client':
+ default: 1
+ '127.0.0.0/8': 0
+ '__CLUSTER_INT_CIDR__': 0
+ upstream controller_upstream:
+ - server: 'localhost:8003 fail_timeout=10s'
+
+ ### SITES
+ servers:
+ managed:
+ ### DEFAULT
+ arvados_controller_default.conf:
+ enabled: true
+ overwrite: true
+ config:
+ - server:
+ - server_name: __CLUSTER__.__DOMAIN__
+ - listen:
+ - 80 default
+ - location /.well-known:
+ - root: /var/www
+ - location /:
+ - return: '301 https://$host$request_uri'
+
+ arvados_controller_ssl.conf:
+ enabled: true
+ overwrite: true
+ requires:
+ __CERT_REQUIRES__
+ config:
+ - server:
+ - server_name: __CLUSTER__.__DOMAIN__
+ - listen:
+ - __CONTROLLER_EXT_SSL_PORT__ http2 ssl
+ - index: index.html index.htm
+ - location /:
+ - proxy_pass: 'http://controller_upstream'
+ - proxy_read_timeout: 300
+ - proxy_connect_timeout: 90
+ - proxy_redirect: 'off'
+ - proxy_set_header: X-Forwarded-Proto https
+ - proxy_set_header: 'Host $http_host'
+ - proxy_set_header: 'X-Real-IP $remote_addr'
+ - proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'
+ - proxy_set_header: 'X-External-Client $external_client'
+ - include: snippets/ssl_hardening_default.conf
+ - ssl_certificate: __CERT_PEM__
+ - ssl_certificate_key: __CERT_KEY__
+ - access_log: /var/log/nginx/controller.__CLUSTER__.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/controller.__CLUSTER__.__DOMAIN__.error.log
+ - client_max_body_size: 128m
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### NGINX
+nginx:
+ servers:
+ managed:
+ ### DEFAULT
+ arvados_download_default.conf:
+ enabled: true
+ overwrite: true
+ config:
+ - server:
+ - server_name: download.__CLUSTER__.__DOMAIN__
+ - listen:
+ - 80
+ - location /:
+ - return: '301 https://$host$request_uri'
+
+ ### DOWNLOAD
+ arvados_download_ssl.conf:
+ enabled: true
+ overwrite: true
+ requires:
+ __CERT_REQUIRES__
+ config:
+ - server:
+ - server_name: download.__CLUSTER__.__DOMAIN__
+ - listen:
+ - __KEEPWEB_EXT_SSL_PORT__ http2 ssl
+ - index: index.html index.htm
+ - location /:
+ - proxy_pass: 'http://collections_downloads_upstream'
+ - proxy_read_timeout: 90
+ - proxy_connect_timeout: 90
+ - proxy_redirect: 'off'
+ - proxy_set_header: X-Forwarded-Proto https
+ - proxy_set_header: 'Host $http_host'
+ - proxy_set_header: 'X-Real-IP $remote_addr'
+ - proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'
+ - proxy_buffering: 'off'
+ - client_max_body_size: 0
+ - proxy_http_version: '1.1'
+ - proxy_request_buffering: 'off'
+ - include: snippets/ssl_hardening_default.conf
+ - ssl_certificate: __CERT_PEM__
+ - ssl_certificate_key: __CERT_KEY__
+ - access_log: /var/log/nginx/download.__CLUSTER__.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/download.__CLUSTER__.__DOMAIN__.error.log
### STREAMS
http:
upstream keepproxy_upstream:
- - server: 'keep.internal:25100 fail_timeout=10s'
+ - server: 'localhost:25107 fail_timeout=10s'
servers:
managed:
### DEFAULT
- arvados_keepproxy_default:
+ arvados_keepproxy_default.conf:
enabled: true
overwrite: true
config:
- server_name: keep.__CLUSTER__.__DOMAIN__
- listen:
- 80
- - location /.well-known:
- - root: /var/www
- location /:
- return: '301 https://$host$request_uri'
- arvados_keepproxy_ssl:
+ arvados_keepproxy_ssl.conf:
enabled: true
overwrite: true
+ requires:
+ __CERT_REQUIRES__
config:
- server:
- server_name: keep.__CLUSTER__.__DOMAIN__
- listen:
- - __HOST_SSL_PORT__ http2 ssl
+ - __KEEP_EXT_SSL_PORT__ http2 ssl
- index: index.html index.htm
- location /:
- proxy_pass: 'http://keepproxy_upstream'
- client_max_body_size: 64M
- proxy_http_version: '1.1'
- proxy_request_buffering: 'off'
- - include: 'snippets/arvados-snakeoil.conf'
+ - include: snippets/ssl_hardening_default.conf
+ - ssl_certificate: __CERT_PEM__
+ - ssl_certificate_key: __CERT_KEY__
- access_log: /var/log/nginx/keepproxy.__CLUSTER__.__DOMAIN__.access.log combined
- error_log: /var/log/nginx/keepproxy.__CLUSTER__.__DOMAIN__.error.log
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+# Keepweb upstream is common to both downloads and collections
+### NGINX
+nginx:
+ ### SERVER
+ server:
+ config:
+ ### STREAMS
+ http:
+ upstream collections_downloads_upstream:
+ - server: 'localhost:9002 fail_timeout=10s'
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### NGINX
+nginx:
+ install_from_phusionpassenger: true
+ lookup:
+ passenger_package: libnginx-mod-http-passenger
+ passenger_config_file: /etc/nginx/conf.d/mod-http-passenger.conf
+
+ ### SNIPPETS
+ snippets:
+ # Based on https://ssl-config.mozilla.org/#server=nginx&version=1.14.2&config=intermediate&openssl=1.1.1d&guideline=5.4
+ ssl_hardening_default.conf:
+ - ssl_session_timeout: 1d
+ - ssl_session_cache: 'shared:arvadosSSL:10m'
+ - ssl_session_tickets: 'off'
+
+ # intermediate configuration
+ - ssl_protocols: TLSv1.2 TLSv1.3
+ - ssl_ciphers: ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
+ - ssl_prefer_server_ciphers: 'off'
+
+ # HSTS (ngx_http_headers_module is required) (63072000 seconds)
+ - add_header: 'Strict-Transport-Security "max-age=63072000" always'
+
+ # OCSP stapling
+ - ssl_stapling: 'on'
+ - ssl_stapling_verify: 'on'
+
+ # verify chain of trust of OCSP response using Root CA and Intermediate certs
+ # - ssl_trusted_certificate /path/to/root_CA_cert_plus_intermediates
+
+ # curl https://ssl-config.mozilla.org/ffdhe2048.txt > /path/to/dhparam
+ # - ssl_dhparam: /path/to/dhparam
+
+ # replace with the IP address of your resolver
+ # - resolver: 127.0.0.1
+
+ ### SERVER
+ server:
+ config:
+ include: 'modules-enabled/*.conf'
+ worker_processes: 4
+
+ ### SITES
+ servers:
+ managed:
+ # Remove default webserver
+ default:
+ enabled: false
### STREAMS
http:
upstream webshell_upstream:
- - server: 'shell.internal:4200 fail_timeout=10s'
+ - server: 'localhost:4200 fail_timeout=10s'
### SITES
servers:
managed:
- arvados_webshell_default:
+ arvados_webshell_default.conf:
enabled: true
overwrite: true
config:
- server_name: webshell.__CLUSTER__.__DOMAIN__
- listen:
- 80
- - location /.well-known:
- - root: /var/www
- location /:
- return: '301 https://$host$request_uri'
- arvados_webshell_ssl:
+ arvados_webshell_ssl.conf:
enabled: true
overwrite: true
+ requires:
+ __CERT_REQUIRES__
config:
- server:
- server_name: webshell.__CLUSTER__.__DOMAIN__
- listen:
- - __HOST_SSL_PORT__ http2 ssl
+ - __WEBSHELL_EXT_SSL_PORT__ http2 ssl
- index: index.html index.htm
- location /shell.__CLUSTER__.__DOMAIN__:
- proxy_pass: 'http://webshell_upstream'
- add_header: "'Access-Control-Allow-Methods' 'GET, POST, OPTIONS'"
- add_header: "'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type'"
- - include: 'snippets/arvados-snakeoil.conf'
+ - include: snippets/ssl_hardening_default.conf
+ - ssl_certificate: __CERT_PEM__
+ - ssl_certificate_key: __CERT_KEY__
- access_log: /var/log/nginx/webshell.__CLUSTER__.__DOMAIN__.access.log combined
- error_log: /var/log/nginx/webshell.__CLUSTER__.__DOMAIN__.error.log
### STREAMS
http:
upstream websocket_upstream:
- - server: 'ws.internal:8005 fail_timeout=10s'
+ - server: 'localhost:8005 fail_timeout=10s'
servers:
managed:
### DEFAULT
- arvados_websocket_default:
+ arvados_websocket_default.conf:
enabled: true
overwrite: true
config:
- server_name: ws.__CLUSTER__.__DOMAIN__
- listen:
- 80
- - location /.well-known:
- - root: /var/www
- location /:
- return: '301 https://$host$request_uri'
- arvados_websocket_ssl:
+ arvados_websocket_ssl.conf:
enabled: true
overwrite: true
+ requires:
+ __CERT_REQUIRES__
config:
- server:
- server_name: ws.__CLUSTER__.__DOMAIN__
- listen:
- - __HOST_SSL_PORT__ http2 ssl
+ - __CONTROLLER_EXT_SSL_PORT__ http2 ssl
- index: index.html index.htm
- location /:
- proxy_pass: 'http://websocket_upstream'
- client_max_body_size: 64M
- proxy_http_version: '1.1'
- proxy_request_buffering: 'off'
- - include: 'snippets/arvados-snakeoil.conf'
+ - include: snippets/ssl_hardening_default.conf
+ - ssl_certificate: __CERT_PEM__
+ - ssl_certificate_key: __CERT_KEY__
- access_log: /var/log/nginx/ws.__CLUSTER__.__DOMAIN__.access.log combined
- error_log: /var/log/nginx/ws.__CLUSTER__.__DOMAIN__.error.log
servers:
managed:
### DEFAULT
- arvados_workbench2_default:
+ arvados_workbench2_default.conf:
enabled: true
overwrite: true
config:
- server_name: workbench2.__CLUSTER__.__DOMAIN__
- listen:
- 80
- - location /.well-known:
- - root: /var/www
- location /:
- return: '301 https://$host$request_uri'
- arvados_workbench2_ssl:
+ arvados_workbench2_ssl.conf:
enabled: true
overwrite: true
+ requires:
+ __CERT_REQUIRES__
config:
- server:
- server_name: workbench2.__CLUSTER__.__DOMAIN__
- listen:
- - __HOST_SSL_PORT__ http2 ssl
+ - __CONTROLLER_EXT_SSL_PORT__ http2 ssl
- index: index.html index.htm
- location /:
- root: /var/www/arvados-workbench2/workbench2
- 'if (-f $document_root/maintenance.html)':
- return: 503
- location /config.json:
- - return: {{ "200 '" ~ '{"API_HOST":"__CLUSTER__.__DOMAIN__:__HOST_SSL_PORT__"}' ~ "'" }}
- - include: 'snippets/arvados-snakeoil.conf'
+ - return: {{ "200 '" ~ '{"API_HOST":"__CLUSTER__.__DOMAIN__:__CONTROLLER_EXT_SSL_PORT__"}' ~ "'" }}
+ - include: snippets/ssl_hardening_default.conf
+ - ssl_certificate: __CERT_PEM__
+ - ssl_certificate_key: __CERT_KEY__
- access_log: /var/log/nginx/workbench2.__CLUSTER__.__DOMAIN__.access.log combined
- error_log: /var/log/nginx/workbench2.__CLUSTER__.__DOMAIN__.error.log
### STREAMS
http:
upstream workbench_upstream:
- - server: 'workbench.internal:9000 fail_timeout=10s'
+ - server: 'localhost:9000 fail_timeout=10s'
### SITES
servers:
managed:
### DEFAULT
- arvados_workbench_default:
+ arvados_workbench_default.conf:
enabled: true
overwrite: true
config:
- server_name: workbench.__CLUSTER__.__DOMAIN__
- listen:
- 80
- - location /.well-known:
- - root: /var/www
- location /:
- return: '301 https://$host$request_uri'
- arvados_workbench_ssl:
+ arvados_workbench_ssl.conf:
enabled: true
overwrite: true
+ requires:
+ __CERT_REQUIRES__
config:
- server:
- server_name: workbench.__CLUSTER__.__DOMAIN__
- listen:
- - __HOST_SSL_PORT__ http2 ssl
+ - __CONTROLLER_EXT_SSL_PORT__ http2 ssl
- index: index.html index.htm
- location /:
- proxy_pass: 'http://workbench_upstream'
- proxy_set_header: 'Host $http_host'
- proxy_set_header: 'X-Real-IP $remote_addr'
- proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'
- - include: 'snippets/arvados-snakeoil.conf'
+ - include: snippets/ssl_hardening_default.conf
+ - ssl_certificate: __CERT_PEM__
+ - ssl_certificate_key: __CERT_KEY__
- access_log: /var/log/nginx/workbench.__CLUSTER__.__DOMAIN__.access.log combined
- error_log: /var/log/nginx/workbench.__CLUSTER__.__DOMAIN__.error.log
overwrite: true
config:
- server:
- - listen: 'workbench.internal:9000'
+ - listen: 'localhost:9000'
- server_name: workbench
- root: /var/www/arvados-workbench/current/public
- index: index.html index.htm
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### POSTGRESQL
+postgres:
+ use_upstream_repo: true
+ version: '11'
+ postgresconf: |-
+ listen_addresses = '*' # listen on all interfaces
+ acls:
+ - ['local', 'all', 'postgres', 'peer']
+ - ['local', 'all', 'all', 'peer']
+ - ['host', 'all', 'all', '127.0.0.1/32', 'md5']
+ - ['host', 'all', 'all', '::1/128', 'md5']
+ - ['host', '__CLUSTER___arvados', '__CLUSTER___arvados', '127.0.0.1/32']
+ - ['host', '__CLUSTER___arvados', '__CLUSTER___arvados', '__CONTROLLER_INT_IP__/32']
+ users:
+ __CLUSTER___arvados:
+ ensure: present
+ password: __DATABASE_PASSWORD__
+
+ # tablespaces:
+ # arvados_tablespace:
+ # directory: /path/to/some/tbspace/arvados_tbsp
+ # owner: arvados
+
+ databases:
+ __CLUSTER___arvados:
+ owner: __CLUSTER___arvados
+ template: template0
+ lc_ctype: en_US.utf8
+ lc_collate: en_US.utf8
+ # tablespace: arvados_tablespace
+ schemas:
+ public:
+ owner: __CLUSTER___arvados
+ extensions:
+ pg_trgm:
+ if_not_exists: true
+ schema: public
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+{%- set aws_credentials = pillar.get('aws_credentials', {}) %}
+
+{%- if aws_credentials %}
+extra_extra_aws_credentials_root_aws_config_file_managed:
+ file.managed:
+ - name: /root/.aws/config
+ - makedirs: true
+ - user: root
+ - group: root
+ - mode: '0600'
+ - replace: false
+ - contents: |
+ [default]
+      region = {{ aws_credentials.region }}
+
+extra_extra_aws_credentials_root_aws_credentials_file_managed:
+ file.managed:
+ - name: /root/.aws/credentials
+ - makedirs: true
+ - user: root
+ - group: root
+ - mode: '0600'
+ - replace: false
+ - contents: |
+ [default]
+ aws_access_key_id = {{ aws_credentials.access_key_id }}
+ aws_secret_access_key = {{ aws_credentials.secret_access_key }}
+{%- endif %}
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+{%- set curr_tpldir = tpldir %}
+{%- set tpldir = 'arvados' %}
+{%- from "arvados/map.jinja" import arvados with context %}
+{%- set tpldir = curr_tpldir %}
+
+# Crude, but functional
+extra_extra_hosts_entries_etc_hosts_database_host_present:
+ host.present:
+ - ip: __DATABASE_INT_IP__
+ - names:
+ - db.{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+ - database.{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+
+extra_extra_hosts_entries_etc_hosts_api_host_present:
+ host.present:
+ - ip: __CONTROLLER_INT_IP__
+ - names:
+ - {{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+
+extra_extra_hosts_entries_etc_hosts_websocket_host_present:
+ host.present:
+ - ip: __CONTROLLER_INT_IP__
+ - names:
+ - ws.{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+
+extra_extra_hosts_entries_etc_hosts_workbench_host_present:
+ host.present:
+ - ip: __WORKBENCH1_INT_IP__
+ - names:
+ - workbench.{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+
+extra_extra_hosts_entries_etc_hosts_workbench2_host_present:
+ host.present:
+ - ip: __WORKBENCH1_INT_IP__
+ - names:
+ - workbench2.{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+
+extra_extra_hosts_entries_etc_hosts_keepproxy_host_present:
+ host.present:
+ - ip: __KEEP_INT_IP__
+ - names:
+ - keep.{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+
+extra_extra_hosts_entries_etc_hosts_keepweb_host_present:
+ host.present:
+ - ip: __KEEP_INT_IP__
+ - names:
+ - download.{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+ - collections.{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+
+extra_extra_hosts_entries_etc_hosts_webshell_host_present:
+ host.present:
+ - ip: __WEBSHELL_INT_IP__
+ - names:
+ - webshell.{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+
+extra_extra_hosts_entries_etc_hosts_shell_host_present:
+ host.present:
+ - ip: __SHELL_INT_IP__
+ - names:
+ - shell.{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+
+extra_extra_hosts_entries_etc_hosts_keep0_host_present:
+ host.present:
+ - ip: __KEEPSTORE0_INT_IP__
+ - names:
+ - keep0.{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+
+extra_extra_hosts_entries_etc_hosts_keep1_host_present:
+ host.present:
+ - ip: __KEEPSTORE1_INT_IP__
+ - names:
+ - keep1.{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
--- /dev/null
+Single host with multiple hostnames
+===================================
+
+These files let you set up Arvados on a single host, using a different
+hostname for each of its components' nginx virtual hosts.
+
+The hostnames are composed from the "CLUSTER" and "DOMAIN" variables set in
+the `local.params` file.
+
+The virtual host names that will be used are:
+
+* CLUSTER.DOMAIN
+* collections.CLUSTER.DOMAIN
+* download.CLUSTER.DOMAIN
+* keep.CLUSTER.DOMAIN
+* keep0.CLUSTER.DOMAIN
+* webshell.CLUSTER.DOMAIN
+* workbench.CLUSTER.DOMAIN
+* workbench2.CLUSTER.DOMAIN
+* ws.CLUSTER.DOMAIN
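As an illustration, the name composition above can be sketched in a few lines
of Python (the values "xarv1" and "example.com" are made-up examples, not
defaults shipped with the installer):

```python
# Sketch of how the per-component virtual host names are derived from
# CLUSTER and DOMAIN. The bare <cluster>.<domain> name serves the controller;
# every other component gets its own subdomain under it.
COMPONENTS = ["collections", "download", "keep", "keep0", "webshell",
              "workbench", "workbench2", "ws"]

def virtual_hosts(cluster, domain):
    base = f"{cluster}.{domain}"
    return [base] + [f"{c}.{base}" for c in COMPONENTS]

vhosts = virtual_hosts("xarv1", "example.com")
# vhosts[0] == 'xarv1.example.com'; vhosts[-1] == 'ws.xarv1.example.com'
```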
--- /dev/null
+# -*- coding: utf-8 -*-
+# vim: ft=yaml
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+# The variables commented out are the default values that the formula uses.
+# The uncommented values are REQUIRED values. If you don't set them, running
+# this formula will fail.
+arvados:
+ ### GENERAL CONFIG
+ version: '__VERSION__'
+ ## It makes little sense to disable this flag, but you can, if you want :)
+ # use_upstream_repo: true
+
+ ## Repo URL is built with grains values. If desired, it can be completely
+ ## overwritten with the pillar parameter 'repo_url'
+ # repo:
+ # humanname: Arvados Official Repository
+
+ release: __RELEASE__
+
+ ## IMPORTANT!!!!!
+  ## api, workbench and shell require some gems, so make sure Ruby and its
+  ## dependencies are installed so the gems can be installed and compiled.
+  ## We default to `false` in these two variables because it's expected that
+  ## you already manage OS packages with some other tool and you don't want
+  ## us messing with your setup.
+ ruby:
+
+ ## We set these to `true` here for testing purposes.
+ ## They both default to `false`.
+ manage_ruby: true
+ manage_gems_deps: true
+ # pkg: ruby
+ # gems_deps:
+ # - curl
+ # - g++
+ # - gcc
+ # - git
+ # - libcurl4
+ # - libcurl4-gnutls-dev
+ # - libpq-dev
+ # - libxml2
+ # - libxml2-dev
+ # - make
+ # - python3-dev
+ # - ruby-dev
+ # - zlib1g-dev
+
+ # config:
+ # file: /etc/arvados/config.yml
+ # user: root
+ ## IMPORTANT!!!!!
+    ## If you're installing any of the Rails apps (api, workbench), the group
+ ## should be set to that of the web server, usually `www-data`
+ # group: root
+ # mode: 640
+
+ ### ARVADOS CLUSTER CONFIG
+ cluster:
+ name: __CLUSTER__
+ domain: __DOMAIN__
+
+ database:
+ # max concurrent connections per arvados server daemon
+ # connection_pool_max: 32
+ name: __CLUSTER___arvados
+ host: 127.0.0.1
+ password: "__DATABASE_PASSWORD__"
+ user: __CLUSTER___arvados
+ extra_conn_params:
+ client_encoding: UTF8
+      # CentOS 7 does not enable SSL by default, so we disable it here for
+      # formula-testing purposes only. Do not do this in production; configure
+      # Postgres certificates correctly instead.
+ {%- if grains.os_family in ('RedHat',) %}
+ sslmode: disable
+ {%- endif %}
+
+ tls:
+ # certificate: ''
+ # key: ''
+ # When using arvados-snakeoil certs set insecure: true
+ insecure: false
+
+ resources:
+ virtual_machines:
+ shell:
+ name: webshell
+ backend: 127.0.1.1
+ port: 4200
+
+ ### TOKENS
+ tokens:
+ system_root: __SYSTEM_ROOT_TOKEN__
+ management: __MANAGEMENT_TOKEN__
+ anonymous_user: __ANONYMOUS_USER_TOKEN__
+
+ ### KEYS
+ secrets:
+ blob_signing_key: __BLOB_SIGNING_KEY__
+ workbench_secret_key: __WORKBENCH_SECRET_KEY__
+
+ Login:
+ Test:
+ Enable: true
+ Users:
+ __INITIAL_USER__:
+ Email: __INITIAL_USER_EMAIL__
+ Password: __INITIAL_USER_PASSWORD__
+
+ ### VOLUMES
+ ## This should usually match all your `keepstore` instances
+ Volumes:
+      # The volume name is composed as
+      # <cluster>-nyw5e-<volume>
+ __CLUSTER__-nyw5e-000000000000000:
+ AccessViaHosts:
+ 'http://keep0.__CLUSTER__.__DOMAIN__:25107':
+ ReadOnly: false
+ Replication: 2
+ Driver: Directory
+ DriverParameters:
+ Root: /tmp
+
+ Users:
+ NewUsersAreActive: true
+ AutoAdminFirstUser: true
+ AutoSetupNewUsers: true
+ AutoSetupNewUsersWithRepository: true
+
+ Services:
+ Controller:
+ ExternalURL: 'https://__CLUSTER__.__DOMAIN__:__CONTROLLER_EXT_SSL_PORT__'
+ InternalURLs:
+ 'http://controller.internal:8003': {}
+ DispatchCloud:
+ InternalURLs:
+ 'http://__CLUSTER__.__DOMAIN__:9006': {}
+ Keepbalance:
+ InternalURLs:
+ 'http://__CLUSTER__.__DOMAIN__:9005': {}
+ Keepproxy:
+ ExternalURL: 'https://keep.__CLUSTER__.__DOMAIN__:__CONTROLLER_EXT_SSL_PORT__'
+ InternalURLs:
+ 'http://keep.internal:25100': {}
+ Keepstore:
+ InternalURLs:
+ 'http://keep0.__CLUSTER__.__DOMAIN__:25107': {}
+ RailsAPI:
+ InternalURLs:
+ 'http://api.internal:8004': {}
+ WebDAV:
+ ExternalURL: 'https://collections.__CLUSTER__.__DOMAIN__:__CONTROLLER_EXT_SSL_PORT__'
+ InternalURLs:
+ 'http://collections.internal:9002': {}
+ WebDAVDownload:
+ ExternalURL: 'https://download.__CLUSTER__.__DOMAIN__:__CONTROLLER_EXT_SSL_PORT__'
+ WebShell:
+ ExternalURL: 'https://webshell.__CLUSTER__.__DOMAIN__:__CONTROLLER_EXT_SSL_PORT__'
+ Websocket:
+ ExternalURL: 'wss://ws.__CLUSTER__.__DOMAIN__/websocket'
+ InternalURLs:
+ 'http://ws.internal:8005': {}
+ Workbench1:
+ ExternalURL: 'https://workbench.__CLUSTER__.__DOMAIN__:__CONTROLLER_EXT_SSL_PORT__'
+ Workbench2:
+ ExternalURL: 'https://workbench2.__CLUSTER__.__DOMAIN__:__CONTROLLER_EXT_SSL_PORT__'
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+docker:
+ pkg:
+ docker:
+ use_upstream: package
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+locale:
+ present:
+ - "en_US.UTF-8 UTF-8"
+ default:
+    # Note: on Debian systems, don't write the second 'UTF-8' here or you will
+    # hit Salt problems like: LookupError: unknown encoding: utf_8_utf_8
+    # Restart the minion after correcting this!
+ name: 'en_US.UTF-8'
+ requires: 'en_US.UTF-8 UTF-8'
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+{%- if grains.os_family in ('RedHat',) %}
+ {%- set group = 'nginx' %}
+{%- else %}
+ {%- set group = 'www-data' %}
+{%- endif %}
+
+### ARVADOS
+arvados:
+ config:
+ group: {{ group }}
+
+### NGINX
+nginx:
+ ### SITES
+ servers:
+ managed:
+ arvados_api.conf:
+ enabled: true
+ overwrite: true
+ config:
+ - server:
+ - listen: 'api.internal:8004'
+ - server_name: api
+ - root: /var/www/arvados-api/current/public
+ - index: index.html index.htm
+ - access_log: /var/log/nginx/api.__CLUSTER__.__DOMAIN__-upstream.access.log combined
+ - error_log: /var/log/nginx/api.__CLUSTER__.__DOMAIN__-upstream.error.log
+ - passenger_enabled: 'on'
+ - client_max_body_size: 128m
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### NGINX
+nginx:
+ ### SERVER
+ server:
+ config:
+ ### STREAMS
+ http:
+ 'geo $external_client':
+ default: 1
+ '127.0.0.0/8': 0
+ upstream controller_upstream:
+ - server: 'controller.internal:8003 fail_timeout=10s'
+
+ ### SITES
+ servers:
+ managed:
+ ### DEFAULT
+ arvados_controller_default.conf:
+ enabled: true
+ overwrite: true
+ config:
+ - server:
+ - server_name: __CLUSTER__.__DOMAIN__
+ - listen:
+ - 80 default
+ - location /.well-known:
+ - root: /var/www
+ - location /:
+ - return: '301 https://$host$request_uri'
+
+ arvados_controller_ssl.conf:
+ enabled: true
+ overwrite: true
+ requires:
+ __CERT_REQUIRES__
+ config:
+ - server:
+ - server_name: __CLUSTER__.__DOMAIN__
+ - listen:
+ - __CONTROLLER_EXT_SSL_PORT__ http2 ssl
+ - index: index.html index.htm
+ - location /:
+ - proxy_pass: 'http://controller_upstream'
+ - proxy_read_timeout: 300
+ - proxy_connect_timeout: 90
+ - proxy_redirect: 'off'
+ - proxy_set_header: X-Forwarded-Proto https
+ - proxy_set_header: 'Host $http_host'
+ - proxy_set_header: 'X-Real-IP $remote_addr'
+ - proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'
+ - proxy_set_header: 'X-External-Client $external_client'
+ - include: snippets/ssl_hardening_default.conf
+ - ssl_certificate: __CERT_PEM__
+ - ssl_certificate_key: __CERT_KEY__
+ - access_log: /var/log/nginx/controller.__CLUSTER__.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/controller.__CLUSTER__.__DOMAIN__.error.log
+ - client_max_body_size: 128m
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### NGINX
+nginx:
+ ### SERVER
+ server:
+ config:
+ ### STREAMS
+ http:
+ upstream keepproxy_upstream:
+ - server: 'keep.internal:25100 fail_timeout=10s'
+
+ servers:
+ managed:
+ ### DEFAULT
+ arvados_keepproxy_default.conf:
+ enabled: true
+ overwrite: true
+ config:
+ - server:
+ - server_name: keep.__CLUSTER__.__DOMAIN__
+ - listen:
+ - 80
+ - location /.well-known:
+ - root: /var/www
+ - location /:
+ - return: '301 https://$host$request_uri'
+
+ arvados_keepproxy_ssl.conf:
+ enabled: true
+ overwrite: true
+ requires:
+ file: extra_custom_certs_file_copy_arvados-keepproxy.pem
+ config:
+ - server:
+ - server_name: keep.__CLUSTER__.__DOMAIN__
+ - listen:
+ - __CONTROLLER_EXT_SSL_PORT__ http2 ssl
+ - index: index.html index.htm
+ - location /:
+ - proxy_pass: 'http://keepproxy_upstream'
+ - proxy_read_timeout: 90
+ - proxy_connect_timeout: 90
+ - proxy_redirect: 'off'
+ - proxy_set_header: X-Forwarded-Proto https
+ - proxy_set_header: 'Host $http_host'
+ - proxy_set_header: 'X-Real-IP $remote_addr'
+ - proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'
+ - proxy_buffering: 'off'
+ - client_body_buffer_size: 64M
+ - client_max_body_size: 64M
+ - proxy_http_version: '1.1'
+ - proxy_request_buffering: 'off'
+ - include: snippets/ssl_hardening_default.conf
+ - ssl_certificate: /etc/nginx/ssl/arvados-keepproxy.pem
+ - ssl_certificate_key: /etc/nginx/ssl/arvados-keepproxy.key
+ - access_log: /var/log/nginx/keepproxy.__CLUSTER__.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/keepproxy.__CLUSTER__.__DOMAIN__.error.log
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### NGINX
+nginx:
+ ### SERVER
+ server:
+ config:
+ ### STREAMS
+ http:
+ upstream collections_downloads_upstream:
+ - server: 'collections.internal:9002 fail_timeout=10s'
+
+ servers:
+ managed:
+ ### DEFAULT
+ arvados_collections_download_default.conf:
+ enabled: true
+ overwrite: true
+ config:
+ - server:
+ - server_name: collections.__CLUSTER__.__DOMAIN__ download.__CLUSTER__.__DOMAIN__
+ - listen:
+ - 80
+ - location /.well-known:
+ - root: /var/www
+ - location /:
+ - return: '301 https://$host$request_uri'
+
+ ### COLLECTIONS / DOWNLOAD
+ {%- for vh in [
+ 'collections',
+ 'download'
+ ]
+ %}
+ arvados_{{ vh }}.conf:
+ enabled: true
+ overwrite: true
+ requires:
+ file: extra_custom_certs_file_copy_arvados-{{ vh }}.pem
+ config:
+ - server:
+ - server_name: {{ vh }}.__CLUSTER__.__DOMAIN__
+ - listen:
+ - __CONTROLLER_EXT_SSL_PORT__ http2 ssl
+ - index: index.html index.htm
+ - location /:
+ - proxy_pass: 'http://collections_downloads_upstream'
+ - proxy_read_timeout: 90
+ - proxy_connect_timeout: 90
+ - proxy_redirect: 'off'
+ - proxy_set_header: X-Forwarded-Proto https
+ - proxy_set_header: 'Host $http_host'
+ - proxy_set_header: 'X-Real-IP $remote_addr'
+ - proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'
+ - proxy_buffering: 'off'
+ - client_max_body_size: 0
+ - proxy_http_version: '1.1'
+ - proxy_request_buffering: 'off'
+ - include: snippets/ssl_hardening_default.conf
+ - ssl_certificate: /etc/nginx/ssl/arvados-{{ vh }}.pem
+ - ssl_certificate_key: /etc/nginx/ssl/arvados-{{ vh }}.key
+ - access_log: /var/log/nginx/{{ vh }}.__CLUSTER__.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/{{ vh }}.__CLUSTER__.__DOMAIN__.error.log
+ {%- endfor %}
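+      # Each iteration of the loop above renders one vhost file,
+      # arvados_collections.conf and arvados_download.conf, identical except
+      # for server_name, the certificate paths and the log file names.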
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+{%- set passenger_pkg = 'nginx-mod-http-passenger'
+                          if grains.osfinger in ('CentOS Linux-7',) else
+ 'libnginx-mod-http-passenger' %}
+{%- set passenger_mod = '/usr/lib64/nginx/modules/ngx_http_passenger_module.so'
+ if grains.osfinger in ('CentOS Linux-7',) else
+ '/usr/lib/nginx/modules/ngx_http_passenger_module.so' %}
+{%- set passenger_ruby = '/usr/local/rvm/rubies/ruby-2.7.2/bin/ruby'
+ if grains.osfinger in ('CentOS Linux-7', 'Ubuntu-18.04',) else
+ '/usr/bin/ruby' %}
+
+### NGINX
+nginx:
+ install_from_phusionpassenger: true
+ lookup:
+ passenger_package: {{ passenger_pkg }}
+ ### PASSENGER
+ passenger:
+ passenger_ruby: {{ passenger_ruby }}
+
+ ### SERVER
+ server:
+ config:
+ # This is required to get the passenger module loaded
+ # In Debian it can be done with this
+ # include: 'modules-enabled/*.conf'
+ load_module: {{ passenger_mod }}
+
+ worker_processes: 4
+
+ ### SNIPPETS
+ snippets:
+ # Based on https://ssl-config.mozilla.org/#server=nginx&version=1.14.2&config=intermediate&openssl=1.1.1d&guideline=5.4
+ ssl_hardening_default.conf:
+ - ssl_session_timeout: 1d
+ - ssl_session_cache: 'shared:arvadosSSL:10m'
+ - ssl_session_tickets: 'off'
+
+ # intermediate configuration
+ - ssl_protocols: TLSv1.2 TLSv1.3
+ - ssl_ciphers: ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
+ - ssl_prefer_server_ciphers: 'off'
+
+ # HSTS (ngx_http_headers_module is required) (63072000 seconds)
+ - add_header: 'Strict-Transport-Security "max-age=63072000" always'
+
+ # OCSP stapling
+      # FIXME: stapling does not work with self-signed certificates, so it is disabled for tests
+ # - ssl_stapling: 'on'
+ # - ssl_stapling_verify: 'on'
+
+ # verify chain of trust of OCSP response using Root CA and Intermediate certs
+ # - ssl_trusted_certificate /path/to/root_CA_cert_plus_intermediates
+
+ # curl https://ssl-config.mozilla.org/ffdhe2048.txt > /path/to/dhparam
+ # - ssl_dhparam: /path/to/dhparam
+
+ # replace with the IP address of your resolver
+ # - resolver: 127.0.0.1
+
+ ### SITES
+ servers:
+ managed:
+ # Remove default webserver
+ default:
+ enabled: false
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+# This parameter is used here to generate a list of upstreams and virtual
+# hosts. The dict is defined inline for convenience and would normally be
+# managed some other way, but how to orchestrate that is outside the scope of
+# this formula and its examples.
+# These upstreams should match those defined in `arvados:cluster:resources:virtual_machines`
+{% set webshell_virtual_machines = {
+ 'shell': {
+ 'name': 'webshell',
+ 'backend': '127.0.1.1',
+ 'port': 4200,
+ }
+}
+%}
+
+### NGINX
+nginx:
+ ### SERVER
+ server:
+ config:
+
+ ### STREAMS
+ http:
+ {%- for vm, params in webshell_virtual_machines.items() %}
+ {%- set vm_name = params.name | default(vm) %}
+ {%- set vm_backend = params.backend | default(vm_name) %}
+ {%- set vm_port = params.port | default(4200) %}
+
+ upstream {{ vm_name }}_upstream:
+ - server: '{{ vm_backend }}:{{ vm_port }} fail_timeout=10s'
+
+ {%- endfor %}
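+      # With the example dict above, this loop renders a single upstream:
+      #
+      #   upstream webshell_upstream:
+      #     - server: '127.0.1.1:4200 fail_timeout=10s'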
+
+ ### SITES
+ servers:
+ managed:
+ arvados_webshell_default.conf:
+ enabled: true
+ overwrite: true
+ config:
+ - server:
+ - server_name: webshell.__CLUSTER__.__DOMAIN__
+ - listen:
+ - 80
+ - location /.well-known:
+ - root: /var/www
+ - location /:
+ - return: '301 https://$host$request_uri'
+
+ arvados_webshell_ssl.conf:
+ enabled: true
+ overwrite: true
+ requires:
+ file: extra_custom_certs_file_copy_arvados-webshell.pem
+ config:
+ - server:
+ - server_name: webshell.__CLUSTER__.__DOMAIN__
+ - listen:
+ - __CONTROLLER_EXT_SSL_PORT__ http2 ssl
+ - index: index.html index.htm
+ {%- for vm, params in webshell_virtual_machines.items() %}
+ {%- set vm_name = params.name | default(vm) %}
+ - location /{{ vm_name }}:
+ - proxy_pass: 'http://{{ vm_name }}_upstream'
+ - proxy_read_timeout: 90
+ - proxy_connect_timeout: 90
+ - proxy_set_header: 'Host $http_host'
+ - proxy_set_header: 'X-Real-IP $remote_addr'
+ - proxy_set_header: X-Forwarded-Proto https
+ - proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'
+ - proxy_ssl_session_reuse: 'off'
+
+ - "if ($request_method = 'OPTIONS')":
+ - add_header: "'Access-Control-Allow-Origin' '*'"
+ - add_header: "'Access-Control-Allow-Methods' 'GET, POST, OPTIONS'"
+ - add_header: "'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type'"
+ - add_header: "'Access-Control-Max-Age' 1728000"
+ - add_header: "'Content-Type' 'text/plain charset=UTF-8'"
+ - add_header: "'Content-Length' 0"
+ - return: 204
+
+ - "if ($request_method = 'POST')":
+ - add_header: "'Access-Control-Allow-Origin' '*'"
+ - add_header: "'Access-Control-Allow-Methods' 'GET, POST, OPTIONS'"
+ - add_header: "'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type'"
+
+ - "if ($request_method = 'GET')":
+ - add_header: "'Access-Control-Allow-Origin' '*'"
+ - add_header: "'Access-Control-Allow-Methods' 'GET, POST, OPTIONS'"
+ - add_header: "'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type'"
+ {%- endfor %}
+ - include: snippets/ssl_hardening_default.conf
+ - ssl_certificate: /etc/nginx/ssl/arvados-webshell.pem
+ - ssl_certificate_key: /etc/nginx/ssl/arvados-webshell.key
+ - access_log: /var/log/nginx/webshell.__CLUSTER__.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/webshell.__CLUSTER__.__DOMAIN__.error.log
+
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### NGINX
+nginx:
+ ### SERVER
+ server:
+ config:
+ ### STREAMS
+ http:
+ upstream websocket_upstream:
+ - server: 'ws.internal:8005 fail_timeout=10s'
+
+ servers:
+ managed:
+ ### DEFAULT
+ arvados_websocket_default.conf:
+ enabled: true
+ overwrite: true
+ config:
+ - server:
+ - server_name: ws.__CLUSTER__.__DOMAIN__
+ - listen:
+ - 80
+ - location /.well-known:
+ - root: /var/www
+ - location /:
+ - return: '301 https://$host$request_uri'
+
+ arvados_websocket_ssl.conf:
+ enabled: true
+ overwrite: true
+ requires:
+ file: extra_custom_certs_file_copy_arvados-websocket.pem
+ config:
+ - server:
+ - server_name: ws.__CLUSTER__.__DOMAIN__
+ - listen:
+ - __CONTROLLER_EXT_SSL_PORT__ http2 ssl
+ - index: index.html index.htm
+ - location /:
+ - proxy_pass: 'http://websocket_upstream'
+ - proxy_read_timeout: 600
+ - proxy_connect_timeout: 90
+ - proxy_redirect: 'off'
+ - proxy_set_header: 'Host $host'
+ - proxy_set_header: 'X-Real-IP $remote_addr'
+ - proxy_set_header: 'Upgrade $http_upgrade'
+ - proxy_set_header: 'Connection "upgrade"'
+ - proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'
+ - proxy_buffering: 'off'
+ - client_body_buffer_size: 64M
+ - client_max_body_size: 64M
+ - proxy_http_version: '1.1'
+ - proxy_request_buffering: 'off'
+ - include: snippets/ssl_hardening_default.conf
+ - ssl_certificate: /etc/nginx/ssl/arvados-websocket.pem
+ - ssl_certificate_key: /etc/nginx/ssl/arvados-websocket.key
+ - access_log: /var/log/nginx/ws.__CLUSTER__.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/ws.__CLUSTER__.__DOMAIN__.error.log
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: Apache-2.0
+
+{%- if grains.os_family in ('RedHat',) %}
+ {%- set group = 'nginx' %}
+{%- else %}
+ {%- set group = 'www-data' %}
+{%- endif %}
+
+### ARVADOS
+arvados:
+ config:
+ group: {{ group }}
+
+### NGINX
+nginx:
+ ### SITES
+ servers:
+ managed:
+ ### DEFAULT
+ arvados_workbench2_default.conf:
+ enabled: true
+ overwrite: true
+ config:
+ - server:
+ - server_name: workbench2.__CLUSTER__.__DOMAIN__
+ - listen:
+ - 80
+ - location /.well-known:
+ - root: /var/www
+ - location /:
+ - return: '301 https://$host$request_uri'
+
+ arvados_workbench2_ssl.conf:
+ enabled: true
+ overwrite: true
+ requires:
+ file: extra_custom_certs_file_copy_arvados-workbench2.pem
+ config:
+ - server:
+ - server_name: workbench2.__CLUSTER__.__DOMAIN__
+ - listen:
+ - __CONTROLLER_EXT_SSL_PORT__ http2 ssl
+ - index: index.html index.htm
+ - location /:
+ - root: /var/www/arvados-workbench2/workbench2
+ - try_files: '$uri $uri/ /index.html'
+ - 'if (-f $document_root/maintenance.html)':
+ - return: 503
+ - location /config.json:
+ - return: {{ "200 '" ~ '{"API_HOST":"__CLUSTER__.__DOMAIN__:__CONTROLLER_EXT_SSL_PORT__"}' ~ "'" }}
+ - include: snippets/ssl_hardening_default.conf
+ - ssl_certificate: /etc/nginx/ssl/arvados-workbench2.pem
+ - ssl_certificate_key: /etc/nginx/ssl/arvados-workbench2.key
+ - access_log: /var/log/nginx/workbench2.__CLUSTER__.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/workbench2.__CLUSTER__.__DOMAIN__.error.log
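The `location /config.json` return in this vhost concatenates strings in Jinja so the JSON braces never have to be escaped inside YAML. A sketch of the nginx directive that expression evaluates to (plain Python string handling; the `__…__` placeholders are substituted later by the provision script):

```python
# Reproduces the string the Jinja expression builds for the
# /config.json return directive: "200" plus a single-quoted JSON body.
api_host = "__CLUSTER__.__DOMAIN__:__CONTROLLER_EXT_SSL_PORT__"
directive = "200 '" + '{"API_HOST":"%s"}' % api_host + "'"
print(directive)
# → 200 '{"API_HOST":"__CLUSTER__.__DOMAIN__:__CONTROLLER_EXT_SSL_PORT__"}'
```

The outer `"200 '…'"` wrapping keeps the literal braces out of both Jinja's and YAML's way.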
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+{%- if grains.os_family in ('RedHat',) %}
+ {%- set group = 'nginx' %}
+{%- else %}
+ {%- set group = 'www-data' %}
+{%- endif %}
+
+### ARVADOS
+arvados:
+ config:
+ group: {{ group }}
+
+### NGINX
+nginx:
+ ### SERVER
+ server:
+ config:
+
+ ### STREAMS
+ http:
+ upstream workbench_upstream:
+ - server: 'workbench.internal:9000 fail_timeout=10s'
+
+ ### SITES
+ servers:
+ managed:
+ ### DEFAULT
+ arvados_workbench_default.conf:
+ enabled: true
+ overwrite: true
+ config:
+ - server:
+ - server_name: workbench.__CLUSTER__.__DOMAIN__
+ - listen:
+ - 80
+ - location /.well-known:
+ - root: /var/www
+ - location /:
+ - return: '301 https://$host$request_uri'
+
+ arvados_workbench_ssl.conf:
+ enabled: true
+ overwrite: true
+ requires:
+ file: extra_custom_certs_file_copy_arvados-workbench.pem
+ config:
+ - server:
+ - server_name: workbench.__CLUSTER__.__DOMAIN__
+ - listen:
+ - __CONTROLLER_EXT_SSL_PORT__ http2 ssl
+ - index: index.html index.htm
+ - location /:
+ - proxy_pass: 'http://workbench_upstream'
+ - proxy_read_timeout: 300
+ - proxy_connect_timeout: 90
+ - proxy_redirect: 'off'
+ - proxy_set_header: X-Forwarded-Proto https
+ - proxy_set_header: 'Host $http_host'
+ - proxy_set_header: 'X-Real-IP $remote_addr'
+ - proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'
+ - include: snippets/ssl_hardening_default.conf
+ - ssl_certificate: /etc/nginx/ssl/arvados-workbench.pem
+ - ssl_certificate_key: /etc/nginx/ssl/arvados-workbench.key
+ - access_log: /var/log/nginx/workbench.__CLUSTER__.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/workbench.__CLUSTER__.__DOMAIN__.error.log
+
+ arvados_workbench_upstream.conf:
+ enabled: true
+ overwrite: true
+ config:
+ - server:
+ - listen: 'workbench.internal:9000'
+ - server_name: workbench
+ - root: /var/www/arvados-workbench/current/public
+ - index: index.html index.htm
+ - passenger_enabled: 'on'
+ # yamllint disable-line rule:line-length
+ - access_log: /var/log/nginx/workbench.__CLUSTER__.__DOMAIN__-upstream.access.log combined
+ - error_log: /var/log/nginx/workbench.__CLUSTER__.__DOMAIN__-upstream.error.log
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### POSTGRESQL
+postgres:
+  # CentOS 7's postgres package is too old, so we need to force using upstream's.
+  # This is not required on the Debian family, as those distributions already
+  # ship with PostgreSQL 11+.
+ {%- if salt['grains.get']('os_family') == 'RedHat' %}
+ use_upstream_repo: true
+ version: '12'
+
+ pkgs_deps:
+ - libicu
+ - libxslt
+ - systemd-sysv
+
+ pkgs_extra:
+ - postgresql12-contrib
+
+ {%- else %}
+ use_upstream_repo: false
+ pkgs_extra:
+ - postgresql-contrib
+ {%- endif %}
+ postgresconf: |-
+ listen_addresses = '*' # listen on all interfaces
+ #ssl = on
+ #ssl_cert_file = '/etc/ssl/certs/arvados-snakeoil-cert.pem'
+ #ssl_key_file = '/etc/ssl/private/arvados-snakeoil-cert.key'
+ acls:
+ - ['local', 'all', 'postgres', 'peer']
+ - ['local', 'all', 'all', 'peer']
+ - ['host', 'all', 'all', '127.0.0.1/32', 'md5']
+ - ['host', 'all', 'all', '::1/128', 'md5']
+ - ['host', '__CLUSTER___arvados', '__CLUSTER___arvados', '127.0.0.1/32']
+ users:
+ __CLUSTER___arvados:
+ ensure: present
+ password: __DATABASE_PASSWORD__
+
+ # tablespaces:
+ # arvados_tablespace:
+ # directory: /path/to/some/tbspace/arvados_tbsp
+ # owner: arvados
+
+ databases:
+ __CLUSTER___arvados:
+ owner: __CLUSTER___arvados
+ template: template0
+ lc_ctype: en_US.utf8
+ lc_collate: en_US.utf8
+ # tablespace: arvados_tablespace
+ schemas:
+ public:
+ owner: __CLUSTER___arvados
+ extensions:
+ pg_trgm:
+ if_not_exists: true
+ schema: public
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: Apache-2.0
+
+{%- set orig_cert_dir = salt['pillar.get']('extra_custom_certs_dir', '/srv/salt/certs') %}
+{%- set dest_cert_dir = '/etc/nginx/ssl' %}
+{%- set certs = salt['pillar.get']('extra_custom_certs', []) %}
+
+extra_custom_certs_file_directory_certs_dir:
+ file.directory:
+ - name: /etc/nginx/ssl
+ - require:
+ - pkg: nginx_install
+
+{%- for cert in certs %}
+ {%- set cert_file = 'arvados-' ~ cert ~ '.pem' %}
+ {#- set csr_file = 'arvados-' ~ cert ~ '.csr' #}
+ {%- set key_file = 'arvados-' ~ cert ~ '.key' %}
+ {% for c in [cert_file, key_file] %}
+extra_custom_certs_file_copy_{{ c }}:
+ file.copy:
+ - name: {{ dest_cert_dir }}/{{ c }}
+ - source: {{ orig_cert_dir }}/{{ c }}
+ - force: true
+ - user: root
+ - group: root
+ - unless: cmp {{ dest_cert_dir }}/{{ c }} {{ orig_cert_dir }}/{{ c }}
+ - require:
+ - file: extra_custom_certs_file_directory_certs_dir
+ {%- endfor %}
+{%- endfor %}
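The `unless: cmp …` guard above makes each copy idempotent: `cmp` exits 0 when both files are byte-identical, so the state is skipped on re-runs. The same check sketched in Python (assumed equivalent; Salt actually shells out to `cmp`):

```python
import filecmp
import os

def needs_copy(src, dst):
    # Copy when the destination is missing or differs byte-for-byte,
    # matching the semantics of the `unless: cmp dst src` guard.
    if not os.path.exists(dst):
        return True
    return not filecmp.cmp(src, dst, shallow=False)
```

On the first run the destination does not exist and the copy happens; afterwards it only happens again when the source certificate changes (e.g. on rotation).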
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+{%- set curr_tpldir = tpldir %}
+{%- set tpldir = 'arvados' %}
+{%- from "arvados/map.jinja" import arvados with context %}
+{%- set tpldir = curr_tpldir %}
+
+arvados_test_salt_states_examples_single_host_etc_hosts_host_present:
+ host.present:
+ - ip: 127.0.1.1
+ - names:
+ - {{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+      # FIXME! This only works for our testing.
+      # It won't work if the cluster name != host name
+ {%- for entry in [
+ 'api',
+ 'collections',
+ 'controller',
+ 'download',
+ 'keep',
+ 'keepweb',
+ 'keep0',
+ 'shell',
+ 'workbench',
+ 'workbench2',
+ 'ws',
+ ]
+ %}
+ - {{ entry }}
+ - {{ entry }}.internal
+ - {{ entry }}.{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+ {%- endfor %}
+ - require_in:
+ - file: nginx_config
+ - service: nginx_service
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: Apache-2.0
+
+# WARNING: This file is only used for testing purposes, and should not be used
+# in a production environment
+
+{%- set curr_tpldir = tpldir %}
+{%- set tpldir = 'arvados' %}
+{%- from "arvados/map.jinja" import arvados with context %}
+{%- set tpldir = curr_tpldir %}
+
+{%- set orig_cert_dir = salt['pillar.get']('extra_custom_certs_dir', '/srv/salt/certs') %}
+
+include:
+ - nginx.passenger
+ - nginx.config
+ - nginx.service
+
+# Debian uses different dirs for certs and keys, but since this is a snakeoil
+# example, we'll keep it simple here.
+{%- set arvados_ca_cert_file = '/etc/ssl/private/arvados-snakeoil-ca.pem' %}
+{%- set arvados_ca_key_file = '/etc/ssl/private/arvados-snakeoil-ca.key' %}
+
+{%- if grains.get('os_family') == 'Debian' %}
+ {%- set arvados_ca_cert_dest = '/usr/local/share/ca-certificates/arvados-snakeoil-ca.crt' %}
+ {%- set update_ca_cert = '/usr/sbin/update-ca-certificates' %}
+ {%- set openssl_conf = '/etc/ssl/openssl.cnf' %}
+
+extra_snakeoil_certs_ssl_cert_pkg_installed:
+ pkg.installed:
+ - name: ssl-cert
+ - require_in:
+ - sls: postgres
+
+{%- else %}
+ {%- set arvados_ca_cert_dest = '/etc/pki/ca-trust/source/anchors/arvados-snakeoil-ca.pem' %}
+ {%- set update_ca_cert = '/usr/bin/update-ca-trust' %}
+ {%- set openssl_conf = '/etc/pki/tls/openssl.cnf' %}
+
+{%- endif %}
+
+extra_snakeoil_certs_dependencies_pkg_installed:
+ pkg.installed:
+ - pkgs:
+ - openssl
+ - ca-certificates
+
+# Remove the RANDFILE parameter from openssl.cnf, as it makes openssl fail on
+# Ubuntu 18.04. Saving and restoring the RNG state is no longer necessary with
+# the openssl 1.1.1 random generator, see
+# https://github.com/openssl/openssl/issues/7754
+#
+extra_snakeoil_certs_file_comment_etc_openssl_conf:
+ file.comment:
+ - name: /etc/ssl/openssl.cnf
+ - regex: ^RANDFILE.*
+ - onlyif: grep -q ^RANDFILE /etc/ssl/openssl.cnf
+ - require_in:
+ - cmd: extra_snakeoil_certs_arvados_snakeoil_ca_cmd_run
+
+extra_snakeoil_certs_arvados_snakeoil_ca_cmd_run:
+ # Taken from https://github.com/arvados/arvados/blob/master/tools/arvbox/lib/arvbox/docker/service/certificate/run
+ cmd.run:
+ - name: |
+        # These dirs are not too CentOS-ish, but this is a helper script
+        # and they should be enough
+ mkdir -p /etc/ssl/certs/ /etc/ssl/private/ && \
+ openssl req \
+ -new \
+ -nodes \
+ -sha256 \
+ -x509 \
+ -subj "/C=CC/ST=Some State/O=Arvados Formula/OU=arvados-formula/CN=snakeoil-ca-{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}" \
+ -extensions x509_ext \
+ -config <(cat {{ openssl_conf }} \
+ <(printf "\n[x509_ext]\nbasicConstraints=critical,CA:true,pathlen:0\nkeyUsage=critical,keyCertSign,cRLSign")) \
+ -out {{ arvados_ca_cert_file }} \
+ -keyout {{ arvados_ca_key_file }} \
+ -days 365 && \
+ cp {{ arvados_ca_cert_file }} {{ arvados_ca_cert_dest }} && \
+ {{ update_ca_cert }}
+ - unless:
+ - test -f {{ arvados_ca_cert_file }}
+ - openssl verify -CAfile {{ arvados_ca_cert_file }} {{ arvados_ca_cert_file }}
+ - require:
+ - pkg: extra_snakeoil_certs_dependencies_pkg_installed
+
+# Create independent certs for each vhost
+{%- for vh in [
+ 'collections',
+ 'controller',
+ 'download',
+ 'keepproxy',
+ 'webshell',
+ 'workbench',
+ 'workbench2',
+ 'websocket',
+ ]
+%}
+# We create these in a temporary directory; they are then copied to their
+# destination by the `custom_certs` state file, as if they were custom certificates.
+{%- set arvados_cert_file = orig_cert_dir ~ '/arvados-' ~ vh ~ '.pem' %}
+{%- set arvados_csr_file = orig_cert_dir ~ '/arvados-' ~ vh ~ '.csr' %}
+{%- set arvados_key_file = orig_cert_dir ~ '/arvados-' ~ vh ~ '.key' %}
+
+extra_snakeoil_certs_arvados_snakeoil_cert_{{ vh }}_cmd_run:
+ cmd.run:
+ - name: |
+ cat > /tmp/{{ vh }}.openssl.cnf <<-CNF
+ [req]
+ default_bits = 2048
+ prompt = no
+ default_md = sha256
+ distinguished_name = dn
+ req_extensions = rext
+ [rext]
+ subjectAltName = @alt_names
+ [dn]
+ C = CC
+ ST = Some State
+ L = Some Location
+ O = Arvados Provision Example Single Host / Multiple Hostnames
+ OU = arvados-provision-example-single_host_multiple_hostnames
+ CN = {{ vh }}.{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+ emailAddress = admin@{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+ [alt_names]
+ {%- for entry in grains.get('ipv4') %}
+ IP.{{ loop.index }} = {{ entry }}
+ {%- endfor %}
+ DNS.1 = {{ vh }}.{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+ {%- if vh in [
+ 'controller',
+ 'keepproxy',
+ 'websocket'
+ ]
+ %}
+ {%- if vh == 'controller' %}
+ DNS.2 = {{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+ {%- elif vh == 'keepproxy' %}
+ DNS.2 = keep.{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+ {%- elif vh == 'websocket' %}
+ DNS.2 = ws.{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+ {%- endif %}
+ {%- endif %}
+ CNF
+
+ # The req
+ openssl req \
+ -config /tmp/{{ vh }}.openssl.cnf \
+ -new \
+ -nodes \
+ -sha256 \
+ -out {{ arvados_csr_file }} \
+ -keyout {{ arvados_key_file }} > /tmp/snakeoil_certs.{{ vh }}.output 2>&1 && \
+ # The cert
+ openssl x509 \
+ -req \
+ -days 365 \
+ -in {{ arvados_csr_file }} \
+ -out {{ arvados_cert_file }} \
+ -extfile /tmp/{{ vh }}.openssl.cnf \
+ -extensions rext \
+ -CA {{ arvados_ca_cert_file }} \
+ -CAkey {{ arvados_ca_key_file }} \
+ -set_serial $(date +%s) && \
+ chmod 0644 {{ arvados_cert_file }} && \
+ chmod 0640 {{ arvados_key_file }}
+ - unless:
+ - test -f {{ arvados_key_file }}
+ - openssl verify -CAfile {{ arvados_ca_cert_file }} {{ arvados_cert_file }}
+ - require:
+ - pkg: extra_snakeoil_certs_dependencies_pkg_installed
+ - cmd: extra_snakeoil_certs_arvados_snakeoil_ca_cmd_run
+ - require_in:
+ - file: extra_custom_certs_file_copy_arvados-{{ vh }}.pem
+ - file: extra_custom_certs_file_copy_arvados-{{ vh }}.key
+
+ {%- if grains.get('os_family') == 'Debian' %}
+extra_snakeoil_certs_certs_permissions_{{ vh }}_cmd_run:
+ file.managed:
+ - name: {{ arvados_key_file }}
+ - owner: root
+ - group: ssl-cert
+ - require:
+ - cmd: extra_snakeoil_certs_arvados_snakeoil_cert_{{ vh }}_cmd_run
+ - pkg: extra_snakeoil_certs_ssl_cert_pkg_installed
+ {%- endif %}
+{%- endfor %}
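The heredoc above emits one `IP.n` entry per address in the `ipv4` grain, the vhost FQDN as `DNS.1`, and an extra `DNS.2` alias for the controller, keepproxy and websocket vhosts. A Python sketch of the resulting `[alt_names]` section (hypothetical helper; the real work is done by openssl consuming the generated cnf):

```python
def alt_names(vh, cluster, domain, ipv4):
    # One IP.n entry per address, then the vhost FQDN, plus the extra
    # alias that controller/keepproxy/websocket get in the template.
    lines = ["IP.%d = %s" % (i, ip) for i, ip in enumerate(ipv4, 1)]
    lines.append("DNS.1 = %s.%s.%s" % (vh, cluster, domain))
    aliases = {
        "controller": "%s.%s" % (cluster, domain),
        "keepproxy": "keep.%s.%s" % (cluster, domain),
        "websocket": "ws.%s.%s" % (cluster, domain),
    }
    if vh in aliases:
        lines.append("DNS.2 = %s" % aliases[vh])
    return lines

print(alt_names("keepproxy", "fixme", "example.com", ["10.0.0.1"]))
# → ['IP.1 = 10.0.0.1', 'DNS.1 = keepproxy.fixme.example.com', 'DNS.2 = keep.fixme.example.com']
```

Without those aliases, requests to e.g. `keep.<cluster>.<domain>` would fail certificate validation even though the proxy itself is reachable.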
--- /dev/null
+Single host with a single hostname
+==================================
+
+These files let you set up Arvados on a single host, using a single hostname
+for all of its components' nginx virtual hosts.
+
+The hostname MUST be given in the `local.params` file. The script won't try
+to guess it because, depending on the network architecture where you're
+installing Arvados, things might not work as expected.
+
+The services will be available on the same hostname but on different ports,
+which can be given in the `local.params` file or will default to the following
+values:
+
+* CLUSTER.DOMAIN
+* collections
+* download
+* keep
+* keep0
+* webshell
+* workbench
+* workbench2
+* ws
database:
# max concurrent connections per arvados server daemon
# connection_pool_max: 32
- name: arvados
+ name: __CLUSTER___arvados
host: 127.0.0.1
- password: changeme_arvados
- user: arvados
+ password: "__DATABASE_PASSWORD__"
+ user: __CLUSTER___arvados
encoding: en_US.utf8
- client_encoding: UTF8
tls:
# certificate: ''
# key: ''
- # required to test with arvados-snakeoil certs
+ # When using arvados-snakeoil certs set insecure: true
insecure: true
### TOKENS
tokens:
- system_root: changemesystemroottoken
- management: changememanagementtoken
- rails_secret: changemerailssecrettoken
- anonymous_user: changemeanonymoususertoken
+ system_root: __SYSTEM_ROOT_TOKEN__
+ management: __MANAGEMENT_TOKEN__
+ anonymous_user: __ANONYMOUS_USER_TOKEN__
+ rails_secret: YDLxHf4GqqmLXYAMgndrAmFEdqgC0sBqX7TEjMN2rw9D6EVwgx
### KEYS
secrets:
- blob_signing_key: changemeblobsigningkey
- workbench_secret_key: changemeworkbenchsecretkey
- dispatcher_access_key: changemedispatcheraccesskey
- dispatcher_secret_key: changeme_dispatchersecretkey
- keep_access_key: changemekeepaccesskey
- keep_secret_key: changemekeepsecretkey
+ blob_signing_key: __BLOB_SIGNING_KEY__
+ workbench_secret_key: __WORKBENCH_SECRET_KEY__
Login:
Test:
# <cluster>-nyw5e-<volume>
__CLUSTER__-nyw5e-000000000000000:
AccessViaHosts:
- http://keep0.__CLUSTER__.__DOMAIN__:25107:
+ 'http://__HOSTNAME_INT__:25107':
ReadOnly: false
Replication: 2
Driver: Directory
Services:
Controller:
- ExternalURL: https://__CLUSTER__.__DOMAIN__:__HOST_SSL_PORT__
+ ExternalURL: 'https://__HOSTNAME_EXT__:__CONTROLLER_EXT_SSL_PORT__'
InternalURLs:
- http://controller.internal:8003: {}
- DispatchCloud:
- InternalURLs:
- http://__CLUSTER__.__DOMAIN__:9006: {}
- Keepbalance:
- InternalURLs:
- http://__CLUSTER__.__DOMAIN__:9005: {}
+ 'http://__HOSTNAME_INT__:8003': {}
Keepproxy:
- ExternalURL: https://keep.__CLUSTER__.__DOMAIN__:__HOST_SSL_PORT__
+ ExternalURL: 'https://__HOSTNAME_EXT__:__KEEP_EXT_SSL_PORT__'
InternalURLs:
- http://keep.internal:25100: {}
+ 'http://__HOSTNAME_INT__:25100': {}
Keepstore:
InternalURLs:
- http://keep0.__CLUSTER__.__DOMAIN__:25107: {}
+ 'http://__HOSTNAME_INT__:25107': {}
RailsAPI:
InternalURLs:
- http://api.internal:8004: {}
+ 'http://__HOSTNAME_INT__:8004': {}
WebDAV:
- ExternalURL: https://collections.__CLUSTER__.__DOMAIN__:__HOST_SSL_PORT__
+ ExternalURL: 'https://__HOSTNAME_EXT__:__KEEPWEB_EXT_SSL_PORT__'
InternalURLs:
- http://collections.internal:9002: {}
+ 'http://__HOSTNAME_INT__:9003': {}
WebDAVDownload:
- ExternalURL: https://download.__CLUSTER__.__DOMAIN__:__HOST_SSL_PORT__
+ ExternalURL: 'https://__HOSTNAME_EXT__:__KEEPWEB_EXT_SSL_PORT__'
WebShell:
- ExternalURL: https://webshell.__CLUSTER__.__DOMAIN__:__HOST_SSL_PORT__
+ ExternalURL: 'https://__HOSTNAME_EXT__:__WEBSHELL_EXT_SSL_PORT__'
Websocket:
- ExternalURL: wss://ws.__CLUSTER__.__DOMAIN__/websocket
+ ExternalURL: 'wss://__HOSTNAME_EXT__:__WEBSOCKET_EXT_SSL_PORT__/websocket'
InternalURLs:
- http://ws.internal:8005: {}
+ 'http://__HOSTNAME_INT__:8005': {}
Workbench1:
- ExternalURL: https://workbench.__CLUSTER__.__DOMAIN__:__HOST_SSL_PORT__
+ ExternalURL: 'https://__HOSTNAME_EXT__:__WORKBENCH1_EXT_SSL_PORT__'
Workbench2:
- ExternalURL: https://workbench2.__CLUSTER__.__DOMAIN__:__HOST_SSL_PORT__
+ ExternalURL: 'https://__HOSTNAME_EXT__:__WORKBENCH2_EXT_SSL_PORT__'
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+docker:
+ pkg:
+ docker:
+ use_upstream: package
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+locale:
+ present:
+ - "en_US.UTF-8 UTF-8"
+ default:
+    # Note: On Debian systems, don't write the second 'UTF-8' here or you will
+    # experience salt problems like: LookupError: unknown encoding: utf_8_utf_8
+    # Restart the minion after correcting this!
+ name: 'en_US.UTF-8'
+ requires: 'en_US.UTF-8 UTF-8'
overwrite: true
config:
- server:
- - listen: 'api.internal:8004'
+ - listen: '__HOSTNAME_INT__:8004'
- server_name: api
- root: /var/www/arvados-api/current/public
- index: index.html index.htm
default: 1
'127.0.0.0/8': 0
upstream controller_upstream:
- - server: 'controller.internal:8003 fail_timeout=10s'
+ - server: '__HOSTNAME_INT__:8003 fail_timeout=10s'
### SITES
servers:
overwrite: true
config:
- server:
- - server_name: __CLUSTER__.__DOMAIN__
+ - server_name: _
- listen:
- - 80 default
+ - 80 default_server
- location /.well-known:
- root: /var/www
- location /:
overwrite: true
config:
- server:
- - server_name: __CLUSTER__.__DOMAIN__
+ - server_name: __HOSTNAME_EXT__
- listen:
- - __HOST_SSL_PORT__ http2 ssl
+ - __CONTROLLER_EXT_SSL_PORT__ http2 ssl default_server
- index: index.html index.htm
- location /:
- proxy_pass: 'http://controller_upstream'
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### NGINX
+nginx:
+ ### SERVER
+ server:
+ config:
+ ### STREAMS
+ http:
+ upstream keepproxy_upstream:
+ - server: '__HOSTNAME_INT__:25100 fail_timeout=10s'
+
+ servers:
+ managed:
+ arvados_keepproxy_ssl:
+ enabled: true
+ overwrite: true
+ config:
+ - server:
+ - server_name: __HOSTNAME_EXT__
+ - listen:
+ - __KEEP_EXT_SSL_PORT__ http2 ssl
+ - index: index.html index.htm
+ - location /:
+ - proxy_pass: 'http://keepproxy_upstream'
+ - proxy_read_timeout: 90
+ - proxy_connect_timeout: 90
+ - proxy_redirect: 'off'
+ - proxy_set_header: X-Forwarded-Proto https
+ - proxy_set_header: 'Host $http_host'
+ - proxy_set_header: 'X-Real-IP $remote_addr'
+ - proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'
+ - proxy_buffering: 'off'
+ - client_body_buffer_size: 64M
+ - client_max_body_size: 64M
+ - proxy_http_version: '1.1'
+ - proxy_request_buffering: 'off'
+ - include: 'snippets/arvados-snakeoil.conf'
+ - access_log: /var/log/nginx/keepproxy.__CLUSTER__.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/keepproxy.__CLUSTER__.__DOMAIN__.error.log
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### NGINX
+nginx:
+ ### SERVER
+ server:
+ config:
+ ### STREAMS
+ http:
+ upstream collections_downloads_upstream:
+ - server: '__HOSTNAME_INT__:9003 fail_timeout=10s'
+
+ servers:
+ managed:
+ ### COLLECTIONS / DOWNLOAD
+ arvados_collections_download_ssl:
+ enabled: true
+ overwrite: true
+ config:
+ - server:
+ - server_name: __HOSTNAME_EXT__
+ - listen:
+ - __KEEPWEB_EXT_SSL_PORT__ http2 ssl
+ - index: index.html index.htm
+ - location /:
+ - proxy_pass: 'http://collections_downloads_upstream'
+ - proxy_read_timeout: 90
+ - proxy_connect_timeout: 90
+ - proxy_redirect: 'off'
+ - proxy_set_header: X-Forwarded-Proto https
+ - proxy_set_header: 'Host $http_host'
+ - proxy_set_header: 'X-Real-IP $remote_addr'
+ - proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'
+ - proxy_buffering: 'off'
+ - client_max_body_size: 0
+ - proxy_http_version: '1.1'
+ - proxy_request_buffering: 'off'
+ - include: 'snippets/arvados-snakeoil.conf'
+ - access_log: /var/log/nginx/keepweb.__CLUSTER__.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/keepweb.__CLUSTER__.__DOMAIN__.error.log
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+{%- set passenger_pkg = 'nginx-mod-http-passenger'
+                          if grains.osfinger in ('CentOS Linux-7',) else
+                          'libnginx-mod-http-passenger' %}
+{%- set passenger_mod = '/usr/lib64/nginx/modules/ngx_http_passenger_module.so'
+ if grains.osfinger in ('CentOS Linux-7',) else
+ '/usr/lib/nginx/modules/ngx_http_passenger_module.so' %}
+{%- set passenger_ruby = '/usr/local/rvm/rubies/ruby-2.7.2/bin/ruby'
+ if grains.osfinger in ('CentOS Linux-7', 'Ubuntu-18.04',) else
+ '/usr/bin/ruby' %}
+
+### NGINX
+nginx:
+ install_from_phusionpassenger: true
+ lookup:
+ passenger_package: {{ passenger_pkg }}
+ ### PASSENGER
+ passenger:
+ passenger_ruby: {{ passenger_ruby }}
+
+ ### SERVER
+ server:
+ config:
+      # This is required to get the Passenger module loaded.
+      # On Debian it could also be done with:
+      # include: 'modules-enabled/*.conf'
+ load_module: {{ passenger_mod }}
+
+ worker_processes: 4
+
+ ### SNIPPETS
+ snippets:
+ # Based on https://ssl-config.mozilla.org/#server=nginx&version=1.14.2&config=intermediate&openssl=1.1.1d&guideline=5.4
+ ssl_hardening_default.conf:
+ - ssl_session_timeout: 1d
+ - ssl_session_cache: 'shared:arvadosSSL:10m'
+ - ssl_session_tickets: 'off'
+
+ # intermediate configuration
+ - ssl_protocols: TLSv1.2 TLSv1.3
+ - ssl_ciphers: ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
+ - ssl_prefer_server_ciphers: 'off'
+
+ # HSTS (ngx_http_headers_module is required) (63072000 seconds)
+ - add_header: 'Strict-Transport-Security "max-age=63072000" always'
+
+ # OCSP stapling
+      # FIXME! Stapling does not work with self-signed certificates, so it is disabled for tests
+ # - ssl_stapling: 'on'
+ # - ssl_stapling_verify: 'on'
+
+ # verify chain of trust of OCSP response using Root CA and Intermediate certs
+ # - ssl_trusted_certificate /path/to/root_CA_cert_plus_intermediates
+
+ # curl https://ssl-config.mozilla.org/ffdhe2048.txt > /path/to/dhparam
+ # - ssl_dhparam: /path/to/dhparam
+
+ # replace with the IP address of your resolver
+ # - resolver: 127.0.0.1
+
+ arvados-snakeoil.conf:
+ - ssl_certificate: /etc/ssl/private/arvados-snakeoil-cert.pem
+ - ssl_certificate_key: /etc/ssl/private/arvados-snakeoil-cert.key
+
+ ### SITES
+ servers:
+ managed:
+ # Remove default webserver
+ default:
+ enabled: false
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### NGINX
+nginx:
+ ### SERVER
+ server:
+ config:
+
+ ### STREAMS
+ http:
+ upstream webshell_upstream:
+ - server: '__HOSTNAME_INT__:4200 fail_timeout=10s'
+
+ ### SITES
+ servers:
+ managed:
+ arvados_webshell_ssl:
+ enabled: true
+ overwrite: true
+ config:
+ - server:
+ - server_name: __HOSTNAME_EXT__
+ - listen:
+ - __WEBSHELL_EXT_SSL_PORT__ http2 ssl
+ - index: index.html index.htm
+ - location /__HOSTNAME_EXT__:
+ - proxy_pass: 'http://webshell_upstream'
+ - proxy_read_timeout: 90
+ - proxy_connect_timeout: 90
+ - proxy_set_header: 'Host $http_host'
+ - proxy_set_header: 'X-Real-IP $remote_addr'
+ - proxy_set_header: X-Forwarded-Proto https
+ - proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'
+ - proxy_ssl_session_reuse: 'off'
+
+ - "if ($request_method = 'OPTIONS')":
+ - add_header: "'Access-Control-Allow-Origin' '*'"
+ - add_header: "'Access-Control-Allow-Methods' 'GET, POST, OPTIONS'"
+ - add_header: "'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type'"
+ - add_header: "'Access-Control-Max-Age' 1728000"
+ - add_header: "'Content-Type' 'text/plain charset=UTF-8'"
+ - add_header: "'Content-Length' 0"
+ - return: 204
+
+ - "if ($request_method = 'POST')":
+ - add_header: "'Access-Control-Allow-Origin' '*'"
+ - add_header: "'Access-Control-Allow-Methods' 'GET, POST, OPTIONS'"
+ - add_header: "'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type'"
+
+ - "if ($request_method = 'GET')":
+ - add_header: "'Access-Control-Allow-Origin' '*'"
+ - add_header: "'Access-Control-Allow-Methods' 'GET, POST, OPTIONS'"
+ - add_header: "'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type'"
+
+ - include: 'snippets/arvados-snakeoil.conf'
+ - access_log: /var/log/nginx/webshell.__CLUSTER__.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/webshell.__CLUSTER__.__DOMAIN__.error.log
+
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### NGINX
+nginx:
+ ### SERVER
+ server:
+ config:
+ ### STREAMS
+ http:
+ upstream websocket_upstream:
+ - server: '__HOSTNAME_INT__:8005 fail_timeout=10s'
+
+ servers:
+ managed:
+ arvados_websocket_ssl:
+ enabled: true
+ overwrite: true
+ config:
+ - server:
+ - server_name: __HOSTNAME_EXT__
+ - listen:
+ - __WEBSOCKET_EXT_SSL_PORT__ http2 ssl
+ - index: index.html index.htm
+ - location /:
+ - proxy_pass: 'http://websocket_upstream'
+ - proxy_read_timeout: 600
+ - proxy_connect_timeout: 90
+ - proxy_redirect: 'off'
+ - proxy_set_header: 'Host $host'
+ - proxy_set_header: 'X-Real-IP $remote_addr'
+ - proxy_set_header: 'Upgrade $http_upgrade'
+ - proxy_set_header: 'Connection "upgrade"'
+ - proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'
+ - proxy_buffering: 'off'
+ - client_body_buffer_size: 64M
+ - client_max_body_size: 64M
+ - proxy_http_version: '1.1'
+ - proxy_request_buffering: 'off'
+ - include: 'snippets/arvados-snakeoil.conf'
+ - access_log: /var/log/nginx/ws.__CLUSTER__.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/ws.__CLUSTER__.__DOMAIN__.error.log
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### ARVADOS
+arvados:
+ config:
+ group: www-data
+
+### NGINX
+nginx:
+ ### SITES
+ servers:
+ managed:
+ arvados_workbench2_ssl:
+ enabled: true
+ overwrite: true
+ config:
+ - server:
+ - server_name: __HOSTNAME_EXT__
+ - listen:
+ - __WORKBENCH2_EXT_SSL_PORT__ http2 ssl
+ - index: index.html index.htm
+ - location /:
+ - root: /var/www/arvados-workbench2/workbench2
+ - try_files: '$uri $uri/ /index.html'
+ - 'if (-f $document_root/maintenance.html)':
+ - return: 503
+ - location /config.json:
+ - return: {{ "200 '" ~ '{"API_HOST":"__HOSTNAME_EXT__:__CONTROLLER_EXT_SSL_PORT__"}' ~ "'" }}
+ - include: 'snippets/arvados-snakeoil.conf'
+ - access_log: /var/log/nginx/workbench2.__CLUSTER__.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/workbench2.__CLUSTER__.__DOMAIN__.error.log
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### ARVADOS
+arvados:
+ config:
+ group: www-data
+
+### NGINX
+nginx:
+ ### SERVER
+ server:
+ config:
+
+ ### STREAMS
+ http:
+ upstream workbench_upstream:
+ - server: '__HOSTNAME_INT__:9000 fail_timeout=10s'
+
+ ### SITES
+ servers:
+ managed:
+ arvados_workbench_ssl:
+ enabled: true
+ overwrite: true
+ config:
+ - server:
+ - server_name: __HOSTNAME_EXT__
+ - listen:
+ - __WORKBENCH1_EXT_SSL_PORT__ http2 ssl
+ - index: index.html index.htm
+ - location /:
+ - proxy_pass: 'http://workbench_upstream'
+ - proxy_read_timeout: 300
+ - proxy_connect_timeout: 90
+ - proxy_redirect: 'off'
+ - proxy_set_header: X-Forwarded-Proto https
+ - proxy_set_header: 'Host $http_host'
+ - proxy_set_header: 'X-Real-IP $remote_addr'
+ - proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'
+ - include: 'snippets/arvados-snakeoil.conf'
+ - access_log: /var/log/nginx/workbench.__CLUSTER__.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/workbench.__CLUSTER__.__DOMAIN__.error.log
+
+ arvados_workbench_upstream:
+ enabled: true
+ overwrite: true
+ config:
+ - server:
+ - listen: '__HOSTNAME_INT__:9000'
+ - server_name: workbench
+ - root: /var/www/arvados-workbench/current/public
+ - index: index.html index.htm
+ - passenger_enabled: 'on'
+ # yamllint disable-line rule:line-length
+ - access_log: /var/log/nginx/workbench.__CLUSTER__.__DOMAIN__-upstream.access.log combined
+ - error_log: /var/log/nginx/workbench.__CLUSTER__.__DOMAIN__-upstream.error.log
- ['local', 'all', 'all', 'peer']
- ['host', 'all', 'all', '127.0.0.1/32', 'md5']
- ['host', 'all', 'all', '::1/128', 'md5']
- - ['host', 'arvados', 'arvados', '127.0.0.1/32']
+ - ['host', '__CLUSTER___arvados', '__CLUSTER___arvados', '127.0.0.0/8']
users:
- arvados:
+ __CLUSTER___arvados:
ensure: present
- password: changeme_arvados
+ password: __DATABASE_PASSWORD__
# tablespaces:
# arvados_tablespace:
# owner: arvados
databases:
- arvados:
- owner: arvados
+ __CLUSTER___arvados:
+ owner: __CLUSTER___arvados
template: template0
lc_ctype: en_US.utf8
lc_collate: en_US.utf8
# tablespace: arvados_tablespace
schemas:
public:
- owner: arvados
+ owner: __CLUSTER___arvados
extensions:
pg_trgm:
if_not_exists: true
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+{%- set curr_tpldir = tpldir %}
+{%- set tpldir = 'arvados' %}
+{%- from "arvados/map.jinja" import arvados with context %}
+{%- set tpldir = curr_tpldir %}
+
+arvados_test_salt_states_examples_single_host_etc_hosts_host_present:
+ host.present:
+ - ip: 127.0.1.1
+ - names:
+ - {{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+  # FIXME! This only works for our testing.
+  # It won't work if the cluster name != host name
+ {%- for entry in [
+ 'api',
+ 'collections',
+ 'controller',
+ 'download',
+ 'keep',
+ 'keepweb',
+ 'keep0',
+ 'shell',
+ 'workbench',
+ 'workbench2',
+ 'ws',
+ ]
+ %}
+ - {{ entry }}
+ - {{ entry }}.{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+ {%- endfor %}
+ - require_in:
+ - file: nginx_config
+ - service: nginx_service
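+# The Jinja loop above registers each service both as a bare alias and as a
+# fully-qualified name under the cluster domain. A minimal Python sketch of the
+# resulting names list (the cluster/domain values here are placeholders):
+#
+# ```python
+# # Sketch of the host aliases produced by the Jinja loop above.
+# # "xxxxx" and "example.com" stand in for the real cluster name and domain.
+# cluster, domain = "xxxxx", "example.com"
+# services = ["api", "collections", "controller", "download", "keep",
+#             "keepweb", "keep0", "shell", "workbench", "workbench2", "ws"]
+# names = [f"{cluster}.{domain}"]
+# for entry in services:
+#     names.append(entry)                          # short alias, e.g. "api"
+#     names.append(f"{entry}.{cluster}.{domain}")  # FQDN, e.g. "api.xxxxx.example.com"
+# print(len(names))  # 1 + 2 * 11 = 23 entries on the 127.0.1.1 line
+# ```

The comment block above is a sketch of the alias expansion; rendered as plain Python it is:

```python
# Sketch of the host aliases produced by the Jinja loop above.
# "xxxxx" and "example.com" stand in for the real cluster name and domain.
cluster, domain = "xxxxx", "example.com"
services = ["api", "collections", "controller", "download", "keep",
            "keepweb", "keep0", "shell", "workbench", "workbench2", "ws"]
names = [f"{cluster}.{domain}"]
for entry in services:
    names.append(entry)                          # short alias, e.g. "api"
    names.append(f"{entry}.{cluster}.{domain}")  # FQDN, e.g. "api.xxxxx.example.com"
print(len(names))  # 1 + 2 * 11 = 23 entries on the 127.0.1.1 line
```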
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: Apache-2.0
+
+{%- set curr_tpldir = tpldir %}
+{%- set tpldir = 'arvados' %}
+{%- from "arvados/map.jinja" import arvados with context %}
+{%- set tpldir = curr_tpldir %}
+
+include:
+ - nginx.passenger
+ - nginx.config
+ - nginx.service
+
+# Debian uses different dirs for certs and keys, but being a Snake Oil example,
+# we'll keep it simple here.
+{%- set arvados_ca_cert_file = '/etc/ssl/private/arvados-snakeoil-ca.pem' %}
+{%- set arvados_ca_key_file = '/etc/ssl/private/arvados-snakeoil-ca.key' %}
+{%- set arvados_cert_file = '/etc/ssl/private/arvados-snakeoil-cert.pem' %}
+{%- set arvados_csr_file = '/etc/ssl/private/arvados-snakeoil-cert.csr' %}
+{%- set arvados_key_file = '/etc/ssl/private/arvados-snakeoil-cert.key' %}
+
+{%- if grains.get('os_family') == 'Debian' %}
+ {%- set arvados_ca_cert_dest = '/usr/local/share/ca-certificates/arvados-snakeoil-ca.crt' %}
+ {%- set update_ca_cert = '/usr/sbin/update-ca-certificates' %}
+ {%- set openssl_conf = '/etc/ssl/openssl.cnf' %}
+{%- else %}
+ {%- set arvados_ca_cert_dest = '/etc/pki/ca-trust/source/anchors/arvados-snakeoil-ca.pem' %}
+ {%- set update_ca_cert = '/usr/bin/update-ca-trust' %}
+ {%- set openssl_conf = '/etc/pki/tls/openssl.cnf' %}
+{%- endif %}
+
+arvados_test_salt_states_examples_single_host_snakeoil_certs_dependencies_pkg_installed:
+ pkg.installed:
+ - pkgs:
+ - openssl
+ - ca-certificates
+
+arvados_test_salt_states_examples_single_host_snakeoil_certs_arvados_snake_oil_ca_cmd_run:
+ # Taken from https://github.com/arvados/arvados/blob/master/tools/arvbox/lib/arvbox/docker/service/certificate/run
+ cmd.run:
+ - name: |
+        # These dirs are not too CentOS-ish, but this is a helper script
+        # and they should be enough
+ mkdir -p /etc/ssl/certs/ /etc/ssl/private/ && \
+ openssl req \
+ -new \
+ -nodes \
+ -sha256 \
+ -x509 \
+ -subj "/C=CC/ST=Some State/O=Arvados Formula/OU=arvados-formula/CN=snakeoil-ca-{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}" \
+ -extensions x509_ext \
+ -config <(cat {{ openssl_conf }} \
+ <(printf "\n[x509_ext]\nbasicConstraints=critical,CA:true,pathlen:0\nkeyUsage=critical,keyCertSign,cRLSign")) \
+ -out {{ arvados_ca_cert_file }} \
+ -keyout {{ arvados_ca_key_file }} \
+ -days 365 && \
+ cp {{ arvados_ca_cert_file }} {{ arvados_ca_cert_dest }} && \
+ {{ update_ca_cert }}
+ - unless:
+ - test -f {{ arvados_ca_cert_file }}
+ - openssl verify -CAfile {{ arvados_ca_cert_file }} {{ arvados_ca_cert_file }}
+ - require:
+ - pkg: arvados_test_salt_states_examples_single_host_snakeoil_certs_dependencies_pkg_installed
+
+arvados_test_salt_states_examples_single_host_snakeoil_certs_arvados_snake_oil_cert_cmd_run:
+ cmd.run:
+ - name: |
+ cat > /tmp/openssl.cnf <<-CNF
+ [req]
+ default_bits = 2048
+ prompt = no
+ default_md = sha256
+ req_extensions = rext
+ distinguished_name = dn
+ [dn]
+ C = CC
+ ST = Some State
+ L = Some Location
+ O = Arvados Formula
+ OU = arvados-formula
+ CN = {{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+ emailAddress = admin@{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+ [rext]
+ subjectAltName = @alt_names
+ [alt_names]
+ {%- for entry in grains.get('ipv4') %}
+ IP.{{ loop.index }} = {{ entry }}
+ {%- endfor %}
+ {%- for entry in [
+ 'keep',
+ 'collections',
+ 'download',
+ 'keepweb',
+ 'ws',
+ 'workbench',
+ 'workbench2',
+ ]
+ %}
+ DNS.{{ loop.index }} = {{ entry }}
+ {%- endfor %}
+ DNS.8 = {{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+ DNS.9 = '__HOSTNAME_EXT__'
+ DNS.10 = '__HOSTNAME_INT__'
+ CNF
+
+ # The req
+ openssl req \
+ -config /tmp/openssl.cnf \
+ -new \
+ -nodes \
+ -sha256 \
+ -out {{ arvados_csr_file }} \
+ -keyout {{ arvados_key_file }} > /tmp/snake_oil_certs.output 2>&1 && \
+ # The cert
+ openssl x509 \
+ -req \
+ -days 365 \
+ -in {{ arvados_csr_file }} \
+ -out {{ arvados_cert_file }} \
+ -extfile /tmp/openssl.cnf \
+ -extensions rext \
+ -CA {{ arvados_ca_cert_file }} \
+ -CAkey {{ arvados_ca_key_file }} \
+ -set_serial $(date +%s) && \
+ chmod 0644 {{ arvados_cert_file }} && \
+ chmod 0640 {{ arvados_key_file }}
+ - unless:
+ - test -f {{ arvados_key_file }}
+ - openssl verify -CAfile {{ arvados_ca_cert_file }} {{ arvados_cert_file }}
+ - require:
+ - pkg: arvados_test_salt_states_examples_single_host_snakeoil_certs_dependencies_pkg_installed
+ - cmd: arvados_test_salt_states_examples_single_host_snakeoil_certs_arvados_snake_oil_ca_cmd_run
+      # We need this before we can add the nginx snippet
+ - require_in:
+ - file: nginx_snippet_arvados-snakeoil.conf
+
+{%- if grains.get('os_family') == 'Debian' %}
+arvados_test_salt_states_examples_single_host_snakeoil_certs_ssl_cert_pkg_installed:
+ pkg.installed:
+ - name: ssl-cert
+ - require_in:
+ - sls: postgres
+
+arvados_test_salt_states_examples_single_host_snakeoil_certs_certs_permissions_cmd_run:
+ file.managed:
+ - name: {{ arvados_key_file }}
+ - owner: root
+ - group: ssl-cert
+ - require:
+ - cmd: arvados_test_salt_states_examples_single_host_snakeoil_certs_arvados_snake_oil_cert_cmd_run
+ - pkg: arvados_test_salt_states_examples_single_host_snakeoil_certs_ssl_cert_pkg_installed
+ - require_in:
+ - file: nginx_snippet_arvados-snakeoil.conf
+{%- endif %}
--- /dev/null
+##########################################################
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: CC-BY-SA-3.0
+
+# These are the basic parameters to configure the installation
+
+# The FIVE ALPHANUMERIC CHARACTERS name you want to give your cluster
+CLUSTER="cluster_fixme_or_this_wont_work"
+
+# The domain name you want to give to your cluster's hosts
+DOMAIN="domain_fixme_or_this_wont_work"
+
+# Host SSL port where you want to point your browser to access Arvados
+# Defaults to 443 for regular runs, and to 8443 when called in Vagrant.
+# You can point it to another port if desired
+# In Vagrant, make sure it matches what you set in the Vagrantfile (8443)
+CONTROLLER_EXT_SSL_PORT=443
+KEEP_EXT_SSL_PORT=443
+# Both for collections and downloads
+KEEPWEB_EXT_SSL_PORT=443
+WEBSHELL_EXT_SSL_PORT=443
+WEBSOCKET_EXT_SSL_PORT=443
+WORKBENCH1_EXT_SSL_PORT=443
+WORKBENCH2_EXT_SSL_PORT=443
+
+# Internal IPs for the configuration
+CLUSTER_INT_CIDR=10.0.0.0/16
+
+# Note the IPs in this example are shared between roles, as suggested in
+# https://doc.arvados.org/main/install/salt-multi-host.html
+CONTROLLER_INT_IP=10.0.0.1
+WEBSOCKET_INT_IP=10.0.0.1
+KEEP_INT_IP=10.0.0.2
+# Both for collections and downloads
+KEEPWEB_INT_IP=10.0.0.2
+KEEPSTORE0_INT_IP=10.0.0.3
+KEEPSTORE1_INT_IP=10.0.0.4
+WORKBENCH1_INT_IP=10.0.0.5
+WORKBENCH2_INT_IP=10.0.0.5
+WEBSHELL_INT_IP=10.0.0.5
+DATABASE_INT_IP=10.0.0.6
+SHELL_INT_IP=10.0.0.7
+
+INITIAL_USER="admin"
+
+# If not specified, the initial user email will be composed as
+# INITIAL_USER@CLUSTER.DOMAIN
+INITIAL_USER_EMAIL="admin@cluster_fixme_or_this_wont_work.domain_fixme_or_this_wont_work"
+INITIAL_USER_PASSWORD="password"
+
+# YOU SHOULD CHANGE THESE TO SOME RANDOM STRINGS
+BLOB_SIGNING_KEY=blobsigningkeymushaveatleast32characters
+MANAGEMENT_TOKEN=managementtokenmushaveatleast32characters
+SYSTEM_ROOT_TOKEN=systemroottokenmushaveatleast32characters
+ANONYMOUS_USER_TOKEN=anonymoususertokenmushaveatleast32characters
+WORKBENCH_SECRET_KEY=workbenchsecretkeymushaveatleast32characters
+DATABASE_PASSWORD=please_set_this_to_some_secure_value
+
+# SSL CERTIFICATES
+# Arvados REQUIRES valid SSL to work correctly. Otherwise, some components will fail
+# to communicate and can silently drop traffic. You can use the Letsencrypt
+# salt formula (https://github.com/saltstack-formulas/letsencrypt-formula) to
+# automatically obtain and install SSL certificates for your instances, or set this
+# variable to "no", provide and upload your own certificates to the instances, and
+# modify the 'nginx_*' salt pillars accordingly (see CUSTOM_CERTS_DIR below)
+USE_LETSENCRYPT="yes"
+USE_LETSENCRYPT_IAM_USER="yes"
+# For collections, we need to obtain a wildcard certificate for
+# '*.collections.<cluster>.<domain>'. This is only possible through a DNS-01 challenge.
+# For that reason, you'll need to provide AWS credentials with permissions to manage
+# RRs in the route53 zone for the cluster.
+# WARNING!: If AWS credentials files already exist in the hosts, they won't be replaced.
+LE_AWS_REGION="us-east-1"
+LE_AWS_ACCESS_KEY_ID="AKIABCDEFGHIJKLMNOPQ"
+LE_AWS_SECRET_ACCESS_KEY="thisistherandomstringthatisyoursecretkey"
+
+# If you are going to provide your own certificates for Arvados, the provision script can
+# help you deploy them. In order to do that, you need to set `USE_LETSENCRYPT=no` above,
+# and copy the required certificates under the directory specified in the next line.
+# The certs will be copied from this directory by the provision script.
+CUSTOM_CERTS_DIR="./certs"
+# The script expects cert/key files with these basenames (matching the role except for
+# keepweb, which is split into download/collections):
+# "controller"
+# "websocket"
+# "workbench"
+# "workbench2"
+# "webshell"
+# "download" # Part of keepweb
+# "collections" # Part of keepweb
+# "keep" # Keepproxy
+# E.g., for 'keep', the script will look for
+# ${CUSTOM_CERTS_DIR}/keep.crt
+# ${CUSTOM_CERTS_DIR}/keep.key
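The naming convention above can be sketched as a small pre-flight check: for every role, a `<role>.crt`/`<role>.key` pair must exist under `CUSTOM_CERTS_DIR`. The directory and files below are created only for illustration; the real script copies files you provide:

```shell
# Sketch of the <role>.crt / <role>.key filename convention described above.
CUSTOM_CERTS_DIR=$(mktemp -d)   # stand-in for ./certs
roles="controller websocket workbench workbench2 webshell download collections keep"
# Create dummy files just so the check below has something to find
for role in $roles; do
  touch "${CUSTOM_CERTS_DIR}/${role}.crt" "${CUSTOM_CERTS_DIR}/${role}.key"
done
# Verify every role has both its cert and its key
missing=0
for role in $roles; do
  for ext in crt key; do
    [ -f "${CUSTOM_CERTS_DIR}/${role}.${ext}" ] || { echo "missing ${role}.${ext}" >&2; missing=1; }
  done
done
[ "$missing" -eq 0 ] && echo "all certificate files present"
```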
+
+# The directory to check for the config files (pillars, states) you want to use.
+# There are a few examples under 'config_examples'.
+# CONFIG_DIR="local_config_dir"
+# Extra states to apply. If you use your own subdir, change this value accordingly
+# EXTRA_STATES_DIR="${CONFIG_DIR}/states"
+
+# These are ARVADOS-related settings.
+# Which release of Arvados repo you want to use
+RELEASE="production"
+# Which version of Arvados you want to install. Defaults to latest stable
+# VERSION="2.1.2-1"
+
+# This is an arvados-formula setting.
+# If branch is set, the script will switch to it before running salt
+# Usually not needed, only used for testing
+# BRANCH="main"
+
+##########################################################
+# Usually there's no need to modify things below this line
+
+# Formulas versions
+# ARVADOS_TAG="2.2.0"
+# POSTGRES_TAG="v0.41.6"
+# NGINX_TAG="temp-fix-missing-statements-in-pillar"
+# DOCKER_TAG="v2.0.7"
+# LOCALE_TAG="v0.3.4"
+# LETSENCRYPT_TAG="v2.1.0"
--- /dev/null
+##########################################################
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: CC-BY-SA-3.0
+
+# These are the basic parameters to configure the installation
+
+# The FIVE ALPHANUMERIC CHARACTERS name you want to give your cluster
+CLUSTER="cluster_fixme_or_this_wont_work"
+
+# The domain name you want to give to your cluster's hosts
+DOMAIN="domain_fixme_or_this_wont_work"
+
+# Host SSL port where you want to point your browser to access Arvados
+# Defaults to 443 for regular runs, and to 8443 when called in Vagrant.
+# You can point it to another port if desired
+# In Vagrant, make sure it matches what you set in the Vagrantfile (8443)
+CONTROLLER_EXT_SSL_PORT=443
+KEEP_EXT_SSL_PORT=25101
+# Both for collections and downloads
+KEEPWEB_EXT_SSL_PORT=9002
+WEBSHELL_EXT_SSL_PORT=4202
+WEBSOCKET_EXT_SSL_PORT=8002
+WORKBENCH1_EXT_SSL_PORT=443
+WORKBENCH2_EXT_SSL_PORT=3001
+
+INITIAL_USER="admin"
+
+# If not specified, the initial user email will be composed as
+# INITIAL_USER@CLUSTER.DOMAIN
+INITIAL_USER_EMAIL="admin@cluster_fixme_or_this_wont_work.domain_fixme_or_this_wont_work"
+INITIAL_USER_PASSWORD="password"
+
+# YOU SHOULD CHANGE THESE TO SOME RANDOM STRINGS
+BLOB_SIGNING_KEY=blobsigningkeymushaveatleast32characters
+MANAGEMENT_TOKEN=managementtokenmushaveatleast32characters
+SYSTEM_ROOT_TOKEN=systemroottokenmushaveatleast32characters
+ANONYMOUS_USER_TOKEN=anonymoususertokenmushaveatleast32characters
+WORKBENCH_SECRET_KEY=workbenchsecretkeymushaveatleast32characters
+DATABASE_PASSWORD=please_set_this_to_some_secure_value
+
+# SSL CERTIFICATES
+# Arvados REQUIRES valid SSL to work correctly. Otherwise, some components will fail
+# to communicate and can silently drop traffic. You can use the Letsencrypt
+# salt formula (https://github.com/saltstack-formulas/letsencrypt-formula) to
+# automatically obtain and install SSL certificates for your instances, or set this
+# variable to "no", provide and upload your own certificates to the instances, and
+# modify the 'nginx_*' salt pillars accordingly (see CUSTOM_CERTS_DIR below)
+USE_LETSENCRYPT="no"
+
+# If you are going to provide your own certificates for Arvados, the provision script can
+# help you deploy them. In order to do that, you need to set `USE_LETSENCRYPT=no` above,
+# and copy the required certificates under the directory specified in the next line.
+# The certs will be copied from this directory by the provision script.
+CUSTOM_CERTS_DIR="./certs"
+# The script expects cert/key files with these basenames (matching the role except for
+# keepweb, which is split into download/collections):
+# "controller"
+# "websocket"
+# "workbench"
+# "workbench2"
+# "webshell"
+# "download" # Part of keepweb
+# "collections" # Part of keepweb
+# "keepproxy"
+# E.g., for 'keepproxy', the script will look for
+# ${CUSTOM_CERTS_DIR}/keepproxy.crt
+# ${CUSTOM_CERTS_DIR}/keepproxy.key
+
+# The directory to check for the config files (pillars, states) you want to use.
+# There are a few examples under 'config_examples'.
+# CONFIG_DIR="local_config_dir"
+# Extra states to apply. If you use your own subdir, change this value accordingly
+# EXTRA_STATES_DIR="${CONFIG_DIR}/states"
+
+# These are ARVADOS-related settings.
+# Which release of Arvados repo you want to use
+RELEASE="production"
+# Which version of Arvados you want to install. Defaults to latest stable
+# VERSION="2.1.2-1"
+
+# This is an arvados-formula setting.
+# If branch is set, the script will switch to it before running salt
+# Usually not needed, only used for testing
+# BRANCH="main"
+
+##########################################################
+# Usually there's no need to modify things below this line
+
+# Formulas versions
+# ARVADOS_TAG="2.2.0"
+# POSTGRES_TAG="v0.41.6"
+# NGINX_TAG="temp-fix-missing-statements-in-pillar"
+# DOCKER_TAG="v2.0.7"
+# LOCALE_TAG="v0.3.4"
+# LETSENCRYPT_TAG="v2.1.0"
--- /dev/null
+##########################################################
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: CC-BY-SA-3.0
+
+# These are the basic parameters to configure the installation
+
+# The FIVE ALPHANUMERIC CHARACTERS name you want to give your cluster
+CLUSTER="cluster_fixme_or_this_wont_work"
+
+# The domain name you want to give to your cluster's hosts
+DOMAIN="domain_fixme_or_this_wont_work"
+
+# Set this value when installing a cluster on a single host with a single hostname
+# used to access all the instances. Not used in the other examples.
+# When using virtualization (e.g., AWS), this should be
+# the EXTERNAL/PUBLIC hostname for the instance.
+# If empty, ${CLUSTER}.${DOMAIN} will be used
+HOSTNAME_EXT=""
+# The internal hostname for the host. In the example files, only used in the
+# single_host/single_hostname example
+HOSTNAME_INT="127.0.1.1"
+# Host SSL port where you want to point your browser to access Arvados
+# Defaults to 443 for regular runs, and to 8443 when called in Vagrant.
+# You can point it to another port if desired
+# In Vagrant, make sure it matches what you set in the Vagrantfile (8443)
+CONTROLLER_EXT_SSL_PORT=9443
+KEEP_EXT_SSL_PORT=35101
+# Both for collections and downloads
+KEEPWEB_EXT_SSL_PORT=11002
+WEBSHELL_EXT_SSL_PORT=14202
+WEBSOCKET_EXT_SSL_PORT=18002
+WORKBENCH1_EXT_SSL_PORT=9444
+WORKBENCH2_EXT_SSL_PORT=9445
+
+INITIAL_USER="admin"
+
+# If not specified, the initial user email will be composed as
+# INITIAL_USER@CLUSTER.DOMAIN
+INITIAL_USER_EMAIL="admin@cluster_fixme_or_this_wont_work.domain_fixme_or_this_wont_work"
+INITIAL_USER_PASSWORD="password"
+
+# YOU SHOULD CHANGE THESE TO SOME RANDOM STRINGS
+BLOB_SIGNING_KEY=blobsigningkeymushaveatleast32characters
+MANAGEMENT_TOKEN=managementtokenmushaveatleast32characters
+SYSTEM_ROOT_TOKEN=systemroottokenmushaveatleast32characters
+ANONYMOUS_USER_TOKEN=anonymoususertokenmushaveatleast32characters
+WORKBENCH_SECRET_KEY=workbenchsecretkeymushaveatleast32characters
+DATABASE_PASSWORD=please_set_this_to_some_secure_value
+
+# SSL CERTIFICATES
+# Arvados REQUIRES valid SSL to work correctly. Otherwise, some components will fail
+# to communicate and can silently drop traffic. You can use the Letsencrypt
+# salt formula (https://github.com/saltstack-formulas/letsencrypt-formula) to
+# automatically obtain and install SSL certificates for your instances, or set this
+# variable to "no", provide and upload your own certificates to the instances, and
+# modify the 'nginx_*' salt pillars accordingly
+USE_LETSENCRYPT="no"
+
+# The directory to check for the config files (pillars, states) you want to use.
+# There are a few examples under 'config_examples'.
+# CONFIG_DIR="local_config_dir"
+# Extra states to apply. If you use your own subdir, change this value accordingly
+# EXTRA_STATES_DIR="${CONFIG_DIR}/states"
+
+# These are ARVADOS-related settings.
+# Which release of Arvados repo you want to use
+RELEASE="production"
+# Which version of Arvados you want to install. Defaults to latest stable
+# VERSION="2.1.2-1"
+
+# This is an arvados-formula setting.
+# If branch is set, the script will switch to it before running salt
+# Usually not needed, only used for testing
+# BRANCH="main"
+
+##########################################################
+# Usually there's no need to modify things below this line
+
+# Formulas versions
+# ARVADOS_TAG="2.2.0"
+# POSTGRES_TAG="v0.41.6"
+# NGINX_TAG="temp-fix-missing-statements-in-pillar"
+# DOCKER_TAG="v2.0.7"
+# LOCALE_TAG="v0.3.4"
+# LETSENCRYPT_TAG="v2.1.0"
#
# vagrant up
-##########################################################
-# This section are the basic parameters to configure the installation
-
-# The 5 letters name you want to give your cluster
-CLUSTER="arva2"
-DOMAIN="arv.local"
-
-INITIAL_USER="admin"
-
-# If not specified, the initial user email will be composed as
-# INITIAL_USER@CLUSTER.DOMAIN
-INITIAL_USER_EMAIL="${INITIAL_USER}@${CLUSTER}.${DOMAIN}"
-INITIAL_USER_PASSWORD="password"
-
-# The example config you want to use. Currently, only "single_host" is
-# available
-CONFIG_DIR="single_host"
-
-# Which release of Arvados repo you want to use
-RELEASE="production"
-# Which version of Arvados you want to install. Defaults to 'latest'
-# in the desired repo
-VERSION="latest"
-
-# Host SSL port where you want to point your browser to access Arvados
-# Defaults to 443 for regular runs, and to 8443 when called in Vagrant.
-# You can point it to another port if desired
-# In Vagrant, make sure it matches what you set in the Vagrantfile
-# HOST_SSL_PORT=443
-
-# This is a arvados-formula setting.
-# If branch is set, the script will switch to it before running salt
-# Usually not needed, only used for testing
-# BRANCH="master"
-
-##########################################################
-# Usually there's no need to modify things below this line
-
-# Formulas versions
-ARVADOS_TAG="v1.1.4"
-POSTGRES_TAG="v0.41.3"
-NGINX_TAG="v2.4.0"
-DOCKER_TAG="v1.0.0"
-LOCALE_TAG="v0.3.4"
-
set -o pipefail
# capture the directory that the script is running from
echo >&2 "Usage: ${0} [-h] [-h]"
echo >&2
echo >&2 "${0} options:"
- echo >&2 " -d, --debug Run salt installation in debug mode"
- echo >&2 " -p <N>, --ssl-port <N> SSL port to use for the web applications"
- echo >&2 " -t, --test Test installation running a CWL workflow"
- echo >&2 " -h, --help Display this help and exit"
- echo >&2 " -v, --vagrant Run in vagrant and use the /vagrant shared dir"
+ echo >&2 " -d, --debug Run salt installation in debug mode"
+ echo >&2 " -c <local.params>, --config <local.params> Path to the local.params config file"
+ echo >&2 " -t, --test Test installation running a CWL workflow"
+ echo >&2 " -r, --roles List of Arvados roles to apply to the host, comma separated"
+ echo >&2 " Possible values are:"
+ echo >&2 " api"
+ echo >&2 " controller"
+ echo >&2 " dispatcher"
+ echo >&2 " keepproxy"
+ echo >&2 " keepstore"
+ echo >&2 " keepweb"
+ echo >&2 " shell"
+ echo >&2 " webshell"
+ echo >&2 " websocket"
+ echo >&2 " workbench"
+ echo >&2 " workbench2"
+ echo >&2 " Defaults to applying them all"
+ echo >&2 " -h, --help Display this help and exit"
+ echo >&2 " --dump-config <dest_dir> Dumps the pillars and states to a directory"
+ echo >&2 " This parameter does not perform any installation at all. It's"
+  echo >&2 "                                              intended to give you a parsed set of configuration files so"
+  echo >&2 "                                              you can inspect them or use them in your Saltstack infrastructure."
+  echo >&2 "                                              It:"
+ echo >&2 " - parses the pillar and states templates,"
+ echo >&2 " - downloads the helper formulas with their desired versions,"
+ echo >&2 " - prepares the 'top.sls' files both for pillars and states"
+ echo >&2 " for the selected role/s"
+ echo >&2 " - writes the resulting files into <dest_dir>"
+ echo >&2 " -v, --vagrant Run in vagrant and use the /vagrant shared dir"
+ echo >&2 " --development Run in dev mode, using snakeoil certs"
echo >&2
}
arguments() {
# NOTE: This requires GNU getopt (part of the util-linux package on Debian-based distros).
- TEMP=$(getopt -o dhp:tv \
- --long debug,help,ssl-port:,test,vagrant \
+ if ! which getopt > /dev/null; then
+    echo >&2 "GNU getopt is required to run this script. Please install it and re-run it"
+ exit 1
+ fi
+
+ TEMP=$(getopt -o c:dhp:r:tv \
+ --long config:,debug,development,dump-config:,help,roles:,test,vagrant \
-n "${0}" -- "${@}")
- if [ ${?} != 0 ] ; then echo "GNU getopt missing? Use -h for help"; exit 1 ; fi
+  if [ ${?} != 0 ]; then
+    echo "Please check the parameters you entered and re-run the script"
+    exit 1
+  fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while [ ${#} -ge 1 ]; do
case ${1} in
+ -c | --config)
+ CONFIG_FILE=${2}
+ shift 2
+ ;;
-d | --debug)
LOG_LEVEL="debug"
shift
+ set -x
+ ;;
+ --dump-config)
+ if [[ ${2} = /* ]]; then
+ DUMP_SALT_CONFIG_DIR=${2}
+ else
+ DUMP_SALT_CONFIG_DIR=${PWD}/${2}
+ fi
+ ## states
+ S_DIR="${DUMP_SALT_CONFIG_DIR}/salt"
+ ## formulas
+ F_DIR="${DUMP_SALT_CONFIG_DIR}/formulas"
+ ## pillars
+ P_DIR="${DUMP_SALT_CONFIG_DIR}/pillars"
+ ## tests
+ T_DIR="${DUMP_SALT_CONFIG_DIR}/tests"
+ DUMP_CONFIG="yes"
+ shift 2
+ ;;
+ --development)
+ DEV_MODE="yes"
+ shift 1
+ ;;
+ -r | --roles)
+ for i in ${2//,/ }
+ do
+ # Verify the role exists
+ if [[ ! "database,api,controller,keepstore,websocket,keepweb,workbench2,webshell,keepproxy,shell,workbench,dispatcher" == *"$i"* ]]; then
+ echo "The role '${i}' is not a valid role"
+ usage
+ exit 1
+ fi
+ ROLES="${ROLES} ${i}"
+ done
+ shift 2
;;
-t | --test)
TEST="yes"
VAGRANT="yes"
shift
;;
- -p | --ssl-port)
- HOST_SSL_PORT=${2}
- shift 2
- ;;
--)
shift
break
done
}
+DEV_MODE="no"
+CONFIG_FILE="${SCRIPT_DIR}/local.params"
+CONFIG_DIR="local_config_dir"
+DUMP_CONFIG="no"
LOG_LEVEL="info"
-HOST_SSL_PORT=443
+CONTROLLER_EXT_SSL_PORT=443
TESTS_DIR="tests"
-arguments ${@}
+CLUSTER=""
+DOMAIN=""
+
+# Hostnames/IPs used for single-host deploys
+HOSTNAME_EXT=""
+HOSTNAME_INT="127.0.1.1"
+
+# Initial user setup
+INITIAL_USER=""
+INITIAL_USER_EMAIL=""
+INITIAL_USER_PASSWORD=""
+
+CONTROLLER_EXT_SSL_PORT=8000
+KEEP_EXT_SSL_PORT=25101
+# Both for collections and downloads
+KEEPWEB_EXT_SSL_PORT=9002
+WEBSHELL_EXT_SSL_PORT=4202
+WEBSOCKET_EXT_SSL_PORT=8002
+WORKBENCH1_EXT_SSL_PORT=443
+WORKBENCH2_EXT_SSL_PORT=3001
+
+USE_LETSENCRYPT="no"
+CUSTOM_CERTS_DIR="./certs"
+
+## These are ARVADOS-related parameters
+# For a stable release, change RELEASE to "production" and VERSION to the
+# package version (including the iteration, e.g. X.Y.Z-1) of the
+# release.
+# The "local.params.example.*" files already set "RELEASE=production"
+# to deploy production-ready packages
+RELEASE="development"
+VERSION="latest"
+
+# These are arvados-formula-related parameters
+# An arvados-formula tag. For a stable release, this should be a
+# branch name (e.g. X.Y-dev) or tag for the release.
+# ARVADOS_TAG="2.2.0"
+# BRANCH="main"
+
+# Other formula versions we depend on
+POSTGRES_TAG="v0.41.6"
+NGINX_TAG="temp-fix-missing-statements-in-pillar"
+DOCKER_TAG="v2.0.7"
+LOCALE_TAG="v0.3.4"
+LETSENCRYPT_TAG="v2.1.0"
# Salt's dir
+DUMP_SALT_CONFIG_DIR=""
## states
S_DIR="/srv/salt"
## formulas
F_DIR="/srv/formulas"
-##pillars
+## pillars
P_DIR="/srv/pillars"
+## tests
+T_DIR="/tmp/cluster_tests"
-apt-get update
-apt-get install -y curl git jq
+arguments ${@}
-dpkg -l |grep salt-minion
-if [ ${?} -eq 0 ]; then
- echo "Salt already installed"
+if [ -s ${CONFIG_FILE} ]; then
+ source ${CONFIG_FILE}
else
- curl -L https://bootstrap.saltstack.com -o /tmp/bootstrap_salt.sh
- sh /tmp/bootstrap_salt.sh -XUdfP -x python3
- /bin/systemctl disable salt-minion.service
+ echo >&2 "You don't seem to have a config file with initial values."
+ echo >&2 "Please create a '${CONFIG_FILE}' file as described in"
+ echo >&2 " * https://doc.arvados.org/install/salt-single-host.html#single_host, or"
+ echo >&2 " * https://doc.arvados.org/install/salt-multi-host.html#multi_host_multi_hostnames"
+ exit 1
+fi
+
+if [ ! -d ${CONFIG_DIR} ]; then
+ echo >&2 "You don't seem to have a config directory with pillars and states."
+ echo >&2 "Please create a '${CONFIG_DIR}' directory (as configured in your '${CONFIG_FILE}'). Please see"
+ echo >&2 " * https://doc.arvados.org/install/salt-single-host.html#single_host, or"
+ echo >&2 " * https://doc.arvados.org/install/salt-multi-host.html#multi_host_multi_hostnames"
+ exit 1
+fi
+
+if grep -q 'fixme_or_this_wont_work' ${CONFIG_FILE} ; then
+ echo >&2 "The config file ${CONFIG_FILE} has some parameters that need to be modified."
+  echo >&2 "Please fix them and re-run the provision script."
+ exit 1
fi
-# Set salt to masterless mode
-cat > /etc/salt/minion << EOFSM
+if ! grep -qE '^[[:alnum:]]{5}$' <<<${CLUSTER} ; then
+ echo >&2 "ERROR: <CLUSTER> must be exactly 5 alphanumeric characters long"
+ echo >&2 "Fix the cluster name in the 'local.params' file and re-run the provision script"
+ exit 1
+fi
+
+# Only used in single_host/single_hostname deploys
+if [ "x${HOSTNAME_EXT}" = "x" ] ; then
+ HOSTNAME_EXT="${CLUSTER}.${DOMAIN}"
+fi
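The two checks above can be exercised on their own: the cluster name must be exactly five alphanumeric characters, and an empty `HOSTNAME_EXT` falls back to `<cluster>.<domain>`. The values below are examples, not the script's defaults:

```shell
# Standalone sketch of the cluster-name validation and HOSTNAME_EXT default.
CLUSTER="arva2"          # example value; must be exactly 5 alphanumerics
DOMAIN="example.com"     # example value
HOSTNAME_EXT=""          # left empty to exercise the fallback
# Reject anything that is not exactly five alphanumeric characters
if ! grep -qE '^[[:alnum:]]{5}$' <<<"${CLUSTER}"; then
  echo "ERROR: <CLUSTER> must be exactly 5 alphanumeric characters long" >&2
  exit 1
fi
# Default the external hostname to <cluster>.<domain> when unset
if [ -z "${HOSTNAME_EXT}" ]; then
  HOSTNAME_EXT="${CLUSTER}.${DOMAIN}"
fi
echo "${HOSTNAME_EXT}"
```

A six-character name such as `arvad2` or a name with a dash would fail the `grep` and abort before any packages are installed.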
+
+if [ "${DUMP_CONFIG}" = "yes" ]; then
+ echo "The provision installer will just dump a config under ${DUMP_SALT_CONFIG_DIR} and exit"
+else
+ # Install a few dependency packages
+ # First, let's figure out the OS we're working on
+ OS_ID=$(grep ^ID= /etc/os-release |cut -f 2 -d= |cut -f 2 -d \")
+ echo "Detected distro: ${OS_ID}"
+
+ case ${OS_ID} in
+ "centos")
+ echo "WARNING! Disabling SELinux, see https://dev.arvados.org/issues/18019"
+      sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /etc/sysconfig/selinux
+ setenforce permissive
+ yum install -y curl git jq
+ ;;
+ "debian"|"ubuntu")
+ DEBIAN_FRONTEND=noninteractive apt update
+ DEBIAN_FRONTEND=noninteractive apt install -y curl git jq
+ ;;
+ esac
+
+  if which salt-call > /dev/null 2>&1; then
+ echo "Salt already installed"
+ else
+ curl -L https://bootstrap.saltstack.com -o /tmp/bootstrap_salt.sh
+ sh /tmp/bootstrap_salt.sh -XdfP -x python3
+ /bin/systemctl stop salt-minion.service
+ /bin/systemctl disable salt-minion.service
+ fi
+
+ # Set salt to masterless mode
+ cat > /etc/salt/minion << EOFSM
+failhard: "True"
+
file_client: local
file_roots:
base:
- ${S_DIR}
- ${F_DIR}/*
- - ${F_DIR}/*/test/salt/states/examples
pillar_roots:
base:
- ${P_DIR}
EOFSM
+fi
+
+mkdir -p ${S_DIR} ${F_DIR} ${P_DIR} ${T_DIR}
+
+# Get the formula and dependencies
+cd ${F_DIR} || exit 1
+echo "Cloning formulas"
+rm -rf ${F_DIR}/* || exit 1
+git clone --quiet https://github.com/saltstack-formulas/docker-formula.git ${F_DIR}/docker
+( cd docker && git checkout --quiet tags/"${DOCKER_TAG}" -b "${DOCKER_TAG}" )
+
+git clone --quiet https://github.com/saltstack-formulas/locale-formula.git ${F_DIR}/locale
+( cd locale && git checkout --quiet tags/"${LOCALE_TAG}" -b "${LOCALE_TAG}" )
+
+git clone --quiet https://github.com/netmanagers/nginx-formula.git ${F_DIR}/nginx
+( cd nginx && git checkout --quiet tags/"${NGINX_TAG}" -b "${NGINX_TAG}" )
+
+git clone --quiet https://github.com/saltstack-formulas/postgres-formula.git ${F_DIR}/postgres
+( cd postgres && git checkout --quiet tags/"${POSTGRES_TAG}" -b "${POSTGRES_TAG}" )
+
+git clone --quiet https://github.com/saltstack-formulas/letsencrypt-formula.git ${F_DIR}/letsencrypt
+( cd letsencrypt && git checkout --quiet tags/"${LETSENCRYPT_TAG}" -b "${LETSENCRYPT_TAG}" )
-mkdir -p ${S_DIR}
-mkdir -p ${F_DIR}
-mkdir -p ${P_DIR}
+git clone --quiet https://git.arvados.org/arvados-formula.git ${F_DIR}/arvados
+
+# If we want to try a specific branch of the formula
+if [ "x${BRANCH}" != "x" ]; then
+ ( cd ${F_DIR}/arvados && git checkout --quiet -t origin/"${BRANCH}" -b "${BRANCH}" )
+elif [ "x${ARVADOS_TAG}" != "x" ]; then
+  ( cd ${F_DIR}/arvados && git checkout --quiet tags/"${ARVADOS_TAG}" -b "${ARVADOS_TAG}" )
+fi
+
+if [ "x${VAGRANT}" = "xyes" ]; then
+ EXTRA_STATES_DIR="/home/vagrant/${CONFIG_DIR}/states"
+ SOURCE_PILLARS_DIR="/home/vagrant/${CONFIG_DIR}/pillars"
+ SOURCE_TESTS_DIR="/home/vagrant/${TESTS_DIR}"
+else
+ EXTRA_STATES_DIR="${SCRIPT_DIR}/${CONFIG_DIR}/states"
+ SOURCE_PILLARS_DIR="${SCRIPT_DIR}/${CONFIG_DIR}/pillars"
+ SOURCE_TESTS_DIR="${SCRIPT_DIR}/${TESTS_DIR}"
+fi
+
+SOURCE_STATES_DIR="${EXTRA_STATES_DIR}"
+
+echo "Writing pillars and states"
+
+# Replace variables (cluster, domain, etc) in the pillars, states and tests
+# to ease deployment for newcomers
+if [ ! -d "${SOURCE_PILLARS_DIR}" ]; then
+ echo "${SOURCE_PILLARS_DIR} does not exist or is not a directory. Exiting."
+ exit 1
+fi
+for f in "${SOURCE_PILLARS_DIR}"/*; do
+ sed "s#__ANONYMOUS_USER_TOKEN__#${ANONYMOUS_USER_TOKEN}#g;
+ s#__BLOB_SIGNING_KEY__#${BLOB_SIGNING_KEY}#g;
+ s#__CONTROLLER_EXT_SSL_PORT__#${CONTROLLER_EXT_SSL_PORT}#g;
+ s#__CLUSTER__#${CLUSTER}#g;
+ s#__DOMAIN__#${DOMAIN}#g;
+ s#__HOSTNAME_EXT__#${HOSTNAME_EXT}#g;
+ s#__HOSTNAME_INT__#${HOSTNAME_INT}#g;
+ s#__INITIAL_USER_EMAIL__#${INITIAL_USER_EMAIL}#g;
+ s#__INITIAL_USER_PASSWORD__#${INITIAL_USER_PASSWORD}#g;
+ s#__INITIAL_USER__#${INITIAL_USER}#g;
+ s#__LE_AWS_REGION__#${LE_AWS_REGION}#g;
+ s#__LE_AWS_SECRET_ACCESS_KEY__#${LE_AWS_SECRET_ACCESS_KEY}#g;
+ s#__LE_AWS_ACCESS_KEY_ID__#${LE_AWS_ACCESS_KEY_ID}#g;
+ s#__DATABASE_PASSWORD__#${DATABASE_PASSWORD}#g;
+ s#__KEEPWEB_EXT_SSL_PORT__#${KEEPWEB_EXT_SSL_PORT}#g;
+ s#__KEEP_EXT_SSL_PORT__#${KEEP_EXT_SSL_PORT}#g;
+ s#__MANAGEMENT_TOKEN__#${MANAGEMENT_TOKEN}#g;
+ s#__RELEASE__#${RELEASE}#g;
+ s#__SYSTEM_ROOT_TOKEN__#${SYSTEM_ROOT_TOKEN}#g;
+ s#__VERSION__#${VERSION}#g;
+ s#__WEBSHELL_EXT_SSL_PORT__#${WEBSHELL_EXT_SSL_PORT}#g;
+ s#__WEBSOCKET_EXT_SSL_PORT__#${WEBSOCKET_EXT_SSL_PORT}#g;
+ s#__WORKBENCH1_EXT_SSL_PORT__#${WORKBENCH1_EXT_SSL_PORT}#g;
+ s#__WORKBENCH2_EXT_SSL_PORT__#${WORKBENCH2_EXT_SSL_PORT}#g;
+ s#__CLUSTER_INT_CIDR__#${CLUSTER_INT_CIDR}#g;
+ s#__CONTROLLER_INT_IP__#${CONTROLLER_INT_IP}#g;
+ s#__WEBSOCKET_INT_IP__#${WEBSOCKET_INT_IP}#g;
+ s#__KEEP_INT_IP__#${KEEP_INT_IP}#g;
+ s#__KEEPSTORE0_INT_IP__#${KEEPSTORE0_INT_IP}#g;
+ s#__KEEPSTORE1_INT_IP__#${KEEPSTORE1_INT_IP}#g;
+ s#__KEEPWEB_INT_IP__#${KEEPWEB_INT_IP}#g;
+ s#__WEBSHELL_INT_IP__#${WEBSHELL_INT_IP}#g;
+ s#__SHELL_INT_IP__#${SHELL_INT_IP}#g;
+ s#__WORKBENCH1_INT_IP__#${WORKBENCH1_INT_IP}#g;
+ s#__WORKBENCH2_INT_IP__#${WORKBENCH2_INT_IP}#g;
+ s#__DATABASE_INT_IP__#${DATABASE_INT_IP}#g;
+ s#__WORKBENCH_SECRET_KEY__#${WORKBENCH_SECRET_KEY}#g" \
+ "${f}" > "${P_DIR}"/$(basename "${f}")
+done
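The loop above uses sed with `#` as the substitution delimiter so that values containing `/` (paths, URLs) cannot terminate the expression. A minimal sketch of that rendering step, with made-up values and paths:

```shell
# Sketch of the rendering step: '#' as the sed delimiter keeps '/' in
# substituted values (paths, URLs) from terminating the expression.
# Values and paths below are made up for illustration.
CLUSTER=xarv1
DOMAIN=example.com
printf 'Host: __CLUSTER__.__DOMAIN__\n' > /tmp/demo-pillar.tpl
sed "s#__CLUSTER__#${CLUSTER}#g;
     s#__DOMAIN__#${DOMAIN}#g" /tmp/demo-pillar.tpl > /tmp/demo-pillar.sls
cat /tmp/demo-pillar.sls   # -> Host: xarv1.example.com
```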
+
+if [ "x${TEST}" = "xyes" ] && [ ! -d "${SOURCE_TESTS_DIR}" ]; then
+ echo "You requested to run tests, but ${SOURCE_TESTS_DIR} does not exist or is not a directory. Exiting."
+ exit 1
+fi
+mkdir -p ${T_DIR}
+# Replace cluster and domain name in the test files
+for f in "${SOURCE_TESTS_DIR}"/*; do
+ sed "s#__CLUSTER__#${CLUSTER}#g;
+ s#__CONTROLLER_EXT_SSL_PORT__#${CONTROLLER_EXT_SSL_PORT}#g;
+ s#__DOMAIN__#${DOMAIN}#g;
+ s#__HOSTNAME_INT__#${HOSTNAME_INT}#g;
+ s#__INITIAL_USER_EMAIL__#${INITIAL_USER_EMAIL}#g;
+       s#__INITIAL_USER_PASSWORD__#${INITIAL_USER_PASSWORD}#g;
+ s#__INITIAL_USER__#${INITIAL_USER}#g;
+ s#__DATABASE_PASSWORD__#${DATABASE_PASSWORD}#g;
+ s#__SYSTEM_ROOT_TOKEN__#${SYSTEM_ROOT_TOKEN}#g" \
+ "${f}" > ${T_DIR}/$(basename "${f}")
+done
+chmod 755 ${T_DIR}/run-test.sh
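The `chmod` is needed because the sed redirection writes a fresh file and does not preserve the source's executable bit. A small illustration with a throwaway script (file names are hypothetical):

```shell
# Sketch: sed's output redirection creates a new file with default
# permissions, so the executable bit must be restored by hand.
# File names are illustrative.
printf '#!/bin/sh\necho __CLUSTER__\n' > /tmp/demo-run-test.sh.in
sed "s#__CLUSTER__#xarv1#g" /tmp/demo-run-test.sh.in > /tmp/demo-run-test.sh
chmod 755 /tmp/demo-run-test.sh
sh /tmp/demo-run-test.sh   # -> xarv1
```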
+
+# Replace helper state files that differ from the formula's examples
+if [ -d "${SOURCE_STATES_DIR}" ]; then
+ mkdir -p "${F_DIR}"/extra/extra
+
+  for f in "${SOURCE_STATES_DIR}"/*; do
+ sed "s#__ANONYMOUS_USER_TOKEN__#${ANONYMOUS_USER_TOKEN}#g;
+ s#__CLUSTER__#${CLUSTER}#g;
+ s#__BLOB_SIGNING_KEY__#${BLOB_SIGNING_KEY}#g;
+ s#__CONTROLLER_EXT_SSL_PORT__#${CONTROLLER_EXT_SSL_PORT}#g;
+ s#__DOMAIN__#${DOMAIN}#g;
+ s#__HOSTNAME_EXT__#${HOSTNAME_EXT}#g;
+ s#__HOSTNAME_INT__#${HOSTNAME_INT}#g;
+ s#__INITIAL_USER_EMAIL__#${INITIAL_USER_EMAIL}#g;
+ s#__INITIAL_USER_PASSWORD__#${INITIAL_USER_PASSWORD}#g;
+ s#__INITIAL_USER__#${INITIAL_USER}#g;
+ s#__DATABASE_PASSWORD__#${DATABASE_PASSWORD}#g;
+ s#__KEEPWEB_EXT_SSL_PORT__#${KEEPWEB_EXT_SSL_PORT}#g;
+ s#__KEEP_EXT_SSL_PORT__#${KEEP_EXT_SSL_PORT}#g;
+ s#__MANAGEMENT_TOKEN__#${MANAGEMENT_TOKEN}#g;
+ s#__RELEASE__#${RELEASE}#g;
+ s#__SYSTEM_ROOT_TOKEN__#${SYSTEM_ROOT_TOKEN}#g;
+ s#__VERSION__#${VERSION}#g;
+ s#__CLUSTER_INT_CIDR__#${CLUSTER_INT_CIDR}#g;
+ s#__CONTROLLER_INT_IP__#${CONTROLLER_INT_IP}#g;
+ s#__WEBSOCKET_INT_IP__#${WEBSOCKET_INT_IP}#g;
+ s#__KEEP_INT_IP__#${KEEP_INT_IP}#g;
+ s#__KEEPSTORE0_INT_IP__#${KEEPSTORE0_INT_IP}#g;
+ s#__KEEPSTORE1_INT_IP__#${KEEPSTORE1_INT_IP}#g;
+ s#__KEEPWEB_INT_IP__#${KEEPWEB_INT_IP}#g;
+ s#__WEBSHELL_INT_IP__#${WEBSHELL_INT_IP}#g;
+ s#__WORKBENCH1_INT_IP__#${WORKBENCH1_INT_IP}#g;
+ s#__WORKBENCH2_INT_IP__#${WORKBENCH2_INT_IP}#g;
+ s#__DATABASE_INT_IP__#${DATABASE_INT_IP}#g;
+ s#__WEBSHELL_EXT_SSL_PORT__#${WEBSHELL_EXT_SSL_PORT}#g;
+ s#__WEBSOCKET_EXT_SSL_PORT__#${WEBSOCKET_EXT_SSL_PORT}#g;
+ s#__WORKBENCH1_EXT_SSL_PORT__#${WORKBENCH1_EXT_SSL_PORT}#g;
+ s#__WORKBENCH2_EXT_SSL_PORT__#${WORKBENCH2_EXT_SSL_PORT}#g;
+ s#__WORKBENCH_SECRET_KEY__#${WORKBENCH_SECRET_KEY}#g" \
+ "${f}" > "${F_DIR}/extra/extra"/$(basename "${f}")
+ done
+fi
+
+# Now, we build the Salt states/pillars trees.
+# Because states and pillars need to be kept separate when specific
+# roles are requested, we iterate over both at the same time.
# States
cat > ${S_DIR}/top.sls << EOFTSLS
base:
'*':
- - single_host.host_entries
- - single_host.snakeoil_certs
- locale
- - nginx.passenger
- - postgres
- - docker
- - arvados
EOFTSLS
# Pillars
cat > ${P_DIR}/top.sls << EOFPSLS
base:
'*':
- - arvados
- - docker
- locale
- - nginx_api_configuration
- - nginx_controller_configuration
- - nginx_keepproxy_configuration
- - nginx_keepweb_configuration
- - nginx_passenger
- - nginx_websocket_configuration
- - nginx_webshell_configuration
- - nginx_workbench2_configuration
- - nginx_workbench_configuration
- - postgresql
+ - arvados
EOFPSLS
-# Get the formula and dependencies
-cd ${F_DIR} || exit 1
-git clone --branch "${ARVADOS_TAG}" https://github.com/saltstack-formulas/arvados-formula.git
-git clone --branch "${DOCKER_TAG}" https://github.com/saltstack-formulas/docker-formula.git
-git clone --branch "${LOCALE_TAG}" https://github.com/saltstack-formulas/locale-formula.git
-git clone --branch "${NGINX_TAG}" https://github.com/saltstack-formulas/nginx-formula.git
-git clone --branch "${POSTGRES_TAG}" https://github.com/saltstack-formulas/postgres-formula.git
-
-if [ "x${BRANCH}" != "x" ]; then
- cd ${F_DIR}/arvados-formula || exit 1
- git checkout -t origin/"${BRANCH}"
- cd -
+# States, extra states
+if [ -d "${F_DIR}"/extra/extra ]; then
+ if [ "$DEV_MODE" = "yes" ]; then
+ # In dev mode, we create some snake oil certs that we'll
+ # use as CUSTOM_CERTS, so we don't skip the states file
+ SKIP_SNAKE_OIL="dont_snakeoil_certs"
+ else
+ SKIP_SNAKE_OIL="snakeoil_certs"
+ fi
+ for f in $(ls "${F_DIR}"/extra/extra/*.sls | grep -v ${SKIP_SNAKE_OIL}); do
+ echo " - extra.$(basename ${f} | sed 's/.sls$//g')" >> ${S_DIR}/top.sls
+ done
+ # Use custom certs
+ if [ "x${USE_LETSENCRYPT}" != "xyes" ]; then
+ mkdir -p "${F_DIR}"/extra/extra/files
+ fi
fi
-if [ "x${VAGRANT}" = "xyes" ]; then
- SOURCE_PILLARS_DIR="/vagrant/${CONFIG_DIR}"
- TESTS_DIR="/vagrant/${TESTS_DIR}"
+# If we want specific roles for a node, just add the desired states
+# and their dependencies
+if [ -z "${ROLES}" ]; then
+ # States
+ echo " - nginx.passenger" >> ${S_DIR}/top.sls
+ # Currently, only available on config_examples/multi_host/aws
+ if [ "x${USE_LETSENCRYPT}" = "xyes" ]; then
+ if [ "x${USE_LETSENCRYPT_IAM_USER}" != "xyes" ]; then
+ grep -q "aws_credentials" ${S_DIR}/top.sls || echo " - extra.aws_credentials" >> ${S_DIR}/top.sls
+ fi
+ grep -q "letsencrypt" ${S_DIR}/top.sls || echo " - letsencrypt" >> ${S_DIR}/top.sls
+ else
+ # Use custom certs
+ # Copy certs to formula extra/files
+ # In dev mode, the files will be created and put in the destination directory by the
+ # snakeoil_certs.sls state file
+ mkdir -p /srv/salt/certs
+ cp -rv ${CUSTOM_CERTS_DIR}/* /srv/salt/certs/
+ # We add the custom_certs state
+ grep -q "custom_certs" ${S_DIR}/top.sls || echo " - extra.custom_certs" >> ${S_DIR}/top.sls
+ fi
+
+ echo " - postgres" >> ${S_DIR}/top.sls
+ echo " - docker.software" >> ${S_DIR}/top.sls
+ echo " - arvados" >> ${S_DIR}/top.sls
+
+ # Pillars
+ echo " - docker" >> ${P_DIR}/top.sls
+ echo " - nginx_api_configuration" >> ${P_DIR}/top.sls
+ echo " - nginx_controller_configuration" >> ${P_DIR}/top.sls
+ echo " - nginx_keepproxy_configuration" >> ${P_DIR}/top.sls
+ echo " - nginx_keepweb_configuration" >> ${P_DIR}/top.sls
+ echo " - nginx_passenger" >> ${P_DIR}/top.sls
+ echo " - nginx_websocket_configuration" >> ${P_DIR}/top.sls
+ echo " - nginx_webshell_configuration" >> ${P_DIR}/top.sls
+ echo " - nginx_workbench2_configuration" >> ${P_DIR}/top.sls
+ echo " - nginx_workbench_configuration" >> ${P_DIR}/top.sls
+ echo " - postgresql" >> ${P_DIR}/top.sls
+
+ # Currently, only available on config_examples/multi_host/aws
+ if [ "x${USE_LETSENCRYPT}" = "xyes" ]; then
+ if [ "x${USE_LETSENCRYPT_IAM_USER}" != "xyes" ]; then
+ grep -q "aws_credentials" ${P_DIR}/top.sls || echo " - aws_credentials" >> ${P_DIR}/top.sls
+ fi
+ grep -q "letsencrypt" ${P_DIR}/top.sls || echo " - letsencrypt" >> ${P_DIR}/top.sls
+
+    # As the pillars differ depending on whether we use LE or custom certs, we need a final edit on them
+ for c in controller websocket workbench workbench2 webshell download collections keepproxy; do
+ sed -i "s/__CERT_REQUIRES__/cmd: create-initial-cert-${c}.${CLUSTER}.${DOMAIN}*/g;
+ s#__CERT_PEM__#/etc/letsencrypt/live/${c}.${CLUSTER}.${DOMAIN}/fullchain.pem#g;
+ s#__CERT_KEY__#/etc/letsencrypt/live/${c}.${CLUSTER}.${DOMAIN}/privkey.pem#g" \
+ ${P_DIR}/nginx_${c}_configuration.sls
+ done
+ else
+ # Use custom certs (either dev mode or prod)
+ grep -q "extra_custom_certs" ${P_DIR}/top.sls || echo " - extra_custom_certs" >> ${P_DIR}/top.sls
+ # And add the certs in the custom_certs pillar
+ echo "extra_custom_certs_dir: /srv/salt/certs" > ${P_DIR}/extra_custom_certs.sls
+ echo "extra_custom_certs:" >> ${P_DIR}/extra_custom_certs.sls
+
+ for c in controller websocket workbench workbench2 webshell download collections keepproxy; do
+ grep -q ${c} ${P_DIR}/extra_custom_certs.sls || echo " - ${c}" >> ${P_DIR}/extra_custom_certs.sls
+
+      # As the pillars differ depending on whether we use LE or custom certs, we need a final edit on them
+ sed -i "s/__CERT_REQUIRES__/file: extra_custom_certs_file_copy_arvados-${c}.pem/g;
+ s#__CERT_PEM__#/etc/nginx/ssl/arvados-${c}.pem#g;
+ s#__CERT_KEY__#/etc/nginx/ssl/arvados-${c}.key#g" \
+ ${P_DIR}/nginx_${c}_configuration.sls
+ done
+ fi
else
- SOURCE_PILLARS_DIR="${SCRIPT_DIR}/${CONFIG_DIR}"
- TESTS_DIR="${SCRIPT_DIR}/${TESTS_DIR}"
+ # If we add individual roles, make sure we add the repo first
+ echo " - arvados.repo" >> ${S_DIR}/top.sls
+ for R in ${ROLES}; do
+ case "${R}" in
+ "database")
+ # States
+ echo " - postgres" >> ${S_DIR}/top.sls
+ # Pillars
+ echo ' - postgresql' >> ${P_DIR}/top.sls
+ ;;
+ "api")
+ # States
+ # FIXME: https://dev.arvados.org/issues/17352
+ grep -q "postgres.client" ${S_DIR}/top.sls || echo " - postgres.client" >> ${S_DIR}/top.sls
+ grep -q "nginx.passenger" ${S_DIR}/top.sls || echo " - nginx.passenger" >> ${S_DIR}/top.sls
+      ### If we don't install and run LE before arvados-api-server, it fails and breaks
+      ### everything after it. Since api and controller share this host, we add LE here.
+ # Currently, only available on config_examples/multi_host/aws
+ if [ "x${USE_LETSENCRYPT}" = "xyes" ]; then
+ if [ "x${USE_LETSENCRYPT_IAM_USER}" != "xyes" ]; then
+ grep -q "aws_credentials" ${S_DIR}/top.sls || echo " - aws_credentials" >> ${S_DIR}/top.sls
+ fi
+ grep -q "letsencrypt" ${S_DIR}/top.sls || echo " - letsencrypt" >> ${S_DIR}/top.sls
+ else
+ # Use custom certs
+ cp -v ${CUSTOM_CERTS_DIR}/controller.* "${F_DIR}/extra/extra/files/"
+ # We add the custom_certs state
+ grep -q "custom_certs" ${S_DIR}/top.sls || echo " - extra.custom_certs" >> ${S_DIR}/top.sls
+ fi
+ grep -q "arvados.${R}" ${S_DIR}/top.sls || echo " - arvados.${R}" >> ${S_DIR}/top.sls
+ # Pillars
+ grep -q "aws_credentials" ${P_DIR}/top.sls || echo " - aws_credentials" >> ${P_DIR}/top.sls
+ grep -q "docker" ${P_DIR}/top.sls || echo " - docker" >> ${P_DIR}/top.sls
+ grep -q "postgresql" ${P_DIR}/top.sls || echo " - postgresql" >> ${P_DIR}/top.sls
+ grep -q "nginx_passenger" ${P_DIR}/top.sls || echo " - nginx_passenger" >> ${P_DIR}/top.sls
+ grep -q "nginx_${R}_configuration" ${P_DIR}/top.sls || echo " - nginx_${R}_configuration" >> ${P_DIR}/top.sls
+ ;;
+ "controller" | "websocket" | "workbench" | "workbench2" | "webshell" | "keepweb" | "keepproxy")
+ # States
+ grep -q "nginx.passenger" ${S_DIR}/top.sls || echo " - nginx.passenger" >> ${S_DIR}/top.sls
+ # Currently, only available on config_examples/multi_host/aws
+ if [ "x${USE_LETSENCRYPT}" = "xyes" ]; then
+ if [ "x${USE_LETSENCRYPT_IAM_USER}" != "xyes" ]; then
+ grep -q "aws_credentials" ${S_DIR}/top.sls || echo " - aws_credentials" >> ${S_DIR}/top.sls
+ fi
+ grep -q "letsencrypt" ${S_DIR}/top.sls || echo " - letsencrypt" >> ${S_DIR}/top.sls
+ else
+ # Use custom certs, special case for keepweb
+ if [ ${R} = "keepweb" ]; then
+ cp -v ${CUSTOM_CERTS_DIR}/download.* "${F_DIR}/extra/extra/files/"
+ cp -v ${CUSTOM_CERTS_DIR}/collections.* "${F_DIR}/extra/extra/files/"
+ else
+ cp -v ${CUSTOM_CERTS_DIR}/${R}.* "${F_DIR}/extra/extra/files/"
+ fi
+ # We add the custom_certs state
+ grep -q "custom_certs" ${S_DIR}/top.sls || echo " - extra.custom_certs" >> ${S_DIR}/top.sls
+
+ fi
+      # The webshell role is just an nginx vhost, so it has no state
+ if [ "${R}" != "webshell" ]; then
+ grep -q "arvados.${R}" ${S_DIR}/top.sls || echo " - arvados.${R}" >> ${S_DIR}/top.sls
+ fi
+ # Pillars
+ grep -q "nginx_passenger" ${P_DIR}/top.sls || echo " - nginx_passenger" >> ${P_DIR}/top.sls
+ grep -q "nginx_${R}_configuration" ${P_DIR}/top.sls || echo " - nginx_${R}_configuration" >> ${P_DIR}/top.sls
+ # Special case for keepweb
+ if [ ${R} = "keepweb" ]; then
+ grep -q "nginx_download_configuration" ${P_DIR}/top.sls || echo " - nginx_download_configuration" >> ${P_DIR}/top.sls
+ grep -q "nginx_collections_configuration" ${P_DIR}/top.sls || echo " - nginx_collections_configuration" >> ${P_DIR}/top.sls
+ fi
+
+ # Currently, only available on config_examples/multi_host/aws
+ if [ "x${USE_LETSENCRYPT}" = "xyes" ]; then
+ if [ "x${USE_LETSENCRYPT_IAM_USER}" != "xyes" ]; then
+ grep -q "aws_credentials" ${P_DIR}/top.sls || echo " - aws_credentials" >> ${P_DIR}/top.sls
+ fi
+ grep -q "letsencrypt" ${P_DIR}/top.sls || echo " - letsencrypt" >> ${P_DIR}/top.sls
+ grep -q "letsencrypt_${R}_configuration" ${P_DIR}/top.sls || echo " - letsencrypt_${R}_configuration" >> ${P_DIR}/top.sls
+
+          # As the pillars differ depending on whether we use LE or custom certs, we need a final edit on them
+ # Special case for keepweb
+ if [ ${R} = "keepweb" ]; then
+ for kwsub in download collections; do
+ sed -i "s/__CERT_REQUIRES__/cmd: create-initial-cert-${kwsub}.${CLUSTER}.${DOMAIN}*/g;
+ s#__CERT_PEM__#/etc/letsencrypt/live/${kwsub}.${CLUSTER}.${DOMAIN}/fullchain.pem#g;
+ s#__CERT_KEY__#/etc/letsencrypt/live/${kwsub}.${CLUSTER}.${DOMAIN}/privkey.pem#g" \
+ ${P_DIR}/nginx_${kwsub}_configuration.sls
+ done
+ else
+ sed -i "s/__CERT_REQUIRES__/cmd: create-initial-cert-${R}.${CLUSTER}.${DOMAIN}*/g;
+ s#__CERT_PEM__#/etc/letsencrypt/live/${R}.${CLUSTER}.${DOMAIN}/fullchain.pem#g;
+ s#__CERT_KEY__#/etc/letsencrypt/live/${R}.${CLUSTER}.${DOMAIN}/privkey.pem#g" \
+ ${P_DIR}/nginx_${R}_configuration.sls
+ fi
+ else
+ grep -q ${R} ${P_DIR}/extra_custom_certs.sls || echo " - ${R}" >> ${P_DIR}/extra_custom_certs.sls
+
+          # As the pillars differ depending on whether we use LE or custom certs, we need a final edit on them
+ # Special case for keepweb
+ if [ ${R} = "keepweb" ]; then
+ for kwsub in download collections; do
+ sed -i "s/__CERT_REQUIRES__/file: extra_custom_certs_file_copy_arvados-${kwsub}.pem/g;
+ s#__CERT_PEM__#/etc/nginx/ssl/arvados-${kwsub}.pem#g;
+ s#__CERT_KEY__#/etc/nginx/ssl/arvados-${kwsub}.key#g" \
+ ${P_DIR}/nginx_${kwsub}_configuration.sls
+ done
+ else
+ sed -i "s/__CERT_REQUIRES__/file: extra_custom_certs_file_copy_arvados-${R}.pem/g;
+ s#__CERT_PEM__#/etc/nginx/ssl/arvados-${R}.pem#g;
+ s#__CERT_KEY__#/etc/nginx/ssl/arvados-${R}.key#g" \
+ ${P_DIR}/nginx_${R}_configuration.sls
+ fi
+ fi
+ ;;
+ "shell")
+ # States
+ grep -q "docker" ${S_DIR}/top.sls || echo " - docker.software" >> ${S_DIR}/top.sls
+ grep -q "arvados.${R}" ${S_DIR}/top.sls || echo " - arvados.${R}" >> ${S_DIR}/top.sls
+ # Pillars
+      grep -q "docker" ${P_DIR}/top.sls || echo " - docker" >> ${P_DIR}/top.sls
+ ;;
+ "dispatcher")
+ # States
+ grep -q "docker" ${S_DIR}/top.sls || echo " - docker.software" >> ${S_DIR}/top.sls
+ grep -q "arvados.${R}" ${S_DIR}/top.sls || echo " - arvados.${R}" >> ${S_DIR}/top.sls
+ # Pillars
+ # ATM, no specific pillar needed
+ ;;
+ "keepstore")
+ # States
+ grep -q "arvados.${R}" ${S_DIR}/top.sls || echo " - arvados.${R}" >> ${S_DIR}/top.sls
+ # Pillars
+ # ATM, no specific pillar needed
+ ;;
+ *)
+ echo "Unknown role ${R}"
+ exit 1
+ ;;
+ esac
+ done
fi
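Both branches above lean on the same idempotent-append idiom, `grep -q X top.sls || echo X >> top.sls`, so roles that share a dependency never list it twice. A self-contained sketch (paths and state names are illustrative):

```shell
# Sketch of the idempotent-append idiom used throughout the role loop:
# a state is added to top.sls only if it is not already listed, so roles
# that share a dependency never duplicate it. Paths are illustrative.
TOP=/tmp/demo-top.sls
: > "$TOP"
add_state() { grep -q "$1" "$TOP" || echo "  - $1" >> "$TOP"; }
add_state "nginx.passenger"
add_state "nginx.passenger"   # no-op: already present
grep -c "nginx.passenger" "$TOP"   # -> 1
```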
-# Replace cluster and domain name in the example pillars and test files
-for f in "${SOURCE_PILLARS_DIR}"/*; do
- sed "s/__CLUSTER__/${CLUSTER}/g;
- s/__DOMAIN__/${DOMAIN}/g;
- s/__RELEASE__/${RELEASE}/g;
- s/__HOST_SSL_PORT__/${HOST_SSL_PORT}/g;
- s/__GUEST_SSL_PORT__/${GUEST_SSL_PORT}/g;
- s/__INITIAL_USER__/${INITIAL_USER}/g;
- s/__INITIAL_USER_EMAIL__/${INITIAL_USER_EMAIL}/g;
- s/__INITIAL_USER_PASSWORD__/${INITIAL_USER_PASSWORD}/g;
- s/__VERSION__/${VERSION}/g" \
- "${f}" > "${P_DIR}"/$(basename "${f}")
-done
-
-mkdir -p /tmp/cluster_tests
-# Replace cluster and domain name in the example pillars and test files
-for f in "${TESTS_DIR}"/*; do
- sed "s/__CLUSTER__/${CLUSTER}/g;
- s/__DOMAIN__/${DOMAIN}/g;
- s/__HOST_SSL_PORT__/${HOST_SSL_PORT}/g;
- s/__INITIAL_USER__/${INITIAL_USER}/g;
- s/__INITIAL_USER_EMAIL__/${INITIAL_USER_EMAIL}/g;
- s/__INITIAL_USER_PASSWORD__/${INITIAL_USER_PASSWORD}/g" \
- ${f} > /tmp/cluster_tests/$(basename ${f})
-done
-chmod 755 /tmp/cluster_tests/run-test.sh
+if [ "${DUMP_CONFIG}" = "yes" ]; then
+ # We won't run the rest of the script because we're just dumping the config
+ exit 0
+fi
# FIXME! #16992 Temporary fix for psql call in arvados-api-server
if [ -e /root/.psqlrc ]; then
# END FIXME! #16992 Temporary fix for psql call in arvados-api-server
# Leave a copy of the Arvados CA so the user can copy it where it's required
-echo "Copying the Arvados CA certificate to the installer dir, so you can import it"
-# If running in a vagrant VM, also add default user to docker group
-if [ "x${VAGRANT}" = "xyes" ]; then
- cp /etc/ssl/certs/arvados-snakeoil-ca.pem /vagrant
-
- echo "Adding the vagrant user to the docker group"
- usermod -a -G docker vagrant
-else
- cp /etc/ssl/certs/arvados-snakeoil-ca.pem ${SCRIPT_DIR}
+if [ "$DEV_MODE" = "yes" ]; then
+ echo "Copying the Arvados CA certificate to the installer dir, so you can import it"
+ # If running in a vagrant VM, also add default user to docker group
+ if [ "x${VAGRANT}" = "xyes" ]; then
+ cp /etc/ssl/certs/arvados-snakeoil-ca.pem /vagrant/${CLUSTER}.${DOMAIN}-arvados-snakeoil-ca.pem
+
+ echo "Adding the vagrant user to the docker group"
+ usermod -a -G docker vagrant
+ else
+ cp /etc/ssl/certs/arvados-snakeoil-ca.pem ${SCRIPT_DIR}/${CLUSTER}.${DOMAIN}-arvados-snakeoil-ca.pem
+ fi
fi
# Test that the installation finished correctly
if [ "x${TEST}" = "xyes" ]; then
- cd /tmp/cluster_tests
- ./run-test.sh
+ cd ${T_DIR}
+ # If we use RVM, we need to run this with it, or most ruby commands will fail
+ RVM_EXEC=""
+ if [ -x /usr/local/rvm/bin/rvm-exec ]; then
+ RVM_EXEC="/usr/local/rvm/bin/rvm-exec"
+ fi
+ ${RVM_EXEC} ./run-test.sh
fi
+++ /dev/null
----
-# Copyright (C) The Arvados Authors. All rights reserved.
-#
-# SPDX-License-Identifier: AGPL-3.0
-
-### NGINX
-nginx:
- install_from_phusionpassenger: true
- lookup:
- passenger_package: libnginx-mod-http-passenger
- passenger_config_file: /etc/nginx/conf.d/mod-http-passenger.conf
-
- ### SERVER
- server:
- config:
- include: 'modules-enabled/*.conf'
- worker_processes: 4
-
- ### SITES
- servers:
- managed:
- # Remove default webserver
- default:
- enabled: false
#
# SPDX-License-Identifier: Apache-2.0
-export ARVADOS_API_TOKEN=changemesystemroottoken
-export ARVADOS_API_HOST=__CLUSTER__.__DOMAIN__:__HOST_SSL_PORT__
+export ARVADOS_API_TOKEN=__SYSTEM_ROOT_TOKEN__
+export ARVADOS_API_HOST=__CLUSTER__.__DOMAIN__:__CONTROLLER_EXT_SSL_PORT__
export ARVADOS_API_HOST_INSECURE=true
set -o pipefail
# First, validate that the CA is installed and that we can query it with no errors.
-if ! curl -s -o /dev/null https://workbench.${ARVADOS_API_HOST}/users/welcome?return_to=%2F; then
+if ! curl -s -o /dev/null https://${ARVADOS_API_HOST}/users/welcome?return_to=%2F; then
echo "The Arvados CA was not correctly installed. Although some components will work,"
echo "others won't. Please verify that the CA cert file was installed correctly and"
echo "retry running these tests."
arv user update --uuid "${user_uuid}" --user '{"is_active": true}'
echo "Getting the user API TOKEN"
-user_api_token=$(arv api_client_authorization list --filters "[[\"owner_uuid\", \"=\", \"${user_uuid}\"],[\"kind\", \"==\", \"arvados#apiClientAuthorization\"]]" --limit=1 |jq -r .items[].api_token)
+user_api_token=$(arv api_client_authorization list | jq -r ".items[] | select( .owner_uuid == \"${user_uuid}\" ).api_token" | head -1)
if [ "x${user_api_token}" = "x" ]; then
+ echo "No existing token found for user '__INITIAL_USER__' (user_uuid: '${user_uuid}'). Creating token"
user_api_token=$(arv api_client_authorization create --api-client-authorization "{\"owner_uuid\": \"${user_uuid}\"}" | jq -r .api_token)
fi
+echo "API TOKEN FOR user '__INITIAL_USER__': '${user_api_token}'."
+
# Change to the user's token and run the workflow
+echo "Switching to user '__INITIAL_USER__'"
export ARVADOS_API_TOKEN="${user_api_token}"
echo "Running test CWL workflow"
-cwl-runner hasher-workflow.cwl hasher-workflow-job.yml
+cwl-runner --debug hasher-workflow.cwl hasher-workflow-job.yml
tc := boot.NewTestCluster(
filepath.Join(cwd, "..", ".."),
id, cfg, "127.0.0."+id[3:], c.Log)
+ tc.Super.NoWorkbench1 = true
+ tc.Start()
s.testClusters[id] = tc
- s.testClusters[id].Start()
}
for _, tc := range s.testClusters {
ok := tc.WaitReady()
"os"
"strings"
+ "git.arvados.org/arvados.git/lib/cmd"
"git.arvados.org/arvados.git/sdk/go/arvados"
)
Path string
UserID string
Verbose bool
+ CaseInsensitive bool
ParentGroupUUID string
ParentGroupName string
SysUserUUID string
* 1st: Group name
* 2nd: User identifier
* 3rd (Optional): User permission on the group: can_read, can_write or can_manage. (Default: can_write)`
- fmt.Fprintf(os.Stderr, "%s\n\n", usageStr)
- fmt.Fprintf(os.Stderr, "Usage:\n%s [OPTIONS] <input-file.csv>\n\n", os.Args[0])
- fmt.Fprintf(os.Stderr, "Options:\n")
+ fmt.Fprintf(flags.Output(), "%s\n\n", usageStr)
+ fmt.Fprintf(flags.Output(), "Usage:\n%s [OPTIONS] <input-file.csv>\n\n", os.Args[0])
+ fmt.Fprintf(flags.Output(), "Options:\n")
flags.PrintDefaults()
}
"user-id",
"email",
"Attribute by which every user is identified. Valid values are: email and username.")
+ caseInsensitive := flags.Bool(
+ "case-insensitive",
+ false,
+ "Performs case insensitive matching on user IDs. Off by default.")
verbose := flags.Bool(
"verbose",
false,
"",
"Use given group UUID as a parent for the remote groups. Should be owned by the system user. If not specified, a group named '"+config.ParentGroupName+"' will be used (and created if nonexistent).")
- // Parse args; omit the first arg which is the command name
- flags.Parse(os.Args[1:])
-
- // Print version information if requested
- if *getVersion {
+ if ok, code := cmd.ParseFlags(flags, os.Args[0], os.Args[1:], "input-file.csv", os.Stderr); !ok {
+ os.Exit(code)
+ } else if *getVersion {
fmt.Printf("%s %s\n", os.Args[0], version)
os.Exit(0)
}
config.ParentGroupUUID = *parentGroupUUID
config.UserID = *userID
config.Verbose = *verbose
+ config.CaseInsensitive = *caseInsensitive
return nil
}
}
defer f.Close()
- log.Printf("%s %s started. Using %q as users id and parent group UUID %q", os.Args[0], version, cfg.UserID, cfg.ParentGroupUUID)
+ iCaseLog := ""
+ if cfg.UserID == "username" && cfg.CaseInsensitive {
+ iCaseLog = " - username matching requested to be case-insensitive"
+ }
+ log.Printf("%s %s started. Using %q as users id and parent group UUID %q%s", os.Args[0], version, cfg.UserID, cfg.ParentGroupUUID, iCaseLog)
// Get the complete user list to minimize API Server requests
allUsers := make(map[string]arvados.User)
if err != nil {
return err
}
+ if cfg.UserID == "username" && uID != "" && cfg.CaseInsensitive {
+ uID = strings.ToLower(uID)
+ if uuid, found := userIDToUUID[uID]; found {
+ return fmt.Errorf("case insensitive collision for username %q between %q and %q", uID, u.UUID, uuid)
+ }
+ }
userIDToUUID[uID] = u.UUID
if cfg.Verbose {
log.Printf("Seen user %q (%s)", u.Username, u.UUID)
membersSkipped++
continue
}
+ if cfg.UserID == "username" && cfg.CaseInsensitive {
+ groupMember = strings.ToLower(groupMember)
+ }
if !(groupPermission == "can_read" || groupPermission == "can_write" || groupPermission == "can_manage") {
log.Printf("Warning: 3rd field should be 'can_read', 'can_write' or 'can_manage'. Found: %q at line %d, skipping.", groupPermission, lineNo)
membersSkipped++
if page.Len() == 0 {
break
}
- for _, i := range page.GetItems() {
- allItems = append(allItems, i)
- }
+ allItems = append(allItems, page.GetItems()...)
params.Offset += page.Len()
}
return allItems, nil
if err != nil {
return remoteGroups, groupNameToUUID, err
}
+ if cfg.UserID == "username" && cfg.CaseInsensitive {
+ memberID = strings.ToLower(memberID)
+ }
membersSet[memberID] = u2gLinkSet[link.HeadUUID]
}
remoteGroups[group.UUID] = &GroupInfo{
userID, _ := GetUserID(user, cfg.UserID)
return fmt.Errorf("error getting links needed to remove user %q from group %q: %s", userID, group.Name, err)
}
- for _, link := range l {
- links = append(links, link)
- }
+ links = append(links, l...)
}
for _, item := range links {
link := item.(arvados.Link)
os.Args = []string{"cmd", "somefile.csv"}
config, err := GetConfig()
c.Assert(err, IsNil)
+ config.UserID = "email"
// Confirm that the parent group was created
gl = arvados.GroupList{}
ac.RequestAndDecode(&gl, "GET", "/arvados/v1/groups", nil, params)
}},
}
ac.RequestAndDecode(&ll, "GET", "/arvados/v1/links", nil, params)
- if ll.Len() != 1 {
- return false
- }
- return true
+ return ll.Len() == 1
}
// If named group exists, return its UUID
func (s *TestSuite) TestParseFlagsWithPositionalArgument(c *C) {
cfg := ConfigParams{}
- os.Args = []string{"cmd", "-verbose", "/tmp/somefile.csv"}
+ os.Args = []string{"cmd", "-verbose", "-case-insensitive", "/tmp/somefile.csv"}
err := ParseFlags(&cfg)
c.Assert(err, IsNil)
c.Check(cfg.Path, Equals, "/tmp/somefile.csv")
c.Check(cfg.Verbose, Equals, true)
+ c.Check(cfg.CaseInsensitive, Equals, true)
}
func (s *TestSuite) TestParseFlagsWithoutPositionalArgument(c *C) {
c.Assert(GroupMembershipExists(s.cfg.Client, activeUserUUID, groupUUID, "can_write"), Equals, true)
}
-// Users listed on the file that don't exist on the system are ignored
+// Entries with missing data are ignored.
func (s *TestSuite) TestIgnoreEmptyFields(c *C) {
activeUserEmail := s.users[arvadostest.ActiveUserUUID].Email
activeUserUUID := s.users[arvadostest.ActiveUserUUID].UUID
s.cfg.Path = tmpfile.Name()
s.cfg.UserID = "username"
err = doMain(s.cfg)
- s.cfg.UserID = "email"
c.Assert(err, IsNil)
// Confirm that memberships exist
groupUUID, err = RemoteGroupExists(s.cfg, "TestGroup1")
c.Assert(groupUUID, Not(Equals), "")
c.Assert(GroupMembershipExists(s.cfg.Client, activeUserUUID, groupUUID, "can_write"), Equals, true)
}
+
+func (s *TestSuite) TestUseUsernamesWithCaseInsensitiveMatching(c *C) {
+ activeUserName := strings.ToUpper(s.users[arvadostest.ActiveUserUUID].Username)
+ activeUserUUID := s.users[arvadostest.ActiveUserUUID].UUID
+ // Confirm that group doesn't exist
+ groupUUID, err := RemoteGroupExists(s.cfg, "TestGroup1")
+ c.Assert(err, IsNil)
+ c.Assert(groupUUID, Equals, "")
+ // Create file & run command
+ data := [][]string{
+ {"TestGroup1", activeUserName},
+ }
+ tmpfile, err := MakeTempCSVFile(data)
+ c.Assert(err, IsNil)
+ defer os.Remove(tmpfile.Name()) // clean up
+ s.cfg.Path = tmpfile.Name()
+ s.cfg.UserID = "username"
+ s.cfg.CaseInsensitive = true
+ err = doMain(s.cfg)
+ c.Assert(err, IsNil)
+ // Confirm that memberships exist
+ groupUUID, err = RemoteGroupExists(s.cfg, "TestGroup1")
+ c.Assert(err, IsNil)
+ c.Assert(groupUUID, Not(Equals), "")
+ c.Assert(GroupMembershipExists(s.cfg.Client, activeUserUUID, groupUUID, "can_write"), Equals, true)
+}
+
+func (s *TestSuite) TestUsernamesCaseInsensitiveCollision(c *C) {
+ activeUserName := s.users[arvadostest.ActiveUserUUID].Username
+ activeUserUUID := s.users[arvadostest.ActiveUserUUID].UUID
+
+ nu := arvados.User{}
+ nuUsername := strings.ToUpper(activeUserName)
+ err := s.cfg.Client.RequestAndDecode(&nu, "POST", "/arvados/v1/users", nil, map[string]interface{}{
+ "user": map[string]string{
+ "username": nuUsername,
+ },
+ })
+ c.Assert(err, IsNil)
+
+ // Manually remove non-fixture user because /database/reset fails otherwise
+ defer s.cfg.Client.RequestAndDecode(nil, "DELETE", "/arvados/v1/users/"+nu.UUID, nil, nil)
+
+ c.Assert(nu.Username, Equals, nuUsername)
+ c.Assert(nu.UUID, Not(Equals), activeUserUUID)
+ c.Assert(nu.Username, Not(Equals), activeUserName)
+
+ data := [][]string{
+ {"SomeGroup", activeUserName},
+ }
+ tmpfile, err := MakeTempCSVFile(data)
+ c.Assert(err, IsNil)
+ defer os.Remove(tmpfile.Name()) // clean up
+
+ s.cfg.Path = tmpfile.Name()
+ s.cfg.UserID = "username"
+ s.cfg.CaseInsensitive = true
+ err = doMain(s.cfg)
+ // Should get an error because of "ACTIVE" and "Active" usernames
+ c.Assert(err, NotNil)
+ c.Assert(err, ErrorMatches, ".*case insensitive collision.*")
+}
--- /dev/null
+.DS_Store
+.terraform
+examples
+*backup
+*disabled
+.terraform.lock.hcl
+terraform.tfstate*
--- /dev/null
+#!/usr/bin/env python3
+#
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: CC-BY-SA-3.0
+
+import argparse
+import logging
+import random
+import string
+import sys
+
+import arvados
+import arvados.collection
+
+logger = logging.getLogger('arvados.test_collection_create')
+logger.setLevel(logging.INFO)
+
+max_manifest_size = 127*1024*1024
+
+opts = argparse.ArgumentParser(add_help=False)
+opts.add_argument('--min-files', type=int, default=30000, help="""
+Minimum number of files in each directory. Default: 30000.
+""")
+opts.add_argument('--max-files', type=int, default=30000, help="""
+Maximum number of files in each directory. Default: 30000.
+""")
+opts.add_argument('--min-depth', type=int, default=0, help="""
+Minimum depth for the created tree structure. Default: 0.
+""")
+opts.add_argument('--max-depth', type=int, default=0, help="""
+Maximum depth for the created tree structure. Default: 0.
+""")
+opts.add_argument('--min-subdirs', type=int, default=1, help="""
+Minimum number of subdirectories created at every depth level. Default: 1.
+""")
+opts.add_argument('--max-subdirs', type=int, default=10, help="""
+Maximum number of subdirectories created at every depth level. Default: 10.
+""")
+opts.add_argument('--debug', action='store_true', default=False, help="""
+Sets logging level to DEBUG.
+""")
+
+arg_parser = argparse.ArgumentParser(
+ description='Create a collection with garbage data for testing purposes.',
+ parents=[opts])
+
+adjectives = ['abandoned','able','absolute','adorable','adventurous','academic',
+ 'acceptable','acclaimed','accomplished','accurate','aching','acidic','acrobatic',
+ 'active','actual','adept','admirable','admired','adolescent','adorable','adored',
+ 'advanced','afraid','affectionate','aged','aggravating','aggressive','agile',
+ 'agitated','agonizing','agreeable','ajar','alarmed','alarming','alert','alienated',
+ 'alive','all','altruistic','amazing','ambitious','ample','amused','amusing','anchored',
+ 'ancient','angelic','angry','anguished','animated','annual','another','antique',
+ 'anxious','any','apprehensive','appropriate','apt','arctic','arid','aromatic','artistic',
+ 'ashamed','assured','astonishing','athletic','attached','attentive','attractive',
+ 'austere','authentic','authorized','automatic','avaricious','average','aware','awesome',
+ 'awful','awkward','babyish','bad','back','baggy','bare','barren','basic','beautiful',
+ 'belated','beloved','beneficial','better','best','bewitched','big','big-hearted',
+ 'biodegradable','bite-sized','bitter','black','black-and-white','bland','blank',
+ 'blaring','bleak','blind','blissful','blond','blue','blushing','bogus','boiling',
+ 'bold','bony','boring','bossy','both','bouncy','bountiful','bowed','brave','breakable',
+ 'brief','bright','brilliant','brisk','broken','bronze','brown','bruised','bubbly',
+ 'bulky','bumpy','buoyant','burdensome','burly','bustling','busy','buttery','buzzing',
+ 'calculating','calm','candid','canine','capital','carefree','careful','careless',
+ 'caring','cautious','cavernous','celebrated','charming','cheap','cheerful','cheery',
+ 'chief','chilly','chubby','circular','classic','clean','clear','clear-cut','clever',
+ 'close','closed','cloudy','clueless','clumsy','cluttered','coarse','cold','colorful',
+ 'colorless','colossal','comfortable','common','compassionate','competent','complete',
+ 'complex','complicated','composed','concerned','concrete','confused','conscious',
+ 'considerate','constant','content','conventional','cooked','cool','cooperative',
+ 'coordinated','corny','corrupt','costly','courageous','courteous','crafty','crazy',
+ 'creamy','creative','creepy','criminal','crisp','critical','crooked','crowded',
+ 'cruel','crushing','cuddly','cultivated','cultured','cumbersome','curly','curvy',
+ 'cute','cylindrical','damaged','damp','dangerous','dapper','daring','darling','dark',
+ 'dazzling','dead','deadly','deafening','dear','dearest','decent','decimal','decisive',
+ 'deep','defenseless','defensive','defiant','deficient','definite','definitive','delayed',
+ 'delectable','delicious','delightful','delirious','demanding','dense','dental',
+ 'dependable','dependent','descriptive','deserted','detailed','determined','devoted',
+ 'different','difficult','digital','diligent','dim','dimpled','dimwitted','direct',
+ 'disastrous','discrete','disfigured','disgusting','disloyal','dismal','distant',
+ 'downright','dreary','dirty','disguised','dishonest','dismal','distant','distinct',
+ 'distorted','dizzy','dopey','doting','double','downright','drab','drafty','dramatic',
+ 'dreary','droopy','dry','dual','dull','dutiful','each','eager','earnest','early',
+ 'easy','easy-going','ecstatic','edible','educated','elaborate','elastic','elated',
+ 'elderly','electric','elegant','elementary','elliptical','embarrassed','embellished',
+ 'eminent','emotional','empty','enchanted','enchanting','energetic','enlightened',
+ 'enormous','enraged','entire','envious','equal','equatorial','essential','esteemed',
+ 'ethical','euphoric','even','evergreen','everlasting','every','evil','exalted',
+ 'excellent','exemplary','exhausted','excitable','excited','exciting','exotic',
+ 'expensive','experienced','expert','extraneous','extroverted','extra-large','extra-small',
+ 'fabulous','failing','faint','fair','faithful','fake','false','familiar','famous',
+ 'fancy','fantastic','far','faraway','far-flung','far-off','fast','fat','fatal',
+ 'fatherly','favorable','favorite','fearful','fearless','feisty','feline','female',
+ 'feminine','few','fickle','filthy','fine','finished','firm','first','firsthand',
+ 'fitting','fixed','flaky','flamboyant','flashy','flat','flawed','flawless','flickering',
+ 'flimsy','flippant','flowery','fluffy','fluid','flustered','focused','fond','foolhardy',
+ 'foolish','forceful','forked','formal','forsaken','forthright','fortunate','fragrant',
+ 'frail','frank','frayed','free','French','fresh','frequent','friendly','frightened',
+ 'frightening','frigid','frilly','frizzy','frivolous','front','frosty','frozen',
+ 'frugal','fruitful','full','fumbling','functional','funny','fussy','fuzzy','gargantuan',
+ 'gaseous','general','generous','gentle','genuine','giant','giddy','gigantic','gifted',
+ 'giving','glamorous','glaring','glass','gleaming','gleeful','glistening','glittering',
+ 'gloomy','glorious','glossy','glum','golden','good','good-natured','gorgeous',
+ 'graceful','gracious','grand','grandiose','granular','grateful','grave','gray',
+ 'great','greedy','green','gregarious','grim','grimy','gripping','grizzled','gross',
+ 'grotesque','grouchy','grounded','growing','growling','grown','grubby','gruesome',
+ 'grumpy','guilty','gullible','gummy','hairy','half','handmade','handsome','handy',
+ 'happy','happy-go-lucky','hard','hard-to-find','harmful','harmless','harmonious',
+ 'harsh','hasty','hateful','haunting','healthy','heartfelt','hearty','heavenly',
+ 'heavy','hefty','helpful','helpless','hidden','hideous','high','high-level','hilarious',
+ 'hoarse','hollow','homely','honest','honorable','honored','hopeful','horrible',
+ 'hospitable','hot','huge','humble','humiliating','humming','humongous','hungry',
+ 'hurtful','husky','icky','icy','ideal','idealistic','identical','idle','idiotic',
+ 'idolized','ignorant','ill','illegal','ill-fated','ill-informed','illiterate',
+ 'illustrious','imaginary','imaginative','immaculate','immaterial','immediate',
+ 'immense','impassioned','impeccable','impartial','imperfect','imperturbable','impish',
+ 'impolite','important','impossible','impractical','impressionable','impressive',
+ 'improbable','impure','inborn','incomparable','incompatible','incomplete','inconsequential',
+ 'incredible','indelible','inexperienced','indolent','infamous','infantile','infatuated',
+ 'inferior','infinite','informal','innocent','insecure','insidious','insignificant',
+ 'insistent','instructive','insubstantial','intelligent','intent','intentional',
+ 'interesting','internal','international','intrepid','ironclad','irresponsible',
+ 'irritating','itchy','jaded','jagged','jam-packed','jaunty','jealous','jittery',
+ 'joint','jolly','jovial','joyful','joyous','jubilant','judicious','juicy','jumbo',
+ 'junior','jumpy','juvenile','kaleidoscopic','keen','key','kind','kindhearted','kindly',
+ 'klutzy','knobby','knotty','knowledgeable','knowing','known','kooky','kosher','lame',
+ 'lanky','large','last','lasting','late','lavish','lawful','lazy','leading','lean',
+ 'leafy','left','legal','legitimate','light','lighthearted','likable','likely','limited',
+ 'limp','limping','linear','lined','liquid','little','live','lively','livid','loathsome',
+ 'lone','lonely','long','long-term','loose','lopsided','lost','loud','lovable','lovely',
+ 'loving','low','loyal','lucky','lumbering','luminous','lumpy','lustrous','luxurious',
+ 'mad','made-up','magnificent','majestic','major','male','mammoth','married','marvelous',
+ 'masculine','massive','mature','meager','mealy','mean','measly','meaty','medical',
+ 'mediocre','medium','meek','mellow','melodic','memorable','menacing','merry','messy',
+ 'metallic','mild','milky','mindless','miniature','minor','minty','miserable','miserly',
+ 'misguided','misty','mixed','modern','modest','moist','monstrous','monthly','monumental',
+ 'moral','mortified','motherly','motionless','mountainous','muddy','muffled','multicolored',
+ 'mundane','murky','mushy','musty','muted','mysterious','naive','narrow','nasty','natural',
+ 'naughty','nautical','near','neat','necessary','needy','negative','neglected','negligible',
+ 'neighboring','nervous','new','next','nice','nifty','nimble','nippy','nocturnal','noisy',
+ 'nonstop','normal','notable','noted','noteworthy','novel','noxious','numb','nutritious',
+ 'nutty','obedient','obese','oblong','oily','oblong','obvious','occasional','odd',
+ 'oddball','offbeat','offensive','official','old','old-fashioned','only','open','optimal',
+ 'optimistic','opulent','orange','orderly','organic','ornate','ornery','ordinary',
+ 'original','other','our','outlying','outgoing','outlandish','outrageous','outstanding',
+ 'oval','overcooked','overdue','overjoyed','overlooked','palatable','pale','paltry',
+ 'parallel','parched','partial','passionate','past','pastel','peaceful','peppery',
+ 'perfect','perfumed','periodic','perky','personal','pertinent','pesky','pessimistic',
+ 'petty','phony','physical','piercing','pink','pitiful','plain','plaintive','plastic',
+ 'playful','pleasant','pleased','pleasing','plump','plush','polished','polite','political',
+ 'pointed','pointless','poised','poor','popular','portly','posh','positive','possible',
+ 'potable','powerful','powerless','practical','precious','present','prestigious',
+ 'pretty','precious','previous','pricey','prickly','primary','prime','pristine','private',
+ 'prize','probable','productive','profitable','profuse','proper','proud','prudent',
+ 'punctual','pungent','puny','pure','purple','pushy','putrid','puzzled','puzzling',
+ 'quaint','qualified','quarrelsome','quarterly','queasy','querulous','questionable',
+ 'quick','quick-witted','quiet','quintessential','quirky','quixotic','quizzical',
+ 'radiant','ragged','rapid','rare','rash','raw','recent','reckless','rectangular',
+ 'ready','real','realistic','reasonable','red','reflecting','regal','regular',
+ 'reliable','relieved','remarkable','remorseful','remote','repentant','required',
+ 'respectful','responsible','repulsive','revolving','rewarding','rich','rigid',
+ 'right','ringed','ripe','roasted','robust','rosy','rotating','rotten','rough',
+ 'round','rowdy','royal','rubbery','rundown','ruddy','rude','runny','rural','rusty',
+ 'sad','safe','salty','same','sandy','sane','sarcastic','sardonic','satisfied',
+ 'scaly','scarce','scared','scary','scented','scholarly','scientific','scornful',
+ 'scratchy','scrawny','second','secondary','second-hand','secret','self-assured',
+ 'self-reliant','selfish','sentimental','separate','serene','serious','serpentine',
+ 'several','severe','shabby','shadowy','shady','shallow','shameful','shameless',
+ 'sharp','shimmering','shiny','shocked','shocking','shoddy','short','short-term',
+ 'showy','shrill','shy','sick','silent','silky','silly','silver','similar','simple',
+ 'simplistic','sinful','single','sizzling','skeletal','skinny','sleepy','slight',
+ 'slim','slimy','slippery','slow','slushy','small','smart','smoggy','smooth','smug',
+ 'snappy','snarling','sneaky','sniveling','snoopy','sociable','soft','soggy','solid',
+ 'somber','some','spherical','sophisticated','sore','sorrowful','soulful','soupy',
+ 'sour','Spanish','sparkling','sparse','specific','spectacular','speedy','spicy',
+ 'spiffy','spirited','spiteful','splendid','spotless','spotted','spry','square',
+ 'squeaky','squiggly','stable','staid','stained','stale','standard','starchy','stark',
+ 'starry','steep','sticky','stiff','stimulating','stingy','stormy','straight','strange',
+ 'steel','strict','strident','striking','striped','strong','studious','stunning',
+ 'stupendous','stupid','sturdy','stylish','subdued','submissive','substantial','subtle',
+ 'suburban','sudden','sugary','sunny','super','superb','superficial','superior',
+ 'supportive','sure-footed','surprised','suspicious','svelte','sweaty','sweet','sweltering',
+ 'swift','sympathetic','tall','talkative','tame','tan','tangible','tart','tasty',
+ 'tattered','taut','tedious','teeming','tempting','tender','tense','tepid','terrible',
+ 'terrific','testy','thankful','that','these','thick','thin','third','thirsty','this',
+ 'thorough','thorny','those','thoughtful','threadbare','thrifty','thunderous','tidy',
+ 'tight','timely','tinted','tiny','tired','torn','total','tough','traumatic','treasured',
+ 'tremendous','tragic','trained','tremendous','triangular','tricky','trifling','trim',
+ 'trivial','troubled','true','trusting','trustworthy','trusty','truthful','tubby',
+ 'turbulent','twin','ugly','ultimate','unacceptable','unaware','uncomfortable',
+ 'uncommon','unconscious','understated','unequaled','uneven','unfinished','unfit',
+ 'unfolded','unfortunate','unhappy','unhealthy','uniform','unimportant','unique',
+ 'united','unkempt','unknown','unlawful','unlined','unlucky','unnatural','unpleasant',
+ 'unrealistic','unripe','unruly','unselfish','unsightly','unsteady','unsung','untidy',
+ 'untimely','untried','untrue','unused','unusual','unwelcome','unwieldy','unwilling',
+ 'unwitting','unwritten','upbeat','upright','upset','urban','usable','used','useful',
+ 'useless','utilized','utter','vacant','vague','vain','valid','valuable','vapid',
+ 'variable','vast','velvety','venerated','vengeful','verifiable','vibrant','vicious',
+ 'victorious','vigilant','vigorous','villainous','violet','violent','virtual',
+ 'virtuous','visible','vital','vivacious','vivid','voluminous','wan','warlike','warm',
+ 'warmhearted','warped','wary','wasteful','watchful','waterlogged','watery','wavy',
+ 'wealthy','weak','weary','webbed','wee','weekly','weepy','weighty','weird','welcome',
+ 'well-documented','well-groomed','well-informed','well-lit','well-made','well-off',
+ 'well-to-do','well-worn','wet','which','whimsical','whirlwind','whispered','white',
+ 'whole','whopping','wicked','wide','wide-eyed','wiggly','wild','willing','wilted',
+ 'winding','windy','winged','wiry','wise','witty','wobbly','woeful','wonderful',
+ 'wooden','woozy','wordy','worldly','worn','worried','worrisome','worse','worst',
+ 'worthless','worthwhile','worthy','wrathful','wretched','writhing','wrong','wry',
+ 'yawning','yearly','yellow','yellowish','young','youthful','yummy','zany','zealous',
+ 'zesty','zigzag']
+nouns = ['people','history','way','art','world','information','map','two','family',
+ 'government','health','system','computer','meat','year','thanks','music','person',
+ 'reading','method','data','food','understanding','theory','law','bird','literature',
+ 'problem','software','control','knowledge','power','ability','economics','love',
+ 'internet','television','science','library','nature','fact','product','idea',
+ 'temperature','investment','area','society','activity','story','industry','media',
+ 'thing','oven','community','definition','safety','quality','development','language',
+ 'management','player','variety','video','week','security','country','exam','movie',
+ 'organization','equipment','physics','analysis','policy','series','thought','basis',
+ 'boyfriend','direction','strategy','technology','army','camera','freedom','paper',
+ 'environment','child','instance','month','truth','marketing','university','writing',
+ 'article','department','difference','goal','news','audience','fishing','growth',
+ 'income','marriage','user','combination','failure','meaning','medicine','philosophy',
+ 'teacher','communication','night','chemistry','disease','disk','energy','nation',
+ 'road','role','soup','advertising','location','success','addition','apartment','education',
+ 'math','moment','painting','politics','attention','decision','event','property',
+ 'shopping','student','wood','competition','distribution','entertainment','office',
+ 'population','president','unit','category','cigarette','context','introduction',
+ 'opportunity','performance','driver','flight','length','magazine','newspaper',
+ 'relationship','teaching','cell','dealer','finding','lake','member','message','phone',
+ 'scene','appearance','association','concept','customer','death','discussion','housing',
+ 'inflation','insurance','mood','woman','advice','blood','effort','expression','importance',
+ 'opinion','payment','reality','responsibility','situation','skill','statement','wealth',
+ 'application','city','county','depth','estate','foundation','grandmother','heart',
+ 'perspective','photo','recipe','studio','topic','collection','depression','imagination',
+ 'passion','percentage','resource','setting','ad','agency','college','connection',
+ 'criticism','debt','description','memory','patience','secretary','solution','administration',
+ 'aspect','attitude','director','personality','psychology','recommendation','response',
+ 'selection','storage','version','alcohol','argument','complaint','contract','emphasis',
+ 'highway','loss','membership','possession','preparation','steak','union','agreement',
+ 'cancer','currency','employment','engineering','entry','interaction','mixture','preference',
+ 'region','republic','tradition','virus','actor','classroom','delivery','device',
+ 'difficulty','drama','election','engine','football','guidance','hotel','owner',
+ 'priority','protection','suggestion','tension','variation','anxiety','atmosphere',
+ 'awareness','bath','bread','candidate','climate','comparison','confusion','construction',
+ 'elevator','emotion','employee','employer','guest','height','leadership','mall','manager',
+ 'operation','recording','sample','transportation','charity','cousin','disaster','editor',
+ 'efficiency','excitement','extent','feedback','guitar','homework','leader','mom','outcome',
+ 'permission','presentation','promotion','reflection','refrigerator','resolution','revenue',
+ 'session','singer','tennis','basket','bonus','cabinet','childhood','church','clothes','coffee',
+ 'dinner','drawing','hair','hearing','initiative','judgment','lab','measurement','mode','mud',
+ 'orange','poetry','police','possibility','procedure','queen','ratio','relation','restaurant',
+ 'satisfaction','sector','signature','significance','song','tooth','town','vehicle','volume','wife',
+ 'accident','airport','appointment','arrival','assumption','baseball','chapter','committee',
+ 'conversation','database','enthusiasm','error','explanation','farmer','gate','girl','hall',
+ 'historian','hospital','injury','instruction','maintenance','manufacturer','meal','perception','pie',
+ 'poem','presence','proposal','reception','replacement','revolution','river','son','speech','tea',
+ 'village','warning','winner','worker','writer','assistance','breath','buyer','chest','chocolate',
+ 'conclusion','contribution','cookie','courage','dad','desk','drawer','establishment','examination',
+ 'garbage','grocery','honey','impression','improvement','independence','insect','inspection',
+ 'inspector','king','ladder','menu','penalty','piano','potato','profession','professor','quantity',
+ 'reaction','requirement','salad','sister','supermarket','tongue','weakness','wedding','affair',
+ 'ambition','analyst','apple','assignment','assistant','bathroom','bedroom','beer','birthday',
+ 'celebration','championship','cheek','client','consequence','departure','diamond','dirt','ear',
+ 'fortune','friendship','funeral','gene','girlfriend','hat','indication','intention','lady',
+ 'midnight','negotiation','obligation','passenger','pizza','platform','poet','pollution',
+ 'recognition','reputation','shirt','sir','speaker','stranger','surgery','sympathy','tale','throat',
+ 'trainer','uncle','youth','time','work','film','water','money','example','while','business','study',
+ 'game','life','form','air','day','place','number','part','field','fish','back','process','heat',
+ 'hand','experience','job','book','end','point','type','home','economy','value','body','market',
+ 'guide','interest','state','radio','course','company','price','size','card','list','mind','trade',
+ 'line','care','group','risk','word','fat','force','key','light','training','name','school','top',
+ 'amount','level','order','practice','research','sense','service','piece','web','boss','sport','fun',
+ 'house','page','term','test','answer','sound','focus','matter','kind','soil','board','oil','picture',
+ 'access','garden','range','rate','reason','future','site','demand','exercise','image','case','cause',
+ 'coast','action','age','bad','boat','record','result','section','building','mouse','cash','class',
+ 'nothing','period','plan','store','tax','side','subject','space','rule','stock','weather','chance',
+ 'figure','man','model','source','beginning','earth','program','chicken','design','feature','head',
+ 'material','purpose','question','rock','salt','act','birth','car','dog','object','scale','sun',
+ 'note','profit','rent','speed','style','war','bank','craft','half','inside','outside','standard',
+ 'bus','exchange','eye','fire','position','pressure','stress','advantage','benefit','box','frame',
+ 'issue','step','cycle','face','item','metal','paint','review','room','screen','structure','view',
+ 'account','ball','discipline','medium','share','balance','bit','black','bottom','choice','gift',
+ 'impact','machine','shape','tool','wind','address','average','career','culture','morning','pot',
+ 'sign','table','task','condition','contact','credit','egg','hope','ice','network','north','square',
+ 'attempt','date','effect','link','post','star','voice','capital','challenge','friend','self','shot',
+ 'brush','couple','debate','exit','front','function','lack','living','plant','plastic','spot',
+ 'summer','taste','theme','track','wing','brain','button','click','desire','foot','gas','influence',
+ 'notice','rain','wall','base','damage','distance','feeling','pair','savings','staff','sugar',
+ 'target','text','animal','author','budget','discount','file','ground','lesson','minute','officer',
+ 'phase','reference','register','sky','stage','stick','title','trouble','bowl','bridge','campaign',
+ 'character','club','edge','evidence','fan','letter','lock','maximum','novel','option','pack','park',
+ 'plenty','quarter','skin','sort','weight','baby','background','carry','dish','factor','fruit',
+ 'glass','joint','master','muscle','red','strength','traffic','trip','vegetable','appeal','chart',
+ 'gear','ideal','kitchen','land','log','mother','net','party','principle','relative','sale','season',
+ 'signal','spirit','street','tree','wave','belt','bench','commission','copy','drop','minimum','path',
+ 'progress','project','sea','south','status','stuff','ticket','tour','angle','blue','breakfast',
+ 'confidence','daughter','degree','doctor','dot','dream','duty','essay','father','fee','finance',
+ 'hour','juice','limit','luck','milk','mouth','peace','pipe','seat','stable','storm','substance',
+ 'team','trick','afternoon','bat','beach','blank','catch','chain','consideration','cream','crew',
+ 'detail','gold','interview','kid','mark','match','mission','pain','pleasure','score','screw','sex',
+ 'shop','shower','suit','tone','window','agent','band','block','bone','calendar','cap','coat',
+ 'contest','corner','court','cup','district','door','east','finger','garage','guarantee','hole',
+ 'hook','implement','layer','lecture','lie','manner','meeting','nose','parking','partner','profile',
+ 'respect','rice','routine','schedule','swimming','telephone','tip','winter','airline','bag','battle',
+ 'bed','bill','bother','cake','code','curve','designer','dimension','dress','ease','emergency',
+ 'evening','extension','farm','fight','gap','grade','holiday','horror','horse','host','husband',
+ 'loan','mistake','mountain','nail','noise','occasion','package','patient','pause','phrase','proof',
+ 'race','relief','sand','sentence','shoulder','smoke','stomach','string','tourist','towel','vacation',
+ 'west','wheel','wine','arm','aside','associate','bet','blow','border','branch','breast','brother',
+ 'buddy','bunch','chip','coach','cross','document','draft','dust','expert','floor','god','golf',
+ 'habit','iron','judge','knife','landscape','league','mail','mess','native','opening','parent',
+ 'pattern','pin','pool','pound','request','salary','shame','shelter','shoe','silver','tackle','tank',
+ 'trust','assist','bake','bar','bell','bike','blame','boy','brick','chair','closet','clue','collar',
+ 'comment','conference','devil','diet','fear','fuel','glove','jacket','lunch','monitor','mortgage',
+ 'nurse','pace','panic','peak','plane','reward','row','sandwich','shock','spite','spray','surprise',
+ 'till','transition','weekend','welcome','yard','alarm','bend','bicycle','bite','blind','bottle',
+ 'cable','candle','clerk','cloud','concert','counter','flower','grandfather','harm','knee','lawyer',
+ 'leather','load','mirror','neck','pension','plate','purple','ruin','ship','skirt','slice','snow',
+ 'specialist','stroke','switch','trash','tune','zone','anger','award','bid','bitter','boot','bug',
+ 'camp','candy','carpet','cat','champion','channel','clock','comfort','cow','crack','engineer',
+ 'entrance','fault','grass','guy','hell','highlight','incident','island','joke','jury','leg','lip',
+ 'mate','motor','nerve','passage','pen','pride','priest','prize','promise','resident','resort','ring',
+ 'roof','rope','sail','scheme','script','sock','station','toe','tower','truck','witness','a','you',
+ 'it','can','will','if','one','many','most','other','use','make','good','look','help','go','great',
+ 'being','few','might','still','public','read','keep','start','give','human','local','general','she',
+ 'specific','long','play','feel','high','tonight','put','common','set','change','simple','past','big',
+ 'possible','particular','today','major','personal','current','national','cut','natural','physical',
+ 'show','try','check','second','call','move','pay','let','increase','single','individual','turn',
+ 'ask','buy','guard','hold','main','offer','potential','professional','international','travel','cook',
+ 'alternative','following','special','working','whole','dance','excuse','cold','commercial','low',
+ 'purchase','deal','primary','worth','fall','necessary','positive','produce','search','present',
+ 'spend','talk','creative','tell','cost','drive','green','support','glad','remove','return','run',
+ 'complex','due','effective','middle','regular','reserve','independent','leave','original','reach',
+ 'rest','serve','watch','beautiful','charge','active','break','negative','safe','stay','visit',
+ 'visual','affect','cover','report','rise','walk','white','beyond','junior','pick','unique',
+ 'anything','classic','final','lift','mix','private','stop','teach','western','concern','familiar',
+ 'fly','official','broad','comfortable','gain','maybe','rich','save','stand','young','fail','heavy',
+ 'hello','lead','listen','valuable','worry','handle','leading','meet','release','sell','finish',
+ 'normal','press','ride','secret','spread','spring','tough','wait','brown','deep','display','flow',
+ 'hit','objective','shoot','touch','cancel','chemical','cry','dump','extreme','push','conflict','eat',
+ 'fill','formal','jump','kick','opposite','pass','pitch','remote','total','treat','vast','abuse',
+ 'beat','burn','deposit','print','raise','sleep','somewhere','advance','anywhere','consist','dark',
+ 'double','draw','equal','fix','hire','internal','join','kill','sensitive','tap','win','attack',
+ 'claim','constant','drag','drink','guess','minor','pull','raw','soft','solid','wear','weird',
+ 'wonder','annual','count','dead','doubt','feed','forever','impress','nobody','repeat','round','sing',
+ 'slide','strip','whereas','wish','combine','command','dig','divide','equivalent','hang','hunt',
+ 'initial','march','mention','smell','spiritual','survey','tie','adult','brief','crazy','escape',
+ 'gather','hate','prior','repair','rough','sad','scratch','sick','strike','employ','external','hurt',
+ 'illegal','laugh','lay','mobile','nasty','ordinary','respond','royal','senior','split','strain',
+ 'struggle','swim','train','upper','wash','yellow','convert','crash','dependent','fold','funny',
+ 'grab','hide','miss','permit','quote','recover','resolve','roll','sink','slip','spare','suspect',
+ 'sweet','swing','twist','upstairs','usual','abroad','brave','calm','concentrate','estimate','grand',
+ 'male','mine','prompt','quiet','refuse','regret','reveal','rush','shake','shift','shine','steal',
+ 'suck','surround','anybody','bear','brilliant','dare','dear','delay','drunk','female','hurry',
+ 'inevitable','invite','kiss','neat','pop','punch','quit','reply','representative','resist','rip',
+ 'rub','silly','smile','spell','stretch','stupid','tear','temporary','tomorrow','wake','wrap',
+ 'yesterday']
+
+def get_random_name(with_ext=True):
+    return "{}_{}_{}{}".format(
+        random.choice(adjectives),
+        random.choice(nouns),
+        random.randint(0, 50000),
+        '.txt' if with_ext else '')
+
+def get_random_file(max_filesize):
+ file_start = random.randint(0, (max_filesize - 1025))
+ file_size = random.randint(0, (max_filesize - file_start))
+ file_name = get_random_name()
+ return "{}:{}:{}".format(file_start, file_size, file_name)
+
+def get_stream(name, max_filesize, data_loc, args):
+ files = []
+ for _ in range(random.randint(args.min_files, args.max_files)):
+ files.append(get_random_file(max_filesize))
+ stream = "{} {} {}".format(name, data_loc, ' '.join(files))
+ return stream
+
+def create_substreams(depth, base_stream_name, max_filesize, data_loc, args, current_size=0):
+ current_stream = get_stream(base_stream_name, max_filesize, data_loc, args)
+ current_size += len(current_stream)
+ streams = [current_stream]
+
+ if current_size >= max_manifest_size:
+ logger.debug("Maximum manifest size reached -- finishing early at {}".format(base_stream_name))
+ elif depth == 0:
+ logger.debug("Finished stream {}".format(base_stream_name))
+ else:
+ for _ in range(random.randint(args.min_subdirs, args.max_subdirs)):
+ stream_name = base_stream_name+'/'+get_random_name(False)
+ substreams = create_substreams(depth-1, stream_name, max_filesize,
+ data_loc, args, current_size)
+ current_size += sum([len(x) for x in substreams])
+ if current_size >= max_manifest_size:
+ break
+ streams.extend(substreams)
+ return streams
+
+def parse_arguments(arguments):
+ args = arg_parser.parse_args(arguments)
+ if args.debug:
+ logger.setLevel(logging.DEBUG)
+    if args.max_files < args.min_files:
+        arg_parser.error("--min-files={} should be less than or equal to --max-files={}".format(args.min_files, args.max_files))
+ if args.min_depth < 0:
+ arg_parser.error("--min-depth should be at least 0")
+    if args.max_depth < 0 or args.max_depth < args.min_depth:
+        arg_parser.error("--max-depth should be >= 0 and >= --min-depth={}".format(args.min_depth))
+    if args.max_subdirs < args.min_subdirs:
+        arg_parser.error("--min-subdirs={} should be less than or equal to --max-subdirs={}".format(args.min_subdirs, args.max_subdirs))
+ return args
+
+def main(arguments=None):
+ args = parse_arguments(arguments)
+ logger.info("Creating test collection with (min={}, max={}) files per directory and a tree depth of (min={}, max={}) and (min={}, max={}) subdirs in each depth level...".format(args.min_files, args.max_files, args.min_depth, args.max_depth, args.min_subdirs, args.max_subdirs))
+ api = arvados.api('v1', timeout=5*60)
+ max_filesize = 1024*1024
+    data_block = ''.join([random.choice(string.printable) for _ in range(max_filesize)])
+ data_loc = arvados.KeepClient(api).put(data_block)
+ streams = create_substreams(random.randint(args.min_depth, args.max_depth),
+ '.', max_filesize, data_loc, args)
+ manifest = ''
+ for s in streams:
+ if len(manifest)+len(s) > max_manifest_size:
+ logger.info("Skipping stream {} to avoid making a manifest bigger than 128MiB".format(s.split(' ')[0]))
+ break
+ manifest += s + '\n'
+    coll_name = get_random_name(False)
+    try:
+        coll = api.collections().create(
+            body={"collection": {
+                "name": coll_name,
+                "manifest_text": manifest
+            },
+            }).execute()
+    except Exception:
+        logger.error("Failed to create collection with name '{}' and manifest:\n'{}...'\nSize: {}".format(coll_name, manifest[0:1024], len(manifest)))
+        raise
+ logger.info("Created collection {} - manifest size: {}".format(coll["uuid"], len(manifest)))
+ return 0
+
+if __name__ == "__main__":
+ sys.exit(main())
\ No newline at end of file
arv.config()["Services"]["Workbench1"]["ExternalURL"],
uuid, prof)
+collectionNameCache = {}
+def getCollectionName(arv, uuid):
+ if uuid not in collectionNameCache:
+ u = arv.collections().get(uuid=uuid).execute()
+ collectionNameCache[uuid] = u["name"]
+ return collectionNameCache[uuid]
+
def getname(u):
return "\"%s\" (%s)" % (u["name"], u["uuid"])
else:
users[owner].append("%s Deleted collection %s %s" % (event_at, getname(e["properties"]["old_attributes"]), loguuid))
+ elif e["event_type"] == "file_download":
+ users[e["object_uuid"]].append("%s Downloaded file \"%s\" from \"%s\" (%s) (%s)" % (event_at,
+ e["properties"].get("collection_file_path") or e["properties"].get("reqPath"),
+ getCollectionName(arv, e["properties"].get("collection_uuid")),
+ e["properties"].get("collection_uuid"),
+ e["properties"].get("portable_data_hash")))
+
+ elif e["event_type"] == "file_upload":
+ users[e["object_uuid"]].append("%s Uploaded file \"%s\" to \"%s\" (%s)" % (event_at,
+ e["properties"].get("collection_file_path") or e["properties"].get("reqPath"),
+ getCollectionName(arv, e["properties"].get("collection_uuid")),
+ e["properties"].get("collection_uuid")))
+
else:
users[owner].append("%s %s %s %s" % (e["event_type"], e["object_kind"], e["object_uuid"], loguuid))