Veritas Genetics, Inc. <*@veritasgenetics.com>
Curii Corporation, Inc. <*@curii.com>
Dante Tsang <dante@dantetsang.com>
-Codex Genetics Ltd <info@codexgenetics.com>
\ No newline at end of file
+Codex Genetics Ltd <info@codexgenetics.com>
+Bruno P. Kinoshita <brunodepaulak@yahoo.com.br>
Those interested in contributing should begin by joining the [Arvados community
channel](https://gitter.im/arvados/community) and telling us about your interest.
-Contributers should also create an account at https://dev.arvados.org
+Contributors should also create an account at https://dev.arvados.org
to be able to create and comment on bug tracker issues. The
Arvados public bug tracker is located at
https://dev.arvados.org/projects/arvados/issues .
-Contributers may also be interested in the [development road map](https://dev.arvados.org/issues/gantt?utf8=%E2%9C%93&set_filter=1&gantt=1&f%5B%5D=project_id&op%5Bproject_id%5D=%3D&v%5Bproject_id%5D%5B%5D=49&f%5B%5D=&zoom=1).
+Contributors may also be interested in the [development road map](https://dev.arvados.org/issues/gantt?utf8=%E2%9C%93&set_filter=1&gantt=1&f%5B%5D=project_id&op%5Bproject_id%5D=%3D&v%5Bproject_id%5D%5B%5D=49&f%5B%5D=&zoom=1).
# Development
Git repositories for primary development are located at
https://git.arvados.org/ and can also be browsed at
https://dev.arvados.org/projects/arvados/repository . Every push to
-the master branch is also mirrored to Github at
+the main branch is also mirrored to GitHub at
https://github.com/arvados/arvados .
Visit [Hacking Arvados](https://dev.arvados.org/projects/arvados/wiki/Hacking) for
2. Clone your fork, make your changes, commit to your fork.
3. Every commit message must have a DCO sign-off and every file must have an SPDX license (see below).
4. Add yourself to the [AUTHORS](AUTHORS) file
-5. When your fork is ready, through Github, Create a Pull Request against `arvados:master`
+5. When your fork is ready, create a pull request on GitHub against `arvados:main`
6. Notify the core team about your pull request through the [Arvados development
channel](https://gitter.im/arvados/development) or by other means.
7. A member of the core team will review the pull request. They may have questions or comments, or request changes.
8. When the contribution is ready, a member of the core team will
-merge the pull request into the master branch, which will
+merge the pull request into the main branch, which will
automatically resolve the pull request.
The Arvados project does not require a contributor agreement in advance, but it does require that each commit message include a [Developer Certificate of Origin](https://dev.arvados.org/projects/arvados/wiki/Developer_Certificate_Of_Origin). Please ensure *every git commit message* includes `Arvados-DCO-1.1-Signed-off-by`. If you have already made commits without it, fix them with `git commit --amend` or `git rebase`.
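For example, the sign-off is a trailer line at the end of the commit message (the name and email below are placeholders):

```
Arvados-DCO-1.1-Signed-off-by: Jane Doe <jane.doe@example.com>
```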
Continuous integration is hosted at https://ci.arvados.org/
-Currently, external contributers cannot trigger builds. We are investigating integration with Github pull requests for the future.
+Currently, external contributors cannot trigger builds. We are investigating integration with GitHub pull requests for the future.
[![Build Status](https://ci.arvados.org/buildStatus/icon?job=run-tests)](https://ci.arvados.org/job/run-tests/)
One is by "configuring (system-wide) the collection's idle time":{{site.baseurl}}/admin/collection-versioning.html. This idle time is checked against the @modified_at@ attribute: a new version is saved when one or more of the previously enumerated attributes are updated and @modified_at@ is at least the configured idle time in the past. This way, a frequently updated collection won't create lots of version records that may not be useful.
-The other way to trigger a version save, is by setting @preserve_version@ to @true@ on the current version collection record: this ensures that the current state will be preserved as a version the next time it gets updated.
+The other way to trigger a version save is to set @preserve_version@ to @true@ on the current version collection record: this ensures that the current state will be preserved as a version the next time the record gets updated. This applies both when creating a new collection and when updating a preexisting one; if @preserve_version = true@ is passed on a collection's create call, the new record's state will be preserved as a snapshot on its next update.
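For illustration, here is a minimal sketch using the Arvados Python SDK (the file and collection names are placeholders; it assumes @ARVADOS_API_HOST@ / @ARVADOS_API_TOKEN@ are configured and the cluster has @Collections.CollectionVersioning@ enabled, otherwise @preserve_version=True@ raises an error):

<pre>
import arvados.collection

# Create a collection and mark the state saved here so it is kept as a
# version snapshot the next time the collection record is updated.
c = arvados.collection.Collection()
with c.open("example.txt", "wb") as f:
    f.write(b"hello")
c.save_new(name="versioned example", preserve_version=True)
</pre>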
h3. Collection's past versions behavior & limitations
"Collections.BlobTrashCheckInterval": false,
"Collections.BlobTrashConcurrency": false,
"Collections.BlobTrashLifetime": false,
- "Collections.CollectionVersioning": false,
+ "Collections.CollectionVersioning": true,
"Collections.DefaultReplication": true,
"Collections.DefaultTrashLifetime": true,
"Collections.ForwardSlashNameSubstitution": true,
# file to determine what version of cwltool and schema-salad to
# build.
install_requires=[
- 'cwltool==3.1.20211020155521',
- 'schema-salad==8.2.20211020114435',
+ 'cwltool==3.1.20211107152837',
+ 'schema-salad==8.2.20211116214159',
'arvados-python-client{}'.format(pysdk_dep),
'setuptools',
'ciso8601 >= 2.0.0',
storage_classes=None,
trash_at=None,
merge=True,
- num_retries=None):
+ num_retries=None,
+ preserve_version=False):
"""Save collection to an existing collection record.
Commit pending buffer blocks to Keep, merge with remote record (if
:num_retries:
Retry count on API calls (if None, use the collection default)
+ :preserve_version:
+ If True, indicate that the collection content being saved right now
+ should be preserved in a version snapshot if the collection record is
+ updated in the future. Requires that the API server has
+ Collections.CollectionVersioning enabled; if not, setting this will
+ raise an exception.
+
"""
if properties and type(properties) is not dict:
raise errors.ArgumentError("properties must be dictionary type.")
if trash_at and type(trash_at) is not datetime.datetime:
raise errors.ArgumentError("trash_at must be datetime type.")
+ if preserve_version and not self._my_api().config()['Collections'].get('CollectionVersioning', False):
+ raise errors.ArgumentError("preserve_version is not supported when CollectionVersioning is not enabled.")
+
body={}
if properties:
body["properties"] = properties
if trash_at:
t = trash_at.strftime("%Y-%m-%dT%H:%M:%S.%fZ")
body["trash_at"] = t
+ if preserve_version:
+ body["preserve_version"] = preserve_version
if not self.committed():
if self._has_remote_blocks:
storage_classes=None,
trash_at=None,
ensure_unique_name=False,
- num_retries=None):
+ num_retries=None,
+ preserve_version=False):
"""Save collection to a new collection record.
Commit pending buffer blocks to Keep and, when create_collection_record
:num_retries:
Retry count on API calls (if None, use the collection default)
+ :preserve_version:
+ If True, indicate that the collection content being saved right now
+ should be preserved in a version snapshot if the collection record is
+ updated in the future. Requires that the API server has
+ Collections.CollectionVersioning enabled; if not, setting this will
+ raise an exception.
+
"""
if properties and type(properties) is not dict:
raise errors.ArgumentError("properties must be dictionary type.")
if trash_at and type(trash_at) is not datetime.datetime:
raise errors.ArgumentError("trash_at must be datetime type.")
+ if preserve_version and not self._my_api().config()['Collections'].get('CollectionVersioning', False):
+ raise errors.ArgumentError("preserve_version is not supported when CollectionVersioning is not enabled.")
+
if self._has_remote_blocks:
# Copy any remote blocks to the local cluster.
self._copy_remote_blocks(remote_blocks={})
if trash_at:
t = trash_at.strftime("%Y-%m-%dT%H:%M:%S.%fZ")
body["trash_at"] = t
+ if preserve_version:
+ body["preserve_version"] = preserve_version
self._remember_api_response(self._my_api().collections().create(ensure_unique_name=ensure_unique_name, body=body).execute(num_retries=num_retries))
text = self._api_response["manifest_text"]
'future',
'google-api-python-client >=1.6.2, <2',
'google-auth<2',
- 'httplib2 >=0.9.2',
+ 'httplib2 >=0.9.2, <0.20.2',
'pycurl >=7.19.5.1',
'ruamel.yaml >=0.15.54, <0.17.11',
'setuptools',
"UserProfileNotificationAddress": "arvados@example.com",
},
"Collections": {
+ "CollectionVersioning": True,
"BlobSigningKey": "zfhgfenhffzltr9dixws36j1yhksjoll2grmku38mi7yxd66h5j4q9w4jzanezacp8s6q0ro3hxakfye02152hncy6zml2ed0uc",
"TrustAllContent": False,
"ForwardSlashNameSubstitution": "/",
class NewCollectionTestCaseWithServers(run_test_server.TestCaseWithServers):
+ def test_preserve_version_on_save(self):
+ c = Collection()
+ c.save_new(preserve_version=True)
+ coll_record = arvados.api().collections().get(uuid=c.manifest_locator()).execute()
+ self.assertEqual(coll_record['version'], 1)
+ self.assertEqual(coll_record['preserve_version'], True)
+ with c.open("foo.txt", "wb") as foo:
+ foo.write(b"foo")
+ c.save(preserve_version=True)
+ coll_record = arvados.api().collections().get(uuid=c.manifest_locator()).execute()
+ self.assertEqual(coll_record['version'], 2)
+ self.assertEqual(coll_record['preserve_version'], True)
+ with c.open("bar.txt", "wb") as foo:
+ foo.write(b"bar")
+ c.save(preserve_version=False)
+ coll_record = arvados.api().collections().get(uuid=c.manifest_locator()).execute()
+ self.assertEqual(coll_record['version'], 3)
+ self.assertEqual(coll_record['preserve_version'], False)
+
def test_get_manifest_text_only_committed(self):
c = Collection()
with c.open("count.txt", "wb") as f:
skip_before_action :find_object_by_uuid, only: :shared
skip_before_action :render_404_if_no_object, only: :shared
+ TRASHABLE_CLASSES = ['project']
+
def self._index_requires_parameters
(super rescue {}).
merge({
end
end
+ def destroy
+ if !TRASHABLE_CLASSES.include?(@object.group_class)
+ @object.destroy
+ show
+ else
+ super # Calls destroy from TrashableController module
+ end
+ end
+
def render_404_if_no_object
if params[:action] == 'contents'
if !params[:uuid]
@offset = offset_all
end
- protected
-
def exclude_home objectlist, klass
# select records that are readable by current user AND
# the owner_uuid is a user (but not the current user) OR
assert_includes(owners, groups(:asubproject).uuid)
end
+ [:afiltergroup, :private_role].each do |grp|
+ test "delete non-project group #{grp}" do
+ authorize_with :admin
+ assert_not_nil Group.find_by_uuid(groups(grp).uuid)
+ assert !Group.find_by_uuid(groups(grp).uuid).is_trashed
+ post :destroy, params: {
+ id: groups(grp).uuid,
+ format: :json,
+ }
+ assert_response :success
+ # Should be deleted outright, not merely trashed
+ assert_nil Group.find_by_uuid(groups(grp).uuid)
+ end
+ end
+
### trashed project tests ###
#
// e4de7a2810f5554cd39b36d8ddb132ff+67108864 1388701136
//
func (v *UnixVolume) IndexTo(prefix string, w io.Writer) error {
- var lastErr error
rootdir, err := v.os.Open(v.Root)
if err != nil {
return err
}
- defer rootdir.Close()
v.os.stats.TickOps("readdir")
v.os.stats.Tick(&v.os.stats.ReaddirOps)
- for {
- names, err := rootdir.Readdirnames(1)
- if err == io.EOF {
- return lastErr
- } else if err != nil {
- return err
- }
- if !strings.HasPrefix(names[0], prefix) && !strings.HasPrefix(prefix, names[0]) {
+ subdirs, err := rootdir.Readdirnames(-1)
+ rootdir.Close()
+ if err != nil {
+ return err
+ }
+ for _, subdir := range subdirs {
+ if !strings.HasPrefix(subdir, prefix) && !strings.HasPrefix(prefix, subdir) {
// prefix excludes all blocks stored in this dir
continue
}
- if !blockDirRe.MatchString(names[0]) {
+ if !blockDirRe.MatchString(subdir) {
continue
}
- blockdirpath := filepath.Join(v.Root, names[0])
+ blockdirpath := filepath.Join(v.Root, subdir)
blockdir, err := v.os.Open(blockdirpath)
if err != nil {
v.logger.WithError(err).Errorf("error reading %q", blockdirpath)
- lastErr = fmt.Errorf("error reading %q: %s", blockdirpath, err)
- continue
+ return fmt.Errorf("error reading %q: %s", blockdirpath, err)
}
v.os.stats.TickOps("readdir")
v.os.stats.Tick(&v.os.stats.ReaddirOps)
- for {
- fileInfo, err := blockdir.Readdir(1)
- if err == io.EOF {
- break
+ // ReadDir() (compared to Readdir(), which returns
+ // FileInfo structs) helps complete the sequence of
+ // readdirent calls as quickly as possible, reducing
+ // the likelihood of NFS EBADCOOKIE (523) errors.
+ dirents, err := blockdir.ReadDir(-1)
+ blockdir.Close()
+ if err != nil {
+ v.logger.WithError(err).Errorf("error reading %q", blockdirpath)
+ return fmt.Errorf("error reading %q: %s", blockdirpath, err)
+ }
+ for _, dirent := range dirents {
+ fileInfo, err := dirent.Info()
+ if os.IsNotExist(err) {
+ // File disappeared between ReadDir() and now
+ continue
} else if err != nil {
- v.logger.WithError(err).Errorf("error reading %q", blockdirpath)
- lastErr = fmt.Errorf("error reading %q: %s", blockdirpath, err)
- break
+ v.logger.WithError(err).Errorf("error getting FileInfo for %q in %q", dirent.Name(), blockdirpath)
+ return err
}
- name := fileInfo[0].Name()
+ name := fileInfo.Name()
if !strings.HasPrefix(name, prefix) {
continue
}
}
_, err = fmt.Fprint(w,
name,
- "+", fileInfo[0].Size(),
- " ", fileInfo[0].ModTime().UnixNano(),
+ "+", fileInfo.Size(),
+ " ", fileInfo.ModTime().UnixNano(),
"\n")
if err != nil {
- blockdir.Close()
return fmt.Errorf("error writing: %s", err)
}
}
- blockdir.Close()
}
+ return nil
}
// Trash trashes the block data from the unix storage