<div class="releasenotes">
</notextile>
- h2(#main). development main (as of 2021-10-27)
+ h2(#main). development main (as of 2021-11-10)
"previous: Upgrading from 2.3.0":#v2_3_0
* If you already have a robust permanent keepstore infrastructure, you can set @Containers.LocalKeepBlobBuffersPerVCPU@ to 0 to disable this feature and preserve the previous behavior of sending container I/O traffic to your separately provisioned keepstore servers (see the configuration sketch after this list).
* This feature is enabled only if no volumes use @AccessViaHosts@, and no volumes have underlying @Replication@ less than @Collections.DefaultReplication@. If the feature is configured but cannot be enabled due to an incompatible volume configuration, this will be noted in the @crunch-run.txt@ file in the container log.
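
For reference, a minimal sketch of the relevant @config.yml@ stanza; the @zzzzz@ cluster ID is a placeholder for your own:

<notextile>
<pre><code>Clusters:
  zzzzz:
    Containers:
      # 0 disables the local keepstore buffers; container I/O goes to
      # your separately provisioned keepstore servers instead.
      LocalKeepBlobBuffersPerVCPU: 0
</code></pre>
</notextile>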
+ h3. Backend support for vocabulary checking
+
+ If your installation uses the vocabulary feature on Workbench2, you will need to update the cluster configuration by moving the vocabulary definition file to the node where @controller@ runs, and setting the @API.VocabularyPath@ configuration parameter to the local path where the file was placed.
+ This enables vocabulary checking cluster-wide, including on Workbench2. The @Workbench.VocabularyURL@ configuration parameter is deprecated and will be removed in a future release.
+ You can read more about how this feature works on the "admin page":{{site.baseurl}}/admin/metadata-vocabulary.html.
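+
+ For example, a minimal sketch of the relevant @config.yml@ stanza, where the @zzzzz@ cluster ID and the file path are placeholders for your own values:
+
+ <notextile>
+ <pre><code>Clusters:
+   zzzzz:
+     API:
+       # Local path, on the node running controller, to the vocabulary
+       # definition file.
+       VocabularyPath: /etc/arvados/vocabulary.json
+ </code></pre>
+ </notextile>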
+
h2(#v2_3_0). v2.3.0 (2021-10-27)
"previous: Upgrading to 2.2.0":#v2_2_0
h3. New property vocabulary format for Workbench2
- (feature "#14151":https://dev.arvados.org/issues/14151) Workbench2 supports a new vocabulary format and it isn't compatible with the previous one, please read the "workbench2 vocabulary format admin page":{{site.baseurl}}/admin/workbench2-vocabulary.html for more information.
+ (feature "#14151":https://dev.arvados.org/issues/14151) Workbench2 supports a new vocabulary format and it isn't compatible with the previous one, please read the "metadata vocabulary format admin page":{{site.baseurl}}/admin/metadata-vocabulary.html for more information.
h3. Cloud installations only: node manager replaced by arvados-dispatch-cloud
As part of story "#9945":https://dev.arvados.org/issues/9945, the distribution packaging (deb/rpm) of our Python packages has changed. These packages now include a built-in virtualenv to reduce dependencies on system packages. We have also stopped packaging and publishing backports for all the Python dependencies of our packages, as they are no longer needed.
-One practical consequence of this change is that the use of the Arvados Python SDK (aka "import arvados") will require a tweak if the SDK was installed from a distribution package. It now requires the loading of the virtualenv environment from our packages. The "Install documentation for the Arvados Python SDK":/sdk/python/sdk-python.html reflects this change. This does not affect the use of the command line tools (e.g. arv-get, etc.).
+One practical consequence of this change is that using the Arvados Python SDK (aka "import arvados") requires a tweak if the SDK was installed from a distribution package: it now requires loading the virtualenv environment from our packages. The "Install documentation for the Arvados Python SDK":{{ site.baseurl }}/sdk/python/sdk-python.html reflects this change. This does not affect the use of the command line tools (e.g. @arv-get@).
Python scripts that rely on the distribution Arvados Python SDK packages to import the Arvados SDK will need to be tweaked to load the correct Python environment, as in the sketch below.
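
For example, a minimal sketch: assuming the distribution package installs its virtualenv under @/usr/share/python3/dist/python3-arvados-python-client/@ (check the Python SDK install documentation for the exact path on your system), a script can opt into that environment through its shebang line:

<notextile>
<pre><code>#!/usr/share/python3/dist/python3-arvados-python-client/bin/python
# Runs inside the virtualenv bundled with the distribution package, so
# "import arvados" resolves without any system-wide pip install.
import arvados

# Uses ARVADOS_API_HOST / ARVADOS_API_TOKEN from the environment.
api = arvados.api('v1')
print(api.users().current().execute()['uuid'])
</code></pre>
</notextile>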
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/auth"
"git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/health"
)
type Conn struct {
remotes map[string]backend
}
- func New(cluster *arvados.Cluster) *Conn {
+ func New(cluster *arvados.Cluster, healthFuncs *map[string]health.Func) *Conn {
local := localdb.NewConn(cluster)
remotes := map[string]backend{}
for id, remote := range cluster.RemoteClusters {
if !remote.Proxy || id == cluster.ClusterID {
continue
}
- conn := rpc.NewConn(id, &url.URL{Scheme: remote.Scheme, Host: remote.Host}, remote.Insecure, saltedTokenProvider(local, id))
+ conn := rpc.NewConn(id, &url.URL{Scheme: remote.Scheme, Host: remote.Host}, remote.Insecure, saltedTokenProvider(cluster, local, id))
// Older versions of controller rely on the Via header
// to detect loops.
conn.SendHeader = http.Header{"Via": {"HTTP/1.1 arvados-controller"}}
remotes[id] = conn
}
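+ // If the caller passed a non-nil healthFuncs pointer, hand back a
+ // map of extra health checks; the only entry so far reports the most
+ // recent error (if any) from loading the vocabulary file.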
+ if healthFuncs != nil {
+ hf := map[string]health.Func{"vocabulary": local.LastVocabularyError}
+ *healthFuncs = hf
+ }
+
return &Conn{
cluster: cluster,
local: local,
// tokens from an incoming request context, determines whether they
// should (and can) be salted for the given remoteID, and returns the
// resulting tokens.
-func saltedTokenProvider(local backend, remoteID string) rpc.TokenProvider {
+func saltedTokenProvider(cluster *arvados.Cluster, local backend, remoteID string) rpc.TokenProvider {
return func(ctx context.Context) ([]string, error) {
var tokens []string
incoming, ok := auth.FromContext(ctx)
return nil, errors.New("no token provided")
}
for _, token := range incoming.Tokens {
+ if strings.HasPrefix(token, "v2/"+cluster.ClusterID+"-") && remoteID == cluster.Login.LoginCluster {
+ // Don't forward a locally issued token to the login cluster:
+ // if we did, the login cluster would call back to us and then
+ // reject our response because the user UUID prefix (i.e., the
+ // LoginCluster prefix) won't match the token UUID prefix (i.e.,
+ // our prefix).
+ return nil, httpErrorf(http.StatusUnauthorized, "cannot use a locally issued token to forward a request to our login cluster (%s)", remoteID)
+ }
salted, err := auth.SaltToken(token, remoteID)
switch err {
case nil:
return json.RawMessage(buf.Bytes()), err
}
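+
+ // VocabularyGet is always served locally: the vocabulary is defined
+ // by this cluster's own configuration, so there is no remote backend
+ // to choose from.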
+ func (conn *Conn) VocabularyGet(ctx context.Context) (arvados.Vocabulary, error) {
+ return conn.chooseBackend(conn.cluster.ClusterID).VocabularyGet(ctx)
+ }
+
func (conn *Conn) Login(ctx context.Context, options arvados.LoginOptions) (arvados.LoginResponse, error) {
if id := conn.cluster.Login.LoginCluster; id != "" && id != conn.cluster.ClusterID {
// defer entire login procedure to designated cluster
return conn.chooseBackend(options.UUID).GroupUntrash(ctx, options)
}
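+
+ // Link requests are routed like the other federated resource types:
+ // create is sent to the explicitly requested ClusterID, and the
+ // remaining methods to the cluster that owns the given UUID.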
+ func (conn *Conn) LinkCreate(ctx context.Context, options arvados.CreateOptions) (arvados.Link, error) {
+ return conn.chooseBackend(options.ClusterID).LinkCreate(ctx, options)
+ }
+
+ func (conn *Conn) LinkUpdate(ctx context.Context, options arvados.UpdateOptions) (arvados.Link, error) {
+ return conn.chooseBackend(options.UUID).LinkUpdate(ctx, options)
+ }
+
+ func (conn *Conn) LinkGet(ctx context.Context, options arvados.GetOptions) (arvados.Link, error) {
+ return conn.chooseBackend(options.UUID).LinkGet(ctx, options)
+ }
+
+ func (conn *Conn) LinkList(ctx context.Context, options arvados.ListOptions) (arvados.LinkList, error) {
+ return conn.generated_LinkList(ctx, options)
+ }
+
+ func (conn *Conn) LinkDelete(ctx context.Context, options arvados.DeleteOptions) (arvados.Link, error) {
+ return conn.chooseBackend(options.UUID).LinkDelete(ctx, options)
+ }
+
func (conn *Conn) SpecimenList(ctx context.Context, options arvados.ListOptions) (arvados.SpecimenList, error) {
return conn.generated_SpecimenList(ctx, options)
}