arv:UsePreemptible:
usePreemptible: true
+
+ arv:OutOfMemoryRetry:
+ memoryRetryMultipler: 2
+ memoryErrorRegex: "custom memory error"
{% endcodeblock %}
h2(#RunInSingleContainer). arv:RunInSingleContainer
|_. Field |_. Type |_. Description |
|usePreemptible|boolean|Required, true to opt-in to using preemptible instances, false to opt-out.|
+
+h2(#OutOfMemoryRetry). arv:OutOfMemoryRetry
+
+Request that a workflow step that appears to have failed because it did not request enough RAM be re-submitted with more RAM. Out-of-memory conditions are detected either by the container being unexpectedly killed (exit code 137) or by matching a pattern in the container's output (see @memoryErrorRegex@). On retry, the base RAM request is multiplied by @memoryRetryMultipler@: for example, if the original RAM request was 10 GiB and the multiplier is 1.5, the step is re-submitted with 15 GiB. Containers are only re-submitted once; if the step fails a second time after the RAM increase, the workflow step still fails.
+
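The multiplier arithmetic above can be sketched in a few lines. This is an illustration only, not the runner's implementation; the @retry_ram@ helper and @GiB@ constant are our own names, and the 10 GiB / 1.5 values come from the example in the text.

```python
# Sketch of the retry RAM calculation described above (hypothetical helper).
GiB = 2**30

def retry_ram(base_ram, multiplier):
    """RAM to request on the single retry: the base request times the multiplier."""
    return int(base_ram * multiplier)

# The 10 GiB example with a 1.5 multiplier becomes a 15 GiB retry request.
print(retry_ram(10 * GiB, 1.5) // GiB)
```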
+table(table table-bordered table-condensed).
+|_. Field |_. Type |_. Description |
+|memoryRetryMultipler|float|Required. On retry, the base RAM request is multiplied by this factor to determine the new RAM request.|
+|memoryErrorRegex|string|Optional. A custom regex that, if found in the stdout, stderr, or crunch-run logging of a program, will trigger a retry with greater RAM. If not provided, the default pattern matches "out of memory" (with or without spaces), "memory error" (with or without spaces), "bad_alloc", and "container using over 90% of memory".|
+
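The log-matching behavior described for @memoryErrorRegex@ can be approximated as follows. This is a hedged sketch: the @looks_like_oom@ helper is ours, and the default pattern shown is a reconstruction from the prose above, not the exact expression used by arvados-cwl-runner.

```python
import re

# Approximation of the default pattern described in the table above;
# the exact regex used by arvados-cwl-runner may differ.
DEFAULT_MEMORY_ERROR_RE = re.compile(
    r"(bad_alloc|out ?of ?memory|memory ?error|container using over 90% of memory)",
    re.IGNORECASE)

def looks_like_oom(log_text, custom_regex=None):
    """Return True if the container log text matches the memory-error pattern.

    If custom_regex is given (the memoryErrorRegex field), it replaces the
    default pattern entirely.
    """
    pattern = re.compile(custom_regex) if custom_regex else DEFAULT_MEMORY_ERROR_RE
    return pattern.search(log_text) is not None
```

Note that supplying @memoryErrorRegex@ replaces the default pattern rather than extending it, so a custom value should cover every message you want to trigger a retry.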
h2. arv:dockerCollectionPDH
This is an optional extension field appearing on the standard @DockerRequirement@. It specifies the portable data hash of the Arvados collection containing the Docker image. If present, it takes precedence over @dockerPull@ or @dockerImageId@.
logger.warning("%s API revision is %s, revision %s is required to support setting properties on output collections.",
self.arvrunner.label(self), self.arvrunner.api._rootDesc["revision"], "20220510")
- ramMultiplier = [1]
+ ram_multiplier = [1]
oom_retry_req, _ = self.get_requirement("http://arvados.org/cwl#OutOfMemoryRetry")
if oom_retry_req and oom_retry_req.get('memoryRetryMultipler'):
- ramMultiplier.append(oom_retry_req.get('memoryRetryMultipler'))
+ ram_multiplier.append(oom_retry_req.get('memoryRetryMultipler'))
if runtimeContext.runnerjob.startswith("arvwf:"):
wfuuid = runtimeContext.runnerjob[6:runtimeContext.runnerjob.index("#")]
self.uuid = runtimeContext.submit_request_uuid
- for i in ramMultiplier:
+ for i in ram_multiplier:
runtime_constraints["ram"] = ram * i
if self.uuid:
break
if response["container_uuid"] is None:
- runtime_constraints["ram"] = ram * ramMultiplier[self.attempt_count]
+ runtime_constraints["ram"] = ram * ram_multiplier[self.attempt_count]
container_request["state"] = "Committed"
response = self.arvrunner.api.container_requests().update(
processStatus = "permanentFail"
if processStatus == "permanentFail" and self.attempt_count == 1 and self.out_of_memory_retry(record, container):
- logger.info("%s Container failed with out of memory error, retrying with more RAM.",
+ logger.warning("%s Container failed with out of memory error, retrying with more RAM.",
self.arvrunner.label(self))
self.job_runtime.submit_request_uuid = None
self.uuid = None
return
if rcode == 137:
- logger.warning("%s Container may have been killed for using too much RAM. Try resubmitting with a higher 'ramMin'.",
+ logger.warning("%s Container may have been killed for using too much RAM. Try resubmitting with a higher 'ramMin' or use the arv:OutOfMemoryRetry feature.",
self.arvrunner.label(self))
else:
processStatus = "permanentFail"