See https://docs.nvidia.com/deploy/cuda-compatibility/ for
details.
- cudaComputeCapabilityMin:
- type: string
- doc: Minimum CUDA hardware capability required to run the software, in X.Y format.
- deviceCountMin:
- type: int?
+ cudaComputeCapability:
+ type:
+ - 'string'
+ - 'string[]'
+ doc: |
+ CUDA hardware capability required to run the software, in X.Y
+ format.
+
+ * If this is a single value, it defines only the minimum
+ compute capability. GPUs with higher capability are also
+ accepted.
+
+ * If this is an array value, only GPUs whose compute
+ capability appears explicitly in the array are selected.
+ cudaDeviceCountMin:
+ type: ['null', int, cwl:Expression]
default: 1
- doc: Minimum number of GPU devices to request, default 1.
- deviceCountMax:
- type: int?
- doc: Maximum number of GPU devices to request. If not specified, same as `deviceCountMin`.
+ doc: |
+ Minimum number of GPU devices to request. If not specified,
+ same as `cudaDeviceCountMax`. If neither is specified, the
+ default is 1.
+ cudaDeviceCountMax:
+ type: ['null', int, cwl:Expression]
+ doc: |
+ Maximum number of GPU devices to request. If not specified,
+ same as `cudaDeviceCountMin`.
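The renamed capability field and the new device-count pair can be sketched in a tool document. This is a hedged example: the `CUDARequirement` record name and the `cwltool:` namespace prefix are assumed from the surrounding schema, not shown in this diff.

```yaml
# Illustrative tool fragment (record name and namespace assumed).
$namespaces:
  cwltool: "http://commonwl.org/cwltool#"
requirements:
  cwltool:CUDARequirement:
    # Single value: a minimum; GPUs with higher capability also accepted.
    cudaComputeCapability: "3.0"
    # The array form would instead restrict to exact matches, e.g.:
    # cudaComputeCapability: ["6.0", "7.5"]
    cudaDeviceCountMin: 1
    cudaDeviceCountMax: 4
```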
+
+- name: UsePreemptible
+ type: record
+ extends: cwl:ProcessRequirement
+ inVocab: false
+ doc: |
+ Specify whether a workflow step should opt in or out of using preemptible (spot) instances.
+ fields:
+ class:
+ type: string
+ doc: "Always 'arv:UsePreemptible'"
+ jsonldPredicate:
+ _id: "@type"
+ _type: "@vocab"
+ usePreemptible: boolean
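A minimal usage sketch, assuming the conventional `arv:` namespace mapping for these Arvados extensions:

```yaml
# Illustrative workflow-step hint (namespace mapping assumed).
$namespaces:
  arv: "http://arvados.org/cwl#"
hints:
  arv:UsePreemptible:
    usePreemptible: true   # opt this step in to preemptible (spot) instances
```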
+
+- name: OutputCollectionProperties
+ type: record
+ extends: cwl:ProcessRequirement
+ inVocab: false
+ doc: |
+ Specify metadata properties that will be set on the output
+ collection associated with this workflow or step.
+ fields:
+ class:
+ type: string
+ doc: "Always 'arv:OutputCollectionProperties'"
+ jsonldPredicate:
+ _id: "@type"
+ _type: "@vocab"
+ outputProperties:
+ type: PropertyDef[]
+ jsonldPredicate:
+ mapSubject: propertyName
+ mapPredicate: propertyValue
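Because `outputProperties` declares `mapSubject`/`mapPredicate`, it can be written as a plain map of property names to values. A hedged sketch; the property names and values here are hypothetical:

```yaml
# Illustrative hint; property names/values are hypothetical.
hints:
  arv:OutputCollectionProperties:
    outputProperties:
      analysis_stage: "qc"
      reviewed: false
```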
+
+- name: KeepCacheType
+ type: enum
+ symbols:
+ - ram_cache
+ - disk_cache
+ doc:
+ - |
+ ram_cache: Keep blocks will be cached in RAM only.
+ - |
+ disk_cache: Keep blocks will be cached to disk and
+ memory-mapped. The disk cache leverages the kernel's virtual
+ memory system so "hot" data will generally still be kept in
+ RAM.
+
+- name: KeepCacheTypeRequirement
+ type: record
+ extends: cwl:ProcessRequirement
+ inVocab: false
+ doc: |
+ Choose the Keep cache strategy.
+ fields:
+ - name: class
+ type: string
+ doc: "'arv:KeepCacheTypeRequirement'"
+ jsonldPredicate:
+ _id: "@type"
+ _type: "@vocab"
+ - name: keepCacheType
+ type: KeepCacheType?
+ doc: |
+ Whether Keep blocks loaded by arv-mount should be kept in RAM
+ only or written to disk and memory-mapped. The disk cache
+ leverages the kernel's virtual memory system so "hot" data will
+ generally still be kept in RAM.
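A minimal usage sketch of the requirement above (the `arv:` prefix assumes the standard Arvados namespace mapping):

```yaml
# Illustrative hint: use the disk-backed, memory-mapped cache.
hints:
  arv:KeepCacheTypeRequirement:
    keepCacheType: disk_cache   # or ram_cache for RAM-only caching
```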
+
+- name: OutOfMemoryRetry
+ type: record
+ extends: cwl:ProcessRequirement
+ inVocab: false
+ doc: |
+ Detect when a failed tool run may have run out of memory, and
+ re-submit the container with more RAM.
+ fields:
+ - name: class
+ type: string
+ doc: "'arv:OutOfMemoryRetry'"
+ jsonldPredicate:
+ _id: "@type"
+ _type: "@vocab"
+ - name: memoryErrorRegex
+ type: string?
+ doc: |
+ A regular expression that will be used on the text of stdout
+ and stderr produced by the tool to determine if a failed job
+ should be retried with more RAM. By default, searches for the
+ substrings 'bad_alloc' and 'OutOfMemory'.
+ - name: memoryRetryMultiplier
+ type: float
+ doc: |
+ If the container failed on its first run, re-submit the
+ container with the RAM request multiplied by this factor.
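A hedged usage sketch; the regex and multiplier values are only illustrative, not defaults:

```yaml
# Illustrative hint; values are examples.
hints:
  arv:OutOfMemoryRetry:
    memoryErrorRegex: "(bad_alloc|OutOfMemory)"
    memoryRetryMultiplier: 2.0   # retry with double the original RAM request
```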