From 92fba2405fc9cd6bbb6b1c4f5a7774f15b242696 Mon Sep 17 00:00:00 2001
From: Peter Amstutz
Date: Wed, 25 Jul 2018 15:09:38 -0400
Subject: [PATCH] 13570: Doc tweaks

* Capitalize "Keep"
* Clarify usage of cgroups
* Clarify "Available compute node types"

Arvados-DCO-1.1-Signed-off-by: Peter Amstutz
---
 doc/api/execution.html.textile.liquid      | 12 ++++++------
 doc/user/cwl/cwl-style.html.textile.liquid |  2 +-
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/doc/api/execution.html.textile.liquid b/doc/api/execution.html.textile.liquid
index f7772cb2f7..cada9ab1b8 100644
--- a/doc/api/execution.html.textile.liquid
+++ b/doc/api/execution.html.textile.liquid
@@ -24,17 +24,17 @@ h2. Container API
 
 h2(#RAM). Understanding RAM requests for containers
 
-The @runtime_constraints@ section of a container specifies working RAM (@ram@) and keep cache (@keep_cache_ram@). If not specified, containers get a default keep cache (@container_default_keep_cache_ram@, default 256 MiB). The total RAM requested for a container is the sum of working RAM, keep cache, and an additional RAM reservation configured by the admin (@ReserveExtraRAM@ in the dispatcher configuration, default zero).
+The @runtime_constraints@ section of a container specifies working RAM (@ram@) and Keep cache (@keep_cache_ram@). If not specified, containers get a default Keep cache (@container_default_keep_cache_ram@, default 256 MiB). The total RAM requested for a container is the sum of working RAM, Keep cache, and an additional RAM reservation configured by the admin (@ReserveExtraRAM@ in the dispatcher configuration, default zero).
 
-The total RAM request is used to schedule containers onto compute nodes. On HPC systems, multiple containers may run on a multi-core node. RAM allocation limits may be enforced using kernel controls such as cgroups.
+The total RAM request is used to schedule containers onto compute nodes. RAM allocation limits are enforced using kernel controls such as cgroups. A container which requests 1 GiB RAM will only be permitted to allocate up to 1 GiB of RAM, even if scheduled on a 4 GiB node. On HPC systems, a multi-core node may run multiple containers at a time.
 
 When running on the cloud, the memory request (along with CPU and disk) is used to select (and possibly boot) an instance type with adequate resources to run the container. Instance type RAM is derated 5% from the published specification to accomodate virtual machine, kernel and system services overhead.
 
 h3. Calculate minimum instance type RAM for a container
 
- (RAM request + keep cache + ReserveExtraRAM) * (100/95)
+ (RAM request + Keep cache + ReserveExtraRAM) * (100/95)
 
-For example, for a 3 GiB request, default keep cache, and no extra RAM reserved:
+For example, for a 3 GiB request, default Keep cache, and no extra RAM reserved:
 
 (3072 + 256) * 1.0526 = 3494 MiB
 
@@ -42,9 +42,9 @@ To run this container, the instance type must have a published RAM size of at le
 
 h3. Calculate the maximum requestable RAM for an instance type
 
- (Instance type RAM * (95/100)) - keep cache - ReserveExtraRAM
+ (Instance type RAM * (95/100)) - Keep cache - ReserveExtraRAM
 
-For example, for a 3.75 GiB node, default keep cache, and no extra RAM reserved:
+For example, for a 3.75 GiB node, default Keep cache, and no extra RAM reserved:
 
 (3840 * 0.95) - 256 = 3392 MiB
 
diff --git a/doc/user/cwl/cwl-style.html.textile.liquid b/doc/user/cwl/cwl-style.html.textile.liquid
index 5d84f1d512..6b5147ddcb 100644
--- a/doc/user/cwl/cwl-style.html.textile.liquid
+++ b/doc/user/cwl/cwl-style.html.textile.liquid
@@ -113,7 +113,7 @@ steps:
 tmpdirMin: 90000
 
-* Available compute nodes vary over time and across different cloud providers, so try to limit the RAM requirement to what the program actually needs. However, if you need to target a specific compute node type, see this discussion on "calculating RAM request and choosing instance type for containers.":{{site.baseurl}}/api/execution.html#RAM
+* Available compute node types vary over time and across different cloud providers, so try to limit the RAM requirement to what the program actually needs. However, if you need to target a specific compute node type, see this discussion on "calculating RAM request and choosing instance type for containers.":{{site.baseurl}}/api/execution.html#RAM
 
 * Instead of scattering separate steps, prefer to scatter over a subworkflow.
-- 
2.30.2
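
The two sizing formulas touched by this patch are easy to script. Below is a minimal Python sketch, not part of Arvados: the function and constant names are illustrative, and it simply assumes the 5% instance RAM derating, the 256 MiB default Keep cache, and the zero-default ReserveExtraRAM described in the patched doc.

# Illustrative helpers (hypothetical names, not an Arvados API) for the two
# formulas in doc/api/execution.html.textile.liquid.

DEFAULT_KEEP_CACHE_MIB = 256   # container_default_keep_cache_ram default per the doc
DERATE = 95 / 100              # instance RAM derated 5% for VM/kernel/system overhead


def min_instance_ram_mib(ram_request_mib, keep_cache_mib=DEFAULT_KEEP_CACHE_MIB,
                         reserve_extra_ram_mib=0):
    # (RAM request + Keep cache + ReserveExtraRAM) * (100/95)
    return (ram_request_mib + keep_cache_mib + reserve_extra_ram_mib) / DERATE


def max_requestable_ram_mib(instance_ram_mib, keep_cache_mib=DEFAULT_KEEP_CACHE_MIB,
                            reserve_extra_ram_mib=0):
    # (Instance type RAM * (95/100)) - Keep cache - ReserveExtraRAM
    return instance_ram_mib * DERATE - keep_cache_mib - reserve_extra_ram_mib


if __name__ == "__main__":
    # 3 GiB request, default Keep cache, no extra reservation.
    # Evaluating (3072 + 256) * 100/95 exactly gives ~3503 MiB; the doc's
    # worked example lists 3494 MiB due to a coarser rounding of the factor.
    print(round(min_instance_ram_mib(3072)))
    # 3.75 GiB (3840 MiB) node, default Keep cache: 3392 MiB, matching the doc.
    print(round(max_requestable_ram_mib(3840)))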