navsection: installguide
title: Install the cloud dispatcher

Copyright (C) The Arvados Authors. All rights reserved.

SPDX-License-Identifier: CC-BY-SA-3.0
{% include 'notebox_begin_warning' %}
@arvados-dispatch-cloud@ is only relevant for cloud installations. Skip this section if you are installing an on-premises cluster that will spool jobs to Slurm or LSF.
{% include 'notebox_end' %}
# "Introduction":#introduction
# "Create compute node VM image":#create-image
# "Update config.yml":#update-config
# "Install arvados-dispatch-cloud":#install-packages
# "Start the service":#start-service
# "Restart the API server and controller":#restart-api
# "Confirm working installation":#confirm-working
h2(#introduction). Introduction

The cloud dispatch service is for running containers on cloud VMs. It works with Microsoft Azure and Amazon EC2; future versions will also support Google Compute Engine.

The cloud dispatch service can run on any node that can connect to the Arvados API service, the cloud provider's API, and the SSH service on cloud VMs. It is not resource-intensive, so you can run it on the API server node.

More detail about the internal operation of the dispatcher can be found in the "architecture section":{{site.baseurl}}/architecture/dispatchcloud.html.
h2(#update-config). Update config.yml

h3. Configure CloudVMs

Add or update the following portions of your cluster configuration file, @config.yml@. Refer to "config.defaults.yml":{{site.baseurl}}/admin/config.html for information about additional configuration options. The @DispatchPrivateKey@ should be the *private* key generated in "Create an SSH keypair":install-compute-node.html#sshkeypair .
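For orientation, the settings shown in the excerpt below sit at the following paths in @config.yml@. This is a structural sketch only, not a complete configuration; the cluster ID and the elided key material are placeholders:

<notextile><pre><code>Clusters:
  <span class="userinput">ClusterID</span>:
    Services:
      DispatchCloud:
        InternalURLs:
          "http://localhost:9006": {}
    Containers:
      CloudVMs:
        BootProbeCommand: "sudo systemctl status docker"
      DispatchPrivateKey: |
        -----BEGIN RSA PRIVATE KEY-----
        ...
        -----END RSA PRIVATE KEY-----
</code></pre></notextile>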
"http://localhost:9006": {}
# BootProbeCommand is a shell command that succeeds when an instance is ready for service
BootProbeCommand: "sudo systemctl status docker"
<b># --- driver-specific configuration goes here --- see Amazon and Azure examples below ---</b>
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAqXoCzcOBkFQ7w4dvXf9B++1ctgZRqEbgRYL3SstuMV4oawks
ttUuxJycDdsPmeYcHsKo8vsEZpN6iYsX6ZZzhkO5nEayUTU8sBjmg1ZCTo4QqKXr
FJ+amZ7oYMDof6QEdwl6KNDfIddL+NfBCLQTVInOAaNss7GRrxLTuTV7HcRaIUUI
jYg0Ibg8ZZTzQxCvFXXnjseTgmOcTv7CuuGdt91OVdoq8czG/w8TwOhymEb7mQlt
lXuucwQvYgfoUgcnTgpJr7j+hafp75g2wlPozp8gJ6WQ2yBWcfqL2aw7m7Ll88Nd
oFyAjVoexx0RBcH6BveTfQtJKbktP1qBO4mXo2dP0cacuZEtlAqW9Eb06Pvaw/D9
foktmqOY8MyctzFgXBpGTxPliGjqo8OkrOyQP2g+FL7v+Km31Xs61P8=
-----END RSA PRIVATE KEY-----
ProviderType: x1.medium
ProviderType: x1.large
IncludedScratch: 128GB
h3(#GPUsupport). NVIDIA GPU support

To specify instance types with NVIDIA GPUs, the compute image must be built with CUDA support (this means setting @arvados_compute_nvidia: true@ in @host_config.yml@ when "building the compute image":install-compute-node.html). You must include an additional @GPU@ section for each instance type that includes GPUs:

<pre><code> InstanceTypes:
ProviderType: g4dn.xlarge
IncludedScratch: 125GB
</code></pre>

The @DriverVersion@ is the version of the CUDA toolkit installed in your compute image (in "X.Y" format; do not include the patch level).

The @HardwareTarget@ is the "CUDA compute capability of the GPUs available for this instance type":https://developer.nvidia.com/cuda-gpus in "X.Y" format.

The @DeviceCount@ is the number of GPU devices available for this instance type.

@VRAM@ is the amount of VRAM available per GPU device.
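Putting those fields together, a complete @GPU@ stanza might look like the following sketch. The instance name and @Price@ are illustrative assumptions, and the @DriverVersion@ should match your own compute image; @HardwareTarget@ 7.5 and 16GiB of VRAM correspond to the NVIDIA T4 in @g4dn.xlarge@ instances:

<notextile><pre><code>    InstanceTypes:
      g4dnxlarge:
        ProviderType: g4dn.xlarge
        VCPUs: 4
        RAM: 16GiB
        IncludedScratch: 125GB
        Price: 0.56
        GPU:
          Stack: "cuda"
          DriverVersion: "12.0"
          HardwareTarget: "7.5"
          DeviceCount: 1
          VRAM: 16GiB
</code></pre></notextile>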
h3(#ROCmGPUsupport). AMD GPU support

To specify instance types with AMD GPUs, the compute image must be built with ROCm support. (The Arvados compute image Ansible playbook does not currently install ROCm automatically, but it can be added manually after the image is built.) You must include an additional @GPU@ section for each instance type that includes GPUs:

<pre><code> InstanceTypes:
ProviderType: g4ad.xlarge
IncludedScratch: 125GB
HardwareTarget: "gfx1100"
</code></pre>

@DriverVersion@ is the version of the ROCm toolkit installed in your compute image (in "X.Y" format; do not include the patch level).

@HardwareTarget@ (e.g. @gfx1100@) corresponds to the GPU architecture of the device. Use @rocminfo@ to determine your hardware target. See also the "GPU architecture specifications":https://rocm.docs.amd.com/en/latest/reference/gpu-arch-specs.html (use the "LLVM target name" column).

@DeviceCount@ is the number of GPU devices available for this instance type.

@VRAM@ is the amount of VRAM available per GPU device.
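As a sketch, the @GPU@ stanza for an AMD instance type differs from the NVIDIA case mainly in the @Stack@ and @HardwareTarget@ values; the driver version and VRAM below are illustrative assumptions:

<notextile><pre><code>        GPU:
          Stack: "rocm"
          DriverVersion: "6.0"
          HardwareTarget: "gfx1100"
          DeviceCount: 1
          VRAM: 16GiB
</code></pre></notextile>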
h3(#aws-ebs-autoscaler). EBS Autoscale configuration

See "Autoscaling compute node scratch space":install-compute-node.html#aws-ebs-autoscaler for details about compute image configuration.

The @Containers.InstanceTypes@ list should be modified so that all @AddedScratch@ lines are removed, and the @IncludedScratch@ value should be set to 5 TB. This way, the scratch space requirements will be met by all the defined instance types. For example:

<notextile><pre><code> InstanceTypes:
ProviderType: c5.large
ProviderType: m5.large
</code></pre></notextile>
You will also need to create an IAM role in AWS with these permissions:

<notextile><pre><code>{
"ec2:DescribeVolumeStatus",
"ec2:DescribeVolumes",
"ec2:ModifyInstanceAttribute",
"ec2:DescribeVolumeAttribute",
</code></pre></notextile>

Then set @Containers.CloudVMs.DriverParameters.IAMInstanceProfile@ to the name of the IAM role. This will make @arvados-dispatch-cloud@ pass an IAM instance profile to the compute nodes when they start up, giving them sufficient permissions to attach and grow EBS volumes.
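In @config.yml@ this is a single setting; the role name below is a hypothetical example:

<notextile><pre><code>    Containers:
      CloudVMs:
        DriverParameters:
          IAMInstanceProfile: <span class="userinput">ebs-autoscale-instance-role</span>
</code></pre></notextile>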
h3. AWS Credentials for Local Keepstore on Compute node

When @Containers.LocalKeepBlobBuffersPerVCPU@ is non-zero, the compute node will spin up a local Keepstore service for direct storage access. If Keep is backed by S3, the compute node will need to be able to access the S3 bucket.

If the AWS credentials for S3 access are configured in @config.yml@ (i.e. @Volumes.DriverParameters.AccessKeyID@ and @Volumes.DriverParameters.SecretAccessKey@), these credentials will be made available to the local Keepstore on the compute node so it can access S3 directly, and no further configuration is necessary.

If @config.yml@ does not have @Volumes.DriverParameters.AccessKeyID@ and @Volumes.DriverParameters.SecretAccessKey@ defined, Keepstore uses instance metadata to retrieve IAM role credentials. The @CloudVMs.DriverParameters.IAMInstanceProfile@ parameter must be configured with the name of a profile whose IAM role has permission to access the S3 bucket(s). With this setup, @arvados-dispatch-cloud@ will attach the IAM role to the compute node as it is created. The instance profile name is "often identical to the name of the IAM role":https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#ec2-instance-profile.

*If you are also using the EBS Autoscale feature, the role in @IAMInstanceProfile@ must have both EC2 and S3 permissions.*
h3. Minimal configuration example for Amazon EC2

The <span class="userinput">ImageID</span> value is the compute node image that was built in "the previous section":install-compute-node.html#aws.

<pre><code> Containers:
ImageID: <span class="userinput">ami-01234567890abcdef</span>
# If you are not using an IAM role for authentication, specify access
# credentials here. Otherwise, omit or set AccessKeyID and
# SecretAccessKey to an empty value.
AccessKeyID: XXXXXXXXXXXXXXXXXXXX
SecretAccessKey: YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
SubnetID: subnet-0123abcd
AdminUsername: arvados
</code></pre>
h3(#IAM). Example IAM policy for cloud dispatcher

Example policy for the IAM role used by the cloud dispatcher:

"Id": "arvados-dispatch-cloud policy",
"ec2:TerminateInstances",
"ec2:ModifyInstanceAttribute",
"ec2:CreateSecurityGroup",
"ec2:DeleteSecurityGroup",
h3. Minimal configuration example for Azure

The <span class="userinput">ImageID</span> value is the compute node image that was built in "the previous section":install-compute-node.html#azure.

<pre><code> Containers:
ImageID: <span class="userinput">"zzzzz-compute-v1597349873"</span>
# (azure) managed disks: set MaxConcurrentInstanceCreateOps to 20 to avoid timeouts, cf
# https://docs.microsoft.com/en-us/azure/virtual-machines/linux/capture-image
MaxConcurrentInstanceCreateOps: 20
SubscriptionID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
ClientID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
ClientSecret: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
TenantID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
# Data center where VMs will be allocated
# The resource group where the VM and virtual NIC will be
NetworkResourceGroup: yyyyy # only if different from ResourceGroup
Subnet: xxxxx-subnet-private
# The resource group where the disk image is stored, only needs to
# be specified if it is different from ResourceGroup
ImageResourceGroup: aaaaa
</code></pre>
Azure recommends using managed images. If you plan to start more than 20 VMs simultaneously, however, Azure recommends using a shared image gallery instead, to avoid slowdowns and timeouts during VM creation.
Using an image from a shared image gallery:

<pre><code> Containers:
ImageID: <span class="userinput">"shared_image_gallery_image_definition_name"</span>
SubscriptionID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
ClientID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
ClientSecret: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
TenantID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
# Data center where VMs will be allocated
# The resource group where the VM and virtual NIC will be
NetworkResourceGroup: yyyyy # only if different from ResourceGroup
Subnet: xxxxx-subnet-private
# The resource group where the disk image is stored, only needs to
# be specified if it is different from ResourceGroup
ImageResourceGroup: aaaaa
# (azure) shared image gallery: the name of the gallery
SharedImageGalleryName: "shared_image_gallery_1"
# (azure) shared image gallery: the version of the image definition
SharedImageGalleryImageVersion: "0.0.1"
</code></pre>
Using unmanaged disks (deprecated):

The <span class="userinput">ImageID</span> value is the compute node image that was built in "the previous section":install-compute-node.html#azure.

<pre><code> Containers:
ImageID: <span class="userinput">"https://zzzzzzzz.blob.core.windows.net/system/Microsoft.Compute/Images/images/zzzzz-compute-osDisk.55555555-5555-5555-5555-555555555555.vhd"</span>
SubscriptionID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
ClientID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
ClientSecret: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
TenantID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
# Data center where VMs will be allocated
# The resource group where the VM and virtual NIC will be
NetworkResourceGroup: yyyyy # only if different from ResourceGroup
Subnet: xxxxx-subnet-private
# Where to store the VM VHD blobs
StorageAccount: example
</code></pre>
Get the @SubscriptionID@ and @TenantID@:

"cloudName": "AzureCloud",
"id": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
"name": "Your Subscription",
"tenantId": "YYYYYYYY-YYYY-YYYY-YYYY-YYYYYYYYYYYY",
"name": "you@example.com",
You will need to create a "service principal" to use as a delegated authority for API access.

<notextile><pre><code>$ az ad app create --display-name "Arvados Dispatch Cloud (<span class="userinput">ClusterID</span>)" --homepage "https://arvados.org" --identifier-uris "https://<span class="userinput">ClusterID.example.com</span>" --end-date 2299-12-31 --password <span class="userinput">Your_Password</span>
$ az ad sp create "<span class="userinput">appId</span>"
(appId is part of the response of the previous command)
$ az role assignment create --assignee "<span class="userinput">objectId</span>" --role Owner --scope /subscriptions/{subscriptionId}/
(objectId is part of the response of the previous command)
</code></pre></notextile>
Now update your @config.yml@ file:

@ClientID@ is the 'appId' value.

@ClientSecret@ is what was provided as <span class="userinput">Your_Password</span>.
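Putting it together, the service principal credentials land in the Azure driver configuration; all values below are placeholders:

<notextile><pre><code>    Containers:
      CloudVMs:
        DriverParameters:
          SubscriptionID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
          TenantID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
          ClientID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX  # the 'appId' value
          ClientSecret: YYYYYYYYYYYYYYYYYYYYYYYY           # the password you supplied
</code></pre></notextile>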
h3. Test your configuration

Run the @cloudtest@ tool to verify that your configuration works. This creates a new cloud VM, confirms that it boots correctly and accepts your configured SSH private key, and shuts it down.

<pre><code>~$ <span class="userinput">arvados-server cloudtest && echo "OK!"</span>
</code></pre>

Refer to the "cloudtest tool documentation":../../admin/cloudtest.html for more information.
{% assign arvados_component = 'arvados-dispatch-cloud' %}

{% include 'install_packages' %}

{% include 'start_service' %}

{% include 'restart_api' %}
h2(#confirm-working). Confirm working installation

On the dispatch node, start monitoring the arvados-dispatch-cloud logs:

<pre><code># <span class="userinput">journalctl -o cat -fu arvados-dispatch-cloud.service</span>
</code></pre>

In another terminal window, use the diagnostics tool to run a simple container.

<pre><code># <span class="userinput">arvados-client sudo diagnostics</span>
INFO 5: running health check (same as `arvados-server check`)
INFO 10: getting discovery document from https://zzzzz.arvadosapi.com/discovery/v1/apis/arvados/v1/rest
INFO 160: running a container
INFO ... container request submitted, waiting up to 10m for container to run
</code></pre>

After performing a number of other quick tests, this will submit a new container request and wait for it to finish.

While the diagnostics tool is waiting, the @arvados-dispatch-cloud@ logs will show details about creating a cloud instance, waiting for it to be ready, and scheduling the new container on it.

You can also use the "arvados-dispatch-cloud API":{{site.baseurl}}/api/dispatch.html to get a list of queued and running jobs and cloud instances. Use your @ManagementToken@ to test the dispatcher's endpoint. For example, when one container is running:
<pre><code>~$ <span class="userinput">curl -sH "Authorization: Bearer $token" http://localhost:9006/arvados/v1/dispatch/containers</span>
"uuid": "zzzzz-dz642-hdp2vpu9nq14tx0",
"scheduling_parameters": {
"preemptible": false,
"runtime_status": null,
"Name": "Standard_D2s_v3",
"ProviderType": "Standard_D2s_v3",
"Scratch": 16000000000,
"IncludedScratch": 16000000000,
</code></pre>

A similar request can be made to the @http://localhost:9006/arvados/v1/dispatch/instances@ endpoint.
After the container finishes, you can get the container record by UUID *from a shell server* to see its results:

<pre><code>shell:~$ <span class="userinput">arv get <b>zzzzz-dz642-hdp2vpu9nq14tx0</b></span>
"log":"a01df2f7e5bc1c2ad59c60a837e90dc6+166",
"output":"d41d8cd98f00b204e9800998ecf8427e+0",
</code></pre>

You can use standard Keep tools to view the container's output and logs from their corresponding fields. For example, to see the logs from the collection referenced in the @log@ field:

<pre><code>~$ <span class="userinput">arv keep ls <b>a01df2f7e5bc1c2ad59c60a837e90dc6+166</b></span>
~$ <span class="userinput">arv-get <b>a01df2f7e5bc1c2ad59c60a837e90dc6+166</b>/stdout.txt</span>
2016-08-05T13:53:06.201011Z Hello, Crunch!
</code></pre>

If the container does not dispatch successfully, refer to the @arvados-dispatch-cloud@ logs for information about why it failed.