navsection: installguide
title: Install the cloud dispatcher
Copyright (C) The Arvados Authors. All rights reserved.
SPDX-License-Identifier: CC-BY-SA-3.0
{% include 'notebox_begin_warning' %}
@arvados-dispatch-cloud@ is only relevant for cloud installations. Skip this section if you are installing an on-premises cluster that will spool jobs to Slurm or LSF.
{% include 'notebox_end' %}
# "Introduction":#introduction
# "Create compute node VM image":#create-image
# "Update config.yml":#update-config
# "Install arvados-dispatch-cloud":#install-packages
# "Start the service":#start-service
# "Restart the API server and controller":#restart-api
# "Confirm working installation":#confirm-working
h2(#introduction). Introduction

The cloud dispatch service is for running containers on cloud VMs. It works with Microsoft Azure and Amazon EC2; future versions will also support Google Compute Engine.

The cloud dispatch service can run on any node that can connect to the Arvados API service, the cloud provider's API, and the SSH service on cloud VMs. It is not resource-intensive, so you can run it on the API server node.

More detail about the internal operation of the dispatcher can be found in the "architecture section":{{site.baseurl}}/architecture/dispatchcloud.html.
h2(#update-config). Update config.yml

h3. Configure CloudVMs

Add or update the following portions of your cluster configuration file, @config.yml@. Refer to "config.defaults.yml":{{site.baseurl}}/admin/config.html for information about additional configuration options. The @DispatchPrivateKey@ should be the *private* key generated in "Create an SSH keypair":install-compute-node.html#sshkeypair .
"http://localhost:9006": {}

# BootProbeCommand is a shell command that succeeds when an instance is ready for service
BootProbeCommand: "sudo systemctl status docker"

<b># --- driver-specific configuration goes here --- see Amazon and Azure examples below ---</b>

-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAqXoCzcOBkFQ7w4dvXf9B++1ctgZRqEbgRYL3SstuMV4oawks
ttUuxJycDdsPmeYcHsKo8vsEZpN6iYsX6ZZzhkO5nEayUTU8sBjmg1ZCTo4QqKXr
FJ+amZ7oYMDof6QEdwl6KNDfIddL+NfBCLQTVInOAaNss7GRrxLTuTV7HcRaIUUI
jYg0Ibg8ZZTzQxCvFXXnjseTgmOcTv7CuuGdt91OVdoq8czG/w8TwOhymEb7mQlt
lXuucwQvYgfoUgcnTgpJr7j+hafp75g2wlPozp8gJ6WQ2yBWcfqL2aw7m7Ll88Nd
oFyAjVoexx0RBcH6BveTfQtJKbktP1qBO4mXo2dP0cacuZEtlAqW9Eb06Pvaw/D9
foktmqOY8MyctzFgXBpGTxPliGjqo8OkrOyQP2g+FL7v+Km31Xs61P8=
-----END RSA PRIVATE KEY-----
ProviderType: x1.medium
ProviderType: x1.large
IncludedScratch: 128GB
h3(#GPUsupport). NVIDIA GPU support

To specify instance types with NVIDIA GPUs, "the compute image must be built with CUDA support":install-compute-node.html#nvidia , and you must include an additional @CUDA@ section:
<pre><code> InstanceTypes:
ProviderType: g4dn.xlarge
IncludedScratch: 125GB
HardwareCapability: "7.5"
The @DriverVersion@ is the version of the CUDA toolkit installed in your compute image (in X.Y format; do not include the patch level). The @HardwareCapability@ is the "CUDA compute capability of the GPUs available for this instance type":https://developer.nvidia.com/cuda-gpus. The @DeviceCount@ is the number of GPU devices available for this instance type.
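For reference, a complete instance type entry with a @CUDA@ section might look like the following sketch. The @g4dn@ entry name, the @Price@, and the version numbers are illustrative placeholders, not defaults; substitute the values that match your cloud account and compute image.

<notextile><pre><code>    InstanceTypes:
      g4dn:
        ProviderType: g4dn.xlarge
        VCPUs: 4
        RAM: 16GiB
        IncludedScratch: 125GB
        Price: 0.53
        CUDA:
          DriverVersion: "11.4"
          HardwareCapability: "7.5"
          DeviceCount: 1
</code></pre></notextile>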
h3(#aws-ebs-autoscaler). EBS Autoscale configuration

See "Autoscaling compute node scratch space":install-compute-node.html#aws-ebs-autoscaler for details about compute image configuration.

The @Containers.InstanceTypes@ list should be modified so that all @AddedScratch@ lines are removed and the @IncludedScratch@ value is set to 5 TB. This way, the scratch space requirements will be met by all of the defined instance types. For example:
<notextile><pre><code> InstanceTypes:
ProviderType: c5.large
ProviderType: m5.large
</code></pre></notextile>
You will also need to create an IAM role in AWS with these permissions:

<notextile><pre><code>{
"ec2:DescribeVolumeStatus",
"ec2:DescribeVolumes",
"ec2:ModifyInstanceAttribute",
"ec2:DescribeVolumeAttribute",
</code></pre></notextile>
Then set @Containers.CloudVMs.DriverParameters.IAMInstanceProfile@ to the name of the IAM role. This will make @arvados-dispatch-cloud@ pass an IAM instance profile to the compute nodes when they start up, giving them sufficient permissions to attach and grow EBS volumes.
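For example, if the IAM role is named @ebs-autoscale-role@ (a placeholder name; use the name you gave the role above), the relevant fragment of @config.yml@ would be:

<notextile><pre><code>    Containers:
      CloudVMs:
        DriverParameters:
          IAMInstanceProfile: ebs-autoscale-role
</code></pre></notextile>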
h3. AWS credentials for local Keepstore on the compute node

When @Containers.LocalKeepBlobBuffersPerVCPU@ is non-zero, the compute node will spin up a local Keepstore service for direct storage access. If Keep is backed by S3, the compute node will need to be able to access the S3 bucket.

If the AWS credentials for S3 access are configured in @config.yml@ (i.e. @Volumes.DriverParameters.AccessKeyID@ and @Volumes.DriverParameters.SecretAccessKey@), these credentials will be made available to the local Keepstore on the compute node to access S3 directly, and no further configuration is necessary.

If @config.yml@ does not have @Volumes.DriverParameters.AccessKeyID@ and @Volumes.DriverParameters.SecretAccessKey@ defined, Keepstore uses instance metadata to retrieve IAM role credentials. The @CloudVMs.DriverParameters.IAMInstanceProfile@ parameter must be configured with the name of a profile whose IAM role has permission to access the S3 bucket(s). With this setup, @arvados-dispatch-cloud@ will attach the IAM role to the compute node as it is created. The instance profile name is "often identical to the name of the IAM role":https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#ec2-instance-profile.

*If you are also using the EBS Autoscale feature, the role in @IAMInstanceProfile@ must have both EC2 and S3 permissions.*
h3. Minimal configuration example for Amazon EC2

The <span class="userinput">ImageID</span> value is the compute node image that was built in "the previous section":install-compute-node.html#aws.

<pre><code> Containers:
ImageID: <span class="userinput">ami-01234567890abcdef</span>

# If you are not using an IAM role for authentication, specify access
# credentials here. Otherwise, omit or set AccessKeyID and
# SecretAccessKey to an empty value.
AccessKeyID: XXXXXXXXXXXXXXXXXXXX
SecretAccessKey: YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY

SubnetID: subnet-0123abcd
AdminUsername: arvados
h3(#IAM). Example IAM policy for cloud dispatcher

Example policy for the IAM role used by the cloud dispatcher:

"Id": "arvados-dispatch-cloud policy",
"ec2:TerminateInstances",
"ec2:ModifyInstanceAttribute",
"ec2:CreateSecurityGroup",
"ec2:DeleteSecurityGroup",
h3. Minimal configuration example for Azure

The <span class="userinput">ImageID</span> value is the compute node image that was built in "the previous section":install-compute-node.html#azure.

<pre><code> Containers:
ImageID: <span class="userinput">"zzzzz-compute-v1597349873"</span>

# (azure) managed disks: set MaxConcurrentInstanceCreateOps to 20 to avoid timeouts, cf
# https://docs.microsoft.com/en-us/azure/virtual-machines/linux/capture-image
MaxConcurrentInstanceCreateOps: 20

SubscriptionID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
ClientID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
ClientSecret: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
TenantID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

# Data center where VMs will be allocated
# The resource group where the VM and virtual NIC will be
NetworkResourceGroup: yyyyy # only if different from ResourceGroup
Subnet: xxxxx-subnet-private

# The resource group where the disk image is stored, only needs to
# be specified if it is different from ResourceGroup
ImageResourceGroup: aaaaa
Azure recommends using managed images. If you plan to start more than 20 VMs simultaneously, however, Azure recommends using a shared image gallery instead, to avoid slowdowns and timeouts while the VMs are created.

Using an image from a shared image gallery:
<pre><code> Containers:
ImageID: <span class="userinput">"shared_image_gallery_image_definition_name"</span>

SubscriptionID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
ClientID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
ClientSecret: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
TenantID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

# Data center where VMs will be allocated
# The resource group where the VM and virtual NIC will be
NetworkResourceGroup: yyyyy # only if different from ResourceGroup
Subnet: xxxxx-subnet-private

# The resource group where the disk image is stored, only needs to
# be specified if it is different from ResourceGroup
ImageResourceGroup: aaaaa

# (azure) shared image gallery: the name of the gallery
SharedImageGalleryName: "shared_image_gallery_1"
# (azure) shared image gallery: the version of the image definition
SharedImageGalleryImageVersion: "0.0.1"
Using unmanaged disks (deprecated):

The <span class="userinput">ImageID</span> value is the compute node image that was built in "the previous section":install-compute-node.html#azure.
<pre><code> Containers:
ImageID: <span class="userinput">"https://zzzzzzzz.blob.core.windows.net/system/Microsoft.Compute/Images/images/zzzzz-compute-osDisk.55555555-5555-5555-5555-555555555555.vhd"</span>

SubscriptionID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
ClientID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
ClientSecret: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
TenantID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

# Data center where VMs will be allocated
# The resource group where the VM and virtual NIC will be
NetworkResourceGroup: yyyyy # only if different from ResourceGroup
Subnet: xxxxx-subnet-private

# Where to store the VM VHD blobs
StorageAccount: example
Get the @SubscriptionID@ and @TenantID@:

"cloudName": "AzureCloud",
"id": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXX",
"name": "Your Subscription",
"tenantId": "YYYYYYYY-YYYY-YYYY-YYYYYYYY",
"name": "you@example.com",
You will need to create a "service principal" to use as a delegated authority for API access.

<notextile><pre><code>$ az ad app create --display-name "Arvados Dispatch Cloud (<span class="userinput">ClusterID</span>)" --homepage "https://arvados.org" --identifier-uris "https://<span class="userinput">ClusterID.example.com</span>" --end-date 2299-12-31 --password <span class="userinput">Your_Password</span>
$ az ad sp create "<span class="userinput">appId</span>"
(appId is part of the response of the previous command)
$ az role assignment create --assignee "<span class="userinput">objectId</span>" --role Owner --scope /subscriptions/{subscriptionId}/
(objectId is part of the response of the previous command)
</code></pre></notextile>

Now update your @config.yml@ file:

@ClientID@ is the @appId@ value.

@ClientSecret@ is what was provided as <span class="userinput">Your_Password</span>.
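Putting these together, the relevant @DriverParameters@ entries in @config.yml@ would look like this sketch, where both values are placeholders taken from the commands above:

<notextile><pre><code>    Containers:
      CloudVMs:
        DriverParameters:
          ClientID: <span class="userinput">appId</span>
          ClientSecret: <span class="userinput">Your_Password</span>
</code></pre></notextile>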
h3. Test your configuration

Run the @cloudtest@ tool to verify that your configuration works. This creates a new cloud VM, confirms that it boots correctly and accepts your configured SSH private key, and then shuts it down.

<pre><code>~$ <span class="userinput">arvados-server cloudtest && echo "OK!"</span>

Refer to the "cloudtest tool documentation":../../admin/cloudtest.html for more information.
{% assign arvados_component = 'arvados-dispatch-cloud' %}

{% include 'install_packages' %}

{% include 'start_service' %}

{% include 'restart_api' %}
h2(#confirm-working). Confirm working installation

On the dispatch node, start monitoring the arvados-dispatch-cloud logs:

<pre><code># <span class="userinput">journalctl -o cat -fu arvados-dispatch-cloud.service</span>

In another terminal window, use the diagnostics tool to run a simple container.

<pre><code># <span class="userinput">arvados-client sudo diagnostics</span>
INFO 5: running health check (same as `arvados-server check`)
INFO 10: getting discovery document from https://zzzzz.arvadosapi.com/discovery/v1/apis/arvados/v1/rest
INFO 160: running a container
INFO ... container request submitted, waiting up to 10m for container to run

After performing a number of other quick tests, this will submit a new container request and wait for it to finish.
You can also use the "arvados-dispatch-cloud API":{{site.baseurl}}/api/dispatch.html to get a list of queued and running jobs and cloud instances. Use your @ManagementToken@ to test the dispatcher's endpoint. For example, when one container is running:

<pre><code>~$ <span class="userinput">curl -sH "Authorization: Bearer $token" http://localhost:9006/arvados/v1/dispatch/containers</span>
"uuid": "zzzzz-dz642-hdp2vpu9nq14tx0",
"scheduling_parameters": {
"preemptible": false,
"runtime_status": null,
"Name": "Standard_D2s_v3",
"ProviderType": "Standard_D2s_v3",
"Scratch": 16000000000,
"IncludedScratch": 16000000000,

A similar request can be made to the @http://localhost:9006/arvados/v1/dispatch/instances@ endpoint.
After the container finishes, you can get the container record by UUID *from a shell server* to see its results:

<pre><code>shell:~$ <span class="userinput">arv get <b>zzzzz-dz642-hdp2vpu9nq14tx0</b></span>
"log":"a01df2f7e5bc1c2ad59c60a837e90dc6+166",
"output":"d41d8cd98f00b204e9800998ecf8427e+0",

You can use standard Keep tools to view the container's output and logs from their corresponding fields. For example, to see the logs from the collection referenced in the @log@ field:

<pre><code>~$ <span class="userinput">arv keep ls <b>a01df2f7e5bc1c2ad59c60a837e90dc6+166</b></span>
~$ <span class="userinput">arv-get <b>a01df2f7e5bc1c2ad59c60a837e90dc6+166</b>/stdout.txt</span>
2016-08-05T13:53:06.201011Z Hello, Crunch!

If the container does not dispatch successfully, refer to the @arvados-dispatch-cloud@ logs for information about why it failed.