---
layout: default
navsection: installguide
title: Install the cloud dispatcher
...
{% comment %}
Copyright (C) The Arvados Authors. All rights reserved.

SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}

{% include 'notebox_begin_warning' %}
@arvados-dispatch-cloud@ is only relevant for cloud installations. Skip this section if you are installing an on-premises cluster that will spool jobs to Slurm or LSF.
{% include 'notebox_end' %}

# "Introduction":#introduction
# "Create compute node VM image":install-compute-node.html
# "Update config.yml":#update-config
# "Install arvados-dispatch-cloud":#install-packages
# "Start the service":#start-service
# "Restart the API server and controller":#restart-api
# "Confirm working installation":#confirm-working

h2(#introduction). Introduction

The cloud dispatch service is for running containers on cloud VMs. It works with Microsoft Azure and Amazon EC2; future versions will also support Google Compute Engine.

The cloud dispatch service can run on any node that can connect to the Arvados API service, the cloud provider's API, and the SSH service on cloud VMs. It is not resource-intensive, so you can run it on the API server node.

More detail about the internal operation of the dispatcher can be found in the "architecture section":{{site.baseurl}}/architecture/dispatchcloud.html.

h2(#update-config). Update config.yml

h3. Configure CloudVMs

Add or update the following portions of your cluster configuration file, @config.yml@. Refer to "config.defaults.yml":{{site.baseurl}}/admin/config.html for information about additional configuration options. The @DispatchPrivateKey@ should be the *private* key generated in "Create an SSH keypair":install-compute-node.html#sshkeypair .

<notextile>
<pre><code>Services:
  DispatchCloud:
    InternalURLs:
      "http://localhost:9006": {}
Containers:
  CloudVMs:
    # BootProbeCommand is a shell command that succeeds when an instance is ready for service
    BootProbeCommand: "sudo systemctl status docker"

    <b># --- driver-specific configuration goes here --- see Amazon and Azure examples below ---</b>

  DispatchPrivateKey: |
    -----BEGIN RSA PRIVATE KEY-----
    MIIEpQIBAAKCAQEAqXoCzcOBkFQ7w4dvXf9B++1ctgZRqEbgRYL3SstuMV4oawks
    ttUuxJycDdsPmeYcHsKo8vsEZpN6iYsX6ZZzhkO5nEayUTU8sBjmg1ZCTo4QqKXr
    FJ+amZ7oYMDof6QEdwl6KNDfIddL+NfBCLQTVInOAaNss7GRrxLTuTV7HcRaIUUI
    jYg0Ibg8ZZTzQxCvFXXnjseTgmOcTv7CuuGdt91OVdoq8czG/w8TwOhymEb7mQlt
    lXuucwQvYgfoUgcnTgpJr7j+hafp75g2wlPozp8gJ6WQ2yBWcfqL2aw7m7Ll88Nd
[...]
    oFyAjVoexx0RBcH6BveTfQtJKbktP1qBO4mXo2dP0cacuZEtlAqW9Eb06Pvaw/D9
    foktmqOY8MyctzFgXBpGTxPliGjqo8OkrOyQP2g+FL7v+Km31Xs61P8=
    -----END RSA PRIVATE KEY-----
InstanceTypes:
  x1md:
    ProviderType: x1.medium
    VCPUs: 8
    RAM: 64GiB
    IncludedScratch: 64GB
    Price: 0.62
  x1lg:
    ProviderType: x1.large
    VCPUs: 16
    RAM: 128GiB
    IncludedScratch: 128GB
    Price: 1.23
</code></pre>
</notextile>

h3(#GPUsupport). NVIDIA GPU support

To specify instance types with NVIDIA GPUs, "the compute image must be built with CUDA support":install-compute-node.html#nvidia , and you must include an additional @CUDA@ section:

<notextile>
<pre><code>InstanceTypes:
  g4dn:
    ProviderType: g4dn.xlarge
    VCPUs: 4
    RAM: 16GiB
    IncludedScratch: 125GB
    Price: 0.56
    CUDA:
      DriverVersion: "11.4"
      HardwareCapability: "7.5"
      DeviceCount: 1
</code></pre>
</notextile>

The @DriverVersion@ is the version of the CUDA toolkit installed in your compute image (in X.Y format; do not include the patchlevel). The @HardwareCapability@ is the "CUDA compute capability of the GPUs available for this instance type":https://developer.nvidia.com/cuda-gpus. The @DeviceCount@ is the number of GPU devices available for this instance type.
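
If you are unsure which values to use, you can query them on a VM booted from your compute image. This is a minimal sketch, assuming the NVIDIA driver and CUDA toolkit are installed as described in the compute image instructions, and that your @nvidia-smi@ is recent enough to support the @compute_cap@ query field:

<notextile>
<pre><code># CUDA toolkit version, for DriverVersion (keep only the X.Y part)
$ nvcc --version | grep release

# Compute capability of the installed GPUs, for HardwareCapability
$ nvidia-smi --query-gpu=compute_cap --format=csv,noheader

# Number of GPU devices, for DeviceCount
$ nvidia-smi --list-gpus | wc -l
</code></pre>
</notextile>
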
h3(#aws-ebs-autoscaler). EBS Autoscale configuration

See "Autoscaling compute node scratch space":install-compute-node.html#aws-ebs-autoscaler for details about compute image configuration.

The @InstanceTypes@ list should be modified so that all @AddedScratch@ lines are removed, and the @IncludedScratch@ value should be set to 5 TB. This way, the scratch space requirements will be met by all of the defined instance types. For example:

<notextile><pre><code>InstanceTypes:
  c5large:
    ProviderType: c5.large
    VCPUs: 2
    RAM: 4GiB
    IncludedScratch: 5TB
    Price: 0.085
  m5large:
    ProviderType: m5.large
    VCPUs: 2
    RAM: 8GiB
    IncludedScratch: 5TB
    Price: 0.096
...
</code></pre></notextile>

You will also need to create an IAM role in AWS with these permissions:

<notextile><pre><code>{
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:AttachVolume",
                "ec2:DescribeVolumeStatus",
                "ec2:DescribeVolumes",
                "ec2:DescribeTags",
                "ec2:ModifyInstanceAttribute",
                "ec2:DescribeVolumeAttribute",
                "ec2:CreateVolume",
                "ec2:DeleteVolume",
                "ec2:CreateTags"
            ],
            "Resource": "*"
        }
    ]
}
</code></pre></notextile>

Then set @Containers.CloudVMs.DriverParameters.IAMInstanceProfile@ to the name of the IAM role. This will make @arvados-dispatch-cloud@ pass an IAM instance profile to the compute nodes when they start up, giving them sufficient permissions to attach and grow EBS volumes.

h3. AWS credentials for local Keepstore on compute nodes

When @Containers.LocalKeepBlobBuffersPerVCPU@ is non-zero, the compute node will spin up a local Keepstore service for direct storage access. If Keep is backed by S3, the compute node will need to be able to access the S3 bucket.

If the AWS credentials for S3 access are configured in @config.yml@ (i.e. @Volumes.DriverParameters.AccessKeyID@ and @Volumes.DriverParameters.SecretAccessKey@), these credentials will be made available to the local Keepstore on the compute node to access S3 directly, and no further configuration is necessary.

Alternatively, if an IAM role is configured in @config.yml@ (i.e. @Volumes.DriverParameters.IAMRole@), the name of an instance profile that corresponds to this role ("often identical to the name of the IAM role":https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#ec2-instance-profile) must be configured in the @CloudVMs.DriverParameters.IAMInstanceProfile@ parameter.

*If you are also using the EBS Autoscale feature, the role in @IAMInstanceProfile@ must have both EC2 and S3 permissions.*

Finally, if @config.yml@ does not have @Volumes.DriverParameters.AccessKeyID@, @Volumes.DriverParameters.SecretAccessKey@, or @Volumes.DriverParameters.IAMRole@ defined, Keepstore uses the IAM role attached to the node, whatever it may be called. The @CloudVMs.DriverParameters.IAMInstanceProfile@ parameter must then still be configured with the name of a profile whose IAM role has permission to access the S3 bucket(s). That way, @arvados-dispatch-cloud@ can attach the IAM role to the compute node as it is created.
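
Before relying on this at scale, it can be worth confirming that the instance profile's role actually grants S3 access. The following is a sketch, assuming the AWS CLI is installed on a test instance launched with the @IAMInstanceProfile@ in question, and using a hypothetical bucket name in place of your Keep bucket:

<notextile>
<pre><code># Confirm which IAM role the instance credentials resolve to
$ aws sts get-caller-identity

# Confirm the role can list the Keep bucket (replace with your bucket name)
$ aws s3 ls s3://zzzzz-keep-volume/ | head
</code></pre>
</notextile>
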
h3. Minimal configuration example for Amazon EC2

The <span class="userinput">ImageID</span> value is the compute node image that was built in "the previous section":install-compute-node.html#aws.

<notextile>
<pre><code>Containers:
  CloudVMs:
    ImageID: <span class="userinput">ami-01234567890abcdef</span>
    Driver: ec2
    DriverParameters:
      # If you are not using an IAM role for authentication, specify access
      # credentials here. Otherwise, omit or set AccessKeyID and
      # SecretAccessKey to an empty value.
      AccessKeyID: XXXXXXXXXXXXXXXXXXXX
      SecretAccessKey: YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY

      SecurityGroupIDs:
        - sg-0123abcd
      SubnetID: subnet-0123abcd
      Region: us-east-1
      EBSVolumeType: gp2
      AdminUsername: arvados
</code></pre>
</notextile>

h3(#IAM). Example IAM policy for cloud dispatcher

Example policy for the IAM role used by the cloud dispatcher:

<notextile>
<pre>
{
    "Id": "arvados-dispatch-cloud policy",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateTags",
                "ec2:Describe*",
                "ec2:CreateImage",
                "ec2:CreateKeyPair",
                "ec2:ImportKeyPair",
                "ec2:DeleteKeyPair",
                "ec2:RunInstances",
                "ec2:StopInstances",
                "ec2:TerminateInstances",
                "ec2:ModifyInstanceAttribute",
                "ec2:CreateSecurityGroup",
                "ec2:DeleteSecurityGroup",
                "iam:PassRole"
            ],
            "Resource": "*"
        }
    ]
}
</pre>
</notextile>

h3. Minimal configuration example for Azure

Using managed disks:

The <span class="userinput">ImageID</span> value is the compute node image that was built in "the previous section":install-compute-node.html#azure.

<notextile>
<pre><code>Containers:
  CloudVMs:
    ImageID: <span class="userinput">"zzzzz-compute-v1597349873"</span>
    Driver: azure
    # (azure) managed disks: set MaxConcurrentInstanceCreateOps to 20 to avoid timeouts, cf
    # https://docs.microsoft.com/en-us/azure/virtual-machines/linux/capture-image
    MaxConcurrentInstanceCreateOps: 20
    DriverParameters:
      # Credentials.
      SubscriptionID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
      ClientID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
      ClientSecret: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      TenantID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

      # Data center where VMs will be allocated
      Location: centralus

      # The resource group where the VM and virtual NIC will be created.
      ResourceGroup: zzzzz
      NetworkResourceGroup: yyyyy   # only if different from ResourceGroup
      Network: xxxxx
      Subnet: xxxxx-subnet-private

      # The resource group where the disk image is stored; only needs to
      # be specified if it is different from ResourceGroup
      ImageResourceGroup: aaaaa
</code></pre>
</notextile>

Azure recommends using managed images. If you plan to start more than 20 VMs simultaneously, however, Azure recommends using a shared image gallery instead, to avoid slowdowns and timeouts during the creation of the VMs.

Using an image from a shared image gallery:

<notextile>
<pre><code>Containers:
  CloudVMs:
    ImageID: <span class="userinput">"shared_image_gallery_image_definition_name"</span>
    Driver: azure
    DriverParameters:
      # Credentials.
      SubscriptionID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
      ClientID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
      ClientSecret: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      TenantID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

      # Data center where VMs will be allocated
      Location: centralus

      # The resource group where the VM and virtual NIC will be created.
      ResourceGroup: zzzzz
      NetworkResourceGroup: yyyyy   # only if different from ResourceGroup
      Network: xxxxx
      Subnet: xxxxx-subnet-private

      # The resource group where the disk image is stored; only needs to
      # be specified if it is different from ResourceGroup
      ImageResourceGroup: aaaaa

      # (azure) shared image gallery: the name of the gallery
      SharedImageGalleryName: "shared_image_gallery_1"
      # (azure) shared image gallery: the version of the image definition
      SharedImageGalleryImageVersion: "0.0.1"
</code></pre>
</notextile>
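
If you are unsure of the image definition name or version, the Azure CLI can list the contents of the gallery. A sketch, reusing the hypothetical resource group and gallery names from the example above:

<notextile>
<pre><code># List image definitions in the gallery; the definition name is used as ImageID
$ az sig image-definition list --resource-group aaaaa --gallery-name shared_image_gallery_1

# List versions of a definition; one of these is used as SharedImageGalleryImageVersion
$ az sig image-version list --resource-group aaaaa --gallery-name shared_image_gallery_1 --gallery-image-definition shared_image_gallery_image_definition_name
</code></pre>
</notextile>
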
Using unmanaged disks (deprecated):

The <span class="userinput">ImageID</span> value is the compute node image that was built in "the previous section":install-compute-node.html#azure.

<notextile>
<pre><code>Containers:
  CloudVMs:
    ImageID: <span class="userinput">"https://zzzzzzzz.blob.core.windows.net/system/Microsoft.Compute/Images/images/zzzzz-compute-osDisk.55555555-5555-5555-5555-555555555555.vhd"</span>
    Driver: azure
    DriverParameters:
      # Credentials.
      SubscriptionID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
      ClientID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
      ClientSecret: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      TenantID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

      # Data center where VMs will be allocated
      Location: centralus

      # The resource group where the VM and virtual NIC will be created.
      ResourceGroup: zzzzz
      NetworkResourceGroup: yyyyy   # only if different from ResourceGroup
      Network: xxxxx
      Subnet: xxxxx-subnet-private

      # Where to store the VM VHD blobs
      StorageAccount: example
      BlobContainer: vhds
</code></pre>
</notextile>

Get the @SubscriptionID@ and @TenantID@:

<pre>
$ az account list
[
  {
    "cloudName": "AzureCloud",
    "id": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXX",
    "isDefault": true,
    "name": "Your Subscription",
    "state": "Enabled",
    "tenantId": "YYYYYYYY-YYYY-YYYY-YYYYYYYY",
    "user": {
      "name": "you@example.com",
      "type": "user"
    }
  }
]
</pre>

You will need to create a "service principal" to use as a delegated authority for API access.

<notextile><pre><code>$ az ad app create --display-name "Arvados Dispatch Cloud (<span class="userinput">ClusterID</span>)" --homepage "https://arvados.org" --identifier-uris "https://<span class="userinput">ClusterID.example.com</span>" --end-date 2299-12-31 --password <span class="userinput">Your_Password</span>
$ az ad sp create "<span class="userinput">appId</span>"
(appId is part of the response of the previous command)
$ az role assignment create --assignee "<span class="userinput">objectId</span>" --role Owner --scope /subscriptions/{subscriptionId}/
(objectId is part of the response of the previous command)
</code></pre></notextile>

Now update your @config.yml@ file:

@ClientID@ is the 'appId' value.

@ClientSecret@ is what was provided as <span class="userinput">Your_Password</span>.

h3. Test your configuration

Run the @cloudtest@ tool to verify that your configuration works. This creates a new cloud VM, confirms that it boots correctly and accepts your configured SSH private key, and then shuts it down.

<notextile>
<pre><code>~$ <span class="userinput">arvados-server cloudtest && echo "OK!"</span>
</code></pre>
</notextile>

Refer to the "cloudtest tool documentation":../../admin/cloudtest.html for more information.

{% assign arvados_component = 'arvados-dispatch-cloud' %}

{% include 'install_packages' %}

{% include 'start_service' %}

{% include 'restart_api' %}

h2(#confirm-working). Confirm working installation

On the dispatch node, start monitoring the arvados-dispatch-cloud logs:

<notextile>
<pre><code># <span class="userinput">journalctl -o cat -fu arvados-dispatch-cloud.service</span>
</code></pre>
</notextile>

In another terminal window, use the diagnostics tool to run a simple container.

<notextile>
<pre><code># <span class="userinput">arvados-client sudo diagnostics</span>
INFO       5: running health check (same as `arvados-server check`)
INFO      10: getting discovery document from https://zzzzz.arvadosapi.com/discovery/v1/apis/arvados/v1/rest
...
INFO     160: running a container
INFO      ... container request submitted, waiting up to 10m for container to run
</code></pre>
</notextile>

After performing a number of other quick tests, this will submit a new container request and wait for it to finish.

While the diagnostics tool is waiting, the @arvados-dispatch-cloud@ logs will show details about creating a cloud instance, waiting for it to be ready, and scheduling the new container on it.
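
On a busy cluster the dispatcher logs can be verbose. To follow only the entries for the test container, you can filter the journal output by the container UUID reported by the diagnostics tool (a hypothetical UUID is shown here):

<notextile>
<pre><code># <span class="userinput">journalctl -o cat -fu arvados-dispatch-cloud.service | grep zzzzz-dz642-hdp2vpu9nq14tx0</span>
</code></pre>
</notextile>
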
You can also use the "arvados-dispatch-cloud API":{{site.baseurl}}/api/dispatch.html to get a list of queued and running jobs and cloud instances. Use your @ManagementToken@ to test the dispatcher's endpoint. For example, when one container is running:

<notextile>
<pre><code>~$ <span class="userinput">curl -sH "Authorization: Bearer $token" http://localhost:9006/arvados/v1/dispatch/containers</span>
{
  "items": [
    {
      "container": {
        "uuid": "zzzzz-dz642-hdp2vpu9nq14tx0",
        ...
        "state": "Running",
        "scheduling_parameters": {
          "partitions": null,
          "preemptible": false,
          "max_run_time": 0
        },
        "exit_code": 0,
        "runtime_status": null,
        "started_at": null,
        "finished_at": null
      },
      "instance_type": {
        "Name": "Standard_D2s_v3",
        "ProviderType": "Standard_D2s_v3",
        "VCPUs": 2,
        "RAM": 8589934592,
        "Scratch": 16000000000,
        "IncludedScratch": 16000000000,
        "AddedScratch": 0,
        "Price": 0.11,
        "Preemptible": false
      }
    }
  ]
}
</code></pre>
</notextile>

A similar request can be made to the @http://localhost:9006/arvados/v1/dispatch/instances@ endpoint.

After the container finishes, you can get the container record by UUID *from a shell server* to see its results:

<notextile>
<pre><code>shell:~$ <span class="userinput">arv get <b>zzzzz-dz642-hdp2vpu9nq14tx0</b></span>
{
 ...
 "exit_code":0,
 "log":"a01df2f7e5bc1c2ad59c60a837e90dc6+166",
 "output":"d41d8cd98f00b204e9800998ecf8427e+0",
 "state":"Complete",
 ...
}
</code></pre>
</notextile>

You can use standard Keep tools to view the container's output and logs from their corresponding fields. For example, to see the logs from the collection referenced in the @log@ field:

<notextile>
<pre><code>~$ <span class="userinput">arv keep ls <b>a01df2f7e5bc1c2ad59c60a837e90dc6+166</b></span>
./crunch-run.txt
./stderr.txt
./stdout.txt
~$ <span class="userinput">arv-get <b>a01df2f7e5bc1c2ad59c60a837e90dc6+166</b>/stdout.txt</span>
2016-08-05T13:53:06.201011Z Hello, Crunch!
</code></pre>
</notextile>

If the container does not dispatch successfully, refer to the @arvados-dispatch-cloud@ logs for information about why it failed.
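
If a container stays queued, comparing the @containers@ and @instances@ endpoints described above can help narrow down whether the problem is container scheduling or instance creation. For example, to check the dispatcher's view of its cloud VMs, using the same management token as above:

<notextile>
<pre><code>~$ <span class="userinput">curl -sH "Authorization: Bearer $token" http://localhost:9006/arvados/v1/dispatch/instances</span>
</code></pre>
</notextile>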