similar to filters but let you identify exactly which files or objects to transfer instead of data that matches a filter pattern.

Creating your manifest

A manifest is a comma-separated values (CSV)-formatted file that lists the files or objects in your source location that you want DataSync to transfer. If your source is an S3 bucket, you can also include which version of an object to transfer.

Topics
• Guidelines
• Example manifests

Guidelines

Use these guidelines to help you create a manifest that works with DataSync.

Do
• Specify the full path of each file or object that you want to transfer. You can't specify only a directory or folder with the intention of transferring all of its contents. For these situations, consider using an include filter instead of a manifest.
• Make sure that each file or object path is relative to the mount path, folder, directory, or prefix that you specified when configuring your DataSync source location. For example, let's say you configure an S3 location with a prefix named photos. That prefix includes an object my-picture.png that you want to transfer. In the manifest, you then only need to specify the object (my-picture.png) instead of the prefix and object (photos/my-picture.png).
• To specify Amazon S3 object version IDs, separate the object's path and version ID by using a comma.

Choosing what data to transfer 204 AWS DataSync User Guide

  The following example shows a manifest entry with two fields. The first field includes an object named picture1.png. The second field is separated by a comma and includes a version ID of 111111:

  picture1.png,111111

• Use quotes in the following situations:
  • When a path contains special characters (commas, quotes, and line endings): "filename,with,commas.txt"
  • When a path spans multiple lines:

    "this is a
    filename.txt"

  • When a path includes quotes: "filename""with""quotes.txt" (this represents a path named filename"with"quotes.txt)

  These quote rules also apply to version ID fields.
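These quoting rules follow standard CSV escaping, so a CSV library can produce valid entries for you. As an illustrative sketch (the file names and manifest name here are hypothetical), Python's csv module applies the same quoting and quote-doubling rules automatically:

```python
import csv

# Hypothetical manifest entries: (path, optional S3 object version ID).
entries = [
    ("photos/picture1.png", "111111"),
    ("filename,with,commas.txt", None),
    ('filename"with"quotes.txt', None),
]

with open("my-manifest.csv", "w", newline="") as f:
    # The default dialect quotes fields that contain commas, quotes, or
    # line endings, and escapes an embedded quote by doubling it.
    writer = csv.writer(f)
    for path, version_id in entries:
        writer.writerow([path, version_id] if version_id else [path])
```

The resulting file contains photos/picture1.png,111111, "filename,with,commas.txt", and "filename""with""quotes.txt", matching the rules above.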
In general, if a manifest field has a quote, you must escape it with another quote.
• Separate each file or object entry with a new line. You can separate lines by using Linux (line feed or carriage return) or Windows (carriage return followed by a line feed) style line breaks.
• Save your manifest (for example, my-manifest.csv or my-manifest.txt).
• Upload the manifest to an S3 bucket that DataSync can access. This bucket doesn't have to be in the same AWS Region or account where you're using DataSync.

Don't
• Specify only a directory or folder with the intention of transferring all of its contents. A manifest can only include full paths to the files or objects that you want to transfer. If you configure your source location to use a specific mount path, folder, directory, or prefix, you don't have to include that in your manifest.
• Specify a file or object path that exceeds 4,096 characters.
• Specify a file path, object path, or Amazon S3 object version ID that exceeds 1,024 bytes.
• Specify duplicate file or object paths.
• Include an object version ID if your source location isn't an S3 bucket.
• Include more than two fields in a manifest entry. An entry can include only a file or object path and (if applicable) an Amazon S3 object version ID.
• Include characters that don't conform to UTF-8 encoding.
• Include unintentional spaces in your entry fields outside of quotes.

Example manifests

Use these examples to help you create a manifest that works with DataSync.

Manifest with full file or object paths
The following example shows a manifest with full file or object paths to transfer.

photos/picture1.png
photos/picture2.png
photos/picture3.png

Manifest with only object keys
The following example shows a manifest with objects to transfer from an Amazon S3 source location. Since the location is configured with the prefix photos, only the object keys are specified.
picture1.png
picture2.png
picture3.png

Manifest with object paths and version IDs
The first two entries in the following manifest example include specific Amazon S3 object versions to transfer.

photos/picture1.png,111111
photos/picture2.png,121212
photos/picture3.png

Manifest with UTF-8 characters
The following example shows a manifest with files that include UTF-8 characters.

documents/résumé1.pdf
documents/résumé2.pdf
documents/résumé3.pdf

Providing DataSync access to your manifest

You need an AWS Identity and Access Management (IAM) role that gives DataSync access to your manifest in its S3 bucket. This role must include the following permissions:
• s3:GetObject
• s3:GetObjectVersion

You can generate this role automatically in the DataSync console or create the role yourself.

Note
If your manifest is in a different AWS account, you must create this role manually.

Creating the IAM role automatically
When creating or starting a transfer task in the console, DataSync can create an IAM
role for you with the s3:GetObject and s3:GetObjectVersion permissions that you need to access your manifest.

Required permissions to automatically create the role
To automatically create the role, make sure that the role that you're using to access the DataSync console has the following permissions:
• iam:CreateRole
• iam:CreatePolicy
• iam:AttachRolePolicy

Creating the IAM role (same account)
You can manually create the IAM role that DataSync needs to access your manifest. The following instructions assume that you're in the same AWS account where you use DataSync and your manifest's S3 bucket is located.

1. Open the IAM console at https://console.aws.amazon.com/iam/.
2. In the left navigation pane, under Access management, choose Roles, and then choose Create role.
3. On the Select trusted entity page, for Trusted entity type, choose AWS service.
4. For Use case, choose DataSync in the dropdown list and select DataSync. Choose Next.
5. On the Add permissions page, choose Next. Give your role a name and choose Create role.
6. On the Roles page, search for the role that you just created and choose its name.
7. On the role's details page, choose the Permissions tab. Choose Add permissions, then Create inline policy.
8.
Choose the JSON tab and paste the following sample policy into the policy editor:

{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DataSyncAccessManifest",
        "Effect": "Allow",
        "Action": [
            "s3:GetObject",
            "s3:GetObjectVersion"
        ],
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/my-manifest.csv"
    }]
}

9. In the sample policy that you just pasted, replace the following values with your own:
   a. Replace amzn-s3-demo-bucket with the name of the S3 bucket that's hosting your manifest.
   b. Replace my-manifest.csv with the file name of your manifest.
10. Choose Next. Give your policy a name and choose Create policy.
11. (Recommended) To prevent the cross-service confused deputy problem, do the following:
   a. On the role's details page, choose the Trust relationships tab. Choose Edit trust policy.
   b. Update the trust policy by using the following example, which includes the aws:SourceArn and aws:SourceAccount global condition context keys:

{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "Service": "datasync.amazonaws.com"
        },
        "Action": "sts:AssumeRole",
        "Condition": {
            "StringEquals": {
                "aws:SourceAccount": "account-id"
            },
            "StringLike": {
                "aws:SourceArn": "arn:aws:datasync:region:account-id:*"
            }
        }
    }]
}

      • Replace each instance of account-id with the AWS account ID where you're using DataSync.
      • Replace region with the AWS Region where you're using DataSync.
   c. Choose Update policy.

You've created an IAM role that allows DataSync to access your manifest. Specify this role when creating or starting your task.

Creating the IAM role (different account)
If your manifest is in an S3 bucket that belongs to a different AWS account, you must manually create the IAM role that DataSync uses to access the manifest. Then, in the AWS account where your manifest is located, you need to include the role in the S3 bucket policy.

Creating the role
1.
Open the IAM console at https://console.aws.amazon.com/iam/.
2. In the left navigation pane, under Access management, choose Roles, and then choose Create role.
3. On the Select trusted entity page, for Trusted entity type, choose AWS service.
4. For Use case, choose DataSync in the dropdown list and select DataSync. Choose Next.
5. On the Add permissions page, choose Next. Give your role a name and choose Create role.
6. On the Roles page, search for the role that you just created and choose its name.
7. On the role's details page, choose the Permissions tab. Choose Add permissions, then Create inline policy.
8. Choose the JSON tab and paste the following sample policy into the policy editor:

{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DataSyncAccessManifest",
        "Effect": "Allow",
        "Action": [
            "s3:GetObject",
            "s3:GetObjectVersion"
        ],
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/my-manifest.csv"
    }]
}

9. In the sample policy that you just pasted, replace the following values with your own:
   a. Replace amzn-s3-demo-bucket with the name of the S3 bucket that's hosting your manifest.
   b. Replace my-manifest.csv with the file name of your manifest.
10. Choose Next. Give your policy a name and choose Create policy.
11. (Recommended) To prevent the cross-service
confused deputy problem, do the following:
   a. On the role's details page, choose the Trust relationships tab. Choose Edit trust policy.
   b. Update the trust policy by using the following example, which includes the aws:SourceArn and aws:SourceAccount global condition context keys:

{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "Service": "datasync.amazonaws.com"
        },
        "Action": "sts:AssumeRole",
        "Condition": {
            "StringEquals": {
                "aws:SourceAccount": "account-id"
            },
            "StringLike": {
                "aws:SourceArn": "arn:aws:datasync:region:account-id:*"
            }
        }
    }]
}

      • Replace each instance of account-id with the AWS account ID where you're using DataSync.
      • Replace region with the AWS Region where you're using DataSync.
   c. Choose Update policy.

You created the IAM role that you can include in your S3 bucket policy.

Updating your S3 bucket policy with the role
Once you've created the IAM role, you must add it to the S3 bucket policy in the other AWS account where your manifest is located.

1. In the AWS Management Console, switch over to the account with your manifest's S3 bucket.
2. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
3. On the bucket's detail page, choose the Permissions tab.
4. Under Bucket policy, choose Edit and do the following to modify your S3 bucket policy:
   a. Update what's in the editor to include the following policy statements:

{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "DataSyncAccessManifestBucket",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::account-id:role/datasync-role"
            },
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
        }
    ]
}

   b. Replace account-id with the AWS account ID for the account that you're using DataSync with.
   c. Replace datasync-role with the IAM role that you just created that allows DataSync to access your manifest.
   d. Replace amzn-s3-demo-bucket with the name of the S3 bucket that's hosting your manifest in the other AWS account.
5. Choose Save changes.

You've created an IAM role that allows DataSync to access your manifest in the other account. Specify this role when creating or starting your task.

Specifying your manifest when creating a task

You can specify the manifest that you want DataSync to use when creating a task.

Using the DataSync console
1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/.
2. In the left navigation pane, choose Tasks, and then choose Create task.
3. Configure your task's source and destination locations. For more information, see Where can I transfer my data with AWS DataSync?
4. For Contents to scan, choose Specific files, objects, and folders, then select Using a manifest.
5. For S3 URI, choose your manifest that's hosted on an S3 bucket. Alternatively, you can enter the URI (for example, s3://bucket/prefix/my-manifest.csv).
6. For Object version, choose the version of the manifest that you want DataSync to use. By default, DataSync uses the latest version of the object.
7.
For Manifest access role, do one of the following:
   • Choose Autogenerate for DataSync to automatically create an IAM role with the permissions required to access your manifest in its S3 bucket.
   • Choose an existing IAM role that can access your manifest. For more information, see Providing DataSync access to your manifest.
8. Configure any other task settings you need, then choose Next.
9. Choose Create task.

Using the AWS CLI
1. Copy the following create-task command:

aws datasync create-task \
    --source-location-arn arn:aws:datasync:us-east-1:123456789012:location/loc-12345678abcdefgh \
    --destination-location-arn arn:aws:datasync:us-east-1:123456789012:location/loc-abcdefgh12345678 \
    --manifest-config '{
        "Source": {
            "S3": {
                "ManifestObjectPath": "s3-object-key-of-manifest",
                "BucketAccessRoleArn": "bucket-iam-role",
                "S3BucketArn": "amzn-s3-demo-bucket-arn",
                "ManifestObjectVersionId": "manifest-version-to-use"
            }
        }
    }'

2. For the --source-location-arn parameter, specify the Amazon Resource Name (ARN) of the location that you're transferring data from.
3. For the --destination-location-arn parameter, specify the ARN of the location that you're transferring data to.
4. For the --manifest-config parameter, do the following:
   • ManifestObjectPath – Specify the S3 object key of your manifest.
   • BucketAccessRoleArn – Specify the IAM role that allows DataSync to access your manifest in its S3 bucket. For more information, see Providing DataSync access to your manifest.
   • S3BucketArn – Specify the ARN of the S3 bucket that's hosting your manifest.
   • ManifestObjectVersionId – Specify the version of the manifest that you want DataSync to use. By default,
DataSync uses the latest version of the object.
5. Run the create-task command to create your task.

When you're ready, you can start your transfer task.

Specifying your manifest when starting a task

You can specify the manifest that you want DataSync to use when executing a task.

Using the DataSync console
1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/.
2. In the left navigation pane, choose Tasks, and then choose the task that you want to start.
3. In the task overview page, choose Start, and then choose Start with overriding options.
4. For Contents to scan, choose Specific files, objects, and folders, then select Using a manifest.
5. For S3 URI, choose your manifest that's hosted on an S3 bucket. Alternatively, you can enter the URI (for example, s3://bucket/prefix/my-manifest.csv).
6. For Object version, choose the version of the manifest that you want DataSync to use. By default, DataSync uses the latest version of the object.
7. For Manifest access role, do one of the following:
   • Choose Autogenerate for DataSync to automatically create an IAM role to access your manifest in its S3 bucket.
   • Choose an existing IAM role that can access your manifest. For more information, see Providing DataSync access to your manifest.
8.
Choose Start to begin your transfer.

Using the AWS CLI
1. Copy the following start-task-execution command:

aws datasync start-task-execution \
    --task-arn arn:aws:datasync:us-east-1:123456789012:task/task-12345678abcdefgh \
    --manifest-config '{
        "Source": {
            "S3": {
                "ManifestObjectPath": "s3-object-key-of-manifest",
                "BucketAccessRoleArn": "bucket-iam-role",
                "S3BucketArn": "amzn-s3-demo-bucket-arn",
                "ManifestObjectVersionId": "manifest-version-to-use"
            }
        }
    }'

2. For the --task-arn parameter, specify the Amazon Resource Name (ARN) of the task that you're starting.
3. For the --manifest-config parameter, do the following:
   • ManifestObjectPath – Specify the S3 object key of your manifest.
   • BucketAccessRoleArn – Specify the IAM role that allows DataSync to access your manifest in its S3 bucket. For more information, see Providing DataSync access to your manifest.
   • S3BucketArn – Specify the ARN of the S3 bucket that's hosting your manifest.
   • ManifestObjectVersionId – Specify the version of the manifest that you want DataSync to use. By default, DataSync uses the latest version of the object.
4. Run the start-task-execution command to begin your transfer.

Limitations
• You can't use a manifest together with filters.
• You can't specify only a directory or folder with the intention of transferring all of its contents. For these situations, consider using an include filter instead of a manifest.
• You can't use the Keep deleted files task option (PreserveDeletedFiles in the API) to maintain files or objects in the destination that aren't in the source. DataSync only transfers what's listed in your manifest and doesn't delete anything in the destination.

Troubleshooting
If you're transferring objects with specific version IDs from an S3 bucket, you might see an error related to HeadObject or GetObjectTagging.
For example, here's an error related to GetObjectTagging:

[WARN] Failed to read metadata for file /picture1.png (versionId: 111111): S3 Get Object Tagging Failed
[ERROR] S3 Exception: op=GetObjectTagging photos/picture1.png, code=403, type=15, exception=AccessDenied, msg=Access Denied
req-hdrs: content-type=application/xml, x-amz-api-version=2006-03-01
rsp-hdrs: content-type=application/xml, date=Wed, 07 Feb 2024 20:16:14 GMT, server=AmazonS3, transfer-encoding=chunked, x-amz-id-2=IOWQ4fDEXAMPLEQM+ey7N9WgVhSnQ6JEXAMPLEZb7hSQDASK+Jd1vEXAMPLEa3Km, x-amz-request-id=79104EXAMPLEB723

If you see either of these errors, validate that the IAM role that DataSync uses to access your S3 source location has the following permissions:
• s3:GetObjectVersion
• s3:GetObjectVersionTagging

If you need to update your role with these permissions, see Creating an IAM role for DataSync to access your Amazon S3 location.

Next steps
If you haven't already, start your task. Otherwise, monitor your task's activity.

Transferring specific files, objects, and folders by using filters

AWS DataSync lets you apply filters to include or exclude data from your source location in a transfer. For example, if you don't want to transfer temporary files that end with .tmp, you can create an exclude filter so that these files don't make their way to your destination location. You can use a combination of exclude and include filters in the same transfer task. If you modify a task's filters, those changes are applied the next time you run
the task.

Filtering terms, definitions, and syntax

Familiarize yourself with the concepts related to DataSync filtering:

Filter
The whole string that makes up a particular filter (for example, *.tmp|*.temp or /folderA|/folderB). Filters are made up of patterns delimited by using a pipe (|). You don't need a delimiter when you add patterns in the DataSync console because you add each pattern separately.

Note
Filters are case sensitive. For example, the filter /folderA won't match /FolderA.

Pattern
A pattern within a filter. For example, *.tmp is a pattern that's part of the *.tmp|*.temp filter. If your filter has multiple patterns, you delimit each pattern by using a pipe (|).

Folders
• All filters are relative to the source location path. For example, suppose that you specify /my_source/ as the source path when you create your source location and task and specify the include filter /transfer_this/. In this case, DataSync transfers only the directory /my_source/transfer_this/ and its contents.
• To specify a folder directly under the source location, include a forward slash (/) in front of the folder name. In the preceding example, the pattern uses /transfer_this, not transfer_this.
• DataSync interprets the following patterns the same way and matches both the folder and its content.
/dir
/dir/

• When you're transferring data from or to an Amazon S3 bucket, DataSync treats the / character in the object key as the equivalent of a folder on a file system.

Special characters
Following are special characters for use with filtering.

• * (wildcard) – Matches zero or more characters. For example, /movies_folder* matches both /movies_folder and /movies_folder1.
• | (pipe delimiter) – A delimiter between patterns. It enables specifying multiple patterns, any of which can match the filter. For example, *.tmp|*.temp matches files ending with either tmp or temp. Note: This delimiter isn't needed when you add patterns on the console because you add each pattern on a separate line.
• \ (backslash) – Escapes special characters (*, |, \) in a file or object name. A double backslash (\\) is required when a backslash is part of a file name. Similarly, \\\\ represents two consecutive backslashes in a file name. A backslash followed by a pipe (\|) is required when a pipe is part of a file name. A backslash (\) followed by any other character, or at the end of a pattern, is ignored.

Example filters

The following examples show common filters you can use with DataSync.

Note
There are limits to how many characters you can use in a filter. For more information, see DataSync quotas.

Exclude some folders from your source location
In some cases, you might want to exclude folders in your source location so that they aren't copied to your destination location. For example, if you have temporary work-in-progress folders, you can use a filter like the following:

*/.temp

To exclude folders with similar content (such as /reports2021 and /reports2022), you can use an exclude filter like the following:

/reports*

To exclude folders at any level in the file hierarchy, you can use an exclude filter like the following.
*/folder-to-exclude-1|*/folder-to-exclude-2

To exclude folders at the top level of the source location, you can use an exclude filter like the following.

/top-level-folder-to-exclude-1|/top-level-folder-to-exclude-2

Include a subset of the folders on your source location
In some cases, your source location might be a large share and you need to transfer a subset of the folders under the root. To include specific folders, start a task execution with an include filter like the following.

/folder-to-transfer/*

Exclude specific file types
To exclude certain file types from the transfer, you can create a task execution with an exclude filter such as *.temp.

Transfer individual files you specify
To transfer a list of individual files, start a task execution with an include filter like the following:

"/folder/subfolder/file1.txt|/folder/subfolder/file2.txt|/folder/subfolder/file3.txt"

Creating include filters

Include filters define the files, objects, and folders that you want DataSync to transfer. You can configure include
filters when you create, edit, or start a task. DataSync scans and transfers only files and folders that match the include filters. For example, to include a subset of your source folders, you might specify /important_folder_1|/important_folder_2.

Note
Include filters support the wildcard (*) character only as the rightmost character in a pattern. For example, /documents*|/code* is supported, but *.txt isn't.

Using the DataSync console
1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/.
2. In the left navigation pane, choose Tasks, and then choose Create task.
3. Configure your task's source and destination locations. For more information, see Where can I transfer my data with AWS DataSync?
4. For Contents to scan, choose Specific files, objects, and folders, then select Using filters.
5. For Includes, enter your filter (for example, /important_folders to include an important directory), then choose Add pattern.
6. Add other include filters as needed.

Using the AWS CLI
When using the AWS CLI, you must use single quotation marks (') around the filter and a | (pipe) as a delimiter if you have more than one filter.
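To make the pattern semantics concrete, here is a rough sketch (not DataSync's actual implementation) of the matching behavior described in this section: patterns are pipe-delimited, the wildcard (*) matches zero or more characters, and matching is case sensitive. Backslash escapes and folder-content matching are omitted for brevity.

```python
import re

def matches_filter(path: str, filter_string: str) -> bool:
    """Return True if path matches any pattern in a pipe-delimited filter.

    Illustrative sketch only: * matches zero or more characters, and
    matching is case sensitive.
    """
    for pattern in filter_string.split("|"):
        # Build a regex from the pattern: escape everything except *,
        # which becomes ".*" (zero or more characters).
        regex = ".*".join(re.escape(part) for part in pattern.split("*"))
        if re.fullmatch(regex, path):
            return True
    return False

print(matches_filter("/movies_folder1", "/movies_folder*"))  # True
print(matches_filter("/FolderA", "/folderA"))                # False (case sensitive)
print(matches_filter("report.tmp", "*.tmp|*.temp"))          # True
```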
The following example specifies two include filters, /important_folder1 and /important_folder2, when running the create-task command.

aws datasync create-task \
    --source-location-arn 'arn:aws:datasync:region:account-id:location/location-id' \
    --destination-location-arn 'arn:aws:datasync:region:account-id:location/location-id' \
    --includes FilterType=SIMPLE_PATTERN,Value='/important_folder1|/important_folder2'

Creating exclude filters

Exclude filters define the files, objects, and folders in your source location that you don't want DataSync to transfer. You can configure these filters when you create, edit, or start a task.

Topics
• Data excluded by default

Data excluded by default
DataSync automatically excludes some data from being transferred:
• .snapshot – DataSync ignores any path ending with .snapshot, which is typically used for point-in-time snapshots of a storage system's files or directories.
• /.aws-datasync and /.awssync – DataSync creates these folders in your location to help facilitate your transfer.
• /.zfs – You might see this folder with Amazon FSx for OpenZFS locations.

Using the DataSync console
1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/.
2. In the left navigation pane, choose Tasks, and then choose Create task.
3. Configure your task's source and destination locations. For more information, see Where can I transfer my data with AWS DataSync?
4. For Excludes, enter your filter (for example, */temp to exclude temporary folders), then choose Add pattern.
5. Add other exclude filters as needed.
6. If needed, add include filters.

Using the AWS CLI
When using the AWS CLI, you must use single quotation marks (') around the filter and a | (pipe) as a delimiter if you have more than one filter. The following example specifies two exclude filters, */temp and */tmp, when running the create-task command.
aws datasync create-task \
    --source-location-arn 'arn:aws:datasync:region:account-id:location/location-id' \
    --destination-location-arn 'arn:aws:datasync:region:account-id:location/location-id' \
    --excludes FilterType=SIMPLE_PATTERN,Value='*/temp|*/tmp'

Understanding how DataSync handles file and object metadata

AWS DataSync can preserve your file or object metadata during a data transfer. How your metadata gets copied depends on your transfer locations and whether those locations use similar types of metadata.

System-level metadata
In general, DataSync doesn't copy system-level metadata. For example, when transferring from an SMB file server, the permissions you configured at the file system level aren't copied to the destination storage system. There are exceptions. When transferring between Amazon S3 and other object storage, DataSync does copy some system-defined object metadata.

Metadata copied in Amazon S3 transfers
The following tables describe what metadata DataSync can copy when a transfer involves an Amazon S3 location.

Topics
• To Amazon S3
• Between Amazon S3 and other object storage
• Between Amazon S3 and HDFS

To Amazon S3
When copying from one of these locations:
• NFS
• Amazon EFS
• FSx for Lustre
• FSx for OpenZFS
• FSx for ONTAP (using NFS)

To this location:
• Amazon S3

DataSync can copy the following as Amazon S3 user metadata:
• File and folder modification timestamps
• File and folder access timestamps (DataSync can only do this on a best-effort basis)
• User ID and group ID
• POSIX permissions
The file metadata stored in Amazon S3 user metadata is interoperable with NFS shares on file gateways using AWS Storage Gateway. A file gateway enables low-latency access from on-premises networks to data that was copied to Amazon S3 by DataSync. This metadata is also interoperable with FSx for Lustre. When DataSync copies objects that contain this metadata back to an NFS server, the file metadata is restored. Restoring metadata requires granting elevated permissions to the NFS server. For more information, see Configuring AWS DataSync transfers with an NFS file server.

Between Amazon S3 and other object storage

When copying between these locations:
• Amazon S3
• Object storage
• Microsoft Azure Blob Storage

DataSync can copy:
• User-defined object metadata
• Object tags
• The following system-defined object metadata: Content-Disposition, Content-Encoding, Content-Language, and Content-Type

Note: DataSync copies system-level metadata for all objects during an initial transfer. If you configure your task to transfer only data that has changed, DataSync won't copy system metadata in subsequent transfers unless an object's content or user metadata has also been modified.
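If you want to spot-check the user metadata that DataSync applied to a transferred object, one way (a sketch, assuming a hypothetical bucket and key) is to query the object's Metadata map with the AWS CLI:

```shell
# Hypothetical bucket and key; prints only the user-defined metadata map,
# which is where DataSync stores file metadata such as POSIX permissions
aws s3api head-object \
    --bucket amzn-s3-demo-bucket \
    --key photos/my-picture.png \
    --query 'Metadata'
```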
DataSync doesn't copy other object metadata, such as object access control lists (ACLs), prior object versions, or the Last-Modified key.

Between Amazon S3 and HDFS

When copying between these locations:
• Hadoop Distributed File System (HDFS)
• Amazon S3

DataSync can copy the following as Amazon S3 user metadata:
• File and folder modification timestamps
• File and folder access timestamps (DataSync can only do this on a best-effort basis)
• User ID and group ID
• POSIX permissions

HDFS uses strings to store file and folder user and group ownership, rather than numeric identifiers, such as UIDs and GIDs.

Metadata copied in NFS transfers

The following table describes what metadata DataSync can copy between locations that use Network File System (NFS).

When copying between these locations:
• NFS
• Amazon EFS
• Amazon FSx for Lustre
• Amazon FSx for OpenZFS
• Amazon FSx for NetApp ONTAP (using NFS)

DataSync can copy:
• File and folder modification timestamps
• File and folder access timestamps (DataSync can only do this on a best-effort basis)
• User ID (UID) and group ID (GID)
• POSIX permissions

Metadata copied in SMB transfers

The following table describes what metadata DataSync can copy between locations that use Server Message Block (SMB).
When copying between these locations:
• SMB
• Amazon FSx for Windows File Server
• FSx for ONTAP (using SMB)

DataSync can copy:
• File timestamps: access time, modification time, and creation time
• File owner security identifier (SID)
• Standard file attributes: read-only (R), archive (A), system (S), hidden (H), compressed (C), not content indexed (I), encrypted (E), temporary (T), offline (O), and sparse (P)
• NTFS discretionary access lists (DACLs), which determine whether to grant access to an object.
• NTFS system access control lists (SACLs), which are used by administrators to log attempts to access a secured object.

DataSync attempts to copy the archive (A), compressed (C), not content indexed (I), sparse (P), and temporary (T) attributes on a best-effort basis. If these attributes aren't applied on the destination, they're ignored during task verification.

Note: SACLs are not copied if you use SMB version 1.0. Copying DACLs and SACLs requires granting specific permissions to the Windows user that DataSync uses to access your location using SMB. For more information, see creating a location for SMB, FSx for Windows File Server, or FSx for ONTAP (depending on the type of location in your transfer).

Metadata copied in other transfer scenarios

DataSync handles metadata the following ways when copying between these storage systems (most of which have different metadata structures).

When copying from one of these locations
To one of these locations DataSync can copy • SMB • Amazon EFS • FSx for Windows File Server • FSx for Lustre • FSx for ONTAP (using SMB) • FSx for OpenZFS • FSx for ONTAP (using NFS) • Amazon S3 Default POSIX metadata for all files and folders on the destination file system or objects in the destination S3 bucket. This approach includes using the default • Object storage POSIX user ID and group ID • Azure Blob Storage values. • NFS • Object storage • Amazon S3 • Amazon EFS • FSx for Lustre • Azure Blob Storage • FSx for OpenZFS • FSx for ONTAP (using NFS) • Azure Blob Storage • Amazon EFS • FSx for Lustre Windows-based metadata (such as ACLs) is not preserved. Default POSIX metadata on the destination files and folders. This approach includes using the default POSIX user ID and group ID values.
The following as user-defined metadata: • File and folder modification timestamps • File and folder access timestamps (DataSync can only do this on a best-effort basis) • User ID and group ID • POSIX permissions • File and folder modification timestamps When copying from one of these locations To one of these locations DataSync can copy • FSx for OpenZFS • File and folder access • FSx for ONTAP (using NFS) • HDFS • Amazon S3 • Amazon EFS • FSx for Lustre • FSx for OpenZFS • FSx for Windows File Server • FSx for ONTAP timestamps (DataSync can only do this on a best-effort basis) • POSIX permissions HDFS stores file and folder user and group ownership as strings rather than numeric identifiers (such as UIDs and GIDs). Default values for UIDs and GIDs are applied on the destination file system. For more information, see Understanding when and how DataSync applies default POSIX metadata. File and folder timestamps from the source location. The file or folder owner is set based on the HDFS user or Kerberos principal you specified when creating the HDFS transfer location. The Groups Mapping configuration on the Hadoop cluster determines the group. When copying from one of these locations • Amazon S3 • Amazon EFS • FSx for Lustre • FSx for OpenZFS • FSx for ONTAP (using NFS) • Object storage • NFS • HDFS To one of these locations DataSync can copy • SMB • FSx for Windows File Server • FSx for ONTAP (using SMB) File and folder timestamps from the source location. Ownership is set based on the Windows user that was specified in DataSync to access the Amazon FSx or SMB share. Permissions are inherited from the parent directory.
• Azure Blob Storage
• FSx for Windows File Server
• FSx for ONTAP (using SMB)

Understanding when and how DataSync applies default POSIX metadata

DataSync applies default POSIX metadata in the following situations:
• When your transfer's source and destination locations don't have similar metadata structures
• When metadata is missing from the source location

The following table describes how DataSync applies default POSIX metadata during these types of transfers:

• Source: Amazon S3¹, Object storage¹, Microsoft Azure Blob Storage¹ – Destination: Amazon EFS, FSx for Lustre, FSx for OpenZFS, FSx for ONTAP (using NFS), NFS – File permissions: 0755; Folder permissions: 0755; UID: 65534; GID: 65534
• Source: SMB – Destination: Amazon S3, Object storage, Amazon EFS, FSx for Lustre, FSx for OpenZFS, FSx for ONTAP (using NFS), NFS – File permissions: 0644; Folder permissions: 0755; UID: 65534; GID: 65534
• Source: HDFS – Destination: Amazon EFS, FSx for Lustre, FSx for OpenZFS, FSx for ONTAP (using NFS), NFS – File permissions: 0644; Folder permissions: 0755; UID: 65534; GID: 65534

¹ In cases where the objects don't have metadata that was previously applied by DataSync.

Links and directories copied
by AWS DataSync

AWS DataSync handles hard links, symbolic links, and directories differently depending on the storage locations involved in your transfer.

Hard links

Here's how DataSync handles hard links in some common transfer scenarios:
• When transferring between an NFS file server, FSx for Lustre, FSx for OpenZFS, FSx for ONTAP (using NFS), and Amazon EFS, hard links are preserved.
• When transferring to Amazon S3, each underlying file referenced by a hard link is transferred only once. During incremental transfers, separate objects are created in your S3 bucket. If a hard link is unchanged in Amazon S3, it's correctly restored when transferred to an NFS file server, FSx for Lustre, FSx for OpenZFS, FSx for ONTAP (using NFS), or Amazon EFS file system.
• When transferring to Microsoft Azure Blob Storage, each underlying file referenced by a hard link is transferred only once. During incremental transfers, separate objects are created in your blob storage if there are new references in the source. When transferring from Azure Blob Storage, DataSync transfers hard links as if they are individual files.
• When transferring between an SMB file server, FSx for Windows File Server, and FSx for ONTAP (using SMB), hard links aren't supported. If DataSync encounters hard links in these situations, the transfer task completes with an error. To learn more, check your CloudWatch logs.
• When transferring to HDFS, hard links aren't supported.
CloudWatch logs show these links as skipped.

Symbolic links

Here's how DataSync handles symbolic links in some common transfer scenarios:
• When transferring between an NFS file server, FSx for Lustre, FSx for OpenZFS, FSx for ONTAP (using NFS), and Amazon EFS, symbolic links are preserved.
• When transferring to Amazon S3, the link target path is stored in the Amazon S3 object. The link is correctly restored when transferred to an NFS file server, FSx for Lustre, FSx for OpenZFS, FSx for ONTAP, or Amazon EFS file system.
• When transferring to Azure Blob Storage, symbolic links aren't supported. CloudWatch logs show these links as skipped.
• When transferring between an SMB file server, FSx for Windows File Server, and FSx for ONTAP (using SMB), symbolic links aren't supported. DataSync doesn't transfer a symbolic link itself but instead a file referenced by the symbolic link. To recognize duplicate files and deduplicate them with symbolic links, you must configure deduplication on your destination file system.
• When transferring to HDFS, symbolic links aren't supported. CloudWatch logs show these links as skipped.

Directories

In general, DataSync preserves directories when transferring between storage systems. This isn't the case in the following situations:
• When transferring to Amazon S3, directories are represented as empty objects that have prefixes and end with a forward slash (/).
• When transferring to Azure Blob Storage without a hierarchical namespace, directories don't exist. What looks like a directory is just part of an object name.

Configuring how to handle files, objects, and metadata

You can configure how AWS DataSync handles your files, objects, and their associated metadata when transferring between locations. For example, with recurring transfers, you might want to overwrite files in your destination with changes in the source to keep the locations in sync.
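As a rough sketch (the ARNs are placeholders), a recurring task that keeps the destination in sync with the source might combine these options when running create-task:

```shell
# Placeholder ARNs; on each run, copy only changes, overwrite modified data,
# and remove destination files that no longer exist in the source
aws datasync create-task \
    --source-location-arn 'arn:aws:datasync:region:account-id:location/location-id' \
    --destination-location-arn 'arn:aws:datasync:region:account-id:location/location-id' \
    --options TransferMode=CHANGED,OverwriteMode=ALWAYS,PreserveDeletedFiles=REMOVE
```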
You can copy properties such as POSIX permissions for files and folders, tags associated with objects, and access control lists (ACLs).

Transfer mode options

You can configure whether DataSync transfers only the data (including metadata) that's changed following an initial copy or all data every time you run the task. If you're planning on recurring transfers, you might only want to transfer what's changed since your previous task execution.

• Transfer only data that has changed (TransferMode set to CHANGED in the API) – After your initial full transfer, DataSync copies only the data and metadata that differs between the source and destination location.
• Transfer all data (TransferMode set to ALL in the API) – DataSync copies everything in the source to the destination without comparing differences between the locations.

File and object handling options

You can control some aspects of
how DataSync treats your files or objects in the destination location. For example, DataSync can delete files in the destination that aren't in the source.

• Keep deleted files (PreserveDeletedFiles in the API) – Specifies whether DataSync maintains files or objects in the destination location that don't exist in the source. If you configure your task to delete objects from your Amazon S3 bucket, you might incur minimum storage duration charges for certain storage classes. For detailed information, see Storage class considerations with Amazon S3 transfers.

Warning: You can't configure your task to delete data in the destination and also transfer all data. When you transfer all data, DataSync doesn't scan your destination location and doesn't know what to delete.

• Overwrite files (OverwriteMode in the API) – Specifies whether DataSync modifies data in the destination location when the source data or metadata has changed. If you don't configure your task to overwrite data, the destination data isn't overwritten even if the source data differs. If your task overwrites objects, you might incur additional charges for certain storage classes (for example, for retrieval or early deletion).
For detailed information, see Storage class considerations with Amazon S3 transfers.

Metadata handling options

DataSync can preserve file and object metadata during a transfer. The metadata that DataSync can preserve depends on the storage systems involved and whether those systems use a similar metadata structure. Before configuring your task, make sure that you understand how DataSync handles metadata and special files when transferring between your source and destination locations.

• Copy ownership (Gid and Uid in the API) – Specifies whether DataSync copies POSIX file and folder ownership, such as the group ID of the file's owners and the user ID of the file's owner.
• Copy permissions (PosixPermissions in the API) – Specifies whether DataSync copies POSIX permissions for files and folders from the source to the destination.
• Copy timestamps (Atime and Mtime in the API) – Specifies whether DataSync copies the timestamp metadata from the source to the destination.
• Copy object tags (ObjectTags in the API) – Specifies whether DataSync preserves the tags associated with your objects when transferring between object storage systems.
• Copy ownership, DACLs, and SACLs (SecurityDescriptorCopyFlags set to OWNER_DACL_SACL in the API) – DataSync copies the following:
  • The object owner.
  • NTFS discretionary access lists (DACLs), which determine whether to grant access to an object.
  • NTFS system access control lists (SACLs), which are used by administrators to log attempts to access a secured object.

Note: SACLs are not copied if you use SMB version 1.0. Copying DACLs and SACLs requires granting specific permissions to the Windows user that DataSync uses to access your location using SMB. For more information, see creating a location for SMB, FSx for Windows File Server, or FSx for ONTAP (depending on the type of location in your transfer).
• Copy ownership and DACLs (SecurityDescriptorCopyFlags set to OWNER_DACL in the API) – DataSync copies the following:
  • The object owner.
  • DACLs, which determine whether to grant access to an object.
  DataSync won't copy SACLs when you choose this option.
• Do not copy ownership or ACLs (SecurityDescriptorCopyFlags set to NONE in the API) – DataSync doesn't copy any ownership or permissions data. The objects that DataSync writes to your destination location are owned by the user whose credentials are provided for DataSync to access the destination. Destination object permissions are determined based on the permissions configured on the destination server.

Configuring file, object, and metadata handling options

You can configure how DataSync handles files, objects, and metadata when creating, editing, or starting your transfer task.

Using the DataSync console

The following instructions describe how to configure file, object, and metadata handling options when creating a task.

1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/.
2. In the left navigation pane, expand Data transfer, then choose Tasks, and then choose Create task.
3. Configure your task's source and destination locations. For
more information, see Where can I transfer my data with AWS DataSync?
4. For Transfer mode, choose one of the following options:
• Transfer only data that has changed
• Transfer all data
For more information about these options, see Transfer mode options.
5. Select Keep deleted files if you want DataSync to maintain files or objects in the destination location that don't exist in the source. If you don't choose this option and your task deletes objects from your Amazon S3 bucket, you might incur minimum storage duration charges for certain storage classes. For detailed information, see Storage class considerations with Amazon S3 transfers.

Warning: You can't deselect this option and enable Transfer all data. When you transfer all data, DataSync doesn't scan your destination location and doesn't know what to delete.

6. Select Overwrite files if you want DataSync to modify data in the destination location when the source data or metadata has changed. If your task overwrites objects, you might incur additional charges for certain storage classes (for example, for retrieval or early deletion). For detailed information, see Storage class considerations with Amazon S3 transfers.
If you don't choose this option, the destination data isn't overwritten even if the source data differs.
7. Under Transfer options, select how you want DataSync to handle metadata. For more information about the options, see Metadata handling options.

Important: The options you see in the console depend on your task's source and destination locations. You might have to expand Additional settings to see some of these options.

• Copy ownership
• Copy permissions
• Copy timestamps
• Copy object tags
• Copy ownership, DACLs, and SACLs
• Copy ownership and DACLs
• Do not copy ownership or ACLs

Using the DataSync API

You can configure file, object, and metadata handling options by using the Options parameter with any of the following operations:
• CreateTask
• StartTaskExecution
• UpdateTask

Configuring how AWS DataSync verifies data integrity

During a transfer, AWS DataSync uses checksum verification to verify the integrity of the data that you copy between locations. You can also configure DataSync to perform additional verification at the end of your transfer.

Data verification options

Use the following information to help you decide if and how you want DataSync to perform these additional checks.

• Verify only transferred data (recommended) (VerifyMode set to ONLY_FILES_TRANSFERRED in the API) – DataSync calculates the checksum of transferred data (including metadata) at the source location. At the end of your transfer, DataSync compares this checksum to the checksum calculated on that same data at the destination. We recommend this option when transferring to S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage classes. For more information, see Storage class considerations with Amazon S3 transfers.
• Verify all data (VerifyMode set to POINT_IN_TIME_CONSISTENT in the API) –
At the end of your transfer, DataSync checks the entire source and destination to verify that both locations are fully synchronized.

Note: Not supported when your task uses Enhanced mode.

If you use a manifest, DataSync only scans and verifies what's listed in the manifest. You can't use this option when transferring to S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage classes. For more information, see Storage class considerations with Amazon S3 transfers.

• Don't verify data after transfer (VerifyMode set to NONE in the API) – DataSync performs data integrity checks only during your transfer. Unlike other options, there's no additional verification at the end of your transfer.

Configuring data verification

You can configure data verification options when creating a task, updating a task, or starting a task execution.

Using the DataSync console

The following instructions describe how to configure data verification options when creating a task.

To configure data verification by using the console

1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/.
2. In the left navigation pane, expand Data transfer, then choose Tasks, and then choose Create task.
3. Configure your task's source and destination locations.
For more information, see Where can I transfer my data with AWS DataSync?
4. For Verification, choose one of the following:
• Verify only transferred data (recommended)
• Verify all data
• Don't verify data after transfer

Using the DataSync API

You can configure how DataSync verifies data by using the VerifyMode parameter with any of the following operations:
• CreateTask
• UpdateTask
• StartTaskExecution

Setting bandwidth limits for your AWS DataSync task

You can configure network bandwidth limits for your AWS DataSync task and each of its executions.

Note: Not applicable to Enhanced mode tasks.

Limiting bandwidth for a task

Set a bandwidth limit when creating, editing, or starting a task.

Using the DataSync console

The following instructions describe how to configure a bandwidth limit for your task when you're creating it.

1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/.
2. In the left navigation pane, expand Data transfer, then choose Tasks, and then choose Create task.
3. Configure your task's source and destination locations. For more information, see Where can I transfer my data with AWS DataSync?
4.
For Bandwidth limit, choose one of the following:
• Select Use available to use all of the available network bandwidth for each task execution.
• Select Set bandwidth limit (MiB/s) and enter the maximum bandwidth that you want DataSync to use for each task execution.

Using the DataSync API

You can configure a task's bandwidth limit by using the BytesPerSecond parameter with any of the following operations:
• CreateTask
• UpdateTask
• StartTaskExecution

Throttling bandwidth for a task execution

You can modify the bandwidth limit for a running or queued task execution.

Using the DataSync console

1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/.
2. In the navigation pane, expand Data transfer, then choose Tasks.
3. Choose the task and then select History to view the task's executions.
4. Choose the task execution that you want to modify and then choose Edit.
5. In the dialog box, choose one of the following:
• Select Use available to use all of the available network bandwidth for the task execution.
• Select Set bandwidth limit (MiB/s) and enter the maximum bandwidth that you want DataSync to use for the task execution.
6. Choose Save changes. The new bandwidth limit takes effect within 60 seconds.

Using the DataSync API

You can modify the bandwidth limit for a running or queued task execution by using the BytesPerSecond parameter with the UpdateTaskExecution operation.

Scheduling when your AWS DataSync task runs

You can set up an AWS DataSync task schedule to periodically transfer data between storage locations.

How DataSync task scheduling works

A scheduled DataSync task runs at a frequency that you specify, with a minimum interval of 1 hour. You can create a task schedule by using cron or rate expressions.

Important: You can't schedule a task to run at an interval faster than 1 hour.
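One detail worth noting about the bandwidth settings above: the console takes the limit in MiB/s, while the BytesPerSecond API parameter takes bytes. A quick sketch of the conversion for a 10 MiB/s cap:

```shell
# Convert a MiB/s cap to the bytes value that BytesPerSecond expects
mib_per_second=10
bytes_per_second=$((mib_per_second * 1024 * 1024))
echo "$bytes_per_second"   # 10485760
```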
Using cron expressions

Use cron expressions for task schedules that run on a specific time and day. For example, here's how you can configure a task schedule in the AWS CLI that runs at 12:00 PM UTC every Sunday and Wednesday.

cron(0 12 ? * SUN,WED *)

Using rate expressions

Use rate expressions for task schedules that run on a regular interval, such as every 12 hours. For example, here's how you can configure a task schedule in the AWS CLI that runs every 12 hours:

rate(12 hours)

Tip: For more information about cron and rate expression syntax, see the Amazon EventBridge User Guide.

Creating a DataSync task schedule

You can schedule how frequently your task runs by using the DataSync console, AWS CLI, or DataSync API.

Using the DataSync console

The following instructions describe how to set up a schedule when creating a task. You can modify the schedule later when editing the task.

In the console, some scheduling options let you specify the exact time that your task runs (such as daily at 10:30 PM). If you don't include a time for these options,
your task runs at the time that you create (or update) the task.

1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/.
2. In the left navigation pane, expand Data transfer, then choose Tasks, and then choose Create task.
3. Configure your task's source and destination locations. For more information, see Where can I transfer my data with AWS DataSync?
4. For schedule Frequency, do one of the following:
• Choose Not scheduled if you don't want your task to run on a schedule.
• Choose Hourly, then choose the minute during the hour that you want your task to run.
• Choose Daily and enter the UTC time that you want your task to run.
• Choose Weekly, choose the day of the week, and enter the UTC time that you want the task to run.
• Choose Days of the week, choose a specific day or days, and enter the UTC time that the task should run in the format HH:MM.
• Choose Custom, and then select Cron expression or Rate expression. Enter your task schedule with a minimum interval of 1 hour.

Using the AWS CLI

You can create a schedule for your DataSync task by using the --schedule parameter with the create-task, update-task, or start-task-execution command. The following instructions describe how to do this with the create-task command.

1.
Copy the following create-task command:

aws datasync create-task \
  --source-location-arn arn:aws:datasync:us-east-1:123456789012:location/loc-12345678abcdefgh \
  --destination-location-arn arn:aws:datasync:us-east-1:123456789012:location/loc-abcdefgh12345678 \
  --schedule '{
    "ScheduleExpression": "cron(0 12 ? * SUN,WED *)"
  }'

2. For the --source-location-arn parameter, specify the Amazon Resource Name (ARN) of the location that you're transferring data from.
3. For the --destination-location-arn parameter, specify the ARN of the location that you're transferring data to.
4. For the --schedule parameter, specify a cron or rate expression for your schedule. In the example, the cron expression cron(0 12 ? * SUN,WED *) sets a task schedule that runs at 12:00 PM UTC every Sunday and Wednesday.
5. Run the create-task command to create your task with the schedule.

Pausing a DataSync task schedule

There can be situations where you need to pause your DataSync task schedule. For example, you might need to temporarily disable a recurring transfer to fix an issue with your task or perform maintenance on your storage system. DataSync might disable your task schedule automatically for the following reasons:
• Your task fails repeatedly with the same error.
• You disable an AWS Region that your task is using.

Using the DataSync console

1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/.
2. In the left navigation pane, expand Data transfer, and then choose Tasks.
3. Choose the task that you want to pause the schedule for, and then choose Edit.
4. For Schedule, turn off Enable schedule. Choose Save changes.

Using the AWS CLI

1. Copy the following update-task command:

aws datasync update-task \
  --task-arn arn:aws:datasync:us-east-1:123456789012:task/task-12345678abcdefgh \
  --schedule '{
    "ScheduleExpression": "cron(0 12 ?
* SUN,WED *)", "Status": "DISABLED" }' 2. For the --task-arn parameter, specify the ARN of the task that you want to pause the schedule for. 3. For the --schedule parameter, do the following: • For ScheduleExpression, specify a cron or rate expression for your schedule. In the example, the expression cron(0 12 ? * SUN,WED *) sets a task schedule that runs at 12:00 PM UTC every Sunday and Wednesday. • For Status, specify DISABLED to pause the task schedule. 4. Run the update-task command. 5. To resume the schedule, run the same update-task command with Status set to ENABLED. Checking the status of a DataSync task schedule You can see whether your DataSync task schedule is enabled. Using the DataSync console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. In the left navigation pane, expand Data transfer, and then choose Tasks. 3. In the Schedule column, check whether the task's schedule is enabled or disabled. Using the AWS CLI 1. Copy the following describe-task command: aws datasync describe-task \ --task-arn arn:aws:datasync:us-east-1:123456789012:task/task-12345678abcdefgh 2. For the --task-arn parameter, specify the ARN of the task that you want information about. 3. Run the describe-task command.
You get a response that provides details about your task, including its schedule. (The following example focuses primarily on the task schedule configuration and doesn't show a full describe-task response.) The example shows that the task's schedule is manually disabled. If the schedule is disabled by DataSync itself (a DisabledBy value of SERVICE), you see an error message for DisabledReason to help you understand why the task keeps failing. For more information, see Pausing a DataSync task schedule. { "TaskArn": "arn:aws:datasync:us-east-1:123456789012:task/task-12345678abcdefgh", "Status": "AVAILABLE", "Schedule": { "ScheduleExpression": "cron(0 12 ? * SUN,WED *)", "Status": "DISABLED", "StatusUpdateTime": 1697736000, "DisabledBy": "USER", "DisabledReason": "Manually disabled by user." }, ... } Tagging your AWS DataSync tasks Tags are key-value pairs that help you manage, filter, and search for your AWS DataSync resources. You can add up to 50 tags to each DataSync task and task execution. For example, you might create a task for a large data migration and tag the task with the key Project and value Large Migration. To further organize the migration, you could tag one run of the task with the key Transfer Date and value May 2021 (subsequent task executions might be tagged June 2021, July 2021, and so on).
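If you keep tags in a configuration file or script, you can render them into the Key=...,Value=... shorthand that the AWS CLI --tags parameter expects. This is a rough illustrative sketch; the helper function is ours, not part of the AWS CLI or SDKs:

```python
# Hypothetical helper (not part of any AWS tooling): render a dict of tags as
# the AWS CLI shorthand, with pairs separated by spaces. Values containing
# spaces still need shell quoting when pasted into a real command line.
def format_cli_tags(tags: dict) -> str:
    return " ".join(f"Key={key},Value={value}" for key, value in tags.items())

print(format_cli_tags({"Project": "Large Migration", "Transfer Date": "May 2021"}))
```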
Tagging your DataSync task You can tag your DataSync task only when creating the task. Using the DataSync console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. In the left navigation pane, expand Data transfer, then choose Tasks, and then choose Create task. Tagging your tasks 249 AWS DataSync User Guide 3. Configure your task's source and destination locations. For more information, see Where can I transfer my data with AWS DataSync? 4. On the Configure settings page, choose Add new tag to tag your task. Using the AWS CLI 1. Copy the following create-task command: aws datasync create-task \ --source-location-arn 'arn:aws:datasync:region:account-id:location/source-location-id' \ --destination-location-arn 'arn:aws:datasync:region:account-id:location/destination-location-id' \ --tags Key=tag-key,Value=tag-value 2. Specify the following parameters in the command: • --source-location-arn – Specify the Amazon Resource Name (ARN) of the source location in your transfer. • --destination-location-arn – Specify the ARN of the destination location in your transfer. • --tags – Specify the tags that you want to apply to the task. For more than one tag, separate each key-value pair with a space. 3. (Optional) Specify other parameters that make sense for your transfer scenario. For a list of --options, see the create-task command. 4. Run the create-task command. You get a response that shows the task that you just created. { "TaskArn": "arn:aws:datasync:us-east-2:123456789012:task/task-abcdef01234567890" } To view the tags you added to this task, you can use the list-tags-for-resource command. Tagging your DataSync task execution You can tag each run of your DataSync task. If your task already has tags, remember the following about using tags with task executions: • If you start your task with the console, its user-created tags are applied automatically to the task execution.
However, system-created tags that begin with aws: are not applied. • If you start your task with the DataSync API or AWS CLI, its tags are not applied automatically to the task execution. Using the DataSync console To add, edit, or remove tags from a task execution, you must start the task with overriding options. 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. In the left navigation pane, expand Data transfer, then choose Tasks. 3. Choose the task. 4. Choose Start, then choose one of the following options: • Start with defaults – Applies any tags associated with your task. • Start with overriding options – Allows you to add, edit, or remove tags for this particular task execution. Using the AWS CLI 1. Copy the following start-task-execution command: aws datasync start-task-execution \ --task-arn 'arn:aws:datasync:region:account-id:task/task-id' \ --tags Key=tag-key,Value=tag-value 2. Specify the following parameters in the command: • --task-arn – Specify the ARN of the task that you want to start. • --tags – Specify the tags that you want to apply to this specific run of the task. For more than one tag, separate each key-value pair with a space. Tagging your tasks 251 AWS DataSync User Guide 3. (Optional) Specify other parameters that make sense for your situation. For more information, see the start-task-execution command.
4. Run the start-task-execution command. You get a response that shows the task execution that you just started. { "TaskExecutionArn": "arn:aws:datasync:us-east-2:123456789012:task/task-abcdef01234567890" } To view the tags you added to this task execution, you can use the list-tags-for-resource command. Starting a task to transfer your data Once you create your AWS DataSync transfer task, you can start moving data. Each run of a task is called a task execution. For information about what happens during a task execution, see How DataSync transfers files, objects, and directories. Important If you're planning to transfer data to or from an Amazon S3 location, review how DataSync can affect your S3 request charges and the DataSync pricing page before you begin. Starting your task Once you've created your task, you can begin moving data right away. Using the DataSync console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. In the left navigation pane, expand Data transfer, then choose Tasks. 3. Choose the task that you want to run. Make sure that the task has an Available status. You also can select multiple tasks. 4.
Choose Actions and then choose one of the following options: Starting a task to transfer data 252 AWS DataSync User Guide • Start – Runs the task (or tasks if you selected more than one). • Start with overriding options – Allows you to modify some of your task settings before you begin moving data. When you're ready, choose Start. 5. Choose See execution details to view details about the running task execution. Using the AWS CLI To start your DataSync task, you just need to specify the Amazon Resource Name (ARN) of the task you want to run. Here's an example start-task-execution command: aws datasync start-task-execution \ --task-arn 'arn:aws:datasync:region:account-id:task/task-id' The following example starts a task with a few settings that are different than the task's default settings: aws datasync start-task-execution \ --task-arn 'arn:aws:datasync:region:account-id:task/task-id' \ --override-options VerifyMode=NONE,OverwriteMode=NEVER,PosixPermissions=NONE The command returns an ARN for your task execution similar to the following example: { "TaskExecutionArn": "arn:aws:datasync:us-east-1:209870788375:task/task-08de6e6697796f026/execution/exec-04ce9d516d69bd52f" } Note Each agent can run a single task at a time. Using the DataSync API You can start your task by using the StartTaskExecution operation. Use the DescribeTaskExecution operation to get details about the running task execution. Once started, you can check the task execution's status as DataSync copies your data. You also can throttle the task execution's bandwidth if needed. Starting your task 253 AWS DataSync User Guide Task execution statuses When you start a DataSync task, you might see these statuses. (Task statuses are different than task execution statuses.) Console status API status Description Queueing QUEUED Another task execution is running and using the same DataSync agent. For more information, see Knowing when your task is queued. Launching LAUNCHING DataSync is initializing the task execution.
This status usually goes quickly but can take up to a few minutes. Preparing PREPARING DataSync is determining what data to transfer. Preparation can take just minutes, a few hours, or even longer depending on the number of files, objects, or directories in both locations and how you configure your task. How preparation works also depends on your task mode. For more information, see How DataSync prepares your data transfer. Transferring TRANSFERRING DataSync is performing the actual data transfer. Verifying VERIFYING DataSync is verifying the integrity of your data at the end of the transfer. Success SUCCESS The task execution succeeded. Cancelling CANCELLING The task execution is in the process of being cancelled. Error ERROR The task execution failed. Knowing when your task is queued When running multiple tasks (for example, you're transferring a large dataset), DataSync might queue the tasks to run in a series (first in, first out). Some examples of when this happens include:
Task execution statuses 254 AWS DataSync User Guide • You run different tasks that use the same DataSync agent. While you can use the same agent for multiple tasks, an agent can only run one task at a time. • A task execution is in progress and you start additional executions of the same task using different filters or manifests. In each example, the queued tasks don't start until the task ahead of them finishes. Cancelling your task execution You can stop any running or queued DataSync task execution. To cancel a task execution by using the console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. In the left navigation pane, expand Data transfer, then choose Tasks. 3. Select the Task ID for the running task that you want to monitor. The task status should be Running. 4. Choose History to view the task's executions. 5. Select the task execution that you want to stop, and then choose Stop. 6. In the dialog box, choose Stop. To cancel a running or queued task by using the DataSync API, see CancelTaskExecution. Cancelling your task execution 255 AWS DataSync User Guide Discovering your storage with AWS DataSync Discovery AWS DataSync Discovery helps you accelerate your migration to AWS. With DataSync Discovery, you can do the following: • Understand how your on-premises storage is used – DataSync Discovery provides detailed reporting about your storage system resources, including utilization, capacity, and configuration information. • Get recommendations about migrating your data to AWS – DataSync Discovery can suggest AWS storage services (such as Amazon FSx for NetApp ONTAP, Amazon EFS, and Amazon FSx for Windows File Server) for your data. Recommendations include a cost estimate and help you understand how to configure a suggested storage service. When you're ready, you can then use DataSync to migrate your data to AWS.
Topics • How AWS DataSync Discovery works • Adding your on-premises storage system to DataSync Discovery • Working with DataSync discovery jobs • Viewing storage resource information collected by AWS DataSync Discovery • Getting recommendations from AWS DataSync Discovery • AWS DataSync Discovery statuses How AWS DataSync Discovery works Learn the key concepts and terminology related to AWS DataSync Discovery. DataSync Discovery architecture The following diagram illustrates how DataSync Discovery collects information and provides recommendations for migrating data from an on-premises storage system to AWS. How it works 256 AWS DataSync User Guide 1. A DataSync agent connects to your on-premises storage system's management interface (using port 443, for example). You then run a discovery job to collect information about your system. 2. The agent sends the information that it collects to DataSync Discovery through a public service endpoint. 3. Using the information that it collects, DataSync Discovery recommends AWS storage services that you can migrate your data to. Concepts and terminology Familiarize yourself with DataSync Discovery features. Topics • Agent • Discovery job • Storage system resource information • AWS storage recommendations Concepts and terminology 257 AWS DataSync User Guide Agent An agent is a virtual machine (VM) appliance that DataSync Discovery uses to access the management interface of your on-premises storage system. The agent collects (reads) information about how your storage resources are performing and being used. You can deploy an agent in your storage environment on VMware ESXi, Linux Kernel-based Virtual Machine (KVM), or Microsoft Hyper-V hypervisors. For storage in a virtual private cloud (VPC) in AWS, you can deploy an agent as an Amazon EC2 instance. A DataSync Discovery agent is no different than an agent that you can use for DataSync transfers, but we don't recommend using the same agent for these scenarios.
To get started, see Deploying your AWS DataSync agent. Discovery job You run a discovery job to collect information about your on-premises storage system through the storage system's management interface. You can run a discovery job between 1 hour and 31 days. You'll get more accurate AWS storage recommendations the longer your discovery job runs. For more information, see Working with DataSync discovery jobs. Storage system resource information DataSync Discovery can give you performance and utilization information about your on-premises storage system's resources. For example, get an idea about how much storage capacity is being used in a specific storage volume compared to how much capacity you originally provisioned. You can view this information as your discovery job collects it by using the following: • The DescribeStorageSystemResources operation • The DescribeStorageSystemResourceMetrics operation For more information, see Viewing storage resource information collected by AWS DataSync Discovery. Concepts and terminology 258 AWS DataSync User Guide AWS storage recommendations Using the information that it collects about your on-premises storage system's resources, DataSync Discovery recommends AWS storage services to
help plan your migration to AWS. You can view recommendations by using the DescribeStorageSystemResources operation. For more information, see Getting recommendations from AWS DataSync Discovery. Limitations • Currently, you can only activate DataSync Discovery agents with public service endpoints. Adding your on-premises storage system to DataSync Discovery Specify an on-premises storage system that you want AWS DataSync Discovery to collect information about and provide AWS storage migration recommendations for. Note DataSync Discovery currently supports NetApp Fabric-Attached Storage (FAS) and All Flash FAS (AFF) systems that are running ONTAP 9.7 or later. Accessing your on-premises storage system To collect information about your on-premises storage system, DataSync Discovery needs credentials that provide read access to your storage system's management interface. For security, DataSync Discovery stores these credentials in AWS Secrets Manager. Important If you update these credentials on your storage system, make sure to also update them in DataSync Discovery. You can do this by using the UpdateStorageSystem operation.
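When rotating credentials, it can help to keep the request shape in one place. The following sketch builds an UpdateStorageSystem request as a plain dict; the StorageSystemArn and Credentials field names match the DataSync API, but the helper function, ARN, and example values are ours and purely illustrative:

```python
# Sketch: assemble an UpdateStorageSystem request body. Only the field names
# are taken from the DataSync API; everything else here is a placeholder.
def build_update_credentials_request(storage_system_arn: str,
                                     username: str, password: str) -> dict:
    return {
        "StorageSystemArn": storage_system_arn,
        "Credentials": {"Username": username, "Password": password},
    }

request = build_update_credentials_request(
    "arn:aws:datasync:us-east-1:123456789012:system/storage-system-abcdef01234567890",
    "monitoring-user",
    "example-password",
)
# With AWS credentials configured, you might then pass this to an SDK call,
# for example: boto3.client("datasync").update_storage_system(**request)
print(sorted(request))
```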
Limitations 259 AWS DataSync User Guide How DataSync Discovery uses AWS Secrets Manager AWS Secrets Manager is a secret storage service that protects database credentials, API keys, and other secret information. DataSync Discovery uses Secrets Manager to protect the credentials that you provide for accessing your on-premises storage system. Secrets Manager encrypts secrets using AWS Key Management Service keys. For more information, see Secret encryption and decryption. You can configure Secrets Manager to automatically rotate secrets for you according to a schedule that you specify. This enables you to replace long-term secrets with short-term ones, which helps to significantly reduce the risk of compromise. For more information, see Rotate AWS Secrets Manager secrets. You pay for credentials stored in Secrets Manager. For more information, see AWS Secrets Manager Pricing. Adding your on-premises storage system You must provide some information about your storage system before DataSync Discovery can collect information about it. Using the AWS CLI Using the AWS Command Line Interface (AWS CLI), configure DataSync Discovery to work with your on-premises storage system. Before you begin: We recommend that you enable logging with CloudWatch. To add an on-premises storage system by using the AWS CLI 1. Copy the following add-storage-system command: aws datasync add-storage-system \ --server-configuration ServerHostname="domain-or-ip",ServerPort=network-port \ --system-type storage-system-type \ --credentials Username="your-management-interface-username",Password="your-management-interface-password" \ --agent-arns "agent-arn" 2. Specify the following required parameters in the command: Adding your on-premises storage system 260 AWS DataSync User Guide • --server-configuration ServerHostname – Specify the domain name or IP address of your storage system's management interface.
• --server-configuration ServerPort – Specify the network port that's needed to connect with the system's management interface. • --system-type – Specify the type of storage system that you're adding. • --credentials – Include the following options: • Username – Specify the user name needed to access your storage system's management interface. • Password – Specify the password needed to access your storage system's management interface. For more information, see Accessing your on-premises storage system. • --agent-arns – Specify the DataSync agent that you want to connect to your storage system's management interface. If you don't have an agent, see Deploying your AWS DataSync agent. 3. (Optional) Add any of the following parameters to the command: • --cloud-watch-log-group-arn – Specify the Amazon Resource Name (ARN) of the CloudWatch log group that you want to use to log DataSync Discovery activity. • --tags – Specify a Key and Value to tag the DataSync resource that's representing your storage system. A tag is a key-value pair that helps you manage, filter, and search for your DataSync resources. • --name – Specify a name for your storage system. 4. Run the add-storage-system command. You get a response that shows you the storage system ARN that you just added. { "StorageSystemArn": "arn:aws:datasync:us-east-1:123456789012:system/storage-system-abcdef01234567890" } Adding your on-premises storage system 261 AWS DataSync User Guide After you add the storage system, you can run a discovery job to collect information about the storage system. Removing your on-premises storage system When you remove an on-premises storage system from DataSync Discovery, you permanently delete any associated discovery jobs, collected data, and recommendations. Using the AWS CLI 1. Copy the following remove-storage-system command: aws datasync remove-storage-system --storage-system-arn "your-storage-system-arn" 2. For --storage-system-arn, specify the ARN of your storage system.
3. Run
the remove-storage-system command. If successful, you get an HTTP 200 response with an empty HTTP body. Logging DataSync Discovery activity to Amazon CloudWatch When you enable logging with Amazon CloudWatch, you can more easily troubleshoot issues with DataSync Discovery. For example, if your discovery job is interrupted, you can check the logs to locate the issue. If you resolve the problem within 12 hours of when it occurred, your discovery job picks up where it left off. If you configure your system by using the AWS CLI, you must create a log group with a resource policy that allows DataSync to log events to the log group. You can use a log group resource policy similar to one for DataSync tasks, with some differences: • For the service principal, use discovery-datasync.amazonaws.com.
• If you're using the ArnLike condition, specify a storage system ARN like this: "ArnLike": { "aws:SourceArn": [ "arn:aws:datasync:region:account-id:system/*" ] }, Removing your on-premises storage system 262 AWS DataSync User Guide Working with DataSync discovery jobs After you deploy your AWS DataSync agent and add your on-premises storage system to DataSync Discovery, you can run discovery jobs to collect information about the system and get AWS migration recommendations. Starting a discovery job You can run a discovery job for up to 31 days. A storage system can have only one active discovery job at a time. The information that a discovery job collects is available for up to 60 days following the end of the job (unless you remove the related storage system from DataSync Discovery before that). Tip DataSync Discovery can provide more accurate recommendations the longer your discovery job runs. We recommend running a discovery job for at least 14 days. Using the AWS CLI With the AWS Command Line Interface (AWS CLI), you can run a discovery job for as short as 1 hour. 1. Copy the following start-discovery-job command: aws datasync start-discovery-job \ --storage-system-arn "your-storage-system-arn" \ --collection-duration-minutes discovery-job-duration 2. Specify the following parameters in the command: • --storage-system-arn – Specify the Amazon Resource Name (ARN) of the on-premises storage system that you added to DataSync Discovery. • --collection-duration-minutes – Specify how long that you want the discovery job to run in minutes. Enter a value between 60 (1 hour) and 44640 (31 days). 3. Run the start-discovery-job command. You get a response that shows the discovery job that you just started. 
Working with discovery jobs 263 AWS DataSync User Guide { "DiscoveryJobArn": "arn:aws:datasync:us-east-1:123456789012:system/storage-system-abcdef01234567890/job/discovery-job-12345678-90ab-cdef-0abc-021345abcdef6" } Shortly after starting the discovery job, you can begin looking at the information that the job collects (including storage system capacity and usage). Stopping a discovery job Stop a discovery job at any time. You can still get recommendations for a stopped job. Using the AWS CLI 1. Copy the following stop-discovery-job command: aws datasync stop-discovery-job --discovery-job-arn "your-discovery-job-arn" 2. For --discovery-job-arn, specify the ARN of the discovery job that's currently running. 3. Run the stop-discovery-job command. If successful, you get an HTTP 200 response with an empty HTTP body. Viewing storage resource information collected by AWS DataSync Discovery AWS DataSync Discovery collects information about your on-premises storage system that can help you understand how its storage resources are configured, performing, and utilized. DataSync Discovery uses this information to generate recommendations for migrating your data to AWS. A discovery job can give you the following information about your storage system's resources (such as its volumes): • Total, available, and in use storage capacity • Number of Common Internet File System (CIFS) shares in a resource and whether a resource is available via Network File System (NFS) • Data transfer protocols • Performance (such as IOPS, throughput, and latency) Stopping a discovery job 264 AWS DataSync User Guide Viewing information collected about your storage system You can begin to see what kind of information DataSync Discovery is collecting about your on-premises storage system shortly after you start a discovery job.
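As a quick sketch of the start-discovery-job bounds described above (a --collection-duration-minutes value between 60 minutes and 44640 minutes), a client-side check before calling the API might look like this; the function name is ours, not part of any AWS tooling:

```python
# Documented bounds for --collection-duration-minutes:
# 60 (1 hour) through 44640 (31 days).
MIN_MINUTES, MAX_MINUTES = 60, 44640

def validate_collection_duration(minutes: int) -> int:
    if not MIN_MINUTES <= minutes <= MAX_MINUTES:
        raise ValueError(
            f"collection duration must be between {MIN_MINUTES} and {MAX_MINUTES} minutes"
        )
    return minutes

# The recommended minimum of 14 days, expressed in minutes:
print(validate_collection_duration(14 * 24 * 60))
```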
You can view this information by using the following options: • The DescribeStorageSystemResources operation – Get data about all of the storage system resources that DataSync Discovery can collect information about, including utilization, capacity, and configuration data. • The DescribeStorageSystemResourceMetrics operation – Get performance and capacity information that DataSync Discovery can
collect about a specific resource in your storage system. Using the AWS CLI The following steps show how to use the DescribeStorageSystemResources operation with the AWS CLI. 1. Copy the following describe-storage-system-resources command: aws datasync describe-storage-system-resources \ --discovery-job-arn "your-discovery-job-arn" \ --resource-type "storage-system-resource-type" 2. Specify the following parameters in the command: • --discovery-job-arn – Specify the Amazon Resource Name (ARN) of the discovery job that you ran. • --resource-type – Specify one of the following values, depending on what kind of storage system resources you want information about: • CLUSTER • SVM • VOLUME 3. (Optional) Specify the --resource-ids parameter with the IDs of the storage system resources that you want information about. Viewing information collected about your storage system 265 AWS DataSync User Guide 4. Run the describe-storage-system-resources command. The following example response returns information that a discovery job collected about two volumes in a storage system. Note that the RecommendationStatus is NONE for each volume. To get AWS storage recommendations, you must run the generate-recommendations command before the describe-storage-system-resources command.
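To make the capacity fields in a response like the one shown in this section concrete, here's a small sketch (our own function, not part of DataSync) that computes how much of a volume's provisioned capacity is in use from its CapacityUsed and CapacityProvisioned values:

```python
# Sketch: percentage of provisioned capacity in use, computed from the
# CapacityUsed and CapacityProvisioned fields (both in bytes) of a volume
# in a describe-storage-system-resources response.
def utilization_percent(capacity_used: int, capacity_provisioned: int) -> float:
    if capacity_provisioned <= 0:
        raise ValueError("CapacityProvisioned must be positive")
    return 100.0 * capacity_used / capacity_provisioned

# Values from the first volume (vol1) in the example response:
print(f"{utilization_percent(409600, 1099511627776):.6f}%")
```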
For more information, see Getting recommendations. { "ResourceDetails": { "NetAppONTAPVolumes": [ { "VolumeName": "vol1", "ResourceId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111", "CifsShareCount": 0, "SecurityStyle": "unix", "SvmUuid": "a1b2c3d4-5678-90ab-cdef-EXAMPLEaaaaa", "SvmName": "my-svm", "CapacityUsed": 409600, "CapacityProvisioned": 1099511627776, "LogicalCapacityUsed": 409600, "NfsExported": true, "SnapshotCapacityUsed": 573440, "MaxP95Performance": { "IopsRead": 251.0, "IopsWrite": 44.0, "IopsOther": 17.0, "IopsTotal": 345.0, "ThroughputRead": 2.06, "ThroughputWrite": 0.88, "ThroughputOther": 0.11, "ThroughputTotal": 2.17, "LatencyRead": 0.06, "LatencyWrite": 0.07, "LatencyOther": 0.13 }, "Recommendations": [], "RecommendationStatus": "NONE" }, { "VolumeName": "root_vol", Viewing information collected about your storage system 266 AWS DataSync User Guide "ResourceId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE22222", "CifsShareCount": 0, "SecurityStyle": "unix", "SvmUuid": "a1b2c3d4-5678-90ab-cdef-EXAMPLEaaaaa", "SvmName": "my-svm", "CapacityUsed": 462848, "CapacityProvisioned": 1073741824, "LogicalCapacityUsed": 462848, "NfsExported": true, "SnapshotCapacityUsed": 421888, "MaxP95Performance": { "IopsRead": 261.0, "IopsWrite": 53.0, "IopsOther": 23.0, "IopsTotal": 360.0, "ThroughputRead": 10.0, "ThroughputWrite": 2.0, "ThroughputOther": 4.0, "ThroughputTotal": 12.0, "LatencyRead": 0.25, "LatencyWrite": 0.3, "LatencyOther": 0.55 }, "Recommendations": [], "RecommendationStatus": "NONE" } ] } } Getting recommendations from AWS DataSync Discovery After AWS DataSync Discovery collects information about your on-premises storage system, it can recommend moving your data on a per-resource basis to one or more of the following AWS storage services: • Amazon FSx for NetApp ONTAP • Amazon Elastic File System (Amazon EFS) • Amazon FSx for Windows File Server Getting recommendations 267 AWS DataSync User Guide What's included in the recommendations? 
DataSync Discovery recommendations include storage configurations and cost estimates to help you choose the AWS storage service that works for your data. AWS storage configuration DataSync Discovery provides information about how you might want to configure a recommended AWS storage service. The storage configuration is designed to optimize costs while helping meet storage performance and capacity needs based on information that's collected during a discovery job. The storage configuration is only an approximation and might not account for all capabilities provided by an AWS storage service. For more information, see What's not included in the recommendations? Estimated cost DataSync Discovery provides an estimated monthly cost for each AWS storage service that it recommends. The cost is based on standard AWS pricing and provides only an estimate of your AWS fees. It does not include any taxes that might apply. Your actual fees depend on a variety of factors, including your usage of AWS services. The estimated cost also doesn't include the one-time or periodic fees for migrating your data to AWS. What's not included in the recommendations? DataSync Discovery won't recommend an AWS storage service that doesn't meet your storage configuration needs. Additionally, the following AWS storage capabilities currently aren't accounted for when recommendations are determined: • Amazon FSx for NetApp ONTAP – Single-AZ deployments and backup storage • Amazon EFS – EFS One Zone storage classes and backup storage • Amazon FSx for Windows File Server – Single-AZ deployments and backup storage What's included in the recommendations? 268 AWS DataSync User Guide Getting recommendations You can generate AWS storage recommendations after your discovery job completes, when you stop the job, and even sometimes if the job completes but had some issues collecting information from your storage system. 
There might be situations when you can't get recommendations (for example, if your discovery job fails). For more information, see Recommendation statuses. Tip Before starting your migration to AWS, review the DataSync Discovery recommendations with your AWS account team. Using
the AWS CLI 1. Copy the following describe-discovery-job command: aws datasync describe-discovery-job --discovery-job-arn "your-discovery-job-arn" 2. For the --discovery-job-arn parameter, specify the Amazon Resource Name (ARN) of the discovery job that you ran on the storage system. 3. Run the describe-discovery-job command. If your response includes a Status that isn't FAILED, you can continue. If you see FAILED, you must run another discovery job on your storage system to try to generate recommendations. 4. If your discovery job completed successfully, skip this step. Otherwise, do the following to manually generate recommendations: a. Copy the following generate-recommendations command: aws datasync generate-recommendations \ --discovery-job-arn "your-discovery-job-arn" \ --resource-type cluster-svm-volume \ --resource-ids storage-resource-UUIDs b. For the --discovery-job-arn parameter, specify the ARN of the same discovery job that you specified in Step 2. c. For the --resource-type parameter, specify CLUSTER, SVM, or VOLUME depending on the kind of resource you want recommendations on. d. For the --resource-ids parameter, specify universally unique identifiers (UUIDs) of the resources that you want recommendations on. e.
Run the generate-recommendations command. f. Wait until the RecommendationStatus element in the response has a COMPLETED status, then move to the next step. 5. Copy the following describe-storage-system-resources command: aws datasync describe-storage-system-resources \ --discovery-job-arn "your-discovery-job-arn" \ --resource-type cluster-svm-volume 6. Specify the following parameters in the command: • --discovery-job-arn – Specify the ARN of the same discovery job that you specified in Step 2. • --resource-type – Specify the resource type you generated recommendations on (for example, VOLUME). 7. Run the describe-storage-system-resources command. Note In the response, if you don't see COMPLETED for RecommendationStatus, check the recommendation statuses for more information. You may need to retry generating recommendations. In this example response, the Recommendations element suggests a couple of AWS storage services where you can migrate a specific volume, how you might configure the service, and estimated monthly AWS storage costs. { "Recommendations": [{ "StorageType": "fsxOntap", "StorageConfiguration": { "StorageCapacityGB": "1024", "ProvisionedIOpsMode": "AUTOMATIC", "CapacityPoolGB": "0", "TotalIOps": "0", "DeploymentType": "Multi-AZ", "ThroughputCapacity": "128" }, "EstimatedMonthlyStorageCost": "410.0" }, { "StorageType": "efs", "StorageConfiguration": { "InfrequentAccessStorageGB": "1", "StandardStorageGB": "1", "InfrequentAccessRequests": "0", "ProvisionedThroughputMBps": "0", "PerformanceMode": "General Purpose", "ThroughputMode": "Bursting" }, "EstimatedMonthlyStorageCost": "1.0" } ], "RecommendationStatus": "COMPLETED" } AWS DataSync Discovery statuses You can check the status of your discovery jobs and whether AWS DataSync Discovery can provide storage recommendations for your AWS migrations. Discovery job statuses Use the following table to understand what's going on with your discovery job.
API status RUNNING WARNING Description Your discovery job is running. The job collects data about your on-premises storage system for the duration that you specified. Your discovery job has encountered errors and currently can't collect data. Review the DataSync Discovery statuses 271 AWS DataSync API status STOPPED COMPLETED COMPLETED_WITH_ISSUES TERMINATED FAILED User Guide Description Amazon CloudWatch logs and address these issues within 12 hours, or the job will be terminated. You stopped your discovery job before the job was expected to finish. Your discovery job successfully collected all data from your on-premises storage system. There were times during the discovery job when DataSync Discovery couldn't collect data. For details, see your CloudWatch logs. Your discovery job was canceled because of unresolved issues and some data wasn’t collected. For details, see your CloudWatch logs. Your discovery job encountered issues and couldn’t collect data from your on-premis es storage system. For details, see your CloudWatch logs. Recommendation statuses Use the following table to understand whether DataSync Discovery recommendations for a specific on-premises storage resource are ready to view. API status NONE Description You can't generate recommendations yet. Try generating recommendations when your discovery job completes. Recommendation statuses 272 AWS DataSync API status NONE IN_PROGRESS COMPLETED FAILED NONE COMPLETED User Guide Description Your discovery job collected enough data for DataSync Discovery to provide recommend ations. You may be able to generate recommendations if you stopped the discovery job early or the job completed but had issues with data collection. DataSync Discovery is working on your recommendations. How long this takes depends on how many resources you're generating recommendations for. If you're using the console, it may take a few minutes to generate recommendations for a storage resource. You can view your recommendations. 
DataSync
Discovery couldn't generate recommendations. You can review your CloudWatch logs to identify the issue and try generating the recommendations again. Recommendations aren't available. You may see this status for a failed discovery job or issue with the storage resource. DataSync Discovery currently doesn't support an AWS storage service that meets the needs of the storage resource. Monitoring your AWS DataSync transfers Monitoring is important for maintaining the reliability and performance of your AWS DataSync transfer and storage discovery activities. We recommend that you collect monitoring data so that you can more easily debug errors if they occur. Before you start monitoring DataSync, however, create a monitoring plan that includes answers to the following questions: • What are your monitoring goals? • What resources will you monitor? • How often will you monitor these resources? • What monitoring tools will you use? • Who will perform the monitoring tasks? • Who should be notified when something goes wrong? AWS provides various services and tools for monitoring DataSync. You can configure some of these to do the monitoring for you, but some require manual intervention. We recommend that you automate monitoring tasks as much as possible.
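One monitoring task that is easy to automate is waiting for a status to settle, for example waiting for a RecommendationStatus to leave IN_PROGRESS. The sketch below is illustrative only, not DataSync tooling: the poll_status helper is a hypothetical name, and in practice the injected fetch callable would wrap an AWS SDK describe call.

```python
import time

# Terminal recommendation statuses per the table above; an assumption
# that NONE, COMPLETED, and FAILED are resting states.
TERMINAL = {"NONE", "COMPLETED", "FAILED"}

def poll_status(fetch, interval_s=1.0, max_attempts=60):
    """Call fetch() until it returns a terminal status or attempts run out.

    fetch is any zero-argument callable returning a status string,
    for example a wrapper around an AWS SDK describe operation."""
    for attempt in range(max_attempts):
        status = fetch()
        if status in TERMINAL:
            return status
        if attempt < max_attempts - 1:
            time.sleep(interval_s)
    raise TimeoutError("status did not reach a terminal state")

# Example with a stubbed fetcher that finishes on the third call.
responses = iter(["IN_PROGRESS", "IN_PROGRESS", "COMPLETED"])
print(poll_status(lambda: next(responses), interval_s=0.0))  # prints COMPLETED
```

Because the fetcher is injected, the same loop works for discovery job statuses or task execution statuses; only the terminal-status set changes.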
Topics • Understanding data transfer performance counters • Monitoring data transfers with Amazon CloudWatch metrics • Monitoring your data transfers with task reports • Monitoring data transfers with Amazon CloudWatch Logs • Logging AWS DataSync API calls with AWS CloudTrail • Monitoring events by using Amazon EventBridge • Monitoring AWS DataSync with manual tools Understanding data transfer performance counters When you start a task, AWS DataSync provides counters to help track your data transfer's performance and progress. Understanding data transfer performance counters 274 AWS DataSync User Guide Use the following information to understand what each counter represents. You can view these counters in the DataSync console or a DescribeTaskExecution response. Some counters aren't available with every task mode. Console DescribeTaskExecution Task mode support Description – BytesWritten Enhanced, Basic The number of logical bytes that DataSync actually writes to the destination location. Data throughput – Enhanced, Basic The rate at which DataSync writes logical bytes to the destination location. If you're using DescribeTaskExecution, how you calculate this counter depends on your task mode: • Enhanced mode: Divide BytesWritten by TotalDuration • Basic mode: Divide BytesWritten by TransferDuration Data transferred BytesTransferred Enhanced, Basic The number of bytes that DataSync sends to the network before compression Understanding data transfer performance counters 275 AWS DataSync Console DescribeTaskExecution Task mode support Description User Guide (if compression is possible). For the number of bytes transferred over the network, see the Network throughput (in console) or BytesCompressed (in DescribeTaskExecution) counter. Deleted from destination FilesDeleted Enhanced, Basic The number of files, objects, and directories that DataSync actually deletes in your destination location.
If you don't configure your task to delete data in the destination that isn't in the source: • Deleted from destination doesn't display in the console. • FilesDeleted always shows a value of 0. Understanding data transfer performance counters 276 AWS DataSync Console DescribeTaskExecution Task mode support Description User Guide – EstimatedBytesToTransfer Enhanced, Basic The number of logical bytes that DataSync expects to write to the destination location. – EstimatedFilesToDelete Enhanced, Basic The number of files, objects, and directories that DataSync expects to delete in your destination location. If you don't configure your task to delete data in the destination that isn't in the source, the value is always 0. Understanding data transfer performance counters 277 AWS DataSync Console DescribeTaskExecution Task mode support Description User Guide – EstimatedFilesToTransfer Enhanced, Basic The number of files, objects, and directories that DataSync expects to transfer over the network. This value is calculated while DataSync prepares the transfer. How this gets calculated depends primarily on the transfer mode you're using: • If transfer mode is set to transfer only data that has changed: The calculation is based on comparing the content of the source and destination locations and determining the difference that needs to be transferred. The difference can include: • Anything that's added or Understanding data transfer performance
counters 278 AWS DataSync Console DescribeTaskExecution Task mode support Description User Guide modified at the source location. • Anything that's in both locations and modified at the destination after an initial transfer (unless you configure your task to not overwrite data in the destination). • (Basic mode only) The number of items that DataSync expects to delete (if you configure your task to delete data in the destination). • If transfer mode is set to transfer all data: The calculation is based only on the items that DataSync finds at the source location. Understanding data transfer performance counters 279 AWS DataSync Console DescribeTaskExecution Task mode support Description User Guide File throughput – Enhanced, Basic The rate at which DataSync transfers files, objects, and directories over the network. If you're using DescribeTaskExecution, how you calculate this counter depends on your task mode: • Enhanced mode: Divide FilesTransferred by TotalDuration • Basic mode: Divide FilesTransferred by TransferDuration
If there are failures, you can see these alongside the Prepared, Transferred, Skipped, and Deleted from destination console counters, respectively. Understanding data transfer performance counters 281 AWS DataSync Console DescribeTaskExecution Task mode support Description User Guide Listed at source FilesListed.AtSource Enhanced The number of objects that DataSync finds at your source location. • With a manifest, DataSync lists only what's in your manifest (and not everything in your source location). • With an include filter, DataSync lists only what matches the filter at your source location. • With an exclude filter, DataSync lists everything at your source location before applying the filter. Understanding data transfer performance counters 282 AWS DataSync Console DescribeTaskExecution Task mode support Description User Guide – FilesListed.AtDestinationForDelete Enhanced The number of objects that DataSync finds at your destination location. This counter is only applicable if you configure your task to delete data in the destination that isn't in the source. Understanding data transfer performance counters 283 AWS DataSync Console DescribeTaskExecution Task mode support Description User Guide Network throughput* BytesCompressed Enhanced, Basic The number of physical bytes that DataSync transfers over the network after compression (if compression is possible). This number is typically less than Data transferred (in console) or BytesTransferred (in DescribeTaskExecution) unless the data isn't compressible. * – For Enhanced mode, Network throughput doesn't display in the console. Understanding data transfer performance counters 284 AWS DataSync Console DescribeTaskExecution Task mode support Description User Guide Percent compressed – Basic The percentage of transfer data that DataSync compressed before sending it over the network. If you're using DescribeTaskExecution, you can calculate this counter with 1 - BytesCompressed / BytesWritten.
Understanding data transfer performance counters 285 AWS DataSync Console DescribeTaskExecution Task mode support Description User Guide Prepared FilesPrepared Enhanced The number of objects that DataSync will attempt to transfer after comparing your source and destination locations. In the console, this counter can also show you the number of objects that DataSync skips during preparation. For more information, see How DataSync prepares your data transfer. This counter isn't applicable if you configure your task to transfer all data. In that scenario, DataSync copies everything from the source to the destination without comparing differences between locations. Understanding data transfer performance counters 286 AWS DataSync Console DescribeTaskExecution Task mode support Description User Guide Processing rate – Enhanced, Basic The rate at which DataSync reads files, objects, and directories at your source location. The processing rate is based on several CloudWatch metrics. The exact metrics depend on the task mode you're using. Enhanced mode: • FilesListedSource • FilesPrepared • FilesTransferred • FilesVerified Basic mode: • FilesPreparedSource • FilesPreparedDestination • FilesTransferred • FilesVerifiedSource • FilesVerifiedDestination Understanding data transfer performance counters 287 AWS DataSync Console DescribeTaskExecution Task mode support Description User Guide Remaining – Basic The remaining number of files, objects, and
directories that DataSync expects to transfer over the network. If you're using DescribeTaskExecution, you can calculate this counter by subtracting FilesTransferred from EstimatedFilesToTransfer. Understanding data transfer performance counters 288 AWS DataSync Console DescribeTaskExecution Task mode support Description User Guide Skipped* FilesSkipped Enhanced, Basic The number of files, objects, and directories that DataSync skips during your transfer. * – For Enhanced mode, Skipped doesn't display in the console. Instead, skipped items are included in the Prepared counter when transferring only the data that has changed or the Transferred counter when transferring all data. Understanding data transfer performance counters 289 AWS DataSync Console DescribeTaskExecution Task mode support Description User Guide Transferred FilesTransferred Enhanced, Basic The number of files, objects, and directories that DataSync transfers over the network. This value is updated periodically during your task execution when something is read from the source and sent over the network. If DataSync fails to transfer something, this value can be less than EstimatedFilesToTransfer. In some cases, this value can also be greater than EstimatedFilesToTransfer.
This counter is implementation-specific for some location types, so don't use it as an exact indication of what's transferring or to monitor your task execution. Understanding data transfer performance counters 290 AWS DataSync Console DescribeTaskExecution Task mode support Description User Guide In the console, this counter can also show you the number of objects that DataSync skips during the transfer of an Enhanced mode task. For more information, see How DataSync transfers your data. Verified FilesVerified Enhanced, Basic The number of files, objects, and directories that DataSync verifies during your transfer. When you configure your task to verify only transferred data, DataSync doesn't verify directories in some situations or files or objects that fail to transfer. Monitoring data transfers with Amazon CloudWatch metrics Amazon CloudWatch provides metrics to track DataSync transfer performance and troubleshoot issues with your transfer task. Monitoring data transfers with CloudWatch metrics 291 AWS DataSync User Guide
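You can also query these metrics programmatically. The sketch below only builds the request parameters for a CloudWatch GetMetricStatistics call, using the aws/datasync namespace and TaskId dimension described in this section; the boto3 call itself is shown commented out because running it requires AWS credentials, and the task ID here is just an example in the form the guide describes.

```python
from datetime import datetime, timedelta, timezone

def build_metric_query(task_id, metric_name, hours=1, period_s=300):
    """Build GetMetricStatistics parameters for a DataSync task metric.

    DataSync metrics arrive in 5-minute intervals, so period_s
    defaults to 300 seconds."""
    end = datetime.now(timezone.utc)
    return {
        "Namespace": "aws/datasync",  # namespace as given in this guide
        "MetricName": metric_name,
        "Dimensions": [{"Name": "TaskId", "Value": task_id}],
        "StartTime": end - timedelta(hours=hours),
        "EndTime": end,
        "Period": period_s,
        "Statistics": ["Sum"],
    }

# Example task ID in the form described below (task-01234567890abcdef).
params = build_metric_query("task-01234567890abcdef", "FilesTransferred")
# import boto3
# datapoints = boto3.client("cloudwatch").get_metric_statistics(**params)["Datapoints"]
print(params["MetricName"])
```

The same parameter dictionary works for any metric in the table that follows; swap in AgentId as the dimension when monitoring an agent instead of a task.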
CloudWatch metric Task mode support Description BytesComp Basic ressed The number of physical bytes that DataSync transfers over the network after compression (if compression is possible). This number is typically less than BytesTran sferred unless the data isn't compressible. Unit: Bytes BytesPrep aredDesti nation Basic The number of logical bytes that DataSync prepares at the destination location. CloudWatch metrics for DataSync 292 AWS DataSync User Guide CloudWatch metric Task mode support Description Unit: Bytes BytesPrep aredSource Basic The number of logical bytes that DataSync prepares at the source location. BytesTran Basic sferred Unit: Bytes The number of bytes that DataSync sends to the network before compression (if compression is possible) . For the number of bytes transferred over the network, see the BytesCompressed metric. Unit: Bytes BytesVeri fiedDesti nation Basic The number of logical bytes that DataSync verifies at the destination location. Unit: Bytes BytesVeri fiedSource Basic The number of logical bytes that DataSync verifies at the source location. Units: Bytes BytesWritten Enhanced, Basic The number of logical bytes that DataSync writes to the destination location. FilesDeleted Enhanced, Basic Unit: Bytes The number of files, objects, and directories that DataSync deletes in your destination location. If you don't configure your task to delete data in the destinati on that isn't in the source, the value
is always 0. Unit: Count CloudWatch metrics for DataSync 293 AWS DataSync User Guide CloudWatch metric Task mode support Description FilesListedSource Enhanced The number of objects that DataSync finds at your source location. Unit: Count FilesPrepared Enhanced The number of objects that DataSync will attempt to transfer after comparing your source and destination locations. For more information, see How DataSync prepares your data transfer. This metric isn't applicable if you configure your task to transfer all data. In that scenario, DataSync copies everything from the source to the destination without comparing differences between the locations. Unit: Count FilesPreparedDestination Basic The number of files, objects, and directories that DataSync prepares at the destination location. Unit: Count FilesPreparedSource Basic The number of files, objects, and directories that DataSync prepares at the source location. Unit: Count FilesSkipped Basic The number of files, objects, and directories that DataSync skips during your transfer. Unit: Count CloudWatch metrics for DataSync 294 AWS DataSync User Guide CloudWatch metric Task mode support Description FilesTransferred Enhanced, Basic The number of files, objects, and directories that DataSync transfers over the network.
This value is updated periodically during the task execution when something is read from the source and sent over the network. Note This value can be less than EstimatedFilesToTransfer in a DescribeTaskExecution response if DataSync fails to transfer something. In some cases, this value can also be greater than EstimatedFilesToTransfer. This metric is implementation-specific for some location types, so don't use it as an exact indication of what transferred or to monitor your task execution. Unit: Count FilesVerified Enhanced The number of objects that DataSync verifies during your transfer. Unit: Count FilesVerifiedDestination Basic The number of files, objects, and directories that DataSync verifies at the destination location. Unit: Count FilesVerifiedSource Basic The number of files, objects, and directories that DataSync verifies at the source location. Unit: Count CloudWatch metrics for DataSync 295 AWS DataSync User Guide Monitoring your data transfers with task reports Task reports provide detailed information about what AWS DataSync attempts to transfer, skip, verify, and delete during a task execution. For more information, see How DataSync transfers files, objects, and directories. Task reports are generated in JSON format. You can customize the level of detail in your reports: • Summary only task reports give you the necessary details about your task execution, such as how many files transferred and whether DataSync could verify the data integrity of those files. • Standard task reports include a summary plus detailed reports that list each file, object, or folder that DataSync attempts to transfer, skip, verify, and delete. With a standard task report, you can also specify the report level to show only the task execution's errors or its successes and errors.
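If you generated a successes-and-errors report but only want the failures, you can filter the detailed entries after the fact. The sketch below assumes a simplified entry shape with a TransferStatus field of SUCCESS or FAILED; the real report schema may differ (and Enhanced and Basic mode reports use somewhat different schemas), so treat the field names as illustrative assumptions.

```python
def errors_only(entries, status_key="TransferStatus"):
    """Filter detailed task-report entries down to failures.

    Assumes each entry carries a status field whose value is
    "SUCCESS" when the item was handled successfully."""
    return [e for e in entries if e.get(status_key) != "SUCCESS"]

# Illustrative entries only; these field names are assumptions,
# not the exact task-report schema.
entries = [
    {"RelativePath": "/photos/picture1.png", "TransferStatus": "SUCCESS"},
    {"RelativePath": "/photos/picture2.png", "TransferStatus": "FAILED"},
]
for e in errors_only(entries):
    print(e["RelativePath"])  # prints /photos/picture2.png
```

The same filter works for the skipped, verified, and deleted detail reports described next, since each records whether the item was handled successfully.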
Use cases Here are some situations where task reports can help you monitor and audit your data transfers: • When migrating millions of files, quickly identify files that DataSync has issues transferring. • Verify chain-of-custody processes for your files. Summary only task reports A report that's only a summary of a task execution includes the following details: • The AWS account that ran the task execution • The source and destination locations • The total number of files, objects, and folders that were skipped, transferred, verified, and deleted • The total bytes (logical and physical) that were transferred • If the task execution was completed, canceled, or encountered an error • The start and end times (including the total time of the transfer) • The task's settings (such as bandwidth limits, data integrity verification, and other options for your DataSync transfer) Monitoring data transfers with task reports 296 AWS DataSync Standard task reports User Guide A standard task report includes a summary of your task execution plus detailed reports of what DataSync attempts to transfer, skip, verify, and delete. Topics • Report level • Transferred reports • Skipped reports • Verified reports • Deleted reports Report level With standard task reports, you can choose one of the following report levels: • Errors only • Successes and errors (essentially a list of everything that happened during your task execution) For example, you might want to
see which files DataSync skipped successfully during your transfer and which ones it didn't. Files that DataSync skipped successfully might be ones that you purposely want DataSync to exclude because they already exist in your destination location. However, a skip error might indicate, for instance, that DataSync doesn't have the right permissions to read a file. Transferred reports A list of files, objects, and directories that DataSync attempted to transfer during your task execution. A transferred report includes the following details: • The paths for the transferred data • What was transferred (content, metadata, or both) • The metadata, which includes the data type, content size (objects and files only), and more • The time when an item was transferred • The object version (if the destination is an Amazon S3 bucket that has versioning enabled) Standard task reports 297 AWS DataSync User Guide • If something was overwritten in the destination • Whether an item transferred successfully Note When moving data between S3 buckets, the prefix that you specify in your source location can show up in your report (or in Amazon CloudWatch logs), even if that prefix doesn't exist as an object in your destination location. (In the DataSync console, you might also notice this prefix showing up as skipped or verified data.)
Skipped reports A list of files, objects, and directories that DataSync finds in your source location but didn't attempt to transfer. The reasons DataSync skips data can depend on several factors, such as how you configure your task and storage system permissions. Here are some examples: • There's a file that exists in your source and destination locations. The file in the source hasn't been modified since the previous task execution. Since you're only transferring data that has changed, DataSync doesn't transfer that file next time you run your task. • An object that exists in both of your locations changes in your source. When you run your task, DataSync skips this object in your destination because your task doesn't overwrite data in the destination. • DataSync skips an object in your source that's using an archival storage class and isn't restored. You must restore an archived object for DataSync to read it. • DataSync skips a file, object, or directory in your source location because it can't read it. If this happens and isn't expected, check your storage's access permissions and make sure that DataSync can read what was skipped. A skipped report includes the following details: • The paths for skipped data • The time when an item was skipped • The reason it was skipped • Whether an item was skipped successfully Standard task reports 298 AWS DataSync User Guide Note Skipped reports can be large when they include successes and errors, when you configure your task to transfer only the data that has changed, and when source data already exists in the destination. Verified reports A list of files, objects, and directories that DataSync attempted to verify the integrity of during your task execution.
A verified data report includes the following details:
• The paths for verified data
• The time when an item was verified
• The reason for the verification error (if any)
• The source and destination SHA256 checksums (files only)
• Whether an item was successfully verified

Note the following about verified reports:
• When you configure your task to verify only transferred data, DataSync doesn't verify directories in some situations or files or objects that fail to transfer. In either case, DataSync doesn't include unverified data in this report.
• If you're using Enhanced mode, verification might take longer than usual if you're transferring large objects.

Deleted reports

A list of files, directories, and objects that were deleted during your task execution. DataSync generates this report only if you configure your task to delete data in the destination location that isn't in the source.

A deleted data report includes the following details:
• The paths for deleted data
• Whether an item was successfully deleted
• The time when an item was deleted

Example task reports

The level of detail in your task report is up to you. Here are
some example transferred data reports with the following configuration:
• Report type – Standard
• Report level – Successes and errors

Note
Reports use the ISO-8601 standard for the timestamp format. Times are in UTC and measured in nanoseconds. This behavior differs from how some other task report metrics are measured. For example, task execution details, such as TransferDuration and VerifyDuration, are measured in milliseconds.

Enhanced mode task reports use a slightly different schema than Basic mode task reports. The following examples can help you know what to expect from your reports depending on the task mode you use.

Example transferred data reports with success status

The following reports show successful transfers for an object named object1.txt.
Enhanced mode

{
  "TaskExecutionId": "exec-abcdefgh12345678",
  "Transferred": [{
    "RelativePath": "object1.txt",
    "SourceMetadata": {
      "Type": "Object",
      "ContentSize": 6,
      "LastModified": "2024-10-04T14:40:55Z",
      "SystemMetadata": {
        "ContentType": "binary/octet-stream",
        "ETag": "\"9b2d7e1f8054c3a2041905d0378e6f14\"",
        "ServerSideEncryption": "AES256"
      },
      "UserMetadata": {},
      "Tags": []
    },
    "Overwrite": "False",
    "DstS3VersionId": "jtqRtX3jN4J2G8k0sFSGYK1f35KqpAVP",
    "TransferTimestamp": "2024-10-04T14:48:39.748862183Z",
    "TransferType": "CONTENT_AND_METADATA",
    "TransferStatus": "SUCCESS"
  }]
}

Basic mode

{
  "TaskExecutionId": "exec-abcdefgh12345678",
  "Transferred": [{
    "RelativePath": "/object1.txt",
    "SrcMetadata": {
      "Type": "Regular",
      "ContentSize": 6,
      "Mtime": "2022-01-07T16:59:26.136114671Z",
      "Atime": "2022-01-07T16:59:26.136114671Z",
      "Uid": 0,
      "Gid": 0,
      "Mode": "0644"
    },
    "Overwrite": "False",
    "DstS3VersionId": "jtqRtX3jN4J2G8k0sFSGYK1f35KqpAVP",
    "TransferTimestamp": "2022-01-07T16:59:45.747270957Z",
    "TransferType": "CONTENT_AND_METADATA",
    "TransferStatus": "SUCCESS"
  }]
}

Example transferred data reports with error status

The following reports provide examples of when DataSync can't transfer an object named object1.txt.

Enhanced mode

This report shows that DataSync can't access an object named object1.txt because of an AWS KMS permissions issue. (If you get an error like this, see Accessing S3 buckets using server-side encryption.)
{
  "TaskExecutionId": "exec-abcdefgh12345678",
  "Transferred": [{
    "RelativePath": "object1.txt",
    "SourceMetadata": {
      "Type": "Object",
      "ContentSize": 6,
      "LastModified": "2022-10-07T20:48:32Z",
      "SystemMetadata": {
        "ContentType": "binary/octet-stream",
        "ETag": "\"3a7c0b2f1d9e5c4a6f8b2e0d1c9f7a3b2\"",
        "ServerSideEncryption": "AES256"
      },
      "UserMetadata": {},
      "Tags": []
    },
    "Overwrite": "False",
    "TransferTimestamp": "2022-10-09T16:05:11.134040717Z",
    "TransferType": "CONTENT_AND_METADATA",
    "TransferStatus": "FAILED",
    "ErrorCode": "AccessDenied",
    "ErrorDetail": "User: arn:aws:sts::111222333444:assumed-role/AWSDataSyncS3Bucket/AwsSync-loc-0b3017fc4ba4a2d8d is not authorized to perform: kms:GenerateDataKey on resource: arn:aws:kms:us-east-1:111222333444:key/1111aaaa-22bb-33cc-44d-5555eeee6666 because no identity-based policy allows the kms:GenerateDataKey action"
  }]
}

Basic mode

This report shows that an object named object1.txt didn't transfer because of an S3 bucket permissions issue. (If you get an error like this, see Providing DataSync access to S3 buckets.)

{
  "TaskExecutionId": "exec-abcdefgh12345678",
  "Transferred": [{
    "RelativePath": "/object1.txt",
    "SrcMetadata": {
      "Type": "Regular",
      "ContentSize": 6,
      "Mtime": "2022-01-07T16:59:26.136114671Z",
      "Atime": "2022-01-07T16:59:26.136114671Z",
      "Uid": 0,
      "Gid": 0,
      "Mode": "0644"
    },
    "Overwrite": "False",
    "DstS3VersionId": "jtqRtX3jN4J2G8k0sFSGYK1f35KqpAVP",
    "TransferTimestamp": "2022-01-07T16:59:45.747270957Z",
    "TransferType": "CONTENT_AND_METADATA",
    "TransferStatus": "FAILED",
    "FailureReason": "S3 Get Object Failed",
    "FailureCode": 40974
  }]
}

Limitations

• Individual task reports can't exceed 5 MB. If you're copying a large number of files, your task report might be split into multiple reports.
• There are situations when creating task reports can affect the performance of your data transfer.
For example, you might notice this when your network connection has high latency and the files you're transferring are small, or when you're copying only metadata changes.

Creating your DataSync task reports

AWS DataSync task reports can be only a summary of your task execution or a set of detailed reports about what DataSync attempts to transfer, skip, verify, and delete.

Prerequisites

Before you can create a task report, you must do the following.

Topics
• Create an S3 bucket for your task reports
• Allow DataSync to upload task reports to your S3 bucket

Create an S3 bucket for your task reports

If you don't already have one, create an S3 bucket where DataSync can upload your task report. Reports are stored in the S3 Standard storage class. We recommend the following for this bucket:
• If you're planning to transfer data to an S3 bucket, don't use the same bucket for your task report if you disable the Keep deleted files option. Otherwise, DataSync will delete any previous task reports each time you execute a task since those reports don't exist in your source location.
• To avoid a complex access permissions setup, make sure that your task report bucket is in the same AWS account and Region as your DataSync transfer task.

Allow DataSync to upload task reports to your S3 bucket

You must configure an AWS Identity and Access Management (IAM) role
that allows DataSync to upload a task report to your S3 bucket.

In the DataSync console, you can create an IAM role that in most cases automatically includes the permissions to upload a task report to your bucket. Keep in mind that this automatically generated role might not meet your needs from a least-privilege standpoint. This role also won't work if your bucket is encrypted with a customer managed AWS Key Management Service (AWS KMS) key (SSE-KMS). In these cases, you can create the role manually, as long as the role does at least the following:
• Prevents the cross-service confused deputy problem in the role's trusted entity. The following example shows how you can use the aws:SourceArn and aws:SourceAccount global condition context keys to prevent the confused deputy problem with DataSync.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "datasync.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": "123456789012"
        },
        "StringLike": {
          "aws:SourceArn": "arn:aws:datasync:us-east-2:123456789012:*"
        }
      }
    }
  ]
}

• Allows DataSync to upload a task report to your S3 bucket. The following example does this by including the s3:PutObject action only for a specific prefix (reports/) in your bucket.
{
  "Version": "2012-10-17",
  "Statement": [{
    "Action": [
      "s3:PutObject"
    ],
    "Effect": "Allow",
    "Resource": "arn:aws:s3:::your-task-reports-bucket/reports/*"
  }]
}

• If your S3 bucket is encrypted with a customer managed SSE-KMS key, the key's policy must include the IAM role that DataSync uses to access the bucket. For more information, see Accessing S3 buckets using server-side encryption.

Creating a summary only task report

You can configure a task report that includes a summary only when creating your DataSync task, starting your task, or updating your task. The following steps show how to configure a summary only task report when creating a task.

Using the DataSync console

1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/.
2. In the left navigation pane, expand Data transfer, then choose Tasks, and then choose Create task.
3. Configure your task's source and destination locations. For more information, see Where can I transfer my data with AWS DataSync?
4. Scroll down to the Task report section. For Report type, choose Summary only.
5. For S3 bucket for reports, choose an S3 bucket where you want DataSync to upload your task report.

Tip
If you're planning to transfer data to an S3 bucket, don't use the same bucket for your task report if you disable the Keep deleted files option. Otherwise, DataSync will delete any previous task reports each time you execute a task since those reports don't exist in your source location.

6. For Folder, enter a prefix to use for your task report when DataSync uploads the report to your S3 bucket (for example, reports/). Make sure to include the appropriate delimiter character at the end of your prefix. This character is usually a forward slash (/). For more information, see Organizing objects by using prefixes in the Amazon S3 User Guide.
7.
For IAM role, do one of the following:
• Choose Autogenerate to have DataSync automatically create an IAM role with the permissions that are required to access the S3 bucket. If DataSync previously created an IAM role for this S3 bucket, that role is chosen by default.
• Choose a custom IAM role that you created. In some cases, you might need to create the role yourself. For more information, see Allow DataSync to upload task reports to your S3 bucket.

Important
If your S3 bucket is encrypted with a customer managed SSE-KMS key, the key's policy must include the IAM role that DataSync uses to access the bucket. For more information, see Accessing S3 buckets using server-side encryption.

8. Finish creating your task, and then start the task to begin transferring your data. When your transfer is complete, you can view your task report.

Using the AWS CLI

1. Copy the following create-task AWS Command Line Interface (AWS CLI) command:

aws datasync create-task \
  --source-location-arn arn:aws:datasync:us-east-1:123456789012:location/loc-12345678abcdefgh \
  --destination-location-arn arn:aws:datasync:us-east-1:123456789012:location/loc-abcdefgh12345678 \
  --task-report-config '{
    "Destination":{
      "S3":{
        "Subdirectory":"reports/",
        "S3BucketArn":"arn:aws:s3:::your-task-reports-bucket",
        "BucketAccessRoleArn":"arn:aws:iam::123456789012:role/bucket-iam-role"
      }
    },
    "OutputType":"SUMMARY_ONLY"
  }'
2. For the --source-location-arn parameter, specify the Amazon Resource Name (ARN) of the source location in your transfer. Replace us-east-1 with the appropriate AWS Region, replace 123456789012 with the appropriate AWS account number, and replace 12345678abcdefgh with the appropriate source location ID.
3. For the --destination-location-arn parameter, specify the ARN of the destination location in your transfer. Replace us-east-1 with the appropriate AWS Region, replace 123456789012 with the appropriate AWS account number, and replace abcdefgh12345678 with the appropriate destination location ID.
4. For the --task-report-config parameter, do the following:
• Subdirectory – Replace reports/ with the prefix in your S3 bucket where you want DataSync to upload your task reports. Make sure to include the appropriate delimiter character at the end of your prefix. This character is usually a forward slash (/). For more information, see Organizing objects by using prefixes in the Amazon S3 User Guide.
• S3BucketArn – Specify the ARN of the S3 bucket where you want to upload your task report.

Tip
If you're planning to transfer data to an S3 bucket, don't use the same bucket for your task report if you disable the Keep deleted files option. Otherwise, DataSync will delete any previous task reports each time you execute a task since those reports don't exist in your source location.

• BucketAccessRoleArn – Specify the IAM role that allows DataSync to upload a task report to your S3 bucket. For more information, see Allow DataSync to upload task reports to your S3 bucket.

Important
If your S3 bucket is encrypted with a customer managed SSE-KMS key, the key's policy must include the IAM role that DataSync uses to access the bucket. For more information, see Accessing S3 buckets using server-side encryption.

• OutputType – Specify SUMMARY_ONLY. For more information, see Summary only task reports.

5. Run the create-task command to create your task. You get a response like the following that shows you the ARN of the task that you created. You will need this ARN to run the start-task-execution command.

{
  "TaskArn": "arn:aws:datasync:us-east-1:123456789012:task/task-12345678abcdefgh"
}

6. Copy the following start-task-execution command:

aws datasync start-task-execution \
  --task-arn arn:aws:datasync:us-east-1:123456789012:task/task-12345678abcdefgh

7. For the --task-arn parameter, specify the ARN of the task that you're starting. Use the ARN that you received from running the create-task command.
8. Run the start-task-execution command. When your transfer is complete, you can view your task report.

Creating a standard task report

You can configure a standard task report when creating your DataSync task, starting your task, or updating your task. The following steps show how to configure a standard task report when creating a task.

Using the DataSync console

1.
Open the AWS DataSync console at https://console.aws.amazon.com/datasync/.
2. In the left navigation pane, expand Data transfer, then choose Tasks, and then choose Create task.
3. Configure your task's source and destination locations. For more information, see Where can I transfer my data with AWS DataSync?
4. Scroll down to the Task report section. For Report type, choose Standard report.
5. For Report level, choose one of the following:
• Errors only – Your task report includes only issues with what DataSync tried to transfer, skip, verify, and delete.
• Successes and errors – Your task report includes what DataSync successfully transferred, skipped, verified, and deleted and what it didn't.
• Custom – Allows you to choose whether you want to see errors only or successes and errors for specific aspects of your task report. For example, you can choose Successes and errors for the transferred files list but Errors only for the rest of the report.
6. If you're transferring to an S3 bucket that uses object versioning, keep Include Amazon S3 object versions selected if you want your report to include the new version for each transferred object.
7. For S3 bucket for reports, choose an S3 bucket where you want DataSync to upload your task report.

Tip
If you're planning to transfer data to an S3 bucket, don't use the same bucket for your task report if you disable the Keep deleted files
option. Otherwise, DataSync will delete any previous task reports each time you execute a task since those reports don't exist in your source location.

8. For Folder, enter a prefix to use for your task report when DataSync uploads the report to your S3 bucket (for example, reports/). Make sure to include the appropriate delimiter character at the end of your prefix. This character is usually a forward slash (/). For more information, see Organizing objects by using prefixes in the Amazon S3 User Guide.
9. For IAM role, do one of the following:
• Choose Autogenerate to have DataSync automatically create an IAM role with the permissions that are required to access the S3 bucket. If DataSync previously created an IAM role for this S3 bucket, that role is chosen by default.
• Choose a custom IAM role that you created. In some cases, you might need to create the role yourself. For more information, see Allow DataSync to upload task reports to your S3 bucket.

Important
If your S3 bucket is encrypted with a customer managed SSE-KMS key, the key's policy must include the IAM role that DataSync uses to access the bucket. For more information, see Accessing S3 buckets using server-side encryption.

10. Finish creating your task, and then start the task to begin transferring your data. When your transfer is complete, you can view your task report.

Using the AWS CLI

1.
Copy the following create-task command:

aws datasync create-task \
  --source-location-arn arn:aws:datasync:us-east-1:123456789012:location/loc-12345678abcdefgh \
  --destination-location-arn arn:aws:datasync:us-east-1:123456789012:location/loc-abcdefgh12345678 \
  --task-report-config '{
    "Destination":{
      "S3":{
        "Subdirectory":"reports/",
        "S3BucketArn":"arn:aws:s3:::your-task-reports-bucket",
        "BucketAccessRoleArn":"arn:aws:iam::123456789012:role/bucket-iam-role"
      }
    },
    "OutputType":"STANDARD",
    "ReportLevel":"level-of-detail",
    "ObjectVersionIds":"include-or-not"
  }'

2. For the --source-location-arn parameter, specify the ARN of the source location in your transfer. Replace us-east-1 with the appropriate AWS Region, replace 123456789012 with the appropriate AWS account number, and replace 12345678abcdefgh with the appropriate source location ID.
3. For the --destination-location-arn parameter, specify the ARN of the destination location in your transfer. Replace us-east-1 with the appropriate AWS Region, replace 123456789012 with the appropriate AWS account number, and replace abcdefgh12345678 with the appropriate destination location ID.
4. For the --task-report-config parameter, do the following:
• Subdirectory – Replace reports/ with the prefix in your S3 bucket where you want DataSync to upload your task reports. Make sure to include the appropriate delimiter character at the end of your prefix. This character is usually a forward slash (/). For more information, see Organizing objects by using prefixes in the Amazon S3 User Guide.
• S3BucketArn – Specify the ARN of the S3 bucket where you want to upload your task report.

Tip
If you're planning to transfer data to an S3 bucket, don't use the same bucket for your task report if you disable the Keep deleted files option.
Otherwise, DataSync will delete any previous task reports each time you execute a task since those reports don't exist in your source location.

• BucketAccessRoleArn – Specify the IAM role that allows DataSync to upload a task report to your S3 bucket. For more information, see Allow DataSync to upload task reports to your S3 bucket.

Important
If your S3 bucket is encrypted with a customer managed SSE-KMS key, the key's policy must include the IAM role that DataSync uses to access the bucket. For more information, see Accessing S3 buckets using server-side encryption.

• OutputType – Specify STANDARD. For more information, see Standard task reports.
• (Optional) ReportLevel – Specify whether you want ERRORS_ONLY (the default) or SUCCESSES_AND_ERRORS in your report.
• (Optional) ObjectVersionIds – If you're transferring to an S3 bucket that uses object versioning, specify NONE if you don't want to include the new version for each transferred object in the report. By default, this option is set to INCLUDE.
• (Optional) Overrides – Customize the ReportLevel of a particular aspect of your report. For example, you might want to see SUCCESSES_AND_ERRORS for the list of what DataSync deletes in your destination location, but you want ERRORS_ONLY for everything else. In this example, you would add the following Overrides option to the --task-report-config parameter:

"Overrides":{
  "Deleted":{
    "ReportLevel":"SUCCESSES_AND_ERRORS"
  }
}

If you don't use Overrides, your entire report uses the ReportLevel that you specify.

5. Run the create-task command to create your task. You get a response like the following that shows you the ARN of the task that you created. You will need this ARN to run the start-task-execution command.

{
  "TaskArn": "arn:aws:datasync:us-east-1:123456789012:task/task-12345678abcdefgh"
}

6. Copy the following start-task-execution command:

aws datasync start-task-execution \
  --task-arn arn:aws:datasync:us-east-1:123456789012:task/task-12345678abcdefgh

7. For the --task-arn parameter, specify the ARN of the task that you're running. Use the ARN that you received from running the create-task command.
8. Run the start-task-execution command. When your transfer is complete, you can view your task report.

Viewing your DataSync task reports

DataSync creates task reports for every task execution. When your execution completes, you can find the related task reports in your S3 bucket. Task reports are organized under prefixes that include the IDs of your tasks and their executions.
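Given a task ID and a task execution ID, you can assemble the expected report prefix yourself before listing objects. Below is a minimal sketch, assuming the Summary-Reports and Detailed-Reports folder layout that this guide describes; the report_prefix helper and the placeholder IDs are ours:

```python
def report_prefix(reports_prefix, task_id, exec_id, standard=False):
    """Build the S3 prefix where DataSync places a task report.

    Assumes the layout described in this guide: summary-only reports live
    under Summary-Reports/ and standard reports under Detailed-Reports/,
    each followed by task ID and task execution ID folders.
    """
    kind = "Detailed-Reports" if standard else "Summary-Reports"
    return f"{reports_prefix.rstrip('/')}/{kind}/{task_id}/{exec_id}/"

# Placeholder IDs in the style used elsewhere in this guide.
print(report_prefix("reports", "task-12345678abcdefgh", "exec-abcdefgh12345678"))
# → reports/Summary-Reports/task-12345678abcdefgh/exec-abcdefgh12345678/
```

A prefix built this way can be passed to an S3 list or download call to fetch every report file for one execution.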
To help locate task reports in your S3 bucket, use these examples:
• Summary only task report – reports-prefix/Summary-Reports/task-id-folder/task-execution-id-folder
• Standard task report – reports-prefix/Detailed-Reports/task-id-folder/task-execution-id-folder

Because task reports are in JSON format, you have several options for viewing your reports:
• View a report by using Amazon S3 Select.
• Visualize reports by using AWS services such as AWS Glue, Amazon Athena, and Amazon QuickSight.

For more information about visualizing your task reports, see the AWS Storage Blog.

Monitoring data transfers with Amazon CloudWatch Logs

You can monitor your AWS DataSync transfer by using CloudWatch Logs. We recommend that you configure your task to at least log basic information (such as transfer errors).

Allowing DataSync to upload logs to a CloudWatch log group

To configure logging for your DataSync task, you need a CloudWatch log group that DataSync has permission to send logs to. You set up this access through an AWS Identity and Access Management (IAM) role. How this works depends on your task mode.

Enhanced mode
With Enhanced mode, DataSync automatically sends task logs to a log group named /aws/datasync. If that log group doesn't exist in your AWS Region, DataSync creates the log group on your behalf by using an IAM service-linked role when you create your task.

Basic mode
There are a couple of ways to set up a CloudWatch log group for a DataSync task using Basic mode. In the console, you can automatically create an IAM role that in most cases includes the permissions that DataSync requires to upload logs. Keep in mind that this automatically generated role might not meet your needs from a least-privilege standpoint. If you want to use an existing CloudWatch log group or are creating your tasks programmatically, you must create the IAM role yourself.
The following example is an IAM policy that grants these permissions.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DataSyncLogsToCloudWatchLogs",
      "Effect": "Allow",
      "Action": [
        "logs:PutLogEvents",
        "logs:CreateLogStream"
      ],
      "Principal": {
        "Service": "datasync.amazonaws.com"
      },
      "Condition": {
        "ArnLike": {
          "aws:SourceArn": [
            "arn:aws:datasync:region:account-id:task/*"
          ]
        },
        "StringEquals": {
          "aws:SourceAccount": "account-id"
        }
      },
      "Resource": "arn:aws:logs:region:account-id:log-group:*:*"
    }
  ]
}

The policy uses Condition statements to help ensure that only DataSync tasks from the specified account have access to the specified CloudWatch log group. We recommend using the aws:SourceArn and aws:SourceAccount global condition context keys in these Condition statements to protect against the confused deputy problem. For more information, see Cross-service confused deputy prevention.

To specify the DataSync task or tasks, replace region with the Region code for the AWS Region where the tasks are located (for example, us-west-2), and replace account-id with the AWS account ID of the account that contains the tasks. To specify the CloudWatch log group, replace the same values. You can also modify the Resource statement to target specific log groups. For more information about using SourceArn and SourceAccount, see Global condition keys in the IAM User Guide.

To apply the policy, save this policy statement to a file on your local computer. Then run the following AWS CLI command to apply the resource policy. To use this example command, replace full-path-to-policy-file with the path
to the file that contains your policy statement.

aws logs put-resource-policy --policy-name trust-datasync --policy-document file://full-path-to-policy-file

Note
Run this command by using the same AWS account and AWS Region where you activated your DataSync agent. For more information, see the Amazon CloudWatch Logs User Guide.

Configuring logging for your DataSync task

We recommend that you configure at least some level of logging for your DataSync task.

Before you begin
DataSync needs permission to upload logs to a CloudWatch log group. For more information, see Allowing DataSync to upload logs to a CloudWatch log group.

Using the DataSync console

The following instructions describe how to configure CloudWatch logging when creating a task. You can also configure logging when editing a task.
1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/.
2. In the left navigation pane, expand Data transfer, then choose Tasks, and then choose Create task.
3. Configure your task's source and destination locations. For more information, see Where can I transfer my data with AWS DataSync?
4. On the Configure settings page, choose a task mode and any other options.
You might be interested in some of the following options:
• Specify what data to transfer by using a manifest or filters.
• Configure how to handle file metadata and verify data integrity.
5. For Log level, choose one of the following options:
• Log basic information such as transfer errors – Publish logs with only basic information (such as transfer errors).
• Log all transferred objects and files – Publish logs for all files or objects that DataSync transfers and performs data-integrity checks on.
• Don't generate logs
6. Depending on your task mode, do one of the following to create or specify a CloudWatch log group:

Enhanced mode
When you choose Create task, DataSync automatically uses (or creates) a log group named /aws/datasync.

Basic mode
For CloudWatch log group, specify a log group that DataSync has permission to upload logs to by doing one of the following:
• Choose Autogenerate to automatically create a log group that allows DataSync to upload logs to it.
• Choose an existing log group in your current AWS Region. If you choose an existing log group, make sure that DataSync has permission to upload logs to that log group.
7. Choose Create task. You're ready to start your task.

Using the AWS CLI

1. Copy the following create-task command:

aws datasync create-task \
  --source-location-arn "arn:aws:datasync:us-east-1:account-id:location/location-id" \
  --destination-location-arn "arn:aws:datasync:us-east-1:account-id:location/location-id" \
  --task-mode "ENHANCED-or-BASIC" \
  --name "task-name" \
  --options '{"LogLevel": "log-level"}' \
  --cloudwatch-log-group-arn "arn:aws:logs:us-east-1:account-id:log-group:log-group-name:*"

2. For --source-location-arn, specify the Amazon Resource Name (ARN) of your source location.
3. For --destination-location-arn, specify the ARN of your destination location.
If you're transferring across AWS Regions or accounts, make sure that the ARN includes the 4. 5. 6. other Region or account ID. For --task-mode, specify ENHANCED or BASIC. (Recommended) For --name, specify a name for your task that you can remember. For LogLevel, specify one of the following options: • BASIC – Publish logs with only basic information (such as transfer errors). • TRANSFER – Publish logs for all files or objects that DataSync transfers and performs data- integrity checks on. • NONE – Don't generate logs. 7. For --cloudwatch-log-group-arn, specify the ARN of a CloudWatch log group. Important If your --task-mode is ENHANCED, you don't need to specify this option. For more information, see Allowing DataSync to upload logs to a CloudWatch log group. 8. Run the create-task command. If the command is successful, you get a response that shows you the ARN of the task that you created. For example: { "TaskArn": "arn:aws:datasync:us-east-1:111222333444:task/ task-08de6e6697796f026" } You're ready to start your task. Using the DataSync API You can configure CloudWatch logging for your task by using the CloudWatchLogGroupArn parameter with any of the following operations: Configuring logging for your DataSync task 318 AWS DataSync • CreateTask • UpdateTask Viewing DataSync task logs User Guide When you start your task, you can view the task execution's
logs by using the CloudWatch console or AWS CLI (among other options). For more information, see the Amazon CloudWatch Logs User Guide.

DataSync provides JSON-structured logs for Enhanced mode tasks. Basic mode tasks have unstructured logs. The following examples show how verification errors display in Enhanced mode logs compared to Basic mode logs.
Enhanced mode log example

{
    "Action": "VERIFY",
    "Source": {
        "LocationId": "loc-abcdef01234567890",
        "RelativePath": "directory1/directory2/file1.txt"
    },
    "Destination": {
        "LocationId": "loc-05ab2fdc272204a5f",
        "RelativePath": "directory1/directory2/file1.txt",
        "Metadata": {
            "Type": "Object",
            "ContentSize": 66060288,
            "LastModified": "2024-10-03T20:46:58Z",
            "S3": {
                "SystemMetadata": {
                    "ContentType": "binary/octet-stream",
                    "ETag": "\"1234abcd5678efgh9012ijkl3456mnop\"",
                    "ServerSideEncryption": "AES256"
                },
                "UserMetadata": {
                    "file-mtime": "1602647222/222919600"
                },
                "Tags": {}
            }
        }
    },
    "ErrorCode": "FileNotAtSource",
    "ErrorDetail": "Verification failed due to file being present at the destination but not at the source"
}

Basic mode log example

[NOTICE] Verification failed > /directory1/directory2/file1.txt
[NOTICE] /directory1/directory2/file1.txt dstMeta: type=R mode=0755 uid=65534 gid=65534 size=8972938 atime=1728657659/0 mtime=1728657659/0 extAttrsHash=0
[NOTICE] dstHash: f9c2cca900301d38b0930367d8d587153154af467da0fdcf1bebc0848ec72c0d

Logging AWS DataSync API calls with AWS CloudTrail

AWS DataSync is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service in DataSync. CloudTrail captures all API calls for DataSync as events. The calls that are captured include calls from the DataSync console and code calls to DataSync API operations. If you create a trail, you can enable continuous delivery of CloudTrail events to an Amazon S3 bucket, including events for AWS DataSync. If you don't configure a trail, you can still view the most recent events in the CloudTrail console in Event history. Using the information collected by CloudTrail, you can determine the request that was made to AWS DataSync, the IP address from which the request was made, who made the request, when it was made, and additional details.
To learn more about CloudTrail, see the AWS CloudTrail User Guide.

Working with DataSync information in CloudTrail

CloudTrail is enabled on your AWS account when you create the account. When activity occurs in AWS DataSync, that activity is recorded in a CloudTrail event along with other AWS service events in Event history. You can view, search, and download recent events in your AWS account. For more information, see Viewing events with CloudTrail event history.

For an ongoing record of events in your AWS account, including events for AWS DataSync, create a trail. A trail enables CloudTrail to deliver log files to an Amazon S3 bucket. By default, when you create a trail in the console, the trail applies to all AWS Regions. The trail logs events from all AWS Regions in the same AWS partition and delivers the log files to the Amazon S3 bucket that you specify. Additionally, you can configure other AWS services to further analyze and act upon the event data collected in CloudTrail logs. For more information, see the following:

• Overview for creating a trail
• CloudTrail supported services and integrations
• Configuring Amazon SNS notifications for CloudTrail
• Receiving CloudTrail log files from multiple Regions and Receiving CloudTrail log files from multiple accounts

All DataSync actions are logged by CloudTrail. (For more information, see the DataSync API reference.) For example, calls to the CreateAgent, CreateTask, and ListLocations operations generate entries in the CloudTrail log files.

Every event or log entry contains information about who generated the request. The identity information helps you determine the following:

• Whether the request was made with root or AWS Identity and Access Management (IAM) credentials.
• Whether the request was made with temporary security credentials for a role or federated user.
• Whether the request was made by another AWS service.
For more information, see CloudTrail userIdentity element in the AWS CloudTrail User Guide. Understanding DataSync log file entries A trail is a configuration that enables delivery of events as log files to an Amazon S3 bucket that you specify. CloudTrail log files contain one or more log entries. An event represents a single request from any source and includes information about the requested action, the date and time of the action, the request parameters, and so on. CloudTrail log files aren't an ordered stack trace of the public API calls, so they don't appear
in any specific order.

The following example shows a CloudTrail log entry that demonstrates the CreateTask operation.

{
    "eventVersion": "1.05",
    "userIdentity": {
        "type": "IAMUser",
        "principalId": "1234567890abcdef0",
        "arn": "arn:aws:iam::123456789012:user/user1",
        "accountId": "123456789012",
        "accessKeyId": "access key",
        "userName": "user1",
        "sessionContext": {
            "attributes": {
                "mfaAuthenticated": "false",
                "creationDate": "2018-12-13T14:56:46Z"
            }
        },
        "invokedBy": "signin.amazonaws.com"
    },
    "eventTime": "2018-12-13T14:57:02Z",
    "eventSource": "datasync.amazonaws.com",
    "eventName": "CreateTask",
    "awsRegion": "ap-southeast-1",
    "sourceIPAddress": "192.0.2.1",
    "userAgent": "signin.amazonaws.com",
    "requestParameters": {
        "cloudWatchLogGroupArn": "arn:aws:logs:ap-southeast-1:123456789012:log-group:MyLogGroup",
        "name": "MyTask-NTIzMzY1",
        "tags": [],
        "destinationLocationArn": "arn:aws:datasync:ap-southeast-1:123456789012:location/loc-abcdef01234567890",
        "options": {
            "bytesPerSecond": -1,
            "verifyMode": "POINT_IN_TIME_CONSISTENT",
            "uid": "INT_VALUE",
            "posixPermissions": "PRESERVE",
            "mtime": "PRESERVE",
            "gid": "INT_VALUE",
            "preserveDevices": "NONE",
            "preserveDeletedFiles": "REMOVE",
            "atime": "BEST_EFFORT"
        },
        "sourceLocationArn": "arn:aws:datasync:ap-southeast-1:123456789012:location/loc-021345abcdef6789"
    },
    "responseElements": {
        "taskArn": "arn:aws:datasync:ap-southeast-1:123456789012:task/task-1234567890abcdef0"
    },
    "requestID": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
    "eventID": "a1b2c3d4-5678-90ab-cdef-EXAMPLE22222",
    "eventType": "AwsApiCall",
    "recipientAccountId": "123456789012"
}

Monitoring events by using Amazon EventBridge

Amazon EventBridge events describe changes in DataSync resources. You can set up rules to match these events and route them to one or more target functions or streams. Events are emitted on a best-effort basis.

DataSync transfer events

The following EventBridge events are available for DataSync transfers.

Agent state changes

• Online: The agent is configured properly and ready to use. This is the normal running status for an agent.
• Offline: The agent has been out of contact with the DataSync service for five minutes or longer. This can happen for a few reasons. For more information, see What do I do if my agent is offline?

Location state changes

• Adding: DataSync is adding a location.
• Available: The location is created and is available to use.

Task state changes

• Available: The task was created and is ready to start.
• Running: The task is in progress and functioning properly.
• Unavailable: The task isn't configured properly and can't be used. You might see this event when an agent associated with the task goes offline.
• Queued: Another task is running and using the same agent. DataSync runs tasks in series (first in, first out).

Task execution state changes

• Queueing: Another task execution is running and using the same DataSync agent. For more information, see Knowing when your task is queued.
• Launching: DataSync is initializing the task execution. This status usually goes quickly but can take up to a few minutes.
• Preparing: DataSync is determining what data to transfer. This step can take just minutes or a few hours depending on the number of files, objects, or directories in both locations and on how you configure your task. Preparation also might not be applicable to your task. For more information, see How DataSync prepares your data transfer.
• Transferring: DataSync is performing the actual data transfer.
• Verifying: DataSync is performing a data-integrity check at the end of the transfer.
• Success: The task execution succeeded.
• Cancelling: The task execution is in the process of being cancelled.
• Error: The task execution failed.

DataSync Discovery events

The following EventBridge events are available for DataSync Discovery.

Storage system state changes

• Storage System Connectivity Status Change: The connection between your DataSync agent and your on-premises storage system changed. For details, see your CloudWatch logs.

Discovery job state changes

• Discovery Job State Change: The status of your discovery job changed. For more information, see discovery job statuses.
• Discovery Job Expiration Soon: Your discovery job expires soon. This includes any information the discovery job collected about your on-premises storage system. Before the job expires, you can export collected data by using the DescribeStorageSystemResources and DescribeStorageSystemResourceMetrics operations.

Monitoring AWS DataSync with manual tools

You can track your AWS DataSync transfers from the console or the command line.

Monitoring your transfer by using the DataSync console

You can monitor your DataSync transfer by using the console, which provides real-time metrics such as data transferred, data and file throughput, and data compression.

To monitor your transfer by using the DataSync console

1.
After you start your DataSync task, choose See execution details. 2. View metrics about your transfer. Monitoring
your transfer by using the AWS CLI

You can monitor your DataSync transfer by using the AWS Command Line Interface (AWS CLI). Copy the following describe-task-execution command. To use this example command, replace the user input placeholders with your own information.

aws datasync describe-task-execution \
  --task-execution-arn 'arn:aws:datasync:region:account-id:task/task-id/execution/task-execution-id'

This command returns information about a task execution similar to that shown following.
{
    "BytesCompressed": 3500,
    "BytesTransferred": 5000,
    "BytesWritten": 5000,
    "EstimatedBytesToTransfer": 5000,
    "EstimatedFilesToDelete": 10,
    "EstimatedFilesToTransfer": 100,
    "FilesDeleted": 10,
    "FilesSkipped": 0,
    "FilesTransferred": 100,
    "FilesVerified": 100,
    "Result": {
        "ErrorCode": "??????",
        "ErrorDetail": "??????",
        "PrepareDuration": 100,
        "PrepareStatus": "SUCCESS",
        "TransferDuration": 60,
        "TransferStatus": "AVAILABLE",
        "VerifyDuration": 30,
        "VerifyStatus": "SUCCESS"
    },
    "StartTime": 1532660733.39,
    "Status": "SUCCESS",
    "OverrideOptions": {
        "Atime": "BEST_EFFORT",
        "BytesPerSecond": "1000",
        "Gid": "NONE",
        "Mtime": "PRESERVE",
        "PosixPermissions": "PRESERVE",
        "PreserveDevices": "NONE",
        "PreserveDeletedFiles": "PRESERVE",
        "Uid": "NONE",
        "VerifyMode": "POINT_IN_TIME_CONSISTENT"
    },
    "TaskExecutionArn": "arn:aws:datasync:us-east-1:111222333444:task/task-aaaabbbbccccddddf/execution/exec-1234abcd1234abcd1",
    "TaskReportConfig": {
        "Destination": {
            "S3": {
                "BucketAccessRoleArn": "arn:aws:iam::111222333444:role/my-datasync-role",
                "S3BucketArn": "arn:aws:s3:::amzn-s3-demo-bucket/*",
                "Subdirectory": "reports"
            }
        },
        "ObjectVersionIds": "INCLUDE",
        "OutputType": "STANDARD",
        "Overrides": {
            "Deleted": {
                "ReportLevel": "ERRORS_ONLY"
            },
            "Skipped": {
                "ReportLevel": "SUCCESSES_AND_ERRORS"
            },
            "Transferred": {
                "ReportLevel": "ERRORS_ONLY"
            },
            "Verified": {
                "ReportLevel": "ERRORS_ONLY"
            }
        },
        "ReportLevel": "ERRORS_ONLY"
    }
}

• If the task execution succeeds, the value of Status changes to SUCCESS. For information about what the response elements mean, see DescribeTaskExecution.
• If the task execution fails, the result sends error codes that can help you troubleshoot issues. For information about the error codes, see TaskExecutionResultDetail.
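If you script checks like these, the response fields shown above are easy to summarize. The following is a minimal sketch (not part of DataSync) that assumes you've already parsed a DescribeTaskExecution response into a Python dictionary; the `summarize` helper name is illustrative, and the sample values match the example response above.

```python
import json

# Minimal sketch (not a DataSync tool): summarize a DescribeTaskExecution
# response that has already been parsed into a dict. Field names come from
# the DataSync API; the sample values below match the example response above.

def summarize(execution):
    transferred = execution["BytesTransferred"]
    compressed = execution["BytesCompressed"]
    # BytesCompressed is the physical byte count sent over the network after
    # compression, so the difference is what compression saved.
    savings_pct = 100 * (1 - compressed / transferred) if transferred else 0.0
    return {
        "status": execution["Status"],
        "files_transferred": execution["FilesTransferred"],
        "network_savings_pct": round(savings_pct, 1),
    }

sample = json.loads("""
{
    "BytesCompressed": 3500,
    "BytesTransferred": 5000,
    "FilesTransferred": 100,
    "Status": "SUCCESS"
}
""")

print(summarize(sample))
# {'status': 'SUCCESS', 'files_transferred': 100, 'network_savings_pct': 30.0}
```

In a real script, you would feed in the parsed output of the describe-task-execution command instead of the inline sample.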
Monitoring your transfer by using the watch utility

To monitor the progress of your task in real time from the command line, you can use the standard Unix watch utility. Task execution duration values are measured in milliseconds.

The watch utility doesn't recognize the DataSync alias. The following example shows how to invoke the CLI directly. To use this example command, replace the user input placeholders with your own information.

# pass '-n 1' to update every second and '-d' to highlight differences
$ watch -n 1 -d \
  "aws datasync describe-task-execution --task-execution-arn 'arn:aws:datasync:region:account-id:task/task-id/execution/task-execution-id'"

Managing AWS DataSync resources

Learn how to manage your AWS DataSync resources, such as agents, locations, and tasks.

• Managing your DataSync agent: Once you activate a DataSync agent, AWS manages the agent for you (including software updates). Learn more
• Testing your DataSync agent's connectivity and system resources: While AWS manages your DataSync agent once it's deployed and activated, there might be cases where you need to change your agent's settings or troubleshoot an issue. Learn more
• Replacing your DataSync agent: To replace a DataSync agent, you must create a new agent and update any locations that are using the old agent. Learn more
• Cleaning up DataSync resources: If you used DataSync for a test or just no longer need its resources, delete those resources so that you aren't charged for them. Learn more
• Reusing a DataSync agent's infrastructure: After you delete an agent resource from DataSync, you can still use the agent's virtual machine or Amazon EC2 instance to activate a new agent. Learn more

Managing your AWS DataSync agent

Once you activate an AWS DataSync agent, AWS manages the virtual machine (VM) appliance for you.
Agent software updates

AWS automatically updates your agent's software, including the underlying operating system and related DataSync software packages. DataSync updates your agent only when it's idle. For example, your agent won't be updated until your transfer is complete. The agent might go offline briefly following updates. This can happen, for instance, shortly after agent activation when AWS updates the agent.

Important

• DataSync automatically and regularly patches agents to maintain their security and stability. DataSync agents use Amazon Linux 2 as their base operating system. You can view the current status of detected Common Vulnerabilities and Exposures (CVE) issues on the Amazon Linux Security Center. CVE patches are automatically applied within 30 days of their release date, as indicated on the Amazon Linux Security Center. Patching occurs as long as your agent is online and not actively running a task execution.
• DataSync doesn't support updating an Amazon EC2 agent manually
with cloud-init directives. If you update an agent this way, you may encounter interoperability problems with DataSync where you can't activate or use the agent.

Agent statuses

The following table describes the status of DataSync agents.

• Online: The agent is configured properly and ready to use. This is the normal running status for an agent.
• Offline: The agent has been out of contact with the DataSync service for five minutes or longer. This can happen for a few reasons. For more information, see What do I do if my agent is offline?

Troubleshooting your agent

While AWS manages the DataSync agent for you, there are situations when you might need to again work directly with it. For example, if your agent goes offline or loses its connection to your on-premises storage system, you can try to resolve these issues in the agent's local console. For more information, see troubleshooting DataSync agents.

Performing maintenance on your agent

While AWS manages your AWS DataSync agent once it's deployed and activated, there might be cases where you need to change your agent's settings or troubleshoot an issue. Here are some examples of why you'd work with your agent through its local console:

• Manually assign an IP address to the agent.
• Check your agent's system resources. Important You don't need to use the agent's local console for standard DataSync functionality. Accessing your agent's local console How you access the local console depends on the type of agent you're using. Accessing the local console (VMware ESXi, Linux KVM, or Microsoft Hyper-V) For security reasons, you can't remotely connect to the local console of the DataSync agent virtual machine (VM). • If this is your first time using the local console, log in with the default credentials. The default user name is admin and the password is password. Troubleshooting your agent 331 AWS DataSync Note User Guide We recommend changing the default password. To do this, on the console main menu enter 5 (or 6 for VMware VMs), then run the passwd command to change the password. Accessing the local console (Amazon EC2) To connect to an Amazon EC2 agent's local console, you must use SSH. Before you begin: Make sure that your EC2 instance's security group allows access with SSH (TCP port 22). 1. Open a terminal and copy the following ssh command: ssh -i /path/key-pair-name.pem instance-user-name@instance-public-ip-address • For /path/key-pair-name, specify the path and file name (.pem) of the private key required to connect to your instance. • For instance-user-name, specify admin. • For instance-public-ip-address, specify the public IP address of your instance. 2. Run the ssh command to connect to the instance. Once connected, the main menu of the agent's local console displays. Configuring your agent's DHCP and DNS settings The default network configuration for the agent is Dynamic Host Configuration Protocol (DHCP). With DHCP, your agent is automatically assigned an IP address. In some cases, you might need to manually assign your agent's IP as a static IP address, as described following. 1. Log in to your agent's local console. 2. On the AWS DataSync Activation - Configuration main menu, enter 1 to begin configuring your network. 3. 
On the Network Configuration menu, choose one of the following options:

• Get information about your network adapter: Enter 1. A list of adapter names appears, and you are prompted to enter an adapter name (for example, eth0). If the adapter you specify is in use, the following information about the adapter is displayed:
  • Media access control (MAC) address
  • IP address
  • Netmask
  • Agent IP address
  • DHCP enabled status
  You use the same adapter name when you configure a static IP address (option 3) as when you set your agent's default route adapter (option 5).

• Configure DHCP: Enter 2. You are prompted to configure the network interface to use DHCP.

• Configure a static IP address for your agent: Enter 3. You are prompted to enter the network adapter name.

  Important: If your agent has already been activated, you must shut it down and restart it from the DataSync console for the settings to take effect.

• Reset all your agent's network configuration to DHCP: Enter 4. All network interfaces are set to use DHCP.

  Important: If your agent has already been activated, you must shut down and restart your agent from the DataSync console for the settings to take effect.

• Set your agent's default route adapter: Enter 5. The available adapters for your agent are shown, and you are prompted to choose one of the adapters (for example, eth0).

• Edit your agent's Domain Name System (DNS) configuration: Enter 6. The available adapters of the primary and secondary DNS servers are displayed. You are prompted to provide the new IP address.

• View your agent's DNS configuration: Enter 7. The available adapters of the primary and secondary DNS servers are displayed.

  Note: For some versions of the VMware hypervisor, you can edit the adapter configuration in this menu.

• View routing tables: Enter 8. The default route of your agent is displayed.

Checking your agent's system resources

When you log in to your agent console, virtual CPU cores, root volume size, and RAM are automatically checked.
If there are any errors or warnings, they're flagged on the console menu display with a banner that provides details about those errors or warnings. If there are no errors or warnings when the console starts, the menu displays white text. The View System Resource Check option will display (0 Errors). If there are errors or warnings, the console menu displays the number of errors and warnings, in red and yellow respectively, in a banner across the top of the menu. For example, (1 ERROR, 1 WARNING).

To check your agent's system resources

1. Log in to your agent's local console.
2. On the AWS DataSync Activation - Configuration main menu, enter 4 to view the results of the system resource check.

The console displays an [OK], [WARNING], or [FAIL] message for each resource as described in the table following. For Amazon EC2 instances, the system resource check verifies that the instance type is one of the instances recommended for use with DataSync. If the instance type matches that list, a single result is displayed in green text, as follows.

[ OK ] Instance Type Check

If the Amazon EC2 instance is not on the recommended list, the system resource check verifies the following resources:

• CPU cores check: At least four cores are required.
• Disk size check: A minimum of 80 GB of available disk space is required.
• RAM check:
  • 32 GB of RAM assigned to the instance for task executions working with up to 20 million files, objects, or directories.
  • 64 GB of RAM assigned to the instance for task executions working with more than 20 million files, objects, or directories.
• CPU flags check: The agent VM CPU must have either SSSE3 or SSE4 instruction set flags.

If the Amazon EC2 instance is not on the list of recommended instances for DataSync, but it has sufficient resources, the result of the system resource check displays four results, all in green text.
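The thresholds above can also be expressed as a quick pre-flight check when you size an instance yourself. This is an illustrative sketch based only on the numbers in this guide, not a DataSync utility; the function and constant names are hypothetical.

```python
# Illustrative pre-flight check based on the resource thresholds listed
# above (not a DataSync tool). Function and constant names are hypothetical.

MIN_CPU_CORES = 4
MIN_DISK_GB = 80
ITEM_THRESHOLD = 20_000_000  # files, objects, or directories per task execution

def required_ram_gb(total_items):
    """RAM the agent needs for a task execution touching total_items items."""
    return 64 if total_items > ITEM_THRESHOLD else 32

def meets_requirements(cores, disk_gb, ram_gb, total_items):
    # The CPU flags check (SSSE3 or SSE4) isn't modeled here; verify it
    # on the instance itself.
    return (
        cores >= MIN_CPU_CORES
        and disk_gb >= MIN_DISK_GB
        and ram_gb >= required_ram_gb(total_items)
    )

print(required_ram_gb(25_000_000))               # 64
print(meets_requirements(4, 80, 32, 5_000_000))  # True
```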
The same resources are verified for agents deployed in Hyper-V, Linux Kernel-based Virtual Machine (KVM), and VMware VMs. VMware agents are also checked for supported version; unsupported versions cause a red banner error. Supported versions include VMware versions 6.5 and 6.7.

Synchronizing the time on your VMware agent

If you're using a VMware agent, you can view or edit your Network Time Protocol (NTP) server configuration and synchronize the agent's time with your VMware hypervisor host.

1. Log in to your agent's local console.
2. On the AWS DataSync Activation - Configuration main menu, enter 5.
3. On the System Time Management menu, do one of the following:

• View and synchronize your VM time with NTP server time: Enter 1. The current time of your agent displays. Your agent
determines the time difference between your agent and NTP server, and prompts you to synchronize the times.

In some situations, an agent's time might drift. For example, there might be a prolonged network outage and your hypervisor host and agent don't get time updates, so your agent's time is different from the actual time. When there's a time drift like this, a discrepancy occurs between the stated times when operations (such as snapshots) occur and the actual times that the operations occur.

• Edit your NTP server configuration: Enter 2. You're prompted to provide an NTP server configuration.

• View your NTP server configuration: Enter 3. Your NTP server configuration displays.

Running maintenance-related commands for your agent

In your DataSync agent's local console, you can perform some maintenance tasks and diagnose issues with your agent.

To run a configuration or diagnostic command in your agent's local console

1. Log in to your agent's local console.
2. On the AWS DataSync Activation - Configuration main menu, enter 5 (or 6 for a VMware VM) for the Command Prompt.
3. Use the following commands to perform the following tasks with your agent.

• dig: Look up DNS information about the host.
• diskclean: Perform disk cleanup.
• exit: Return to the console configuration menu.
• h: Display a list of available commands.
• ifconfig: Display or configure network interfaces.
• ip: Display or configure routing, devices, and tunnels.
• iptables: Set up and maintain IPv4 packet filtering and network address translation (NAT).
• ncport: Test connectivity to a specific network TCP port.
• nping: Get information to troubleshoot network issues.
• save-iptables: Save IP table firewall rules permanently.
• save-routing-table: Save a newly added routing table entry.
• sslcheck: Verify whether an SSL certificate is valid.
• tcptraceroute: Collect traceroute output on TCP traffic to a destination.

4. Follow the onscreen instructions.

Replacing your AWS DataSync agent

To replace an AWS DataSync agent, you must create a new agent and update any transfer locations that are using the old agent.

Creating a new agent

To create your new DataSync agent, follow the same process when you created your old agent:

1. Deploy an agent in your storage environment.
2. Choose a service endpoint that the agent uses to communicate with AWS.
3. Configure your network so that the agent can communicate with your storage and AWS.
4. Activate your agent.
5. Once activated, make note of the agent's Amazon Resource Name (ARN). You need this ARN when updating your DataSync location to use the new agent.

Updating your location with the new agent

Once you create a new agent, you can update an existing DataSync location to use this agent. In most cases, you also have to re-enter access credentials to update the location. This is because DataSync stores location credentials in a way that only your agent can use them.

Using the DataSync console

The following instructions describe how to update locations with a new agent by using the DataSync console.

NFS

1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/.
2.
In the left navigation pane, expand Data transfer, then choose Locations.
3. Choose the location that you want to update, then choose Edit.
4. For Agents, choose your new agent. You can choose more than one agent if you're replacing multiple agents for a location.
5. Choose Save changes.

SMB

1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/.
2. In the left navigation pane, expand Data transfer, then choose Locations.
3. Choose the location that you want to update, then choose Edit.
4. For Agents, choose your new agent. You can choose more than one agent if you're replacing multiple agents for a location.
5. For Password, enter the password of the user that can mount your SMB file server and has permission to access the files and folders involved in your transfer.
6. Choose Save changes.

HDFS

1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/.
2. In the left navigation pane, expand Data transfer, then choose Locations.
3. Choose the location that you want to update,
then choose Edit.
4. For Agents, choose your new agent. You can choose more than one agent if you're replacing multiple agents for a location.
5. If you're using Kerberos authentication, upload your Keytab file and Kerberos configuration file.
6. Choose Save changes.

Object storage

1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/.
2. In the left navigation pane, expand Data transfer, then choose Locations.
3. Choose the location that you want to update, then choose Edit.
4. For Agents, choose your new agent. You can choose more than one agent if you're replacing multiple agents for a location.
5. If your location requires credentials, enter the Secret key that allows DataSync to access your object storage bucket.
6. Choose Save changes.

Azure Blob Storage

Do the following to update your Microsoft Azure Blob Storage location:

1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/.
2. In the left navigation pane, expand Data transfer, then choose Locations.
3. Choose the location that you want to update, then choose Edit.
4. For Agents, choose your new agent. You can choose more than one agent if you're replacing multiple agents for a location.
5.
For SAS token, enter the shared access signature (SAS) token that allows DataSync to access your blob storage.
6. Choose Save changes.

Using the AWS CLI

The following instructions describe how to update locations with a new agent by using the AWS CLI. (You can also do this by using the DataSync API.)

NFS

1. Copy the following update-location-nfs command:

aws datasync update-location-nfs \
 --location-arn datasync-nfs-location-arn \
 --on-prem-config AgentArns=new-datasync-agent-arn

2. For the --location-arn parameter, specify the ARN of the NFS location that you're updating.
3. For the --on-prem-config parameter's AgentArns option, specify the ARN of your new agent. You can specify more than one ARN if you're replacing multiple agents for a location.
4. Run the update-location-nfs command to update the location.

SMB

1. Copy the following update-location-smb command:

aws datasync update-location-smb \
 --location-arn datasync-smb-location-arn \
 --agent-arns new-datasync-agent-arn \
 --password smb-file-server-password

2. For the --location-arn parameter, specify the ARN of the SMB location that you're updating.
3. For the --agent-arns parameter, specify the ARN of your new agent. You can specify more than one ARN if you're replacing multiple agents for a location.
4. For the --password parameter, specify the password of the user that can mount your SMB file server and has permission to access the files and folders involved in your transfer.
5. Run the update-location-smb command to update the location.

HDFS

1. Copy the following update-location-hdfs command:

aws datasync update-location-hdfs \
 --location-arn datasync-hdfs-location-arn \
 --agent-arns new-datasync-agent-arn \
 --kerberos-keytab keytab-file \
 --kerberos-krb5-conf krb5-conf-file

2.
For the --location-arn parameter, specify the ARN of the HDFS location that you're updating.
3. For the --agent-arns parameter, specify the ARN of your new agent. You can specify more than one ARN if you're replacing multiple agents for a location.
4. If you're using Kerberos authentication, include the --kerberos-keytab and --kerberos-krb5-conf parameters:
• For the --kerberos-keytab parameter, specify the Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and encrypted keys. You can specify the keytab file by providing the file's address.
• For the --kerberos-krb5-conf parameter, specify the file that contains the configuration for your Kerberos realm. You can specify the krb5.conf file by providing the file's address.
If you're using simple authentication, you don't need to include these Kerberos-related parameters in your command.
5. Run the update-location-hdfs command to update the location.

Object storage

1. Copy the following update-location-object-storage command:

aws datasync update-location-object-storage \
 --location-arn datasync-object-storage-location-arn \
 --agent-arns new-datasync-agent-arn \
 --secret-key bucket-secret-key

2. For the --location-arn parameter, specify the ARN of the object storage location that you're updating.
3. For the --agent-arns parameter, specify the ARN of your new agent. You can specify more than one ARN if you're replacing multiple agents for a location.
4. Do the following depending on whether your object storage location requires access credentials:
• If your location requires credentials – For the --secret-key parameter, specify the secret key that allows DataSync to access your object storage bucket.
• If your location doesn't require credentials – Specify empty strings for the --access-key and --secret-key parameters. Here's an example command:

aws datasync update-location-object-storage \
 --location-arn arn:aws:datasync:us-east-2:111122223333:location/loc-abcdef01234567890 \
 --agent-arns arn:aws:datasync:us-east-2:111122223333:agent/agent-1234567890abcdef0 \
 --access-key "" \
 --secret-key ""

5. Run the update-location-object-storage command to update the location.

Azure Blob Storage

1. Copy the following update-location-azure-blob command:

aws datasync update-location-azure-blob \
 --location-arn datasync-azure-blob-storage-location-arn \
 --agent-arns new-datasync-agent-arn \
 --sas-configuration '{
 "Token": "sas-token-for-azure-blob-storage"
 }'

2. For the --location-arn parameter, specify the ARN of the Azure Blob Storage location that you're updating.
3. For the --agent-arns parameter, specify the ARN of your new agent. You can specify more than one ARN if you're replacing multiple agents for a location.
4.
For the --sas-configuration parameter's Token option, specify the SAS token that allows DataSync to access your blob storage.
5. Run the update-location-azure-blob command to update the location.

Next steps

1. Delete your old agent. If you have any running DataSync tasks using this agent, wait until those tasks finish before deleting it.
2. If you need to replace agents for multiple locations, repeat the previous steps.
3. When you're done, you can resume running your tasks.

Note
Replacing agents for scheduled tasks – If you replace an agent for a scheduled task, you must start that task manually if the new agent is using a different type of service endpoint than your old agent. If you don't run the task manually before its next scheduled run, the task fails. For example, if your old agent used a public service endpoint, but the new agent uses a VPC endpoint, start that task manually by using the console or StartTaskExecution operation. After that, your task will resume running on its schedule.

Filtering AWS DataSync resources

You can filter your AWS DataSync locations and tasks by using the ListLocations and ListTasks API operations in the AWS CLI. For example, you can retrieve a list of your most recent tasks.

Parameters for filtering

You can use API filters to narrow down the list of resources returned by ListTasks and ListLocations. For example, to retrieve all of your Amazon S3 locations, you can use ListLocations with the filter name LocationType, the value S3, and the operator Equals. To filter API results, you must specify a filter name, operator, and value.
• Name – The name of the filter that's being used. Each API call supports a list of filters that are available for it (for example, LocationType for ListLocations).
• Values – The values that you want to filter for. For example, you might want to display only Amazon S3 locations.
• Operator – The operator that's used to compare filter values (for example, Equals or Contains).

The following table lists the available operators and the key types they support.

Equals – String, Number
NotEquals – String, Number
LessThan – Number
LessThanOrEqual – Number
GreaterThan – Number
GreaterThanOrEqual – Number
In – String
Contains – String
NotContains – String
BeginsWith – String

Filtering by location

ListLocations supports the following filter names:
• LocationType – Filters on the location type:
 • SMB
 • NFS
 • HDFS
 • OBJECT_STORAGE
 • S3
 • OUTPOST_S3
 • FSX_WINDOWS
 • FSX_LUSTRE
 • FSX_OPENZFS_NFS
 • FSX_ONTAP_NFS
 • FSX_ONTAP_SMB
• LocationUri – Filters on the uniform resource identifier (URI) assigned to the location, as returned by the DescribeLocation* API call (for example, s3://bucket-name/your-prefix for Amazon S3 locations).
• CreationTime – Filters on the time that the location was created. The input format is yyyy-MM-dd:mm:ss in Coordinated Universal Time (UTC).

The following AWS CLI example lists all locations of type Amazon S3 that have a location URI starting with the string "s3://amzn-s3-demo-bucket" and that were created at or after 2019-12-15 17:15:20 UTC.

aws datasync list-locations \
 --filters [{Name=LocationType, Values=["S3"], Operator=Equals}, {Name=LocationUri, Values=["s3://amzn-s3-demo-bucket"], Operator=BeginsWith}, {Name=CreationTime,Values=["2019-12-15 17:15:20"],Operator=GreaterThanOrEqual}]

This command returns output similar to the following.

{
 "Locations": [
 {
 "LocationArn": "arn:aws:datasync:us-east-1:111122223333:location/loc-333333333abcdef0",
 "LocationUri": "s3://amzn-s3-demo-bucket1/"
 },
 {
 "LocationArn": "arn:aws:datasync:us-east-1:123456789012:location/loc-987654321abcdef0",
 "LocationUri": "s3://amzn-s3-demo-bucket2/"
 }
 ]
}

Filtering by task

ListTasks supports the following filter names.
• LocationId – Filters on both source and destination locations on Amazon Resource Name (ARN) values.
• CreationTime – Filters on the time that the task was created. The input format is yyyy-MM-dd:mm:ss in UTC.

The following AWS CLI example shows the syntax when filtering on LocationId.

aws datasync list-tasks \
 --filters Name=LocationId,Values=arn:aws:datasync:us-east-1:your-account-id:location/your-location-id,Operator=Contains

The output of this command looks similar to the following.

{
 "Tasks": [
 {
 "TaskArn": "arn:aws:datasync:us-east-1:your-account-id:task/your-task-id",
 "Status": "AVAILABLE",
 "Name": "amzn-s3-demo-bucket"
 }
 ]
}

Cleaning up your AWS DataSync resources

If you used AWS DataSync for a test or don't need the AWS resources that you created, delete them so that you aren't charged for resources you don't plan to use.

Note
If you have DataSync resources in an opt-in Region that you disable, those resources aren't automatically deleted.
The resources are still there if you enable that Region again.

Deleting a DataSync agent

When you delete an agent from AWS DataSync, the agent resource is no longer associated with your AWS account. Deleting an agent can't be undone.

Keep in mind that deleting an agent from DataSync doesn't remove its virtual machine (VM) or Amazon EC2 instance from your storage environment. You can delete the VM or instance or reuse it to activate a new agent.

Prerequisites

Don't delete an agent until you update or remove the DataSync resources that depend on it. If you're replacing an agent, update your transfer locations with the new agent. If you aren't replacing an agent, delete the transfer tasks and locations that use the agent first.

Deleting the agent

1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/.
2. In the left navigation pane, choose Agents.
3. Choose the agent that you want to delete.
4. Choose Delete, enter delete in the text box that appears, and then choose Delete.
5. If you aren't planning to reuse the agent's infrastructure for other DataSync activities, delete the agent's VM or Amazon EC2 instance to remove it from your storage environment.

Reusing a DataSync agent's infrastructure

You can delete an agent resource from DataSync and still use the agent's underlying VM or Amazon EC2 instance to activate a new agent.

To reuse an agent's infrastructure

1. Test the agent's connection to AWS. The network tests must pass before you can move to the next step.
2. Delete the agent resource from DataSync but don't delete the agent's VM or Amazon EC2 instance.
3. Repeat step 1 to test the agent's connection to AWS again. If the network tests pass, go to the next step.
4. About three minutes after deleting the agent resource from DataSync, check if port 80 is open on the agent VM or Amazon EC2 instance. If it is, go to the next step.
5.
Activate a new agent with the existing VM or Amazon EC2 instance.

You can activate the new agent in a different AWS Region, in a different AWS account, and with another type of service endpoint. If you use a different type of service endpoint, you have to adjust your network configuration.

Deleting a DataSync location

As a best practice, remove the AWS DataSync locations that you no longer need.

To remove a location by using the DataSync console

1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/.
2. In the left navigation pane, expand Data transfer, then choose Locations.
3. Choose the location that you want to remove.
4. Choose Delete. Confirm the deletion by entering delete, and then choose Delete.

Deleting a DataSync task

If you no longer need an AWS DataSync task, you can delete it and its related AWS resources.

Prerequisites

When you run a task, DataSync automatically creates and manages network interfaces for data transfer traffic. When you delete a task, you also delete its related network interfaces as long as you have the following permissions:
• ec2:DeleteNetworkInterface
• ec2:DescribeNetworkInterfaces
• ec2:ModifyNetworkInterfaceAttribute

These permissions are available in the AWS managed policy AWSDataSyncFullAccess. For more information, see AWS managed policies for AWS DataSync.

Deleting the task

Once you delete a task, you can't
restore it.

Using the DataSync console

1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/.
2. In the left navigation pane, expand Data transfer, then choose Tasks.
3. Select the task that you want to delete.
4. Choose Actions, then choose Delete.
5. In the dialog box, choose Delete.

Using the AWS CLI

1. Copy the following delete-task command:

aws datasync delete-task \
 --task-arn "task-to-delete"

2. For the --task-arn parameter, specify the Amazon Resource Name (ARN) of the task you're deleting (for example, arn:aws:datasync:us-east-2:123456789012:task/task-012345678abcd0123).
3. Run the delete-task command.

Security in AWS DataSync

Cloud security at AWS is the highest priority. As an AWS customer, you benefit from a data center and network architecture that is built to meet the requirements of the most security-sensitive organizations.

Security is a shared responsibility between AWS and you. The shared responsibility model describes this as security of the cloud and security in the cloud:
• Security of the cloud – AWS is responsible for protecting the infrastructure that runs AWS services in the AWS Cloud.
AWS also provides you with services that you can use securely. Third-party auditors regularly test and verify the effectiveness of our security as part of the AWS compliance programs. To learn about the compliance programs that apply to AWS DataSync, see AWS services in scope by compliance program.
• Security in the cloud – Your responsibility is determined by the AWS service that you use. You are also responsible for other factors including the sensitivity of your data, your company's requirements, and applicable laws and regulations.

This documentation helps you understand how to apply the shared responsibility model when using DataSync. The following topics show you how to configure DataSync to meet your security and compliance objectives. You also learn how to use other AWS services that help you to monitor and secure your DataSync resources.

Topics
• Data protection in AWS DataSync
• Identity and access management in AWS DataSync
• Compliance validation for AWS DataSync
• Resilience in AWS DataSync
• Infrastructure security in AWS DataSync

Data protection in AWS DataSync

AWS DataSync securely transfers data between self-managed storage systems and AWS storage services and also between AWS storage services. How your storage data is encrypted in transit depends in part on the locations involved in the transfer.

After the transfer completes, data is encrypted at rest by the system or service that's storing the data (not DataSync).

Topics
• AWS DataSync encryption in transit
• AWS DataSync encryption at rest
• Internetwork traffic privacy

AWS DataSync encryption in transit

Your storage data (including metadata) is encrypted in transit, but how it's encrypted throughout the transfer depends on your source and destination locations. When connecting with a location, DataSync uses the most secure options provided by that location's data access protocol.
For example, when connecting with a file system using Server Message Block (SMB), DataSync uses the security features provided by SMB.

Network connections in a transfer

DataSync requires three network connections to copy data: a connection to read data from a source location, another to transfer data between locations, and one more to write data to a destination location. The following diagram is an example of the network connections that DataSync uses to transfer data from an on-premises storage system to an AWS storage service. To understand where the connections happen and how data is protected as it transfers through each connection, use the accompanying table.

1. Reading data from the source location – DataSync connects by using the storage system's protocol for accessing data (for example, SMB or the Amazon S3 API). For this connection, data is protected by using the security features of the storage system unless DataSync doesn't support those features. For example, DataSync currently doesn't support Kerberos authentication with NFS file servers or when using TDE encryption with HDFS.
2. Transferring data between locations – For this connection, DataSync encrypts all network traffic with Transport Layer Security (TLS) 1.3.
3. Writing data to the destination location – Like it did with the source location, DataSync connects by using the storage system's protocol for accessing data. Data is again protected by using the security features of the storage system unless DataSync doesn't support those features.

Learn how your data is encrypted in transit when DataSync connects to the following AWS storage services:
• Amazon EFS
• Amazon FSx for Windows File Server
• Amazon FSx for Lustre
• Amazon FSx for OpenZFS
• Amazon FSx for NetApp ONTAP
• Amazon S3

TLS ciphers

When transferring data between locations, DataSync uses different TLS ciphers. The TLS cipher depends on the type of service endpoint that your agent uses to communicate with DataSync. (For more information, see Choosing a service endpoint for your AWS DataSync agent.)
Contents
• Public or VPC endpoints
• FIPS endpoints

Public or VPC endpoints

For public and virtual private cloud (VPC) service endpoints, DataSync uses one of the following TLS ciphers:
• TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (ecdh_x25519)
• TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (ecdh_x25519)
• TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (ecdh_x25519)

FIPS endpoints

For Federal Information Processing Standard (FIPS) service endpoints, DataSync uses the following TLS cipher:
• TLS_AES_128_GCM_SHA256 (secp256r1)

AWS DataSync encryption at rest

Because AWS DataSync is a transfer service, it generally doesn't manage your storage data at rest. The storage services and systems that DataSync supports are responsible for protecting data in that state. However, there is some service-related data that DataSync manages at rest.

What's encrypted?

The only data that DataSync handles at rest relates to the information that it discovers about your on-premises storage system and the details it needs to complete your transfer. DataSync stores the following data with full at-rest encryption in Amazon DynamoDB:
• Information collected about your on-premises storage system (if you use DataSync Discovery). This information is also stored with full at-rest encryption in Amazon S3.
• Task configurations (for example, details about the locations in your transfer).
• User credentials that allow your DataSync agent to authenticate with a location. These credentials are encrypted by using your agent's public keys. The agent can decrypt these keys as needed with its private keys.

For more information, see DynamoDB encryption at rest in the Amazon DynamoDB Developer Guide.
Contents • Information collected by DataSync Discovery • Key management Encryption at rest 356 AWS DataSync User Guide Information collected by DataSync Discovery DataSync Discovery stores and manages the data that it collects about your on-premises storage system for up to 60 days. You can use Amazon EventBridge to notify you when that expiration date is approaching. For more information, see DataSync Discovery events. When you remove an on-premises storage system resource from DataSync Discovery, you permanently delete any associated discovery jobs, collected data, and recommendations. Key management You can't manage the encryption keys that DataSync uses to store information in DynamoDB related to running your task. This information includes your task configurations and the credentials that agents use to authenticate with a storage location. What's not encrypted? Though DataSync doesn’t control how your storage data is encrypted at rest, we still recommend configuring your locations with the highest level of security that they support. For example, you can encrypt objects with Amazon S3 managed encryption keys (SSE-S3) or AWS Key Management Service (AWS KMS) keys (SSE-KMS). Learn more about how AWS storage services encrypt data at rest: • Amazon S3 • Amazon EFS • Amazon FSx for Windows File Server • Amazon FSx for Lustre • Amazon FSx for OpenZFS • Amazon FSx for NetApp ONTAP Internetwork traffic privacy We recommend configuring your source and destination locations with the highest level of security that each one supports. When connecting to a location, AWS DataSync works with the most secure version of the data access protocol that the storage system uses. Additionally, consider limiting subnet traffic to known protocols and services. Internetwork traffic privacy 357 AWS DataSync User Guide DataSync secures the connection between locations—including between AWS accounts, AWS Regions, and Availability Zones—by using Transport Layer Security (TLS) 1.3. 
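To illustrate what a TLS 1.3-only connection looks like on the client side, here's a minimal Python sketch. This is not DataSync code (DataSync manages its own connections); it just shows how a client context can refuse anything older than the TLS 1.3 protocol that DataSync uses between locations:

```python
import ssl

# Build a client-side TLS context that refuses anything older than TLS 1.3,
# mirroring the transport security DataSync uses between locations.
# (Illustrative only; DataSync manages its own connections internally.)
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3

# Any socket wrapped with this context fails the handshake if the
# server can't negotiate TLS 1.3.
print(context.minimum_version)
```

A handshake attempted through this context against a TLS 1.2-only server raises an `ssl.SSLError` instead of silently downgrading.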
Identity and access management in AWS DataSync AWS uses security credentials to identify you and to grant you access to
your AWS resources. You can use features of AWS Identity and Access Management (IAM) to allow other users, services, and applications to use your AWS resources fully or in a limited way, without sharing your security credentials.

By default, IAM identities (users, groups, and roles) don't have permission to create, view, or modify AWS resources. To allow users, groups, and roles to access AWS DataSync resources and interact with the DataSync console and API, we recommend that you use an IAM policy that grants them permission to use the specific resources and API actions that they will need. You then attach the policy to the IAM identity that requires access. For an overview of the basic elements for a policy, see Access management for AWS DataSync.

Topics
• Access management for AWS DataSync
• AWS managed policies for AWS DataSync
• IAM customer managed policies for AWS DataSync
• Using service-linked roles for DataSync
• Permissions for tagging DataSync resources during creation
• Cross-service confused deputy prevention
• DataSync API permissions: Actions and resources

Access management for AWS DataSync

Every AWS resource is owned by an AWS account. Permissions to create or access a resource are governed by permissions policies.
An account administrator can attach permissions policies to AWS Identity and Access Management (IAM) identities. Some services (such as AWS Lambda) also support attaching permissions policies to resources.

Note
An account administrator is a user with administrator privileges in an AWS account. For more information, see IAM best practices in the IAM User Guide.

Topics
• DataSync resources and operations
• Understanding resource ownership
• Managing access to resources
• Specifying policy elements: Actions, effects, resources, and principals
• Specifying conditions in a policy

DataSync resources and operations

In DataSync, the primary resources are agent, location, task, and task execution. These resources have unique Amazon Resource Names (ARNs) associated with them, as shown in the following table.

Agent ARN – arn:aws:datasync:region:account-id:agent/agent-id
Location ARN – arn:aws:datasync:region:account-id:location/location-id
Task ARN – arn:aws:datasync:region:account-id:task/task-id
Task execution ARN – arn:aws:datasync:region:account-id:task/task-id/execution/exec-id

To grant permissions for specific API operations, such as creating a task, DataSync defines a set of actions that you can specify in a permissions policy. An API operation can require permissions for
The following examples illustrate how this behavior works:

• If you use the root account credentials of your AWS account to create a task, your AWS account is the owner of the resource (in DataSync, the resource is the task).
• If you create an IAM user in your AWS account and grant that user permissions for the CreateTask action, the user can create a task. However, your AWS account, to which the user belongs, owns the task resource.
• If you create an IAM role in your AWS account with permissions to create a task, anyone who can assume the role can create a task. Your AWS account, to which the role belongs, owns the task resource.

Managing access to resources

A permissions policy describes who has access to what. The following section explains the available options for creating permissions policies.

Note
This section discusses using IAM in the context of DataSync. It doesn't provide detailed information about the IAM service. For complete IAM documentation, see What is IAM? in the IAM User Guide. For information about IAM policy syntax and descriptions, see AWS Identity and Access Management policy reference in the IAM User Guide.

Policies attached to an IAM identity are referred to as identity-based policies (IAM policies), and policies attached to a resource are referred to as resource-based policies. DataSync supports only identity-based policies (IAM policies).

Topics
• Identity-based policies
• Resource-based policies
Identity-based policies

You can manage DataSync resource access with IAM policies. These policies can help an AWS account administrator do the following with DataSync:

• Grant permissions to create and manage DataSync resources – Create an IAM policy that allows an IAM role in your AWS account to create and manage DataSync resources, such as agents, locations, and tasks.
• Grant permissions to a role in another AWS account or an AWS service – Create an IAM policy that grants permissions to an IAM role in a different AWS account or an AWS service. For example:
1. The Account A administrator creates an IAM role and attaches a permissions policy to the role that grants permissions on resources in Account A.
2. The Account A administrator attaches a trust policy to the role that identifies Account B as the principal who can assume the role. To grant an AWS service permissions to assume the role, the Account A administrator can specify an AWS service as the principal in the trust policy.
3. The Account B administrator can then delegate permissions to assume the role to any users in Account B. This allows anyone using the role in Account B to create or access resources in Account A.
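The trust policy in step 2 might look like the following sketch, where 111111111111 is a placeholder for Account B's ID (this guide doesn't prescribe an exact document; the shape below is the standard cross-account trust relationship):

```python
import json

# Sketch of a step-2 trust policy: Account B (placeholder ID
# 111111111111) is the principal allowed to assume the role that the
# Account A administrator created.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

To let an AWS service assume the role instead, the Principal element would name a service (for example, "Service": "datasync.amazonaws.com"), as shown in the trust policy example later in this section.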
For more information about using IAM to delegate permissions, see Access management in the IAM User Guide.

The following example policy grants permissions to all List* actions on all resources. These actions are read-only and don't allow resource modification.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAllListActionsOnAllResources",
            "Effect": "Allow",
            "Action": [
                "datasync:List*"
            ],
            "Resource": "*"
        }
    ]
}

For more information about using identity-based policies with DataSync, see AWS managed policies and customer managed policies. For more information about IAM identities, see the IAM User Guide.

Resource-based policies

Other services, such as Amazon S3, support resource-based permissions policies. For example, you can attach a policy to an Amazon S3 bucket to manage access permissions to that bucket. However, DataSync doesn't support resource-based policies.

Specifying policy elements: Actions, effects, resources, and principals

For each DataSync resource (see DataSync API permissions: Actions and resources), the service defines a set of API operations (see Actions). To grant permissions for these API operations, DataSync defines a set of actions that you can specify in a policy. For example, for the DataSync resource, the following actions are defined: CreateTask, DeleteTask, and DescribeTask. Performing an API operation can require permissions for more than one action.

The following are the most basic policy elements:

• Resource – In a policy, you use an Amazon Resource Name (ARN) to identify the resource to which the policy applies. For DataSync resources, you can use the wildcard character (*) in IAM policies. For more information, see DataSync resources and operations.
• Action – You use action keywords to identify resource operations that you want to allow or deny.
For example, depending on the specified Effect element, the datasync:CreateTask permission allows or denies the user permissions to perform the DataSync CreateTask operation.

• Effect – You specify the effect when the user requests the specific action—this effect can be either Allow or Deny. If you don't explicitly grant access to (Allow) a resource, access is implicitly denied. You can also explicitly deny access to a resource, which you might do to make sure that a user cannot access it, even if a different policy grants that user access. For more information, see Authorization in the IAM User Guide.
• Principal – In identity-based policies (IAM policies), the user that the policy is attached to is the implicit principal. For resource-based policies, you specify the user, account, service, or other entity that you want to receive permissions. DataSync doesn't support resource-based policies.

To learn more about IAM policy syntax and descriptions, see AWS Identity and Access Management policy reference in the IAM User Guide. For a table showing all of the DataSync API actions, see DataSync API permissions: Actions and resources.

Specifying conditions in a policy

When you grant permissions, you can use the IAM policy language to
specify the conditions when a policy should take effect. For example, you might want a policy to be applied only after a specific date. For more information about specifying conditions in policy language, see Condition in the IAM User Guide.

To express conditions, you use predefined condition keys. There are no condition keys specific to DataSync. However, there are AWS wide condition keys that you can use as appropriate. For a complete list of AWS wide keys, see Available keys in the IAM User Guide.

AWS managed policies for AWS DataSync

To add permissions to users, groups, and roles, it's easier to use AWS managed policies than to write policies yourself. It takes time and expertise to create IAM customer managed policies that provide your team with only the permissions they need. To get started quickly, you can use our AWS managed policies. These policies cover common use cases and are available in your AWS account. For more information about AWS managed policies, see AWS managed policies in the IAM User Guide.

AWS services maintain and update AWS managed policies. You can't change the permissions in AWS managed policies. Services occasionally add additional permissions to an AWS managed policy to support new features.
This type of update affects all identities (users, groups, and roles) where the policy is attached. Services are most likely to update an AWS managed policy when a new feature is launched or when new operations become available. Services do not remove permissions from an AWS managed policy, so policy updates won't break your existing permissions.

Additionally, AWS supports managed policies for job functions that span multiple services. For example, the ReadOnlyAccess AWS managed policy provides read-only access to all AWS services and resources. When a service launches a new feature, AWS adds read-only permissions for new operations and resources. For a list and descriptions of job function policies, see AWS managed policies for job functions in the IAM User Guide.

AWS managed policy: AWSDataSyncReadOnlyAccess

You can attach the AWSDataSyncReadOnlyAccess policy to your IAM identities. This policy grants read-only permissions for DataSync.

{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DataSyncReadOnlyAccessPermissions",
        "Effect": "Allow",
        "Action": [
            "datasync:Describe*",
            "datasync:List*",
            "ec2:DescribeSecurityGroups",
            "ec2:DescribeSubnets",
            "elasticfilesystem:DescribeFileSystems",
            "elasticfilesystem:DescribeMountTargets",
            "fsx:DescribeFileSystems",
            "iam:GetRole",
            "iam:ListRoles",
            "logs:DescribeLogGroups",
            "logs:DescribeResourcePolicies",
            "s3:ListAllMyBuckets",
            "s3:ListBucket"
        ],
        "Resource": "*"
    }]
}

AWS managed policy: AWSDataSyncFullAccess

You can attach the AWSDataSyncFullAccess policy to your IAM identities.

This policy grants administrative permissions for DataSync and is required for AWS Management Console access to the service. AWSDataSyncFullAccess provides full access to DataSync API operations and the operations that describe related resources (such as Amazon S3 buckets, Amazon EFS file systems, AWS KMS keys, and Secrets Manager secrets).
The policy also grants permissions for Amazon CloudWatch, including creating log groups and creating or updating a resource policy.

{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DataSyncFullAccessPermissions",
        "Effect": "Allow",
        "Action": [
            "datasync:*",
            "ec2:CreateNetworkInterface",
            "ec2:CreateNetworkInterfacePermission",
            "ec2:DeleteNetworkInterface",
            "ec2:DescribeNetworkInterfaces",
            "ec2:DescribeRegions",
            "ec2:DescribeSecurityGroups",
            "ec2:DescribeSubnets",
            "ec2:DescribeVpcEndpoints",
            "ec2:ModifyNetworkInterfaceAttribute",
            "fsx:DescribeFileSystems",
            "fsx:DescribeStorageVirtualMachines",
            "elasticfilesystem:DescribeAccessPoints",
            "elasticfilesystem:DescribeFileSystems",
            "elasticfilesystem:DescribeMountTargets",
            "iam:GetRole",
            "iam:ListRoles",
            "logs:CreateLogGroup",
            "logs:DescribeLogGroups",
            "logs:DescribeResourcePolicies",
            "outposts:ListOutposts",
            "s3:GetBucketLocation",
            "s3:ListAllMyBuckets",
            "s3:ListBucket",
            "s3:ListBucketVersions",
            "s3-outposts:ListAccessPoints",
            "s3-outposts:ListRegionalBuckets",
            "secretsmanager:ListSecrets",
            "kms:ListAliases",
            "kms:DescribeKey"
        ],
        "Resource": "*"
    },
    {
        "Sid": "DataSyncPassRolePermissions",
        "Effect": "Allow",
        "Action": [
            "iam:PassRole"
        ],
        "Resource": "*",
        "Condition": {
            "StringEquals": {
                "iam:PassedToService": [
                    "datasync.amazonaws.com"
                ]
            }
        }
    },
    {
        "Sid": "DataSyncCreateSLRPermissions",
        "Effect": "Allow",
        "Action": [
            "iam:CreateServiceLinkedRole"
        ],
        "Resource": "arn:aws:iam::*:role/aws-service-role/datasync.amazonaws.com/AWSServiceRoleForDataSync",
        "Condition": {
            "StringEquals": {
                "iam:AWSServiceName": "datasync.amazonaws.com"
            }
        }
    },
    {
        "Sid": "DataSyncSecretsManagerWriteAccess",
        "Effect": "Allow",
        "Action": [
            "secretsmanager:CreateSecret",
            "secretsmanager:PutSecretValue",
            "secretsmanager:DeleteSecret",
            "secretsmanager:UpdateSecret"
        ],
        "Resource": [
            "arn:*:secretsmanager:*:*:secret:aws-datasync!*"
        ],
        "Condition": {
            "StringEquals": {
                "secretsmanager:ResourceTag/aws:secretsmanager:owningService": "aws-datasync",
                "aws:ResourceAccount": "${aws:PrincipalAccount}"
            }
        }
    }]
}

AWS managed policy: AWSDataSyncDiscoveryServiceRolePolicy

You can't attach the AWSDataSyncDiscoveryServiceRolePolicy policy to your IAM identities. This policy is attached to a service-linked role that allows DataSync to perform actions on your behalf. For more information, see Using service-linked roles for DataSync.

AWS managed policy: AWSDataSyncServiceRolePolicy

You can't attach the AWSDataSyncServiceRolePolicy policy to your IAM identities. This policy is attached to a service-linked role that allows DataSync to perform actions on your behalf. For more information, see Using service-linked roles for DataSync.

This policy grants administrative permissions that allow the service-linked role to create
Amazon CloudWatch logs for DataSync tasks using Enhanced mode.

Policy updates

• AWSDataSyncFullAccess – Change (May 7, 2025): DataSync added new permissions to AWSDataSyncFullAccess: secretsmanager:CreateSecret, secretsmanager:PutSecretValue, secretsmanager:DeleteSecret, and secretsmanager:UpdateSecret. These permissions let DataSync create, edit, and delete AWS Secrets Manager secrets.
• AWSDataSyncFullAccess – Change (April 23, 2025): DataSync added new permissions to AWSDataSyncFullAccess: secretsmanager:ListSecrets, kms:ListAliases, and kms:DescribeKey. These permissions let DataSync retrieve metadata about your AWS Secrets Manager secrets and AWS KMS keys, including any aliases associated with your keys.
• AWSDataSyncServiceRolePolicy – Change (April 15, 2025): DataSync added new permissions to the AWSDataSyncServiceRolePolicy policy that's used by the DataSync service-linked role AWSServiceRoleForDataSync: secretsmanager:DescribeSecret and secretsmanager:GetSecretValue. These permissions let DataSync read metadata and values for secrets managed by AWS Secrets Manager.
• AWSDataSyncServiceRolePolicy – New policy (October 30, 2024): DataSync added a policy that's used by the DataSync service-linked role AWSServiceRoleForDataSync. This new managed policy automatically creates Amazon CloudWatch logs for your DataSync tasks that use Enhanced mode.
• AWSDataSyncFullAccess – Change (October 30, 2024): DataSync added a new permission to AWSDataSyncFullAccess: iam:CreateServiceLinkedRole. This permission lets DataSync create service-linked roles for you.
• AWSDataSyncFullAccess – Change (July 22, 2024): DataSync added a new permission to AWSDataSyncFullAccess: ec2:DescribeRegions. This permission lets you choose opt-in Regions when creating a DataSync task for transfers between AWS Regions.
• AWSDataSyncFullAccess – Change (February 16, 2024): DataSync added a new permission to AWSDataSyncFullAccess: s3:ListBucketVersions. This permission lets you choose a specific version of your DataSync manifest.
• AWSDataSyncFullAccess – Change (May 2, 2023): DataSync added new permissions to AWSDataSyncFullAccess: ec2:DescribeVpcEndpoints, elasticfilesystem:DescribeAccessPoints, fsx:DescribeStorageVirtualMachines, outposts:ListOutposts, s3:GetBucketLocation, s3-outposts:ListAccessPoints, and s3-outposts:ListRegionalBuckets. These permissions help you create DataSync agents and locations for Amazon EFS, Amazon FSx for NetApp ONTAP, Amazon S3, and S3 on Outposts.
• DataSync started tracking changes (March 1, 2021): DataSync started tracking changes for its AWS managed policies.

IAM customer managed policies for AWS DataSync

In addition to AWS managed policies, you also can create your own identity-based policies for AWS DataSync and attach them to the AWS Identity and Access Management (IAM) identities that require those permissions. These are known as customer managed policies, which are standalone policies that you administer in your own AWS account.

Important
Before you begin, we recommend that you learn about the basic concepts and options for managing access to your DataSync resources. For more information, see Access management for AWS DataSync.

When creating a customer managed policy, you include statements about DataSync operations that can be used on certain AWS resources. The following example policy has two statements (note the Action and Resource elements in each statement):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowsSpecifiedActionsOnAllTasks",
            "Effect": "Allow",
            "Action": [
                "datasync:DescribeTask"
            ],
            "Resource": "arn:aws:datasync:us-east-2:111222333444:task/*"
        },
        {
            "Sid": "ListAllTasks",
            "Effect": "Allow",
            "Action": [
                "datasync:ListTasks"
            ],
            "Resource": "*"
        }
    ]
}

The policy's statements do the following:

• The first statement grants permissions to perform the datasync:DescribeTask action on certain transfer task resources by specifying an Amazon Resource Name (ARN) with a wildcard character (*).
• The second statement grants permissions to perform the datasync:ListTasks action on all tasks by specifying just a wildcard character (*).

Examples of customer managed policies

The following example customer managed policies grant permissions for various DataSync operations. The policies work if you're using the AWS Command Line Interface (AWS CLI) or an AWS SDK.
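To see how the two statements in the example policy above divide the work, here's a small sketch that checks an action and resource against the policy using shell-style wildcard matching. This is a simplification of real IAM evaluation (no Deny handling, no conditions, and real IAM matches action names case-insensitively); the task ID in the usage lines is a made-up placeholder.

```python
from fnmatch import fnmatchcase

# The two-statement example policy, as a Python dict.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "AllowsSpecifiedActionsOnAllTasks",
         "Effect": "Allow",
         "Action": ["datasync:DescribeTask"],
         "Resource": "arn:aws:datasync:us-east-2:111222333444:task/*"},
        {"Sid": "ListAllTasks",
         "Effect": "Allow",
         "Action": ["datasync:ListTasks"],
         "Resource": "*"},
    ],
}

def is_allowed(action, resource):
    """Simplified check: allowed if any Allow statement matches both
    the action and the resource (wildcards match shell-style)."""
    for stmt in policy["Statement"]:
        if stmt["Effect"] != "Allow":
            continue
        actions = stmt["Action"] if isinstance(stmt["Action"], list) else [stmt["Action"]]
        resources = stmt["Resource"] if isinstance(stmt["Resource"], list) else [stmt["Resource"]]
        if any(fnmatchcase(action, a) for a in actions) and \
           any(fnmatchcase(resource, r) for r in resources):
            return True
    return False

task_arn = "arn:aws:datasync:us-east-2:111222333444:task/task-0abc"  # placeholder ID
print(is_allowed("datasync:DescribeTask", task_arn))  # True (statement 1)
print(is_allowed("datasync:ListTasks", task_arn))     # True (statement 2)
print(is_allowed("datasync:DeleteTask", task_arn))    # False (no statement matches)
```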
To use these policies in the console, you must also use the managed policy AWSDataSyncFullAccess.

Topics
• Example 1: Create a trust relationship that allows DataSync to access your Amazon S3 bucket
• Example 2: Allow DataSync to read and write to your Amazon S3
bucket
• Example 3: Allow DataSync to upload logs to CloudWatch log groups

Example 1: Create a trust relationship that allows DataSync to access your Amazon S3 bucket

The following is an example of a trust policy that allows DataSync to assume an IAM role. This role allows DataSync to access an Amazon S3 bucket. To prevent the cross-service confused deputy problem, we recommend using the aws:SourceArn and aws:SourceAccount global condition context keys in the policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "datasync.amazonaws.com"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": "123456789012"
                },
                "StringLike": {
                    "aws:SourceArn": "arn:aws:datasync:us-east-2:123456789012:*"
                }
            }
        }
    ]
}

Example 2: Allow DataSync to read and write to your Amazon S3 bucket

The following example policy grants DataSync the minimum permissions to read and write data to an S3 bucket that's used as a destination location.

Note
The value for aws:ResourceAccount should be the account ID that owns the Amazon S3 bucket specified in the policy.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:GetBucketLocation",
                "s3:ListBucket",
                "s3:ListBucketMultipartUploads"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceAccount": "123456789012"
                }
            }
        },
        {
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:GetObjectTagging",
                "s3:GetObjectVersion",
                "s3:GetObjectVersionTagging",
                "s3:ListMultipartUploadParts",
                "s3:PutObject",
                "s3:PutObjectTagging"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceAccount": "123456789012"
                }
            }
        }
    ]
}

Example 3: Allow DataSync to upload logs to CloudWatch log groups

DataSync requires permissions to be able to upload logs to your Amazon CloudWatch log groups. You can use CloudWatch log groups to monitor and debug your tasks. For an example of an IAM policy that grants such permissions, see Allowing DataSync to upload logs to a CloudWatch log group.

Using service-linked roles for DataSync

AWS DataSync uses AWS Identity and Access Management (IAM) service-linked roles. A service-linked role is a unique type of IAM role that is linked directly to DataSync. Service-linked roles are predefined by DataSync and include all the permissions that the service requires to call other AWS services on your behalf.

Topics
• Using roles for DataSync Discovery
• Using roles for DataSync

Using roles for DataSync Discovery

AWS DataSync uses AWS Identity and Access Management (IAM) service-linked roles. A service-linked role is a unique type of IAM role that is linked directly to DataSync. Service-linked roles are predefined by DataSync and include all the permissions that the service requires to call other AWS services on your behalf.
A service-linked role makes setting up DataSync easier because you don’t have to manually add the necessary permissions. DataSync defines the permissions of its service-linked roles, and unless defined otherwise, only DataSync can assume its roles. The defined permissions include the trust policy and the permissions policy, and that permissions policy cannot be attached to any other IAM entity.

You can delete a service-linked role only after first deleting its related resources. This protects your DataSync resources because you can't inadvertently remove permission to access the resources.

For information about other services that support service-linked roles, see AWS services that work with IAM and look for the services that have Yes in the Service-linked roles column. Choose a Yes with a link to view the service-linked role documentation for that service.

Service-linked role permissions for DataSync

DataSync uses the service-linked role named AWSServiceRoleForDataSyncDiscovery – Allows DataSync Discovery to use AWS Secrets Manager and Amazon CloudWatch.
The AWSServiceRoleForDataSyncDiscovery service-linked role trusts the following services to assume the role:

• discovery-datasync.amazonaws.com

The role permissions policy named AWSDataSyncDiscoveryServiceRolePolicy allows DataSync to complete the following actions on the specified resources:

{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "secretsmanager:GetSecretValue"
        ],
        "Resource": [
            "arn:*:secretsmanager:*:*:secret:datasync!*"
        ],
        "Condition": {
            "StringEquals": {
                "secretsmanager:ResourceTag/aws:secretsmanager:owningService": "datasync",
                "aws:ResourceAccount": "${aws:PrincipalAccount}"
            }
        }
    },
    {
        "Effect": "Allow",
        "Action": [
            "logs:CreateLogGroup",
            "logs:CreateLogStream"
        ],
        "Resource": [
            "arn:*:logs:*:*:log-group:/aws/datasync*"
        ]
    },
    {
        "Effect": "Allow",
        "Action": [
            "logs:PutLogEvents"
        ],
        "Resource": [
            "arn:*:logs:*:*:log-group:/aws/datasync:log-stream:*"
        ]
    }]
}

You must configure permissions to allow your users, groups, or roles to create, edit, or delete a service-linked role. For more information, see Service-linked role permissions in the IAM User Guide.

Creating a service-linked role for DataSync

You don't need to manually create a service-linked role. When you add a storage system in the AWS Management Console, the AWS CLI, or the AWS API, DataSync creates the service-linked role for you.

You can also use the IAM console to create a service-linked role with the DataSync Discovery use case. In the AWS CLI or the AWS API, use IAM to create a service-linked role with the discovery-datasync.amazonaws.com service name. For more information, see Creating a service-linked role in the IAM User Guide.

If you delete this service-linked role and need to create it again, you can use the same process to recreate the role in your account. When you add a storage system, DataSync creates the service-linked role for you again.
Editing a service-linked role for DataSync

DataSync does not allow you to edit the AWSServiceRoleForDataSyncDiscovery service-linked role. After you create a service-linked role, you cannot change the name of the role because various entities might reference the role. However, you can edit the description of the role using IAM. For more information, see Editing a service-linked role in the IAM User Guide.

Deleting a service-linked role for DataSync

If you no longer need to use a feature or service that requires a service-linked role, we recommend that you delete that role. That way you don’t have an unused entity that is not actively monitored or maintained. However, you must clean up the resources for your service-linked role before you can manually delete it.

Note
If the DataSync service is using the role when you try to delete the resources, then the deletion might fail. If that happens, wait for a few minutes and try the operation again.

To delete DataSync resources used by the AWSServiceRoleForDataSyncDiscovery role

1. Remove the on-premises storage systems that you're using with DataSync Discovery.
2. Delete the service-linked role using IAM. Use the IAM console, the AWS CLI, or the AWS API to delete the AWSServiceRoleForDataSyncDiscovery service-linked role. For more information, see Deleting a service-linked role in the IAM User Guide.

Supported Regions for DataSync service-linked roles

DataSync supports using service-linked roles in all of the Regions where the service is available. For more information, see AWS Regions and endpoints.

Using roles for DataSync

AWS DataSync uses AWS Identity and Access Management (IAM) service-linked roles.
A service-linked role is a unique type of IAM role that is linked directly to DataSync. Service-linked roles are predefined by DataSync and include all the permissions that the service requires to call other AWS services on your behalf.

A service-linked role makes setting up DataSync easier because you don’t have to manually add the necessary permissions. DataSync defines the permissions of its service-linked roles, and unless defined otherwise, only DataSync can assume its roles. The defined permissions include the trust policy and the permissions policy, and that permissions policy cannot be attached to any other IAM entity.

You can delete a service-linked role only after first deleting its related resources. This protects your DataSync resources because you can't inadvertently remove permission to access the resources.

For information about other services that support service-linked roles, see AWS services that work with IAM and look for the services that have Yes in the Service-linked roles column. Choose a Yes with a link to view the service-linked role documentation for that service.

Service-linked role permissions for DataSync

DataSync uses the service-linked role named AWSServiceRoleForDataSync – Allows DataSync to perform essential operations for transfer task execution, including reading secrets from AWS Secrets Manager, and creating CloudWatch log groups and events.

The AWSServiceRoleForDataSync service-linked role trusts the following services to assume the role:

• datasync.amazonaws.com

The service-linked role uses the AWS managed
policy named AWSDataSyncServiceRolePolicy, which allows DataSync to complete the following actions on the specified resources:

{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DataSyncCloudWatchLogCreateAccess",
        "Effect": "Allow",
        "Action": [
            "logs:CreateLogGroup",
            "logs:CreateLogStream"
        ],
        "Resource": [
            "arn:*:logs:*:*:log-group:/aws/datasync*"
        ]
    },
    {
        "Sid": "DataSyncCloudWatchLogStreamUpdateAccess",
        "Effect": "Allow",
        "Action": [
            "logs:PutLogEvents"
        ],
        "Resource": [
            "arn:*:logs:*:*:log-group:/aws/datasync*:log-stream:*"
        ]
    },
    {
        "Sid": "DataSyncSecretsManagerReadAccess",
        "Effect": "Allow",
        "Action": [
            "secretsmanager:DescribeSecret",
            "secretsmanager:GetSecretValue"
        ],
        "Resource": [
            "arn:*:secretsmanager:*:*:secret:aws-datasync!*"
        ],
        "Condition": {
            "StringEquals": {
                "secretsmanager:ResourceTag/aws:secretsmanager:owningService": "aws-datasync",
                "aws:ResourceAccount": "${aws:PrincipalAccount}"
            }
        }
    }]
}

You must configure permissions to allow your users, groups, or roles to create, edit, or delete a service-linked role. For more information, see Service-linked role permissions in the IAM User Guide.
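The Resource entries in this policy use * wildcards. The following sketch shows, approximately, which CloudWatch log group ARNs the first statement's pattern covers; it uses shell-style matching from Python's standard library as a stand-in for IAM's wildcard semantics, and the Region, account ID, and log group names are placeholders.

```python
from fnmatch import fnmatchcase

# Resource pattern from the policy's first statement.
LOG_GROUP_PATTERN = "arn:*:logs:*:*:log-group:/aws/datasync*"

def covered(arn: str) -> bool:
    """Approximate check: does a log group ARN fall under the
    policy's Resource pattern? (Shell-style matching, not IAM itself.)"""
    return fnmatchcase(arn, LOG_GROUP_PATTERN)

# A DataSync log group matches; an unrelated log group does not.
print(covered("arn:aws:logs:us-east-1:123456789012:log-group:/aws/datasync/tasks"))  # True
print(covered("arn:aws:logs:us-east-1:123456789012:log-group:/aws/other"))           # False
```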
Creating a service-linked role for DataSync

You don't need to manually create a service-linked role. When you create a DataSync task in the AWS Management Console, the AWS CLI, or the AWS API, DataSync creates the service-linked role for you.

If you delete this service-linked role and then need to create it again, you can use the same process to recreate the role in your account. In the AWS CLI or the AWS API, create a service-linked role with the datasync.amazonaws.com service name. For more information, see Creating a service-linked role in the IAM User Guide. Alternatively, when you create a DataSync task, DataSync creates the service-linked role for you again.

Editing a service-linked role for DataSync

DataSync does not allow you to edit the AWSServiceRoleForDataSync service-linked role. After you create a service-linked role, you cannot change the name of the role because various entities might reference the role. However, you can edit the description of the role using IAM. For more information, see Editing a service-linked role in the IAM User Guide.

Deleting a service-linked role for DataSync

If you no longer need to use a feature or service that requires a service-linked role, we recommend that you delete that role. That way you don't have an unused entity that is not actively monitored or maintained. However, you must clean up your service-linked role before you can manually delete it.

Cleaning up a service-linked role

Before you can use IAM to delete a service-linked role, you must first delete any resources used by the role.

Note
If the DataSync service is using the role when you try to delete the resources, then the deletion might fail. If that happens, wait for a few minutes and try the operation again.
To delete DataSync resources used by the AWSServiceRoleForDataSync

1. Delete the DataSync agents used by the task (if there are any).
2. Delete the task's locations.
3. Delete the task.

Manually delete the service-linked role

Use the IAM console, the AWS CLI, or the AWS API to delete the AWSServiceRoleForDataSync service-linked role. For more information, see Deleting a service-linked role in the IAM User Guide.

Supported Regions for DataSync service-linked roles

DataSync supports using service-linked roles in all of the Regions where the service is available. For more information, see AWS Regions and endpoints.

Permissions for tagging DataSync resources during creation

Some resource-creating AWS DataSync API actions enable you to specify tags when you create the resource. You can use resource tags to implement attribute-based access control (ABAC). For more information, see What is ABAC for AWS? in the IAM User Guide.

To enable users to tag resources on creation, they must have permissions to use the action that creates the resource (such as datasync:CreateAgent or datasync:CreateTask). If tags are specified in the resource-creating action, users must also have explicit permissions to use the datasync:TagResource action.

The datasync:TagResource action is only evaluated if tags are applied during the resource-creating action. Therefore, a user that has permissions to create a resource (assuming there are no tagging conditions) doesn't require permissions to use the datasync:TagResource action if no tags are specified in the request. However, if the user attempts to create a resource with tags,
the request fails if the user doesn't have permissions to use the datasync:TagResource action.

Example IAM policy statements

Use the following example IAM policy statements to grant TagResource permissions to users creating DataSync resources.

The following statement allows users to tag a DataSync agent when they create the agent.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "datasync:TagResource",
            "Resource": "arn:aws:datasync:region:account-id:agent/*"
        }
    ]
}

The following statement allows users to tag a DataSync location when they create the location.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "datasync:TagResource",
            "Resource": "arn:aws:datasync:region:account-id:location/*"
        }
    ]
}

The following statement allows users to tag a DataSync task when they create the task.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "datasync:TagResource",
            "Resource": "arn:aws:datasync:region:account-id:task/*"
        }
    ]
}

Cross-service confused deputy prevention

The confused deputy problem is a security issue where an entity that doesn't have permission to perform an action can coerce a more-privileged entity to perform the action.
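The three example statements above differ only in the resource type in the ARN, so they can be generated programmatically. This is an illustrative sketch (tag_statement is a hypothetical helper; the region and account-id placeholders mirror the examples above and would be replaced with real values in practice):

```python
import json

def tag_statement(resource_type, region="region", account_id="account-id"):
    """Build one Allow statement for datasync:TagResource on a resource type."""
    return {
        "Effect": "Allow",
        "Action": "datasync:TagResource",
        "Resource": f"arn:aws:datasync:{region}:{account_id}:{resource_type}/*",
    }

# One statement each for agents, locations, and tasks, as in the examples above.
policy = {
    "Version": "2012-10-17",
    "Statement": [tag_statement(t) for t in ("agent", "location", "task")],
}
print(json.dumps(policy, indent=4))
```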
In AWS, cross-service impersonation can result in the confused deputy problem. Cross-service impersonation can occur when one service (the calling service) calls another service (the called service). The calling service can be manipulated to use its permissions to act on another customer's resources in a way it should not otherwise have permission to access. To prevent this, AWS provides tools that help you protect your data for all services with service principals that have been given access to resources in your account.

We recommend using the aws:SourceArn and aws:SourceAccount global condition context keys in resource policies to limit the permissions that AWS DataSync gives another service to the resource. If you use both global condition context keys and the aws:SourceArn value contains the account ID, the aws:SourceAccount value and the account in the aws:SourceArn value must use the same account ID when used in the same policy statement. Use aws:SourceArn if you want only one resource to be associated with the cross-service access. Use aws:SourceAccount if you want any resource in that account to be associated with the cross-service use.

The value of aws:SourceArn must include the ARN of the DataSync location with which DataSync is allowed to assume the IAM role. The most effective way to protect against the confused deputy problem is to use the aws:SourceArn key with the full ARN of the resource. If you don't know the full ARN or if you're specifying multiple resources, use wildcard characters (*) for the unknown portions. Here are some examples of how to do this for DataSync:

• To limit the trust policy to an existing DataSync location, include the full location ARN in the policy. DataSync will assume the IAM role only when dealing with that particular location.
• When creating an Amazon S3 location for DataSync, you don't know the location's ARN.
In these scenarios, use the following format for the aws:SourceArn key: arn:aws:datasync:us-east-2:123456789012:*. This format validates the partition (aws), account ID, and Region.

The following full example shows how you can use the aws:SourceArn and aws:SourceAccount global condition context keys in a trust policy to prevent the confused deputy problem with DataSync.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "datasync.amazonaws.com"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": "123456789012"
                },
                "StringLike": {
                    "aws:SourceArn": "arn:aws:datasync:us-east-2:123456789012:*"
                }
            }
        }
    ]
}

For more example policies that show how you can use the aws:SourceArn and aws:SourceAccount global condition context keys with DataSync, see the following topics:

• Create a trust relationship that allows DataSync to access your Amazon S3 bucket
• Configure an IAM role to access your Amazon S3 bucket

DataSync API permissions: Actions and resources

When creating AWS Identity and Access Management (IAM) policies, this page can help you understand the relationship between AWS DataSync API operations, the corresponding actions that you can grant permissions to perform, and the AWS resources for which you can grant the permissions.

In general, here's how you add DataSync permissions to your policy:

• Specify an action in the Action element. The value includes a datasync: prefix and the API operation name. For example, datasync:CreateTask.
• Specify an AWS resource related to the action in the Resource element.

You can also use AWS condition keys in your DataSync policies. For a complete list of AWS keys, see Available keys in the IAM User Guide. For a list of DataSync resources and their Amazon Resource Name (ARN) formats, see DataSync resources and operations.

DataSync API operations and corresponding actions

AddStorageSystem
    Action: datasync:AddStorageSystem
    Resource: None
    Actions:
    • kms:Decrypt
    • iam:CreateServiceLinkedRole
    Resource: *
    Action: secretsmanager:CreateSecret
    Resource: arn:aws:secretsmanager:region:account-id:secret:datasync!*

CancelTaskExecution
    Action: datasync:CancelTaskExecution
    Resource: arn:aws:datasync:region:account-id:task/task-id/execution/exec-id

CreateAgent
    Action: datasync:CreateAgent
    Resource: None

CreateLocationAzureBlob
    Action: datasync:CreateLocationAzureBlob
    Resource: arn:aws:datasync:region:account-id:agent/agent-id

CreateLocationEfs
    Action: datasync:CreateLocationEfs
    Resource: None

CreateLocationFsxLustre
    Action: datasync:CreateLocationFsxLustre
    Resource: None

CreateLocationFsxOntap
    Action: datasync:CreateLocationFsxOntap
    Resource: None

CreateLocationFsxOpenZfs
    Action: datasync:CreateLocationFsxOpenZfs
    Resource: None

CreateLocationFsxWindows
    Action: datasync:CreateLocationFsxWindows
    Resource: None

CreateLocationHdfs
    Action: datasync:CreateLocationHdfs
    Resource: arn:aws:datasync:region:account-id:agent/agent-id

CreateLocationNfs
    Action: datasync:CreateLocationNfs
    Resource: arn:aws:datasync:region:account-id:agent/agent-id

CreateLocationObjectStorage
    Action: datasync:CreateLocationObjectStorage
    Resource: arn:aws:datasync:region:account-id:agent/agent-id

CreateLocationS3
    Action: datasync:CreateLocationS3
    Resource: arn:aws:datasync:region:account-id:agent/agent-id (only for Amazon S3 on Outposts)

CreateLocationSmb
    Action: datasync:CreateLocationSmb
    Resource: arn:aws:datasync:region:account-id:agent/agent-id

CreateTask
    Action: datasync:CreateTask
    Resources:
    • arn:aws:datasync:region:account-id:location/source-location-id
    • arn:aws:datasync:region:account-id:location/destination-location-id

DeleteAgent
    Action: datasync:DeleteAgent
    Resource: arn:aws:datasync:region:account-id:agent/agent-id

DeleteLocation
    Action: datasync:DeleteLocation
    Resource: arn:aws:datasync:region:account-id:location/location-id

DeleteTask
    Action: datasync:DeleteTask
    Resource: arn:aws:datasync:region:account-id:task/task-id

DescribeAgent
    Action: datasync:DescribeAgent
    Resource: arn:aws:datasync:region:account-id:agent/agent-id

DescribeDiscoveryJob
    Action: datasync:DescribeDiscoveryJob
    Resource: arn:aws:datasync:region:account-id:system/storage-system-id/job/discovery-job-id

DescribeLocationAzureBlob
    Action: datasync:DescribeLocationAzureBlob
    Resource: arn:aws:datasync:region:account-id:location/location-id

DescribeLocationEfs
    Action: datasync:DescribeLocationEfs
    Resource: arn:aws:datasync:region:account-id:location/location-id

DescribeLocationFsxLustre
    Action: datasync:DescribeLocationFsxLustre
    Resource: arn:aws:datasync:region:account-id:location/location-id

DescribeLocationFsxOntap
    Action: datasync:DescribeLocationFsxOntap
    Resource: arn:aws:datasync:region:account-id:location/location-id

DescribeLocationFsxOpenZfs
    Action: datasync:DescribeLocationFsxOpenZfs
    Resource: arn:aws:datasync:region:account-id:location/location-id

DescribeLocationFsxWindows
    Action: datasync:DescribeLocationFsxWindows
    Resource: arn:aws:datasync:region:account-id:location/location-id

DescribeLocationHdfs
    Action: datasync:DescribeLocationHdfs
    Resource: arn:aws:datasync:region:account-id:location/location-id

DescribeLocationNfs
    Action: datasync:DescribeLocationNfs
    Resource: arn:aws:datasync:region:account-id:location/location-id

DescribeLocationObjectStorage
    Action: datasync:DescribeLocationObjectStorage
    Resource: arn:aws:datasync:region:account-id:location/location-id

DescribeLocationS3
    Action: datasync:DescribeLocationS3
    Resource: arn:aws:datasync:region:account-id:location/location-id

DescribeLocationSmb
    Action: datasync:DescribeLocationSmb
    Resource: arn:aws:datasync:region:account-id:location/location-id

DescribeStorageSystem
    Action: datasync:DescribeStorageSystem
    Resource: arn:aws:datasync:region:account-id:system/storage-system-id
    Action: secretsmanager:DescribeSecret
    Resource: arn:aws:secretsmanager:region:account-id:secret:datasync!*

DescribeStorageSystemResourceMetrics
    Action: datasync:DescribeStorageSystemResourceMetrics
    Resource: arn:aws:datasync:region:account-id:system/storage-system-id/job/discovery-job-id

DescribeStorageSystemResources
    Action: datasync:DescribeStorageSystemResources
    Resource: arn:aws:datasync:region:account-id:system/storage-system-id/job/discovery-job-id

DescribeTask
    Action: datasync:DescribeTask
    Resource: arn:aws:datasync:region:account-id:task/task-id

DescribeTaskExecution
    Action: datasync:DescribeTaskExecution
    Resource: arn:aws:datasync:region:account-id:task/task-id/execution/exec-id

GenerateRecommendations
    Action: datasync:GenerateRecommendations
    Resource: arn:aws:datasync:region:account-id:system/storage-system-id/job/discovery-job-id

ListAgents
    Action: datasync:ListAgents
    Resource: None

ListDiscoveryJobs
    Action: datasync:ListDiscoveryJobs
    Resource: arn:aws:datasync:region:account-id:system/storage-system-id

ListLocations
    Action: datasync:ListLocations
    Resource: None

ListTagsForResource
    Action: datasync:ListTagsForResource
    Resources:
    • arn:aws:datasync:region:account-id:agent/agent-id
    • arn:aws:datasync:region:account-id:task/task-id
    • arn:aws:datasync:region:account-id:location/location-id

ListTaskExecutions
    Action: datasync:ListTaskExecutions
    Resource: arn:aws:datasync:region:account-id:task/task-id

ListTasks
    Action: datasync:ListTasks
    Resource: None

RemoveStorageSystem
    Action: datasync:RemoveStorageSystem
    Resource: arn:aws:datasync:region:account-id:system/storage-system-id
    Action: secretsmanager:DeleteSecret
    Resource: arn:aws:secretsmanager:region:account-id:secret:datasync!*

StartDiscoveryJob
    Action: datasync:StartDiscoveryJob
    Resource: arn:aws:datasync:region:account-id:system/storage-system-id

StopDiscoveryJob
    Action: datasync:StopDiscoveryJob
    Resource: arn:aws:datasync:region:account-id:system/storage-system-id/job/discovery-job-id

StartTaskExecution
    Action: datasync:StartTaskExecution
    Resource: arn:aws:datasync:region:account-id:task/task-id

TagResource
    Action: datasync:TagResource
    Resources:
    • arn:aws:datasync:region:account-id:agent/agent-id
    • arn:aws:datasync:region:account-id:task/task-id
    • arn:aws:datasync:region:account-id:location/location-id

UntagResource
    Action: datasync:UntagResource
    Resources:
    • arn:aws:datasync:region:account-id:agent/agent-id
    • arn:aws:datasync:region:account-id:task/task-id
    • arn:aws:datasync:region:account-id:location/location-id

UpdateAgent
    Action: datasync:UpdateAgent
    Resource: arn:aws:datasync:region:account-id:agent/agent-id

UpdateDiscoveryJob
    Action: datasync:UpdateDiscoveryJob
    Resource: arn:aws:datasync:region:account-id:system/storage-system-id/job/discovery-job-id

UpdateLocationAzureBlob
    Action: datasync:UpdateLocationAzureBlob
    Resources:
    • arn:aws:datasync:region:account-id:agent/agent-id
    • arn:aws:datasync:region:account-id:location/location-id

UpdateLocationHdfs
    Action: datasync:UpdateLocationHdfs
    Resources:
    • arn:aws:datasync:region:account-id:agent/agent-id
    • arn:aws:datasync:region:account-id:location/location-id

UpdateLocationNfs
    Action: datasync:UpdateLocationNfs
    Resource: arn:aws:datasync:region:account-id:location/location-id

UpdateLocationObjectStorage
    Action: datasync:UpdateLocationObjectStorage
    Resources:
    • arn:aws:datasync:region:account-id:agent/agent-id
    • arn:aws:datasync:region:account-id:location/location-id

UpdateLocationSmb
    Action: datasync:UpdateLocationSmb
    Resources:
    • arn:aws:datasync:region:account-id:agent/agent-id
    • arn:aws:datasync:region:account-id:location/location-id

UpdateStorageSystem
    Action: datasync:UpdateStorageSystem
    Resources:
    • arn:aws:datasync:region:account-id:agent/agent-id
    • arn:aws:datasync:region:account-id:system/storage-system-id

UpdateTask
    Action: datasync:UpdateTask
    Resource: arn:aws:datasync:region:account-id:task/task-id

UpdateTaskExecution
    Action: datasync:UpdateTaskExecution
    Resource: arn:aws:datasync:region:account-id:task/task-id/execution/exec-id

Compliance validation for AWS DataSync

To learn whether an AWS service is within the scope of specific compliance programs, see AWS services in Scope by Compliance Program and choose the compliance program that you are interested in. For general information, see AWS Compliance Programs.

You can download third-party audit reports using AWS Artifact. For more information, see Downloading Reports in AWS Artifact.
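The operation-to-action mapping in the API permissions reference above follows a uniform pattern that can be captured in a small lookup table. This is an illustrative sketch covering a handful of operations; the ARN templates come from the reference, while API_PERMISSIONS, required_permission, and the example task ID are hypothetical names introduced for the sketch:

```python
# Maps a DataSync API operation to its IAM action and resource ARN template
# (None when no specific resource is required). Illustrative subset only.
API_PERMISSIONS = {
    "CreateAgent": ("datasync:CreateAgent", None),
    "DeleteTask": ("datasync:DeleteTask",
                   "arn:aws:datasync:{region}:{account_id}:task/{task_id}"),
    "DescribeAgent": ("datasync:DescribeAgent",
                      "arn:aws:datasync:{region}:{account_id}:agent/{agent_id}"),
}

def required_permission(operation, **ids):
    """Return the (action, resource ARN) pair needed to call an operation."""
    action, template = API_PERMISSIONS[operation]
    resource = template.format(**ids) if template else None
    return action, resource

print(required_permission("DeleteTask", region="us-east-2",
                          account_id="123456789012", task_id="example-task-id"))
```

A table like this can feed a policy generator that emits least-privilege statements for only the operations a user actually needs.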
Your compliance responsibility when using AWS services is determined by the sensitivity of your data, your company's compliance objectives, and applicable laws and regulations. AWS provides the following resources to help with compliance:

• Security Compliance & Governance – These solution implementation guides discuss architectural considerations and provide steps for deploying security and compliance features.
• HIPAA Eligible Services Reference – Lists HIPAA eligible services. Not all AWS services are HIPAA eligible.
• AWS Compliance Resources – This collection of workbooks and guides might apply to your industry and location.
• AWS Customer Compliance Guides – Understand the shared responsibility model through the lens of compliance. The guides summarize the best practices for securing AWS services and map the guidance to security controls across multiple frameworks (including National Institute of Standards and Technology (NIST),
Payment Card Industry Security Standards Council (PCI), and International Organization for Standardization (ISO)).
• Evaluating Resources with Rules in the AWS Config Developer Guide – The AWS Config service assesses how well your resource configurations comply with internal practices, industry guidelines, and regulations.
• AWS Security Hub – This AWS service provides a comprehensive view of your security state within AWS. Security Hub uses security controls to evaluate your AWS resources and to check your compliance against security industry standards and best practices. For a list of supported services and controls, see Security Hub controls reference.
• Amazon GuardDuty – This AWS service detects potential threats to your AWS accounts, workloads, containers, and data by monitoring your environment for suspicious and malicious activities. GuardDuty can help you address various compliance requirements, like PCI DSS, by meeting intrusion detection requirements mandated by certain compliance frameworks.
• AWS Audit Manager – This AWS service helps you continuously audit your AWS usage to simplify how you manage risk and compliance with regulations and industry standards.
Resilience in AWS DataSync

The AWS global infrastructure is built around AWS Regions and Availability Zones. AWS Regions provide multiple physically separated and isolated Availability Zones, which are connected with low-latency, high-throughput, and highly redundant networking. With Availability Zones, you can design and operate applications and databases that automatically fail over between Availability Zones without interruption. Availability Zones are more highly available, fault tolerant, and scalable than traditional single or multiple data center infrastructures.

Note
If an Availability Zone you're migrating data to or from fails while you're running a DataSync task, the task also fails.

For more information about AWS Regions and Availability Zones, see AWS global infrastructure.

Infrastructure security in AWS DataSync

As a managed service, AWS DataSync is protected by AWS global network security. For information about AWS security services and how AWS protects infrastructure, see AWS Cloud Security. To design your AWS environment using the best practices for infrastructure security, see Infrastructure Protection in Security Pillar AWS Well-Architected Framework.

You use AWS published API calls to access DataSync through the network. Clients must support the following:

• Transport Layer Security (TLS). We require TLS 1.2 and recommend TLS 1.3.
• Cipher suites with perfect forward secrecy (PFS) such as DHE (Ephemeral Diffie-Hellman) or ECDHE (Elliptic Curve Ephemeral Diffie-Hellman). Most modern systems such as Java 7 and later support these modes.

Additionally, requests must be signed by using an access key ID and a secret access key that is associated with an IAM principal. Or you can use the AWS Security Token Service (AWS STS) to generate temporary security credentials to sign requests.
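For clients built on Python's standard library, the TLS requirement above can be enforced explicitly. A minimal sketch; TLS 1.3 is negotiated automatically when both peers support it:

```python
import ssl

# A client-side context with certificate verification on (the default for
# create_default_context) that refuses anything older than TLS 1.2.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.minimum_version.name)  # TLSv1_2
```

A context configured this way can be passed to http.client, urllib, or any socket wrapper that accepts an SSLContext.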
AWS DataSync quotas

Find out about resource quotas and limits when working with AWS DataSync.

Storage system, file, and object limits

The following table describes the limits that DataSync has when working with storage systems, files, and objects.

• Maximum total file path length: 4,096 bytes
• Maximum file path component (file name, directory, or subdirectory) length: 255 bytes
• Maximum length of Windows domain: 253 characters
• Maximum length of server hostname: 255 characters
• Maximum Amazon S3 object name length: 1,024 UTF-8 characters

DataSync quotas

The following table describes the quotas for DataSync resources in a specific AWS account and Region.

• Maximum number of tasks you can create
  Quota: 100
  Adjustable: Yes

• (Enhanced mode tasks) Maximum number of source and destination objects that DataSync can work with per task execution
  Quota: Virtually unlimited
  Adjustable: N/A
  For more information, see How DataSync transfers files, objects, and directories.

• (Basic mode tasks) Maximum number of source and destination files, objects, and directories that DataSync can work with per task execution between on-premises, self-managed, or other cloud storage and AWS storage services
  Quota: 50 million
  Adjustable: Yes
  For more information, see How DataSync transfers files, objects, and directories.
  Important: Remember the following about this quota:
  • If you transfer Amazon S3 objects with prefixes, the prefixes are treated as directories and count towards the quota. For example, DataSync would consider s3://bucket/foo/bar.txt as two directories (./ and ./foo/) and one object (bar.txt).
  • If your task is working with more than 20 million files, objects, or directories, make sure that you allocate a minimum of 64 GB of RAM to your DataSync agent. For more information, see agent requirements for DataSync transfers.
  Tip: Instead of requesting an increase, you can create tasks that focus on specific directories using include and exclude filters. For more information, see filtering the data transferred by DataSync.

• (Basic mode tasks) Maximum number of source and destination files, objects, and directories that DataSync can work with per task execution between AWS storage services
  Quota: 25 million
  Adjustable: Yes
  For more information, see How DataSync transfers files, objects, and directories.
  Important: If you transfer Amazon S3 objects with prefixes, the prefixes are treated as directories and count towards the quota. For example, DataSync would consider s3://bucket/foo/bar.txt as two directories (./ and ./foo/) and one object (bar.txt).
  Tip: Instead of requesting an increase, you can create tasks that focus on specific directories using include and exclude filters.
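The prefix-counting rule in the Important notes above can be sketched as a small function. count_quota_items is a hypothetical helper that applies only the counting rule from the example (the root ./ counts as one directory and each prefix component adds another; the final component is the object):

```python
def count_quota_items(key):
    """Count the (directories, objects) DataSync attributes to one S3 key.

    Per the example above, "foo/bar.txt" counts as two directories
    ("./" and "./foo/") and one object ("bar.txt").
    """
    parts = key.strip("/").split("/")
    # The root "./" plus one directory per prefix component; because the last
    # component is the object, the directory count equals len(parts).
    return len(parts), 1

print(count_quota_items("foo/bar.txt"))  # (2, 1)
```

Summing these counts over a bucket listing gives a rough estimate of how close a Basic mode task is to its quota.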
For more information, see filtering the data transferred by DataSync.

• Maximum throughput per task (for transfers that use a DataSync agent)
  Quota: 10 Gbps
  Adjustable: No

• Maximum throughput per task (for transfers that don't use a DataSync agent)
  Quota: 5 Gbps
  Adjustable: No

• Maximum number of characters you can include in a task filter
  Quota: 102,400 characters
  Adjustable: No
  Note: If you're using the DataSync console, this limit includes all the characters combined in your include and exclude patterns.

• Maximum number of queued executions for a single task
  Quota: 50
  Adjustable: No

• Maximum number of concurrent Enhanced mode task executions
  Quota: 120
  Adjustable: No

• Maximum number of days a task execution's history is retained
  Quota: 30
  Adjustable: No

DataSync Discovery quotas

The following table describes the quotas for DataSync Discovery in a specific AWS account and Region.

• Maximum number of storage systems you can use with DataSync Discovery
  Quota: 10
  Adjustable: No

• Maximum number of storage systems a DataSync agent can access at a time
  Quota: 4
  Adjustable: No

Request a quota increase

You can request an increase for some DataSync quotas. Increases aren't granted right away and might take a couple of days to take effect.

To request a quota increase

1. Open the Service Quotas console at https://console.aws.amazon.com/servicequotas/.
2. In the navigation pane, choose AWS services and then choose AWS DataSync.
3. Choose the quota that you want to increase, then choose Request increase at account-level.
4. Enter the total amount that you want the quota to be, then choose Request.

If you need to increase a different quota, fill out a separate request.

Troubleshooting AWS DataSync issues

Use the following information to troubleshoot AWS DataSync issues and errors.
Topics
• Troubleshooting issues with DataSync agents
• Troubleshooting issues with DataSync locations
• Troubleshooting issues with DataSync tasks
• Troubleshooting data verification issues
• Troubleshooting higher than expected S3 storage costs with DataSync

Troubleshooting issues with DataSync agents

Use the following information to help you troubleshoot issues with AWS DataSync agents. Some of these issues can include:

• Trouble connecting to an Amazon EC2 agent's local console
• Failing to retrieve an agent's activation key
• Issues activating an agent with a VPC service endpoint
• Discovering an agent is offline

How do I connect to an Amazon EC2 agent's local console?

To connect to an Amazon EC2 agent's local console, you must use SSH. Make sure that your EC2 instance's security group allows access with SSH (TCP port 22).

In a terminal, run the following ssh command to connect to the instance:

ssh -i /path/key-pair-name.pem instance-user-name@instance-public-ip-address

• For /path/key-pair-name, specify the path and file name (.pem) of the private key required to connect to your instance.
• For instance-user-name, specify admin.
• For instance-public-ip-address, specify the public IP address of your instance.

What does the Failed to retrieve agent activation key error mean?

When activating your DataSync agent, the
agent connects to the service endpoint that you specify to request an activation key. This error likely means that your network security settings are blocking the connection.

Action to take

If you're using a virtual private cloud (VPC) service endpoint, verify that your security group settings allow your agent to connect to the VPC endpoint. For information about required ports, see Network requirements for VPC service endpoints.

If you're using a public or Federal Information Processing Standard (FIPS) endpoint, check that your firewall and router settings allow your agent to connect to the endpoint. For information, see Network requirements for public or FIPS service endpoints.

I still can't activate an agent by using a VPC service endpoint

If you're still having issues activating a DataSync agent with a VPC service endpoint, see I don't know what's going on with my agent. Can someone help me?

What do I do if my agent is offline?

Your DataSync agent can be offline for a few reasons, but you might be able to get it back online. Before you delete the agent and create a new one, go through the following checklist to help you understand what might have happened.
• Contact your backup team – If your agent is offline because its virtual machine (VM) was restored from a snapshot or backup, you might need to replace the agent.
• Check if the agent's VM or Amazon EC2 instance is off – Depending on the type of agent that you're using, try turning the VM or EC2 instance back on if it's off. Once it's on again, test your agent's network connectivity to AWS.
• Verify your agent meets the minimum hardware requirements – Your agent might be offline because its VM or EC2 instance configuration was accidentally changed since the agent was activated. For example, if your VM no longer has the minimum required memory or space, the agent might appear as offline. For more information, see Requirements for AWS DataSync agents.
• Wait for agent-related software updates to finish – Your agent might go offline briefly following software updates provided by AWS. If you believe this is why the agent is offline, wait a short period, then check if the agent is back online.
• Check your VPC service endpoint settings – If your offline agent is using a public service endpoint and is also in the same VPC where you created a VPC service endpoint for DataSync, you might need to disable private DNS support for that VPC endpoint.

If none of these seem to be the reason that the agent is offline, you likely need to replace the agent.

I don't know what's going on with my agent. Can someone help me?

You can allow AWS Support to access your DataSync agent and help troubleshoot agent-related issues. You must enable this access through the agent's local console.

To provide AWS Support access to your agent

1. Log in to your agent's local console.
2. At the prompt, enter 5 to open the command prompt (for VMware VMs, use 6).
3. Enter h to open the AVAILABLE COMMANDS window.
4. In the AVAILABLE COMMANDS window, enter the following to connect to AWS Support:

open-support-channel

If you are using the agent with VPC endpoints, you must provide a VPC endpoint IP address for your support channel, as follows:

open-support-channel vpc-ip-address

Your firewall must allow outbound TCP port 22 to initiate a support channel to AWS. When you connect to AWS Support, DataSync assigns you a support number. Make a note of your support number.

Note: The channel number isn't a Transmission Control Protocol/User Datagram Protocol (TCP/UDP) port number. Instead, it makes an SSH (TCP 22) connection to servers and provides the support channel for the connection.

5. When the support channel is established, provide your support number to AWS Support so that they can provide troubleshooting assistance.
6. When the support session is finished, press Enter to end it.
7. Enter exit to log out of the DataSync
local console.

8. Follow the prompts to exit the local console.

Troubleshooting issues with DataSync locations

Use the following information to help you troubleshoot issues with AWS DataSync locations. Some of these issues can include:

• Permissions and mount errors with NFS locations
• File ownership issues
• Problems accessing SMB locations that use Kerberos authentication
• Permission and access issues with object storage, such as Amazon S3 and Microsoft Azure Blob locations

My task failed with an NFS permissions denied error

You can get a "permissions denied" error message if you configure your NFS file server with root_squash or all_squash and your files don't all have read access.

Action to take

To fix this issue, configure your NFS export with no_root_squash, or make sure that the permissions for all of the files that you want to transfer allow read access for all users. For DataSync to access directories, you must also enable all-execute access.

To make sure that the directory can be mounted, first connect to any computer that has the same network configuration as your agent. Then run the following command:

mount -t nfs -o nfsvers=<your-nfs-server-version> <your-nfs-server-name>:<nfs-export-path-you-specified> <new-test-folder-on-your-computer>

If the issue still isn't resolved, contact AWS Support Center.
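If you suspect file permissions rather than the export configuration, one way to audit a test mount is to walk it and flag entries that aren't world-readable (or, for directories, world-executable). The following is a minimal sketch, not a DataSync tool; the function name is hypothetical, and it assumes a POSIX host with the export already mounted:

```python
import os
import stat

def find_permission_problems(root):
    """Walk a mounted NFS export and return paths that an unprivileged
    user might not be able to read (files) or enter (directories)."""
    problems = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames:
            mode = os.stat(os.path.join(dirpath, name)).st_mode
            # Directories need read and execute access for all users.
            if not (mode & stat.S_IROTH and mode & stat.S_IXOTH):
                problems.append(os.path.join(dirpath, name))
        for name in filenames:
            mode = os.stat(os.path.join(dirpath, name)).st_mode
            # Files need read access for all users.
            if not mode & stat.S_IROTH:
                problems.append(os.path.join(dirpath, name))
    return problems
```

Running this against the test folder where you mounted the export lists the paths whose permissions you may need to fix before DataSync can read them.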
My task failed with an NFS mount error

You might see the following error when running a DataSync task that involves an NFS file server location:

Task failed to access location loc-1111222233334444a: x40016: mount.nfs: Connection timed out

Actions to take

Do the following until the error is resolved:

1. Make sure that the NFS file server and export that you specify in your DataSync location are valid. If they aren't, delete your location and task, then create a new location and task that use a valid NFS file server and export. For more information, see Using the DataSync console.
2. Check your firewall configuration between your agent and NFS file server. For more information, see Network requirements for on-premises, self-managed, other cloud, and edge storage.
3. Make sure that your agent can access the NFS file server and mount the export. For more information, see Providing DataSync access to NFS file servers.
4. If you still see the error, open a support channel with AWS Support. For more information, see I don't know what's going on with my agent. Can someone help me?

My task failed with an Amazon EFS mount error

You might see the following error when running a DataSync task that involves an Amazon EFS location:

Task failed to access location loc-1111222233334444a: x40016: Failed to connect to EFS mount target with IP: 10.10.1.0.

This can happen if the Amazon EFS file system's mount path that you configured with your location gets updated or deleted. DataSync isn't aware of these changes in the file system.

Action to take

Delete your location and task, and create a new Amazon EFS location with the new mount path.
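Connection timed out errors like these often come down to a network path problem between the agent and the file server or mount target. From a host on the same subnet as your agent, a quick reachability check is a plain TCP connection to the NFS port (typically 2049). This is a minimal sketch with a hypothetical helper name, not part of DataSync:

```python
import socket

def port_reachable(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder address): check the NFS port on the file server
# or EFS mount target that your DataSync location points at.
# port_reachable("10.10.1.0", 2049)
```

If this returns False from the agent's subnet, look at security groups, firewalls, and routing before recreating the location.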
File ownership isn't maintained with NFS transfer

After your transfer, you might notice that the files in your DataSync destination location have different user IDs (UIDs) or group IDs (GIDs) than the same files in your source location. For example, the files in your destination might have a UID of 65534, 99, or nobody. This can happen if a file system involved in your transfer uses NFS version 4 ID mapping, a feature that DataSync doesn't support.

Action to take

You have a couple of options to work around this issue:

• Create a new location for the file system that uses NFS version 3 instead of version 4.
• Disable NFS version 4 ID mapping on the file system.

Retry the transfer. Either option should resolve the issue.

My task can't access an SMB location that uses Kerberos

DataSync errors with SMB locations that use Kerberos authentication are typically related to mismatches between your location and Kerberos configurations. There also might be a network issue.

Failed to access location

The following error indicates that there might be configuration issues with your SMB location or Kerberos setup:

Task failed to access location

Verify the following:

• The SMB file server that you specify
for your location is a domain name. For Kerberos, you can't specify the file server's IP address.
• The Kerberos principal that you specify for your location matches the principal that you use to create the Kerberos key table (keytab) file. Principal names are case sensitive.
• The Kerberos principal's mapped user password hasn't changed since you created the keytab file. If the password changes (because of password rotation or some other reason), your task execution might fail with the following error:

Task failed to access location loc-1111222233334444a: x40015: kinit: Preauthentication failed while getting initial credentials

Can't contact KDC realm

The following error indicates a networking issue:

kinit: Cannot contact any KDC for realm 'MYDOMAIN.ORG' while getting initial credentials

Verify the following:

• The Kerberos configuration file (krb5.conf) that you provided to DataSync has the correct information about your Kerberos realm. For an example krb5.conf file, see Kerberos authentication prerequisites.
• The Kerberos Key Distribution Center (KDC) server port is open. The KDC port is typically TCP port 88.
• The DNS configuration on your network is correct.
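To confirm which KDC hosts and ports your agent will try to contact, you can pull the kdc entries for a realm out of your krb5.conf. The following is a minimal sketch with a hypothetical realm and host names; real krb5.conf files support more syntax (includes, DNS lookups) than this simple parser handles:

```python
import re

def kdc_endpoints(krb5_conf_text, realm):
    """Return (host, port) tuples for each kdc line under the given realm
    in a krb5.conf [realms] section. Port defaults to 88 when omitted."""
    endpoints = []
    in_realm = False
    for line in krb5_conf_text.splitlines():
        line = line.strip()
        if re.match(rf"{re.escape(realm)}\s*=\s*\{{", line):
            in_realm = True
        elif in_realm and line.startswith("}"):
            in_realm = False
        elif in_realm:
            m = re.match(r"kdc\s*=\s*([^\s:]+)(?::(\d+))?", line)
            if m:
                endpoints.append((m.group(1), int(m.group(2) or 88)))
    return endpoints

# Hypothetical example matching the realm in the error message above:
sample = """
[realms]
MYDOMAIN.ORG = {
    kdc = dc1.mydomain.org
    kdc = dc2.mydomain.org:88
    admin_server = dc1.mydomain.org
}
"""
endpoints = kdc_endpoints(sample, "MYDOMAIN.ORG")
```

Once you know the endpoints, verify that each host resolves in DNS from the agent's network and that its KDC port is open.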
My task failed with an input/output error

You can get an input/output error message if your storage system fails I/O requests from the DataSync agent. Common reasons for this include a server disk failure, changes to your firewall configuration, or a network router failure. If the error involves an NFS file server or Hadoop Distributed File System (HDFS) cluster, use the following steps to resolve the error.

Actions to take (NFS)

First, check your NFS file server's logs and metrics to determine if the problem started on the NFS server. If yes, resolve that issue. Next, check that your network configuration hasn't changed. To check if the NFS file server is configured correctly and that DataSync can access it, do the following:

1. Set up another NFS client on the same network subnet as the agent.
2. Mount your share on that client.
3. Validate that the client can read and write to the share successfully.

Actions to take (HDFS)

Do the following until you resolve the error:

1. Make sure that your HDFS cluster allows your DataSync agent to communicate with the cluster's NameNode and DataNode ports. In most clusters, you can find the port numbers that the cluster uses in the following configuration files:
   • To find the NameNode port, look in the core-site.xml file under the fs.default or fs.default.name property (depending on the Hadoop distribution).
   • To find the DataNode port, look in the hdfs-site.xml file under the dfs.datanode.address property.
2. In your hdfs-site.xml file, verify that your dfs.data.transfer.protection property has only one value. For example:

<property>
  <name>dfs.data.transfer.protection</name>
  <value>privacy</value>
</property>

Error: FsS3UnableToConnectToEndpoint

DataSync can't connect to your Amazon S3 location. This could mean that the location's S3 bucket isn't reachable or that the location isn't configured correctly.
Do the following until you resolve the issue:

• Check if DataSync can access your S3 bucket.
• Make sure that your location is configured correctly by using the DataSync console or DescribeLocationS3 operation.

Error: FsS3HeadBucketFailed

DataSync can't access the S3 bucket that you're transferring to or from. Check if DataSync has permission to access the bucket by using the Amazon S3 HeadBucket operation. If you need to adjust your permissions, see Providing DataSync access to S3 buckets.

Task fails with an Unable to list Azure Blobs on the volume root error

If your DataSync transfer task fails with an Unable to list Azure Blobs on the volume root error, there might be an issue with your shared access signature (SAS) token or your Azure storage account's network.

Actions to take

Try the following and run your task again until you fix the issue:

• Make sure that your SAS token has the right permissions to access your Microsoft Azure Blob Storage.
• If you're running your DataSync agent in Azure, configure your storage account to allow access from the virtual network where your agent resides.
• If you're running your agent on Amazon EC2, configure your Azure storage firewall to allow access from the agent's public
IP address.

For information on how to configure your Azure storage account's network, see the Azure Blob Storage documentation.

Error: FsAzureBlobVolRootListBlobsFailed

The shared access signature (SAS) token that DataSync uses to access your Microsoft Azure Blob Storage doesn't have the List permission. To resolve the issue, update your location with a token that has the List permission and try running your task again.

Error: SrcLocHitAccess

DataSync can't access your source location. Check that DataSync has permission to access the location and try running your task again.

Error: SyncTaskErrorLocationNotAdded

DataSync can't access your location. Check that DataSync has permission to access the location and try running your task again.

Task with S3 source location fails with HeadObject or GetObjectTagging error

If you're transferring objects with specific version IDs from an S3 bucket, you might see an error related to HeadObject or GetObjectTagging.
For example, here's an error related to GetObjectTagging:

[WARN] Failed to read metadata for file /picture1.png (versionId: 111111): S3 Get Object Tagging Failed
[ERROR] S3 Exception: op=GetObjectTagging photos/picture1.png, code=403, type=15, exception=AccessDenied, msg=Access Denied
req-hdrs: content-type=application/xml, x-amz-api-version=2006-03-01
rsp-hdrs: content-type=application/xml, date=Wed, 07 Feb 2024 20:16:14 GMT, server=AmazonS3, transfer-encoding=chunked, x-amz-id-2=IOWQ4fDEXAMPLEQM+ey7N9WgVhSnQ6JEXAMPLEZb7hSQDASK+Jd1vEXAMPLEa3Km, x-amz-request-id=79104EXAMPLEB723

If you see either of these errors, validate that the IAM role that DataSync uses to access your S3 source location has the following permissions:

• s3:GetObjectVersion
• s3:GetObjectVersionTagging

If you need to update your role with these permissions, see Creating an IAM role for DataSync to access your Amazon S3 location.

Troubleshooting issues with DataSync tasks

Use the following information to help you troubleshoot issues with AWS DataSync tasks and task executions. These issues might include task setup problems, stuck task executions, and data not transferring as expected.

Error: Invalid SyncOption value. Option: TransferMode,PreserveDeletedFiles, Value: ALL,REMOVE.

This error occurs when you're creating or editing your DataSync task, and you select the Transfer all data option and deselect the Keep deleted files option. When you transfer all data, DataSync doesn't scan your destination location and doesn't know what to delete.

Task execution fails with an EniNotFound error

This error occurs if you delete one of your task's network interfaces in your virtual private cloud (VPC). If your task is scheduled or queued, the task will fail if it's missing a network interface required to transfer your data.
Actions to take

You have the following options to work around this issue:

• Manually restart the task. When you do this, DataSync will create any missing network interfaces that it needs to run the task.
• If you need to clean up resources in your VPC, make sure that you don't delete network interfaces related to a DataSync task that you're still using. To see the network interfaces allocated to your task, do one of the following:
  • Use the DescribeTask operation. You can view the network interfaces in the SourceNetworkInterfaceArns and DestinationNetworkInterfaceArns response elements.
  • In the Amazon EC2 console, search for your task ID (such as task-f012345678abcdef0) to find its network interfaces.
• Consider not running your tasks automatically. This could include disabling task queueing or scheduling (through DataSync or custom automation).

Task execution fails with a Cannot allocate memory error

When your DataSync task fails with a Cannot allocate memory error, it can mean a few different things.

Action to take

Try the following until you no longer see the issue:

• If your transfer involves an agent, make sure that the agent meets the virtual machine (VM) or Amazon EC2 instance requirements.
• Split your transfer into multiple tasks by using filters. It's possible that you're trying to transfer more files or objects than one DataSync task can handle.
• If you still see the issue, contact AWS Support.

Task execution has a launching status but nothing seems to be happening

Your DataSync task can get stuck with a Launching status, typically because the agent is powered off or has lost network connectivity.

Action to take

Make sure that your agent's status
is ONLINE. If the agent is OFFLINE, make sure that it's powered on. If the agent is powered on and the task is still Launching, then there's likely a network connection problem between your agent and AWS. For information about how to test network connectivity, see Verifying your agent's connection to the DataSync service.

If you're still having this issue, see I don't know what's going on with my agent. Can someone help me?

Task execution seems stuck in the preparing status

The time that your DataSync transfer task has the Preparing status depends on the amount of data in your transfer source and destination, and the performance of those storage systems. When a task starts, DataSync performs a recursive directory listing to discover all files, objects, directories, and metadata in your source and destination. DataSync uses these listings to identify differences between storage systems and determine what to copy. This process can take a few minutes or even a few hours.

Action to take

You shouldn't have to do anything. Continue to wait for the task status to change to Transferring. If the status still doesn't change, contact AWS Support Center.

Task execution stops before the transfer finishes

If your DataSync task execution stops early, your task configuration might include an AWS Region that's disabled in your AWS account.
Actions to take

Do the following to run your task again:

1. Check the opt-in status of your task's Regions and make sure that they're enabled.
2. Start the task again.

Task execution fails when transferring from a Google Cloud Storage bucket

Because DataSync communicates with Google Cloud Storage by using the Amazon S3 API, there's a limitation that might cause your DataSync transfer to fail if you try to copy object tags. The following message related to the issue appears in your CloudWatch logs:

[WARN] Failed to read metadata for file /your-bucket/your-object: S3 Get Object Tagging Failed: proceeding without tagging

To prevent this, deselect the Copy object tags option when configuring your transfer task settings.

There are mismatches between a task execution's timestamps

When looking at the DataSync console or Amazon CloudWatch logs, you might notice that the start and end times for your DataSync task execution don't match the timestamps that you see in other monitoring tools. This is because the console and CloudWatch logs take into account the time that a task execution spends in the launching or queueing states, while some other tools don't. You might notice this discrepancy when comparing execution timestamps between the DataSync console or CloudWatch logs and the following places:

• Logs for the file system involved in your transfer
• The last modified date on an Amazon S3 object that DataSync wrote to
• Network traffic coming from the DataSync agent
• Amazon EventBridge events

Task execution fails with NoMem error

The set of data that you're trying to transfer may be too large for DataSync. If you see this error, contact AWS Support Center.
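For memory-related failures where splitting the transfer helps (see the Cannot allocate memory section earlier), one approach is to divide the source's top-level folders across several tasks, each with its own include filter. The sketch below uses hypothetical folder names and assumes DataSync's pipe-separated filter pattern syntax; adjust the patterns to match your own directory layout:

```python
def build_include_filters(folders, tasks):
    """Split top-level folders into one pipe-separated include-filter
    string per DataSync task (hypothetical helper, not an AWS API)."""
    # Distribute folders round-robin across the requested number of tasks.
    groups = [folders[i::tasks] for i in range(tasks)]
    # Each pattern matches a folder and everything under it.
    return ["|".join(f"/{name}/*" for name in group) for group in groups if group]

# Hypothetical example: spread 5 folders across 2 tasks.
filters = build_include_filters(["dir1", "dir2", "dir3", "dir4", "dir5"], 2)
```

Each resulting string can then be used as the include filter for a separate task so that no single task execution has to enumerate the entire dataset.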
Object fails to transfer to Azure Blob Storage with user metadata key error

When transferring from an S3 bucket to Azure Blob Storage, you might see the following error:

[ERROR] Failed to transfer file /user-metadata/file1: Azure Blob user metadata key must be a CSharp identifier

This means that /user-metadata/file1 includes user metadata that doesn't use a valid C# identifier. For more information, see the Microsoft documentation.

There's an /.aws-datasync folder in the destination location

DataSync creates a folder called /.aws-datasync in your destination location to help facilitate your data transfer. While DataSync typically deletes this folder following your transfer, there might be situations where this doesn't happen.

Action to take

You can delete this folder anytime, as long as you don't have a running task execution copying to that location.

Can't transfer symbolic links between locations using SMB

When your task execution finishes, you see the following error:

Transfer and verification completed. Selected files transferred except for files skipped due to errors. If no skipped files are listed in Cloud Watch Logs, please contact AWS Support for further assistance.

When transferring between
SMB storage systems (such as an SMB file server and an Amazon FSx for Windows File Server file system), you might see the following warnings and errors in your CloudWatch logs:

[WARN] Failed to read metadata for file /appraiser/symlink: No data available
[ERROR] Failed to read metadata for directory /appraiser/symlink: No data available

Action to take

DataSync doesn't support transferring symbolic links (or hard links) between these location types. For more information, see Links and directories copied by AWS DataSync.

Task report errors

You might run into one of the following errors while trying to monitor your DataSync transfer with a task report.

• Error message: File path exceeds the maximum length of 4,096 characters. Cannot write to Task Report
  Workaround: None (DataSync can't transfer a file with a path that exceeds 4,096 bytes). For more information, see Storage system, file, and object limits.
• Error message: Failed to upload Task Report(s) to S3 due to an invalid bucket or IAM role
  Workaround: Check that the DataSync IAM role has the right permissions to upload a task report to your S3 bucket.
• Error message: Execution error occurred prior to generating any Task Reports
  Workaround: Check your CloudWatch logs to identify why your task execution failed.

Troubleshooting data verification issues

By default, AWS DataSync verifies the integrity of your data at the end of a transfer. Use the following information to help you diagnose common verification errors and warnings, such as files being modified or deleted before DataSync finishes verifying your data.

With verification issues, it often helps to review your CloudWatch logs (or task reports) in addition to the task execution error that you're seeing. DataSync provides JSON-structured logs for Enhanced mode tasks, while Basic mode tasks have unstructured logs.

There are mismatches between a file's contents

When your task execution finishes, you see the following error:

Transfer and verification completed. Verification detected mismatches. Files with mismatches are listed in Cloud Watch Logs

In your CloudWatch logs, you might notice failed verifications for contents that differ between the source and destination locations. This can happen if files are modified during your transfer. For example, the following logs show that file1.txt has different mtime, srcHash, and dstHash values:

Basic mode log example

[NOTICE] Verification failed <> /directory1/directory2/file1.txt
[NOTICE] /directory1/directory2/file1.txt srcMeta: type=R mode=0755 uid=65534 gid=65534 size=534528 atime=1633100003/684349800 mtime=1602647222/222919600 extAttrsHash=0
[NOTICE] srcHash: 0c506c26bd1e43bd3ac346734f1a9c16c4ad100d1b43c2903772ca894fd24e44
[NOTICE] /directory1/directory2/file1.txt dstMeta: type=R mode=0755 uid=65534 gid=65534 size=511001 atime=1633100003/684349800 mtime=1633106855/859227500 extAttrsHash=0
[NOTICE] dstHash: dbd798929f11a7c0201e97f7a61191a83b4e010a449dfc79fbb8233801067c46

In DataSync, mtime represents the last time a file was written to before preparation.
When verifying transfers, DataSync compares mtime values between the source and destination locations. A verification failure like this occurs if the mtime for a file isn't the same for both locations. The differences between srcHash and dstHash indicate that the file's contents don't match at both locations.

Actions to take

Do the following:

1. Use an epoch time converter to determine whether the source or destination file or object was modified more recently. This can help identify which version is current.
2. To avoid this error again, schedule your task to run during a maintenance window when there's no activity at your source and destination.

There's a mismatch between a file's SMB metadata

When your task execution finishes, you see the following error:

Transfer and verification completed. Verification detected mismatches. Files with mismatches are listed in Cloud Watch Logs

When transferring between storage systems that support the Server Message Block (SMB) protocol, you might see this error when a file's extended SMB attributes don't match between the source and destination.
For example, the following logs show that file1.txt has a different extAttrsHash value between locations, indicating that the file contents are identical but the extended attributes weren't set at the destination:

Basic mode log example

[NOTICE] Verification failed <> /directory1/directory2/file1.txt
[NOTICE] /directory1/directory2/file1.txt srcMeta: type=R mode=0755 uid=65534 gid=65534 size=1469752 atime=1631354985/174924200 mtime=1536995541/986211400 extAttrsHash=2272191894
[NOTICE] srcHash: 38571d42b646ac8f4034b7518636b37dd0899c6fc03cdaa8369be6e81a1a2bb5
[NOTICE] /directory1/directory2/file1.txt dstMeta: type=R mode=0755 uid=65534 gid=65534 size=1469752 atime=1631354985/174924200 mtime=1536995541/986211400 extAttrsHash=3051150340
[NOTICE] dstHash: 38571d42b646ac8f4034b7518636b37dd0899c6fc03cdaa8369be6e81a1a2bb5

You might also see a related error message about extended attributes:

[ERROR] Deferred error: WriteFileExtAttr2 failed
to setextattrlist(filename="/directory1/directory2/file1.txt"): Input/output error

Action to take

This error typically occurs when there are insufficient permissions to copy access control lists (ACLs) to the destination. To resolve this issue, review the following configuration guides based on your destination type:

• Required permissions with FSx for Windows File Server file systems
• Required permissions with FSx for ONTAP file systems that use SMB

Files to transfer are no longer at source location

When your task execution finishes, you see the following error:

Transfer and verification completed. Selected files transferred except for files skipped due to errors. If no skipped files are listed in Cloud Watch Logs, please contact AWS Support for further assistance.

In your logs, you might see errors indicating that files aren't at the source location.
This can happen if files (such as file1.dll and file2.dll) are deleted after preparation but before DataSync transfers them:

Basic mode log example

[ERROR] Failed to open source file /file1.dll: No such file or directory
[ERROR] Failed to open source file /file2.dll: No such file or directory

Action to take

To avoid these situations, schedule your task to run when there's no activity at the source location. For example, you can run your task during a maintenance window when users and applications aren't actively working with that location.

In some cases, you might not see logs associated with this error. If that happens, contact AWS Support Center.

DataSync can't verify destination data

When your task execution finishes, you see the following error:

Transfer and verification completed. Verification detected mismatches. Files with mismatches are listed in Cloud Watch Logs

In your logs, you might notice that DataSync can't verify certain folders or files in the destination location. These errors can look like this:

Basic mode log example

[ERROR] Failed to read metadata for destination file /directory1/directory2/file1.txt: No such file or directory

For files, you might see verification failures like this:

Basic mode log example

[NOTICE] Verification failed <> /directory1/directory2/file1.txt
[NOTICE] /directory1/directory2/file1.txt srcMeta: type=R mode=0755 uid=65534 gid=65534 size=61533 atime=1633099987/747713800 mtime=1536995631/894267700 extAttrsHash=232104771
[NOTICE] srcHash: 1426fe40f669a7d36cca1b5329983df31a9aeff8eb9fe3ac885f26de2f8fff6b
[NOTICE] /directory1/directory2/file1.txt dstMeta: type=R mode=0755 uid=65534 gid=65534 size=0 atime=0/0 mtime=0/0 extAttrsHash=0
[NOTICE] dstHash: 0000000000000000000000000000000000000000000000000000000000000000

Action to take

These logs indicate that destination data was deleted after the transfer but before verification.
(Logs look similar when data is uploaded to a source location during the same time frame.) To avoid these situations, schedule your task to run when there's no activity at the destination location. For example, you can run your task during a maintenance window when users and applications aren't actively working with that location. DataSync can't read object metadata When your task execution finishes, you see the following error: Transfer and verification completed. Selected files transferred except for files skipped due to errors. If no skipped files are listed in Cloud Watch Logs, please contact AWS Support for further assistance. In your logs, you might notice that DataSync can't read file1.png because of a failed Amazon S3 HeadObject request. DataSync makes HeadObject requests with S3 locations during task preparation and verification. DataSync can't read object metadata 421 AWS DataSync Basic mode log example User Guide [WARN] Failed to read metadata for file /file1.png: S3 Head Object Failed Actions to take To fix this issue, verify whether DataSync has the right level of permissions to work with your S3 bucket: • Make sure that the IAM role that DataSync uses to access your Amazon S3 locations allows the s3:GetObject permission. For more information, see Required permissions. • If your S3 bucket uses server-side encryption, make sure that DataSync is allowed to access the objects in that bucket. For more information, see Accessing S3 buckets using server-side encryption. There's a mismatch in an object's system-defined metadata When your Enhanced mode task execution between S3 buckets finishes, you see the following error: Verification failed due to a difference in metadata You might notice in your logs a mismatch in an object’s Amazon S3 system-defined metadata. In this particular example, the source object doesn't have Content-Type metadata but the destination object does. This happened because
the destination S3 bucket automatically applied "ContentType": "application/octet-stream" metadata to the object when DataSync transferred it there. Enhanced mode log example

{
    "Action": "VERIFY",
    "Source": {
        "LocationId": "loc-0b3017fc4ba4a2d8d",
        "RelativePath": "encoding/content-null",
        "Metadata": {
            "Type": "Object",
            "ContentSize": 24,
            "LastModified": "2024-12-23T15:48:15Z",
            "S3": {
                "SystemMetadata": {
                    "ETag": "\"68b9c323bb846841ee491481f576ed4a\""
                },
                "UserMetadata": {},
                "Tags": {}
            }
        }
    },
    "Destination": {
        "LocationId": "loc-abcdef01234567890",
        "RelativePath": "encoding/content-null",
        "Metadata": {
            "Type": "Object",
            "ContentSize": 24,
            "LastModified": "2024-12-23T16:00:03Z",
            "S3": {
                "SystemMetadata": {
                    "ContentType": "application/octet-stream",
                    "ETag": "\"68b9c323bb846841ee491481f576ed4a\""
                },
                "UserMetadata": {
                    "file-mtime": "1734968895000"
                },
                "Tags": {}
            }
        }
    },
    "TransferType": "CONTENT_AND_METADATA",
    "ErrorCode": "MetadataDiffers",
    "ErrorDetail": "Verification failed due to a difference in metadata"
}

Action to take To avoid this error, update your source location objects to include the Content-Type metadata property. 
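The fix above comes down to making sure your source objects carry an explicit Content-Type. As a rough sketch (the helper below is hypothetical, not part of DataSync), you can derive a sensible value from the object key with Python's standard library, falling back to the generic binary type that S3 applies by default:

```python
import mimetypes

def content_type_for(key):
    """Guess a Content-Type for an S3 object key; fall back to S3's default."""
    guessed, _ = mimetypes.guess_type(key)
    return guessed or "application/octet-stream"

print(content_type_for("photos/picture1.png"))    # image/png
print(content_type_for("encoding/content-null"))  # application/octet-stream
```

You could then apply the value when re-uploading or copying a source object in place (for example, with boto3's copy_object and MetadataDirective="REPLACE"), so that source and destination end up with matching system-defined metadata.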
Understanding data verification duration DataSync verification includes an SHA256 checksum on file content and an exact comparison of file metadata between locations. How long verification takes depends on several factors, Understanding data verification duration 423 AWS DataSync User Guide including the number of files or objects involved, the size of the data in the storage systems, and the performance of these systems. Action to take Given the factors that can affect verification time, you shouldn't have to do anything. However, if your task execution seems stuck with a verifying status, contact AWS Support Center. Troubleshooting higher than expected S3 storage costs with DataSync If your Amazon S3 storage costs are higher than you thought they would be following an AWS DataSync transfer, it might be due to one or more of the following reasons: • When transferring to or from S3 buckets, you incur costs related to S3 API requests made by DataSync. • DataSync uses the Amazon S3 multipart upload feature to upload objects to S3 buckets. This approach can result in unexpected storage charges for uploads that don't complete successfully. • Object versioning might be enabled on your S3 bucket. Object versioning results in Amazon S3 storing multiple copies of objects that have the same name. Actions to take In these cases, you can take the following steps: • Make sure you understand how DataSync uses S3 requests and how they might be affecting your storage costs. For more information, see Evaluating S3 request costs when using DataSync. • If the issue's related to multipart uploads, configure a policy for multipart uploads for your S3 bucket to clean up incomplete multipart uploads to reduce storage cost. For more information, see the AWS blog post S3 Lifecycle Management Update - Support for Multipart Uploads and Delete Markers. • If the issue's related to object versioning, disable object versioning on your S3 bucket. 
• If you need more help, contact AWS Support Center. Troubleshooting S3 storage costs with DataSync 424 AWS DataSync User Guide AWS DataSync tutorials These tutorials walk you through some real-world scenarios with AWS DataSync. Topics • Tutorial: Transferring data from on-premises storage to Amazon S3 across AWS accounts • Tutorial: Transferring data between Amazon S3 buckets across AWS accounts Tutorial: Transferring data from on-premises storage to Amazon S3 across AWS accounts When using AWS DataSync with on-premises storage, you typically transfer data to an AWS storage service that belongs to the same AWS account as your DataSync agent. There are situations, however, where you might need to transfer data to an Amazon S3 bucket that's associated with a different account. Important Transferring data across AWS accounts by using the methods in this tutorial works only when Amazon S3 is one of the DataSync transfer locations. Overview It's not uncommon to need to transfer data between different AWS accounts, especially if you have separate teams managing your organization's resources. Here's what a cross-account transfer using DataSync can look like: • Source account: The AWS account for managing network resources. This is the account that you activate your DataSync agent with. • Destination account: The AWS account for managing the S3 bucket that you need to transfer data to. The following diagram illustrates this kind of scenario. Transferring from on-premises to S3 across accounts 425 AWS DataSync User Guide Prerequisite: Required source account permissions For your source AWS account, there are two sets of permissions to consider with
this kind of cross-account transfer: • User permissions that allow a user to work with DataSync (this might be you or your storage administrator). These permissions let you create DataSync locations and tasks. • DataSync service permissions that allow DataSync to transfer data to your destination account bucket. User permissions In your source account, add at least the following permissions to an IAM role for creating your DataSync locations and task. For information on how to add permissions to a role, see creating or modifying an IAM role. 
{ "Version": "2012-10-17", "Statement": [ { "Sid": "SourceUserRolePermissions", "Effect": "Allow", "Action": [ "datasync:CreateLocationS3", "datasync:CreateTask", "datasync:DescribeLocation*", "datasync:DescribeTaskExecution", "datasync:ListLocations", "datasync:ListTaskExecutions", "datasync:DescribeTask", "datasync:CancelTaskExecution", Prerequisite: Required source account permissions 426 AWS DataSync User Guide "datasync:ListTasks", "datasync:StartTaskExecution", "iam:CreateRole", "iam:CreatePolicy", "iam:AttachRolePolicy", "iam:ListRoles", "s3:GetBucketLocation", "s3:ListAllMyBuckets" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "iam:PassRole" ], "Resource": "*", "Condition": { "StringEquals": { "iam:PassedToService": [ "datasync.amazonaws.com" ] } } } ] } Tip To set up your user permissions, consider using AWSDataSyncFullAccess. This is an AWS managed policy that provides a user full access to DataSync and minimal access to its dependencies. DataSync service permissions The DataSync service needs the following permissions in your source account to transfer data to your destination account bucket. Prerequisite: Required source account permissions 427 AWS DataSync User Guide Later in this tutorial, you add these permissions when creating an IAM role for DataSync. You also specify this role (source-datasync-role) in your destination bucket policy and when creating your DataSync destination location. 
{ "Version": "2012-10-17", "Statement": [ { "Action": [ "s3:GetBucketLocation", "s3:ListBucket", "s3:ListBucketMultipartUploads" ], "Effect": "Allow", "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket" }, { "Action": [ "s3:AbortMultipartUpload", "s3:DeleteObject", "s3:GetObject", "s3:ListMultipartUploadParts", "s3:PutObject", "s3:GetObjectTagging", "s3:PutObjectTagging" ], "Effect": "Allow", "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/*" } ] } Prerequisite: Required destination account permissions In your destination account, your user permissions must allow you to update your destination bucket's policy and disable its access control lists (ACLs). For more information on these specific permissions, see the Amazon S3 User Guide. Prerequisite: Required destination account permissions 428 AWS DataSync User Guide Step 1: In your source account, create a DataSync agent To get started, you must create a DataSync agent that can read from your on-premises storage system and communicate with the DataSync service. This process includes deploying an agent in your on-premises storage environment and activating the agent in your source AWS account. Note The steps in this tutorial apply to any type of agent and service endpoint that you use. To create a DataSync agent 1. Deploy a DataSync agent in your on-premises storage environment. 2. Choose a service endpoint that the agent will use to communicate with AWS. 3. Activate your agent in your source account. Step 2: In your source account, create a DataSync IAM role for destination bucket access In your source account, you need an IAM role that gives DataSync the permissions to transfer data to your destination account bucket. Since you're transferring across accounts, you must create the role manually. (DataSync can create this role for you in the console when transferring in the same account.) Create the DataSync IAM role Create an IAM role with DataSync as the trusted entity. To create the IAM role 1. 
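Before moving on to the setup steps, it can help to sanity-check that a service policy like the one shown above keeps bucket-level actions on the bucket ARN and object-level actions on the bucket/* ARN. The helper below is a hypothetical check for illustration, not an AWS tool:

```python
import json

# The DataSync service policy from this tutorial (bucket name is the
# tutorial's placeholder).
POLICY = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Action": ["s3:GetBucketLocation", "s3:ListBucket",
                "s3:ListBucketMultipartUploads"],
     "Effect": "Allow",
     "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket"},
    {"Action": ["s3:AbortMultipartUpload", "s3:DeleteObject", "s3:GetObject",
                "s3:ListMultipartUploadParts", "s3:PutObject",
                "s3:GetObjectTagging", "s3:PutObjectTagging"],
     "Effect": "Allow",
     "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"}
  ]
}
""")

def actions(policy, object_level):
    """Collect allowed actions on object-level (bucket/*) or bucket-level ARNs."""
    return {a for s in policy["Statement"]
            if s["Resource"].endswith("/*") == object_level
            for a in s["Action"]}

# Object writes must be granted on the /* resource, not the bucket itself.
assert "s3:PutObject" in actions(POLICY, object_level=True)
assert "s3:ListBucket" in actions(POLICY, object_level=False)
```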
Log in to the AWS Management Console with your source account. 2. Open the IAM console at https://console.aws.amazon.com/iam/. 3. In the left navigation pane, under Access management, choose Roles, and then choose Create role. 4. On the Select trusted entity page, for Trusted entity type, choose AWS service. Step 1: In your source account, create a DataSync agent 429 AWS DataSync User Guide 5. For Use case, choose DataSync in the dropdown list and select DataSync. Choose Next. 6. On the Add permissions page, choose Next. 7. Give your role a name and choose Create role. For more information, see Creating a role for an AWS service (console) in the IAM User Guide. Add permissions to the DataSync IAM role The IAM role that you just created needs the permissions that allow DataSync to transfer data to the S3 bucket in your destination account. To add permissions to your IAM role 1. On the Roles page of the IAM console, search for the role that you just created and choose its name. 2.
On the role's details page, choose the Permissions tab. Choose Add permissions then Create inline policy. 3. Choose the JSON tab and do the following: a. Paste the following JSON into the policy editor: Note The value for aws:ResourceAccount should be the account ID that owns the Amazon S3 bucket specified in the policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:GetBucketLocation",
                "s3:ListBucket",
                "s3:ListBucketMultipartUploads"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceAccount": "123456789012"
                }
            }
        },
        {
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:GetObjectTagging",
                "s3:GetObjectVersion",
                "s3:GetObjectVersionTagging",
                "s3:ListMultipartUploadParts",
                "s3:PutObject",
                "s3:PutObjectTagging"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceAccount": "123456789012"
                }
            }
        }
    ]
}

b. Replace each instance of amzn-s3-demo-destination-bucket with the name of the S3 bucket in your destination account. 4. Choose Next. Give your policy a name and choose Create policy. 
Step 3: In your destination account, update your S3 bucket policy In your destination account, modify the destination S3 bucket policy to include the DataSync IAM role that you created in your source account. Before you begin: Make sure that you have the required permissions for your destination account. To update the destination S3 bucket policy 1. In the AWS Management Console, switch to your destination account. Step 3: In your destination account, update your S3 bucket policy 431 AWS DataSync User Guide 2. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 3. 4. In the left navigation pane, choose Buckets. In the Buckets list, choose the S3 bucket that you're transferring data to. 5. On the bucket's detail page, choose the Permissions tab. 6. Under Bucket policy, choose Edit and do the following to modify your S3 bucket policy: a. Update what's in the editor to include the following policy statements: { "Version": "2008-10-17", "Statement": [ { "Sid": "DataSyncCreateS3LocationAndTaskAccess", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::source-account:role/source-datasync-role" }, "Action": [ "s3:GetBucketLocation", "s3:ListBucket", "s3:ListBucketMultipartUploads", "s3:AbortMultipartUpload", "s3:DeleteObject", "s3:GetObject", "s3:ListMultipartUploadParts", "s3:PutObject", "s3:GetObjectTagging", "s3:PutObjectTagging" ], "Resource": [ "arn:aws:s3:::amzn-s3-demo-destination-bucket", "arn:aws:s3:::amzn-s3-demo-destination-bucket/*" ] } ] } b. Replace each instance of source-account with the AWS account ID for your source account. c. Replace source-datasync-role with the IAM role that you created for DataSync in your source account. Step 3: In your destination account, update your S3 bucket policy 432 AWS DataSync User Guide d. Replace each instance of amzn-s3-demo-destination-bucket with the name of the S3 bucket in your destination account. 7. Choose Save changes. 
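If you manage the destination bucket policy in code rather than in the console editor, the same statement can be appended programmatically. A minimal sketch, using the tutorial's placeholder account ID, role name, and bucket name:

```python
import json

def add_datasync_statement(policy, source_account, role_name, bucket):
    """Append the cross-account DataSync statement to a bucket policy dict."""
    policy.setdefault("Statement", []).append({
        "Sid": "DataSyncCreateS3LocationAndTaskAccess",
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{source_account}:role/{role_name}"},
        "Action": [
            "s3:GetBucketLocation", "s3:ListBucket",
            "s3:ListBucketMultipartUploads", "s3:AbortMultipartUpload",
            "s3:DeleteObject", "s3:GetObject", "s3:ListMultipartUploadParts",
            "s3:PutObject", "s3:GetObjectTagging", "s3:PutObjectTagging",
        ],
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
    })
    return policy

policy = add_datasync_statement(
    {"Version": "2008-10-17"},
    source_account="123456789012",
    role_name="source-datasync-role",
    bucket="amzn-s3-demo-destination-bucket",
)
print(json.dumps(policy, indent=2))
```

You could then apply the result with the PutBucketPolicy API while authenticated to the destination account (for example, boto3's s3.put_bucket_policy(Bucket=..., Policy=json.dumps(policy))).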
Step 4: In your destination account, disable ACLs for your S3 bucket It's important that all the data that you copy to the S3 bucket belongs to your destination account. To ensure that this account owns the data, disable the bucket's access control lists (ACLs). For more information, see Controlling ownership of objects and disabling ACLs for your bucket in the Amazon S3 User Guide. To disable ACLs for your destination bucket 1. While still logged in to the S3 console with your destination account, choose the S3 bucket that you're transferring data to. 2. On the bucket's detail page, choose the Permissions tab. 3. Under Object Ownership, choose Edit. 4. If it isn't already selected, choose the ACLs disabled (recommended) option. 5. Choose Save changes. Step 5: In your source account, create a DataSync source location for your on-premises storage In your source account, create a DataSync source location for the on-premises storage system that you're transferring data from. This location uses the agent that you activated in your source account. Step 6: In your source account, create a DataSync destination location for your S3 bucket While still in your source account, create a location for the S3 bucket that you're transferring data to. Before you begin: Make sure that you have the required permissions for your source account. Step 4: In your destination account, disable ACLs for your S3 bucket 433 AWS DataSync User Guide Since you can't create cross-account locations by using the DataSync console interface, these instructions require that you run a create-location-s3 command to create your destination location. We recommend running the command by using AWS CloudShell, a browser-based, pre- authenticated shell that you launch directly from the console. CloudShell allows you to run
AWS CLI commands like create-location-s3 without downloading or installing command line tools. Note To complete the following steps by using a command line tool other than CloudShell, make sure that your AWS CLI profile uses the same IAM role that includes the required user permissions to use DataSync in your source account. To create a DataSync destination location by using CloudShell 1. While still in your source account, do one of the following to launch CloudShell from the console: • Choose the CloudShell icon on the console navigation bar. It's located to the right of the search box. • Use the search box on the console navigation bar to search for CloudShell and then choose the CloudShell option. 2. Copy the following command:

aws datasync create-location-s3 \
    --s3-bucket-arn arn:aws:s3:::amzn-s3-demo-destination-bucket \
    --s3-config '{
        "BucketAccessRoleArn":"arn:aws:iam::source-user-account:role/source-datasync-role"
    }'

3. Replace amzn-s3-demo-destination-bucket with the name of the S3 bucket in your destination account. 4. Replace source-user-account with the AWS account ID for your source account. 5. Replace source-datasync-role with the DataSync IAM role that you created in your source account. 6. Run the command in CloudShell. 
Step 6: In your source account, create a DataSync destination location for your S3 bucket 434 AWS DataSync User Guide If the command returns a DataSync location ARN similar to this, you successfully created the location: { "LocationArn": "arn:aws:datasync:us-east-2:123456789012:location/loc- abcdef01234567890" } 7. In the left navigation pane, expand Data transfer, then choose Locations. From your source account, you can see the S3 location that you just created for your destination account bucket. Step 7: In your source account, create and start your DataSync task Before starting a DataSync task to transfer your data, let's recap what you've done so far: • In your source account, you created your DataSync agent. The agent can read from your on- premises storage system and communicate with the DataSync service. • In your source account, you created an IAM role that allows DataSync to transfer data to the S3 bucket in your destination account. • In your destination account, you configured your S3 bucket so that DataSync can transfer data to it. • In your source account, you created the DataSync source and destination locations for your transfer. To create and start the DataSync task 1. While still using the DataSync console in your source account, expand Data transfer in the left navigation pane, then choose Tasks and Create task. 2. On the Configure source location page, choose Choose an existing location. Choose the source location that you're copying data from (your on-premises storage) then Next. 3. On the Configure destination location page, choose Choose an existing location. Choose the destination location that you're copying data to (the S3 bucket in your destination account) then Next. Step 7: In your source account, create and start your DataSync task 435 AWS DataSync User Guide 4. On the Configure settings page, give the task a name. As needed, configure additional settings, such as specifying an Amazon CloudWatch log group. Choose Next. 5. 
On the Review page, review your settings and choose Create task. 6. On the task's details page, choose Start, and then choose one of the following: • To run the task without modification, choose Start with defaults. • To modify the task before running it, choose Start with overriding options. When your task finishes, check the S3 bucket in your destination account. You should see the data that moved from your source location. Related resources For more information about what you did in this tutorial, see the following topics: • Creating a role for an AWS service (console) • Modifying a role trust policy (console) • Adding a bucket policy by using the Amazon S3 console • Create an S3 location with the AWS CLI Tutorial: Transferring data between Amazon S3 buckets across AWS accounts With AWS DataSync, you can transfer data between Amazon S3 buckets that belong to different AWS accounts. Important Transferring data across AWS accounts using the methods in this tutorial works only with Amazon S3. Additionally, this tutorial can help you transfer data between S3 buckets that are also in different AWS Regions. Related resources 436 AWS DataSync Overview User Guide It's not uncommon to transfer data between
AWS accounts, especially if you have separate teams managing your organization's resources. Here's what a cross-account transfer using DataSync can look like: • Source account: The AWS account for managing the S3 bucket that you need to transfer data from. • Destination account: The AWS account for managing the S3 bucket that you need to transfer data to. Transfers across accounts The following diagram illustrates a scenario where you transfer data from an S3 bucket to another S3 bucket that's in a different AWS account. Transfers across accounts and Regions The following diagram illustrates a scenario where you transfer data from an S3 bucket to another S3 bucket that's in a different AWS account and Region. Prerequisite: Required source account permissions For your source AWS account, there are two sets of permissions to consider with this kind of cross-account transfer: • User permissions that allow a user to work with DataSync (this might be you or your storage administrator). These permissions let you create DataSync locations and tasks. • DataSync service permissions that allow DataSync to transfer data to your destination account bucket. User permissions for your source account In your source account, add at least the following permissions to an IAM role for creating your DataSync locations and task. 
For information on how to add permissions to a role, see creating or modifying an IAM role. { "Version": "2012-10-17", "Statement": [ { "Sid": "SourceUserRolePermissions", "Effect": "Allow", Prerequisite: Required source account permissions 438 User Guide AWS DataSync "Action": [ "datasync:CreateLocationS3", "datasync:CreateTask", "datasync:DescribeLocation*", "datasync:DescribeTaskExecution", "datasync:ListLocations", "datasync:ListTaskExecutions", "datasync:DescribeTask", "datasync:CancelTaskExecution", "datasync:ListTasks", "datasync:StartTaskExecution", "iam:CreateRole", "iam:CreatePolicy", "iam:AttachRolePolicy", "iam:ListRoles", "s3:GetBucketLocation", "s3:ListAllMyBuckets" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "iam:PassRole" ], "Resource": "*", "Condition": { "StringEquals": { "iam:PassedToService": [ "datasync.amazonaws.com" ] } } } ] } Tip To set up your user permissions, consider using AWSDataSyncFullAccess. This is an AWS managed policy that provides a user full access to DataSync and minimal access to its dependencies. Prerequisite: Required source account permissions 439 AWS DataSync User Guide DataSync service permissions for your source account The DataSync service needs the following permissions in your source account to transfer data to your destination account bucket. Later in this tutorial, you add these permissions when creating an IAM role for DataSync. You also specify this role (source-datasync-role) in your destination bucket policy and when creating your DataSync destination location. 
{ "Version": "2012-10-17", "Statement": [ { "Action": [ "s3:GetBucketLocation", "s3:ListBucket", "s3:ListBucketMultipartUploads" ], "Effect": "Allow", "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket" }, { "Action": [ "s3:AbortMultipartUpload", "s3:DeleteObject", "s3:GetObject", "s3:ListMultipartUploadParts", "s3:PutObject", "s3:GetObjectTagging", "s3:PutObjectTagging" ], "Effect": "Allow", "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/*" } ] } Prerequisite: Required destination account permissions In your destination account, your user permissions must allow you to update your destination bucket's policy and disable its access control lists (ACLs). For more information on these specific permissions, see the Amazon S3 User Guide. Prerequisite: Required destination account permissions 440 AWS DataSync User Guide Step 1: In your source account, create a DataSync IAM role for destination bucket access In your source AWS account, you need an IAM role that gives DataSync the permissions to transfer data to your destination account bucket. Since you're transferring across accounts, you must create the role manually. (DataSync can create this role for you in the console when transferring in the same account.) Create the DataSync IAM role Create an IAM role with DataSync as the trusted entity. 1. Log in to the AWS Management Console with your source account. 2. Open the IAM console at https://console.aws.amazon.com/iam/. 3. In the left navigation pane, under Access management, choose Roles, and then choose Create role. 4. On the Select trusted entity page, for Trusted entity type, choose AWS service. 5. For Use case, choose DataSync in the dropdown list and select DataSync. Choose Next. 6. On the Add permissions page, choose Next. 7. Give your role a name and choose Create role. For more information, see Creating a role for an AWS service (console) in the IAM User Guide. 
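Choosing DataSync as the trusted entity in the console gives the role a trust policy along these lines. This is a sketch of the standard service trust relationship for illustration; the console sets it up for you:

```python
import json

# The trust policy that lets the DataSync service assume the role.
TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "datasync.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}
print(json.dumps(TRUST_POLICY, indent=2))
```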
Add permissions to the DataSync IAM role The IAM role that you just created needs the permissions that allow DataSync to transfer data to the S3 bucket in your destination account. 1. On the Roles page of the
IAM console, search for the role that you just created and choose its name. 2. On the role's details page, choose the Permissions tab. Choose Add permissions then Create inline policy. 3. Choose the JSON tab and do the following: a. Paste the following JSON into the policy editor: Note The value for aws:ResourceAccount should be the account ID that owns the Amazon S3 bucket specified in the policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:GetBucketLocation",
                "s3:ListBucket",
                "s3:ListBucketMultipartUploads"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceAccount": "123456789012"
                }
            }
        },
        {
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:GetObjectTagging",
                "s3:GetObjectVersion",
                "s3:GetObjectVersionTagging",
                "s3:ListMultipartUploadParts",
                "s3:PutObject",
                "s3:PutObjectTagging"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceAccount": "123456789012"
                }
            }
        }
    ]
}

b. Replace each instance of amzn-s3-demo-destination-bucket with the name of the S3 bucket in your destination account. 4. 
Choose Next. Give your policy a name and choose Create policy. Step 2: In your destination account, update your S3 bucket policy In your destination account, modify the destination S3 bucket policy to include the DataSync IAM role that you created in your source account. Before you begin: Make sure that you have the required permissions for your destination account. Update your destination S3 bucket policy 1. In the AWS Management Console, switch to your destination account. 2. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. 3. 4. In the left navigation pane, choose Buckets. In the Buckets list, choose the S3 bucket that you're transferring data to. 5. On the bucket's detail page, choose the Permissions tab. 6. Under Bucket policy, choose Edit and do the following to modify your S3 bucket policy: a. Update what's in the editor to include the following policy statements: { "Version": "2008-10-17", "Statement": [ { "Sid": "DataSyncCreateS3LocationAndTaskAccess", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::source-account:role/source-datasync-role" }, "Action": [ "s3:GetBucketLocation", "s3:ListBucket", "s3:ListBucketMultipartUploads", Step 2: In your destination account, update your S3 bucket policy 443 AWS DataSync User Guide "s3:AbortMultipartUpload", "s3:DeleteObject", "s3:GetObject", "s3:ListMultipartUploadParts", "s3:PutObject", "s3:GetObjectTagging", "s3:PutObjectTagging" ], "Resource": [ "arn:aws:s3:::amzn-s3-demo-destination-bucket", "arn:aws:s3:::amzn-s3-demo-destination-bucket/*" ] } ] } b. Replace each instance of source-account with the AWS account ID for your source account. c. Replace source-datasync-role with the IAM role that you created for DataSync in your source account. d. Replace each instance of amzn-s3-demo-destination-bucket with the name of the S3 bucket in your destination account. 7. Choose Save changes. 
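The placeholder substitution in the bucket policy can also be scripted instead of done by hand in the editor. A small sketch (a hypothetical helper) that fills in the account ID, role name, and bucket name, and confirms the result is still valid JSON:

```python
import json

# The tutorial's bucket policy with its placeholders left in place.
TEMPLATE = """
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "DataSyncCreateS3LocationAndTaskAccess",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::source-account:role/source-datasync-role"},
      "Action": ["s3:GetBucketLocation", "s3:ListBucket",
                 "s3:ListBucketMultipartUploads", "s3:AbortMultipartUpload",
                 "s3:DeleteObject", "s3:GetObject", "s3:ListMultipartUploadParts",
                 "s3:PutObject", "s3:GetObjectTagging", "s3:PutObjectTagging"],
      "Resource": ["arn:aws:s3:::amzn-s3-demo-destination-bucket",
                   "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"]
    }
  ]
}
"""

def fill_placeholders(template, source_account, role_name, bucket):
    """Substitute the tutorial's placeholders and parse to verify the JSON."""
    text = (template
            .replace("source-account", source_account)
            .replace("source-datasync-role", role_name)
            .replace("amzn-s3-demo-destination-bucket", bucket))
    return json.loads(text)  # raises ValueError if the policy is malformed

policy = fill_placeholders(TEMPLATE, "123456789012",
                           "source-datasync-role", "my-dest-bucket")
print(policy["Statement"][0]["Principal"]["AWS"])
```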
Step 3: In your destination account, disable ACLs for your S3 bucket

It's important that all the data that you transfer to the S3 bucket belongs to your destination account. To ensure that this account owns the data, disable the bucket's access control lists (ACLs). For more information, see Controlling ownership of objects and disabling ACLs for your bucket in the Amazon S3 User Guide.

Before you begin: Make sure that you have the required permissions for your destination account.

Disable your destination S3 bucket ACLs

1. While still logged in to the S3 console with your destination account, choose the S3 bucket that you're transferring data to.
2. On the bucket's detail page, choose the Permissions tab.
3. Under Object Ownership, choose Edit.
4. If it isn't already selected, choose the ACLs disabled (recommended) option.
5. Choose Save changes.

Step 4: In your source account, create your DataSync locations

In your source account, create the DataSync locations for your source and destination S3 buckets.

Before you begin: Make sure that you have the required permissions for your source account.

Create your DataSync source location

• In your source account, create a location for the S3 bucket that you're transferring data from.

Create your DataSync destination location

While still in your source account, create a location for the S3 bucket that you're transferring data to. Since you can't create cross-account locations by using the DataSync console interface, these instructions require that you run a create-location-s3 command to create your destination location. We recommend running
the command by using AWS CloudShell, a browser-based, pre-authenticated shell that you launch directly from the console. CloudShell allows you to run AWS CLI commands like create-location-s3 without downloading or installing command line tools.

Note: To complete the following steps by using a command line tool other than CloudShell, make sure that your AWS CLI profile uses the same IAM role that includes the required user permissions to use DataSync in your source account.

To create a DataSync destination location by using CloudShell

1. While still in your source account, do one of the following to launch CloudShell from the console:

   • Choose the CloudShell icon on the console navigation bar. It's located to the right of the search box.
   • Use the search box on the console navigation bar to search for CloudShell and then choose the CloudShell option.

2. Copy the following create-location-s3 command:

   aws datasync create-location-s3 \
       --s3-bucket-arn arn:aws:s3:::amzn-s3-demo-destination-bucket \
       --region amzn-s3-demo-destination-bucket-region \
       --s3-config '{
           "BucketAccessRoleArn":"arn:aws:iam::source-account-id:role/source-datasync-role"
       }'

3.
Replace amzn-s3-demo-destination-bucket with the name of the S3 bucket in your destination account.
4. If your destination bucket is in a different Region than your source bucket, replace amzn-s3-demo-destination-bucket-region with the Region where the destination bucket resides (for example, us-east-2). Remove this option if your buckets are in the same Region.
5. Replace source-account-id with the source AWS account ID.
6. Replace source-datasync-role with the DataSync IAM role that you created in your source account.
7. Run the command in CloudShell. If the command returns a DataSync location ARN similar to this, you successfully created the location:

   {
       "LocationArn": "arn:aws:datasync:us-east-2:123456789012:location/loc-abcdef01234567890"
   }

8. In the left navigation pane, expand Data transfer, then choose Locations.
9. If you created the location in a different Region, choose that Region in the navigation pane.

From your source account, you can see the S3 location that you just created for your destination account bucket.

Step 5: In your source account, create and start your DataSync task

Before starting a DataSync task to transfer your data, let's recap what you've done so far:

• In your source account, you created an IAM role that allows DataSync to transfer data to the S3 bucket in your destination account.
• In your destination account, you configured your S3 bucket so that DataSync can transfer data to it.
• In your source account, you created the DataSync source and destination locations for your transfer.

Create and start your DataSync task

1. While still using the DataSync console in your source account, expand Data transfer in the left navigation pane, then choose Tasks and Create task.
2. If the bucket in your destination account is in a different Region than the bucket in your source account, choose the destination bucket's Region in the top navigation pane.
   Important: To avoid a network connection error, you must create your DataSync task in the same Region as the destination location.

3. On the Configure source location page, do the following:

   a. Select Choose an existing location.
   b. (For transfers across Regions) In the Region dropdown, choose the Region where the source bucket resides.
   c. For Existing locations, choose the source location for the S3 bucket that you're transferring data from, then choose Next.

4. On the Configure destination location page, do the following:

   a. Select Choose an existing location.
   b. For Existing locations, choose the destination location for the S3 bucket that you're transferring data to, then choose Next.

5. On the Configure settings page, choose a Task mode.

   Tip: We recommend using Enhanced mode. For more information, see Choosing a task mode for your data transfer.

6. Give the task a name and configure additional settings, such as specifying an Amazon CloudWatch log group. Choose Next.
7. On the Review page, review your settings and choose Create task.
8. On the task's details page, choose Start, and then choose one of the following:

   • To run the task without modification,
choose Start with defaults.
   • To modify the task before running it, choose Start with overriding options.

When your task finishes, check the S3 bucket in your destination account. You should see the data that moved from your source account bucket.

Troubleshooting

Refer to the following information if you run into issues trying to complete your cross-account transfer.

Connection errors

When transferring between S3 buckets in different AWS accounts and Regions, you might get a network connection error when starting your DataSync task. To resolve this, create a task in the same Region as your destination location and try running that task.

Related: Cross-account transfers with S3 buckets using server-side encryption

If you're trying to do this transfer with S3 buckets using server-side encryption, see the AWS Storage Blog for instructions.

Performing a large data migration with AWS DataSync

Large-scale data migrations can involve transferring significant volumes of data that encompass millions of files or objects in various formats. AWS DataSync simplifies these complex transfers by managing scheduling, monitoring, encryption, and data verification.

What is a large data migration?

A large data migration typically involves transferring terabytes or more of data spread across various sources to a new destination storage environment (in this case, AWS).
These migrations require careful planning and coordination within your organization to move data successfully while minimizing business disruption. DataSync can simplify these migrations, which are usually complex in nature. Some benefits of using DataSync for your migration include: • Automated management of data-transfer processes and the infrastructure required for high performance and secure data transfers. • End-to-end security, including encryption and data integrity validation, to help ensure that your data arrives securely, intact, and ready to use. • A purpose-built network protocol and a parallel, multi-threaded architecture to speed up migrations. Key stages of a large data migration You can usually break down a large migration into the following stages: • (Stage 1) Planning your data migration - At this stage, you're trying to understand why you're migrating and what sort of data you're working with. Planning activities include: • Understanding why you want to migrate • Assembling a team to help you with all aspects of the migration. • Identifying data locations, formats, and usage patterns • Assessing available hardware resources and network requirements (if you're migrating from an on-premises data center) What is a large data migration? 449 AWS DataSync User Guide • Running proof of concept (POC) tests with DataSync to estimate migration timelines, plan cutover windows, and get a sense of how you need to configure DataSync • (Stage 2) Implementing your large data migration - At this point, you're validating your plan and starting the migration. Implementation activities include: • Validating the migration plan • Executing phased cutovers that include monitoring and verifying your data transfers as expected • Optimizing and adjusting as needed in between each cutover • Cleaning up unused resources once you're done Additional resources AWS Prescriptive Guidance has the following resources that can help you plan and implement a large migration. 
Use this guide to understand how DataSync can work in the context of common migration processes and activities. • Large migrations to the AWS cloud • Strategy and best practices for AWS large migrations • Migrate shared file systems in an AWS large migration – This resource includes an SFS- Discovery-Workbook that you can download and use to plan a migration at the file share level. Stage 1: Planning your large data migration Planning is essential when migrating a large dataset. You must understand the data you're migrating, your motivations for the migration, and how AWS DataSync can help you get your data where you want it. Topics • Gathering requirements for your migration • Running a DataSync proof of concept • Estimating migration timelines Additional resources 450 AWS DataSync User Guide Gathering requirements for your migration The first step in a large data migration requires collecting a variety of information across your organization. This information helps you create a migration process, which for large migrations can include multiple transfers and procedures for cutting over operations (done in waves) from your source to your destination storage. Understanding why you
want to migrate

Before you can start migrating to AWS, you need to clearly understand why you're migrating your data. This helps address common migration challenges such as meeting deadlines, managing resources, and coordinating across teams. If you need help determining your motivations for the migration, answer these questions:

• Are you freeing up on-premises storage space?
• Are you meeting hardware support contract deadlines?
• Is this for a data center exit?
• What's your migration timeline?
• Are you transferring data from other cloud storage?
• Are you migrating partial or complete datasets?
• Is this for data archival?
• Do applications or users need regular access to this data?

Figuring out logistics

Address some basic logistics about your storage environment, the migration, and your organization:

1. Get a basic understanding of your current data storage infrastructure.
2. Verify whether you need a DataSync agent. For example, you need an agent if you're transferring from on-premises storage.
3. If you need an agent, make sure that you understand the agent requirements:

   • An agent can run as a virtual machine (VM) on VMware ESXi, Linux Kernel-based Virtual Machine (KVM), and Microsoft Hyper-V hypervisors.
You also can deploy an agent as an Amazon EC2 instance within AWS. • Large migrations are typically memory intensive. Make sure that your agent has enough RAM. 4. Identify key stakeholders from your leadership, networking, storage, and IT departments who need to be involved in the migration. This can include: • Find a single-threaded leader who's dedicated to the project and its results. • Determine who's responsible for the ownership and classification of the data that you're migrating. • Identify who manages your source and who eventually will manage the AWS storage service that you're migrating to. • Find out who will create and manage any other processes for your data once it's in AWS. 5. Establish cross-department communication channels. 6. Create a rollback plan for contingencies. 7. Document the complete migration process, including waves, validation, and cutover procedures. Use this as your runbook for the entire migration. You will update this process as you plan and implement the migration. Reviewing the data you're migrating Work with your storage and application teams to analyze the characteristics of the data you're migrating. This information helps you determine a migration strategy that you can execute with DataSync. Contents • Determining data usage patterns • Identifying data structure and layout • Documenting shares and folders • Analyzing file sizes Gathering requirements 452 AWS DataSync Determining data usage patterns User Guide • For actively used data with frequent modifications, plan for multiple waves of incremental transfers to avoid disrupting business operations. • For read-only data that might be considered archival, you might not need to plan for waves. • If you have a mix of data usage patterns, plan waves that migrate these different datasets separately. For example, you might have one wave for archive data, with the rest of the waves dedicated to migrating active data. 
Identifying data structure and layout • Determine if data is organized by time periods (year, month, day) or other patterns. • Use this organization structure to plan your migration waves. For example, you might migrate a year's worth of archive data during one wave. Documenting shares and folders • Create an inventory of shares and folders (including file or object counts for each). • Identify shares and folders with active datasets. These might require incremental transfers during the migration. • Review the DataSync quotas. This can help you plan how to partition your dataset when configuring DataSync. Analyzing file sizes • Expect higher data throughput for transfers with larger files (MB or GB) compared to smaller files (KB). • If you're working with a lot of smaller files, expect more metadata operations on your storage system and lower data throughput. DataSync performs these operations when comparing and verifying your source and destination locations. Identifying storage requirements To choose a compatible AWS storage service to migrate your data, you need to evaluate your source storage system's characteristics and performance. Gathering requirements 453 AWS DataSync User Guide This information can also help you schedule your transfers
to minimize impact on business operations during the migration.

Contents

• Determining source storage support
• Reviewing metadata preservation requirements
• Collecting performance metrics from source storage
• Choosing a destination AWS storage service

Determining source storage support

DataSync can work with a variety of storage systems that allow access through NFS, SMB, HDFS, and S3 compatible object storage clients. If you're migrating from other cloud storage, verify that DataSync can work with that provider. For a list of supported source locations, see Where can I transfer my data with AWS DataSync?

Reviewing metadata preservation requirements

DataSync can preserve your file or object metadata during a transfer. How your metadata gets preserved depends on your transfer locations and whether those locations use similar types of metadata. DataSync in some cases needs additional permissions to preserve file metadata, such as NTFS discretionary access control lists (DACLs). For more information, see Understanding how DataSync handles file and object metadata.

Collecting performance metrics from source storage

Measure baseline IOPS and disk throughput during average and peak workloads for your source storage. Transferring data adds I/O overhead to both your source and destination storage systems.
Compare this performance data against your storage system's specifications to determine available performance resources.

Choosing a destination AWS storage service

At this point, you might have an idea what AWS storage service makes sense for your data. If not, data usage patterns and storage performance are a couple of areas to think about when deciding. For example, you might consider Amazon S3 if you have archive data and Amazon FSx or Amazon EFS for active data. To help you decide the right object or file-based storage for your data, see Choosing an AWS storage service.

Determining network requirements

To migrate your data with DataSync, you must establish network connections between your source storage, agent, and AWS. You also need to plan for enough network bandwidth and infrastructure. Work with your network engineers and storage administrators to gather the following network requirements.

Contents

• Assessing your available network bandwidth
• Considering options for connecting your network to AWS
• Choosing a service endpoint for agent communication
• Planning for enough network infrastructure

Assessing your available network bandwidth

Your available network bandwidth factors into your transfer speeds and overall migration time. If you're transferring from an on-premises storage system, do the following:

• Work with your network team to determine average and peak bandwidth utilization.
• Identify windows when you can transfer data and avoid disrupting daily operations. This will inform when your migration waves and cutovers happen.

You can control how much bandwidth DataSync uses. For more information, see Setting bandwidth limits for your AWS DataSync task.

Since transfers from other cloud storage typically happen over the public internet, there usually are fewer bandwidth restrictions and considerations with these transfers.
Considering options for connecting your network to AWS

Consider the following options for establishing network connectivity for your DataSync transfer:

• AWS Direct Connect - Review the architecture and routing examples for using Direct Connect with DataSync. You can monitor Direct Connect activity using Amazon CloudWatch.
• VPN - AWS Site-to-Site VPN offers up to 1.25 Gbps throughput per tunnel.
• Public internet - Contact your internet service provider for network usage data.

Choosing a service endpoint for agent communication

DataSync agents use service endpoints to communicate with the DataSync service. The type of endpoint you use depends on how you're connecting your network to AWS.

Planning for enough network infrastructure

For every transfer task that you create, DataSync automatically generates and manages the network infrastructure for your data transfers. This infrastructure consists of elastic network interfaces, which are logical networking components in an Amazon virtual private cloud (VPC) that represent virtual network cards. For more information, see the Amazon EC2 User Guide.

Each network interface uses a single IP address in your destination VPC subnet. To make sure that you have enough network infrastructure for your migration, do the following:

• Note the number of network interfaces that DataSync will create for your DataSync destination location.
• Make sure that
your subnet has enough IP addresses for your DataSync tasks. For example, a task that uses an agent requires four IP addresses. If you create four tasks for your migration, that means you need 16 available IP addresses in your subnet.

Running a DataSync proof of concept

Running a proof of concept (POC) with AWS DataSync helps you validate the following aspects of your data migration planning:

• Verify network connectivity between source and destination locations.
• Validate your initial DataSync task configuration.
• Measure data transfer performance.
• Estimate migration timelines.
• Define success criteria with the key stakeholders working on the migration.

Getting started with your proof of concept

1. Create your DataSync agent:

   1. Deploy your agent.
   2. Choose a service endpoint for your agent.
   3. Activate your agent.
   4. Verify your agent's network connections.

2. Select a small subset of data that represents the data that you're migrating. For example, if your source storage has a mix of large and small files, the subset of data you transfer in your POC should reflect that. This gives you a preliminary understanding of performance from the storage systems, your network, and DataSync.
3.
Create a DataSync source location for your on-premises or other cloud storage system.
4. Create a DataSync destination location for your AWS storage service.
5. Create a DataSync transfer task with a filter that only transfers your data subset.
6. Start your DataSync task.
7. Collect transfer performance metrics by monitoring the following:

   • Your task execution's data and file throughput. You can do this through the DataSync console or the DescribeTaskExecution operation. If you use DescribeTaskExecution, here's how you calculate these metrics:
     • Data throughput: Divide BytesWritten by TransferDuration
     • File throughput: Divide FilesTransferred by TransferDuration
   • Source and destination storage utilization. Work closely with your storage administrators to get this information.
   • Network usage.

8. Verify the transferred data at your destination location:

   • Review your CloudWatch logs for task execution errors.
   • Verify that permissions and metadata are preserved at the destination location.
   • Confirm that applications and users can access destination data as expected.
   • Address any issues that you encounter. For more information, see Troubleshooting AWS DataSync issues.

9. Run your task a few more times to get an idea of how long it takes DataSync to prepare, transfer, and verify your data. (For more information, see Task execution statuses.)

   If you run a task more than once, DataSync by default performs an incremental transfer and copies only the data that's changed from the previous task run. While the transfer time will likely be shorter for incremental transfers, DataSync will always prepare your transfer the same way by scanning and comparing your locations to identify what to transfer. You can use these preparation times to estimate cutover timelines for your migration.

10. If needed, update your migration plan based on what you learned during the POC.
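The throughput calculations in step 7 can be sketched in Python. This assumes a DescribeTaskExecution result where BytesWritten is a byte count, FilesTransferred is a file count, and TransferDuration is reported in milliseconds (the field names match the DataSync API; the sample values below are made up).

```python
# Sketch: compute POC throughput from DescribeTaskExecution fields.
# Sample values are hypothetical; TransferDuration is in milliseconds.
execution = {
    "BytesWritten": 536_870_912_000,   # ~500 GiB
    "FilesTransferred": 250_000,
    "TransferDuration": 3_600_000,     # 1 hour, in milliseconds
}

transfer_seconds = execution["TransferDuration"] / 1000
data_throughput_mib = execution["BytesWritten"] / transfer_seconds / (1024 ** 2)
file_throughput = execution["FilesTransferred"] / transfer_seconds

print(f"Data throughput: {data_throughput_mib:.1f} MiB/s")   # 142.2 MiB/s
print(f"File throughput: {file_throughput:.1f} files/s")     # 69.4 files/s
```

Comparing these numbers against your available network bandwidth tells you whether the POC was network-bound or limited by the storage systems, which informs how many parallel tasks to plan for.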
Estimating migration timelines Using the information you've collected to this point, you can estimate how long the migration will take using AWS DataSync. Estimating data transfer timelines You can estimate how long it takes DataSync to transfer your data based on the following information you collected during migration requirements gathering and your DataSync proof of concept (POC): • Your available network bandwidth • Source and destination storage utilization metrics • Performance metrics from your DataSync POC To estimate a data transfer timeline 1. Compare the data and file throughput from your POC with your available network bandwidth. 2. If your throughput is lower than your available bandwidth (such as 300 MiB/s for throughput with 10 Gbps of network bandwidth), consider partitioning your dataset into multiple tasks to maximize bandwidth usage. Estimating migration timelines 458 AWS DataSync User Guide DataSync has a few options for partitioning your dataset. For more information, see Accelerating your migration with data partitioning. 3. Calculate how many days a transfer takes by using the following formula, which provides a theoretical minimum transfer time: (DATA_SIZE * 8 bits per byte)/(CIRCUIT * NETWORK_UTILIZATION percentage * 3600 seconds per hour * AVAILABLE_HOURS) = Number
of days

When using this formula, replace the following with your own values:

• DATA_SIZE: The amount of data that you're migrating (expressed in bytes).
• CIRCUIT: Your available network bandwidth (expressed in bits per second).
• NETWORK_UTILIZATION: What percent of your network is being used.
• AVAILABLE_HOURS: The number of operational hours available in each day.

For example, you would calculate a migration with 100 TB of data, a 1 Gbps internet connection, 80 percent network utilization, and 24 hours per day availability like this:

(100,000,000,000,000 bytes * 8) / (1,000,000,000 bps * 0.80 * 3600 * 24) = 11.57 days

In this case, the migration would take almost 12 days before accounting for real-world conditions.

4. Adjust your calculated transfer duration to account for real-world conditions:

   • Network performance fluctuations
   • Storage performance variations
   • Downtime between migration waves

Estimating cutover timelines

If you're migrating active datasets, you likely need cutovers so that you don't disrupt business operations. Don't underestimate how long cutovers take. With large migrations, it's not uncommon for cutover activities to take up to 30 percent of your overall migration time.

1.
Evaluate if you need to perform cutovers in waves to reduce the amount of data scanned for incremental changes. One strategy for doing this is cutting over datasets that you partition based on shares, folders, or storage systems. 2. Review how long it generally took DataSync to prepare, transfer, and verify your data during the POC. Note in particular the prepare durations of your task executions. To find this information, run the DescribeTaskExecution operation, then check the value of PrepareDuration for the duration time (in milliseconds). 3. Estimate how long a cutover might take by measuring the time delta across parallel tasks. For more information on parallel tasks, see Accelerating your migration with data partitioning. 4. Use your cutover estimation to schedule your cutovers. These essentially are maintenance windows when your source data can't be modified. Next steps After estimating your timelines, you're ready to start implementing your migration. Stage 2: Implementing your large data migration With the information you gathered during planning, you can begin using AWS DataSync to migrate to your new storage system. If you haven't already, we recommend reviewing the AWS Prescriptive Guidance resources for large migrations. Topics • Accelerating your migration with data partitioning • Running your DataSync transfer tasks • Monitoring your transfers Accelerating your migration with data partitioning With a large migration, we recommend partitioning your dataset with multiple DataSync tasks. Partitioning your source data across multiple tasks (and possibly agents) lets you parallelize your transfers and reduce the migration timeline. Stage 2: Implementing your migration 460 AWS DataSync User Guide Partitioning also helps you stay within DataSync quotas and simplifies the monitoring and debugging of your tasks. The following diagram shows how you might use multiple DataSync tasks and agents to transfer data from the same source storage location. 
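When weighing partitioning strategies, it helps to revisit the transfer-time formula from the planning stage. The sketch below codes up that formula using the guide's example values (100 TB, 1 Gbps, 80 percent utilization, 24 hours per day); the second call illustrates the effect of parallel tasks that drive the link to a higher assumed utilization.

```python
def transfer_days(data_size_bytes: float, circuit_bps: float,
                  network_utilization: float, available_hours: float) -> float:
    """Theoretical minimum transfer time in days (ignores real-world slowdowns)."""
    bits_to_move = data_size_bytes * 8
    bits_per_day = circuit_bps * network_utilization * 3600 * available_hours
    return bits_to_move / bits_per_day

# The guide's example: 100 TB over a 1 Gbps link at 80% utilization, 24 hours/day.
print(f"{transfer_days(100e12, 1e9, 0.80, 24):.2f} days")  # 11.57 days

# Partitioning only shortens this if it raises effective utilization;
# for example, parallel tasks that together reach 95% (assumed) utilization:
print(f"{transfer_days(100e12, 1e9, 0.95, 24):.2f} days")  # 9.75 days
```

If a single task already saturates the circuit, adding tasks won't reduce the timeline; in that case, look at increasing bandwidth or scheduling more available hours instead.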
In this scenario, each task focuses on a specific folder in the source location. For more information and examples on these approaches, see How to accelerate your data transfers with AWS DataSync scale out architectures. Partitioning your dataset by folder or prefix When creating your DataSync source location, you can specify a folder, directory, or prefix that DataSync reads from. For example, if you're migrating a file share with top-level directories, you can create multiple locations that specify a different directory path. You can then use these locations to run multiple DataSync tasks during your migration. Partitioning your dataset with filters You can apply filters to include or exclude data from your source location in a transfer. In the context of a large migration, filters can help you scope tasks to specific portions of your dataset. Accelerating your migration with partitioning 461 AWS DataSync User Guide For example, if you’re migrating archive data that’s organized by year, you can create an include filter to match for a specific year or multiple years. You also can modify the filter each time you run the task to match a different year. Partitioning your dataset with manifests A manifest is a list of files or objects that you want DataSync to
transfer. With a manifest, DataSync doesn't have to read everything in a source location to determine what to transfer. You can create manifests from inventories of your source storage or through event-driven approaches (for example, see Implementing AWS DataSync with hundreds of millions of objects). You can also use a different manifest each time you start a task, allowing you to transfer different sets of data with the same task. Running your DataSync transfer tasks During each of your migration waves, your data transfer usually follows the same general process: 1. Run an initial full transfer of your data. 2. Verify the data in the destination. 3. Run incremental transfers for any data that might have changed since the initial transfer. 4. Cut over operations to your destination location. 5. Review cutover results. Running your tasks You likely will need to run your DataSync transfer tasks during business hours to minimize your overall migration time. It's common in these situations to run an initial full transfer followed by incremental transfers that account for changes to your source location from users and applications. To avoid network-related issues during business hours, you can limit the amount of bandwidth that your tasks use. For more information, see Setting bandwidth limits for your AWS DataSync task. 1. Run an initial full transfer: a.
Start your DataSync task (or tasks if you’re running tasks in parallel). b. Monitor the progress and performance of your task executions. Running your DataSync tasks 462 AWS DataSync User Guide c. Verify that your data transferred the way you expect (for example, file metadata is preserved). 2. Run incremental transfers: a. Schedule your tasks to run periodically. b. Monitor your task executions and fix errors if encountered. Performing a cutover After your initial and incremental transfers, you can start the process of cutting over operations to your destination location. 1. Start the scheduled maintenance window. 2. Update your source storage system to be read only for applications and users. 3. Run final incremental transfers to copy remaining deltas between your source and destination locations. 4. Conduct a thorough data validation (for example, by reviewing CloudWatch logs and task reports). 5. Switch your applications and users to the new environment of your destination location. 6. Test application functionality and make sure that users can access data in your destination location. 7. Schedule a retrospective meeting to review the transfer with the migration teams. Ask probing questions such as the following: • Was the cutover successful? If not, what was the issue? • Did we use all available bandwidth? • Were the source and destination storage systems fully utilized? • Can we get more data throughput with additional tasks? • Do we need to plan for a longer maintenance window? 8. If needed, update your migration plan before starting the next wave. Monitoring your transfers AWS DataSync provides several monitoring options to help you validate and debug your transfer. Monitoring your transfers 463
Monitoring your transfers with task reports If you’re transferring millions of files or objects, consider using task reports. Task reports provide detailed information about what DataSync attempts to transfer, skip, verify, and delete during a task execution. For more information, see Monitoring your data transfers with task reports. You can also visualize your task reports by using AWS services such as AWS Glue, Amazon Athena, and Amazon QuickSight. For more information, see the AWS Storage Blog. Monitoring your transfers with CloudWatch Logs At minimum, we recommend that you configure your task to log basic information and transfer errors. For more information, see Monitoring data transfers with Amazon CloudWatch Logs. Monitoring your transfers 464 AWS DataSync User Guide AWS DataSync API In addition to the AWS Management Console and AWS CLI, you can use the AWS DataSync API to configure and manage DataSync with the AWS SDKs. Topics • Actions • Data Types • Common Errors • Common Parameters Actions The following actions are supported: • AddStorageSystem • CancelTaskExecution • CreateAgent • CreateLocationAzureBlob • CreateLocationEfs • CreateLocationFsxLustre •
CreateLocationFsxOntap • CreateLocationFsxOpenZfs • CreateLocationFsxWindows • CreateLocationHdfs • CreateLocationNfs • CreateLocationObjectStorage • CreateLocationS3 • CreateLocationSmb • CreateTask • DeleteAgent • DeleteLocation Actions 465 User Guide AWS DataSync • DeleteTask • DescribeAgent • DescribeDiscoveryJob • DescribeLocationAzureBlob • DescribeLocationEfs • DescribeLocationFsxLustre • DescribeLocationFsxOntap • DescribeLocationFsxOpenZfs • DescribeLocationFsxWindows • DescribeLocationHdfs • DescribeLocationNfs • DescribeLocationObjectStorage • DescribeLocationS3 • DescribeLocationSmb • DescribeStorageSystem • DescribeStorageSystemResourceMetrics • DescribeStorageSystemResources • DescribeTask • DescribeTaskExecution • GenerateRecommendations • ListAgents • ListDiscoveryJobs • ListLocations • ListStorageSystems • ListTagsForResource • ListTaskExecutions • ListTasks • RemoveStorageSystem • StartDiscoveryJob • StartTaskExecution Actions 466 User Guide AWS DataSync • StopDiscoveryJob • TagResource • UntagResource • UpdateAgent • UpdateDiscoveryJob • UpdateLocationAzureBlob • UpdateLocationEfs • UpdateLocationFsxLustre • UpdateLocationFsxOntap • UpdateLocationFsxOpenZfs • UpdateLocationFsxWindows • UpdateLocationHdfs • UpdateLocationNfs • UpdateLocationObjectStorage • UpdateLocationS3 • UpdateLocationSmb
• UpdateStorageSystem • UpdateTask • UpdateTaskExecution Actions 467 AWS DataSync AddStorageSystem User Guide Creates an AWS resource for an on-premises storage system that you want DataSync Discovery to collect information about. Request Syntax { "AgentArns": [ "string" ], "ClientToken": "string", "CloudWatchLogGroupArn": "string", "Credentials": { "Password": "string", "Username": "string" }, "Name": "string", "ServerConfiguration": { "ServerHostname": "string", "ServerPort": number }, "SystemType": "string", "Tags": [ { "Key": "string", "Value": "string" } ] } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. AgentArns Specifies the Amazon Resource Name (ARN) of the DataSync agent that connects to and reads from your on-premises storage system's management interface. You can only specify one ARN. Type: Array of strings Array Members: Fixed number of 1 item. AddStorageSystem 468 AWS DataSync User Guide Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:agent/agent-[0-9a-z]{17}$ Required: Yes ClientToken Specifies a client token to make sure requests with this API operation are idempotent. If you don't specify a client token, DataSync generates one for you automatically. Type: String Pattern: [a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12} Required: Yes CloudWatchLogGroupArn Specifies the ARN of the Amazon CloudWatch log group for monitoring and logging discovery job events. Type: String Length Constraints: Maximum length of 562. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):logs:[a-z\-0-9]+: [0-9]{12}:log-group:([^:\*]*)(:\*)?$ Required: No Credentials Specifies the user name and password for accessing your on-premises storage system's management interface. 
Type: Credentials object Required: Yes Name Specifies a familiar name for your on-premises storage system. Type: String Length Constraints: Minimum length of 1. Maximum length of 256. AddStorageSystem 469 AWS DataSync User Guide Pattern: ^[\p{L}\p{M}\p{N}\s+=._:@\/-]+$ Required: No ServerConfiguration Specifies the server name and network port required to connect with the management interface of your on-premises storage system. Type: DiscoveryServerConfiguration object Required: Yes SystemType Specifies the type of on-premises storage system that you want DataSync Discovery to collect information about. Note DataSync Discovery currently supports NetApp Fabric-Attached Storage (FAS) and All Flash FAS (AFF) systems running ONTAP 9.7 or later. Type: String Valid Values: NetAppONTAP Required: Yes Tags Specifies labels that help you categorize, filter, and search for your AWS resources. We recommend creating at least a name tag for your on-premises storage system. Type: Array of TagListEntry objects Array Members: Minimum number of 0 items. Maximum number of 50 items. Required: No Response Syntax { AddStorageSystem 470 AWS DataSync User Guide "StorageSystemArn": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. StorageSystemArn The ARN of the on-premises storage system that you can use with DataSync Discovery. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:system/storage-system-[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9] {4}-[a-f0-9]{4}-[a-f0-9]{12}$ Errors For information about the errors that are common to all actions, see Common Errors. InternalException This exception is thrown when an error occurs in the AWS DataSync service. HTTP Status Code: 500 InvalidRequestException This exception is thrown when the client submits a malformed request. 
HTTP Status Code: 400 Examples Sample Request The following example adds an on-premises storage system to DataSync Discovery. AddStorageSystem 471 User Guide AWS DataSync { "ServerConfiguration": { "ServerHostname": "172.16.0.0", "ServerPort": 443 }, "SystemType": "NetAppONTAP", "AgentArns": [ "arn:aws:datasync:us-east-1:111222333444:agent/agent-012345abcde012345" ], "CloudWatchLogGroupArn": "arn:aws:logs:us-east-1:111222333444:log-group:/aws/ datasync/discovery:*", "Tags": [ { "Key": "Migration Plan", "Value": "1" } ], "Name": "MyOnPremStorage", "Credentials": { "Username": "admin", "Password": "1234" } } Sample Response A response returns the ARN of the on-premises storage system that you just added to DataSync Discovery. { "StorageSystemArn":
"arn:aws:datasync:us-east-1:111222333444:system/storage-system-abcdef01234567890" } See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET AddStorageSystem 472 User Guide AWS DataSync • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 AddStorageSystem 473 AWS DataSync CancelTaskExecution User Guide Stops an AWS DataSync task execution that's in progress. The transfer of some files is abruptly interrupted. File contents that are transferred to the destination might be incomplete or inconsistent with the source files. However, if you start a new task execution using the same task and allow it to finish, file content on the destination will be complete and consistent. This applies to other unexpected failures that interrupt a task execution. In all of these cases, DataSync successfully completes the transfer when you start the next task execution.
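The TaskExecutionArn parameter in the request details that follow has a strict pattern, so a malformed ARN fails with InvalidRequestException. As a sketch, you might validate the ARN client-side before calling CancelTaskExecution; the helper function and sample ARNs below are ours, not part of any SDK:

```python
import re

# TaskExecutionArn pattern from the CancelTaskExecution request constraints
EXECUTION_ARN = re.compile(
    r"^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z\-0-9]*:"
    r"[0-9]{12}:task/task-[0-9a-f]{17}/execution/exec-[0-9a-f]{17}$"
)

def is_task_execution_arn(arn):
    """Return True if the string is a well-formed task execution ARN."""
    return EXECUTION_ARN.match(arn) is not None

# Hypothetical ARN (17 hexadecimal characters in both the task and execution IDs)
arn = ("arn:aws:datasync:us-east-2:111222333444:"
       "task/task-0123456789abcdef0/execution/exec-0123456789abcdef0")
print(is_task_execution_arn(arn))           # True
print(is_task_execution_arn("not-an-arn"))  # False
```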
Request Syntax { "TaskExecutionArn": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. TaskExecutionArn The Amazon Resource Name (ARN) of the task execution to stop. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]*:[0-9]{12}:task/task-[0-9a-f]{17}/execution/exec-[0-9a-f]{17}$ Required: Yes Response Elements If the action is successful, the service sends back an HTTP 200 response with an empty HTTP body. CancelTaskExecution 474 AWS DataSync Errors User Guide For information about the errors that are common to all actions, see Common Errors. InternalException This exception is thrown when an error occurs in the AWS DataSync service. HTTP Status Code: 500 InvalidRequestException This exception is thrown when the client submits a malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 CancelTaskExecution 475 AWS DataSync CreateAgent User Guide Activates an AWS DataSync agent that you deploy in your storage environment. The activation process associates the agent with your AWS account. If you haven't deployed an agent yet, see Do I need a DataSync agent? Request Syntax { "ActivationKey": "string", "AgentName": "string", "SecurityGroupArns": [ "string" ], "SubnetArns": [ "string" ], "Tags": [ { "Key": "string", "Value": "string" } ], "VpcEndpointId": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. 
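The ActivationKey parameter described below has a fixed format: five groups of five uppercase alphanumeric characters, separated by hyphens. As a sketch, assuming you want to catch malformed keys before calling CreateAgent (the helper name is ours, not part of any SDK):

```python
import re

# ActivationKey pattern from the CreateAgent request constraints
ACTIVATION_KEY = re.compile(r"^[A-Z0-9]{5}(-[A-Z0-9]{5}){4}$")

def is_activation_key(key):
    """Return True if the string looks like a DataSync agent activation key."""
    return ACTIVATION_KEY.match(key) is not None

print(is_activation_key("AAAAA-1AAAA-BB1CC-33333-EEEEE"))  # True
print(is_activation_key("aaaaa-1aaaa-bb1cc-33333-eeeee"))  # False: lowercase
```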
ActivationKey Specifies your DataSync agent's activation key. If you don't have an activation key, see Activating your agent. Type: String Length Constraints: Maximum length of 29. Pattern: [A-Z0-9]{5}(-[A-Z0-9]{5}){4} Required: Yes CreateAgent 476 AWS DataSync AgentName User Guide Specifies a name for your agent. We recommend specifying a name that you can remember. Type: String Length Constraints: Minimum length of 0. Maximum length of 256. Pattern: ^[a-zA-Z0-9\s+=._:@/-]+$ Required: No SecurityGroupArns Specifies the Amazon Resource Name (ARN) of the security group that allows traffic between your agent and VPC service endpoint. You can only specify one ARN. Type: Array of strings Array Members: Fixed number of 1 item. Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):ec2:[a-z\-0-9]*: [0-9]{12}:security-group/sg-[a-f0-9]+$ Required: No SubnetArns Specifies the ARN of the subnet where your VPC service endpoint is located. You can only specify one ARN. Type: Array of strings Array Members: Fixed number of 1 item. Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):ec2:[a-z\-0-9]*: [0-9]{12}:subnet/.*$ Required: No CreateAgent 477 AWS DataSync Tags User Guide Specifies labels that help you categorize, filter, and search for your AWS resources. We recommend creating at least one tag for your agent. Type: Array of TagListEntry objects Array Members: Minimum number of 0 items. Maximum number of 50 items. Required: No VpcEndpointId Specifies the ID of the VPC service endpoint that you're using. For example,
a VPC endpoint ID looks like vpce-01234d5aff67890e1. Important The VPC service endpoint you use must include the DataSync service name (for example, com.amazonaws.us-east-2.datasync). Type: String Pattern: ^vpce-[0-9a-f]{17}$ Required: No Response Syntax { "AgentArn": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. CreateAgent 478 AWS DataSync AgentArn User Guide The ARN of the agent that you just activated. Use the ListAgents operation to return a list of agents in your AWS account and AWS Region. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:agent/agent-[0-9a-z]{17}$ Errors For information about the errors that are common to all actions, see Common Errors. InternalException This exception is thrown when an error occurs in the AWS DataSync service. HTTP Status Code: 500 InvalidRequestException This exception is thrown when the client submits a malformed request. HTTP Status Code: 400 Examples Sample Request The following example activates a DataSync agent.
{ "ActivationKey": "AAAAA-1AAAA-BB1CC-33333-EEEEE", "AgentName": "MyAgent", "Tags": [{ "Key": "Job", "Value": "TransferJob-1" }] } CreateAgent 479 AWS DataSync Sample Response The response returns the ARN of the activated agent. User Guide { "AgentArn": "arn:aws:datasync:us-east-2:111222333444:agent/agent-0b0addbeef44baca3" } See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 CreateAgent 480 AWS DataSync User Guide CreateLocationAzureBlob Creates a transfer location for a Microsoft Azure Blob Storage container. AWS DataSync can use this location as a transfer source or destination. Before you begin, make sure you know how DataSync accesses Azure Blob Storage and works with access tiers and blob types. You also need a DataSync agent that can connect to your container. Request Syntax { "AccessTier": "string", "AgentArns": [ "string" ], "AuthenticationType": "string", "BlobType": "string", "ContainerUrl": "string", "SasConfiguration": { "Token": "string" }, "Subdirectory": "string", "Tags": [ { "Key": "string", "Value": "string" } ] } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. AccessTier Specifies the access tier that you want your objects or files transferred into. This only applies when using the location as a transfer destination. For more information, see Access tiers. Type: String Valid Values: HOT | COOL | ARCHIVE CreateLocationAzureBlob 481 AWS DataSync Required: No AgentArns User Guide Specifies the Amazon Resource Name (ARN) of the DataSync agent that can connect with your Azure Blob Storage container. You can specify more than one agent. 
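The ContainerUrl parameter described below must point at the container itself (the storage account endpoint plus the container name) and match a documented pattern. A sketch validating a URL client-side; the account and container names are made up, and the helper is ours, not part of any SDK:

```python
import re

# ContainerUrl pattern from the CreateLocationAzureBlob request constraints
CONTAINER_URL = re.compile(
    r"^https:\/\/[A-Za-z0-9]((\.|-+)?[A-Za-z0-9]){0,252}"
    r"\/[a-z0-9](-?[a-z0-9]){2,62}$"
)

def is_container_url(url):
    """Return True if the URL is a well-formed Azure Blob Storage container URL."""
    return CONTAINER_URL.match(url) is not None

# Hypothetical storage account and container
print(is_container_url("https://myaccount.blob.core.windows.net/my-container"))  # True
print(is_container_url("http://myaccount.blob.core.windows.net/my-container"))   # False: not https
```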
For more information, see Using multiple agents for your transfer. Type: Array of strings Array Members: Minimum number of 1 item. Maximum number of 4 items. Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:agent/agent-[0-9a-z]{17}$ Required: Yes AuthenticationType Specifies the authentication method DataSync uses to access your Azure Blob Storage. DataSync can access blob storage using a shared access signature (SAS). Type: String Valid Values: SAS Required: Yes BlobType Specifies the type of blob that you want your objects or files to be when transferring them into Azure Blob Storage. Currently, DataSync only supports moving data into Azure Blob Storage as block blobs. For more information on blob types, see the Azure Blob Storage documentation. Type: String Valid Values: BLOCK Required: No ContainerUrl Specifies the URL of the Azure Blob Storage container involved in your transfer. CreateLocationAzureBlob 482 AWS DataSync Type: String Length Constraints: Maximum length of 325. User Guide Pattern: ^https:\/\/[A-Za-z0-9]((\.|-+)?[A-Za-z0-9]){0,252}\/[a-z0-9](-?[a- z0-9]){2,62}$ Required: Yes SasConfiguration Specifies the SAS configuration that allows DataSync to access your Azure Blob Storage. Type: AzureBlobSasConfiguration object Required: No Subdirectory Specifies path segments if you want to limit your transfer to a virtual directory in your container (for example, /my/images). Type: String Length Constraints: Maximum length of 1024. Pattern: ^[\p{L}\p{M}\p{Z}\p{S}\p{N}\p{P}\p{C}]*$ Required: No Tags Specifies labels that help you categorize, filter, and search for your AWS resources. We recommend creating at least a name tag for your transfer location.
Type: Array of TagListEntry objects Array Members: Minimum number of 0 items. Maximum number of 50 items. Required: No Response Syntax { CreateLocationAzureBlob 483 AWS DataSync "LocationArn": "string" } Response Elements User Guide If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. LocationArn The ARN of the Azure Blob Storage transfer location that you created. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ Errors For information about the errors that are common to all actions, see Common Errors. InternalException This exception is thrown when an error occurs in the AWS DataSync service. HTTP Status Code: 500 InvalidRequestException This exception is thrown when the client submits a malformed request.
HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface CreateLocationAzureBlob 484 User Guide AWS DataSync • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 CreateLocationAzureBlob 485 AWS DataSync CreateLocationEfs User Guide Creates a transfer location for an Amazon EFS file system. AWS DataSync can use this location as a source or destination for transferring data. Before you begin, make sure that you understand how DataSync accesses Amazon EFS file systems. Request Syntax { "AccessPointArn": "string", "Ec2Config": { "SecurityGroupArns": [ "string" ], "SubnetArn": "string" }, "EfsFilesystemArn": "string", "FileSystemAccessRoleArn": "string", "InTransitEncryption": "string", "Subdirectory": "string", "Tags": [ { "Key": "string", "Value": "string" } ] } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. AccessPointArn Specifies the Amazon Resource Name (ARN) of the access point that DataSync uses to mount your Amazon EFS file system. For more information, see Accessing restricted file systems. Type: String Length Constraints: Maximum length of 128. CreateLocationEfs 486 AWS DataSync User Guide Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):elasticfilesystem: [a-z\-0-9]+:[0-9]{12}:access-point/fsap-[0-9a-f]{8,40}$ Required: No Ec2Config Specifies the subnet and security groups DataSync uses to connect to one of your Amazon EFS file system's mount targets. Type: Ec2Config object Required: Yes EfsFilesystemArn Specifies the ARN for your Amazon EFS file system. Type: String Length Constraints: Maximum length of 128. 
Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):elasticfilesystem: [a-z\-0-9]*:[0-9]{12}:file-system/fs-.*$ Required: Yes FileSystemAccessRoleArn Specifies an AWS Identity and Access Management (IAM) role that allows DataSync to access your Amazon EFS file system. For information on creating this role, see Creating a DataSync IAM role for file system access. Type: String Length Constraints: Maximum length of 2048. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):iam::[0-9] {12}:role/.*$ Required: No InTransitEncryption Specifies whether you want DataSync to use Transport Layer Security (TLS) 1.2 encryption when it transfers data to or from your Amazon EFS file system. CreateLocationEfs 487 AWS DataSync User Guide If you specify an access point using AccessPointArn or an IAM role using FileSystemAccessRoleArn, you must set this parameter to TLS1_2. Type: String Valid Values: NONE | TLS1_2 Required: No Subdirectory Specifies a mount path for your Amazon EFS file system. This is where DataSync reads or writes data on your file system (depending on if this is a source or destination location). By default, DataSync uses the root directory (or access point if you provide one by using AccessPointArn). You can also include subdirectories using forward slashes (for example, / path/to/folder). Type: String Length Constraints: Maximum length of 4096. Pattern: ^[a-zA-Z0-9_\-\+\./\(\)\p{Zs}]*$ Required: No Tags Specifies the key-value pair that represents a tag that you want to add to the resource. The value can be an empty string. This value helps you manage, filter, and search for your resources. We recommend that you create a name tag for your location. Type: Array of TagListEntry objects Array Members: Minimum number of 0 items. Maximum number of 50 items. Required: No Response Syntax { "LocationArn": "string" CreateLocationEfs 488 AWS DataSync } Response Elements User Guide If the action is successful, the service sends back an HTTP 200 response. 
The following data is returned in JSON format by the service. LocationArn The
Amazon Resource Name (ARN) of the Amazon EFS file system location that you create. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ Errors For information about the errors that are common to all actions, see Common Errors. InternalException This exception is thrown when an error occurs in the AWS DataSync service. HTTP Status Code: 500 InvalidRequestException This exception is thrown when the client submits a malformed request. HTTP Status Code: 400 Examples Sample Request The following example creates a location for an Amazon EFS file system. { CreateLocationEfs 489 AWS DataSync "Ec2Config": { User Guide "SubnetArn": "arn:aws:ec2:us-east-2:111222333444:subnet/subnet-1234567890abcdef1", "SecurityGroupArns": [ "arn:aws:ec2:us-east-2:111222333444:security-group/sg-1234567890abcdef2" ] }, "EfsFilesystemArn": "arn:aws:elasticfilesystem:us-east-2:111222333444:file-system/fs-021345abcdef6789", "Subdirectory": "/mount/path", "Tags": [{ "Key": "Name", "Value": "ElasticFileSystem-1" }] } Sample Request: Creating a location for a restricted Amazon EFS file system The following example creates a location for an Amazon EFS file system with restricted access.
In this kind of scenario, you might have to specify values for AccessPointArn, FileSystemAccessRoleArn, and InTransitEncryption in your request. { "AccessPointArn": "arn:aws:elasticfilesystem:us-east-2:111222333444:access-point/fsap-1234567890abcdef0", "Ec2Config": { "SubnetArn": "arn:aws:ec2:us-east-2:111222333444:subnet/subnet-1234567890abcdef1", "SecurityGroupArns": [ "arn:aws:ec2:us-east-2:111222333444:security-group/sg-1234567890abcdef2" ] }, "FileSystemAccessRoleArn": "arn:aws:iam::111222333444:role/AwsDataSyncFullAccessNew", "InTransitEncryption": "TLS1_2", "LocationArn": "arn:aws:datasync:us-east-2:111222333444:location/loc-abcdef01234567890", "LocationUri": "efs://us-east-2.fs-021345abcdef6789/", "Subdirectory": "/mount/path", "Tags": [{ "Key": "Name", "Value": "ElasticFileSystem-1" }] CreateLocationEfs 490 AWS DataSync } Sample Response User Guide A response returns the location ARN of the Amazon EFS file system. { "LocationArn": "arn:aws:datasync:us-east-2:111222333444:location/loc-12abcdef012345678" } See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 CreateLocationEfs 491 AWS DataSync User Guide CreateLocationFsxLustre Creates a transfer location for an Amazon FSx for Lustre file system. AWS DataSync can use this location as a source or destination for transferring data. Before you begin, make sure that you understand how DataSync accesses FSx for Lustre file systems. Request Syntax { "FsxFilesystemArn": "string", "SecurityGroupArns": [ "string" ], "Subdirectory": "string", "Tags": [ { "Key": "string", "Value": "string" } ] } Request Parameters For information about the parameters that are common to all actions, see Common Parameters.
The request accepts the following data in JSON format. FsxFilesystemArn Specifies the Amazon Resource Name (ARN) of the FSx for Lustre file system. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):fsx:[a-z\-0-9]*: [0-9]{12}:file-system/fs-.*$ Required: Yes CreateLocationFsxLustre 492 AWS DataSync SecurityGroupArns User Guide Specifies the Amazon Resource Names (ARNs) of up to five security groups that provide access to your FSx for Lustre file system. The security groups must be able to access the file system's ports. The file system must also allow access from the security groups. For information about file system access, see the Amazon FSx for Lustre User Guide. Type: Array of strings Array Members: Minimum number of 1 item. Maximum number of 5 items. Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):ec2:[a-z\-0-9]*: [0-9]{12}:security-group/sg-[a-f0-9]+$ Required: Yes Subdirectory Specifies a mount path for your FSx for Lustre file system. The path can include subdirectories. When the location is used as a source, DataSync reads data from the mount path. When the location is used as a destination, DataSync writes data to the mount path. If you don't include this parameter, DataSync uses the file system's root directory (/). Type: String Length Constraints: Maximum length of 4096. Pattern: ^[a-zA-Z0-9_\-\+\./\(\)\$\p{Zs}]+$ Required: No Tags Specifies labels that help you categorize, filter, and search for your AWS resources. We recommend creating at least a name tag for your location. Type: Array of TagListEntry objects CreateLocationFsxLustre 493 AWS DataSync User Guide Array Members: Minimum number of 0 items. Maximum number of 50 items. Required: No Response Syntax { "LocationArn": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. 
LocationArn The Amazon Resource Name (ARN) of the FSx for Lustre file system location that you created. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ Errors For information about
the errors that are common to all actions, see Common Errors. InternalException This exception is thrown when an error occurs in the AWS DataSync service. HTTP Status Code: 500 InvalidRequestException This exception is thrown when the client submits a malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 CreateLocationFsxOntap Creates a transfer location for an Amazon FSx for NetApp ONTAP file system. AWS DataSync can use this location as a source or destination for transferring data. Before you begin, make sure that you understand how DataSync accesses FSx for ONTAP file systems.
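Before calling this operation, it can help to assemble and sanity-check the request body locally. The following is a minimal, unofficial sketch in Python that builds a CreateLocationFsxOntap request body for the NFS protocol and pre-validates the ARNs against the patterns documented in this section. The "AUTOMATIC" NFS version, the helper name, and all ARN values in the example are illustrative assumptions, not values taken from this guide.

```python
import re

# ARN patterns as documented for CreateLocationFsxOntap
# (StorageVirtualMachineArn and SecurityGroupArns).
SVM_ARN = re.compile(
    r"^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):fsx:[a-z\-0-9]+:"
    r"[0-9]{12}:storage-virtual-machine/fs-[0-9a-f]+/svm-[0-9a-f]{17,}$"
)
SG_ARN = re.compile(
    r"^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):ec2:[a-z\-0-9]*:"
    r"[0-9]{12}:security-group/sg-[a-f0-9]+$"
)

def build_fsx_ontap_request(svm_arn, security_group_arns, subdirectory=None):
    """Assemble a CreateLocationFsxOntap request body using the NFS protocol.

    Checks the documented ARN patterns and the 1-5 item limit on
    SecurityGroupArns before any API call is made.
    """
    if not SVM_ARN.match(svm_arn):
        raise ValueError("StorageVirtualMachineArn doesn't match the documented pattern")
    if not 1 <= len(security_group_arns) <= 5:
        raise ValueError("SecurityGroupArns must contain 1-5 items")
    if any(not SG_ARN.match(arn) for arn in security_group_arns):
        raise ValueError("a security group ARN doesn't match the documented pattern")
    request = {
        "Protocol": {"NFS": {"MountOptions": {"Version": "AUTOMATIC"}}},
        "SecurityGroupArns": security_group_arns,
        "StorageVirtualMachineArn": svm_arn,
    }
    if subdirectory is not None:
        request["Subdirectory"] = subdirectory  # e.g. a junction path like "/vol1"
    return request
```

The resulting dictionary has the same shape as the request syntax shown below and could be passed to an SDK client; failing fast on a malformed ARN avoids a round trip that would end in InvalidRequestException.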
Request Syntax { "Protocol": { "NFS": { "MountOptions": { "Version": "string" } }, "SMB": { "Domain": "string", "MountOptions": { "Version": "string" }, "Password": "string", "User": "string" } }, "SecurityGroupArns": [ "string" ], "StorageVirtualMachineArn": "string", "Subdirectory": "string", "Tags": [ { "Key": "string", "Value": "string" } ] } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. CreateLocationFsxOntap 496 AWS DataSync Protocol User Guide Specifies the data transfer protocol that AWS DataSync uses to access your Amazon FSx file system. Type: FsxProtocol object Required: Yes SecurityGroupArns Specifies the Amazon EC2 security groups that provide access to your file system's preferred subnet. The security groups must allow outbound traffic on the following ports (depending on the protocol you use): • Network File System (NFS): TCP ports 111, 635, and 2049 • Server Message Block (SMB): TCP port 445 Your file system's security groups must also allow inbound traffic on the same ports. Type: Array of strings Array Members: Minimum number of 1 item. Maximum number of 5 items. Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):ec2:[a-z\-0-9]*: [0-9]{12}:security-group/sg-[a-f0-9]+$ Required: Yes StorageVirtualMachineArn Specifies the ARN of the storage virtual machine (SVM) in your file system where you want to copy data to or from. Type: String Length Constraints: Maximum length of 162. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):fsx:[a-z\-0-9]+: [0-9]{12}:storage-virtual-machine/fs-[0-9a-f]+/svm-[0-9a-f]{17,}$ CreateLocationFsxOntap 497 AWS DataSync Required: Yes Subdirectory User Guide Specifies a path to the file share in the SVM where you want to transfer data to or from. 
You can specify a junction path (also known as a mount point), qtree path (for NFS file shares), or share name (for SMB file shares). For example, your mount path might be /vol1, /vol1/ tree1, or /share1. Note Don't specify a junction path in the SVM's root volume. For more information, see Managing FSx for ONTAP storage virtual machines in the Amazon FSx for NetApp ONTAP User Guide. Type: String Length Constraints: Maximum length of 255. Pattern: ^[^\u0000\u0085\u2028\u2029\r\n]{1,255}$ Required: No Tags Specifies labels that help you categorize, filter, and search for your AWS resources. We recommend creating at least a name tag for your location. Type: Array of TagListEntry objects Array Members: Minimum number of 0 items. Maximum number of 50 items. Required: No Response Syntax { "LocationArn": "string" } CreateLocationFsxOntap 498 AWS DataSync Response Elements User Guide If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. LocationArn Specifies the ARN of the FSx for ONTAP file system location that you create. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ Errors For information about the errors that are common to all actions, see Common Errors. InternalException This exception is thrown when an error occurs in the AWS DataSync service. HTTP Status Code: 500 InvalidRequestException This exception is thrown when the client submits a malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++
• AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 CreateLocationFsxOpenZfs Creates a transfer location for an Amazon FSx for OpenZFS file system. AWS DataSync can use this location as a source or destination for transferring data. Before you begin, make sure that you understand how DataSync accesses FSx for OpenZFS file systems. Note Request parameters related to SMB aren't supported with the CreateLocationFsxOpenZfs operation. Request Syntax { "FsxFilesystemArn": "string", "Protocol": { "NFS": { "MountOptions": { "Version": "string" } }, "SMB": { "Domain": "string", "MountOptions": { "Version": "string" }, "Password": "string", "User": "string" } }, "SecurityGroupArns": [ "string" ], "Subdirectory": "string", "Tags": [ { "Key": "string", "Value": "string" } ] } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. FsxFilesystemArn The Amazon Resource Name (ARN) of the FSx for OpenZFS file system.
Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):fsx:[a-z\-0-9]*: [0-9]{12}:file-system/fs-.*$ Required: Yes Protocol The type of protocol that AWS DataSync uses to access your file system. Type: FsxProtocol object Required: Yes SecurityGroupArns The ARNs of the security groups that are used to configure the FSx for OpenZFS file system. Type: Array of strings Array Members: Minimum number of 1 item. Maximum number of 5 items. Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):ec2:[a-z\-0-9]*: [0-9]{12}:security-group/sg-[a-f0-9]+$ Required: Yes CreateLocationFsxOpenZfs 502 AWS DataSync Subdirectory User Guide A subdirectory in the location's path that must begin with /fsx. DataSync uses this subdirectory to read or write data (depending on whether the file system is a source or destination location). Type: String Length Constraints: Maximum length of 4096. Pattern: ^[^\u0000\u0085\u2028\u2029\r\n]{1,4096}$ Required: No Tags The key-value pair that represents a tag that you want to add to the resource. The value can be an empty string. This value helps you manage, filter, and search for your resources. We recommend that you create a name tag for your location. Type: Array of TagListEntry objects Array Members: Minimum number of 0 items. Maximum number of 50 items. Required: No Response Syntax { "LocationArn": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. LocationArn The ARN of the FSx for OpenZFS file system location that you created. CreateLocationFsxOpenZfs 503 AWS DataSync Type: String Length Constraints: Maximum length of 128. 
User Guide Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ Errors For information about the errors that are common to all actions, see Common Errors. InternalException This exception is thrown when an error occurs in the AWS DataSync service. HTTP Status Code: 500 InvalidRequestException This exception is thrown when the client submits a malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 CreateLocationFsxOpenZfs 504 AWS DataSync User Guide CreateLocationFsxOpenZfs 505 AWS DataSync User Guide CreateLocationFsxWindows Creates a transfer location for an Amazon FSx for Windows File Server file system. AWS DataSync can use this location as a source or destination for transferring data. Before you begin, make sure that you understand how DataSync accesses FSx for Windows File Server file systems. Request Syntax { "Domain": "string", "FsxFilesystemArn": "string", "Password": "string", "SecurityGroupArns": [ "string" ], "Subdirectory": "string", "Tags": [ { "Key": "string", "Value": "string" } ], "User": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. Domain Specifies the name of the Windows domain that the FSx for Windows File
Server file system belongs to. If you have multiple Active Directory domains in your environment, configuring this parameter makes sure that DataSync connects to the right file system. Type: String Length Constraints: Maximum length of 253. Pattern: ^[A-Za-z0-9]((\.|-+)?[A-Za-z0-9]){0,252}$ Required: No FsxFilesystemArn Specifies the Amazon Resource Name (ARN) for the FSx for Windows File Server file system. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):fsx:[a-z\-0-9]*:[0-9]{12}:file-system/fs-.*$ Required: Yes Password Specifies the password of the user with the permissions to mount and access the files, folders, and file metadata in your FSx for Windows File Server file system. Type: String Length Constraints: Maximum length of 104. Pattern: ^.{0,104}$ Required: Yes SecurityGroupArns Specifies the ARNs of the Amazon EC2 security groups that provide access to your file system's preferred subnet. The security groups that you specify must be able to communicate with your file system's security groups. For information about configuring security groups for file system access, see the Amazon FSx for Windows File Server User Guide.
Note If you choose a security group that doesn't allow connections from within itself, do one of the following: • Configure the security group to allow it to communicate within itself. CreateLocationFsxWindows 507 AWS DataSync User Guide • Choose a different security group that can communicate with the mount target's security group. Type: Array of strings Array Members: Minimum number of 1 item. Maximum number of 5 items. Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):ec2:[a-z\-0-9]*: [0-9]{12}:security-group/sg-[a-f0-9]+$ Required: Yes Subdirectory Specifies a mount path for your file system using forward slashes. This is where DataSync reads or writes data (depending on if this is a source or destination location). Type: String Length Constraints: Maximum length of 4096. Pattern: ^[a-zA-Z0-9_\-\+\./\(\)\$\p{Zs}]+$ Required: No Tags Specifies labels that help you categorize, filter, and search for your AWS resources. We recommend creating at least a name tag for your location. Type: Array of TagListEntry objects Array Members: Minimum number of 0 items. Maximum number of 50 items. Required: No User Specifies the user with the permissions to mount and access the files, folders, and file metadata in your FSx for Windows File Server file system. CreateLocationFsxWindows 508 AWS DataSync User Guide For information about choosing a user with the right level of access for your transfer, see required permissions for FSx for Windows File Server locations. Type: String Length Constraints: Maximum length of 104. Pattern: ^[^\x22\x5B\x5D/\\:;|=,+*?\x3C\x3E]{1,104}$ Required: Yes Response Syntax { "LocationArn": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. LocationArn The ARN of the FSx for Windows File Server file system location you created. Type: String Length Constraints: Maximum length of 128. 
Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ Errors For information about the errors that are common to all actions, see Common Errors. InternalException This exception is thrown when an error occurs in the AWS DataSync service. CreateLocationFsxWindows 509 AWS DataSync HTTP Status Code: 500 InvalidRequestException User Guide This exception is thrown when the client submits a malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 CreateLocationFsxWindows 510 AWS DataSync CreateLocationHdfs User Guide Creates a transfer location for a Hadoop Distributed File System (HDFS). AWS DataSync can use this location as a source or destination for transferring data. Before you begin, make sure that you understand how DataSync accesses HDFS clusters. Request Syntax { "AgentArns": [ "string" ], "AuthenticationType": "string", "BlockSize": number, "KerberosKeytab": blob, "KerberosKrb5Conf": blob, "KerberosPrincipal": "string", "KmsKeyProviderUri": "string", "NameNodes": [ { "Hostname": "string", "Port": number } ], "QopConfiguration": { "DataTransferProtection": "string", "RpcProtection": "string" }, "ReplicationFactor": number, "SimpleUser": "string", "Subdirectory": "string", "Tags": [ { "Key": "string", "Value": "string" } ] } Request Parameters For information about the parameters that are common to
all actions, see Common Parameters. The request accepts the following data in JSON format. AgentArns The Amazon Resource Names (ARNs) of the DataSync agents that can connect to your HDFS cluster. Type: Array of strings Array Members: Minimum number of 1 item. Maximum number of 4 items. Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z\-0-9]+:[0-9]{12}:agent/agent-[0-9a-z]{17}$ Required: Yes AuthenticationType The type of authentication used to determine the identity of the user. Type: String Valid Values: SIMPLE | KERBEROS Required: Yes BlockSize The size of data blocks to write into the HDFS cluster. The block size must be a multiple of 512 bytes. The default block size is 128 mebibytes (MiB). Type: Integer Valid Range: Minimum value of 1048576. Maximum value of 1073741824. Required: No KerberosKeytab The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys. You can load the keytab from a file by providing the file's address. Note If KERBEROS is specified for AuthenticationType, this parameter is required.
CreateLocationHdfs 512 AWS DataSync User Guide Type: Base64-encoded binary data object Length Constraints: Maximum length of 65536. Required: No KerberosKrb5Conf The krb5.conf file that contains the Kerberos configuration information. You can load the krb5.conf file by providing the file's address. If you're using the AWS CLI, it performs the base64 encoding for you. Otherwise, provide the base64-encoded text. Note If KERBEROS is specified for AuthenticationType, this parameter is required. Type: Base64-encoded binary data object Length Constraints: Maximum length of 131072. Required: No KerberosPrincipal The Kerberos principal with access to the files and folders on the HDFS cluster. Note If KERBEROS is specified for AuthenticationType, this parameter is required. Type: String Length Constraints: Minimum length of 1. Maximum length of 256. Pattern: ^.+$ Required: No KmsKeyProviderUri The URI of the HDFS cluster's Key Management Server (KMS). CreateLocationHdfs 513 AWS DataSync Type: String User Guide Length Constraints: Minimum length of 1. Maximum length of 255. Pattern: ^kms:\/\/http[s]?@(([a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za- z0-9\-]*[A-Za-z0-9])(;(([a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z0-9\-]*[A- Za-z0-9]))*:[0-9]{1,5}\/kms$ Required: No NameNodes The NameNode that manages the HDFS namespace. The NameNode performs operations such as opening, closing, and renaming files and directories. The NameNode contains the information to map blocks of data to the DataNodes. You can use only one NameNode. Type: Array of HdfsNameNode objects Array Members: Minimum number of 1 item. Required: Yes QopConfiguration The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster. If QopConfiguration isn't specified, RpcProtection and DataTransferProtection default to PRIVACY. 
If you set RpcProtection or DataTransferProtection, the other parameter assumes the same value. Type: QopConfiguration object Required: No ReplicationFactor The number of DataNodes to replicate the data to when writing to the HDFS cluster. By default, data is replicated to three DataNodes. Type: Integer Valid Range: Minimum value of 1. Maximum value of 512. Required: No CreateLocationHdfs 514 AWS DataSync SimpleUser User Guide The user name used to identify the client on the host operating system. Note If SIMPLE is specified for AuthenticationType, this parameter is required. Type: String Length Constraints: Minimum length of 1. Maximum length of 256. Pattern: ^[_.A-Za-z0-9][-_.A-Za-z0-9]*$ Required: No Subdirectory A subdirectory in the HDFS cluster. This subdirectory is used to read data from or write data to the HDFS cluster. If the subdirectory isn't specified, it will default to /. Type: String Length Constraints: Maximum length of 4096. Pattern: ^[a-zA-Z0-9_\-\+\./\(\)\$\p{Zs}]+$ Required: No Tags The key-value pair that represents the tag that you want to add to the location. The value can be an empty string. We recommend using tags to name your resources. Type: Array of TagListEntry objects Array Members: Minimum number of 0 items. Maximum number of 50 items. Required: No Response Syntax { CreateLocationHdfs 515 AWS DataSync "LocationArn": "string" } Response Elements User Guide If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. LocationArn The ARN of the source HDFS cluster location that you create. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ Errors For information about the errors that are common to all actions, see Common Errors. InternalException This exception is thrown
when an error occurs in the AWS DataSync service. HTTP Status Code: 500 InvalidRequestException This exception is thrown when the client submits a malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 CreateLocationNfs Creates a transfer location for a Network File System (NFS) file server. AWS DataSync can use this location as a source or destination for transferring data. Before you begin, make sure that you understand how DataSync accesses NFS file servers. Request Syntax { "MountOptions": { "Version": "string" }, "OnPremConfig": { "AgentArns": [ "string" ] }, "ServerHostname": "string", "Subdirectory": "string", "Tags": [ { "Key": "string", "Value": "string" } ] } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format.
MountOptions Specifies the options that DataSync can use to mount your NFS file server. Type: NfsMountOptions object Required: No OnPremConfig Specifies the Amazon Resource Name (ARN) of the DataSync agent that can connect to your NFS file server. CreateLocationNfs 518 AWS DataSync User Guide You can specify more than one agent. For more information, see Using multiple DataSync agents. Type: OnPremConfig object Required: Yes ServerHostname Specifies the DNS name or IP version 4 address of the NFS file server that your DataSync agent connects to. Type: String Length Constraints: Maximum length of 255. Pattern: ^(([a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z0-9\-]*[A-Za-z0-9])$ Required: Yes Subdirectory Specifies the export path in your NFS file server that you want DataSync to mount. This path (or a subdirectory of the path) is where DataSync transfers data to or from. For information on configuring an export for DataSync, see Accessing NFS file servers. Type: String Length Constraints: Maximum length of 4096. Pattern: ^[a-zA-Z0-9_\-\+\./\(\)\p{Zs}]+$ Required: Yes Tags Specifies labels that help you categorize, filter, and search for your AWS resources. We recommend creating at least a name tag for your location. Type: Array of TagListEntry objects Array Members: Minimum number of 0 items. Maximum number of 50 items. CreateLocationNfs 519 User Guide AWS DataSync Required: No Response Syntax { "LocationArn": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. LocationArn The ARN of the transfer location that you created for your NFS file server. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ Errors For information about the errors that are common to all actions, see Common Errors. 
InternalException This exception is thrown when an error occurs in the AWS DataSync service. HTTP Status Code: 500 InvalidRequestException This exception is thrown when the client submits a malformed request. HTTP Status Code: 400 Examples Example The following example creates a DataSync transfer location for an NFS file server. Sample Request { "MountOptions": { "Version": "NFS4_0" }, "OnPremConfig": { "AgentArns": [ "arn:aws:datasync:us-east-2:111222333444:agent/agent-0b0addbeef44b3nfs" ] }, "ServerHostname": "[email protected]", "Subdirectory": "/MyFolder", "Tags": [ { "Key": "Name", "Value": "FileSystem-1" } ] } Example The response returns the ARN of the NFS location. Sample Response { "LocationArn": "arn:aws:datasync:us-east-2:111222333444:location/loc-07db7abfc326c50aa" } See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 CreateLocationObjectStorage Creates a transfer location for an object storage system. AWS DataSync can use this location as a source or destination for transferring data.
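For orientation, here is a minimal, unofficial Python sketch of assembling a CreateLocationObjectStorage request body and enforcing two of the documented constraints (bucket-name length and agent-list size) up front. The hostname, bucket name, agent ARN, and helper name are illustrative placeholders, not values from this guide.

```python
def build_object_storage_request(server_hostname, bucket_name, agent_arns,
                                 access_key=None, secret_key=None,
                                 server_protocol="HTTPS", server_port=443):
    """Assemble a CreateLocationObjectStorage request body.

    Enforces the documented BucketName length (3-63 characters) and
    AgentArns size (1-4 items) before any API call is made.
    """
    if not 3 <= len(bucket_name) <= 63:
        raise ValueError("BucketName must be 3-63 characters")
    if not 1 <= len(agent_arns) <= 4:
        raise ValueError("AgentArns must contain 1-4 items")
    request = {
        "AgentArns": agent_arns,
        "BucketName": bucket_name,
        "ServerHostname": server_hostname,
        "ServerPort": server_port,
        "ServerProtocol": server_protocol,
    }
    # AccessKey/SecretKey are only needed when the server requires credentials.
    if access_key is not None:
        request["AccessKey"] = access_key
    if secret_key is not None:
        request["SecretKey"] = secret_key
    return request
```

The defaults (HTTPS on port 443) mirror the common case described in the parameter reference; omit or change them to match your object storage server.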
Before you begin, make sure that you understand the prerequisites for DataSync to work with object storage systems. Request Syntax { "AccessKey": "string", "AgentArns": [ "string" ], "BucketName": "string", "SecretKey": "string", "ServerCertificate": blob, "ServerHostname": "string", "ServerPort": number, "ServerProtocol": "string", "Subdirectory": "string", "Tags": [ { "Key": "string", "Value": "string" } ] } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. AccessKey Specifies the access key (for example, a user name) if credentials are required to authenticate with the object storage server. Type: String Length Constraints: Minimum length of 0. Maximum length of 200. Pattern: ^.*$ Required: No AgentArns Specifies the Amazon Resource Names (ARNs) of the DataSync agents that can connect with your object storage system. Type: Array of strings Array Members: Minimum number of 1 item. Maximum number of 4 items. Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z\-0-9]+:[0-9]{12}:agent/agent-[0-9a-z]{17}$ Required: Yes BucketName Specifies the name of the object storage bucket involved in the transfer. Type: String Length Constraints: Minimum length of 3. Maximum length of 63.
Pattern: ^[a-zA-Z0-9_\-\+\.\(\)\$\p{Zs}]+$
Required: Yes

SecretKey

Specifies the secret key (for example, a password) if credentials are required to authenticate with the object storage server.

Type: String
Length Constraints: Minimum length of 0. Maximum length of 200.
Pattern: ^.*$
Required: No

ServerCertificate

Specifies a certificate chain for DataSync to authenticate with your object storage system if the system uses a private or self-signed certificate authority (CA). You must specify a single .pem file with a full certificate chain (for example, file:///home/user/.ssh/object_storage_certificates.pem).

The certificate chain might include:

• The object storage system's certificate
• All intermediate certificates (if there are any)
• The root certificate of the signing CA

You can concatenate your certificates into a .pem file (which can be up to 32768 bytes before base64 encoding). The following example cat command creates an object_storage_certificates.pem file that includes three certificates:

cat object_server_certificate.pem intermediate_certificate.pem ca_root_certificate.pem > object_storage_certificates.pem

To use this parameter, configure ServerProtocol to HTTPS.

Type: Base64-encoded binary data object
Length Constraints: Maximum length of 32768.
Required: No

ServerHostname

Specifies the domain name or IP version 4 (IPv4) address of the object storage server that your DataSync agent connects to.

Type: String
Length Constraints: Maximum length of 255.
Pattern: ^(([a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z0-9\-]*[A-Za-z0-9])$
Required: Yes

ServerPort

Specifies the port that your object storage server accepts inbound network traffic on (for example, port 443).

Type: Integer
Valid Range: Minimum value of 1. Maximum value of 65536.
Required: No

ServerProtocol

Specifies the protocol that your object storage server uses to communicate.
Type: String
Valid Values: HTTPS | HTTP
Required: No

Subdirectory

Specifies the object prefix for your object storage server. If this is a source location, DataSync only copies objects with this prefix. If this is a destination location, DataSync writes all objects with this prefix.

Type: String
Length Constraints: Maximum length of 4096.
Pattern: ^[a-zA-Z0-9_\-\+\./\(\)\p{Zs}]*$
Required: No

Tags

Specifies the key-value pair that represents a tag that you want to add to the resource. Tags can help you manage, filter, and search for your resources. We recommend creating a name tag for your location.

Type: Array of TagListEntry objects
Array Members: Minimum number of 0 items. Maximum number of 50 items.
Required: No

Response Syntax

{
   "LocationArn": "string"
}

Response Elements

If the action is successful, the service sends back an HTTP 200 response.

The following data is returned in JSON format by the service.

LocationArn

Specifies the ARN of the object storage system location that you create.

Type: String
Length Constraints: Maximum length of 128.
Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z\-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$

Errors

For information about the errors that are common to all actions, see Common Errors.

InternalException

This exception is thrown when an error occurs in the AWS DataSync service.

HTTP Status Code: 500

InvalidRequestException

This exception is thrown when the client submits a malformed request.

HTTP Status Code: 400

See Also

For more information about using this API in one of the language-specific AWS SDKs, see the following:

• AWS Command Line Interface
• AWS SDK for .NET
• AWS SDK for C++
• AWS SDK for Go v2
• AWS SDK for Java V2
• AWS SDK for JavaScript V3
• AWS SDK for Kotlin
• AWS SDK for PHP V3
• AWS SDK for Python
• AWS SDK for Ruby V3

CreateLocationS3

Creates a transfer location for an Amazon S3 bucket. AWS DataSync can use this location as a source or destination for transferring data.

Important

Before you begin, make sure that you read the following topics:

• Storage class considerations with Amazon S3 locations
• Evaluating S3 request costs when using DataSync

For more information, see Configuring transfers with Amazon S3.

Request Syntax

{
   "AgentArns": [ "string" ],
   "S3BucketArn": "string",
   "S3Config": {
      "BucketAccessRoleArn": "string"
   },
   "S3StorageClass": "string",
   "Subdirectory": "string",
   "Tags": [
      {
         "Key": "string",
         "Value": "string"
      }
   ]
}

Request Parameters

For information about the parameters that are common to all actions, see Common Parameters.

The request accepts the following data in JSON format.

AgentArns

(Amazon S3 on Outposts only) Specifies the Amazon Resource Name (ARN) of the DataSync agent on your Outpost.
For more information, see Deploy your DataSync agent on AWS Outposts.

Type: Array of strings
Array Members: Minimum number of 1 item. Maximum number of 4 items.
Length Constraints: Maximum length of 128.
Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z\-0-9]+:[0-9]{12}:agent/agent-[0-9a-z]{17}$
Required: No

S3BucketArn

Specifies the ARN of the S3 bucket that you want to use as a location. (When creating your DataSync task later, you specify whether this location is a transfer source or destination.)

If your S3 bucket is located on an AWS Outposts resource, you must specify an Amazon S3 access point. For more information, see Managing data access with Amazon S3 access points in the Amazon S3 User Guide.

Type: String
Length Constraints: Maximum length of 268.
Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):s3:[a-z\-0-9]*:[0-9]{12}:accesspoint[/:][a-zA-Z0-9\-.]{1,63}$|^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):s3-outposts:[a-z\-0-9]+:[0-9]{12}:outpost[/:][a-zA-Z0-9\-]{1,63}[/:]accesspoint[/:][a-zA-Z0-9\-]{1,63}$|^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):s3:::[a-zA-Z0-9.\-_]{1,255}$
Required: Yes

S3Config

Specifies the Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that DataSync uses to access your S3 bucket.

For more information, see Providing DataSync access to S3 buckets.

Type: S3Config object
Required: Yes

S3StorageClass

Specifies the storage class that you want your objects to use when Amazon S3 is a transfer destination.

For buckets in AWS Regions, the storage class defaults to STANDARD. For buckets on AWS Outposts, the storage class defaults to OUTPOSTS.

For more information, see Storage class considerations with Amazon S3 transfers.
Type: String
Valid Values: STANDARD | STANDARD_IA | ONEZONE_IA | INTELLIGENT_TIERING | GLACIER | DEEP_ARCHIVE | OUTPOSTS | GLACIER_INSTANT_RETRIEVAL
Required: No

Subdirectory

Specifies a prefix in the S3 bucket that DataSync reads from or writes to (depending on whether the bucket is a source or destination location).

Note

DataSync can't transfer objects with a prefix that begins with a slash (/) or includes //, /./, or /../ patterns. For example:

• /photos
• photos//2006/January
• photos/./2006/February
• photos/../2006/March

Type: String
Length Constraints: Maximum length of 4096.
Pattern: ^[a-zA-Z0-9_\-\+\./\(\)\p{Zs}]*$
Required: No

Tags

Specifies labels that help you categorize, filter, and search for your AWS resources. We recommend creating at least a name tag for your transfer location.

Type: Array of TagListEntry objects
Array Members: Minimum number of 0 items. Maximum number of 50 items.
Required: No

Response Syntax

{
   "LocationArn": "string"
}

Response Elements

If the action is successful, the service sends back an HTTP 200 response.

The following data is returned in JSON format by the service.

LocationArn

The ARN of the S3 location that you created.

Type: String
Length Constraints: Maximum length of 128.
Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z\-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$

Errors

For information about the errors that are common to all actions, see Common Errors.

InternalException

This exception is thrown when an error occurs in the AWS DataSync service.

HTTP Status Code: 500

InvalidRequestException

This exception is thrown when the client submits a malformed request.

HTTP Status Code: 400

Examples

Step 1. Allow DataSync to assume the IAM role required to write to the bucket

The following example shows the simplest policy that grants the required permissions for AWS DataSync to access a destination
Amazon S3 bucket, followed by an IAM role to which the create-location-s3-iam-role policy has been attached.

{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Principal": {
            "Service": "datasync.amazonaws.com"
         },
         "Action": "sts:AssumeRole"
      }
   ]
}

{
   "Role": {
      "Path": "/",
      "RoleName": "amzn-s3-demo-bucket-access-role",
      "RoleId": "role-id",
      "Arn": "arn:aws:iam::account-id:role/amzn-s3-demo-bucket-access-role",
      "CreateDate": "2018-07-27T02:49:23.117Z",
      "AssumeRolePolicyDocument": {
         "Version": "2012-10-17",
         "Statement": [
            {
               "Effect": "Allow",
               "Principal": {
                  "Service": "datasync.amazonaws.com"
               },
               "Action": "sts:AssumeRole"
            }
         ]
      }
   }
}

Step 2. Allow the created IAM role to write to the bucket

Attach a policy that has sufficient permissions to access the bucket to the role. An example of such a policy is the AWSDataSyncFullAccess managed policy.

For more information, see AWSDataSyncFullAccess in the IAM console. You don't need to create this policy. It's managed by AWS, so all that you need to do is specify its ARN in the attach-role-policy command.

IAM_POLICY_ARN='arn:aws:iam::aws:policy/AWSDataSyncFullAccess'

Step 3.
Create an endpoint for an Amazon S3 bucket

The following example creates an endpoint for an Amazon S3 bucket. When the S3 endpoint is created, a response similar to the second example following returns the Amazon Resource Name (ARN) for the new Amazon S3 location.

Sample Request

{
   "S3BucketArn": "arn:aws:s3:::amzn-s3-demo-bucket",
   "S3Config": {
      "BucketAccessRoleArn": "arn:aws:iam::111222333444:role/amzn-s3-demo-bucket-access-role"
   },
   "S3StorageClass": "STANDARD",
   "Subdirectory": "/MyFolder",
   "Tags": [
      {
         "Key": "Name",
         "Value": "s3Bucket-1"
      }
   ]
}

Sample Response

{
   "LocationArn": "arn:aws:datasync:us-east-2:111222333444:location/loc-07db7abfc326c50s3"
}

See Also

For more information about using this API in one of the language-specific AWS SDKs, see the following:

• AWS Command Line Interface
• AWS SDK for .NET
• AWS SDK for C++
• AWS SDK for Go v2
• AWS SDK for Java V2
• AWS SDK for JavaScript V3
• AWS SDK for Kotlin
• AWS SDK for PHP V3
• AWS SDK for Python
• AWS SDK for Ruby V3

CreateLocationSmb

Creates a transfer location for a Server Message Block (SMB) file server. AWS DataSync can use this location as a source or destination for transferring data.

Before you begin, make sure that you understand how DataSync accesses SMB file servers. For more information, see Providing DataSync access to SMB file servers.

Request Syntax

{
   "AgentArns": [ "string" ],
   "AuthenticationType": "string",
   "DnsIpAddresses": [ "string" ],
   "Domain": "string",
   "KerberosKeytab": blob,
   "KerberosKrb5Conf": blob,
   "KerberosPrincipal": "string",
   "MountOptions": {
      "Version": "string"
   },
   "Password": "string",
   "ServerHostname": "string",
   "Subdirectory": "string",
   "Tags": [
      {
         "Key": "string",
         "Value": "string"
      }
   ],
   "User": "string"
}

Request Parameters

For information about the parameters that are common to all actions, see Common Parameters.

The request accepts the following data in JSON format.
AgentArns

Specifies the DataSync agent (or agents) that can connect to your SMB file server. You specify an agent by using its Amazon Resource Name (ARN).

Type: Array of strings
Array Members: Minimum number of 1 item. Maximum number of 4 items.
Length Constraints: Maximum length of 128.
Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z\-0-9]+:[0-9]{12}:agent/agent-[0-9a-z]{17}$
Required: Yes

AuthenticationType

Specifies the authentication protocol that DataSync uses to connect to your SMB file server. DataSync supports NTLM (default) and KERBEROS authentication.

For more information, see Providing DataSync access to SMB file servers.

Type: String
Valid Values: NTLM | KERBEROS
Required: No

DnsIpAddresses

Specifies the IPv4 addresses for the DNS servers that your SMB file server belongs to. This parameter applies only if AuthenticationType is set to KERBEROS.

If you have multiple domains in your environment, configuring this parameter makes sure that DataSync connects to the right SMB file server.

Type: Array of strings
Array Members: Maximum number of 2 items.
Length Constraints: Minimum length of 7. Maximum length of 15.
Pattern: \A(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}\z
Required: No

Domain

Specifies the Windows domain name that your SMB file server belongs to. This parameter applies only if AuthenticationType is set to NTLM. If you have multiple domains in your environment, configuring this parameter makes sure that DataSync connects to the right file server.

Type: String
Length Constraints: Maximum length of 253.
Pattern: ^[A-Za-z0-9]((\.|-+)?[A-Za-z0-9]){0,252}$
Required: No

KerberosKeytab

Specifies your Kerberos key table (keytab) file, which includes mappings between your Kerberos principal and encryption keys. To avoid task execution errors, make sure that the Kerberos principal that you use
to create the keytab file matches exactly what you specify for KerberosPrincipal.

Type: Base64-encoded binary data object
Length Constraints: Maximum length of 65536.
Required: No

KerberosKrb5Conf

Specifies a Kerberos configuration file (krb5.conf) that defines your Kerberos realm configuration. The file must be base64 encoded. If you're using the AWS CLI, the encoding is done for you.

Type: Base64-encoded binary data object
Length Constraints: Maximum length of 131072.
Required: No

KerberosPrincipal

Specifies a Kerberos principal, which is an identity in your Kerberos realm that has permission to access the files, folders, and file metadata in your SMB file server.

A Kerberos principal might look like HOST/[email protected].

Principal names are case sensitive. Your DataSync task execution will fail if the principal that you specify for this parameter doesn't exactly match the principal that you use to create the keytab file.

Type: String
Length Constraints: Minimum length of 1. Maximum length of 256.
Pattern: ^.+$
Required: No

MountOptions

Specifies the version of the SMB protocol that DataSync uses to access your SMB file server.
Type: SmbMountOptions object
Required: No

Password

Specifies the password of the user who can mount your SMB file server and has permission to access the files and folders involved in your transfer. This parameter applies only if AuthenticationType is set to NTLM.

Type: String
Length Constraints: Maximum length of 104.
Pattern: ^.{0,104}$
Required: No

ServerHostname

Specifies the domain name or IP address of the SMB file server that your DataSync agent connects to.

Remember the following when configuring this parameter:

• You can't specify an IP version 6 (IPv6) address.
• If you're using Kerberos authentication, you must specify a domain name.

Type: String
Length Constraints: Maximum length of 255.
Pattern: ^(([a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z0-9\-]*[A-Za-z0-9])$
Required: Yes

Subdirectory

Specifies the name of the share exported by your SMB file server where DataSync will read or write data. You can include a subdirectory in the share path (for example, /path/to/subdirectory). Make sure that other SMB clients in your network can also mount this path.

To copy all data in the subdirectory, DataSync must be able to mount the SMB share and access all of its data. For more information, see Providing DataSync access to SMB file servers.

Type: String
Length Constraints: Maximum length of 4096.
Pattern: ^[a-zA-Z0-9_\-\+\./\(\)\$\p{Zs}]+$
Required: Yes

Tags

Specifies labels that help you categorize, filter, and search for your AWS resources. We recommend creating at least a name tag for your location.

Type: Array of TagListEntry objects
Array Members: Minimum number of 0 items. Maximum number of 50 items.
Required: No

User

Specifies the user that can mount and access the files, folders, and file metadata in your SMB file server. This parameter applies only if AuthenticationType is set to NTLM.
For information about choosing a user with the right level of access for your transfer, see Providing DataSync access to SMB file servers.

Type: String
Length Constraints: Maximum length of 104.
Pattern: ^[^\x22\x5B\x5D/\\:;|=,+*?\x3C\x3E]{1,104}$
Required: No

Response Syntax

{
   "LocationArn": "string"
}

Response Elements

If the action is successful, the service sends back an HTTP 200 response.

The following data is returned in JSON format by the service.

LocationArn

The ARN of the SMB location that you created.

Type: String
Length Constraints: Maximum length of 128.
Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z\-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$

Errors

For information about the errors that are common to all actions, see Common Errors.

InternalException

This exception is thrown when an error occurs in the AWS DataSync service.

HTTP Status Code: 500

InvalidRequestException

This exception is thrown when the client submits a malformed request.

HTTP Status Code: 400

Examples

Sample Request

The following example creates a location for an SMB file server.

{
   "AgentArns": [
      "arn:aws:datasync:us-east-2:111222333444:agent/agent-0b0addbeef44b3nfs",
      "arn:aws:datasync:us-east-2:111222333444:agent/agent-2345noo35nnee1123ovo3"
   ],
   "Domain": "AMAZON",
   "MountOptions": {
      "Version": "SMB3"
   },
   "Password": "string",
   "ServerHostname": "MyServer.amazon.com",
   "Subdirectory": "share",
   "Tags": [
      {
         "Key": "department",
         "Value": "finance"
      }
   ],
   "User": "user-1"
}

Sample Response

A response returns the location ARN of your SMB file server.

{
   "LocationArn": "arn:aws:datasync:us-east-1:111222333444:location/loc-0f01451b140b2af49"
}

See Also

For more information about using this API in one of the language-specific AWS SDKs, see the following:

• AWS Command Line Interface
• AWS SDK for .NET
• AWS SDK for C++
• AWS SDK for Go v2
• AWS SDK for Java V2
• AWS SDK for JavaScript V3
• AWS SDK for Kotlin
• AWS SDK for PHP V3
• AWS SDK for Python
• AWS SDK for Ruby V3

CreateTask

Configures a task, which defines where and how AWS DataSync transfers your data. A task includes a source location, destination location, and transfer options (such as bandwidth limits, scheduling, and more).

Important

If you're planning to transfer data to or from an Amazon S3 location, review how DataSync can affect your S3 request charges and the DataSync pricing page before you begin.
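Before the formal request syntax, the shape of a CreateTask call can be sketched as a plain dictionary. This is an illustration only (the builder function is hypothetical, the location ARNs are the sample values used elsewhere in this guide, and nothing here calls the service):

```python
import json

def build_create_task_request(source_arn: str, dest_arn: str, name: str) -> dict:
    """Hypothetical helper: assemble a minimal CreateTask request body."""
    return {
        "SourceLocationArn": source_arn,
        "DestinationLocationArn": dest_arn,
        "Name": name,
        # Includes and Excludes each accept at most one FilterRule object;
        # multiple patterns go in one Value, separated by "|".
        "Includes": [{"FilterType": "SIMPLE_PATTERN", "Value": "/photos|/documents"}],
        "Options": {
            "TransferMode": "CHANGED",
            "VerifyMode": "ONLY_FILES_TRANSFERRED",
        },
    }

request = build_create_task_request(
    "arn:aws:datasync:us-east-2:111222333444:location/loc-1111aaaa2222bbbb3",
    "arn:aws:datasync:us-east-2:111222333444:location/loc-0000zzzz1111yyyy2",
    "My Basic mode task",
)
print(json.dumps(request, indent=3))
```

The same dictionary can be passed as keyword arguments to an SDK call (for example, a boto3 DataSync client), but the full set of parameters and their constraints is what the syntax below documents.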
Request Syntax

{
   "CloudWatchLogGroupArn": "string",
   "DestinationLocationArn": "string",
   "Excludes": [
      {
         "FilterType": "string",
         "Value": "string"
      }
   ],
   "Includes": [
      {
         "FilterType": "string",
         "Value": "string"
      }
   ],
   "ManifestConfig": {
      "Action": "string",
      "Format": "string",
      "Source": {
         "S3": {
            "BucketAccessRoleArn": "string",
            "ManifestObjectPath": "string",
            "ManifestObjectVersionId": "string",
            "S3BucketArn": "string"
         }
      }
   },
   "Name": "string",
   "Options": {
      "Atime": "string",
      "BytesPerSecond": number,
      "Gid": "string",
      "LogLevel": "string",
      "Mtime": "string",
      "ObjectTags": "string",
      "OverwriteMode": "string",
      "PosixPermissions": "string",
      "PreserveDeletedFiles": "string",
      "PreserveDevices": "string",
      "SecurityDescriptorCopyFlags": "string",
      "TaskQueueing": "string",
      "TransferMode": "string",
      "Uid": "string",
      "VerifyMode": "string"
   },
   "Schedule": {
      "ScheduleExpression": "string",
      "Status": "string"
   },
   "SourceLocationArn": "string",
   "Tags": [
      {
         "Key": "string",
         "Value": "string"
      }
   ],
   "TaskMode": "string",
   "TaskReportConfig": {
      "Destination": {
         "S3": {
            "BucketAccessRoleArn": "string",
            "S3BucketArn": "string",
            "Subdirectory": "string"
         }
      },
      "ObjectVersionIds": "string",
      "OutputType": "string",
      "Overrides": {
         "Deleted": {
            "ReportLevel": "string"
         },
         "Skipped": {
            "ReportLevel": "string"
         },
         "Transferred": {
            "ReportLevel": "string"
         },
         "Verified": {
            "ReportLevel": "string"
         }
      },
      "ReportLevel": "string"
   }
}

Request Parameters

For information about the parameters that are common to all actions, see Common Parameters.

The request accepts the following data in JSON format.

CloudWatchLogGroupArn

Specifies the Amazon Resource Name (ARN) of an Amazon CloudWatch log group for monitoring your task.

For Enhanced mode tasks, you don't need to specify anything. DataSync automatically sends logs to a CloudWatch log group named /aws/datasync.

For more information, see Monitoring data transfers with CloudWatch Logs.
Type: String
Length Constraints: Maximum length of 562.
Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):logs:[a-z\-0-9]+:[0-9]{12}:log-group:([^:\*]*)(:\*)?$
Required: No

DestinationLocationArn

Specifies the ARN of your transfer's destination location.

Type: String
Length Constraints: Maximum length of 128.
Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z\-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$
Required: Yes

Excludes

Specifies exclude filters that define the files, objects, and folders in your source location that you don't want DataSync to transfer. For more information and examples, see Specifying what DataSync transfers by using filters.

Type: Array of FilterRule objects
Array Members: Minimum number of 0 items. Maximum number of 1 item.
Required: No

Includes

Specifies include filters that define the files, objects, and folders in your source location that you want DataSync to transfer. For more information and examples, see Specifying what DataSync transfers by using filters.

Type: Array of FilterRule objects
Array Members: Minimum number of 0 items. Maximum number of 1 item.
Required: No

ManifestConfig

Configures a manifest, which is a list of files or objects that you want DataSync to transfer. For more information and configuration examples, see Specifying what DataSync transfers by using a manifest.

When using this parameter, your caller identity (the role that you're using DataSync with) must have the iam:PassRole permission. The AWSDataSyncFullAccess policy includes this permission.

Type: ManifestConfig object
Required: No

Name

Specifies the name of your task.

Type: String
Length Constraints: Minimum length of 0. Maximum length of 256.
Pattern: ^[a-zA-Z0-9\s+=._:@/-]+$
Required: No

Options

Specifies your task's settings, such as preserving file metadata, verifying data integrity, among other options.
Type: Options object
Required: No

Schedule

Specifies a schedule for when you want your task to run. For more information, see Scheduling your task.

Type: TaskSchedule object
Required: No

SourceLocationArn

Specifies the ARN of your transfer's source location.

Type: String
Length Constraints: Maximum length of 128.
Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z\-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$
Required: Yes

Tags

Specifies the tags that you want to apply to your task.

Tags are key-value pairs that help you manage, filter, and search for your DataSync resources.

Type: Array of TagListEntry objects
Array Members: Minimum number of 0 items. Maximum number of 50
items.
Required: No

TaskMode

Specifies one of the following task modes for your data transfer:

• ENHANCED - Transfer virtually unlimited numbers of objects with higher performance than Basic mode. Enhanced mode tasks optimize the data transfer process by listing, preparing, transferring, and verifying data in parallel. Enhanced mode is currently available for transfers between Amazon S3 locations.

  Note: To create an Enhanced mode task, the IAM role that you use to call the CreateTask operation must have the iam:CreateServiceLinkedRole permission.

• BASIC (default) - Transfer files or objects between AWS storage and all other supported DataSync locations. Basic mode tasks are subject to quotas on the number of files, objects, and directories in a dataset. Basic mode sequentially prepares, transfers, and verifies data, making it slower than Enhanced mode for most workloads.

For more information, see Understanding task mode differences.

Type: String
Valid Values: BASIC | ENHANCED
Required: No

TaskReportConfig

Specifies how you want to configure a task report, which provides detailed information about your DataSync transfer. For more information, see Monitoring your DataSync transfers with task reports.
When using this parameter, your caller identity (the role that you're using DataSync with) must have the iam:PassRole permission. The AWSDataSyncFullAccess policy includes this permission.

Type: TaskReportConfig object
Required: No

Response Syntax

{
   "TaskArn": "string"
}

Response Elements

If the action is successful, the service sends back an HTTP 200 response.

The following data is returned in JSON format by the service.

TaskArn

The Amazon Resource Name (ARN) of the task.

Type: String
Length Constraints: Maximum length of 128.
Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z\-0-9]*:[0-9]{12}:task/task-[0-9a-f]{17}$

Errors

For information about the errors that are common to all actions, see Common Errors.

InternalException

This exception is thrown when an error occurs in the AWS DataSync service.

HTTP Status Code: 500

InvalidRequestException

This exception is thrown when the client submits a malformed request.

HTTP Status Code: 400

Examples

Sample Request for an Enhanced mode task

The following example creates a DataSync task that uses Enhanced mode. Unlike when creating Basic mode tasks, you don't have to specify an Amazon CloudWatch log group.

With Enhanced mode tasks, DataSync automatically sends task logs to a log group named /aws/datasync. If that log group doesn't exist in your AWS Region, DataSync creates the log group on your behalf when you create the task.

{
   "SourceLocationArn": "arn:aws:datasync:us-east-1:111222333444:location/loc-1111aaaa2222bbbb3",
   "DestinationLocationArn": "arn:aws:datasync:us-east-1:111222333444:location/loc-0000zzzz1111yyyy2",
   "Name": "My Enhanced mode task",
   "TaskMode": "ENHANCED",
   "Options": {
      "TransferMode": "CHANGED",
      "VerifyMode": "ONLY_FILES_TRANSFERRED",
      "ObjectTags": "PRESERVE",
      "LogLevel": "TRANSFER"
   }
}

Sample Request for a Basic mode task

The following example creates a DataSync task that uses Basic mode.
{
   "SourceLocationArn": "arn:aws:datasync:us-east-2:111222333444:location/loc-1111aaaa2222bbbb3",
   "DestinationLocationArn": "arn:aws:datasync:us-east-2:111222333444:location/loc-0000zzzz1111yyyy2",
   "Name": "My Basic mode task",
   "TaskMode": "BASIC",
   "Options": {
      "Atime": "BEST_EFFORT",
      "Gid": "NONE",
      "Mtime": "PRESERVE",
      "PosixPermissions": "PRESERVE",
      "PreserveDevices": "NONE",
      "PreserveDeletedFiles": "PRESERVE",
      "Uid": "NONE",
      "VerifyMode": "ONLY_FILES_TRANSFERRED"
   },
   "Schedule": {
      "ScheduleExpression": "0 12 ? * SUN,WED *"
   },
   "CloudWatchLogGroupArn": "arn:aws:logs:us-east-2:111222333444:log-group:/log-group-name:*",
   "Tags": [
      {
         "Key": "Name",
         "Value": "Migration-wave-1"
      }
   ]
}

Sample Response

The following response includes the ARN of a created task.

{
   "TaskArn": "arn:aws:datasync:us-east-2:111222333444:task/task-08de6e6697796f026"
}

See Also

For more information about using this API in one of the language-specific AWS SDKs, see the following:

• AWS Command Line Interface
• AWS SDK for .NET
• AWS SDK for C++
• AWS SDK for Go v2
• AWS SDK for Java V2
• AWS SDK for JavaScript V3
• AWS SDK for Kotlin
• AWS SDK for PHP V3
• AWS SDK for Python
• AWS SDK for Ruby V3

DeleteAgent

Removes an AWS DataSync agent resource from your AWS account. Keep in mind that this operation (which can't be undone) doesn't remove the agent's virtual machine (VM) or Amazon EC2 instance from your storage environment. For next steps, you can delete the VM or instance from your storage environment or reuse it to activate a new agent.

Request Syntax

{
   "AgentArn": "string"
}

Request Parameters

For information about the parameters that are common to all actions, see Common Parameters.

The request accepts the following data in JSON format.

AgentArn

The Amazon Resource Name (ARN) of
the agent to delete. Use the ListAgents operation to return a list of agents for your account and AWS Region. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:agent/agent-[0-9a-z]{17}$ Required: Yes Response Elements If the action is successful, the service sends back an HTTP 200 response with an empty HTTP body. Errors For information about the errors that are common to all actions, see Common Errors. DeleteAgent 554 AWS DataSync InternalException User Guide This exception is thrown when an error occurs in the AWS DataSync service. HTTP Status Code: 500 InvalidRequestException This exception is thrown when the client submits a malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 DeleteAgent 555 AWS DataSync DeleteLocation Deletes a transfer location resource from AWS DataSync. User Guide Request Syntax { "LocationArn": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters.
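The DeleteLocation request body takes a single LocationArn. Because the documented constraints (maximum length of 128 and the ARN pattern shown in this section) are simple, a client can sanity-check an ARN before making the call. A minimal stdlib-only Python sketch — the helper name and sample ARN are illustrative, not part of the API:

```python
import re

# Documented constraints for DeleteLocation's LocationArn parameter:
# maximum length of 128 and the pattern shown in this section.
LOCATION_ARN_PATTERN = re.compile(
    r"^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:"
    r"[a-z\-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$"
)

def is_valid_location_arn(arn: str) -> bool:
    """Return True if `arn` satisfies the documented length and pattern constraints."""
    return len(arn) <= 128 and LOCATION_ARN_PATTERN.match(arn) is not None

# A location ARN shaped like the examples in this guide passes the check:
print(is_valid_location_arn(
    "arn:aws:datasync:us-east-2:111222333444:location/loc-0f1a2b3c4d5e6f7a8"
))  # True
```

Catching a malformed ARN locally avoids a round trip that would end in an InvalidRequestException.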
The request accepts the following data in JSON format. LocationArn The Amazon Resource Name (ARN) of the location to delete. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ Required: Yes Response Elements If the action is successful, the service sends back an HTTP 200 response with an empty HTTP body. Errors For information about the errors that are common to all actions, see Common Errors. InternalException This exception is thrown when an error occurs in the AWS DataSync service. HTTP Status Code: 500 DeleteLocation 556 AWS DataSync InvalidRequestException User Guide This exception is thrown when the client submits a malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 DeleteLocation 557 AWS DataSync DeleteTask Deletes a transfer task resource from AWS DataSync. User Guide Request Syntax { "TaskArn": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. TaskArn Specifies the Amazon Resource Name (ARN) of the task that you want to delete. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]*:[0-9]{12}:task/task-[0-9a-f]{17}$ Required: Yes Response Elements If the action is successful, the service sends back an HTTP 200 response with an empty HTTP body. Errors For information about the errors that are common to all actions, see Common Errors. 
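One detail worth noting in the TaskArn pattern above: task IDs use a hexadecimal suffix ([0-9a-f]{17}), while agent IDs (as in DeleteAgent) allow the wider lowercase alphanumeric class ([0-9a-z]{17}). A quick illustrative check in Python:

```python
import re

# Per the patterns in this guide, task IDs are hexadecimal ([0-9a-f]{17}),
# while agent IDs allow the wider class [0-9a-z]{17}.
TASK_ID = re.compile(r"^task-[0-9a-f]{17}$")
AGENT_ID = re.compile(r"^agent-[0-9a-z]{17}$")

# The sample task ID from the CreateTask response earlier in this guide:
print(bool(TASK_ID.match("task-08de6e6697796f026")))    # True
# A suffix containing letters beyond 'f' is not a valid task ID:
print(bool(TASK_ID.match("task-1234567890abcdefz")))    # False
# But the same character is fine in an agent ID:
print(bool(AGENT_ID.match("agent-1234567890abcdefz")))  # True
```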
InternalException This exception is thrown when an error occurs in the AWS DataSync service. HTTP Status Code: 500 DeleteTask 558 AWS DataSync InvalidRequestException User Guide This exception is thrown when the client submits a malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for
JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 DeleteTask 559 AWS DataSync DescribeAgent User Guide Returns information about an AWS DataSync agent, such as its name, service endpoint type, and status. Request Syntax { "AgentArn": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. AgentArn Specifies the Amazon Resource Name (ARN) of the DataSync agent that you want information about. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:agent/agent-[0-9a-z]{17}$ Required: Yes Response Syntax { "AgentArn": "string", "CreationTime": number, "EndpointType": "string", "LastConnectionTime": number, "Name": "string", "Platform": { "Version": "string" DescribeAgent 560 User Guide AWS DataSync }, "PrivateLinkConfig": { "PrivateLinkEndpoint": "string", "SecurityGroupArns": [ "string" ], "SubnetArns": [ "string" ], "VpcEndpointId": "string" }, "Status": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. AgentArn The ARN of the agent. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:agent/agent-[0-9a-z]{17}$ CreationTime The time that the agent was activated. Type: Timestamp EndpointType The type of service endpoint that your agent is connected to. Type: String Valid Values: PUBLIC | PRIVATE_LINK | FIPS LastConnectionTime The last time that the agent was communicating with the DataSync service. Type: Timestamp DescribeAgent 561 AWS DataSync Name The name of the agent. Type: String User Guide Length Constraints: Minimum length of 0. Maximum length of 256. 
Pattern: ^[a-zA-Z0-9\s+=._:@/-]+$ Platform The platform-related details about the agent, such as the version number. Type: Platform object PrivateLinkConfig The network configuration that the agent uses when connecting to a VPC service endpoint. Type: PrivateLinkConfig object Status The status of the agent. • If the status is ONLINE, the agent is configured properly and ready to use. • If the status is OFFLINE, the agent has been out of contact with DataSync for five minutes or longer. This can happen for a few reasons. For more information, see What do I do if my agent is offline? Type: String Valid Values: ONLINE | OFFLINE Errors For information about the errors that are common to all actions, see Common Errors. InternalException This exception is thrown when an error occurs in the AWS DataSync service. HTTP Status Code: 500 DescribeAgent 562 AWS DataSync InvalidRequestException User Guide This exception is thrown when the client submits a malformed request. HTTP Status Code: 400 Examples Sample Request The following example returns information about an agent specified in a request. { "AgentArn": "arn:aws:datasync:us-east-2:111122223333:agent/agent-1234567890abcdef0" } Sample Response The following example response describes an agent that uses a public service endpoint. 
{ "AgentArn": "arn:aws:datasync:us-east-2:111122223333:agent/ agent-1234567890abcdef0", "Name": "Data center migration agent", "Status": "ONLINE", "LastConnectionTime": "2022-10-17T17:21:35.540000+00:00", "CreationTime": "2022-10-05T20:52:29.499000+00:00", "EndpointType": "PUBLIC", "Platform": { "Version": "2" } } See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ DescribeAgent 563 AWS DataSync • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 User Guide DescribeAgent 564 User Guide AWS DataSync DescribeDiscoveryJob Returns information about a DataSync discovery job. Request Syntax { "DiscoveryJobArn": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. DiscoveryJobArn Specifies the Amazon Resource Name (ARN) of the discovery job that you want information about. Type: String Length Constraints: Maximum length of 256. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:system/storage-system-[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9] {4}-[a-f0-9]{4}-[a-f0-9]{12}/job/discovery-job-[a-f0-9]{8}-[a-f0-9]{4}- [a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}$ Required: Yes Response Syntax { "CollectionDurationMinutes": number, "DiscoveryJobArn": "string", "JobEndTime": number, "JobStartTime": number, "Status": "string", DescribeDiscoveryJob 565 AWS DataSync User Guide "StorageSystemArn": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. CollectionDurationMinutes The number of minutes that the discovery job runs. Type: Integer Valid Range: Minimum value of 60. Maximum value of 44640. 
DiscoveryJobArn The ARN of the discovery job. Type: String Length Constraints: Maximum length of 256. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:system/storage-system-[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9] {4}-[a-f0-9]{4}-[a-f0-9]{12}/job/discovery-job-[a-f0-9]{8}-[a-f0-9]{4}- [a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}$ JobEndTime The time when the discovery job ended. Type: Timestamp JobStartTime The time when the discovery job started. Type: Timestamp Status Indicates the status of a discovery job. For more information, see Discovery job statuses. DescribeDiscoveryJob 566 AWS DataSync Type: String User Guide Valid Values: RUNNING | WARNING | TERMINATED | FAILED | STOPPED | COMPLETED | COMPLETED_WITH_ISSUES StorageSystemArn The ARN of the on-premises storage system you're running the discovery job on. Type: String Length Constraints: Maximum
length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:system/storage-system-[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9] {4}-[a-f0-9]{4}-[a-f0-9]{12}$ Errors For information about the errors that are common to all actions, see Common Errors. InternalException This exception is thrown when an error occurs in the AWS DataSync service. HTTP Status Code: 500 InvalidRequestException This exception is thrown when the client submits a malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ DescribeDiscoveryJob 567 AWS DataSync • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 User Guide DescribeDiscoveryJob 568 AWS DataSync User Guide DescribeLocationAzureBlob Provides details about how an AWS DataSync transfer location for Microsoft Azure Blob Storage is configured.
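Several fields in this operation's response take a small set of enumerated values (AccessTier, AuthenticationType, BlobType). A client can validate a response defensively against those documented enums. A hedged sketch — the function name is illustrative, and the allowed sets are taken verbatim from the valid values listed in this section:

```python
# Valid values documented for DescribeLocationAzureBlob response fields.
VALID_VALUES = {
    "AccessTier": {"HOT", "COOL", "ARCHIVE"},
    "AuthenticationType": {"SAS"},
    "BlobType": {"BLOCK"},
}

def check_enums(response: dict) -> list:
    """Return (field, value) pairs that fall outside the documented enums."""
    return [
        (field, response[field])
        for field, allowed in VALID_VALUES.items()
        if field in response and response[field] not in allowed
    ]

# A response shaped like this section describes passes cleanly:
sample = {"AccessTier": "COOL", "AuthenticationType": "SAS", "BlobType": "BLOCK"}
print(check_enums(sample))  # []
```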
Request Syntax { "LocationArn": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. LocationArn Specifies the Amazon Resource Name (ARN) of your Azure Blob Storage transfer location. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ Required: Yes Response Syntax { "AccessTier": "string", "AgentArns": [ "string" ], "AuthenticationType": "string", "BlobType": "string", "CreationTime": number, "LocationArn": "string", "LocationUri": "string" } DescribeLocationAzureBlob 569 AWS DataSync Response Elements User Guide If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. AccessTier The access tier that you want your objects or files transferred into. This only applies when using the location as a transfer destination. For more information, see Access tiers. Type: String Valid Values: HOT | COOL | ARCHIVE AgentArns The ARNs of the DataSync agents that can connect with your Azure Blob Storage container. Type: Array of strings Array Members: Minimum number of 1 item. Maximum number of 4 items. Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:agent/agent-[0-9a-z]{17}$ AuthenticationType The authentication method DataSync uses to access your Azure Blob Storage. DataSync can access blob storage using a shared access signature (SAS). Type: String Valid Values: SAS BlobType The type of blob that you want your objects or files to be when transferring them into Azure Blob Storage. Currently, DataSync only supports moving data into Azure Blob Storage as block blobs. For more information on blob types, see the Azure Blob Storage documentation. 
Type: String DescribeLocationAzureBlob 570 User Guide AWS DataSync Valid Values: BLOCK CreationTime The time that your Azure Blob Storage transfer location was created. Type: Timestamp LocationArn The ARN of your Azure Blob Storage transfer location. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ LocationUri The URL of the Azure Blob Storage container involved in your transfer. Type: String Length Constraints: Maximum length of 4360. Pattern: ^(efs|nfs|s3|smb|hdfs|fsx[a-z0-9-]+)://[a-zA-Z0-9.:/\-]+$ Errors For information about the errors that are common to all actions, see Common Errors. InternalException This exception is thrown when an error occurs in the AWS DataSync service. HTTP Status Code: 500 InvalidRequestException This exception is thrown when the client submits a malformed request. HTTP Status Code: 400 DescribeLocationAzureBlob 571 AWS DataSync See Also User Guide For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 DescribeLocationAzureBlob 572 AWS DataSync DescribeLocationEfs User Guide Provides details about how an AWS DataSync transfer location for an Amazon EFS file system is configured. Request Syntax { "LocationArn": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. LocationArn The Amazon Resource Name (ARN) of the Amazon EFS file system location that you want information about. Type: String Length Constraints: Maximum length
of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ Required: Yes Response Syntax { "AccessPointArn": "string", "CreationTime": number, "Ec2Config": { "SecurityGroupArns": [ "string" ], "SubnetArn": "string" }, "FileSystemAccessRoleArn": "string", DescribeLocationEfs 573 AWS DataSync User Guide "InTransitEncryption": "string", "LocationArn": "string", "LocationUri": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. AccessPointArn The ARN of the access point that DataSync uses to access the Amazon EFS file system. For more information, see Accessing restricted file systems. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):elasticfilesystem: [a-z\-0-9]+:[0-9]{12}:access-point/fsap-[0-9a-f]{8,40}$ CreationTime The time that the location was created. Type: Timestamp Ec2Config The subnet and security groups that AWS DataSync uses to connect to one of your Amazon EFS file system's mount targets. Type: Ec2Config object FileSystemAccessRoleArn The AWS Identity and Access Management (IAM) role that allows DataSync to access your Amazon EFS file system.
For more information, see Creating a DataSync IAM role for file system access. Type: String DescribeLocationEfs 574 AWS DataSync User Guide Length Constraints: Maximum length of 2048. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):iam::[0-9] {12}:role/.*$ InTransitEncryption Indicates whether DataSync uses Transport Layer Security (TLS) encryption when transferring data to or from the Amazon EFS file system. Type: String Valid Values: NONE | TLS1_2 LocationArn The ARN of the Amazon EFS file system location. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ LocationUri The URL of the Amazon EFS file system location. Type: String Length Constraints: Maximum length of 4360. Pattern: ^(efs|nfs|s3|smb|hdfs|fsx[a-z0-9-]+)://[a-zA-Z0-9.:/\-]+$ Errors For information about the errors that are common to all actions, see Common Errors. InternalException This exception is thrown when an error occurs in the AWS DataSync service. HTTP Status Code: 500 DescribeLocationEfs 575 AWS DataSync InvalidRequestException User Guide This exception is thrown when the client submits a malformed request. HTTP Status Code: 400 Examples Sample Request The following example shows how to get information about a specific Amazon EFS file system location. { "LocationArn": "arn:aws:datasync:us-east-2:111222333444:location/ loc-12abcdef012345678" } Sample Response The following example returns location details about an Amazon EFS file system. 
{ "CreationTime": 1653319021.353, "Ec2Config": { "SubnetArn": "arn:aws:ec2:us-east-2:111222333444:subnet/ subnet-1234567890abcdef1", "SecurityGroupArns": [ "arn:aws:ec2:us-east-2:111222333444:security-group/sg-1234567890abcdef2" ] }, "LocationArn": "arn:aws:datasync:us-east-2:111222333444:location/loc- abcdef01234567890", "LocationUri": "efs://us-east-2.fs-021345abcdef6789/" } Sample Response: Describing a location for a restricted Amazon EFS file system The following example returns location details about an Amazon EFS file system with restricted access, including the AccessPointArn, FileSystemAccessRoleArn, and InTransitEncryption elements. DescribeLocationEfs 576 AWS DataSync User Guide { "CreationTime": 1653319021.353, "AccessPointArn": "arn:aws:elasticfilesystem:us-east-2:111222333444:access-point/ fsap-1234567890abcdef0", "Ec2Config": { "SubnetArn": "arn:aws:ec2:us-east-2:111222333444:subnet/ subnet-1234567890abcdef1", "SecurityGroupArns": [ "arn:aws:ec2:us-east-2:111222333444:security-group/sg-1234567890abcdef2" ] }, "FileSystemAccessRoleArn": "arn:aws:iam::111222333444:role/ AwsDataSyncFullAccessNew", "InTransitEncryption": "TLS1_2", "LocationArn": "arn:aws:datasync:us-east-2:111222333444:location/loc- abcdef01234567890", "LocationUri": "efs://us-east-2.fs-021345abcdef6789/", "Subdirectory": "/mount/path", "Tags": [{ "Key": "Name", "Value": "ElasticFileSystem-1" }] } See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 DescribeLocationEfs 577 AWS DataSync User Guide DescribeLocationEfs 578 AWS DataSync User Guide DescribeLocationFsxLustre Provides details about how an AWS DataSync transfer location for an Amazon FSx for Lustre file system is configured. 
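Before moving on: the LocationUri in the DescribeLocationEfs sample responses above packs the AWS Region and file system ID into one string, and CreationTime is an epoch timestamp. A small stdlib-only sketch that splits them back out — the "efs://<region>.<file-system-id>/" layout is inferred from the sample, so treat the parsing as illustrative:

```python
import json
from datetime import datetime, timezone

# The DescribeLocationEfs sample response shown earlier, trimmed to two fields.
response = json.loads("""
{
  "CreationTime": 1653319021.353,
  "LocationUri": "efs://us-east-2.fs-021345abcdef6789/"
}
""")

# Split "efs://<region>.<file-system-id>/" into its two parts.
host = response["LocationUri"].removeprefix("efs://").rstrip("/")
region, fs_id = host.split(".", 1)
# CreationTime is an epoch timestamp; convert it to an aware datetime.
created = datetime.fromtimestamp(response["CreationTime"], tz=timezone.utc)

print(region)        # us-east-2
print(fs_id)         # fs-021345abcdef6789
print(created.year)  # 2022
```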
Request Syntax { "LocationArn": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. LocationArn The Amazon Resource Name (ARN) of the FSx for Lustre location to describe. Type: String Length Constraints: Maximum length of 128.
Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ Required: Yes Response Syntax { "CreationTime": number, "LocationArn": "string", "LocationUri": "string", "SecurityGroupArns": [ "string" ] } Response Elements If the action is successful, the service sends back an HTTP 200 response. DescribeLocationFsxLustre 579 AWS DataSync User Guide The following data is returned in JSON format by the service. CreationTime The time that the FSx for Lustre location was created. Type: Timestamp LocationArn The Amazon Resource Name (ARN) of the FSx for Lustre location that was described. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ LocationUri The URI of the FSx for Lustre location that was described. Type: String Length Constraints: Maximum length of 4360. Pattern: ^(efs|nfs|s3|smb|hdfs|fsx[a-z0-9-]+)://[a-zA-Z0-9.:/\-]+$ SecurityGroupArns The Amazon Resource Names (ARNs) of the security groups that are configured for the FSx for Lustre file system. Type: Array of strings Array Members: Minimum number of 1 item. Maximum number of 5 items. Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):ec2:[a-z\-0-9]*: [0-9]{12}:security-group/sg-[a-f0-9]+$ Errors For information about the errors that are common to all actions, see Common Errors. DescribeLocationFsxLustre 580 AWS DataSync InternalException User Guide This exception is thrown when an error occurs in the AWS DataSync service. HTTP Status Code: 500 InvalidRequestException This exception is thrown when the client submits a malformed request. 
HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 DescribeLocationFsxLustre 581 AWS DataSync User Guide DescribeLocationFsxOntap Provides details about how an AWS DataSync transfer location for an Amazon FSx for NetApp ONTAP file system is configured. Note If your location uses SMB, the DescribeLocationFsxOntap operation doesn't actually return a Password. Request Syntax { "LocationArn": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. LocationArn Specifies the Amazon Resource Name (ARN) of the FSx for ONTAP file system location that you want information about. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ Required: Yes Response Syntax { "CreationTime": number, DescribeLocationFsxOntap 582 AWS DataSync User Guide "FsxFilesystemArn": "string", "LocationArn": "string", "LocationUri": "string", "Protocol": { "NFS": { "MountOptions": { "Version": "string" } }, "SMB": { "Domain": "string", "MountOptions": { "Version": "string" }, "Password": "string", "User": "string" } }, "SecurityGroupArns": [ "string" ], "StorageVirtualMachineArn": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. CreationTime The time that the location was created. Type: Timestamp FsxFilesystemArn The ARN of the FSx for ONTAP file system. Type: String Length Constraints: Maximum length of 128. 
Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):fsx:[a-z\-0-9]*: [0-9]{12}:file-system/fs-.*$ DescribeLocationFsxOntap 583 AWS DataSync LocationArn The ARN of the FSx for ONTAP file system location. Type: String Length Constraints: Maximum length of 128. User Guide Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ LocationUri The uniform resource identifier (URI) of the FSx for ONTAP file system location. Type: String Length Constraints: Maximum length of 4360. Pattern: ^(efs|nfs|s3|smb|hdfs|fsx[a-z0-9-]+)://[a-zA-Z0-9.:/\-]+$ Protocol Specifies the data transfer protocol that AWS DataSync uses to access your Amazon FSx file system. Type: FsxProtocol object SecurityGroupArns The security groups that DataSync uses to access your FSx for ONTAP file system. Type: Array of strings Array Members: Minimum number of 1 item. Maximum number of 5 items. Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):ec2:[a-z\-0-9]*: [0-9]{12}:security-group/sg-[a-f0-9]+$ StorageVirtualMachineArn The ARN of the storage virtual machine (SVM) on your FSx for ONTAP file system where you're copying data to or from. DescribeLocationFsxOntap 584 AWS DataSync Type: String Length Constraints: Maximum length of 162. User Guide Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):fsx:[a-z\-0-9]+: [0-9]{12}:storage-virtual-machine/fs-[0-9a-f]+/svm-[0-9a-f]{17,}$ Errors For information about the errors that are common to all actions, see Common Errors. InternalException This exception is thrown when an error occurs in the AWS DataSync service. HTTP Status Code: 500 InvalidRequestException This exception is thrown when the client submits a malformed request. 
HTTP Status Code: 400 See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 DescribeLocationFsxOntap 585 AWS DataSync User Guide DescribeLocationFsxOntap 586 AWS DataSync User Guide DescribeLocationFsxOpenZfs Provides details about how an AWS DataSync transfer location for an Amazon FSx for OpenZFS file system is configured. Note Response elements related to SMB aren't supported with the DescribeLocationFsxOpenZfs operation. Request Syntax { "LocationArn": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. LocationArn The Amazon Resource Name (ARN) of the FSx for OpenZFS location to describe. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ Required: Yes Response Syntax { DescribeLocationFsxOpenZfs 587 AWS DataSync User Guide "CreationTime": number, "LocationArn": "string", "LocationUri": "string", "Protocol": { "NFS": { "MountOptions": { "Version": "string" } }, "SMB": { "Domain": "string", "MountOptions": { "Version": "string" }, "Password": "string", "User": "string" } }, "SecurityGroupArns": [ "string" ] } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. CreationTime The time that the FSx for OpenZFS location was created. Type: Timestamp LocationArn The ARN of the FSx for OpenZFS location that was described. Type: String Length Constraints: Maximum length of 128. 
Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ DescribeLocationFsxOpenZfs 588 AWS DataSync LocationUri User Guide The uniform resource identifier (URI) of the FSx for OpenZFS location that was described. Example: fsxz://us-west-2.fs-1234567890abcdef02/fsx/folderA/folder Type: String Length Constraints: Maximum length of 4360. Pattern: ^(efs|nfs|s3|smb|hdfs|fsx[a-z0-9-]+)://[a-zA-Z0-9.:/\-]+$ Protocol The type of protocol that AWS DataSync uses to access your file system. Type: FsxProtocol object SecurityGroupArns The ARNs of the security groups that are configured for the FSx for OpenZFS file system. Type: Array of strings Array Members: Minimum number of 1 item. Maximum number of 5 items. Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):ec2:[a-z\-0-9]*: [0-9]{12}:security-group/sg-[a-f0-9]+$ Errors For information about the errors that are common to all actions, see Common Errors. InternalException This exception is thrown when an error occurs in the AWS DataSync service. HTTP Status Code: 500 InvalidRequestException This exception is thrown when the client submits a malformed request. DescribeLocationFsxOpenZfs 589 AWS DataSync HTTP Status Code: 400 See Also User Guide For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 DescribeLocationFsxOpenZfs 590 AWS DataSync User Guide DescribeLocationFsxWindows Provides details about how an AWS DataSync transfer location for an Amazon FSx for Windows File Server file system is configured. Request Syntax { "LocationArn": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. 
The request accepts the following data in JSON format. LocationArn Specifies the Amazon Resource Name (ARN) of the FSx for Windows File Server location. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ Required: Yes Response Syntax { "CreationTime": number, "Domain": "string", "LocationArn": "string", "LocationUri": "string", "SecurityGroupArns": [ "string" ], "User": "string" } DescribeLocationFsxWindows 591 AWS DataSync Response Elements User Guide If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. CreationTime The time that the FSx for Windows File Server location was created. Type: Timestamp Domain The name of the Microsoft Active Directory domain that the FSx for Windows File Server file system belongs to. Type: String Length Constraints: Maximum length of 253. Pattern: ^[A-Za-z0-9]((\.|-+)?[A-Za-z0-9]){0,252}$ LocationArn The ARN of the FSx for Windows File Server location. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ LocationUri The uniform resource identifier (URI) of the FSx for Windows File Server location. Type: String Length Constraints: Maximum length of 4360. Pattern: ^(efs|nfs|s3|smb|hdfs|fsx[a-z0-9-]+)://[a-zA-Z0-9.:/\-]+$ DescribeLocationFsxWindows 592 AWS DataSync SecurityGroupArns User Guide The ARNs of the Amazon EC2 security groups that provide access to your file system's preferred subnet. For information about configuring security groups for file system access, see the Amazon FSx for Windows File Server User Guide. Type: Array of strings Array Members: Minimum number of 1 item. Maximum number of 5 items. Length Constraints: Maximum length of 128. 
Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):ec2:[a-z\-0-9]*: [0-9]{12}:security-group/sg-[a-f0-9]+$ User
The user with the permissions to mount and access the FSx for Windows File Server file system. Type: String Length Constraints: Maximum length of 104. Pattern: ^[^\x22\x5B\x5D/\\:;|=,+*?\x3C\x3E]{1,104}$ Errors For information about the errors that are common to all actions, see Common Errors. InternalException This exception is thrown when an error occurs in the AWS DataSync service. HTTP Status Code: 500 InvalidRequestException This exception is thrown when the client submits a malformed request. 
HTTP Status Code: 400 DescribeLocationFsxWindows 593 AWS DataSync See Also User Guide For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 DescribeLocationFsxWindows 594 AWS DataSync DescribeLocationHdfs User Guide Provides details about how an AWS DataSync transfer location for a Hadoop Distributed File System (HDFS) is configured. Request Syntax { "LocationArn": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. LocationArn Specifies the Amazon Resource Name (ARN) of the HDFS location. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ Required: Yes Response Syntax { "AgentArns": [ "string" ], "AuthenticationType": "string", "BlockSize": number, "CreationTime": number, "KerberosPrincipal": "string", "KmsKeyProviderUri": "string", "LocationArn": "string", "LocationUri": "string", "NameNodes": [ DescribeLocationHdfs 595 AWS DataSync User Guide { "Hostname": "string", "Port": number } ], "QopConfiguration": { "DataTransferProtection": "string", "RpcProtection": "string" }, "ReplicationFactor": number, "SimpleUser": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. AgentArns The ARNs of the DataSync agents that can connect with your HDFS cluster. Type: Array of strings Array Members: Minimum number of 1 item. Maximum number of 4 items. Length Constraints: Maximum length of 128. 
Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:agent/agent-[0-9a-z]{17}$ AuthenticationType The type of authentication used to determine the identity of the user. Type: String Valid Values: SIMPLE | KERBEROS BlockSize The size of the data blocks to write into the HDFS cluster. Type: Integer DescribeLocationHdfs 596 AWS DataSync User Guide Valid Range: Minimum value of 1048576. Maximum value of 1073741824. CreationTime The time that the HDFS location was created. Type: Timestamp KerberosPrincipal The Kerberos principal with access to the files and folders on the HDFS cluster. This parameter is used if the AuthenticationType is defined as KERBEROS. Type: String Length Constraints: Minimum length of 1. Maximum length of 256. Pattern: ^.+$ KmsKeyProviderUri The URI of the HDFS cluster's Key Management Server (KMS). Type: String Length Constraints: Minimum length of 1. Maximum length of 255. Pattern: ^kms:\/\/http[s]?@(([a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za- z0-9\-]*[A-Za-z0-9])(;(([a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z0-9\-]*[A- Za-z0-9]))*:[0-9]{1,5}\/kms$ LocationArn The ARN of the HDFS location. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ LocationUri The URI of the HDFS location. Type: String DescribeLocationHdfs 597 AWS DataSync User Guide Length Constraints: Maximum length of 4360. Pattern: ^(efs|nfs|s3|smb|hdfs|fsx[a-z0-9-]+)://[a-zA-Z0-9.:/\-]+$ NameNodes The NameNode that manages the HDFS namespace. Type: Array of HdfsNameNode objects Array Members: Minimum number of 1 item. QopConfiguration The Quality of Protection (QOP) configuration, which specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the HDFS cluster. Type: QopConfiguration object ReplicationFactor The number of DataNodes to replicate the data to when writing to the HDFS cluster. 
Type: Integer Valid Range: Minimum value of 1. Maximum value of 512. SimpleUser The user name to identify the client on the host operating system. This parameter is used if the AuthenticationType is defined as SIMPLE. Type: String Length Constraints: Minimum length of 1. Maximum length of 256. Pattern: ^[_.A-Za-z0-9][-_.A-Za-z0-9]*$ Errors For information about the errors that are common to all actions, see Common Errors. InternalException This exception is thrown when an error occurs in the AWS DataSync service. DescribeLocationHdfs 598 AWS DataSync HTTP Status Code: 500 InvalidRequestException User Guide This exception is thrown when the client submits a malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the
language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 DescribeLocationHdfs 599 AWS DataSync DescribeLocationNfs User Guide Provides details about how an AWS DataSync transfer location for a Network File System (NFS) file server is configured. Request Syntax { "LocationArn": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. LocationArn Specifies the Amazon Resource Name (ARN) of the NFS location that you want information about. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ Required: Yes Response Syntax { "CreationTime": number, "LocationArn": "string", "LocationUri": "string", "MountOptions": { "Version": "string" }, "OnPremConfig": { DescribeLocationNfs 600 AWS DataSync User Guide "AgentArns": [ "string" ] } } Response Elements If the action is successful, the service sends back an HTTP 200 response. 
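A response shaped like the DescribeLocationNfs Response Syntax is an ordinary JSON document, so the fields a caller typically needs can be pulled out directly. A minimal sketch, using a hypothetical response (the ARNs, hostname, and values are illustrative, not real resources):

```python
# A hypothetical DescribeLocationNfs response, shaped like the
# Response Syntax documented for this operation.
response = {
    "CreationTime": 1532660733.39,
    "LocationArn": "arn:aws:datasync:us-east-2:111222333444:location/loc-07db7abfc326c50aa",
    "LocationUri": "nfs://hostname.example.com/exports/share",
    "MountOptions": {"Version": "NFS4_1"},
    "OnPremConfig": {
        "AgentArns": ["arn:aws:datasync:us-east-2:111222333444:agent/agent-0b0addbeef44baaaa"]
    },
}

# Extract where the location points and which agents can reach it.
location_uri = response["LocationUri"]
agent_arns = response["OnPremConfig"]["AgentArns"]

print(location_uri)     # nfs://hostname.example.com/exports/share
print(len(agent_arns))  # 1
```

With the AWS SDK for Python, the same dictionary shape is what a `describe_location_nfs` call returns, so the extraction code above applies unchanged.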
The following data is returned in JSON format by the service. CreationTime The time when the NFS location was created. Type: Timestamp LocationArn The ARN of the NFS location. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ LocationUri The URI of the NFS location. Type: String Length Constraints: Maximum length of 4360. Pattern: ^(efs|nfs|s3|smb|hdfs|fsx[a-z0-9-]+)://[a-zA-Z0-9.:/\-]+$ MountOptions The mount options that DataSync uses to mount your NFS file server. Type: NfsMountOptions object OnPremConfig The AWS DataSync agents that can connect to your Network File System (NFS) file server. DescribeLocationNfs 601 AWS DataSync Type: OnPremConfig object Errors User Guide For information about the errors that are common to all actions, see Common Errors. InternalException This exception is thrown when an error occurs in the AWS DataSync service. HTTP Status Code: 500 InvalidRequestException This exception is thrown when the client submits a malformed request. HTTP Status Code: 400 Examples Example The following example returns information about the NFS location specified in the sample request. Sample Request { "LocationArn": "arn:aws:datasync:us-east-2:111222333444:location/ loc-07db7abfc326c50aa" } Example This example illustrates one usage of DescribeLocationNfs. 
Sample Response { "CreationTime": 1532660733.39, "LocationArn": "arn:aws:datasync:us-east-2:111222333444:location/loc-07db7abfc326c50aa", "LocationUri": "hostname.amazon.com", "OnPremConfig": { "AgentArns": [ "arn:aws:datasync:us-east-2:111222333444:agent/agent-0b0addbeef44b3nfs" ] } } DescribeLocationNfs 602 AWS DataSync User Guide See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 DescribeLocationNfs 603 AWS DataSync User Guide DescribeLocationObjectStorage Provides details about how an AWS DataSync transfer location for an object storage system is configured. Request Syntax { "LocationArn": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. LocationArn Specifies the Amazon Resource Name (ARN) of the object storage system location. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z\-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ Required: Yes Response Syntax { "AccessKey": "string", "AgentArns": [ "string" ], "CreationTime": number, "LocationArn": "string", "LocationUri": "string", "ServerCertificate": blob, "ServerPort": number, "ServerProtocol": "string" } DescribeLocationObjectStorage 604 AWS DataSync User Guide Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. AccessKey The access key (for example, a user name) required to authenticate with the object storage system. Type: String Length Constraints: Minimum length of 0. Maximum length of 200. 
Pattern: ^.*$ AgentArns The ARNs of the DataSync agents that can connect with your object storage system. Type: Array of strings Array Members: Minimum number of 1 item. Maximum number of 4 items. Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:agent/agent-[0-9a-z]{17}$ CreationTime The time that the location was created. Type: Timestamp LocationArn The ARN of the object storage system location. Type: String DescribeLocationObjectStorage 605 AWS DataSync User Guide Length
Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ LocationUri The URI of the object storage system location. Type: String Length Constraints: Maximum length of 4360. Pattern: ^(efs|nfs|s3|smb|hdfs|fsx[a-z0-9-]+)://[a-zA-Z0-9.:/\-]+$ ServerCertificate The certificate chain for DataSync to authenticate with your object storage system if the system uses a private or self-signed certificate authority (CA). Type: Base64-encoded binary data object Length Constraints: Maximum length of 32768. ServerPort The port that your object storage server accepts inbound network traffic on (for example, port 443). Type: Integer Valid Range: Minimum value of 1. Maximum value of 65536. ServerProtocol The protocol that your object storage system uses to communicate. Type: String Valid Values: HTTPS | HTTP Errors For information about the errors that are common to all actions, see Common Errors. DescribeLocationObjectStorage 606 AWS DataSync InternalException User Guide This exception is thrown when an error occurs in the AWS DataSync service. 
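The `ServerCertificate` element above is Base64-encoded binary data. A minimal sketch of decoding such a blob back into the PEM bytes a TLS client could load; the certificate content here is a placeholder, not a real certificate, and the encode step only simulates what the API would return:

```python
import base64

# Placeholder standing in for a PEM certificate chain. In a real
# response, ServerCertificate arrives already Base64-encoded.
pem_placeholder = (
    b"-----BEGIN CERTIFICATE-----\n"
    b"MIIB...placeholder...\n"
    b"-----END CERTIFICATE-----\n"
)
server_certificate_b64 = base64.b64encode(pem_placeholder)

# Decode the blob back into PEM bytes, e.g. to write to a .pem file.
decoded = base64.b64decode(server_certificate_b64)

print(decoded.startswith(b"-----BEGIN CERTIFICATE-----"))  # True
```

Note that some SDKs handle blob decoding on your behalf; check whether the value you receive is already raw bytes before decoding it again.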
HTTP Status Code: 500 InvalidRequestException This exception is thrown when the client submits a malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 DescribeLocationObjectStorage 607 AWS DataSync DescribeLocationS3 User Guide Provides details about how an AWS DataSync transfer location for an S3 bucket is configured. Request Syntax { "LocationArn": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. LocationArn Specifies the Amazon Resource Name (ARN) of the Amazon S3 location. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ Required: Yes Response Syntax { "AgentArns": [ "string" ], "CreationTime": number, "LocationArn": "string", "LocationUri": "string", "S3Config": { "BucketAccessRoleArn": "string" }, "S3StorageClass": "string" DescribeLocationS3 608 AWS DataSync } Response Elements User Guide If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. AgentArns The ARNs of the DataSync agents deployed on your Outpost when working with Amazon S3 on Outposts. For more information, see Deploy your DataSync agent on AWS Outposts. Type: Array of strings Array Members: Minimum number of 1 item. Maximum number of 4 items. Length Constraints: Maximum length of 128. 
Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:agent/agent-[0-9a-z]{17}$ CreationTime The time that the Amazon S3 location was created. Type: Timestamp LocationArn The ARN of the Amazon S3 location. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ LocationUri The URL of the Amazon S3 location that was described. DescribeLocationS3 609 AWS DataSync Type: String Length Constraints: Maximum length of 4360. User Guide Pattern: ^(efs|nfs|s3|smb|hdfs|fsx[a-z0-9-]+)://[a-zA-Z0-9.:/\-]+$ S3Config Specifies the Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that DataSync uses to access your S3 bucket. For more information, see Providing DataSync access to S3 buckets. Type: S3Config object S3StorageClass When Amazon S3 is a destination location, this is the storage class that you chose for your objects. Some storage classes have behaviors that can affect your Amazon S3 storage costs. For more information, see Storage class considerations with Amazon S3 transfers. Type: String Valid Values: STANDARD | STANDARD_IA | ONEZONE_IA | INTELLIGENT_TIERING | GLACIER | DEEP_ARCHIVE | OUTPOSTS | GLACIER_INSTANT_RETRIEVAL Errors For information about the errors that are common to all actions, see Common Errors. InternalException This exception is thrown when an error occurs in the AWS DataSync service. HTTP Status Code: 500 InvalidRequestException This exception is thrown when the client submits a malformed request. HTTP Status Code: 400 DescribeLocationS3 610 AWS DataSync Examples Example User Guide The following example returns information about the Amazon S3 location specified in the sample request. Sample Request { "LocationArn": "arn:aws:datasync:us-east-2:111222333444:location/ loc-07db7abfc326c50s3" } Example This example illustrates one usage of DescribeLocationS3. 
Sample Response { "CreationTime": 1532660733.39, "LocationArn": "arn:aws:datasync:us-east-2:111222333444:location/loc-07db7abfc326c50s3", "LocationUri": "s3://amzn-s3-demo-bucket", "S3Config": { "BucketAccessRoleArn": "arn:aws:iam::111222333444:role/amzn-s3-demo-bucket-access-role" }, "S3StorageClass": "STANDARD" } See Also For more information about using this API in one of the
language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 DescribeLocationS3 611 AWS DataSync • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 User Guide DescribeLocationS3 612 AWS DataSync DescribeLocationSmb User Guide Provides details about how an AWS DataSync transfer location for a Server Message Block (SMB) file server is configured. Request Syntax { "LocationArn": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. LocationArn Specifies the Amazon Resource Name (ARN) of the SMB location that you want information about. Type: String Length Constraints: Maximum length of 128. 
Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ Required: Yes Response Syntax { "AgentArns": [ "string" ], "AuthenticationType": "string", "CreationTime": number, "DnsIpAddresses": [ "string" ], "Domain": "string", "KerberosPrincipal": "string", "LocationArn": "string", DescribeLocationSmb 613 AWS DataSync User Guide "LocationUri": "string", "MountOptions": { "Version": "string" }, "User": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. AgentArns The ARNs of the DataSync agents that can connect with your SMB file server. Type: Array of strings Array Members: Minimum number of 1 item. Maximum number of 4 items. Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:agent/agent-[0-9a-z]{17}$ AuthenticationType The authentication protocol that DataSync uses to connect to your SMB file server. Type: String Valid Values: NTLM | KERBEROS CreationTime The time that the SMB location was created. Type: Timestamp DnsIpAddresses The IPv4 addresses for the DNS servers that your SMB file server belongs to. This element applies only if AuthenticationType is set to KERBEROS. DescribeLocationSmb 614 AWS DataSync Type: Array of strings User Guide Array Members: Maximum number of 2 items. Length Constraints: Minimum length of 7. Maximum length of 15. Pattern: \A(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)) {3}\z Domain The name of the Windows domain that the SMB file server belongs to. This element applies only if AuthenticationType is set to NTLM. Type: String Length Constraints: Maximum length of 253. Pattern: ^[A-Za-z0-9]((\.|-+)?[A-Za-z0-9]){0,252}$ KerberosPrincipal The Kerberos principal that has permission to access the files, folders, and file metadata in your SMB file server. 
Type: String Length Constraints: Minimum length of 1. Maximum length of 256. Pattern: ^.+$ LocationArn The ARN of the SMB location. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:location/loc-[0-9a-z]{17}$ LocationUri The URI of the SMB location. DescribeLocationSmb 615 AWS DataSync Type: String Length Constraints: Maximum length of 4360. User Guide Pattern: ^(efs|nfs|s3|smb|hdfs|fsx[a-z0-9-]+)://[a-zA-Z0-9.:/\-]+$ MountOptions The SMB protocol version that DataSync uses to access your SMB file server. Type: SmbMountOptions object User The user that can mount and access the files, folders, and file metadata in your SMB file server. This element applies only if AuthenticationType is set to NTLM. Type: String Length Constraints: Maximum length of 104. Pattern: ^[^\x22\x5B\x5D/\\:;|=,+*?\x3C\x3E]{1,104}$ Errors For information about the errors that are common to all actions, see Common Errors. InternalException This exception is thrown when an error occurs in the AWS DataSync service. HTTP Status Code: 500 InvalidRequestException This exception is thrown when the client submits a malformed request. HTTP Status Code: 400 Examples Example This example illustrates one usage of DescribeLocationSmb. DescribeLocationSmb 616 User Guide AWS DataSync Sample Request { "LocationArn": "arn:aws:datasync:us-east-1:111222333444:location/loc-0f01451b140b2af49" } Example This example illustrates one usage of DescribeLocationSmb. 
Sample Response { "AgentArns":[ "arn:aws:datasync:us-east-2:111222333444:agent/agent-0bc3b3dc9bbc15145", "arn:aws:datasync:us-east-2:111222333444:agent/agent-04b3fe3d261a18c8f" ], "CreationTime":"1532660733.39", "Domain":"AMAZON", "LocationArn":"arn:aws:datasync:us-east-1:111222333444:location/ loc-0f01451b140b2af49", "LocationUri":"smb://hostname.amazon.com/share", "MountOptions":{ "Version":"SMB3" }, "User":"user-1" } See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin DescribeLocationSmb 617 AWS DataSync • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 User Guide DescribeLocationSmb 618 AWS DataSync User Guide DescribeStorageSystem Returns information about an on-premises storage system that you're using with
DataSync Discovery. Request Syntax { "StorageSystemArn": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. StorageSystemArn Specifies the Amazon Resource Name (ARN) of an on-premises storage system that you're using with DataSync Discovery. Type: String Length Constraints: Maximum length of 128. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:system/storage-system-[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9] {4}-[a-f0-9]{4}-[a-f0-9]{12}$ Required: Yes Response Syntax { "AgentArns": [ "string" ], "CloudWatchLogGroupArn": "string", "ConnectivityStatus": "string", "CreationTime": number, "ErrorMessage": "string", "Name": "string", "SecretsManagerArn": "string", DescribeStorageSystem 619 AWS DataSync User Guide "ServerConfiguration": { "ServerHostname": "string", "ServerPort": number }, "StorageSystemArn": "string", "SystemType": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. AgentArns The ARN of the DataSync agent that connects to and reads from your on-premises storage system. Type: Array of strings Array Members: Fixed number of 1 item. Length Constraints: Maximum length of 128. 
Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:agent/agent-[0-9a-z]{17}$ CloudWatchLogGroupArn The ARN of the Amazon CloudWatch log group that's used to monitor and log discovery job events. Type: String Length Constraints: Maximum length of 562. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):logs:[a-z\-0-9]+: [0-9]{12}:log-group:([^:\*]*)(:\*)?$ ConnectivityStatus Indicates whether your DataSync agent can connect to your on-premises storage system. Type: String DescribeStorageSystem 620 AWS DataSync User Guide Valid Values: PASS | FAIL | UNKNOWN CreationTime The time when you added the on-premises storage system to DataSync Discovery. Type: Timestamp ErrorMessage Describes the connectivity error that the DataSync agent is encountering with your on-premises storage system. Type: String Length Constraints: Maximum length of 128. Pattern: .* Name The name that you gave your on-premises storage system when adding it to DataSync Discovery. Type: String Length Constraints: Minimum length of 1. Maximum length of 256. Pattern: ^[\p{L}\p{M}\p{N}\s+=._:@\/-]+$ SecretsManagerArn The ARN of the secret that stores your on-premises storage system's credentials. DataSync Discovery stores these credentials in AWS Secrets Manager. Type: String Length Constraints: Maximum length of 2048. Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):secretsmanager:[a-z \-0-9]+:[0-9]{12}:secret:.* ServerConfiguration The server name and network port required to connect with your on-premises storage system's management interface. DescribeStorageSystem 621 AWS DataSync User Guide Type: DiscoveryServerConfiguration object StorageSystemArn The ARN of the on-premises storage system that the discovery job looked at. Type: String Length Constraints: Maximum length of 128. 
Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:system/storage-system-[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9] {4}-[a-f0-9]{4}-[a-f0-9]{12}$ SystemType The type of on-premises storage system. Note DataSync Discovery currently only supports NetApp Fabric-Attached Storage (FAS) and All Flash FAS (AFF) systems running ONTAP 9.7 or later. Type: String Valid Values: NetAppONTAP Errors For information about the errors that are common to all actions, see Common Errors. InternalException This exception is thrown when an error occurs in the AWS DataSync service. HTTP Status Code: 500 InvalidRequestException This exception is thrown when the client submits a malformed request. HTTP Status Code: 400 DescribeStorageSystem 622 AWS DataSync See Also User Guide For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 DescribeStorageSystem 623 AWS DataSync User Guide DescribeStorageSystemResourceMetrics Returns information, including performance data and capacity usage, which DataSync Discovery collects about a specific resource in your-premises storage system. Request Syntax { "DiscoveryJobArn": "string", "EndTime": number, "MaxResults": number, "NextToken": "string", "ResourceId": "string", "ResourceType": "string", "StartTime": number } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. DiscoveryJobArn Specifies the Amazon Resource Name (ARN) of the discovery job that collects information about your on-premises storage system. Type: String Length Constraints: Maximum length of 256. 
Pattern: ^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):datasync:[a-z \-0-9]+:[0-9]{12}:system/storage-system-[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9] {4}-[a-f0-9]{4}-[a-f0-9]{12}/job/discovery-job-[a-f0-9]{8}-[a-f0-9]{4}- [a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}$ Required: Yes EndTime Specifies a time within the total duration that the discovery job ran. To see information gathered during a certain time frame, use this parameter with StartTime. DescribeStorageSystemResourceMetrics 624 User Guide AWS DataSync Type: Timestamp Required: No MaxResults Specifies how many results that you want in the response. Type: Integer Valid Range: Minimum value of 1.