id
stringlengths 8
78
| source
stringclasses 743
values | chunk_id
int64 1
5.05k
| text
stringlengths 593
49.7k
|
---|---|---|---|
swf-dg-046
|
swf-dg.pdf
| 46 |
– String constraint. The key is swf:activityType.version. DeprecateActivityType • activityType.name – String constraint. The key is swf:activityType.name. API Summary API Version 2012-01-25 118 Amazon Simple Workflow Service Developer Guide • activityType.version – String constraint. The key is swf:activityType.version. DeprecateDomain • You can't constrain this action's parameters. DeleteWorkflowType • workflowType.name – String constraint. The key is swf:workflowType.name. • workflowType.version – String constraint. The key is swf:workflowType.version. DeprecateWorkflowType • workflowType.name – String constraint. The key is swf:workflowType.name. • workflowType.version – String constraint. The key is swf:workflowType.version. DescribeActivityType • activityType.name – String constraint. The key is swf:activityType.name. • activityType.version – String constraint. The key is swf:activityType.version. DescribeDomain • You can't constrain this action's parameters. DescribeWorkflowExecution • You can't constrain this action's parameters. DescribeWorkflowType • workflowType.name – String constraint. The key is swf:workflowType.name. • workflowType.version – String constraint. The key is swf:workflowType.version. GetWorkflowExecutionHistory API Summary API Version 2012-01-25 119 Amazon Simple Workflow Service Developer Guide • You can't constrain this action's parameters. ListActivityTypes • You can't constrain this action's parameters. ListClosedWorkflowExecutions • tagFilter.tag – String constraint. The key is swf:tagFilter.tag • typeFilter.name – String constraint. The key is swf:typeFilter.name. • typeFilter.version – String constraint. The key is swf:typeFilter.version. Note ListClosedWorkflowExecutions requires typeFilter and tagFilter to be mutually exclusive. ListDomains • You can't constrain this action's parameters. ListOpenWorkflowExecutions • tagFilter.tag – String constraint. The key is swf:tagFilter.tag • typeFilter.name – String constraint. The key is swf:typeFilter.name. • typeFilter.version – String constraint. The key is swf:typeFilter.version. Note ListOpenWorkflowExecutions requires typeFilter and tagFilter to be mutually exclusive. ListWorkflowTypes API Summary API Version 2012-01-25 120 Amazon Simple Workflow Service Developer Guide • You can't constrain this action's parameters. PollForActivityTask • taskList.name – String constraint. The key is swf:taskList.name. PollForDecisionTask • taskList.name – String constraint. The key is swf:taskList.name. RecordActivityTaskHeartbeat • You can't constrain this action's parameters. RegisterActivityType • defaultTaskList.name – String constraint. The key is swf:defaultTaskList.name. • name – String constraint. The key is swf:name. • version – String constraint. The key is swf:version. RegisterDomain • name – The name of the domain being registered is available as the resource of this action. RegisterWorkflowType • defaultTaskList.name – String constraint. The key is swf:defaultTaskList.name. • name – String constraint. The key is swf:name. • version – String constraint. The key is swf:version. RequestCancelWorkflowExecution • You can't constrain this action's parameters. RespondActivityTaskCanceled API Summary API Version 2012-01-25 121 Amazon Simple Workflow Service Developer Guide • You can't constrain this action's parameters. RespondActivityTaskCompleted • You can't constrain this action's parameters. RespondActivityTaskFailed • You can't constrain this action's parameters. RespondDecisionTaskCompleted • decisions.member.N – Restricted indirectly through pseudo API permissions. For details, see Pseudo API. SignalWorkflowExecution • You can't constrain this action's parameters. StartWorkflowExecution • tagList.member.0 – String constraint. The key is swf:tagList.member.0 • tagList.member.1 – String constraint. The key is swf:tagList.member.1 • tagList.member.2 – String constraint. The key is swf:tagList.member.2 • tagList.member.3 – String constraint. The key is swf:tagList.member.3 • tagList.member.4 – String constraint. The key is swf:tagList.member.4 • taskList.name – String constraint. The key is swf:taskList.name. • workflowType.name – String constraint. The key is swf:workflowType.name. • workflowType.version – String constraint. The key is swf:workflowType.version. Note You can't constrain more than five tags. API Summary API Version 2012-01-25 122 Amazon Simple Workflow Service Developer Guide TerminateWorkflowExecution • You can't constrain this action's parameters. Pseudo API This section lists the members of the pseudo API, which represent the decisions included in RespondDecisionTaskCompleted. If you have granted permission to use RespondDecisionTaskCompleted, your policy can express permissions for the members of this API in the same way as the regular API. You can further restrict some members of the pseudo-API by setting conditions on one or more parameters. This section lists the pseudo API members, and briefly describes the parameters that can be constrained and the associated keys. Note The aws:SourceIP, aws:UserAgent, and aws:SecureTransport keys are not available for the pseudo API. If your intended security policy requires these keys to control access to the pseudo API, you can use them with the RespondDecisionTaskCompleted action. CancelTimer • You can't constrain this action's parameters. CancelWorkflowExecution • You can't constrain this action's parameters. CompleteWorkflowExecution • You can't constrain this action's parameters. ContinueAsNewWorkflowExecution • tagList.member.0 – String constraint. The key is swf:tagList.member.0 • tagList.member.1 – String constraint. The key is swf:tagList.member.1 • tagList.member.2 – String constraint. The key is swf:tagList.member.2 API Summary API Version 2012-01-25 123 Amazon Simple Workflow Service Developer Guide • tagList.member.3 – String constraint. The key is swf:tagList.member.3 • tagList.member.4 – String constraint. The key is swf:tagList.member.4 • taskList.name – String constraint. The key is swf:taskList.name. • workflowTypeVersion – String constraint. The key is swf:workflowTypeVersion. Note You can't constrain more than five tags. FailWorkflowExecution • You can't constrain this action's parameters. RecordMarker • You can't constrain this action's parameters. RequestCancelActivityTask • You can't constrain this action's parameters. RequestCancelExternalWorkflowExecution • You can't constrain this action's parameters.
|
swf-dg-047
|
swf-dg.pdf
| 47 |
key is swf:tagList.member.1 • tagList.member.2 – String constraint. The key is swf:tagList.member.2 API Summary API Version 2012-01-25 123 Amazon Simple Workflow Service Developer Guide • tagList.member.3 – String constraint. The key is swf:tagList.member.3 • tagList.member.4 – String constraint. The key is swf:tagList.member.4 • taskList.name – String constraint. The key is swf:taskList.name. • workflowTypeVersion – String constraint. The key is swf:workflowTypeVersion. Note You can't constrain more than five tags. FailWorkflowExecution • You can't constrain this action's parameters. RecordMarker • You can't constrain this action's parameters. RequestCancelActivityTask • You can't constrain this action's parameters. RequestCancelExternalWorkflowExecution • You can't constrain this action's parameters. ScheduleActivityTask • activityType.name – String constraint. The key is swf:activityType.name. • activityType.version – String constraint. The key is swf:activityType.version. • taskList.name – String constraint. The key is swf:taskList.name. SignalExternalWorkflowExecution • You can't constrain this action's parameters. API Summary API Version 2012-01-25 124 Amazon Simple Workflow Service Developer Guide StartChildWorkflowExecution • tagList.member.0 – String constraint. The key is swf:tagList.member.0 • tagList.member.1 – String constraint. The key is swf:tagList.member.1 • tagList.member.2 – String constraint. The key is swf:tagList.member.2 • tagList.member.3 – String constraint. The key is swf:tagList.member.3 • tagList.member.4 – String constraint. The key is swf:tagList.member.4 • taskList.name – String constraint. The key is swf:taskList.name. • workflowType.name – String constraint. The key is swf:workflowType.name. • workflowType.version – String constraint. The key is swf:workflowType.version. Note You can't constrain more than five tags. StartTimer • You can't constrain this action's parameters. Tag-based Policies Amazon SWF supports policies based on tags. For instance, you could restrict Amazon SWF domains that include a tag with the key environment and the value production: { "Version": "2012-10-17", "Statement": [ { "Effect": "Deny", "Action": "swf:*", "Resource": "arn:aws:swf:*:123456789012:/domain/*", "Condition": { "StringEquals": {"aws:ResourceTag/environment": "production"} } } ] Tag-based Policies API Version 2012-01-25 125 Amazon Simple Workflow Service } Developer Guide This policy will Deny the access to any domain that has been tagged as environment/ production. For more information on tagging, see: • Tags in Amazon SWF • Controlling Access Using IAM Tags Amazon VPC endpoints for Amazon SWF Note AWS PrivateLink support is currently available in the AWS Top Secret - East, AWS Secret Region, and China Regions only. If you use Amazon Virtual Private Cloud (Amazon VPC) to host your AWS resources, you can establish a connection between your Amazon VPC and Amazon Simple Workflow Service workflows. You can use this connection with your Amazon SWF workflows without crossing the public internet. Amazon VPC lets you launch AWS resources in a custom virtual network. You can use a VPC to control your network settings, such as the IP address range, subnets, route tables, and network gateways. For more information about VPCs, see the Amazon VPC User Guide. To connect your Amazon VPC to Amazon SWF you must first define an interface VPC endpoint, which lets you connect your VPC to other AWS services. The endpoint provides reliable, scalable connectivity, without requiring an internet gateway, network address translation (NAT) instance, or VPN connection. For more information, see Interface VPC Endpoints (AWS PrivateLink) in the Amazon VPC User Guide. Creating the Endpoint You can create an Amazon SWF endpoint in your VPC using the AWS Management Console, the AWS Command Line Interface (AWS CLI), an AWS SDK, the Amazon SWF API, or AWS CloudFormation. Amazon VPC endpoints API Version 2012-01-25 126 Amazon Simple Workflow Service Developer Guide For information about creating and configuring an endpoint using the Amazon VPC console or the AWS CLI, see Creating an Interface Endpoint in the Amazon VPC User Guide. Note When you create an endpoint, specify Amazon SWF as the service that you want your VPC to connect to. In the Amazon VPC console, service names vary based on the AWS Region. For example, in the AWS Top Secret - East Region, the service name for Amazon SWF is com.amazonaws.us-iso-east-1.swf. For information about creating and configuring an endpoint using AWS CloudFormation, see the AWS::EC2::VPCEndpoint resource in the AWS CloudFormation User Guide. Amazon VPC Endpoint Policies To control connectivity access to Amazon SWF you can attach an AWS Identity and Access Management (IAM) endpoint policy while creating an Amazon VPC endpoint. You can create complex IAM rules by attaching multiple endpoint policies. For more information, see: • Amazon Virtual Private Cloud Endpoint Policies for Amazon SWF • Controlling Access to Services with VPC Endpoints Amazon Virtual Private Cloud Endpoint Policies for Amazon SWF You can create an Amazon VPC endpoint policy for Amazon SWF in which you specify the following: • The principal that can perform actions. • The actions that can be performed. • The resources on which the actions can be performed. The following example shows an Amazon VPC endpoint policy that allows all Amazon SWF operations on a single domain for a specific IAM role. { "Version": "2012-10-17", Amazon VPC endpoints API Version
|
swf-dg-048
|
swf-dg.pdf
| 48 |
• Amazon Virtual Private Cloud Endpoint Policies for Amazon SWF • Controlling Access to Services with VPC Endpoints Amazon Virtual Private Cloud Endpoint Policies for Amazon SWF You can create an Amazon VPC endpoint policy for Amazon SWF in which you specify the following: • The principal that can perform actions. • The actions that can be performed. • The resources on which the actions can be performed. The following example shows an Amazon VPC endpoint policy that allows all Amazon SWF operations on a single domain for a specific IAM role. { "Version": "2012-10-17", Amazon VPC endpoints API Version 2012-01-25 127 Amazon Simple Workflow Service "Statement": [ { "Effect": "Allow", "Action": "swf:*", "Resource": "arn:aws:swf:*:123456789012:/domain/myDomain", "Principal": { "AWS": "arn:aws:iam::123456789012:role/MyRole" Developer Guide } } ] } • For more information about creating endpoint policies, see Controlling Access to Services with VPC Endpoints. • For information about how you can use IAM to control access to your AWS and Amazon SWF resources, see Identity and Access Management in Amazon Simple Workflow Service. Troubleshooting Amazon Simple Workflow Service identity and access Use the following information to help you diagnose and fix common issues that you might encounter when working with Amazon SWF and IAM. Topics • I am not authorized to perform an action in Amazon SWF • I am not authorized to perform iam:PassRole • I want to allow people outside of my AWS account to access my Amazon SWF resources I am not authorized to perform an action in Amazon SWF If you receive an error that you're not authorized to perform an action, your policies must be updated to allow you to perform the action. The following example error occurs when the mateojackson user tries to use the console to view details about a fictional my-example-widget resource but does not have the fictional swf:GetWidget permissions. User: arn:aws:iam::123456789012:user/mateojackson is not authorized to perform: swf:GetWidget on resource: my-example-widget Troubleshooting API Version 2012-01-25 128 Amazon Simple Workflow Service Developer Guide In this case, Mateo's policy must be updated to allow him to access the my-example-widget resource using the swf:GetWidget action. If you need help, contact your AWS administrator. Your administrator is the person who provided you with your sign-in credentials. I am not authorized to perform iam:PassRole If you receive an error that you're not authorized to perform the iam:PassRole action, your policies must be updated to allow you to pass a role to Amazon SWF. Some AWS services allow you to pass an existing role to that service instead of creating a new service role or service-linked role. To do this, you must have permissions to pass the role to the service. The following example error occurs when an IAM user named marymajor tries to use the console to perform an action in Amazon SWF. However, the action requires the service to have permissions that are granted by a service role. Mary does not have permissions to pass the role to the service. User: arn:aws:iam::123456789012:user/marymajor is not authorized to perform: iam:PassRole In this case, Mary's policies must be updated to allow her to perform the iam:PassRole action. If you need help, contact your AWS administrator. Your administrator is the person who provided you with your sign-in credentials. I want to allow people outside of my AWS account to access my Amazon SWF resources You can create a role that users in other accounts or people outside of your organization can use to access your resources. You can specify who is trusted to assume the role. For services that support resource-based policies or access control lists (ACLs), you can use those policies to grant people access to your resources. To learn more, consult the following: • To learn whether Amazon SWF supports these features, see How Amazon Simple Workflow Service works with IAM. Troubleshooting API Version 2012-01-25 129 Amazon Simple Workflow Service Developer Guide • To learn how to provide access to your resources across AWS accounts that you own, see Providing access to an IAM user in another AWS account that you own in the IAM User Guide. • To learn how to provide access to your resources to third-party AWS accounts, see Providing access to AWS accounts owned by third parties in the IAM User Guide. • To learn how to provide access through identity federation, see Providing access to externally authenticated users (identity federation) in the IAM User Guide. • To learn the difference between using roles and resource-based policies for cross-account access, see Cross account resource access in IAM in the IAM User Guide. Logging and Monitoring This section provides information about logging and monitoring Amazon SWF. Topics • Amazon SWF Metrics for CloudWatch • Viewing Amazon SWF Metrics for CloudWatch using the AWS Management Console • Recording API calls with AWS CloudTrail • EventBridge for Amazon SWF execution
|
swf-dg-049
|
swf-dg.pdf
| 49 |
in the IAM User Guide. • To learn how to provide access through identity federation, see Providing access to externally authenticated users (identity federation) in the IAM User Guide. • To learn the difference between using roles and resource-based policies for cross-account access, see Cross account resource access in IAM in the IAM User Guide. Logging and Monitoring This section provides information about logging and monitoring Amazon SWF. Topics • Amazon SWF Metrics for CloudWatch • Viewing Amazon SWF Metrics for CloudWatch using the AWS Management Console • Recording API calls with AWS CloudTrail • EventBridge for Amazon SWF execution status changes • Using AWS User Notifications with Amazon Simple Workflow Service Amazon SWF Metrics for CloudWatch Amazon SWF now provides metrics for CloudWatch that you can use to track your workflows and activities and set alarms on threshold values that you choose. You can view metrics using the AWS Management Console. For more information, see Viewing Amazon SWF Metrics for CloudWatch using the AWS Management Console. Topics • Reporting Units for Amazon SWF Metrics • API and Decision Event Metrics • Amazon SWF Metrics • Amazon SWF non-ASCII resource names and CloudWatch dimensions Logging and Monitoring API Version 2012-01-25 130 Amazon Simple Workflow Service Developer Guide Reporting Units for Amazon SWF Metrics Metrics that Report a Time Interval Some of the Amazon SWF metrics for CloudWatch are time intervals, always measured in milliseconds. The CloudWatch unit is reported as Time. These metrics generally correspond to stages of your workflow execution for which you can set workflow and activity timeouts, and have similar names. For example, the DecisionTaskStartToCloseTime metric measures the time it took for the decision task to complete after it began executing, which is the same time period for which you can set a DecisionTaskStartToCloseTimeout value. For a diagram of each of these workflow stages and to learn when they occur over the workflow and activity lifecycles, see Amazon SWF Timeout Types . Metrics that Report a Count Some of the Amazon SWF metrics for CloudWatch report results as a count. For example, WorkflowsCanceled, records a result as either one or zero, indicating whether or not the workflow was canceled. A value of zero doesn't indicate that the metric was not reported, only that the condition described by the metric did not occur. Some of the Amazon SWF metrics for CloudWatch that report a Count in CloudWatch are a count per second. For instance, ProvisionedRefillRate, which is reported as a Count in CloudWatch, represents a rate of the Count of requests per second. For count metrics, minimum and maximum will always be either zero or one, but average will be a value ranging from zero to one. API and Decision Event Metrics You can monitor both API and Decision events in CloudWatch to provide insight into your usage and capacity. See deciders in the Basic workflow concepts in Amazon SWF section, and the Decision topic in the Amazon Simple Workflow Service API Reference. You can also monitor these limits to alarm when you are approaching your Amazon SWF throttling limits. See Amazon SWF throttling quotas for a description of these limits and their default settings. These limits are designed to prevent incorrect workflows from consuming excessive system resources. To request an increase to your limits see: ???. Amazon SWF Metrics for CloudWatch API Version 2012-01-25 131 Amazon Simple Workflow Service Developer Guide As a best practice, you should configure CloudWatch alarms at around 60% of your API or decision events capacity. This will allow you to either adjust your workflow, or request a service limit increase, before Amazon SWF throttling is enabled. Depending on the burstiness of your calls, you can configure different alarms to notify when you are approaching your service limits: • If your traffic has significant spikes, set an alarm at 60% of your ProvisionedBucketSize limits. • If your calls have a relatively steady rate, set an alarm at 60% of your ProvisionedRefillRate limit for your related API and decision events. Amazon SWF Metrics The following metrics are available for Amazon SWF: Metric Description DecisionTaskSchedu leToStartTime The time interval, in milliseconds, between the time that the decision task was scheduled and when it was picked up by a worker and started. CloudWatch Units: Time Dimensions: Domain, WorkflowTypeName, WorkflowT ypeVersion Valid statistics: Average, Minimum, Maximum DecisionTaskStartT oCloseTime The time interval, in milliseconds, between the time that the decision task was started and when it closed. CloudWatch Units: Time Dimensions: Domain, WorkflowTypeName, WorkflowT ypeVersion Valid statistics: Average, Minimum, Maximum DecisionTasksCompl The count of decision tasks that have been completed. eted CloudWatch Units: Count Amazon SWF Metrics for CloudWatch API Version 2012-01-25 132 Amazon Simple Workflow Service Developer Guide Metric Description Dimensions: Domain, WorkflowTypeName, WorkflowT ypeVersion Valid statistics: Sum PendingTasks The count of pending tasks in a 1
|
swf-dg-050
|
swf-dg.pdf
| 50 |
up by a worker and started. CloudWatch Units: Time Dimensions: Domain, WorkflowTypeName, WorkflowT ypeVersion Valid statistics: Average, Minimum, Maximum DecisionTaskStartT oCloseTime The time interval, in milliseconds, between the time that the decision task was started and when it closed. CloudWatch Units: Time Dimensions: Domain, WorkflowTypeName, WorkflowT ypeVersion Valid statistics: Average, Minimum, Maximum DecisionTasksCompl The count of decision tasks that have been completed. eted CloudWatch Units: Count Amazon SWF Metrics for CloudWatch API Version 2012-01-25 132 Amazon Simple Workflow Service Developer Guide Metric Description Dimensions: Domain, WorkflowTypeName, WorkflowT ypeVersion Valid statistics: Sum PendingTasks The count of pending tasks in a 1 minute interval for a specific Task List. CloudWatch Units: Count Dimensions: Domain, TaskListName Valid statistics: Sum StartedDecisionTas ksTimedOutOnClose The count of decision tasks that started but timed out on closing. CloudWatch Units: Count Dimensions: Domain, WorkflowTypeName, WorkflowT ypeVersion Valid statistics: Sum WorkflowStartToClo seTime The time, in milliseconds, between the time the workflow started and when it closed. CloudWatch Units: Time Dimensions: Domain, WorkflowTypeName, WorkflowT ypeVersion Valid statistics: Average, Minimum, Maximum Amazon SWF Metrics for CloudWatch API Version 2012-01-25 133 Amazon Simple Workflow Service Developer Guide Metric Description WorkflowsCanceled The count of workflows that were canceled. CloudWatch Units: Count Dimensions: Domain, WorkflowTypeName, WorkflowT ypeVersion Valid statistics: Sum WorkflowsCompleted The count of workflows that completed. CloudWatch Units: Count Dimensions: Domain, WorkflowTypeName, WorkflowT ypeVersion Valid statistics: Sum WorkflowsContinued The count of workflows that continued as new. AsNew CloudWatch Units: Count Dimensions: Domain, WorkflowTypeName, WorkflowT ypeVersion Valid statistics: Sum WorkflowsFailed The count of workflows that failed. CloudWatch Units: Count Dimensions: Domain, WorkflowTypeName, WorkflowT ypeVersion Valid statistics: Sum Amazon SWF Metrics for CloudWatch API Version 2012-01-25 134 Amazon Simple Workflow Service Developer Guide Metric Description WorkflowsTerminated The count of workflows that were terminated. CloudWatch Units: Count Dimensions: Cause, Domain, WorkflowTypeName, WorkflowTypeVersion Valid statistics: Sum WorkflowsTimedOut The count of workflows that timed out, for any reason. CloudWatch Units: Count Dimensions: Domain, WorkflowTypeName, WorkflowT ypeVersion Valid statistics: Sum ActivityTaskSchedu leToCloseTime The time interval, in milliseconds, between the time when the activity was scheduled and when it closed. CloudWatch Units: Time Dimensions: Domain, ActivityTypeName, ActivityT ypeVersion Valid statistics: Average, Minimum, Maximum ActivityTaskSchedu leToStartTime The time interval, in milliseconds, between the time when the activity task was scheduled and when it started. CloudWatch Units: Time Dimensions: Domain, ActivityTypeName, ActivityT ypeVersion Valid statistics: Average, Minimum, Maximum Amazon SWF Metrics for CloudWatch API Version 2012-01-25 135 Amazon Simple Workflow Service Developer Guide Metric Description ActivityTaskStartT oCloseTime The time interval, in milliseconds, between the time when the activity task started and when it closed. CloudWatch Units: Time Dimensions: Domain, ActivityTypeName, ActivityT ypeVersion Valid statistics: Average, Minimum, Maximum ActivityTasksCance The count of activity tasks that were canceled. led CloudWatch Units: Count Dimensions: Domain, ActivityTypeName, ActivityT ypeVersion Valid statistics: Sum ActivityTasksCompl The count of activity tasks that completed. eted CloudWatch Units: Count Dimensions: Domain, ActivityTypeName, ActivityT ypeVersion Valid statistics: Sum ActivityTasksFailed The count of activity tasks that failed. CloudWatch Units: Count Dimensions: Domain, ActivityTypeName, ActivityT ypeVersion Valid statistics: Sum Amazon SWF Metrics for CloudWatch API Version 2012-01-25 136 Amazon Simple Workflow Service Developer Guide Metric Description ScheduledActivityT asksTimedOutOnClose The count of activity tasks that were scheduled but timed out on close. CloudWatch Units: Count Dimensions: Domain, ActivityTypeName, ActivityT ypeVersion Valid statistics: Sum ScheduledActivityT asksTimedOutOnStart The count of activity tasks that were scheduled but timed out on start. CloudWatch Units: Count Dimensions: Domain, ActivityTypeName, ActivityT ypeVersion Valid statistics: Sum StartedActivityTas ksTimedOutOnClose The count of activity tasks that were started but timed out on close. CloudWatch Units: Count Dimensions: Domain, ActivityTypeName, ActivityT ypeVersion Valid statistics: Sum StartedActivityTas ksTimedOutOnHeartb The count of activity tasks that were started but timed out due to a heartbeat timeout. eat CloudWatch Units: Count Dimensions: Domain, ActivityTypeName, ActivityT ypeVersion Valid statistics: Sum Amazon SWF Metrics for CloudWatch API Version 2012-01-25 137 Amazon Simple Workflow Service Developer Guide Metric Description ThrottledEvents The count of requests that have been throttled. CloudWatch Units: Count Dimensions: APIName, DecisionName, Throttlin gScope Valid statistics: Sum ProvisionedBucketS The count of available requests per second. ize Dimensions: APIName, DecisionName Valid statistics: Minimum ConsumedCapacity The count of requests per second. CloudWatch Units: Count Dimensions: APIName, DecisionName Valid statistics: Sum ConsumedLimit The amount of general limit that has been consumed. Dimensions: GeneralLimitType ProvisionedRefillR ate The count of requests per second that are allowed into the bucket. Dimensions: APIName, DecisionName Valid statistics: Minimum ProvisionedLimit The amount of general limit that is provisioned to the account. Dimensions: GeneralLimitType Amazon SWF Metrics for CloudWatch API Version 2012-01-25 138 Amazon Simple Workflow Service Developer Guide Dimension Domain Description Filters data to the Amazon SWF domain that the workflow or activity is running in. ActivityTypeName Filters data to the name of the activity type. ActivityTypeVersion Filters data to the version of the activity type. WorkflowTypeName Filters data to the name of the workflow type for this workflow
|
swf-dg-051
|
swf-dg.pdf
| 51 |
ProvisionedRefillR ate The count of requests per second that are allowed into the bucket. Dimensions: APIName, DecisionName Valid statistics: Minimum ProvisionedLimit The amount of general limit that is provisioned to the account. Dimensions: GeneralLimitType Amazon SWF Metrics for CloudWatch API Version 2012-01-25 138 Amazon Simple Workflow Service Developer Guide Dimension Domain Description Filters data to the Amazon SWF domain that the workflow or activity is running in. ActivityTypeName Filters data to the name of the activity type. ActivityTypeVersion Filters data to the version of the activity type. WorkflowTypeName Filters data to the name of the workflow type for this workflow execution. WorkflowTypeVersion Filters data to the version of the workflow type for this workflow execution. APIName Filters data to an API of the specified API name. DecisionName Filters data to the specified Decision name. TaskListName Filters data to the specified Task List name. TaskListClassifica tion Filters data to the classification of the task list. Value is "D" for Decision Task Lists and "A" for Activity Task Lists. ThrottlingScope Filters data to the specified throttling scope. Value is "Account" when exceeding account-level quota, or "Workflow" when exceeding workflow-level quota. Amazon SWF non-ASCII resource names and CloudWatch dimensions Amazon SWF allows non-ASCII characters in resource names such as TaskList and DomainName. However, the dimension values of CloudWatch metrics can only contain printable ASCII characters. To ensure that Amazon SWF uses dimension values that are compatible with CloudWatch requirements, Amazon SWF resource names that do not meet these requirements are converted and will have a checksum appended as follows: • Any non-ASCII character is replaced with ?. • The input string or converted string will, if necessary, be truncated. This ensures that when the checksum is appended, the new string length will not exceed the CloudWatch maximum. Amazon SWF Metrics for CloudWatch API Version 2012-01-25 139 Amazon Simple Workflow Service Developer Guide • Because any non-ASCII characters are converted to ?, some CloudWatch metric dimension values that were different before conversion may appear to be the same after conversion. To help differentiate between them, an underscore (_) followed by the first 16 characters of the SHA256 checksum of the original resource name is appended to the resource name. Conversion examples: • test àpple would be converted to test ?pple_82cc5b8e3a771d12 • àòà would be converted to ???_2fec5edbb2c05c22. • The TaskList names àpplé and âpplè would both be converted to ?ppl?, and would be identical. Appending the checksum returns distinct values, ?ppl?_f39a36df9d85a69d and ? ppl?_da3efb4f11dd0f7f. Tip You can generate your own SHA256 checksum. For example, to use the shasum command line tool: echo -n "<the original resource name>" | shasum -a 256 | cut -c1-16 Viewing Amazon SWF Metrics for CloudWatch using the AWS Management Console Amazon CloudWatch provides a number of viewable metrics for Amazon SWF workflows and activities. You can view the metrics and set alarms for your Amazon SWF workflow executions using the AWS Management Console. You must be logged in to the console to proceed. For a description of each of the available metrics, see Amazon SWF Metrics for CloudWatch. Topics • Viewing Metrics • Setting Alarms Viewing Amazon SWF Metrics API Version 2012-01-25 140 Amazon Simple Workflow Service Viewing Metrics To view your metrics for Amazon SWF Developer Guide 1. Sign in to the AWS Management Console and open the CloudWatch console at https:// console.aws.amazon.com/cloudwatch/. 2. In the navigation pane, under Metrics, choose SWF. If you have run any workflow executions recently, you will see two lists of metrics presented: Workflow Type Metrics and Activity Type Metrics. Viewing Amazon SWF Metrics API Version 2012-01-25 141 Amazon Simple Workflow Service Developer Guide Note Initially you might only see the Workflow Type Metrics; Activity Type Metrics are presented in the same view, but you may need to scroll down to see them. Up to 50 of the most recent metrics will be shown at a time, divided among workflow and activity metrics. You can use the interactive headings above each column in the list to sort your metrics using any of the provided dimensions. For workflows, the dimensions are Domain, WorkflowTypeName, WorkflowTypeVersion, and Metric Name. For activities, the dimensions are Domain, ActivityTypeName, ActivityTypeVersion, and Metric Name. The various types of metrics are described in Amazon SWF Metrics for CloudWatch. You can view graphs for metrics by choosing the boxes next to the metric row in the list, and change the graph parameters using the Time Range controls to the right of the graph view. For details about any point on the graph, place your cursor over the graph point. A detail of the point's dimensions will be shown. Viewing Amazon SWF Metrics API Version 2012-01-25 142 Amazon Simple Workflow Service Developer Guide For more information about working with CloudWatch metrics, see Viewing, Graphing, and Publishing Metrics in the Amazon CloudWatch User Guide.
|
swf-dg-052
|
swf-dg.pdf
| 52 |
described in Amazon SWF Metrics for CloudWatch. You can view graphs for metrics by choosing the boxes next to the metric row in the list, and change the graph parameters using the Time Range controls to the right of the graph view. For details about any point on the graph, place your cursor over the graph point. A detail of the point's dimensions will be shown. Viewing Amazon SWF Metrics API Version 2012-01-25 142 Amazon Simple Workflow Service Developer Guide For more information about working with CloudWatch metrics, see Viewing, Graphing, and Publishing Metrics in the Amazon CloudWatch User Guide. Setting Alarms You can use CloudWatch alarms to perform actions such as notifying you when an alarm threshold is reached. For example, you can set an alarm to send a notification to an SNS topic or to send an email when the WorkflowsFailed metric rises above a certain threshold. To set an alarm on any of your metrics 1. Choose a single metric by choosing its box. 2. To the right of the graph, in the Tools controls, choose Create Alarm. 3. On the Define Alarm screen, enter the alarm threshold value, period parameters, and actions to take. Viewing Amazon SWF Metrics API Version 2012-01-25 143 Amazon Simple Workflow Service Developer Guide For more information about setting and using CloudWatch alarms, see Creating Amazon CloudWatch Alarms in the Amazon CloudWatch User Guide. Recording API calls with AWS CloudTrail Amazon Simple Workflow Service is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service. CloudTrail captures all API calls for Amazon SWF as events. The calls captured include calls from the Amazon SWF console and code calls to the Amazon SWF API operations. Using the information collected by CloudTrail, you can determine the request that was made to Amazon SWF, the IP address from which the request was made, when it was made, and additional details. Every event or log entry contains information about who generated the request. The identity information helps you determine the following: • Whether the request was made with root user or user credentials. • Whether the request was made on behalf of an IAM Identity Center user. Recording to CloudTrail API Version 2012-01-25 144 Amazon Simple Workflow Service Developer Guide • Whether the request was made with temporary security credentials for a role or federated user. • Whether the request was made by another AWS service. CloudTrail is active in your AWS account when you create the account and you automatically have access to the CloudTrail Event history. The CloudTrail Event history provides a viewable, searchable, downloadable, and immutable record of the past 90 days of recorded management events in an AWS Region. For more information, see Working with CloudTrail Event history in the AWS CloudTrail User Guide. There are no CloudTrail charges for viewing the Event history. For an ongoing record of events in your AWS account past 90 days, create a trail or a CloudTrail Lake event data store. CloudTrail trails A trail enables CloudTrail to deliver log files to an Amazon S3 bucket. All trails created using the AWS Management Console are multi-Region. You can create a single-Region or a multi-Region trail by using the AWS CLI. Creating a multi-Region trail is recommended because you capture activity in all AWS Regions in your account. If you create a single-Region trail, you can view only the events logged in the trail's AWS Region. For more information about trails, see Creating a trail for your AWS account and Creating a trail for an organization in the AWS CloudTrail User Guide. You can deliver one copy of your ongoing management events to your Amazon S3 bucket at no charge from CloudTrail by creating a trail, however, there are Amazon S3 storage charges. For more information about CloudTrail pricing, see AWS CloudTrail Pricing. For information about Amazon S3 pricing, see Amazon S3 Pricing. CloudTrail Lake event data stores CloudTrail Lake lets you run SQL-based queries on your events. CloudTrail Lake converts existing events in row-based JSON format to Apache ORC format. ORC is a columnar storage format that is optimized for fast retrieval of data. Events are aggregated into event data stores, which are immutable collections of events based on criteria that you select by applying advanced event selectors. The selectors that you apply to an event data store control which events persist and are available for you to query. For more information about CloudTrail Lake, see Working with AWS CloudTrail Lake in the AWS CloudTrail User Guide. CloudTrail Lake event data stores and queries incur costs. When you create an event data store, you choose the pricing option you want to use for the event data store. The pricing Recording to CloudTrail API Version 2012-01-25
|
swf-dg-053
|
swf-dg.pdf
| 53 |
are aggregated into event data stores, which are immutable collections of events based on criteria that you select by applying advanced event selectors. The selectors that you apply to an event data store control which events persist and are available for you to query. For more information about CloudTrail Lake, see Working with AWS CloudTrail Lake in the AWS CloudTrail User Guide. CloudTrail Lake event data stores and queries incur costs. When you create an event data store, you choose the pricing option you want to use for the event data store. The pricing Recording to CloudTrail API Version 2012-01-25 145 Amazon Simple Workflow Service Developer Guide option determines the cost for ingesting and storing events, and the default and maximum retention period for the event data store. For more information about CloudTrail pricing, see AWS CloudTrail Pricing. Data events in CloudTrail Data events provide information about the resource operations performed on or in a resource (for example, reading or writing to an Amazon S3 object). These are also known as data plane operations. Data events are often high-volume activities. By default, CloudTrail doesn’t log data events. The CloudTrail Event history doesn't record data events. Additional charges apply for data events. For more information about CloudTrail pricing, see AWS CloudTrail Pricing. You can log data events for the Amazon SWF resource types by using the CloudTrail console, AWS CLI, or CloudTrail API operations. For more information about how to log data events, see Logging data events with the AWS Management Console and Logging data events with the AWS Command Line Interface in the AWS CloudTrail User Guide. The following table lists the Amazon SWF resource types for which you can log data events. The Data event type column shows the value to choose from the Data event type list on the CloudTrail console. The resources.type value column shows the resources.type value, which you would specify when configuring advanced event selectors using the AWS CLI or CloudTrail APIs. The Data APIs logged to CloudTrail column shows the API calls logged to CloudTrail for the resource type. You can configure advanced event selectors to filter on the eventName, readOnly, and resources.ARN fields to log only those events that are important to you. For more information about these fields, see AdvancedFieldSelector in the AWS CloudTrail API Reference. Data event type resources.type value Data APIs logged to CloudTrail SWF Domain AWS::SWF::Domain Workflow Events • CountClosedWorkflo wExecutions • CountOpenWorkflowE xecutions Recording to CloudTrail API Version 2012-01-25 146 Amazon Simple Workflow Service Developer Guide Data event type resources.type value Data APIs logged to CloudTrail • DescribeWorkflowExecution • ListClosedWorkflow Executions • ListOpenWorkflowEx ecutions • GetWorkflowExecuti onHistory • RequestCancelWorkf lowExecution • SignalWorkflowExecution • StartWorkflowExecution • TerminateWorkflowE xecution Task Events • CountPendingActivityTasks • PollForDecisionTask • PollForActivityTask • RecordActivityTask Heartbeat • RespondActivityTas kCanceled • RespondActivityTas kCompleted • RespondActivityTaskFailed • RespondDecisionTas kCompleted Decision Events Recording to CloudTrail API Version 2012-01-25 147 Amazon Simple Workflow Service Developer Guide Data event type resources.type value Data APIs logged to CloudTrail • CancelTimer • CancelWorkflowExecution • CompleteWorkflowEx ecution • ContinueAsNewWorkf lowExecution • FailWorkflowExecution • RecordMarker • RequestCancelActivityTask • RequestCancelExter nalWorkflowExecution • ScheduleActivityTask • ScheduleLambdaFunction • SignalExternalWork flowExecution • StartChildWorkflow Execution • StartTimer CloudTrail events and RespondDecisionTaskCompleted The RespondDecisionTaskCompleted action takes a list of decisions in the request payload. A completed call will emit N+1 CloudTrail data events, one for each decision plus one for the API call itself. The data events and API event will all have the same request id. Management events in CloudTrail Management events provide information about management operations that are performed on resources in your AWS account. These are also known as control plane operations. By default, CloudTrail logs management events. Recording to CloudTrail API Version 2012-01-25 148 Amazon Simple Workflow Service Developer Guide Amazon Simple Workflow Service logs the following control plane operations to CloudTrail as management events. Domain Events • RegisterDomain • DescribeDomain • ListDomains • DeprecateDomain • UndeprecateDomain Activity Events • RegisterActivityType • DescribeActivityType • ListActivityTypes • DeprecateActivityType • UndeprecateActivityType • DeleteActivityType WorkflowType Events • RegisterWorkflowType • DescribeWorkflowType • ListWorkflowTypes • DeprecateWorkflowType • UndeprecateWorkflowType • DeleteWorkflowType Tag Events • TagResource • UntagResource Recording to CloudTrail API Version 2012-01-25 149 Amazon Simple Workflow Service • ListTagsforResource Example event Developer Guide An event represents a single request from any source and includes information about the requested API operation, the date and time of the operation, request parameters, and so on. CloudTrail log files aren't an ordered stack trace of the public API calls, so events don't appear in any specific order. The following example shows a CloudTrail event that demonstrates the CountClosedWorkflowExecutions operation. { "eventVersion": "1.09", "userIdentity": { "type": "AssumedRole", "principalId": "1234567890abcdef02345:admin", "arn": "arn:aws:sts::111122223333:assumed-role/Admin/admin", "accountId": "111122223333", "accessKeyId": "abcdef01234567890abc", "sessionContext": { "sessionIssuer": { "type": "Role", "principalId": "1234567890abcdef02345", "arn": "arn:aws:iam::111122223333:role/Admin", "accountId": "111122223333", "userName": "Admin" }, "attributes": { "creationDate": "2023-11-23T16:37:38Z", "mfaAuthenticated": "false" }
|
swf-dg-054
|
swf-dg.pdf
| 54 |
Guide An event represents a single request from any source and includes information about the requested API operation, the date and time of the operation, request parameters, and so on. CloudTrail log files aren't an ordered stack trace of the public API calls, so events don't appear in any specific order. The following example shows a CloudTrail event that demonstrates the CountClosedWorkflowExecutions operation. { "eventVersion": "1.09", "userIdentity": { "type": "AssumedRole", "principalId": "1234567890abcdef02345:admin", "arn": "arn:aws:sts::111122223333:assumed-role/Admin/admin", "accountId": "111122223333", "accessKeyId": "abcdef01234567890abc", "sessionContext": { "sessionIssuer": { "type": "Role", "principalId": "1234567890abcdef02345", "arn": "arn:aws:iam::111122223333:role/Admin", "accountId": "111122223333", "userName": "Admin" }, "attributes": { "creationDate": "2023-11-23T16:37:38Z", "mfaAuthenticated": "false" } } }, "eventTime": "2023-11-23T17:52:46Z", "eventSource": "swf.amazonaws.com", "eventName": "CountClosedWorkflowExecutions", "awsRegion": "us-east-1", "sourceIPAddress": "198.51.100.42", "userAgent": "aws-internal/3 aws-sdk-java/1.11.42", "requestParameters": { "domain": "nsg-domain", Recording to CloudTrail API Version 2012-01-25 150 Developer Guide Amazon Simple Workflow Service "closeTimeFilter": { "oldestDate": "Nov 23, 2023 5:52:46 PM", "latestDate": "Nov 23, 2023 5:52:46 PM" } }, "responseElements": null, "requestID": "a1b2c3d4-5678-90ab-cdef-EXAMPLEaaaaa", "eventID": "a1b2c3d4-5678-90ab-cdef-EXAMPLEbbbbb", "readOnly": true, "resources": [ { "accountId": "111122223333", "type": "AWS::SWF::Domain", "ARN": "arn:aws:swf:us-east-1:111122223333:/domain/nsg-domain" } ], "eventType": "AwsApiCall", "managementEvent": false, "recipientAccountId": "111122223333", "eventCategory": "Data", "tlsDetails": { "clientProvidedHostHeader": "swf.example.amazondomains.com" } } For information about CloudTrail record contents, see CloudTrail record contents in the AWS CloudTrail User Guide. EventBridge for Amazon SWF execution status changes You use Amazon EventBridge to respond to state changes or events in an AWS resource. When Amazon SWF emits an event, it always goes to the default EventBridge event bus for your account. You can create a rule for events, associate it with the default event bus, and specify a target action to take when EventBridge receives an event that matches the rule. In this way, you can monitor your workflows without having to constantly poll using the GetWorkflowExecutionHistory API. Based on changes in workflow executions, you can use an EventBridge target to call AWS Lambda functions, publish messages to Amazon Simple Notification Service (Amazon SNS) topics, and more. You can see the full contents of an execution status change event using DescribeWorkflowExecution. EventBridge for Amazon SWF API Version 2012-01-25 151 Amazon Simple Workflow Service Developer Guide For more information, see the Amazon EventBridge User Guide. EventBridge events The history event types contain the execution state changes. The detail section of each event contains at least the following parameters: • eventId: the event ID shown by GetWorkflowExecutionHistory. • workflowExecutionDetail: the state of the workflow when the event was emitted. • eventType: the history event type, one of the following: • ActivityTaskCanceled • ActivityTaskFailed • ActivityTaskTimedOut • WorkflowExecutionCanceled • WorkflowExecutionCompleted • WorkflowExecutionFailed • WorkflowExecutionStarted • WorkflowExecutionTerminated • WorkflowExecutionTimedOut • WorkflowExecutionContinuedAsNew • CancelTimerFailed • CancelWorkflowExecutionFailed • ChildWorkflowExecutionFailed • ChildWorkflowExecutionTimedOut • CompleteWorkflowExecutionFailed • ContinueAsNewWorkflowExecutionFailed • DecisionTaskTimedOut • FailWorkflowExecutionFailed • RecordMarkerFailed • RequestCancelActivityTaskFailed • RequestCancelExternalWorkflowExecutionFailed EventBridge for Amazon SWF API Version 2012-01-25 152 • ScheduleActivityTaskFailed Amazon Simple Workflow Service Developer Guide • SignalExternalWorkflowExecutionFailed • StartActivityTaskFailed • StartChildWorkflowExecutionFailed • StartTimerFailed • TimerCanceled • LambdaFunctionFailed • LambdaFunctionTimedOut • StartLambdaFunctionFailed • ScheduleLambdaFunctionFailed Amazon SWF event examples The following are examples of Amazon SWF sending events to EventBridge: Topics • Execution started • Execution completed • Execution failed • Execution timed out • Execution terminated In each case, the detail section in the event data provides the same information as the DescribeWorkflowExecution API. The executionStatus field indicates the status of the execution at the time the event was sent, either OPEN or CLOSED. Execution started { "version": "0", "id": "444444444444", "detail-type": "Simple Workflow Execution State Change", "source": "aws.swf", "account": "444444444444", "time": "2020-05-08T15:57:38Z", "region": "us-east-1", "resources": [ EventBridge for Amazon SWF API Version 2012-01-25 153 Amazon Simple Workflow Service Developer Guide "arn:aws:swf:us-east-1:444444444444:/domain/SimpleWorkflowUserSimulator" ], "detail": { "eventId": 1, "eventType": "WorkflowExecutionStarted", "workflowExecutionDetail": { "executionInfo": { "execution": { "workflowId": "123456789012", "runId": "AKIAIOSFODNN7EXAMPLE" }, "workflowType": { "name": "SimpleWorkflowUserSimulator", "version": "myWorkflow" }, "startTimestamp": 1588953458484, "closeTimestamp": null, "executionStatus": "OPEN", "closeStatus": null, "parent": null, "parentExecutionArn": null, "tagList": null, "cancelRequested": false }, "executionConfiguration": { "taskStartToCloseTimeout": "60", "executionStartToCloseTimeout": "1000", "taskList": { "name": "444444444444" }, "taskPriority": null, "childPolicy": "ABANDON", "lambdaRole": "arn:aws:iam::444444444444:role/BasicSWFLambdaExecution" }, "openCounts": { "openActivityTasks": 0, "openDecisionTasks": 1, "openTimers": 0, "openChildWorkflowExecutions": 0, "openLambdaFunctions": 0 }, "latestActivityTaskTimestamp": null, } } EventBridge for Amazon SWF API Version 2012-01-25 154 Developer Guide Amazon Simple Workflow Service } Execution completed { "version": "0", "id": "1111-2222-3333", "detail-type": "Simple Workflow Execution State Change", "source": "aws.swf", "account": "444455556666", "time": "2020-05-08T15:57:39Z", "region": "us-east-1", "resources": [ "arn:aws:swf:us-east-1:444455556666:/domain/SimpleWorkflowUserSimulator" ], "detail": { "eventId": 35, "eventType": "WorkflowExecutionCompleted", "workflowExecutionDetail": { "executionInfo": { "execution": { "workflowId": "1234-5678-9012", "runId": "777788889999" }, "workflowType": { "name": "SimpleWorkflowUserSimulator", "version": "myWorkflow" }, "startTimestamp": 1588953458820, "closeTimestamp": 1588953459448, "executionStatus": "CLOSED", "closeStatus": "COMPLETED", "parent": null, "parentExecutionArn": null, "tagList": null, "cancelRequested": false }, "executionConfiguration": { "taskStartToCloseTimeout": "60", "executionStartToCloseTimeout": "1000", "taskList": { "name": "1111-1111-1111" }, EventBridge for Amazon SWF API Version 2012-01-25 155 Amazon Simple Workflow Service Developer Guide "taskPriority": null, "childPolicy": "ABANDON", "lambdaRole": "arn:aws:iam::444455556666:role/BasicSWFLambdaExecution" }, "openCounts": { "openActivityTasks": 0, "openDecisionTasks": 0, "openTimers": 0, "openChildWorkflowExecutions": 0, "openLambdaFunctions": 0 }, "latestActivityTaskTimestamp": 1588953459402, } } } Execution failed { "version": "0",
|
swf-dg-055
|
swf-dg.pdf
| 55 |
"resources": [ "arn:aws:swf:us-east-1:444455556666:/domain/SimpleWorkflowUserSimulator" ], "detail": { "eventId": 35, "eventType": "WorkflowExecutionCompleted", "workflowExecutionDetail": { "executionInfo": { "execution": { "workflowId": "1234-5678-9012", "runId": "777788889999" }, "workflowType": { "name": "SimpleWorkflowUserSimulator", "version": "myWorkflow" }, "startTimestamp": 1588953458820, "closeTimestamp": 1588953459448, "executionStatus": "CLOSED", "closeStatus": "COMPLETED", "parent": null, "parentExecutionArn": null, "tagList": null, "cancelRequested": false }, "executionConfiguration": { "taskStartToCloseTimeout": "60", "executionStartToCloseTimeout": "1000", "taskList": { "name": "1111-1111-1111" }, EventBridge for Amazon SWF API Version 2012-01-25 155 Amazon Simple Workflow Service Developer Guide "taskPriority": null, "childPolicy": "ABANDON", "lambdaRole": "arn:aws:iam::444455556666:role/BasicSWFLambdaExecution" }, "openCounts": { "openActivityTasks": 0, "openDecisionTasks": 0, "openTimers": 0, "openChildWorkflowExecutions": 0, "openLambdaFunctions": 0 }, "latestActivityTaskTimestamp": 1588953459402, } } } Execution failed { "version": "0", "id": "1111-2222-3333", "detail-type": "Simple Workflow Execution State Change", "source": "aws.swf", "account": "444455556666", "time": "2020-05-08T15:57:38Z", "region": "us-east-1", "resources": [ "arn:aws:swf:us-east-1:444455556666:/domain/SimpleWorkflowUserSimulator" ], "detail": { "eventId": 11, "eventType": "WorkflowExecutionFailed", "workflowExecutionDetail": { "executionInfo": { "execution": { "workflowId": "1234-5678-9012", "runId": "777788889999" }, "workflowType": { "name": "SimpleWorkflowUserSimulator", "version": "myWorkflow" }, "startTimestamp": 1588953158481, EventBridge for Amazon SWF API Version 2012-01-25 156 Amazon Simple Workflow Service Developer Guide "closeTimestamp": 1588953458560, "executionStatus": "CLOSED", "closeStatus": "FAILED", "parent": null, "parentExecutionArn": null, "tagList": null, "cancelRequested": false }, "executionConfiguration": { "taskStartToCloseTimeout": "60", "executionStartToCloseTimeout": "1000", "taskList": { "name": "1111-1111-1111" }, "taskPriority": null, "childPolicy": "ABANDON", "lambdaRole": "arn:aws:iam::444455556666:role/BasicSWFLambdaExecution" }, "openCounts": { "openActivityTasks": 0, "openDecisionTasks": 0, "openTimers": 0, "openChildWorkflowExecutions": 0, "openLambdaFunctions": 0 }, "latestActivityTaskTimestamp": null, } } } Execution timed out { "version": "0", "id": "1111-2222-3333", "detail-type": "Simple Workflow Execution State Change", "source": "aws.swf", "account": "444455556666", "time": "2020-05-05T17:26:30Z", "region": "us-east-1", "resources": [ "arn:aws:swf:us-east-1:444455556666:/domain/SimpleWorkflowUserSimulator" ], EventBridge for Amazon SWF API Version 2012-01-25 157 Developer Guide Amazon Simple Workflow Service "detail": { "eventId": 6, "eventType": "WorkflowExecutionTimedOut", "workflowExecutionDetail": { "executionInfo": { "execution": { "workflowId": "1234-5678-9012", "runId": "777788889999" }, "workflowType": { "name": "SimpleWorkflowUserSimulator", "version": "myWorkflow" }, "startTimestamp": 1588698073748, "closeTimestamp": 1588699590745, "executionStatus": "CLOSED", "closeStatus": "TIMED_OUT", "parent": null, "parentExecutionArn": null, "tagList": null, "cancelRequested": false }, "executionConfiguration": { "taskStartToCloseTimeout": "60", "executionStartToCloseTimeout": "1000", "taskList": { "name": "1111-1111-1111" }, "taskPriority": null, "childPolicy": "ABANDON", "lambdaRole": "arn:aws:iam::444455556666:role/BasicSWFLambdaExecution" }, "openCounts": { "openActivityTasks": 1, "openDecisionTasks": 0, "openTimers": 0, "openChildWorkflowExecutions": 0, "openLambdaFunctions": 0 }, "latestActivityTaskTimestamp": 1588699585802, } } } EventBridge for Amazon SWF API Version 2012-01-25 158 Developer Guide Amazon Simple Workflow Service Execution terminated { "version": "0", "id": "1111-2222-3333", "detail-type": "Simple Workflow Execution State Change", "source": "aws.swf", "account": "444455556666", "time": "2020-05-08T22:37:26Z", "region": "us-east-1", "resources": [ "arn:aws:swf:us-east-1:444455556666:/domain/canary" ], "detail": { "eventId": 48, "eventType": "WorkflowExecutionTerminated", "workflowExecutionDetail": { "executionInfo": { "execution": { "workflowId": "1234-5678-9012", "runId": "777788889999" }, "workflowType": { "name": "1111-1111-1111", "version": "1.3" }, "startTimestamp": 1588977445279, "closeTimestamp": 1588977446062, "executionStatus": "CLOSED", "closeStatus": "TERMINATED", "parent": null, "parentExecutionArn": null, "tagList": null, "cancelRequested": false }, "executionConfiguration": { "taskStartToCloseTimeout": "60", "executionStartToCloseTimeout": "120", "taskList": { "name": "1111-1111-1111-2222-2222-2222" }, "taskPriority": null, "childPolicy": "TERMINATE", EventBridge for Amazon SWF API Version 2012-01-25 159 Amazon Simple Workflow Service Developer Guide "lambdaRole": null }, "openCounts": { "openActivityTasks": 0, "openDecisionTasks": 1, "openTimers": 0, "openChildWorkflowExecutions": 0, "openLambdaFunctions": 0 }, "latestActivityTaskTimestamp": 1588977445882, } } } Using AWS User Notifications with Amazon Simple Workflow Service You can use AWS User Notifications to set up delivery channels to get notified about Amazon Simple Workflow Service events. You receive a notification when an event matches a rule that you specify. You can receive notifications for events through multiple channels, including email, Amazon Q Developer in chat applications chat notifications, or AWS Console Mobile Application push notifications. You can also see notifications in the Console Notifications Center. User Notifications supports aggregation, which can reduce the number of notifications you receive during specific events. Compliance Validation for Amazon Simple Workflow Service Third-party auditors assess the security and compliance of Amazon Simple Workflow Service as part of multiple AWS compliance programs. These include SOC, PCI, FedRAMP, HIPAA, and others. For a list of AWS services in scope of specific compliance programs, see AWS Services in Scope by Compliance Program. For general information, see AWS Compliance Programs. You can download third-party audit reports using AWS Artifact. For more information, see Downloading Reports in AWS Artifact. Your compliance responsibility when using Amazon SWF is determined by the sensitivity of your data, your company's compliance objectives, and applicable laws and regulations. AWS provides the following resources to help with compliance: Using AWS User Notifications with Amazon SWF API Version 2012-01-25 160 Amazon Simple Workflow Service Developer Guide • Security and Compliance Quick Start Guides – These deployment guides discuss architectural considerations and provide steps for deploying security- and compliance-focused baseline environments on AWS. • Architecting for HIPAA Security and Compliance Whitepaper – This whitepaper describes how companies can use AWS to create HIPAA-compliant applications. • AWS Compliance Resources – This collection of workbooks and guides might apply to your industry and location. • Evaluating Resources with Rules in the AWS Config Developer Guide – The AWS Config service assesses how well your resource configurations comply with internal practices, industry guidelines, and regulations. • AWS Security Hub – This AWS service provides a comprehensive view of your security state within AWS that helps you check your compliance
|
swf-dg-056
|
swf-dg.pdf
| 56 |
security- and compliance-focused baseline environments on AWS. • Architecting for HIPAA Security and Compliance Whitepaper – This whitepaper describes how companies can use AWS to create HIPAA-compliant applications. • AWS Compliance Resources – This collection of workbooks and guides might apply to your industry and location. • Evaluating Resources with Rules in the AWS Config Developer Guide – The AWS Config service assesses how well your resource configurations comply with internal practices, industry guidelines, and regulations. • AWS Security Hub – This AWS service provides a comprehensive view of your security state within AWS that helps you check your compliance with security industry standards and best practices. Resilience in Amazon Simple Workflow Service The AWS global infrastructure is built around AWS Regions and Availability Zones. AWS Regions provide multiple physically separated and isolated Availability Zones, which are connected with low-latency, high-throughput, and highly redundant networking. With Availability Zones, you can design and operate applications and databases that automatically fail over between zones without interruption. Availability Zones are more highly available, fault tolerant, and scalable than traditional single or multiple data center infrastructures. For more information about AWS Regions and Availability Zones, see AWS Global Infrastructure. In addition to the AWS global infrastructure, Amazon SWF offers several features to help support your data resiliency and backup needs. Infrastructure Security in Amazon Simple Workflow Service As a managed service, is protected by AWS global network security. For information about AWS security services and how AWS protects infrastructure, see AWS Cloud Security. To design your AWS environment using the best practices for infrastructure security, see Infrastructure Protection in Security Pillar AWS Well‐Architected Framework. You use AWS published API calls to access through the network. Clients must support the following: Resilience API Version 2012-01-25 161 Amazon Simple Workflow Service Developer Guide • Transport Layer Security (TLS). We require TLS 1.2 and recommend TLS 1.3. • Cipher suites with perfect forward secrecy (PFS) such as DHE (Ephemeral Diffie-Hellman) or ECDHE (Elliptic Curve Ephemeral Diffie-Hellman). Most modern systems such as Java 7 and later support these modes. Additionally, requests must be signed by using an access key ID and a secret access key that is associated with an IAM principal. Or you can use the AWS Security Token Service (AWS STS) to generate temporary security credentials to sign requests. You can call these API operations from any network location, but Amazon SWF does support resource-based access policies, which can include restrictions based on the source IP address. You can also use Amazon SWF policies to control access from specific Amazon Virtual Private Cloud (Amazon VPC) endpoints or specific VPCs. Effectively, this isolates network access to a given Amazon SWF resource from only the specific VPC within the AWS network. Configuration and Vulnerability Analysis in Amazon Simple Workflow Service Configuration and IT controls are a shared responsibility between AWS and you, our customer. For more information, see the AWS shared responsibility model. Configuration and Vulnerability Analysis API Version 2012-01-25 162 Amazon Simple Workflow Service Developer Guide Using the AWS CLI with Amazon Simple Workflow Service Many of the features of Amazon Simple Workflow Service can be accessed from the AWS CLI. The AWS CLI provides an alternative to using Amazon SWF with the AWS Management Console or in some cases, to programming with the Amazon SWF API and the AWS Flow Framework. For example, you can use the AWS CLI to register a new workflow type: aws swf register-workflow-type --domain MyDomain --name "MySimpleWorkflow" --workflow- version "v1" You can also list your registered workflow types: aws swf list-workflow-types --domain MyDomain --registration-status REGISTERED The following shows an example of the default output in JSON: { "typeInfos": [ { "status": "REGISTERED", "creationDate": 1377471607.752, "workflowType": { "version": "v1", "name": "MySimpleWorkflow" } }, { "status": "REGISTERED", "creationDate": 1371454149.598, "description": "MyDomain subscribe workflow", "workflowType": { "version": "v3", "name": "subscribe" } } ] } API Version 2012-01-25 163 Amazon Simple Workflow Service Developer Guide The Amazon SWF commands in AWS CLI provide the ability to start and manage workflow executions, poll for activity tasks, record task heartbeats, and more! For a complete list of Amazon SWF commands, with descriptions of the available arguments and examples showing their use, see Amazon SWF commands in the AWS CLI Command Reference. The AWS CLI commands follow the Amazon SWF API closely, so you can use the AWS CLI to learn about the underlying Amazon SWF API. You can also use your existing API knowledge to prototype code or perform Amazon SWF actions on the command line. To learn more about the AWS CLI, see the AWS Command Line Interface User Guide. API Version 2012-01-25 164 Amazon Simple Workflow Service Developer Guide Working with Amazon SWF APIs In addition to using the AWS SDKs that are described in Develop with AWS SDKs, you can use the HTTP API
|
swf-dg-057
|
swf-dg.pdf
| 57 |
AWS CLI Command Reference. The AWS CLI commands follow the Amazon SWF API closely, so you can use the AWS CLI to learn about the underlying Amazon SWF API. You can also use your existing API knowledge to prototype code or perform Amazon SWF actions on the command line. To learn more about the AWS CLI, see the AWS Command Line Interface User Guide. API Version 2012-01-25 164 Amazon Simple Workflow Service Developer Guide Working with Amazon SWF APIs In addition to using the AWS SDKs that are described in Develop with AWS SDKs, you can use the HTTP API directly. To use the API, you send HTTP requests to the SWF endpoint that matches the region that you want to use for your domains, workflows and activities. For more information about making HTTP requests for Amazon SWF, see Making HTTP Requests to Amazon SWF. This section provides basic information about using the HTTP API to develop your workflows with Amazon SWF. More advanced features, such as using timers, logging with CloudTrail and tagging your workflows are provided in the section, Basic workflow concepts in Amazon SWF. Topics • Making HTTP Requests to Amazon SWF • List of Amazon SWF Actions by Category • Registering a Domain with Amazon SWF • Setting timeout values in Amazon SWF • Registering a Workflow Type with Amazon SWF • Registering an Activity Type with Amazon SWF • AWS Lambda tasks in Amazon SWF • Developing an Activity Worker in Amazon SWF • Developing deciders in Amazon SWF • Starting workflows in Amazon SWF • Setting task priority in Amazon SWF • Handling errors in Amazon SWF Making HTTP Requests to Amazon SWF If you don't use one of the AWS SDKs, you can perform Amazon Simple Workflow Service (Amazon SWF) operations over HTTP using the POST request method. The POST method requires that you specify the operation in the header of the request and provide the data for the operation in JSON format in the body of the request. Making HTTP Requests API Version 2012-01-25 165 Amazon Simple Workflow Service Developer Guide HTTP Header Contents Amazon SWF requires the following information in the header of an HTTP request: • host The Amazon SWF endpoint. • x-amz-date You must provide the time stamp in either the HTTP Date header or the AWS x- amz-date header (some HTTP client libraries don't let you set the Date header). When an x-amz-date header is present, the system ignores any Date header when authenticating the request. The date must be specified in one of the following three formats, as specified in the HTTP/1.1 RFC: • Sun, 06 Nov 1994 08:49:37 GMT (RFC 822, updated by RFC 1123) • Sunday, 06-Nov-94 08:49:37 GMT (RFC 850, obsoleted by RFC 1036) • Sun Nov 6 08:49:37 1994 (ANSI C's asctime() format) • x-amzn-authorization The signed request parameters in the format: AWS3 AWSAccessKeyId=####,Algorithm=HmacSHA256, [,SignedHeaders=Header1;Header2;...] Signature=S(StringToSign) AWS3 – This is an AWS implementation-specific tag that denotes the authentication version used to sign the request (currently, for Amazon SWF this value is always AWS3). AWSAccessKeyId – Your AWS Access Key ID. Algorithm – The algorithm used to create the HMAC-SHA value of the string-to-sign, such as HmacSHA256 or HmacSHA1. Signature – Base64( Algorithm( StringToSign, SigningKey ) ). For details see Calculating the HMAC-SHA Signature for Amazon SWF SignedHeaders – (Optional) If present, must contain a list of all the HTTP Headers used in the Canonicalized HttpHeaders calculation. A single semicolon character (;) (ASCII character 59) must be used as the delimiter for list values. • x-amz-target – The destination service of the request and the operation for the data, in the format com.amazonaws.swf.service.model.SimpleWorkflowService. + <action> HTTP Header Contents API Version 2012-01-25 166 Amazon Simple Workflow Service For example, Developer Guide com.amazonaws.swf.service.model.SimpleWorkflowService.RegisterDomain • content-type – The type needs to specify JSON and the character set, as application/ json; charset=UTF-8 The following is an example header for an HTTP request to create a domain. POST http://swf.us-east-1.amazonaws.com/ HTTP/1.1 Host: swf.us-east-1.amazonaws.com User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.25) Gecko/20111212 Firefox/3.6.25 ( .NET CLR 3.5.30729; .NET4.0E) Accept: application/json, text/javascript, */* Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Content-Type: application/json; charset=UTF-8 X-Requested-With: XMLHttpRequest X-Amz-Date: Fri, 13 Jan 2012 18:42:12 GMT X-Amz-Target: com.amazonaws.swf.service.model.SimpleWorkflowService.RegisterDomain Content-Encoding: amz-1.0 X-Amzn-Authorization: AWS3 AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE,Algorithm=HmacSHA256,SignedHeaders=Host;X-Amz- Date;X-Amz-Target;Content-Encoding,Signature=tzjkF55lxAxPhzp/BRGFYQRQRq6CqrM254dTDE/ EncI= Referer: http://swf.us-east-1.amazonaws.com/explorer/index.html Content-Length: 91 Pragma: no-cache Cache-Control: no-cache {"name": "867530902", "description": "music", "workflowExecutionRetentionPeriodInDays": "60"} Here is an example of the corresponding HTTP response. HTTP/1.1 200 OK Content-Length: 0 Content-Type: application/json HTTP Header Contents API Version 2012-01-25 167 Amazon Simple Workflow Service Developer Guide x-amzn-RequestId: 4ec4ac3f-3e16-11e1-9b11-7182192d0b57 HTTP Body Content The body of an HTTP request contains the data for the operation specified in the header of the HTTP request. Use the JSON data format to convey data values and data structure, simultaneously. Elements can be nested within
|
swf-dg-058
|
swf-dg.pdf
| 58 |
13 Jan 2012 18:42:12 GMT X-Amz-Target: com.amazonaws.swf.service.model.SimpleWorkflowService.RegisterDomain Content-Encoding: amz-1.0 X-Amzn-Authorization: AWS3 AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE,Algorithm=HmacSHA256,SignedHeaders=Host;X-Amz- Date;X-Amz-Target;Content-Encoding,Signature=tzjkF55lxAxPhzp/BRGFYQRQRq6CqrM254dTDE/ EncI= Referer: http://swf.us-east-1.amazonaws.com/explorer/index.html Content-Length: 91 Pragma: no-cache Cache-Control: no-cache {"name": "867530902", "description": "music", "workflowExecutionRetentionPeriodInDays": "60"} Here is an example of the corresponding HTTP response. HTTP/1.1 200 OK Content-Length: 0 Content-Type: application/json HTTP Header Contents API Version 2012-01-25 167 Amazon Simple Workflow Service Developer Guide x-amzn-RequestId: 4ec4ac3f-3e16-11e1-9b11-7182192d0b57 HTTP Body Content The body of an HTTP request contains the data for the operation specified in the header of the HTTP request. Use the JSON data format to convey data values and data structure, simultaneously. Elements can be nested within other elements using bracket notation. For example, the following shows a request to list all workflow executions that started between two specified points in time— using Unix Time notation. { "domain": "867530901", "startTimeFilter": { "oldestDate": 1325376070, "latestDate": 1356998399 }, "tagFilter": { "tag": "music purchase" } } Sample Amazon SWF JSON Request and Response The following example shows a request to Amazon SWF for a description of the domain that we created previously. Then it shows the Amazon SWF response. HTTP POST Request POST http://swf.us-east-1.amazonaws.com/ HTTP/1.1 Host: swf.us-east-1.amazonaws.com User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.25) Gecko/20111212 Firefox/3.6.25 ( .NET CLR 3.5.30729; .NET4.0E) Accept: application/json, text/javascript, */* Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Content-Type: application/json; charset=UTF-8 HTTP Body Content API Version 2012-01-25 168 Amazon Simple Workflow Service Developer Guide X-Requested-With: XMLHttpRequest X-Amz-Date: Sun, 15 Jan 2012 03:13:33 GMT X-Amz-Target: com.amazonaws.swf.service.model.SimpleWorkflowService.DescribeDomain Content-Encoding: amz-1.0 X-Amzn-Authorization: AWS3 AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE,Algorithm=HmacSHA256,SignedHeaders=Host;X-Amz- Date;X-Amz-Target;Content- Encoding,Signature=IFJtq3M366CHqMlTpyqYqd9z0ChCoKDC5SCJBsLifu4= Referer: http://swf.us-east-1.amazonaws.com/explorer/index.html Content-Length: 21 Pragma: no-cache Cache-Control: no-cache {"name": "867530901"} Amazon SWF Response HTTP/1.1 200 OK Content-Length: 137 Content-Type: application/json x-amzn-RequestId: e86a6779-3f26-11e1-9a27-0760db01a4a8 {"configuration": {"workflowExecutionRetentionPeriodInDays": "60"}, "domainInfo": {"description": "music", "name": "867530901", "status": "REGISTERED"} } Notice the protocol (HTTP/1.1) is followed by a status code (200). A code value of 200 indicates a successful operation. Amazon SWF doesn't serialize null values. If your JSON parser is set to serialize null values for requests, Amazon SWF ignores them. Calculating the HMAC-SHA Signature for Amazon SWF Every request to Amazon SWF must be authenticated. The AWS SDKs automatically sign your requests and manage your token-based authentication. However, if you want to write your own HTTP POST requests, you need to create an x-amzn-authorization value for the HTTP POST Header content as part of your request authentication. Calculating the HMAC-SHA Signature API Version 2012-01-25 169 Amazon Simple Workflow Service Developer Guide For more information about formatting headers, see HTTP Header Contents. For the AWS SDK for Java implementation of AWS Version 3 signing, see the AWSSigner.java class. Creating a Request Signature Before you create an HMAC-SHA request signature, you must get your AWS credentials (the Access Key ID and the Secret Key). Important You can use either SHA1 or SHA256 to sign your requests. However, make sure that you use the same method throughout the signing process. The method you choose must match the value of the Algorithm name in the HTTP header. To create the request signature 1. Create a canonical form of the HTTP request headers. The canonical form of the HTTP header includes the following: • host • Any header element starting with x-amz- For more information about the included headers, see HTTP Header Contents. a. For each header name-value pair, convert the header name (but not the header value) to lowercase. b. Build a map of the header name to comma-separated header values. x-amz-example: value1 x-amz-example: value2 => x-amz-example:value1,value2 For more information, see Section 4.2 of RFC 2616. c. For each header name-value pair, convert the name-value pair into a string in the format headerName:headerValue. Trim any whitespace from the beginning and end of both headerName and headerValue, with no spaces on each side of the colon. x-amz-example1:value1,value2 Calculating the HMAC-SHA Signature API Version 2012-01-25 170 Amazon Simple Workflow Service Developer Guide x-amz-example2:value3 d. e. Insert a new line (U+000A) after each converted string, including the last string. Sort the collection of converted strings alphabetically, by header name. 2. Create a string-to-sign value that includes the following items: • Line 1: The HTTP method (POST), followed by a newline. • Line 2: The request URI (/), followed by a newline. • Line 3: An empty string followed by a newline. Note Typically, the query string appears here, but Amazon SWF doesn't use a query string. • Lines 4–n: The string representing the canonicalized request headers that you computed in Step 1, followed by a newline. This newline creates a blank line between the headers and the body of the HTTP request. For more information, see RFC 2616. • The request body, not followed by a newline. 3. Compute the SHA256 or SHA1 digest of the string-to-sign value. Use the same SHA method throughout the process. 4. Compute and Base64-encode the HMAC-SHA using either a SHA256 or a SHA1 digest (depending
|
swf-dg-059
|
swf-dg.pdf
| 59 |
Note Typically, the query string appears here, but Amazon SWF doesn't use a query string. • Lines 4–n: The string representing the canonicalized request headers that you computed in Step 1, followed by a newline. This newline creates a blank line between the headers and the body of the HTTP request. For more information, see RFC 2616. • The request body, not followed by a newline. 3. Compute the SHA256 or SHA1 digest of the string-to-sign value. Use the same SHA method throughout the process. 4. Compute and Base64-encode the HMAC-SHA using either a SHA256 or a SHA1 digest (depending on the method you used) of the value resulting from the previous step and the temporary secret access key from the AWS Security Token Service using the GetSessionToken API action. Note Amazon SWF expects an equals sign (=) at the end of the Base64-encoded HMAC- SHA value. If your Base64 encoding routine doesn't include the appended equals sign, append one to the end of the value. For more information about using temporary security credentials with Amazon SWF and other AWS services, see AWS Services That Work with IAM in the IAM User Guide. 5. Place the resulting value as the value for the Signature name in the x-amzn- authorization header of the HTTP request to Amazon SWF. Calculating the HMAC-SHA Signature API Version 2012-01-25 171 Amazon Simple Workflow Service Developer Guide 6. Amazon SWF verifies the request and performs the specified operation. List of Amazon SWF Actions by Category This section lists the reference topics for Amazon SWF actions in the Amazon SWF application programming interface (API). These are listed by functional category. For an alphabetic list of actions, see the Amazon Simple Workflow Service API Reference. Topics • Actions Related to Activities • Actions Related to Deciders • Actions Related to Workflow Executions • Actions Related to Administration • Visibility Actions Actions Related to Activities Activity workers use PollForActivityTask to get new activity tasks. After a worker receives an activity task from Amazon SWF, it performs the task and responds using RespondActivityTaskCompleted if successful or RespondActivityTaskFailed if unsuccessful. The following are actions that are performed by activity workers. • PollForActivityTask • RespondActivityTaskCompleted • RespondActivityTaskFailed • RespondActivityTaskCanceled • RecordActivityTaskHeartbeat Actions Related to Deciders Deciders use PollForDecisionTask to get decision tasks. After a decider receives a decision task from Amazon SWF, it examines its workflow execution history and decides what to do next. It calls List of Amazon SWF Actions API Version 2012-01-25 172 Amazon Simple Workflow Service Developer Guide RespondDecisionTaskCompleted to complete the decision task and provides zero or more next decisions. The following are actions that are performed by deciders. • PollForDecisionTask • RespondDecisionTaskCompleted Actions Related to Workflow Executions The following actions operate on a workflow execution. • RequestCancelWorkflowExecution • StartWorkflowExecution • SignalWorkflowExecution • TerminateWorkflowExecution Actions Related to Administration Although you can perform administrative tasks from the Amazon SWF console, you can use the actions in this section to automate functions or build your own administrative tools. Activity Management • RegisterActivityType • DeprecateActivityType • UndeprecateActivityType • DeleteActivityType Workflow Management • RegisterWorkflowType • DeprecateWorkflowType • UndeprecateWorkflowType Actions Related to Workflow Executions API Version 2012-01-25 173 Amazon Simple Workflow Service • DeleteWorkflowType Domain Management Developer Guide These actions allow you to register and deprecate Amazon SWF domains. • RegisterDomain • DeprecateDomain • UndeprecateDomain For more information and examples of these domain management actions, see Registering a Domain with Amazon SWF. Workflow Execution Management • RequestCancelWorkflowExecution • TerminateWorkflowExecution Visibility Actions Although you can perform visibility actions from the Amazon SWF console, you can use the actions in this section to build your own console or administrative tools. Activity Visibility • ListActivityTypes • DescribeActivityType Workflow Visibility • ListWorkflowTypes • DescribeWorkflowType Workflow Execution Visibility • DescribeWorkflowExecution Visibility Actions API Version 2012-01-25 174 Amazon Simple Workflow Service Developer Guide • ListOpenWorkflowExecutions • ListClosedWorkflowExecutions • CountOpenWorkflowExecutions • CountClosedWorkflowExecutions • GetWorkflowExecutionHistory Domain Visibility • ListDomains • DescribeDomain Task List Visibility • CountPendingActivityTasks • CountPendingDecisionTasks Registering a Domain with Amazon SWF Your workflow and activity types and the workflow execution itself are all scoped to a domain. Domains isolate a set of types, executions, and task lists from others within the same account. You can register a domain by using the AWS Management Console or by using the RegisterDomain action in the Amazon SWF API. The following example uses the API. https://swf.us-east-1.amazonaws.com RegisterDomain { "name" : "867530901", "description" : "music", "workflowExecutionRetentionPeriodInDays" : "60" } The parameters are specified in JavaScript Object Notation (JSON) format. Here, the retention period is set to 60 days. During the retention period, all information about the workflow execution is available through visibility operations using either the AWS Management Console or the Amazon SWF API. Registering a Domain API Version 2012-01-25 175 Amazon Simple Workflow Service Developer Guide After registering the domain, you should
|
swf-dg-060
|
swf-dg.pdf
| 60 |
by using the AWS Management Console or by using the RegisterDomain action in the Amazon SWF API. The following example uses the API. https://swf.us-east-1.amazonaws.com RegisterDomain { "name" : "867530901", "description" : "music", "workflowExecutionRetentionPeriodInDays" : "60" } The parameters are specified in JavaScript Object Notation (JSON) format. Here, the retention period is set to 60 days. During the retention period, all information about the workflow execution is available through visibility operations using either the AWS Management Console or the Amazon SWF API. Registering a Domain API Version 2012-01-25 175 Amazon Simple Workflow Service Developer Guide After registering the domain, you should register the workflow type and the activity types used by the workflow. You need to register the domain first because a registered domain name is part of the required information for registering workflow and activity types. See Also RegisterDomain in the Amazon Simple Workflow Service API Reference Setting timeout values in Amazon SWF Topics • Quotas on Timeout Values • Workflow Execution and Decision Task Timeouts • Activity Task Timeouts • See Also Quotas on Timeout Values Timeout values are always declared in seconds, and can be set to any number of seconds up to a year (31536000 seconds)—the maximum execution limit for any workflow or activity. The special value NONE is used to set a timeout parameter to "no timeout", or infinite, but the maximum limit of a year still applies. Workflow Execution and Decision Task Timeouts You can set timeout values for your Workflow and Decision tasks when registering the workflow type. For example: https://swf.us-east-1.amazonaws.com RegisterWorkflowType { "domain": "867530901", "name": "customerOrderWorkflow", "version": "1.0", "description": "Handle customer orders", "defaultTaskStartToCloseTimeout": "600", "defaultExecutionStartToCloseTimeout": "3600", See Also API Version 2012-01-25 176 Amazon Simple Workflow Service Developer Guide "defaultTaskList": { "name": "mainTaskList" }, "defaultChildPolicy": "TERMINATE" } This workflow type registration sets the defaultTaskStartToCloseTimeout to 600 seconds (10 minutes), and defaultExecutionStartToCloseTimeout to 3600 seconds (1 hour). For more information about workflow type registration, see Registering a Workflow Type with Amazon SWF, and RegisterWorkflowType in the Amazon Simple Workflow Service API Reference. You can override the value set for defaultExecutionStartToCloseTimeout by specifying executionStartToCloseTimeout i. Activity Task Timeouts You can set timeout values for your activity tasks when registering the activity type. For example: https://swf.us-east-1.amazonaws.com RegisterActivityType { "domain": "867530901", "name": "activityVerify", "version": "1.0", "description": "Verify the customer credit", "defaultTaskStartToCloseTimeout": "600", "defaultTaskHeartbeatTimeout": "120", "defaultTaskList": { "name": "mainTaskList" }, "defaultTaskScheduleToStartTimeout": "1800", "defaultTaskScheduleToCloseTimeout": "5400" } This activity type registration sets the defaultTaskStartToCloseTimeout to 600 seconds (10 minutes), the defaultTaskHeartbeatTimeout to 120 seconds (2 minutes), the defaultTaskScheduleToStartTimeout to 1800 seconds (30 minutes) and defaultTaskScheduleToCloseTimeout to 5400 seconds (1.5 hours). For more information about activity type registration, see Registering an Activity Type with Amazon SWF, and RegisterActivityType in the Amazon Simple Workflow Service API Reference. You can override the value set for defaultTaskStartToCloseTimeout by specifying taskStartToCloseTimeout when scheduling the activity task. Activity Task Timeouts API Version 2012-01-25 177 Amazon Simple Workflow Service See Also Amazon SWF Timeout Types Developer Guide Registering a Workflow Type with Amazon SWF The example discussed in this section registers a workflow type using the Amazon SWF API. The name and version that you specify during registration form a unique identifier for the workflow type. The specified domain must have already been registered using the RegisterDomain API action. The timeout parameters in the following example are duration values specified in seconds. For the defaultTaskStartToCloseTimeout parameter, you can use the duration specifier NONE to indicate no timeout. However, you can't specify a value of NONE for defaultExecutionStartToCloseTimeout; there is a one-year maximum limit on the time that a workflow execution can run. Exceeding this limit always causes the workflow execution to time out. If you specify a value for defaultExecutionStartToCloseTimeout that is greater than one year, the registration will fail. https://swf.us-east-1.amazonaws.com RegisterWorkflowType { "domain" : "867530901", "name" : "customerOrderWorkflow", "version" : "1.0", "description" : "Handle customer orders", "defaultTaskStartToCloseTimeout" : "600", "defaultExecutionStartToCloseTimeout" : "3600", "defaultTaskList" : { "name": "mainTaskList" }, "defaultChildPolicy" : "TERMINATE" } See Also RegisterWorkflowType in the Amazon Simple Workflow Service API Reference Registering an Activity Type with Amazon SWF The following example registers an activity type by using the Amazon SWF API. The name and version that you specify during registration form a unique identifier for the activity type within See Also API Version 2012-01-25 178 Amazon Simple Workflow Service Developer Guide the domain. The specified domain must have already been registered using the RegisterDomain action. The timeout parameters in this example are duration values specified in seconds. You can use the duration specifier NONE to indicate no timeout. https://swf.us-east-1.amazonaws.com RegisterActivityType { "domain" : "867530901", "name" : "activityVerify", "version" : "1.0", "description" : "Verify the customer credit", "defaultTaskStartToCloseTimeout" : "600", "defaultTaskHeartbeatTimeout" : "120", "defaultTaskList" : { "name" : "mainTaskList" }, "defaultTaskScheduleToStartTimeout" : "1800", "defaultTaskScheduleToCloseTimeout" : "5400" } See Also RegisterActivityType in the Amazon Simple Workflow Service
|
swf-dg-061
|
swf-dg.pdf
| 61 |
the activity type within See Also API Version 2012-01-25 178 Amazon Simple Workflow Service Developer Guide the domain. The specified domain must have already been registered using the RegisterDomain action. The timeout parameters in this example are duration values specified in seconds. You can use the duration specifier NONE to indicate no timeout. https://swf.us-east-1.amazonaws.com RegisterActivityType { "domain" : "867530901", "name" : "activityVerify", "version" : "1.0", "description" : "Verify the customer credit", "defaultTaskStartToCloseTimeout" : "600", "defaultTaskHeartbeatTimeout" : "120", "defaultTaskList" : { "name" : "mainTaskList" }, "defaultTaskScheduleToStartTimeout" : "1800", "defaultTaskScheduleToCloseTimeout" : "5400" } See Also RegisterActivityType in the Amazon Simple Workflow Service API Reference AWS Lambda tasks in Amazon SWF Topics • About AWS Lambda • Benefits and limitations of using Lambda tasks • Using Lambda tasks in your workflows About AWS Lambda AWS Lambda is a fully managed compute service that runs your code in response to events generated by custom code or from various AWS services such as Amazon S3, DynamoDB, Amazon Kinesis, Amazon SNS, and Amazon Cognito. For more information about Lambda, see the AWS Lambda Developer Guide. See Also API Version 2012-01-25 179 Amazon Simple Workflow Service Developer Guide Amazon Simple Workflow Service provides a Lambda task so that you can run Lambda functions in place of, or alongside traditional Amazon SWF activities. Important Your AWS account will be charged for Lambda executions (requests) executed by Amazon SWF on your behalf. For details about Lambda pricing, see https://aws.amazon.com/ lambda/pricing/. Benefits and limitations of using Lambda tasks There are a number of benefits of using Lambda tasks in place of a traditional Amazon SWF activity: • Lambda tasks don’t need to be registered or versioned like Amazon SWF activity types. • You can use any existing Lambda functions that you've already defined in your workflows. • Lambda functions are called directly by Amazon SWF; there is no need for you to implement a worker program to execute them as you must do with traditional activities. • Lambda provides you with metrics and logs for tracking and analyzing your function executions. There are also a number of limitations regarding Lambda tasks that you should be aware of: • Lambda tasks can only be run in AWS regions that provide support for Lambda. See Lambda Regions and Endpoints in the Amazon Web Services General Reference for details about the currently-supported regions for Lambda. • Lambda tasks are currently supported only by the base SWF HTTP API and in the AWS Flow Framework for Java. There is currently no support for Lambda tasks in the AWS Flow Framework for Ruby. Using Lambda tasks in your workflows To use Lambda tasks in your Amazon SWF workflows, you will need to: 1. Set up IAM roles to provide Amazon SWF with permission to invoke Lambda functions. 2. Attach the IAM roles to your workflows. Benefits and limitations of using Lambda tasks API Version 2012-01-25 180 Amazon Simple Workflow Service Developer Guide 3. Call your Lambda function during a workflow execution. Set up an IAM role Before you can invoke Lambda functions from Amazon SWF you must provide an IAM role that provides access to Lambda from Amazon SWF. You can either: • choose a pre-defined role, AWSLambdaRole, to give your workflows permission to invoke any Lambda function associated with your account. • define your own policy and associated role to give workflows permission to invoke particular Lambda functions, specified by their Amazon Resource Names (ARNs). Limit permissions on an IAM role You can limit permissions on an IAM role you provide to Amazon SWF by using the SourceArn and SourceAccount context keys in your resource trust policy. These keys limit the usage of an IAM policy so that it is used only from Amazon Simple Workflow Service executions that belong in the specified domain ARN. If you use both global condition context keys, the aws:SourceAccount value and the account referenced in the aws:SourceArn value must use the same account ID when used in the same policy statement. In the following trust policy example, we use the SourceArn context key to restrict the IAM service role to only be used in Amazon Simple Workflow Service executions that belong to someDomain in the account, 123456789012. { "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "Service": "swf.amazonaws.com" }, "Action": "sts:AssumeRole", "Condition": { "ArnLike": { "aws:SourceArn": "arn:aws:swf:*:123456789012:/domain/someDomain" } } Using Lambda tasks in your workflows API Version 2012-01-25 181 Amazon Simple Workflow Service Developer Guide } ] } In the following trust policy example, we use the SourceAccount context key to restrict the IAM service role to only be used in Amazon Simple Workflow Service executions in the account, 123456789012. { "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "Service": "swf.amazonaws.com" }, "Action": "sts:AssumeRole", "Condition": { "StringLike": { "aws:SourceAccount": "123456789012" } }
|
swf-dg-062
|
swf-dg.pdf
| 62 |
"Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "Service": "swf.amazonaws.com" }, "Action": "sts:AssumeRole", "Condition": { "ArnLike": { "aws:SourceArn": "arn:aws:swf:*:123456789012:/domain/someDomain" } } Using Lambda tasks in your workflows API Version 2012-01-25 181 Amazon Simple Workflow Service Developer Guide } ] } In the following trust policy example, we use the SourceAccount context key to restrict the IAM service role to only be used in Amazon Simple Workflow Service executions in the account, 123456789012. { "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "Service": "swf.amazonaws.com" }, "Action": "sts:AssumeRole", "Condition": { "StringLike": { "aws:SourceAccount": "123456789012" } } } ] } Providing Amazon SWF with access to invoke any Lambda role You can use the pre-defined role, AWSLambdaRole, to give your Amazon SWF workflows the ability to invoke any Lambda function associated with your account. To use AWSLambdaRole to give Amazon SWF access to invoke Lambda functions 1. Open the Amazon IAM console. 2. Choose Roles, then Create New Role. 3. Give your role a name, such as swf-lambda and choose Next Step. 4. Under AWS Service Roles, choose Amazon SWF, and choose Next Step. 5. On the Attach Policy screen, choose AWSLambdaRole from the list. 6. Choose Next Step and then Create Role once you've reviewed the role. Using Lambda tasks in your workflows API Version 2012-01-25 182 Amazon Simple Workflow Service Developer Guide Defining an IAM role to provide access to invoke a specific Lambda function If you want to provide access to invoke a specific Lambda function from your workflow, you will need to define your own IAM policy. To create an IAM policy to provide access to a particular Lambda function 1. Open the Amazon IAM console. 2. Choose Policies, then Create Policy. 3. Choose Copy an AWS Managed Policy and select AWSLambdaRole from the list. A policy will be generated for you. Optionally edit its name and description to suit your needs. 4. In the Resource field of the Policy Document, add the ARN of your Lambda function(s). For example: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "lambda:InvokeFunction" ], "Resource": [ "arn:aws:lambda:us-east-1:111111000000:function:hello_lambda_function" ] } ] } Note For a complete description of how to specify resources in an IAM role, see Overview of IAM Policies in Using IAM. 5. Choose Create Policy to finish creating your policy. You can then select this policy when creating a new IAM role, and use that role to give invoke access to your Amazon SWF workflows. This procedure is very similar to creating a role with the AWSLambdaRole policy. instead, choose your own policy when creating the role. Using Lambda tasks in your workflows API Version 2012-01-25 183 Amazon Simple Workflow Service Developer Guide To create a Amazon SWF role using your Lambda policy 1. Open the Amazon IAM console. 2. Choose Roles, then Create New Role. 3. Give your role a name, such as swf-lambda-function and choose Next Step. 4. Under AWS Service Roles, choose Amazon SWF, and choose Next Step. 5. On the Attach Policy screen, choose your Lambda function-specific policy from the list. 6. Choose Next Step and then Create Role once you've reviewed the role. Attach the IAM role to your workflow Once you've defined your IAM role, you will need to attach it to the workflow that will be using it to call the Lambda functions you provided Amazon SWF with access to. There are two places where you can attach the role to your workflow: • During workflow type registration. This role then may be used as the default Lambda role for every execution of that workflow type. • When starting a workflow execution. This role will be used only during this workflow's execution (and throughout the entire execution). To provide a default Lambda role for a workflow type • When calling RegisterWorkflowType, set the defaultLambdaRole field to the ARN of the role that you defined. To provide a Lambda role to be used during a workflow execution • When calling StartWorkflowExecution, set the lambdaRole field to the ARN of the role that you defined. Note if the account calling RegisterWorkflowType or StartWorkflowExecution doesn't have permission to use the given role, then the call will fail with an OperationNotPermittedFault. Using Lambda tasks in your workflows API Version 2012-01-25 184 Amazon Simple Workflow Service Developer Guide Call your Lambda function from a Amazon SWF workflow You can use the ScheduleLambdaFunctionDecisionAttributes data type to identify the Lambda function to call during a workflow execution. During a call to RespondDecisionTaskCompleted, provide a ScheduleLambdaFunctionDecisionAttributes to your decisions list. For example: { "decisions": [{ "ScheduleLambdaFunctionDecisionAttributes": { "id": "lambdaTaskId", "name": "myLambdaFunctionName", "input": "inputToLambdaFunction", "startToCloseTimeout": "30" }, }], } Set the following parameters: • id with an identifier for the Lambda task. This must be a string from 1-256 characters and must not contain the
|
swf-dg-063
|
swf-dg.pdf
| 63 |
OperationNotPermittedFault. Using Lambda tasks in your workflows API Version 2012-01-25 184 Amazon Simple Workflow Service Developer Guide Call your Lambda function from a Amazon SWF workflow You can use the ScheduleLambdaFunctionDecisionAttributes data type to identify the Lambda function to call during a workflow execution. During a call to RespondDecisionTaskCompleted, provide a ScheduleLambdaFunctionDecisionAttributes to your decisions list. For example: { "decisions": [{ "ScheduleLambdaFunctionDecisionAttributes": { "id": "lambdaTaskId", "name": "myLambdaFunctionName", "input": "inputToLambdaFunction", "startToCloseTimeout": "30" }, }], } Set the following parameters: • id with an identifier for the Lambda task. This must be a string from 1-256 characters and must not contain the characters : (colon), / (slash), | (vertical bar), nor any control characters (\u0000 - \u001f and \u007f - \u009f), nor the literal string arn. • name with the name of your Lambda function. Your Amazon SWF workflow must be provided with an IAM role that gives it access to call the Lambda function. The name provided must follow the constraints for the FunctionName parameter like in the Lambda Invoke action. • input with optional input data for the function. If set, this must follow the constraints for the ClientContext parameter like in the Lambda Invoke action. • startToCloseTimeout with an optional maximum period, in seconds, that the function can take to execute before the task fails with a timeout exception. The value NONE can be used to specify unlimited duration. For more information, see Implementing AWS Lambda Tasks Using Lambda tasks in your workflows API Version 2012-01-25 185 Amazon Simple Workflow Service Developer Guide Developing an Activity Worker in Amazon SWF An activity worker provides the implementation of one or more activity types. An activity worker communicates with Amazon SWF to receive activity tasks and perform them. You can have a fleet of multiple activity workers performing activity tasks of the same activity type. Amazon SWF makes an activity task available to activity workers when the decider schedules the activity task. When a decider schedules an activity task, it provides the data (which you determine) that the activity worker needs to perform the activity task. Amazon SWF inserts this data into the activity task before sending it to the activity worker. Activity workers are managed by you. They can be written in any language. A worker can be run anywhere, as long as it can communicate with Amazon SWF through the API. Because Amazon SWF provides all the information needed to perform an activity task, all activity workers can be stateless. Statelessness enables your workflows to be highly scalable; to handle increased capacity requirements, simply add more activity workers. This section explains how to implement an activity worker. The activity workers should repeatedly do the following. 1. Poll Amazon SWF for an activity task. 2. Begin performing the task. 3. Periodically report a heartbeat to Amazon SWF if the task is long-lived. 4. Report that the task completed or failed and return the results to Amazon SWF. Topics • Polling for Activity Tasks • Performing the Activity Task • Reporting Activity Task Heartbeats • Completing or Failing an Activity Task • Launching Activity Workers Polling for Activity Tasks To perform activity tasks, each activity worker must poll Amazon SWF by periodically calling the PollForActivityTask action. Developing an Activity Worker API Version 2012-01-25 186 Amazon Simple Workflow Service Developer Guide In the following example, the activity worker ChargeCreditCardWorker01 polls for a task on the task list, ChargeCreditCard-v0.1. If no activity tasks are available, after 60 seconds, Amazon SWF sends back an empty response. An empty response is a Task structure in which the value of the taskToken is an empty string. https://swf.us-east-1.amazonaws.com PollForActivityTask { "domain" : "867530901", "taskList" : { "name": "ChargeCreditCard-v0.1" }, "identity" : "ChargeCreditCardWorker01" } If an activity task becomes available, Amazon SWF returns it to the activity worker. The task contains the data that the decider specifies when it schedules the activity. After an activity worker receives an activity task, it is ready to perform the work. The next section provides information about performing an activity task. Performing the Activity Task After receiving an activity task, the activity worker is ready to perform it. To perform an activity task 1. Program your activity worker to interpret the content in the input field of the task. This field contains the data specified by the decider when the task was scheduled. 2. Program the activity worker to begin processing the data and executing your logic. The next section describes how to program your activity workers to provide status updates to Amazon SWF for long running activities. Reporting Activity Task Heartbeats If a heartbeat timeout was registered with the activity type, then the activity worker must record a heartbeat before the heartbeat timeout is exceeded. If an activity task doesn't provide a heartbeat within the timeout, the task times out, Amazon
|
swf-dg-064
|
swf-dg.pdf
| 64 |
in the input field of the task. This field contains the data specified by the decider when the task was scheduled. 2. Program the activity worker to begin processing the data and executing your logic. The next section describes how to program your activity workers to provide status updates to Amazon SWF for long running activities. Reporting Activity Task Heartbeats If a heartbeat timeout was registered with the activity type, then the activity worker must record a heartbeat before the heartbeat timeout is exceeded. If an activity task doesn't provide a heartbeat within the timeout, the task times out, Amazon SWF closes it and schedules a new decision task to inform a decider of the timeout. The decider can then reschedule the activity task or take another action. Performing the Activity Task API Version 2012-01-25 187 Amazon Simple Workflow Service Developer Guide If, after timing out, the activity worker attempts to contact Amazon SWF, such as by calling RespondActivityTaskCompleted, Amazon SWF will return an UnknownResource fault. This section describes how to provide an activity heartbeat. To record an activity task heartbeat, program your activity worker to call the RecordActivityTaskHeartbeat action. This action also provides a string field that you can use to store free-form data to quantify progress in whatever way works for your application. In this example, the activity worker reports heartbeat to Amazon SWF and uses the details field to report that the activity task is 40 percent complete. To report heartbeat, the activity worker must specify the task token of the activity task. https://swf.us-east-1.amazonaws.com RecordActivityTaskHeartbeat { "taskToken" : "12342e17-80f6-FAKE-TASK-TOKEN32f0223", "details" : "40" } This action doesn't in itself create an event in the workflow execution history; however, if the task times out, the workflow execution history will contain a ActivityTaskTimedOut event that contains the information from the last heartbeat generated by the activity worker. Completing or Failing an Activity Task After executing a task, the activity worker should report whether the activity task completed or failed. Completing an Activity Task To complete an activity task, program the activity worker to call the RespondActivityTaskCompleted action after it successfully completes an activity task, specifying the task token. In this example, the activity worker indicates that the task completed successfully. https://swf.us-east-1.amazonaws.com RespondActivityTaskCompleted { "taskToken": "12342e17-80f6-FAKE-TASK-TOKEN32f0223", "results": "40" Completing or Failing an Activity Task API Version 2012-01-25 188 Amazon Simple Workflow Service } Developer Guide When the activity completes, Amazon SWF schedules a new decision task for the workflow execution with which the activity is associated. Program the activity worker to poll for another activity task after it has completed the task at hand. This creates a loop where the activity worker continuously polls for and completes tasks. If the activity doesn't respond within the StartToCloseTimeout period, or if ScheduleToCloseTimeout has been exceeded, Amazon SWF times out the activity task and schedules a decision task. This enables a decider to take an appropriate action, such as rescheduling the task. For example, if an Amazon EC2 instance is executing an activity task and the instance fails before the task is complete, the decider receives a timeout event in the workflow execution history. If the activity task is using a heartbeat, the decider receives the event when the task fails to deliver the next heartbeat after the Amazon EC2 instance fails. If not, the decider eventually receives the event when the activity task fails to complete before it hits one of its overall timeout values. It is then up to the decider to re-assign the task or take some other action. Failing an Activity Task If an activity worker can't perform an activity task for some reason, but it can still communicate with Amazon SWF, you can program it to fail the task. To program an activity worker to fail an activity task, program the activity worker to call the RespondActivityTaskFailed action that specifies the task token of the task. https://swf.us-east-1.amazonaws.com RespondActivityTaskFailed { "taskToken" : "12342e17-80f6-FAKE-TASK-TOKEN32f0223", "reason" : "CC-Invalid", "details" : "Credit Card Number Checksum Failed" } As the developer, you define the values that are stored in the reason and details fields. These are free-form strings; you can use any error code conventions that serve your application. Amazon SWF doesn't process these values. However, Amazon SWF may display these values in the console. When an activity task is failed, Amazon SWF schedules a decision task for the workflow execution with which the activity task is associated to inform the decider of the failure. Program your decider Completing or Failing an Activity Task API Version 2012-01-25 189 Amazon Simple Workflow Service Developer Guide to handle failed activities, such as by rescheduling the activity or failing the workflow execution, depending on the nature of the failure. Launching Activity Workers To launch activity workers, package your logic into an executable that you can use on
|
swf-dg-065
|
swf-dg.pdf
| 65 |
values. However, Amazon SWF may display these values in the console. When an activity task is failed, Amazon SWF schedules a decision task for the workflow execution with which the activity task is associated to inform the decider of the failure. Program your decider Completing or Failing an Activity Task API Version 2012-01-25 189 Amazon Simple Workflow Service Developer Guide to handle failed activities, such as by rescheduling the activity or failing the workflow execution, depending on the nature of the failure. Launching Activity Workers To launch activity workers, package your logic into an executable that you can use on your activity worker platform. For example, you might package your activity code as a Java executable that you can run on both Linux and Windows servers. Once launched, your workers start polling for tasks. Until the decider schedules activity tasks, though, these polls time out with no tasks and your workers just continue polling. Because polls are outbound requests, activity worker can run on any network that has access to the Amazon SWF endpoint. You can launch as many activity workers as you like. As the decider schedules activity tasks, Amazon SWF automatically distributes the activity tasks to the polling activity workers. Developing deciders in Amazon SWF A decider is an implementation of the coordination logic of your workflow type that runs during the execution of your workflow. You can run multiple deciders for a single workflow type. Because the execution state for a workflow execution is stored in its workflow history, deciders can be stateless. Amazon SWF maintains the workflow execution history and provides it to a decider with each decision task. This enables you to dynamically add and remove deciders as necessary, which makes the processing of your workflows highly scalable. As the load on your system grows, you simply add more deciders to handle the increased capacity. Note, however, that there can be only one decision task open at any time for a given workflow execution. Every time a state change occurs for a workflow execution, Amazon SWF schedules a decision task. Each time a decider receives a decision task, it does the following: • Interprets the workflow execution history provided with the decision task • Applies the coordination logic based on the workflow execution history and makes decisions on what to do next. Each decision is represented by a Decision structure • Completes the decision task and provides a list of decisions to Amazon SWF. This section describes how to develop a decider, which involves: Launching Activity Workers API Version 2012-01-25 190 Amazon Simple Workflow Service Developer Guide • Programming your decider to poll for decision tasks • Programming your decider to interpret the workflow execution history and make decisions • Programming your decider to respond to a decision task. The examples in this section show how you might program a decider for the e-commerce example workflow. You can implement the decider in any language that you like and run it anywhere, as long as it can communicate with Amazon SWF through its service API. Topics • Defining Coordination Logic • Polling for Decision Tasks • Applying the Coordination Logic • Responding with Decisions • Closing a Workflow Execution • Launching Deciders Defining Coordination Logic The first thing to do when developing a decider is to define the coordination logic. In the e- commerce example, coordination logic that schedules each activity after the previous activity completes might look similar to the following: IF lastEvent = "StartWorkflowInstance" addToDecisions ScheduleVerifyOrderActivity ELSIF lastEvent = "CompleteVerifyOrderActivity" addToDecisions ScheduleChargeCreditCardActivity ELSIF lastEvent = "CompleteChargeCreditCardActivity" addToDecisions ScheduleCompleteShipOrderActivity ELSIF lastEvent = "CompleteShipOrderActivity" addToDecisions ScheduleRecordOrderCompletion ELSIF lastEvent = "CompleteRecordOrderCompletion" addToDecisions CloseWorkflow Defining Coordination Logic API Version 2012-01-25 191 Amazon Simple Workflow Service Developer Guide ENDIF The decider applies the coordination logic to the workflow execution history, and creates a list of decisions when completing the decision task using the RespondDecisionTaskCompleted action. Polling for Decision Tasks Each decider polls for decision tasks. The decision tasks contain the information that the decider uses to generate decisions such as scheduling activity tasks. To poll for decision tasks, the decider uses the PollForDecisionTask action. In this example, the decider polls for a decision task, specifying the customerOrderWorkflow-0.1 tasklist. https://swf.us-east-1.amazonaws.com PollForDecisionTask { "domain": "867530901", "taskList": {"name": "customerOrderWorkflow-v0.1"}, "identity": "Decider01", "maximumPageSize": 50, "reverseOrder": true } If a decision task is available from the task list specified, Amazon SWF returns it immediately. If no decision task is available, Amazon SWF holds the connection open for up to 60 seconds, and returns a task as soon as it becomes available. If no task becomes available, Amazon SWF returns an empty response. An empty response is a Task structure in which the value of taskToken is an empty string. Make sure to program your decider to poll for another task if it receives an
|
swf-dg-066
|
swf-dg.pdf
| 66 |
PollForDecisionTask { "domain": "867530901", "taskList": {"name": "customerOrderWorkflow-v0.1"}, "identity": "Decider01", "maximumPageSize": 50, "reverseOrder": true } If a decision task is available from the task list specified, Amazon SWF returns it immediately. If no decision task is available, Amazon SWF holds the connection open for up to 60 seconds, and returns a task as soon as it becomes available. If no task becomes available, Amazon SWF returns an empty response. An empty response is a Task structure in which the value of taskToken is an empty string. Make sure to program your decider to poll for another task if it receives an empty response. If a decision task is available, Amazon SWF returns a response that contains the decision task as well as a paginated view of the workflow execution history. In this example, the type of the most recent event indicates the workflow execution started and the input element contains the information needed to perform the first task. { "events": [ { Polling for Decision Tasks API Version 2012-01-25 192 Amazon Simple Workflow Service Developer Guide "decisionTaskStartedEventAttributes": { "identity": "Decider01", "scheduledEventId": 2 }, "eventId": 3, "eventTimestamp": 1326593394.566, "eventType": "DecisionTaskStarted" }, { "decisionTaskScheduledEventAttributes": { "startToCloseTimeout": "600", "taskList": { "name": "specialTaskList" } }, "eventId": 2, "eventTimestamp": 1326592619.474, "eventType": "DecisionTaskScheduled" }, { "eventId": 1, "eventTimestamp": 1326592619.474, "eventType": "WorkflowExecutionStarted", "workflowExecutionStartedEventAttributes": { "childPolicy" : "TERMINATE", "executionStartToCloseTimeout" : "3600", "input" : "data-used-decider-for-first-task", "parentInitiatedEventId": 0, "tagList" : ["music purchase", "digital", "ricoh-the-dog"], "taskList": { "name": "specialTaskList" }, "taskStartToCloseTimeout": "600", "workflowType": { "name": "customerOrderWorkflow", "version": "1.0" } } } ], ... } After receiving the workflow execution history, the decider interprets history and makes decisions based on its coordination logic. Because the number of workflow history events for a single workflow execution might be large, the result returned might be split up across a number of pages. To retrieve subsequent pages, make additional calls to PollForDecisionTask using the nextPageToken returned by the initial call. Polling for Decision Tasks API Version 2012-01-25 193 Amazon Simple Workflow Service Developer Guide Note that you do not call GetWorkflowExecutionHistory with this nextPageToken. Instead, call PollForDecisionTask again. Applying the Coordination Logic After the decider receives a decision task, program it to interpret the workflow execution history to determine what has happened so far. Based on this, it should generate a list of decisions. In the e-commerce example, we are concerned only with the last event in the workflow history, so we define the following logic. IF lastEvent = "StartWorkflowInstance" addToDecisions ScheduleVerifyOrderActivity ELSIF lastEvent = "CompleteVerifyOrderActivity" addToDecisions ScheduleChargeCreditCardActivity ELSIF lastEvent = "CompleteChargeCreditCardActivity" addToDecisions ScheduleCompleteShipOrderActivity ELSIF lastEvent = "CompleteShipOrderActivity" addToDecisions ScheduleRecordOrderCompletion ELSIF lastEvent = "CompleteRecordOrderCompletion" addToDecisions CloseWorkflow ENDIF If the lastEvent is CompleteVerifyOrderActivity, you would add the ScheduleChargeCreditCardActivity activity to the list of decisions. After the decider determines the decision(s) to make, it can respond to Amazon SWF with appropriate decisions. Responding with Decisions After interpreting the workflow history and generating a list of decisions, the decider is ready to respond back to Amazon SWF with those decisions. Program your decider to extract the data that it needs from the workflow execution history, then create decisions that specify the next appropriate actions for the workflow. The decider transmits Applying the Coordination Logic API Version 2012-01-25 194 Amazon Simple Workflow Service Developer Guide these decision back to Amazon SWF using the RespondDecisionTaskCompleted action. See the Amazon Simple Workflow Service API Reference for a list of the available decision types. In the e-commerce example, when the decider responds with the set of decisions that it generated, it also includes the credit card input from the workflow execution history. The activity worker then has the information it needs to perform the activity task. When all activities in the workflow execution are complete, the decider closes the workflow execution. https://swf.us-east-1.amazonaws.com RespondDecisionTaskCompleted { "taskToken" : "12342e17-80f6-FAKE-TASK-TOKEN32f0223", "decisions" : [ { "decisionType" :"ScheduleActivityTask", "scheduleActivityTaskDecisionAttributes" : { "control" :"OPTIONAL_DATA_FOR_DECIDER", "activityType" : { "name" :"ScheduleChargeCreditCardActivity", "version" :"1.1" }, "activityId" :"3e2e6e55-e7c4-beef-feed-aa815722b7be", "scheduleToCloseTimeout" :"360", "taskList" : { "name" :"CC_TASKS" }, "scheduleToStartTimeout" :"60", "startToCloseTimeout" :"300", "heartbeatTimeout" :"60", "input" : "4321-0001-0002-1234: 0212 : 234" } } ] } Closing a Workflow Execution When the decider determines that the business process is complete, that is, that there are no more activities to perform, the decider generates a decision to close the workflow execution. Closing a Workflow Execution API Version 2012-01-25 195 Amazon Simple Workflow Service Developer Guide To close a workflow execution, program your decider to interpret the events in the workflow history to determine what has happened in the execution so far and see if the workflow execution should be closed. If the workflow has completed successfully, then close the workflow execution by calling RespondDecisionTaskCompleted with the CompleteWorkflowExecution decision. Alternatively, you can fail an erroneous execution using the FailWorkflowExecution decision. In the e-commerce example, the decider reviews the history and based on the coordination logic adds a decision to
|
swf-dg-067
|
swf-dg.pdf
| 67 |
the workflow execution. Closing a Workflow Execution API Version 2012-01-25 195 Amazon Simple Workflow Service Developer Guide To close a workflow execution, program your decider to interpret the events in the workflow history to determine what has happened in the execution so far and see if the workflow execution should be closed. If the workflow has completed successfully, then close the workflow execution by calling RespondDecisionTaskCompleted with the CompleteWorkflowExecution decision. Alternatively, you can fail an erroneous execution using the FailWorkflowExecution decision. In the e-commerce example, the decider reviews the history and based on the coordination logic adds a decision to close the workflow execution to its list of decisions, and initiates a RespondDecisionTaskCompleted action with a close workflow decision. Note There are some cases where closing a workflow execution fails. For example, if a signal is received while the decider is closing the workflow execution, the close decision will fail. To handle this possibility, ensure that the decider continues polling for decision tasks. Also, ensure that the decider that receives the next decision task responds to the event—in this case, a signal—that prevented the execution from closing. You might also support cancellation of workflow executions. This could be especially useful for long running workflows. To support cancellation, your decider should handle the WorkflowExecutionCancelRequested event in the history. This event indicates that cancellation of the execution has been requested. Your decider should perform appropriate clean-up actions, such as canceling ongoing activity tasks, and closing the workflow calling the RespondDecisionTaskCompleted action with the CancelWorkflowExecution decision. The following example calls RespondDecisionTaskCompleted to specify that the current workflow execution is canceled. https://swf.us-east-1.amazonaws.com RespondDecisionTaskCompleted { "taskToken" : "12342e17-80f6-FAKE-TASK-TOKEN32f0223", "decisions" : [ { "decisionType":"CancelWorkflowExecution", "CancelWorkflowExecutionAttributes":{ "Details": "Customer canceled order" Closing a Workflow Execution API Version 2012-01-25 196 Amazon Simple Workflow Service Developer Guide } } ] } Amazon SWF checks to ensure that the decision to close or cancel the workflow execution is the last decision sent by the decider. That is, it isn't valid to have a set of decisions in which there are decisions after the one that closes the workflow. Launching Deciders After completing decider development, you are ready to launch one or more deciders. To launch deciders, package your coordination logic into an executable that you can use on your decider platform. For example, you might package your decider code as a Java executable that you can run on both Linux and Windows computers. Once launched, your deciders should start polling Amazon SWF for tasks. Until you start workflow executions and Amazon SWF schedules decision tasks, these polls will time out and get empty responses. An empty response is a Task structure in which the value of taskToken is an empty string. Your deciders should simply continue to poll. Amazon SWF ensures that only one decision task can be active for a workflow execution at any time. This prevents issues such as conflicting decisions. Additionally, Amazon SWF ensures that a single decision task is assigned to a single decider, regardless of the number of deciders that are running. If something occurs that generates a decision task while a decider is processing another decision task, Amazon SWF queues the new task until the current task completes. After the current task completes, Amazon SWF makes the new decision task available. Also, decision tasks are batched in the sense that, if multiple activities complete while a decider is processing a decision task, Amazon SWF will create only a single new decision task to account for the multiple task completions. However, each task completion will receive an individual event in the workflow execution history. Because polls are outbound requests, deciders can run on any network that has access to the Amazon SWF endpoint. In order for workflow executions to progress, one or more deciders must be running. You can launch as many deciders as you like. Amazon SWF supports multiple deciders polling on the same task list. Launching Deciders API Version 2012-01-25 197 Amazon Simple Workflow Service Developer Guide Starting workflows in Amazon SWF You can start a workflow execution of a registered workflow type from any application using the StartWorkflowExecution action. When you start the execution you associate an identifier, called the workflowId, with it. The workflowId can be any string that is appropriate for your application, such as the order number in an order processing application. You can't use the same workflowId for multiple open workflow executions within the same domain. For example, if you start two workflow executions with the workflowId Customer Order 01, the second workflow execution will not start and the request will fail. You can, however, reuse the workflowId of a closed execution. Amazon SWF also associates a unique system generated identifier, called the runId, with each workflow execution. After the workflow and activity types are registered, start the workflow
|
swf-dg-068
|
swf-dg.pdf
| 68 |
The workflowId can be any string that is appropriate for your application, such as the order number in an order processing application. You can't use the same workflowId for multiple open workflow executions within the same domain. For example, if you start two workflow executions with the workflowId Customer Order 01, the second workflow execution will not start and the request will fail. You can, however, reuse the workflowId of a closed execution. Amazon SWF also associates a unique system generated identifier, called the runId, with each workflow execution. After the workflow and activity types are registered, start the workflow by calling the StartWorkflowExecution action. The value of the input parameter can be any string specified by the application that is starting the workflow. The executionStartToCloseTimeout is the length of time in seconds that the workflow execution can consume from start to close. Exceeding this limit causes the workflow execution to time out. Unlike some of the other timeout parameters in Amazon SWF, you can't specify a value of NONE for this timeout; there is a one-year maximum limit on the time that a workflow execution can run. Similarly, the taskStartToCloseTimeout is the length of time in seconds that a decision task associated with this workflow execution can take before timing out. https://swf.us-east-1.amazonaws.com StartWorkflowExecution { "domain" : "867530901", "workflowId" : "20110927-T-1", "workflowType" : { "name" : "customerOrderWorkflow", "version" : "1.1" }, "taskList" : { "name" : "specialTaskList" }, "input" : "arbitrary-string-that-is-meaningful-to-the-workflow", "executionStartToCloseTimeout" : "1800", "tagList" : [ "music purchase", "digital", "ricoh-the-dog" ], "taskStartToCloseTimeout" : "1800", "childPolicy" : "TERMINATE" } Starting workflows API Version 2012-01-25 198 Amazon Simple Workflow Service Developer Guide If the StartWorkflowExecution action is successful, Amazon SWF returns the runId for the workflow execution. The runId for a workflow execution is unique within a specific region. Save the runId in case you later need to specify this workflow execution in a call to Amazon SWF. For example, you would use the runId if you later needed to send a signal to the workflow execution. {"runId": "9ba33198-4b18-4792-9c15-7181fb3a8852"} Setting task priority in Amazon SWF By default, tasks on a task list are delivered based upon their arrival time: tasks that are scheduled first are generally run first, as far as possible. By setting an optional task priority, you can give priority to certain tasks: Amazon SWF will attempt to deliver higher-priority tasks on a task list before those with lower priority. Note Tasks that are scheduled first generally run first, but this is not guaranteed. You can set task priorities for both workflows and activities. A workflow's task priority doesn't affect the priority of any activity tasks it schedules, nor does it affect any child workflows it starts. The default priority for an activity or workflow is set (either by you or by Amazon SWF) during registration, and the registered task priority is always used unless it is overridden while scheduling the activity or starting a workflow execution. Task priority values can range from "-2147483648" to "2147483647", with higher numbers indicating higher priority. If you don't set the task priority for an activity or workflow, it will be assigned a priority of zero ("0"). Topics • Setting Task Priority for Workflows • Setting Task Priority for Activities • Actions that Return Task Priority Information Setting task priority API Version 2012-01-25 199 Amazon Simple Workflow Service Developer Guide Setting Task Priority for Workflows You can set the task priority for a workflow when you register it or start it. The task priority that is set when the workflow type is registered is used as the default for any workflow executions of that type, unless it is overridden when starting the workflow execution. To register a workflow type with a default task priority, set the defaultTaskPriority option when using the RegisterWorkflowType action: { "domain": "867530901", "name": "expeditedOrderWorkflow", "version": "1.0", "description": "Expedited customer orders workflow", "defaultTaskStartToCloseTimeout": "600", "defaultExecutionStartToCloseTimeout": "3600", "defaultTaskList": {"name": "mainTaskList"}, "defaultTaskPriority": "10", "defaultChildPolicy": "TERMINATE" } You can override a workflow type's registered task priority when you start a workflow execution with StartWorkflowExecution: { "childPolicy": "TERMINATE", "domain": "867530901", "executionStartToCloseTimeout": "1800", "input": "arbitrary-string-that-is-meaningful-to-the-workflow", "tagList": ["music purchase", "digital", "ricoh-the-dog"], "taskList": {"name": "specialTaskList"}, "taskPriority": "-20", "taskStartToCloseTimeout": "600", "workflowId": "20110927-T-1", "workflowType": {"name": "customerOrderWorkflow", "version": "1.0"}, } You can also override the registered task priority when starting a child workflow or when continuing a workflow as new, such as when responding to a decision with RespondDecisionTaskCompleted. Setting Task Priority for Workflows API Version 2012-01-25 200 Amazon Simple Workflow Service Developer Guide To set a child workflow's task priority, provide the value in startChildWorkflowExecutionDecisionAttributes: { "taskToken": "AAAAKgAAAAEAAAAAAAAAA...", "decisions": [ { "decisionType": "StartChildWorkflowExecution", "startChildWorkflowExecutionDecisionAttributes": { "childPolicy": "TERMINATE", "control": "digital music", "executionStartToCloseTimeout": "900", "input": "201412-Smith-011x", "taskList": {"name": "specialTaskList"}, "taskPriority": "5", "taskStartToCloseTimeout": "600", "workflowId": "verification-workflow", "workflowType": { "name": "MyChildWorkflow", "version": "1.0" } } } ] } When continuing a workflow as
|
swf-dg-069
|
swf-dg.pdf
| 69 |
You can also override the registered task priority when starting a child workflow or when continuing a workflow as new, such as when responding to a decision with RespondDecisionTaskCompleted. Setting Task Priority for Workflows API Version 2012-01-25 200 Amazon Simple Workflow Service Developer Guide To set a child workflow's task priority, provide the value in startChildWorkflowExecutionDecisionAttributes: { "taskToken": "AAAAKgAAAAEAAAAAAAAAA...", "decisions": [ { "decisionType": "StartChildWorkflowExecution", "startChildWorkflowExecutionDecisionAttributes": { "childPolicy": "TERMINATE", "control": "digital music", "executionStartToCloseTimeout": "900", "input": "201412-Smith-011x", "taskList": {"name": "specialTaskList"}, "taskPriority": "5", "taskStartToCloseTimeout": "600", "workflowId": "verification-workflow", "workflowType": { "name": "MyChildWorkflow", "version": "1.0" } } } ] } When continuing a workflow as new, set the task priority in continueAsNewWorkflowExecutionDecisionAttributes: { "taskToken": "AAAAKgAAAAEAAAAAAAAAA...", "decisions": [ { "decisionType": "ContinueAsNewWorkflowExecution", "continueAsNewWorkflowExecutionDecisionAttributes": { "childPolicy": "TERMINATE", "executionStartToCloseTimeout": "1800", "input": "5634-0056-4367-0923,12/12,437", "taskList": {"name": "specialTaskList"}, "taskStartToCloseTimeout": "600", "taskPriority": "100", "workflowTypeVersion": "1.0" } Setting Task Priority for Workflows API Version 2012-01-25 201 Amazon Simple Workflow Service Developer Guide } ] } Setting Task Priority for Activities You can set the task priority for an activity either when registering it or when scheduling it. The task priority that is set when registering an activity type is used as the default priority when the activity is run, unless it is overridden when scheduling the activity. To set task priority when registering an activity type, set the defaultTaskPriority option when using the RegisterActivityType action: { "defaultTaskHeartbeatTimeout": "120", "defaultTaskList": {"name": "mainTaskList"}, "defaultTaskPriority": "10", "defaultTaskScheduleToCloseTimeout": "900", "defaultTaskScheduleToStartTimeout": "300", "defaultTaskStartToCloseTimeout": "600", "description": "Verify the customer credit card", "domain": "867530901", "name": "activityVerify", "version": "1.0" } To schedule a task with a task priority, use the taskPriority option when scheduling the activity with the RespondDecisionTaskCompleted action: { "taskToken": "AAAAKgAAAAEAAAAAAAAAA...", "decisions": [ { "decisionType": "ScheduleActivityTask", "scheduleActivityTaskDecisionAttributes": { "activityId": "verify-account", "activityType": { "name": "activityVerify", "version": "1.0" }, "control": "digital music", Setting Task Priority for Activities API Version 2012-01-25 202 Amazon Simple Workflow Service Developer Guide "input": "abab-101", "taskList": {"name": "mainTaskList"}, "taskPriority": "15" } } ] } Actions that Return Task Priority Information You can get information about the set task priority (or set default task priority) from the following Amazon SWF actions: • DescribeActivityType returns the defaultTaskPriority of the activity type in the configuration section of the response. • DescribeWorkflowExecution returns the taskPriority of the workflow execution in the executionConfiguration section of the response. • DescribeWorkflowType returns the defaultTaskPriority of the workflow type in the configuration section of the response. • GetWorkflowExecutionHistory and PollForDecisionTask provide task priority information in the activityTaskScheduledEventAttributes, decisionTaskScheduledEventAttributes, workflowExecutionContinuedAsNewEventAttributes, and workflowExecutionStartedEventAttributes sections of the response. Handling errors in Amazon SWF There are a number of different types of errors that can occur during the course of a workflow execution. Topics • Validation Errors • Errors in Enacting Actions or Decisions • Timeouts • Errors raised by user code • Errors related to closing a workflow execution Actions that Return Task Priority Information API Version 2012-01-25 203 Amazon Simple Workflow Service Validation Errors Developer Guide Validation errors occur when a request to Amazon SWF fails because it isn't properly formed or it contains invalid data. In this context, a request could be an action such as DescribeDomain or it could be a decision such as StartTimer. If the request is an action, Amazon SWF returns an error code in the response. Check this error code as it may provide information about what aspect of the request caused the failure. For example, one or more of the arguments passed with the request might be invalid. For a list of common error codes, go to the topic for the action in the Amazon Simple Workflow Service API Reference. If the request that failed is a decision, an appropriate event will be listed in the workflow execution history. For example, if the StartTimer decision failed, you would see a StartTimerFailed event in the history. The decider should check for these events when it receives the history in response to PollForDecisionTask or GetWorkflowExecutionHistory. Below is a list of possible decision failure events that can occur when the decision isn't correctly formed or contains invalid data. Errors in Enacting Actions or Decisions Even if the request is properly formed, errors may occur when Amazon SWF attempts to carry out the request. In these cases, one of the following events in the history will indicate that an error occurred. Look at the reason field of the event to determine the cause of failure. • CancelTimerFailed • RequestCancelActivityTaskFailed • RequestCancelExternalWorkflowExecutionFailed • ScheduleActivityTaskFailed • SignalExternalWorkflowExecutionFailed • StartChildWorkflowExecutionFailed • StartTimerFailed Timeouts Deciders, activity workers, and workflow executions all operate within the constraints of timeout periods. In this type of error, a task or a child workflow times out. An event will appear in the history that describes the timeout. The decider should handle this event by, for example, Validation Errors API Version 2012-01-25 204 Amazon Simple Workflow Service Developer Guide rescheduling the task or restarting the
|
swf-dg-070
|
swf-dg.pdf
| 70 |
will indicate that an error occurred. Look at the reason field of the event to determine the cause of failure. • CancelTimerFailed • RequestCancelActivityTaskFailed • RequestCancelExternalWorkflowExecutionFailed • ScheduleActivityTaskFailed • SignalExternalWorkflowExecutionFailed • StartChildWorkflowExecutionFailed • StartTimerFailed Timeouts Deciders, activity workers, and workflow executions all operate within the constraints of timeout periods. In this type of error, a task or a child workflow times out. An event will appear in the history that describes the timeout. The decider should handle this event by, for example, Validation Errors API Version 2012-01-25 204 Amazon Simple Workflow Service Developer Guide rescheduling the task or restarting the child workflow. For more information about timeouts, see Amazon SWF Timeout Types • ActivityTaskTimedOut • ChildWorkflowExecutionTimedOut • DecisionTaskTimedOut • WorkflowExecutionTimedOut Errors raised by user code Examples of this type of error condition are activity task failures and child-workflow failures. As with timeout errors, Amazon SWF adds an appropriate event to the workflow execution history. The decider should handle this event, possibly by rescheduling the task or restarting the child workflow. • ActivityTaskFailed • ChildWorkflowExecutionFailed Errors related to closing a workflow execution Deciders may also see the following events if they attempt to close a workflow that has a pending decision task. • FailWorkflowExecutionFailed • CompleteWorkFlowExecutionFailed • ContinueAsNewWorkflowExecutionFailed • CancelWorkflowExecutionFailed For more information about any of the events listed above, see History Event in the Amazon SWF API Reference. Errors raised by user code API Version 2012-01-25 205 Amazon Simple Workflow Service Developer Guide Amazon SWF Quotas Amazon SWF places quotas on the sizes of certain workflow parameters, such as on the number of domains per account and on the size of the workflow execution history. These quotas are designed to prevent erroneous workflows from consuming all of the resources of the system, but are not hard limits. If you find that your application is frequently exceeding these quotas, you can request a service quota increase. Contents • General Account Quotas for Amazon SWF • Quotas on Workflow Executions • Quotas on Task Executions • Amazon SWF throttling quotas • Throttling quotas for all Regions • Decision quotas for all Regions • Workflow-level quotas • Requesting a quota increase General Account Quotas for Amazon SWF • Maximum registered domains – 100 This quota includes both registered and deprecated domains. • Maximum workflow and activity types – 10,000 each per domain This quota includes both registered and deprecated types. • API call quota – Beyond infrequent spikes, applications may be throttled if they make a large number of API calls in a very short period of time. • Maximum request size – 1 MB per request This is the total data size per Amazon SWF API request, including the request header and all other associated request data. • Truncated responses for Count APIs – Indicates that an internal quota was reached and that the response is not the full count. General Account Quotas for Amazon SWF API Version 2012-01-25 206 Amazon Simple Workflow Service Developer Guide Some queries will internally reach the 1 MB quota mentioned above before returning a full response. The following can return a truncated response instead of the full count. • CountClosedWorkflowExecutions • CountOpenWorkflowExecutions • CountPendingActivityTasks • CountPendingDecisionTasks For each of these, if the truncated response is set to true, the count is less than the full amount. This internal quota can not be increased. • Maximum number of tags – 50 tags per resource. Attempting to add tags beyond 50 will result in a 400 error, TooManyTagsFault. Quotas on Workflow Executions • Maximum open workflow executions – 100,000 per domain This count includes child workflow executions. • Maximum workflow execution time – 1 year. This is a hard quota that can't be changed. • Maximum workflow execution history size – 25,000 events. This is a hard quota that can't be changed. Best practice is to structure each workflow such that its history does not grow beyond 10,000 events. Because the decider has to fetch the workflow history, a smaller history allows the decider to complete more quickly. If using the Flow Framework, you can use ContinueAsNew to continue a workflow with a fresh history. • Maximum open child workflow executions – 1,000 per workflow execution If your use case requires you to go beyond these quotas, you can use features Amazon SWF provides to continue executions and structure your applications using child workflow executions. If you find that you still need a quota increase, see Requesting a quota increase. Quotas on Task Executions • Maximum pollers per task list – 1,000 per task list Quotas on Workflow Executions API Version 2012-01-25 207 Amazon Simple Workflow Service Developer Guide You can have a maximum of 1,000 pollers which simultaneously poll a particular task list. If you go over 1,000, you receive a LimitExceededException. Note While the maximum is
|
swf-dg-071
|
swf-dg.pdf
| 71 |
use case requires you to go beyond these quotas, you can use features Amazon SWF provides to continue executions and structure your applications using child workflow executions. If you find that you still need a quota increase, see Requesting a quota increase. Quotas on Task Executions • Maximum pollers per task list – 1,000 per task list Quotas on Workflow Executions API Version 2012-01-25 207 Amazon Simple Workflow Service Developer Guide You can have a maximum of 1,000 pollers which simultaneously poll a particular task list. If you go over 1,000, you receive a LimitExceededException. Note While the maximum is 1,000, you might encounter LimitExceededException errors well before this quota. This error does not mean your tasks are being delayed. Instead, it means that you have the maximum amount of idle pollers on a task list. Amazon SWF sets this limit to save resources on both the client and server side. Setting the limit prevents an excessive number of pollers from waiting unnecessarily. You can reduce the LimitExceededException errors by using multiple task lists to distribute polling. • Maximum tasks scheduled per second – 2,000 per task list You can schedule a maximum of 2,000 tasks per second on a particular task list. If you exceed 2,000, your ScheduleActivityTask decisions will fail with ACTIVITY_CREATION_RATE_EXCEEDED error. Note While the maximum is 2,000, you might encounter ACTIVITY_CREATION_RATE_EXCEEDED errors well before this quota. To reduce these errors, use multiple task lists to distribute the load. • Maximum task execution time – 1 year (constrained by workflow execution time maximum) You can configure activity timeouts to cause a timeout event to occur if a particular stage of your activity task execution takes too long. • Maximum time SWF will keep a task in the queue – 1 year (constrained by workflow execution time quota) You can configure default activity timeouts during activity registration that will cause a timeout event to occur if a particular stage of your activity task execution takes too long. You can also override the default activity timeouts when you schedule an activity task in your decider code. • Maximum open activity tasks – 1,000 per workflow execution. This quota includes both activity tasks that have been scheduled and those being processed by workers. Quotas on Task Executions API Version 2012-01-25 208 Amazon Simple Workflow Service Developer Guide • Maximum open timers – 1,000 per workflow execution • Maximum input/result data size – 32,768 characters This quota affects activity or workflow execution result data, input data when scheduling activity tasks or workflow executions, and input sent with a workflow execution signal. • Maximum decisions in a decision task response – varies Due to the 1 MB quota on the maximum API request size, the number of decisions returned in a single call to RespondDecisionTaskCompleted will be limited according to the size of the data used by each decision, including the size of any input data provided to scheduled activity tasks or to workflow executions. Amazon SWF throttling quotas In addition to the service quotas described previously, certain Amazon SWF API calls and decision events are throttled to maintain service bandwidth, using a token bucket scheme. If your rate of requests consistently exceeds the rates that are listed here, you can request a throttle quota increase. The throttling and decision quotas are same across all regions. Throttling quotas for all Regions The following quotas are applicable at individual account-levels. You can also request an increase to the following quotas. For information about doing this, see Requesting a quota increase. API name Bucket size Refill rate per second CountClosedWorkflowExecutions CountOpenWorkflowExecutions CountPendingActivityTasks CountPendingDecisionTasks DeleteActivityType 2000 2000 200 200 200 6 6 6 6 6 Amazon SWF throttling quotas API Version 2012-01-25 209 Amazon Simple Workflow Service Developer Guide API name Bucket size Refill rate per second DeleteWorkflowType DeprecateActivityType DeprecateDomain DeprecateWorkflowType DescribeActivityType DescribeDomain DescribeWorkflowExecution DescribeWorkflowType GetWorkflowExecutionHistory ListActivityTypes ListClosedWorkflowExecutions ListDomains ListOpenWorkflowExecutions ListTagsForResource ListWorkflowTypes PollForActivityTask PollForDecisionTask RecordActivityTaskHeartbeat RegisterActivityType RegisterDomain 200 200 100 200 2000 200 2000 2000 2000 200 200 100 200 50 200 2000 2000 2000 200 100 6 6 6 6 6 6 6 6 60 6 6 6 48 30 6 200 200 160 60 6 Throttling quotas for all Regions API Version 2012-01-25 210 Amazon Simple Workflow Service Developer Guide API name Bucket size Refill rate per second RegisterWorkflowType RequestCancelWorkflowExecution RespondActivityTaskCanceled RespondActivityTaskCompleted RespondActivityTaskFailed RespondDecisionTaskCompleted SignalWorkflowExecution StartWorkflowExecution TagResource TerminateWorkflowExecution UndeprecateActivityType UndeprecateDomain UndeprecateWorkflowType UntagResource Decision quotas for all Regions 200 2000 2000 2000 2000 2000 2000 2000 50 2000 200 100 200 50 60 30 200 200 200 200 30 200 30 60 6 6 6 30 The following quotas are applicable at individual account-levels. You can also request an increase to the following quotas. For information about doing this, see Requesting a quota increase. Decision quotas for all Regions API Version 2012-01-25 211 Amazon
|
swf-dg-072
|
swf-dg.pdf
| 72 |
210 Amazon Simple Workflow Service Developer Guide API name Bucket size Refill rate per second RegisterWorkflowType RequestCancelWorkflowExecution RespondActivityTaskCanceled RespondActivityTaskCompleted RespondActivityTaskFailed RespondDecisionTaskCompleted SignalWorkflowExecution StartWorkflowExecution TagResource TerminateWorkflowExecution UndeprecateActivityType UndeprecateDomain UndeprecateWorkflowType UntagResource Decision quotas for all Regions 200 2000 2000 2000 2000 2000 2000 2000 50 2000 200 100 200 50 60 30 200 200 200 200 30 200 30 60 6 6 6 30 The following quotas are applicable at individual account-levels. You can also request an increase to the following quotas. For information about doing this, see Requesting a quota increase. Decision quotas for all Regions API Version 2012-01-25 211 Amazon Simple Workflow Service Developer Guide API name Bucket size Refill rate per second RequestCancelExternalWorkfl 1200 owExecution ScheduleActivityTask SignalExternalWorkflowExecution StartChildWorkflowExecution StartTimer 1000 1200 500 2000 120 200 120 12 200 Workflow-level quotas The following quotas are applicable at workflow-levels and can't be increased. API name Bucket size Refill rate per second GetWorkflowExecuti 400 onHistory SignalWorkflowExec 1000 ution RecordActivityTask 1000 Heartbeat RequestCancelWorkf 200 lowExecution 200 1000 1000 200 Requesting a quota increase For more information, see AWS service quotas in the AWS General Reference. Workflow-level quotas API Version 2012-01-25 212 Amazon Simple Workflow Service Developer Guide Additional resources and reference info for Amazon SWF This chapter provides additional resources and reference information that is useful when developing workflows with Amazon SWF. Topics • Amazon SWF Timeout Types • Amazon Simple Workflow Service Endpoints • Additional Documentation for the Amazon Simple Workflow Service • Web Resources for the Amazon Simple Workflow Service • Migration options for Ruby Flow Amazon SWF Timeout Types To ensure that workflow executions run correctly, you can set different types of timeouts with Amazon SWF. Some timeouts specify how long the workflow can run in its entirety. Other timeouts specify how long activity tasks can take before being assigned to a worker and how long they can take to complete from the time they are scheduled. All timeouts in the Amazon SWF API are specified in seconds. Amazon SWF also supports the string NONE as a timeout value, which indicates no timeout. For timeouts related to decision tasks and activity tasks, Amazon SWF adds an event to the workflow execution history. The attributes of the event provide information about what type of timeout occurred and which decision task or activity task was affected. Amazon SWF also schedules a decision task. When the decider receives the new decision task, it will see the timeout event in the history and take an appropriate action by calling the RespondDecisionTaskCompleted action. A task is considered open from the time that it is scheduled until it is closed. Therefore a task is reported as open while a worker is processing it. A task is closed when a worker reports it as completed, canceled, or failed. A task may also be closed by Amazon SWF as the result of a timeout. Timeouts in Workflow and Decision Tasks The following diagram shows how workflow and decision timeouts are related to the lifetime of a workflow: Timeout Types API Version 2012-01-25 213 Amazon Simple Workflow Service Developer Guide There are two timeout types that are relevant to workflow and decision tasks: • Workflow Start to Close (timeoutType: START_TO_CLOSE) – This timeout specifies the maximum time that a workflow execution can take to complete. It is set as a default during workflow registration, but it can be overridden with a different value when the workflow is started. If this timeout is exceeded, Amazon SWF closes the workflow execution and adds an event of type WorkflowExecutionTimedOut to the workflow execution history. In addition to the timeoutType, the event attributes specify the childPolicy that is in effect for this workflow execution. The child policy specifies how child workflow executions are handled if the parent workflow execution times out or otherwise terminates. For example, if the childPolicy is set to TERMINATE, then child workflow executions will be terminated. Once a workflow execution has timed out, you can't take any action on it other than visibility calls. • Decision Task Start to Close (timeoutType: START_TO_CLOSE) – This timeout specifies the maximum time that the corresponding decider can take to complete a decision task. It is set during workflow type registration. If this timeout is exceeded, the task is marked as timed out in the workflow execution history, and Amazon SWF adds an event of type DecisionTaskTimedOut to the workflow history. The event attributes will include the IDs for the events that correspond to when this decision task was scheduled (scheduledEventId) and when it was started (startedEventId). In addition to adding the event, Amazon SWF also schedules a new decision task to alert the decider that this decision task timed out. After this timeout occurs, an attempt to complete the timed-out decision task using RespondDecisionTaskCompleted will fail. Timeouts in Activity Tasks The following
|
swf-dg-073
|
swf-dg.pdf
| 73 |
timeout is exceeded, the task is marked as timed out in the workflow execution history, and Amazon SWF adds an event of type DecisionTaskTimedOut to the workflow history. The event attributes will include the IDs for the events that correspond to when this decision task was scheduled (scheduledEventId) and when it was started (startedEventId). In addition to adding the event, Amazon SWF also schedules a new decision task to alert the decider that this decision task timed out. After this timeout occurs, an attempt to complete the timed-out decision task using RespondDecisionTaskCompleted will fail. Timeouts in Activity Tasks The following diagram shows how timeouts are related to the lifetime of an activity task: Timeouts in Activity Tasks API Version 2012-01-25 214 Amazon Simple Workflow Service Developer Guide There are four timeout types that are relevant to activity tasks: • Activity Task Start to Close (timeoutType: START_TO_CLOSE) – This timeout specifies the maximum time that an activity worker can take to process a task after the worker has received the task. Attempts to close a timed out activity task using RespondActivityTaskCanceled, RespondActivityTaskCompleted, and RespondActivityTaskFailed will fail. • Activity Task Heartbeat (timeoutType: HEARTBEAT) – This timeout specifies the maximum time that a task can run before providing its progress through the RecordActivityTaskHeartbeat action. • Activity Task Schedule to Start (timeoutType: SCHEDULE_TO_START) – This timeout specifies how long Amazon SWF waits before timing out the activity task if no workers are available to perform the task. Once timed out, the expired task will not be assigned to another worker. • Activity Task Schedule to Close (timeoutType: SCHEDULE_TO_CLOSE) – This timeout specifies how long the task can take from the time it is scheduled to the time it is complete. As a best practice, this value should not be greater than the sum of the task schedule-to-start timeout and the task start-to-close timeout. Note Each of the timeout types has a default value, which is generally set to NONE (infinite). The maximum time for any activity execution is limited to one year, however. Timeouts in Activity Tasks API Version 2012-01-25 215 Amazon Simple Workflow Service Developer Guide You set default values for these during activity type registration, but you can override them with new values when you schedule the activity task. When one of these timeouts occurs, Amazon SWF will add an event of type ActivityTaskTimedOut to the workflow history. The timeoutType value attribute of this event will specify which of these timeouts occurred. For each of the timeouts, the value of timeoutType is shown in parentheses. The event attributes will also include the IDs for the events that correspond to when the activity task was scheduled (scheduledEventId) and when it was started (startedEventId). In addition to adding the event, Amazon SWF also schedules a new decision task to alert the decider that the timeout occurred. Amazon Simple Workflow Service Endpoints A list of the current Amazon SWF Regions and Endpoints are provided in the Amazon Web Services General Reference, along with the endpoints for other services. Amazon SWF domains and all related workflows and activities must exist within the same region to communicate with each other. Furthermore, any registered domains, workflows and activities within a region don't exist in other regions. For example, if you create a domain named "MySampleDomain" in both us-east-1 and in us-west-2, they exist as separate domains: none of the workflows, task lists, activities, or data associated with your domains are shared across regions. If you use other AWS resources in your workflows, such as Amazon EC2 instances, these must also exist in the same region as your Amazon SWF resources. The only exceptions to this are services that span regions, such as Amazon S3 and IAM. You can access these services from workflows that exist in any region that supports them. Additional Documentation for the Amazon Simple Workflow Service In addition to this Developer Guide, you may find the following documentation useful. Amazon Simple Workflow Service API Reference The Amazon Simple Workflow Service API Reference provides detailed information about the Amazon SWF HTTP API, including actions, request and response structures and error codes. Endpoints API Version 2012-01-25 216 Amazon Simple Workflow Service Developer Guide AWS Flow Framework Documentation The AWS Flow Framework is a programming framework that simplifies the process of implementing distributed asynchronous applications that use Amazon SWF to manage their workflows and activities, so you can focus on implementing your workflow logic. Each AWS Flow Framework is designed to work idiomatically in the language for which it is designed, so you can work naturally with your language of choice to implement workflows with all of the benefits of Amazon SWF. There is an AWS Flow Framework for Java. The AWS Flow Framework for Java Developer Guide provides information about how to obtain, set up and use
|
swf-dg-074
|
swf-dg.pdf
| 74 |
The AWS Flow Framework is a programming framework that simplifies the process of implementing distributed asynchronous applications that use Amazon SWF to manage their workflows and activities, so you can focus on implementing your workflow logic. Each AWS Flow Framework is designed to work idiomatically in the language for which it is designed, so you can work naturally with your language of choice to implement workflows with all of the benefits of Amazon SWF. There is an AWS Flow Framework for Java. The AWS Flow Framework for Java Developer Guide provides information about how to obtain, set up and use the AWS Flow Framework for Java. AWS SDK Documentation The AWS Software Development Kits (SDKs) provide access to Amazon SWF in many different programming languages. The SDKs follow the HTTP API closely, but also provide language-specific programming interfaces for some Amazon SWF features. You can find out more information about each SDK by visiting the following links. Note Only SDKs that have support for Amazon SWF at the time of writing are listed here. For a full list of the available AWS SDKs, visit the Tools for Amazon Web Services page. Java The AWS SDK for Java provides a Java API for AWS infrastructure services. To view the available documentation, see the AWS SDK for Java Documentation page. You can also go directly to the Amazon SWF sections in the SDK reference by following these links: • Class: AmazonSimpleWorkflowClient • Class: AmazonSimpleWorkflowAsyncClient • Interface: AmazonSimpleWorkflow • Interface: AmazonSimpleWorkflowAsync AWS Flow Framework Documentation API Version 2012-01-25 217 Amazon Simple Workflow Service JavaScript Developer Guide The AWS SDK for JavaScript allows developers to build libraries or applications that make use of AWS services using a simple and easy-to-use API available both in the browser or inside of Node.js applications on the server. To view the available documentation, see the AWS SDK for JavaScript Documentation page. You can also go directly to the Amazon SWF section in the SDK reference by following this link: • Class: AWS.SimpleWorkflow .NET The AWS SDK for .NET is a single, downloadable package that includes Visual Studio project templates, the AWS .NET library, C# code samples, and documentation. The AWS SDK for .NET makes it easier for Windows developers to build .NET applications for Amazon SWF and other services. To view the available documentation, see the AWS SDK for .NET Documentation page. You can also go directly to the Amazon SWF sections in the SDK reference by following these links: • Namespace: Amazon.SimpleWorkflow • Namespace: Amazon.SimpleWorkflow.Model PHP The AWS SDK for PHP provides a PHP programming interface to Amazon SWF. To view the available documentation, see the AWS SDK for PHP Documentation page. You can also go directly to the Amazon SWF section in the SDK reference by following this link: • Class: SwfClient Python The AWS SDK for Python (Boto) provides a Python programming interface to Amazon SWF. To view the available documentation, see the boto: A Python interface to Amazon Web Services page. You can also go directly to the Amazon SWF sections in the documentation by following these links: • Amazon SWF Tutorial • Amazon SWF Reference AWS SDK Documentation API Version 2012-01-25 218 Amazon Simple Workflow Service Ruby Developer Guide The AWS SDK for Ruby provides a Ruby programming interface to Amazon SWF. To view the available documentation, see the AWS SDK for Ruby Documentation page. You can also go directly to the Amazon SWF section in the SDK reference by following this link: • Class: AWS::SimpleWorkflow AWS CLI Documentation The AWS Command Line Interface (AWS CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts. For more information about the AWS CLI, see the AWS Command Line Interface page. For an overview of the available commands for Amazon SWF, see swf in the AWS CLI Command Reference. Web Resources for the Amazon Simple Workflow Service There are a number of Web resources that you can use to learn more about Amazon SWF or to get help with using the service and developing workflows. Amazon SWF Forum The Amazon SWF forum provides a place for you to communicate with other Amazon SWF developers and members of the Amazon SWF development team at Amazon to ask questions and to get answers. You can visit the forum at: Forum: Amazon Simple Workflow Service. Amazon SWF FAQ The Amazon SWF FAQ provide answers to frequently-asked questions about Amazon SWF, including an overview of common use cases, differences between Amazon SWF and other services, and more. AWS CLI Documentation API Version 2012-01-25 219 Amazon Simple Workflow Service Developer Guide You can access the FAQ here: Amazon SWF FAQ. Amazon SWF Videos The Amazon Web Services
|
swf-dg-075
|
swf-dg.pdf
| 75 |
provides a place for you to communicate with other Amazon SWF developers and members of the Amazon SWF development team at Amazon to ask questions and to get answers. You can visit the forum at: Forum: Amazon Simple Workflow Service. Amazon SWF FAQ The Amazon SWF FAQ provide answers to frequently-asked questions about Amazon SWF, including an overview of common use cases, differences between Amazon SWF and other services, and more. AWS CLI Documentation API Version 2012-01-25 219 Amazon Simple Workflow Service Developer Guide You can access the FAQ here: Amazon SWF FAQ. Amazon SWF Videos The Amazon Web Services channel on YouTube provides video training for all of Amazon's Web Services, including Amazon SWF. For a full list of Amazon SWF-related videos, use the following query: Simple Workflow in Amazon Web Services Migration options for Ruby Flow The AWS Flow Framework for Ruby is no longer under active development. While existing code will continue to work indefinitely, there will be no new features or versions. This topic will cover usage and migration options to continue working with Amazon SWF, and information on how to migrate to Step Functions. Option Description Continue to use the Ruby Flow Framework For now, the Ruby Flow Framework will continue to work. If you do nothing, your code will continue to function as it is. Plan to migrate off of the AWS Flow Framework for Ruby in the near future. Migrate to the Java Flow Framework The Java Flow Framework remains in active development and will continue to receive new features and updates. Migrate to Step Functions Step Functions provides a way to coordinate the components of distributed applications using visual workflows controlled by a state machine. Use the SWF API directly, without the Flow Framework You can continue to work in Ruby and use the SWF API directly instead of the Ruby Flow Framework. The advantage the Flow Framework provides, for either Ruby or Java, is that it allows you to focus on your workflow logic. The framework handles much of the details of communication and coordination, and some of the complexity is abstracted. You can continue to have the same level of abstraction by migrating to the Java Flow Framework, or you can directly interact with Amazon SWF SDK directly. Amazon SWF Videos API Version 2012-01-25 220 Amazon Simple Workflow Service Developer Guide Continue to use the Ruby Flow Framework The AWS Flow Framework for Ruby will continue to function as it does now in the short term. If you have workflows written in the AWS Flow Framework for Ruby, these will continue to work. Without updates, support, or security fixes, it is best to have a firm plan to migrate off of the AWS Flow Framework for Ruby in the near future. Migrate to the Java Flow Framework The AWS Flow Framework for Java will remain in active development. Conceptually, the AWS Flow Framework for Java is similar to AWS Flow Framework for Ruby: you can still focus on your workflow logic, and the framework will help manage your decider logic, and will make other aspects of Amazon SWF easier to manage. • AWS Flow Framework for Java • AWS Flow Framework for Java API Reference Migrate to Step Functions AWS Step Functions provides a service that is similar to Amazon SWF, but where your workflow logic is controlled by a state machine. Step Functions enables you to coordinate the components of distributed applications and microservices using visual workflows. You build applications from individual components that each perform a discrete function, or task, allowing you to scale and change applications quickly. Step Functions provides a reliable way to coordinate components and step through the functions of your application. A graphical console provides a way to visualize the components of your application as a series of steps. It automatically triggers and tracks each step, and retries when there are errors, so your application executes in order and as expected, every time. Step Functions logs the state of each step, so when things do go wrong, you can diagnose and debug problems quickly. In Step Functions, you manage the coordination of your tasks using a state machine, written in declarative JSON, that is defined using the Amazon States Language. By using a state machine, you don't have to write and maintain a decider program to control your application logic. Step Functions provides an intuitive, productive, and agile approach to coordinating application components using visual workflows. You should consider using AWS Step Functions for all your new applications, and Step Functions provides an excellent platform to migrate to for the workflows you currently have implemented in the AWS Flow Framework for Ruby. Continue to use the Ruby Flow Framework API Version 2012-01-25 221 Amazon Simple Workflow Service Developer Guide To help migrate your tasks
|
swf-dg-076
|
swf-dg.pdf
| 76 |
defined using the Amazon States Language. By using a state machine, you don't have to write and maintain a decider program to control your application logic. Step Functions provides an intuitive, productive, and agile approach to coordinating application components using visual workflows. You should consider using AWS Step Functions for all your new applications, and Step Functions provides an excellent platform to migrate to for the workflows you currently have implemented in the AWS Flow Framework for Ruby. Continue to use the Ruby Flow Framework API Version 2012-01-25 221 Amazon Simple Workflow Service Developer Guide To help migrate your tasks to Step Functions, while continuing to leverage your Ruby language skills, Step Functions provides an example Ruby activity worker. This example uses best practices for implementing an activity worker, and can be used as a template to migrate your task logic to Step Functions. For more information, see the Example Activity Worker in Ruby topic in the AWS Step Functions Developer Guide. Note For many customers, migrating to Step Functions from the AWS Flow Framework for Ruby is the best option. But, if you require that signals intervene in your processes, or if you need to launch child processes that return a result to a parent, consider using the Amazon SWF API directly, or migrating to the AWS Flow Framework for Java. For more information on AWS Step Functions, see: • AWS Step Functions Developer Guide • AWS Step Functions API Reference • AWS Step Functions Command Line Reference Use the Amazon SWF API directly While the AWS Flow Framework for Ruby manages some of the complexity of Amazon SWF, you can also use the Amazon SWF API directly. Using the API directly allows you to build workflows where you have full control over implementing tasks and coordinating them, without worrying about underlying complexities such as tracking their progress and maintaining their state. • Amazon Simple Workflow Service Developer Guide • Amazon Simple Workflow Service API Reference Use the Amazon SWF API directly API Version 2012-01-25 222 Amazon Simple Workflow Service Developer Guide Document history The following table describes the important changes to the documentation since the last release of the Amazon Simple Workflow Service Developer Guide. Change Description Documentation- only update Amazon SWF now includes a section about AWS User Notifications, an AWS service that acts as a central location for your AWS notifications in the AWS Management Console. For more information, see Using AWS User Notifications with Amazon Simple Workflow Service. Date Changed May 4, 2023 Update Amazon SWF now provides a new console experience to manage SWF workflows and their execution-related September 12, 2022 actions. For more information, see Amazon SWF console tutorials. Update Updated the Quotas on Task Executions section to include Maximum tasks scheduled per second, and the Amazon SWF Metrics for CloudWatch page to include information about using non-ASCII resource names with CloudWatch. May 12, 2021 New feature Amazon Simple Workflow Service now supports Amazon EventBridge. For more information, see: December 18, 2020 • EventBridge for Amazon SWF • EventBridge User Guide New feature Amazon Simple Workflow Service supports IAM permissio ns using tags. For more information, see the following. June 20, 2019 • Tags in Amazon SWF • Manage tags API Version 2012-01-25 223 Amazon Simple Workflow Service Change Description Developer Guide Date Changed • Tag workflow executions • Control access to domains with tags • TagResource • UntagResource • ListTagsForResource • RegisterDomain New feature Amazon Simple Workflow Service is now available the Europe (Stockholm) region. December 12, 2018 Update Update Improved the Amazon Simple Workflow Service topic on CloudTrail integration. See Recording API calls with AWS August 7, 2018 CloudTrail. Added information on the new PendingTasks metric for CloudWatch. For more information, see Amazon SWF June 18, 2018 Metrics. Update Improved syntax highlighting in code samples. March 29, 2018 Update Update Update Update Added a topic describing options for Ruby Flow users to migrate off of that platform. For more information, see March 9, 2018 Migration options for Ruby Flow. Improved navigation on advanced concepts topic. See Advanced workflow concepts in Amazon SWF. February 19, 2018 Improved CloudWatch metrics documentation by adding valid statistics information. See Amazon SWF Metrics for CloudWatch. December 4, 2017 Changed the TOC to improve the document structure . Added new information on API and Decision Event November 9, 2017 Metrics. API Version 2012-01-25 224 Amazon Simple Workflow Service Change Description Developer Guide Date Changed Update Update Updated the Amazon SWF Quotas section to include throttling limits for all regions. October 18, 2017 Changed task_list to workflowId in the Getting started with Amazon SWF to avoid confusion with July 25, 2017 activity_list . Update Cleaned up the code examples throughout this guide. June 5, 2017 Update Simplified and improved the organization and contents of this guide. May 19,
|
swf-dg-077
|
swf-dg.pdf
| 77 |
4, 2017 Changed the TOC to improve the document structure . Added new information on API and Decision Event November 9, 2017 Metrics. API Version 2012-01-25 224 Amazon Simple Workflow Service Change Description Developer Guide Date Changed Update Update Updated the Amazon SWF Quotas section to include throttling limits for all regions. October 18, 2017 Changed task_list to workflowId in the Getting started with Amazon SWF to avoid confusion with July 25, 2017 activity_list . Update Cleaned up the code examples throughout this guide. June 5, 2017 Update Simplified and improved the organization and contents of this guide. May 19, 2017 Update Updates and link fixes. Update Updates and link fixes. May 16, 2017 October 1, 2016 Lambda task support You can specify Lambda tasks in addition to traditional Activity tasks in your workflows. For more information, July 21, 2015 see AWS Lambda tasks in Amazon SWF. Support for setting task priority Amazon SWF now includes support for setting the priority of tasks on a task list, and will attempt to deliver those December 17, 2014 with higher priority before tasks with lower priority. Information about how to set the task priority for workflows and for activities is provided in Setting task priority in Amazon SWF. Update Added a new topic that describes how to log Amazon SWF API calls using CloudTrail: Recording API calls with AWS CloudTrail. May 8, 2014 API Version 2012-01-25 225 Amazon Simple Workflow Service Change Description Developer Guide Date Changed Update Two new topics related to CloudWatch metrics for Amazon SWF have been added: Amazon SWF Metrics for April 28, 2014 CloudWatch, which provides a list and descriptions of the supported metrics, and Viewing Amazon SWF Metrics for CloudWatch using the AWS Management Console, which provides information about how to view metrics and set alarms with the AWS Management Console. Update Added a new section: Additional resources and reference info for Amazon SWF. This section provides some service March 19, 2014 reference information and provides information about additional documentation, samples, code and other web resources for Amazon SWF developers. Update Added a workflow tutorial. See Getting started with Amazon SWF. October 25, 2013 Update Added AWS CLI information and example. Update Updates and fixes. August 26, 2013 August 1, 2013 Update Updated the document to describe how to use IAM for access control. February 22, 2013 Initial Release This is the first release of the Amazon Simple Workflow Service Developer Guide. October 16, 2012 API Version 2012-01-25 226
|
sync-dg-001
|
sync-dg.pdf
| 1 |
User Guide AWS DataSync Copyright © 2025 Amazon Web Services, Inc. and/or its affiliates. All rights reserved. AWS DataSync User Guide AWS DataSync: User Guide Copyright © 2025 Amazon Web Services, Inc. and/or its affiliates. All rights reserved. Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to, or sponsored by Amazon. AWS DataSync Table of Contents User Guide What is AWS DataSync? .................................................................................................................. 1 Use cases ........................................................................................................................................................ 2 Benefits ........................................................................................................................................................... 3 Additional resources ..................................................................................................................................... 3 How it works .................................................................................................................................... 4 DataSync transfer architecture .................................................................................................................. 4 Transferring between on-premises storage and AWS ..................................................................... 4 Transferring between AWS storage services ..................................................................................... 5 Transferring between cloud storage systems and AWS storage services .................................... 6 Concepts and terminology ......................................................................................................................... 7 Agent .......................................................................................................................................................... 7 Location ..................................................................................................................................................... 7 Task ............................................................................................................................................................ 7 Task execution ......................................................................................................................................... 8 How DataSync transfers files, objects, and directories ......................................................................... 8 How DataSync prepares your data transfer ...................................................................................... 8 How DataSync transfers your data ...................................................................................................... 9 How DataSync verifies your data's integrity ................................................................................... 10 How DataSync works with open and locked files .......................................................................... 10 Recurring transfer options .................................................................................................................. 11 Getting started .............................................................................................................................. 12 Sign up for an AWS account ................................................................................................................... 12 Create a user with administrative access .............................................................................................. 12 Required IAM permissions for using DataSync .................................................................................... 14 AWS managed policies ........................................................................................................................ 14 Customer managed policies ............................................................................................................... 14 Where can I use DataSync? ...................................................................................................................... 14 How can I use DataSync? ......................................................................................................................... 14 How much will DataSync cost? ............................................................................................................... 15 Open-source components used by DataSync ....................................................................................... 15 Do I need an agent? .................................................................................................................................. 15 Situations when you need a DataSync agent ................................................................................. 15 Situations when you don't need a DataSync agent ....................................................................... 16 Using multiple DataSync agents ....................................................................................................... 16 iii AWS DataSync User Guide Next steps ............................................................................................................................................... 17 Agent requirements ................................................................................................................................... 17 Hypervisor requirements ..................................................................................................................... 17 Agent requirements for DataSync transfers .................................................................................... 18 Agent requirements for DataSync Discovery .................................................................................. 19 Agent requirements for partitions .................................................................................................... 19 Agent management requirements .................................................................................................... 19 Deploying your agent ................................................................................................................................ 19 Deploying your agent on VMware .................................................................................................... 20 Deploying your agent on KVM ........................................................................................................... 20 Deploying your agent on Microsoft Hyper-V .................................................................................. 21 Deploying your Amazon EC2 agent .................................................................................................. 22 Deploying your agent on AWS Snowball Edge ............................................................................... 27 Deploying your agent on AWS Outposts ......................................................................................... 27 Choosing a service endpoint for your agent ........................................................................................ 28 Choosing a public service endpoint .................................................................................................. 28 Choosing a FIPS service endpoint ..................................................................................................... 28 Choosing a VPC service endpoint ..................................................................................................... 29 Activating your agent ................................................................................................................................ 31 Prerequisites ........................................................................................................................................... 31 Getting an activation key .................................................................................................................... 32 Activating your agent .......................................................................................................................... 34 Next steps ............................................................................................................................................... 36 Verifying your agent's network connections ........................................................................................ 37 Accessing your agent's local console ................................................................................................ 37 Verifying your agent's connection to your storage system .......................................................... 38 Verifying your agent's connection to the DataSync service ......................................................... 38 Next steps ............................................................................................................................................... 39 Connecting your network ............................................................................................................. 40 1. Network connection between your storage system and agent .................................................... 40 2. Network connection between your agent and DataSync service ................................................. 40 Connecting your storage network to AWS ...................................................................................... 41 Choosing a service endpoint .............................................................................................................. 41 3. Network connection between DataSync service and AWS storage service ................................ 41 Networking when you don't need a DataSync agent ......................................................................... 41 How and where DataSync traffic flows through the network .......................................................... 42 iv AWS DataSync User Guide Network security for DataSync ............................................................................................................... 42 Network requirements .............................................................................................................................. 42 Network requirements for on-premises, self-managed, other cloud, and edge storage ....... 42 Network requirements for AWS storage services ........................................................................... 47 Network requirements for public or FIPS service endpoints ....................................................... 48 Network requirements for VPC service endpoints ......................................................................... 55 Network interfaces for data transfers ................................................................................................... 58 Network interfaces for transfers with agents ................................................................................. 58 Network interfaces for transfers without agents ........................................................................... 59 Viewing your network interfaces ....................................................................................................... 60 Architecture and routing examples with Direct Connect ................................................................... 60 Using Direct Connect with a DataSync VPC service endpoint ..................................................... 61 Using Direct Connect with a DataSync public or FIPS service endpoint .................................... 64 Next steps ............................................................................................................................................... 65 Configuring your agent for multiple NICs ............................................................................................ 65 Transferring data ........................................................................................................................... 66 Where can I transfer my data? ................................................................................................................ 67 Supported transfers in the same AWS account .............................................................................. 67 Supported transfers across AWS accounts ...................................................................................... 69 Supported transfers in the same AWS Region ............................................................................... 70 Supported transfers between AWS Regions ................................................................................... 70 Determining if your transfer requires a DataSync agent .............................................................. 71 Transferring to or from on-premises storage ...................................................................................... 71 Configuring transfers with an NFS file server ................................................................................ 72 Configuring transfers with an SMB file server ................................................................................ 76 Configuring transfers with an HDFS cluster
|
sync-dg-002
|
sync-dg.pdf
| 2 |
.................................... 64 Next steps ............................................................................................................................................... 65 Configuring your agent for multiple NICs ............................................................................................ 65 Transferring data ........................................................................................................................... 66 Where can I transfer my data? ................................................................................................................ 67 Supported transfers in the same AWS account .............................................................................. 67 Supported transfers across AWS accounts ...................................................................................... 69 Supported transfers in the same AWS Region ............................................................................... 70 Supported transfers between AWS Regions ................................................................................... 70 Determining if your transfer requires a DataSync agent .............................................................. 71 Transferring to or from on-premises storage ...................................................................................... 71 Configuring transfers with an NFS file server ................................................................................ 72 Configuring transfers with an SMB file server ................................................................................ 76 Configuring transfers with an HDFS cluster .................................................................................... 86 Configuring transfers with an object storage system ................................................................... 91 Transferring to or from AWS storage .................................................................................................... 95 Configuring transfers with Amazon S3 ............................................................................................ 95 Configuring transfers with Amazon EFS ........................................................................................ 126 Configuring transfers with FSx for Windows File Server ............................................................ 135 Configuring transfers with FSx for Lustre ..................................................................................... 140 Configuring transfers with FSx for OpenZFS ................................................................................ 142 Configuring transfers with FSx for ONTAP ................................................................................... 146 Transferring to or from other cloud storage ..................................................................................... 153 Configuring transfers with Google Cloud Storage ...................................................................... 153 v AWS DataSync User Guide Configuring transfers with Microsoft Azure Blob Storage ......................................................... 161 Configuring transfers with Microsoft Azure Files ........................................................................ 179 Configuring transfers with other cloud object storage .............................................................. 182 Transferring to or from S3 compatible storage on Snowball Edge ............................................... 188 Prerequisites ........................................................................................................................................ 188 Providing DataSync access to S3 compatible storage ................................................................ 189 Creating a DataSync agent in your on-premises storage environment ................................... 191 Configuring the source location for your transfer ....................................................................... 192 Configuring the destination location for your transfer .............................................................. 193 Configuring your transfer settings .................................................................................................. 194 Starting your transfer ........................................................................................................................ 194 Limitations ............................................................................................................................................ 194 Creating a task for transferring data ................................................................................................... 194 Creating your task .............................................................................................................................. 195 Task statuses ........................................................................................................................................ 197 Partitioning large datasets with multiple tasks ........................................................................... 198 Segmenting transferred data with multiple tasks ....................................................................... 198 Choosing a task mode for your transfer ....................................................................................... 199 Choosing what data to transfer ...................................................................................................... 203 Verifying data integrity ..................................................................................................................... 240 Setting bandwidth limits .................................................................................................................. 243 Scheduling your task ......................................................................................................................... 244 Tagging your tasks ............................................................................................................................. 249 Starting a task to transfer data ............................................................................................................ 252 Starting your task ............................................................................................................................... 252 Task execution statuses ..................................................................................................................... 254 Knowing when your task is queued ............................................................................................... 254 Cancelling your task execution ........................................................................................................ 255 Discovering storage ..................................................................................................................... 256 How it works ............................................................................................................................................. 256 DataSync Discovery architecture ..................................................................................................... 256 Concepts and terminology ............................................................................................................... 257 Limitations ............................................................................................................................................ 259 Adding your on-premises storage system .......................................................................................... 259 Accessing your on-premises storage system ................................................................................ 259 Adding your on-premises storage system ..................................................................................... 260 vi AWS DataSync User Guide Removing your on-premises storage system ................................................................................ 262 Logging DataSync Discovery activity to Amazon CloudWatch .................................................. 262 Working with discovery jobs ................................................................................................................. 263 Starting a discovery job .................................................................................................................... 263 Stopping a discovery job .................................................................................................................. 264 Viewing storage resource information ................................................................................................ 264 Viewing information collected about your storage system ....................................................... 265 Getting recommendations ..................................................................................................................... 267 What's included in the recommendations? ................................................................................... 268 What's not included in the recommendations? ............................................................................ 268 Getting recommendations ................................................................................................................ 269 DataSync Discovery statuses ................................................................................................................. 271 Discovery job statuses ....................................................................................................................... 271 Recommendation statuses ................................................................................................................ 272 Monitoring data transfers ........................................................................................................... 274 Understanding data transfer performance counters ........................................................................ 274 Monitoring data transfers with CloudWatch metrics ....................................................................... 291 CloudWatch metrics for DataSync .................................................................................................. 292 Monitoring data transfers with task reports ...................................................................................... 296 Use cases .............................................................................................................................................. 296 Summary only task reports .............................................................................................................. 296 Standard task reports ........................................................................................................................ 297 Example task reports ......................................................................................................................... 300 Limitations ............................................................................................................................................ 303 Creating your task reports ................................................................................................................ 303 Viewing your task reports ................................................................................................................ 313 Monitoring data transfers with CloudWatch Logs ............................................................................ 314 Allowing DataSync to upload logs to a CloudWatch log group ................................................ 314 Configuring logging for your DataSync task ................................................................................ 316 Viewing DataSync task logs ............................................................................................................. 319 Logging with CloudTrail ......................................................................................................................... 320 Working with DataSync information in CloudTrail ...................................................................... 320 Understanding DataSync log file entries ....................................................................................... 321 Monitoring with EventBridge ................................................................................................................ 323 DataSync transfer events .................................................................................................................. 323 DataSync Discovery events ............................................................................................................... 325 vii AWS DataSync User Guide Monitoring with manual tools .............................................................................................................. 326 Monitoring your transfer by using the DataSync console .......................................................... 326 Monitoring your transfer by using the AWS CLI .......................................................................... 326 Monitoring your transfer by using the watch utility .................................................................. 328 Managing resources ..................................................................................................................... 329 Managing your DataSync agent ............................................................................................................ 329 Testing your DataSync agent's connectivity and system resources ............................................... 329 Replacing your DataSync agent ............................................................................................................ 329 Cleaning up DataSync resources ........................................................................................................... 329 Reusing a DataSync agent's infrastructure ......................................................................................... 329 Managing your agent .............................................................................................................................. 329 Agent software updates .................................................................................................................... 330 Agent statuses ..................................................................................................................................... 330 Troubleshooting your agent ............................................................................................................. 331 Performing maintenance on your agent ............................................................................................. 331 Accessing your agent's local console .............................................................................................. 331 Configuring your agent's DHCP and DNS settings ...................................................................... 332 Checking your agent's system resources ....................................................................................... 335 Synchronizing the time on your VMware agent .......................................................................... 337 Running maintenance-related commands for your agent .........................................................
|
sync-dg-003
|
sync-dg.pdf
| 3 |
Managing your DataSync agent ............................................................................................................ 329 Testing your DataSync agent's connectivity and system resources ............................................... 329 Replacing your DataSync agent ............................................................................................................ 329 Cleaning up DataSync resources ........................................................................................................... 329 Reusing a DataSync agent's infrastructure ......................................................................................... 329 Managing your agent .............................................................................................................................. 329 Agent software updates .................................................................................................................... 330 Agent statuses ..................................................................................................................................... 330 Troubleshooting your agent ............................................................................................................. 331 Performing maintenance on your agent ............................................................................................. 331 Accessing your agent's local console .............................................................................................. 331 Configuring your agent's DHCP and DNS settings ...................................................................... 332 Checking your agent's system resources ....................................................................................... 335 Synchronizing the time on your VMware agent .......................................................................... 337 Running maintenance-related commands for your agent ......................................................... 338 Replacing your agent .............................................................................................................................. 339 Creating a new agent ........................................................................................................................ 339 Updating your location with the new agent ................................................................................ 339 Next steps ............................................................................................................................................ 345 Filtering DataSync resources ................................................................................................................. 345 Parameters for filtering ..................................................................................................................... 345 Filtering by location ........................................................................................................................... 346 Filtering by task .................................................................................................................................. 347 Cleaning up DataSync resources ........................................................................................................... 348 Deleting a DataSync agent ............................................................................................................... 348 Reusing a DataSync agent's infrastructure .................................................................................... 349 Deleting a DataSync location ........................................................................................................... 350 Deleting a DataSync task .................................................................................................................. 350 Security ........................................................................................................................................ 352 Data protection ........................................................................................................................................ 352 Encryption in transit .......................................................................................................................... 353 viii AWS DataSync User Guide Encryption at rest ............................................................................................................................... 356 Internetwork traffic privacy .............................................................................................................. 357 Identity and access management ......................................................................................................... 358 Access management ........................................................................................................................... 358 AWS managed policies ...................................................................................................................... 363 Customer managed policies ............................................................................................................. 372 Using service-linked roles ................................................................................................................. 375 Tagging resources during creation .................................................................................................. 382 Cross-service confused deputy prevention ................................................................................... 383 API permissions reference ................................................................................................................ 385 Compliance validation ............................................................................................................................ 394 Resilience ................................................................................................................................................... 395 Infrastructure security ............................................................................................................................. 396 Quotas .......................................................................................................................................... 397 Storage system, file, and object limits ................................................................................................ 397 DataSync quotas ...................................................................................................................................... 397 DataSync Discovery quotas .................................................................................................................... 401 Request a quota increase ....................................................................................................................... 402 Troubleshooting ........................................................................................................................... 403 Troubleshooting agent issues ................................................................................................................ 403 How do I connect to an Amazon EC2 agent's local console? .................................................... 403 What does the Failed to retrieve agent activation key error mean? ........................................ 404 I still can't activate an agent by using a VPC service endpoint ................................................. 404 What do I do if my agent is offline? .............................................................................................. 404 I don't know what's going on with my agent. Can someone help me? ................................... 405 Troubleshooting location issues ........................................................................................................... 406 My task failed with an NFS permissions denied error ................................................................ 406 My task failed with an NFS mount error ....................................................................................... 407 My task failed with an Amazon EFS mount error ........................................................................ 407 File ownership isn't maintained with NFS transfer ..................................................................... 408 My task can't access an SMB location that uses Kerberos ......................................................... 408 My task failed with an input/output error .................................................................................... 409 Error: FsS3UnableToConnectToEndpoint ............................................................................... 410 Error: FsS3HeadBucketFailed .................................................................................................... 410 Task fails with an Unable to list Azure Blobs on the volume root error ......... 411 Error: FsAzureBlobVolRootListBlobsFailed ..................................................................... 411 ix AWS DataSync User Guide Error: SrcLocHitAccess ................................................................................................................ 411 Error: SyncTaskErrorLocationNotAdded ............................................................................... 411 Task with S3 source location fails with HeadObject or GetObjectTagging error ........... 412 Troubleshooting task issues ................................................................................................................... 412 Error: Invalid SyncOption value. Option: TransferMode,PreserveDeletedFiles, Value: ALL,REMOVE. ....................................................................................................................................... 412 Task execution fails with an EniNotFound error .......................................................................... 413 Task execution fails with a Cannot allocate memory error ....................................................... 413 Task execution has a launching status but nothing seems to be happening ......................... 414 Task execution seems stuck in the preparing status ................................................................... 414 Task execution stops before the transfer finishes ....................................................................... 414 Task execution fails when transferring from a Google Cloud Storage bucket ....................... 415 There are mismatches between task execution's timestamps ................................................... 415 Task execution fails with NoMem error ........................................................................................... 415 Object fails to transfer to Azure Blob Storage with user metadata key error ................. 416 There's an /.aws-datasync folder in the destination location ............................................. 416 Can't transfer symbolic links between locations using SMB ..................................................... 416 Task report errors ............................................................................................................................... 417 Troubleshooting data verification issues ............................................................................................ 417 There are mismatches between a file's contents ......................................................................... 417 There's a mismatch between a file's SMB metadata ................................................................... 418 Files to transfer are no longer at source location ....................................................................... 420 DataSync can't verify destination data .......................................................................................... 420 DataSync can't read object metadata ............................................................................................ 421 There's a mismatch in an object's system-defined metadata .................................................... 422 Understanding data verification duration ..................................................................................... 423 Troubleshooting S3 storage costs with DataSync ............................................................................. 424 Tutorials ....................................................................................................................................... 425 Transferring from on-premises to S3 across accounts ..................................................................... 425 Overview ............................................................................................................................................... 425 Prerequisite: Required source account permissions ..................................................................... 426 Prerequisite: Required destination account permissions ............................................................ 428 Step 1: In your source account, create a DataSync agent .......................................................... 429 Step 2: In your source account, create a DataSync IAM role for destination bucket access .. 429 Step 3: In your destination account, update your S3 bucket policy ......................................... 431 Step 4: In your destination account, disable ACLs for your S3 bucket .................................... 433 x AWS DataSync User Guide Step 5: In your source account, create a DataSync source location for your on-premises storage .................................................................................................................................................. 433 Step 6: In your
|
sync-dg-004
|
sync-dg.pdf
| 4 |
425 Overview ............................................................................................................................................... 425 Prerequisite: Required source account permissions ..................................................................... 426 Prerequisite: Required destination account permissions ............................................................ 428 Step 1: In your source account, create a DataSync agent .......................................................... 429 Step 2: In your source account, create a DataSync IAM role for destination bucket access .. 429 Step 3: In your destination account, update your S3 bucket policy ......................................... 431 Step 4: In your destination account, disable ACLs for your S3 bucket .................................... 433 x AWS DataSync User Guide Step 5: In your source account, create a DataSync source location for your on-premises storage .................................................................................................................................................. 433 Step 6: In your source account, create a DataSync destination location for your S3 bucket .................................................................................................................................................... 433 Step 7: In your source account, create and start your DataSync task ...................................... 435 Related resources ................................................................................................................................ 435 Transferring between S3 buckets across accounts ........................................................................... 436 Overview ............................................................................................................................................... 437 Prerequisite: Required source account permissions ..................................................................... 438 Prerequisite: Required destination account permissions ............................................................ 440 Step 1: In your source account, create a DataSync IAM role for destination bucket access .. 441 Step 2: In your destination account, update your S3 bucket policy ......................................... 443 Step 3: In your destination account, disable ACLs for your S3 bucket .................................... 444 Step 4: In your source account, create your DataSync locations .............................................. 445 Step 5: In your source account, create and start your DataSync task ...................................... 447 Troubleshooting .................................................................................................................................. 448 Related: Cross-account transfers with S3 buckets using server-side encryption ................... 447 Performing a large migration ..................................................................................................... 449 What is a large data migration? ........................................................................................................... 449 Key stages of a large data migration .................................................................................................. 449 Additional resources ................................................................................................................................ 450 Stage 1: Planning your migration ........................................................................................................ 450 Gathering requirements .................................................................................................................... 451 Running a proof of concept ............................................................................................................. 456 Estimating migration timelines ....................................................................................................... 458 Stage 2: Implementing your migration ............................................................................................... 460 Accelerating your migration with partitioning ............................................................................. 460 Running your DataSync tasks .......................................................................................................... 462 Monitoring your transfers ................................................................................................................. 463 DataSync API ................................................................................................................................ 465 Actions ........................................................................................................................................................ 465 AddStorageSystem ............................................................................................................................. 468 CancelTaskExecution .......................................................................................................................... 474 CreateAgent ......................................................................................................................................... 476 CreateLocationAzureBlob .................................................................................................................. 481 CreateLocationEfs ............................................................................................................................... 486 xi AWS DataSync User Guide CreateLocationFsxLustre ................................................................................................................... 492 CreateLocationFsxOntap ................................................................................................................... 496 CreateLocationFsxOpenZfs ............................................................................................................... 501 CreateLocationFsxWindows .............................................................................................................. 506 CreateLocationHdfs ............................................................................................................................ 511 CreateLocationNfs .............................................................................................................................. 518 CreateLocationObjectStorage .......................................................................................................... 523 CreateLocationS3 ................................................................................................................................ 529 CreateLocationSmb ............................................................................................................................ 536 CreateTask ............................................................................................................................................ 544 DeleteAgent ......................................................................................................................................... 554 DeleteLocation .................................................................................................................................... 556 DeleteTask ............................................................................................................................................ 558 DescribeAgent ...................................................................................................................................... 560 DescribeDiscoveryJob ......................................................................................................................... 565 DescribeLocationAzureBlob .............................................................................................................. 569 DescribeLocationEfs ........................................................................................................................... 573 DescribeLocationFsxLustre ................................................................................................................ 579 DescribeLocationFsxOntap ................................................................................................................ 582 DescribeLocationFsxOpenZfs ............................................................................................................ 587 DescribeLocationFsxWindows .......................................................................................................... 591 DescribeLocationHdfs ........................................................................................................................ 595 DescribeLocationNfs ........................................................................................................................... 600 DescribeLocationObjectStorage ....................................................................................................... 604 DescribeLocationS3 ............................................................................................................................ 608 DescribeLocationSmb ......................................................................................................................... 613 DescribeStorageSystem ..................................................................................................................... 619 DescribeStorageSystemResourceMetrics ........................................................................................ 624 DescribeStorageSystemResources ................................................................................................... 630 DescribeTask ........................................................................................................................................ 640 DescribeTaskExecution ....................................................................................................................... 649 GenerateRecommendations .............................................................................................................. 665 ListAgents ............................................................................................................................................. 668 ListDiscoveryJobs ................................................................................................................................ 671 ListLocations ........................................................................................................................................ 674 ListStorageSystems ............................................................................................................................ 677 xii AWS DataSync User Guide ListTagsForResource ........................................................................................................................... 680 ListTaskExecutions .............................................................................................................................. 683 ListTasks ................................................................................................................................................ 686 RemoveStorageSystem ...................................................................................................................... 689 StartDiscoveryJob ............................................................................................................................... 691 StartTaskExecution ............................................................................................................................. 695 StopDiscoveryJob ................................................................................................................................ 702 TagResource ......................................................................................................................................... 704 UntagResource .................................................................................................................................... 707 UpdateAgent ........................................................................................................................................ 709 UpdateDiscoveryJob ........................................................................................................................... 711 UpdateLocationAzureBlob ................................................................................................................ 713 UpdateLocationEfs ............................................................................................................................. 717 UpdateLocationFsxLustre .................................................................................................................. 721 UpdateLocationFsxOntap .................................................................................................................. 724 UpdateLocationFsxOpenZfs .............................................................................................................. 727 UpdateLocationFsxWindows ............................................................................................................. 730 UpdateLocationHdfs .......................................................................................................................... 734 UpdateLocationNfs ............................................................................................................................. 740 UpdateLocationObjectStorage ......................................................................................................... 743 UpdateLocationS3 .............................................................................................................................. 748 UpdateLocationSmb ........................................................................................................................... 752 UpdateStorageSystem ....................................................................................................................... 759 UpdateTask .......................................................................................................................................... 763 UpdateTaskExecution ......................................................................................................................... 769 Data Types ................................................................................................................................................. 771 AgentListEntry ..................................................................................................................................... 774 AzureBlobSasConfiguration .............................................................................................................. 776 Capacity ................................................................................................................................................ 777 Credentials ........................................................................................................................................... 779 DiscoveryJobListEntry ........................................................................................................................ 781 DiscoveryServerConfiguration .......................................................................................................... 782 Ec2Config .............................................................................................................................................. 783 FilterRule .............................................................................................................................................. 785 FsxProtocol ........................................................................................................................................... 787 FsxProtocolNfs ..................................................................................................................................... 788 xiii AWS DataSync User Guide FsxProtocolSmb ................................................................................................................................... 789 FsxUpdateProtocol ............................................................................................................................. 791 FsxUpdateProtocolSmb ..................................................................................................................... 792 HdfsNameNode ................................................................................................................................... 794 IOPS ....................................................................................................................................................... 796 Latency .................................................................................................................................................. 798 LocationFilter ....................................................................................................................................... 800 LocationListEntry ................................................................................................................................ 802 ManifestConfig .................................................................................................................................... 804 MaxP95Performance .......................................................................................................................... 806 NetAppONTAPCluster ........................................................................................................................ 809 NetAppONTAPSVM ............................................................................................................................. 813 NetAppONTAPVolume ....................................................................................................................... 817 NfsMountOptions ............................................................................................................................... 821 OnPremConfig ..................................................................................................................................... 823 Options ................................................................................................................................................. 824 P95Metrics ............................................................................................................................................ 833 Platform ................................................................................................................................................ 835 PrivateLinkConfig ................................................................................................................................ 836 QopConfiguration ............................................................................................................................... 838 Recommendation ................................................................................................................................ 839 ReportDestination ............................................................................................................................... 841 ReportDestinationS3 .......................................................................................................................... 842 ReportOverride .................................................................................................................................... 844 ReportOverrides .................................................................................................................................. 845 ReportResult ........................................................................................................................................ 847 ResourceDetails ................................................................................................................................... 848 ResourceMetrics .................................................................................................................................. 850 S3Config ............................................................................................................................................... 852 S3ManifestConfig ................................................................................................................................ 853 SmbMountOptions ............................................................................................................................. 855 SourceManifestConfig ........................................................................................................................ 857 StorageSystemListEntry .................................................................................................................... 858 TagListEntry ......................................................................................................................................... 860 TaskExecutionFilesFailedDetail ......................................................................................................... 861 TaskExecutionFilesListedDetail ......................................................................................................... 863 xiv AWS DataSync User Guide TaskExecutionListEntry ...................................................................................................................... 865 TaskExecutionResultDetail ................................................................................................................ 867 TaskFilter .............................................................................................................................................. 870 TaskListEntry ........................................................................................................................................ 872 TaskReportConfig ................................................................................................................................ 874 TaskSchedule ....................................................................................................................................... 876 TaskScheduleDetails ........................................................................................................................... 878 Throughput .......................................................................................................................................... 880 Common Errors ........................................................................................................................................ 881 Common Parameters ............................................................................................................................... 882 Document history ........................................................................................................................ 886 AWS Glossary ............................................................................................................................... 899 xv AWS DataSync User Guide What is AWS DataSync? AWS DataSync is an online data transfer and discovery service that simplifies data migration and helps you quickly, easily, and securely transfer your file or object data to, from, and between AWS storage services. On-premises storage transfers DataSync works with the following on-premises storage systems: • Network File System (NFS) • Server Message Block (SMB) • Hadoop Distributed File Systems (HDFS) • Object storage AWS storage transfers DataSync works
|
sync-dg-005
|
sync-dg.pdf
| 5 |
TaskScheduleDetails ........................................................................................................................... 878 Throughput .......................................................................................................................................... 880 Common Errors ........................................................................................................................................ 881 Common Parameters ............................................................................................................................... 882 Document history ........................................................................................................................ 886 AWS Glossary ............................................................................................................................... 899 xv AWS DataSync User Guide What is AWS DataSync? AWS DataSync is an online data transfer and discovery service that simplifies data migration and helps you quickly, easily, and securely transfer your file or object data to, from, and between AWS storage services. On-premises storage transfers DataSync works with the following on-premises storage systems: • Network File System (NFS) • Server Message Block (SMB) • Hadoop Distributed File Systems (HDFS) • Object storage AWS storage transfers DataSync works with the following AWS storage services: • Amazon S3 • Amazon EFS • Amazon FSx for Windows File Server • Amazon FSx for Lustre • Amazon FSx for OpenZFS • Amazon FSx for NetApp ONTAP Other cloud storage transfers DataSync works with the following other cloud storage services: • Google Cloud Storage • Microsoft Azure Blob Storage • Microsoft Azure Files • Wasabi Cloud Storage 1 User Guide AWS DataSync • DigitalOcean Spaces • Oracle Cloud Infrastructure Object Storage • Cloudflare R2 Storage • Backblaze B2 Cloud Storage • NAVER Cloud Object Storage • Alibaba Cloud Object Storage Service • IBM Cloud Object Storage • Seagate Lyve Cloud Edge storage transfers DataSync works with the following edge storage services and devices: • Amazon S3 compatible storage on AWS Snowball Edge Use cases These are some of the main use cases for DataSync: • Discover data – Get visibility into your on-premises storage performance and utilization. AWS DataSync Discovery can also provide recommendations for migrating your data to AWS storage services. • Migrate data – Transfer active datasets rapidly over the network into AWS storage services. DataSync includes automatic encryption and data integrity validation to help make sure that your data arrives securely, intact, and ready to use. • Archive cold data – Move cold data stored in on-premises storage directly to durable and secure long-term storage classes such as S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive. Doing so can free up on-premises storage capacity and shut down legacy systems. • Replicate data – Copy data into any Amazon S3 storage class, choosing the most cost-effective storage class for your needs. You can also send data to Amazon EFS, FSx for Windows File Server, FSx for Lustre, or FSx for OpenZFS for a standby file system. • Transfer data for timely in-cloud processing – Transfer data in or out of AWS for processing. This approach can speed up critical hybrid cloud workflows across many industries. These include machine learning in the life-sciences industry, video production in media and entertainment, big- data analytics in financial services, and seismic research in the oil and gas industry. Use cases 2 AWS DataSync Benefits User Guide By using DataSync, you can get the following benefits: • Automate data movement – DataSync makes it easier to transfer data over the network between storage systems and services. DataSync automates both the management of data- transfer processes and the infrastructure required for high performance and secure data transfers. • Transfer data securely – DataSync provides end-to-end security, including encryption and data integrity validation, to help ensure that your data arrives securely, intact, and ready to use. DataSync accesses your AWS storage through built-in AWS security mechanisms, such as AWS Identity and Access Management (IAM) roles. It also supports virtual private cloud (VPC) endpoints, giving you the option to transfer data without traversing the public internet and further increasing the security of data copied online. • Move data faster – DataSync uses a purpose-built network protocol and a parallel, multi- threaded architecture to accelerate your transfers. This approach speeds up migrations, recurring data-processing workflows for analytics and machine learning, and data-protection processes. Additional resources We recommend that you read the following: • DataSync resources – Includes blogs, videos, and other training materials • AWS re:Post – See the latest discussion around DataSync • AWS DataSync pricing Benefits 3 AWS DataSync User Guide How AWS DataSync works Learn the key concepts and terminology related to AWS DataSync transfers, including how data gets transferred from on-premises and cloud locations. DataSync transfer architecture The following diagrams show how and where DataSync commonly transfers storage data. For a full list of DataSync supported storage systems and services, see Where can I transfer my data with AWS DataSync? Topics • Transferring between on-premises storage and AWS • Transferring between AWS storage services • Transferring between cloud storage systems and AWS storage services Transferring between on-premises storage and AWS The following diagram shows a high-level overview of DataSync transferring files between self- managed, on-premises storage systems and AWS services. DataSync transfer architecture 4 AWS DataSync User Guide The diagram illustrates a common DataSync use case: • A DataSync agent copying data from an
|
sync-dg-006
|
sync-dg.pdf
| 6 |
DataSync commonly transfers storage data. For a full list of DataSync supported storage systems and services, see Where can I transfer my data with AWS DataSync? Topics • Transferring between on-premises storage and AWS • Transferring between AWS storage services • Transferring between cloud storage systems and AWS storage services Transferring between on-premises storage and AWS The following diagram shows a high-level overview of DataSync transferring files between self- managed, on-premises storage systems and AWS services. DataSync transfer architecture 4 AWS DataSync User Guide The diagram illustrates a common DataSync use case: • A DataSync agent copying data from an on-premises storage system. • Data moving into AWS via Transport Layer Security (TLS). • DataSync copying data to a supported AWS storage service. Transferring between AWS storage services The following diagram shows a high-level overview of DataSync transferring files between AWS services in the same AWS account. The diagram illustrates a common DataSync use case: • DataSync copying data from a supported AWS storage service. • Data moving across AWS Regions via TLS. • DataSync copying data to a supported AWS storage service. When transferring between AWS storage services (whether in the same AWS Region or across AWS Regions), your data remains in the AWS network and doesn't traverse the public internet. Transferring between AWS storage services 5 AWS DataSync Important User Guide You pay for data transferred between AWS Regions. This is billed as data transfer OUT from your source Region to your destination Region. For more information, see Data transfer pricing. Transferring between cloud storage systems and AWS storage services With DataSync, you can transfer data between other cloud storage systems and AWS services. In this context, cloud storage systems can include: • Self-managed storage systems, such as an NFS file server in your virtual private cloud (VPC) within AWS. • Storage systems or services hosted by another cloud provider. For more information, see Transferring to or from other cloud storage with AWS DataSync. The following diagram shows a high-level overview of DataSync transferring data between AWS storage services and another cloud provider. Transferring between cloud storage systems and AWS storage services 6 AWS DataSync User Guide Concepts and terminology Familiarize yourself with DataSync transfer features. Topics • Agent • Location • Task • Task execution Agent An agent is a virtual machine (VM) appliance that DataSync uses to read from and write to storage during a transfer. You can deploy an agent in your storage environment on VMware ESXi, Linux Kernel-based Virtual Machine (KVM), or Microsoft Hyper-V hypervisors. For storage in a virtual private cloud (VPC) in AWS, you can deploy an agent as an Amazon EC2 instance. A DataSync transfer agent is no different than an agent that you can use for DataSync Discovery, but we don't recommend using the same agent for these scenarios. To get started, see Do I need an AWS DataSync agent? Location A location describes where you're copying data from or to. Each DataSync transfer (also known as a task) has a source and destination location. For more information, see Where can I transfer my data with AWS DataSync? Task A task describes a DataSync transfer. It identifies a source and destination location along with details about how to copy data between those locations. You also can specify how a task treats metadata, deleted files, and permissions. Concepts and terminology 7 AWS DataSync Task execution User Guide A task execution is an individual run of a DataSync transfer task. There are several phases involved in a task execution. For more information, see Task execution statuses. How DataSync transfers files, objects, and directories During a task execution, DataSync prepares, transfers, and verifies your data. How DataSync performs these actions depends on how you configure your DataSync task options, such as the task mode. Basic mode tasks prepare, transfer, and verify your data sequentially, while Enhanced mode tasks do these in parallel. Topics • How DataSync prepares your data transfer • How DataSync transfers your data • How DataSync verifies your data's integrity • How DataSync works with open and locked files • Recurring transfer options How DataSync prepares your data transfer DataSync by default prepares your transfer by examining your source and destination locations to determine what to transfer. This is done by scanning the contents and metadata of both locations to identify differences between the two. Note If you configure your task to transfer all data, there's no preparation. When you start your task, DataSync immediately transfers everything from your source to your destination without comparing locations. How DataSync prepares your transfer also depends on your task mode: Task execution 8 AWS DataSync User Guide Enhanced mode preparation Basic mode preparation DataSync prepares objects as they're found at the source location. Preparation continues Preparation can take just minutes, a few hours,
|
sync-dg-007
|
sync-dg.pdf
| 7 |
destination locations to determine what to transfer. This is done by scanning the contents and metadata of both locations to identify differences between the two. Note If you configure your task to transfer all data, there's no preparation. When you start your task, DataSync immediately transfers everything from your source to your destination without comparing locations. How DataSync prepares your transfer also depends on your task mode: Task execution 8 AWS DataSync User Guide Enhanced mode preparation Basic mode preparation DataSync prepares objects as they're found at the source location. Preparation continues Preparation can take just minutes, a few hours, or even longer depending on the throughout the task execution until there are number of files, objects, or directories in no more objects listed at the source. both locations and the performance of your Unlike Basic mode, DataSync can prepare storage. virtually unlimited numbers of objects with The items that DataSync inventories in your each task execution. source and destination count towards your task quotas. These quotas aren't based on the number of items that DataSync transfers during each task execution. DataSync might skip some files, objects, and directories during preparation. The reasons for this can depend on several factors, such as how you configure your task and storage system permissions. Here are some examples: • There's a file that exists in your source and destination locations. The file in the source hasn't been modified since the previous task execution. Since you're only transferring data that has changed, DataSync doesn't transfer that file next time you run your task. • An object that exists in both of your locations changes in your source. When you run your task, DataSync skips this object in your destination because your task doesn't overwrite data in the destination. • DataSync skips an object in your source location that's using an archival storage class and isn't restored. You must restore an archived object for DataSync to read it. • DataSync skips a file, object, or directory in your source location because it can't read it. If this happens and isn't expected, check your storage's access permissions and make sure that DataSync can read what was skipped. How DataSync transfers your data DataSync copies your data (including metadata) from the source to the destination based on your task options. For example, you can specify what metadata gets copied, exclude certain files, and limit how much bandwidth DataSync uses, among other options. How DataSync transfers your data 9 AWS DataSync User Guide How DataSync transfers your data also depends on your task mode: Enhanced mode transferring Basic mode transferring DataSync transfers each object as soon as it's prepared. Once DataSync prepares all of your data, the transfer begins. DataSync might skip some items during the transfer. If you configure your task to transfer all data, this can happen with an object in your source location that's using an archival storage class and isn't restored. How DataSync verifies your data's integrity DataSync always performs integrity checks on your data during a transfer. At the end of a transfer, DataSync can also perform additional checks on just the transferred data or the entire dataset in both locations. For more information, see Configuring how AWS DataSync verifies data integrity. When checking data integrity, DataSync calculates and compares the checksum and metadata of the files, objects, or directories in your locations. If DataSync notices differences between locations, verification fails with an error. For example, you might see errors such as Checksum failure, Metadata failure, Files were added, or Files were removed. How verification works depends on your task mode and whether you configure DataSync to verify data integrity at the end of your transfer. Enhanced mode verification Basic mode verification DataSync verifies each object as it's transferr ed to your destination. At the end of your transfer, DataSync verifies the integrity of your data. With Enhanced mode, DataSync verifies only transferred data. Depending on how you configure data verification, this can take a significant amount of time for large datasets. How DataSync works with open and locked files Keep in mind the following when trying to transfer files that are open (in use) or locked: How DataSync verifies your data's integrity 10 AWS DataSync User Guide • In general, DataSync can transfer open files without any limitations. • If a file is open and being written to during a transfer, DataSync can detect this kind of inconsistency during the transfer task's verification phase. To get the latest version of the file, you must run the task again. • If a file is locked and the server prevents DataSync from opening it, DataSync skips the file during the transfer and logs an error. • DataSync can't lock or unlock files. Recurring transfer options In addition to one-time transfers, DataSync can transfer data
|
sync-dg-008
|
sync-dg.pdf
| 8 |
10 AWS DataSync User Guide • In general, DataSync can transfer open files without any limitations. • If a file is open and being written to during a transfer, DataSync can detect this kind of inconsistency during the transfer task's verification phase. To get the latest version of the file, you must run the task again. • If a file is locked and the server prevents DataSync from opening it, DataSync skips the file during the transfer and logs an error. • DataSync can't lock or unlock files. Recurring transfer options In addition to one-time transfers, DataSync can transfer data on a recurring basis. Some of the options for these situations include: • Scheduling when your task executes. • Transferring only the data that's changed since the previous task execution. • Deleting data in the destination location that's no longer present in the source. Recurring transfer options 11 AWS DataSync User Guide Getting started with AWS DataSync Before you get started with AWS DataSync, you need to sign up for an AWS account if you don't have one. We also recommend learning where DataSync can be used and how much it might cost to transfer your data. Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. Sign up for an AWS account 12 AWS DataSync User Guide For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. 2. Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide. Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide. Assign access to additional users 1. In IAM Identity Center, create a permission set that follows the best practice of applying least- privilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide. 2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Create a user with administrative access 13 AWS DataSync User Guide Required IAM permissions for using DataSync DataSync can transfer your data to an Amazon S3 bucket, Amazon EFS file system, or a number of other AWS storage services. To get your data where you want it to go, you need the right IAM permissions granted to your identity. For example, the IAM role that you use with DataSync needs permission to use the Amazon S3 operations required to transfer data to an S3 bucket. You can grant these permissions with IAM policies provided by AWS or by creating your own policies. Contents • AWS managed policies • Customer managed policies AWS managed policies AWS provides the following managed policies for common
|
sync-dg-009
|
sync-dg.pdf
| 9 |
to an Amazon S3 bucket, Amazon EFS file system, or a number of other AWS storage services. To get your data where you want it to go, you need the right IAM permissions granted to your identity. For example, the IAM role that you use with DataSync needs permission to use the Amazon S3 operations required to transfer data to an S3 bucket. You can grant these permissions with IAM policies provided by AWS or by creating your own policies. Contents • AWS managed policies • Customer managed policies AWS managed policies AWS provides the following managed policies for common DataSync use cases: • AWSDataSyncReadOnlyAccess – Provides read-only access to DataSync. • AWSDataSyncFullAccess – Provides full access to DataSync and minimal access to its dependencies. For more information, see AWS managed policies for AWS DataSync. Customer managed policies You can create custom IAM policies to use with DataSync. For more information, see IAM customer managed policies for AWS DataSync. Where can I use DataSync? For a list of AWS Regions and endpoints that DataSync supports, see AWS DataSync endpoints and quotas in the AWS General Reference. How can I use DataSync? There are several ways to use DataSync: Required IAM permissions for using DataSync 14 AWS DataSync User Guide • DataSync console, which is part of the AWS Management Console. • DataSync API or the AWS CLI to programmatically configure and manage DataSync. • AWS CloudFormation or Terraform to provision your DataSync resources. • AWS SDKs to build applications that use DataSync. How much will DataSync cost? On the DataSync pricing page, create a custom estimate using the amount of data that you plan to transfer. Open-source components used by DataSync To view the open-source components used by DataSync, download the following link: • datasync-open-source-components.zip Do I need an AWS DataSync agent? To use AWS DataSync, you might need an agent. An agent is a virtual machine (VM) appliance that you deploy in your storage environment for data transfers or storage discovery. Whether you need an agent depends on several factors, including the type of storage you're transferring to or from, if you're transferring across AWS accounts, and which AWS Regions you're transferring between. Before reading further, check that DataSync supports the transfer you're interested in. Once you know that DataSync supports your transfer, review the following information to help you understand if you need an agent. Situations when you need a DataSync agent Most situations that require a DataSync agent involve storage that's managed by you or another cloud provider. • Transferring to or from on-premises storage • Transferring to or from other cloud storage How much will DataSync cost? 15 AWS DataSync User Guide • Transferring to or from edge storage • Transferring between some AWS storage services across AWS accounts (when neither storage service is Amazon S3) For more information, see Supported transfers across AWS accounts. • Transferring between a commercial AWS Region and AWS GovCloud (US) Region • Using AWS DataSync Discovery Situations when you don't need a DataSync agent The situations that don't require an agent apply whether you're transferring in the same AWS Region or across Regions. • Transferring between AWS storage services in the same AWS account • Transferring between an S3 bucket and a different AWS storage service across AWS accounts Using multiple DataSync agents You can use more than one DataSync agent with your data transfers. While most transfers only need one agent, using multiple agents can speed up transfers of large datasets with millions of files or objects. In these situations, we recommend running transfer tasks in parallel. This approach spreads out the transfer workload across multiple tasks (each of which uses its own agent). It also helps reduce the time it takes DataSync to prepare and transfer your data. For more information, see Partitioning large datasets with multiple tasks. Another option—especially if you have millions of small files—is using multiple agents with a transfer location. For example, you can connect up to four agents to your on-premises Network File System (NFS) file service. This option can speed up your transfer, though the time it takes DataSync to prepare the transfer doesn’t change. With either approach, be mindful that these can increase the I/O operations on your storage and affect your network bandwidth. For more information on using multiple agents for your DataSync transfers, see the AWS Storage Blog. If you're thinking of using multiple agents, remember the following: Situations when you don't need a DataSync agent 16 AWS DataSync User Guide • Using multiple agents with a location doesn't provide high availability. All the agents associated with a location must be online before you can start your transfer task. If one of the agents is offline, you can't run your task. • If you're using a virtual private cloud
|
sync-dg-010
|
sync-dg.pdf
| 10 |
can increase the I/O operations on your storage and affect your network bandwidth. For more information on using multiple agents for your DataSync transfers, see the AWS Storage Blog. If you're thinking of using multiple agents, remember the following: Situations when you don't need a DataSync agent 16 AWS DataSync User Guide • Using multiple agents with a location doesn't provide high availability. All the agents associated with a location must be online before you can start your transfer task. If one of the agents is offline, you can't run your task. • If you're using a virtual private cloud (VPC) service endpoint to communicate with the DataSync service, all the agents must use the same endpoint and subnet. • With DataSync Discovery, you can only use one agent per storage system. Next steps • If you need an agent, review the agent requirements to understand what makes sense for your storage environment and what you need DataSync for. • If you don't need an agent for your transfer, you can start configuring your transfer. Requirements for AWS DataSync agents Before you deploy an AWS DataSync agent in your storage environment, make sure that you understand the agent hypervisor and resource requirements. Hypervisor requirements You can run a DataSync agent on the following hypervisors: • VMware ESXi (version 6.5, 6.7, 7.0, or 8.0): VMware ESXi is available on the Broadcom website. You also need a VMware vSphere client to connect to the host. • Linux Kernel-based Virtual Machine (KVM): A free, open-source virtualization technology. KVM is included in Linux versions 2.6.20 and newer. DataSync is tested and supported for the CentOS/RHEL 7 and 8, Ubuntu 16.04 LTS, and Ubuntu 18.04 LTS distributions. Other modern Linux distribution might work, but function or performance is not guaranteed. You must enable hardware accelerated virtualization on your KVM host to deploy your DataSync agent. We recommend this option if you already have a KVM environment up and running and you're already familiar with how KVM works. Running KVM on Amazon EC2 isn't supported and can't be used for DataSync agents. • Microsoft Hyper-V (version 2012 R2, 2016, or 2019): For this setup, you need a Microsoft Hyper-V Manager on a Microsoft Windows client computer to connect to the host. Next steps 17 AWS DataSync User Guide The DataSync agent is a generation 1 virtual machine (VM). For more information about the differences between generation 1 and generation 2 VMs, see Should I create a generation 1 or 2 virtual machine in Hyper-V? • Amazon EC2: DataSync provides an Amazon Machine Image (AMI) that contains the DataSync image. For the recommended instance types, see Amazon EC2 instance requirements. Agent requirements for DataSync transfers For DataSync transfers, your agent must meet the following resource requirements. Important Keep in mind that the agent requirements for working with up to 20 million files, objects, or directories are general guidelines. Your agent may need more resources because of other factors, such as how many directories you have and object metadata size. For example, the m5.2xlarge instance for an Amazon EC2 agent still might not be enough for a transfer of less than 20 million files. Contents • Virtual machine requirements • Amazon EC2 instance requirements Virtual machine requirements When deploying a DataSync agent that isn't on an Amazon EC2 instance, the agent VM requires the following resources: • Virtual processors: Four virtual processors assigned to the VM. • Disk space: 80 GB of disk space for installing the VM image and system data. • RAM: Depending on your transfer scenario, you need the following amount of memory: • 32 GB of RAM assigned to the VM for task executions working with up to 20 million files, objects, or directories. • 64 GB of RAM assigned to the VM for task executions working with more than 20 million files, objects, or directories. Agent requirements for DataSync transfers 18 AWS DataSync User Guide Amazon EC2 instance requirements When deploying a DataSync agent on an Amazon EC2 instance, the instance size must be at least 2xlarge. We recommend using one of the following instance sizes: • m5.2xlarge: For task executions working with up to 20 million files, objects, or directories. • m5.4xlarge: For task executions working with more than 20 million files, objects, or directories. Agent requirements for DataSync Discovery Whether it's a VM or Amazon EC2 instance, the agent that you use with DataSync Discovery must have 80 GB of disk space and 16 GB of RAM. Agent requirements for partitions DataSync agent images are associated with specific partitions. For example, by default you can't download an agent in a commercial AWS Region and then activate it in an AWS GovCloud (US) Region. Agent management requirements Once you activate your DataSync agent, AWS manages the agent for you. For more information,
|
sync-dg-011
|
sync-dg.pdf
| 11 |
For task executions working with more than 20 million files, objects, or directories. Agent requirements for DataSync Discovery Whether it's a VM or Amazon EC2 instance, the agent that you use with DataSync Discovery must have 80 GB of disk space and 16 GB of RAM. Agent requirements for partitions DataSync agent images are associated with specific partitions. For example, by default you can't download an agent in a commercial AWS Region and then activate it in an AWS GovCloud (US) Region. Agent management requirements Once you activate your DataSync agent, AWS manages the agent for you. For more information, see Managing your AWS DataSync agent. Deploying your AWS DataSync agent When creating an AWS DataSync agent, the first step is to deploy the agent in your storage environment. You can deploy an agent as a virtual machine (VM) on VMware ESXi, Linux Kernel- based Virtual Machine (KVM), and Microsoft Hyper-V hypervisors. You also can deploy an agent as an Amazon EC2 instance in a virtual private cloud (VPC) within AWS. Tip Before you begin, confirm whether you need a DataSync agent. Agent requirements for DataSync Discovery 19 AWS DataSync User Guide Deploying your agent on VMware You can download an agent from the DataSync console and deploy it in your VMware environment. Before you begin: Make sure that your storage environment can support a DataSync agent. For more information, see Virtual machine requirements. To deploy an agent on VMware 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. 3. In the left navigation pane, choose Agents, and then choose Create agent. For Hypervisor, choose VMWare ESXi, and then choose Download the image. The agent downloads in a .zip file that contains an .ova image file. 4. To minimize network latency, deploy the agent as close as possible to the storage system that DataSync needs to access (the same local network if possible). For more information, see Network requirements for on-premises, self-managed, other cloud, and edge storage. If needed, see your hypervisor's documentation on how to deploy an .ova file in a VMware host. 5. Power on your hypervisor, log in to the agent VM, and get the agent's IP address. You need this IP address to activate the agent. The agent VM's default credentials are login admin and password password. If needed, change the password through the VM's local console. Next step: Choosing a service endpoint for your AWS DataSync agent Deploying your agent on KVM You can download an agent from the DataSync console and deploy it in your KVM environment. Before you begin: Make sure that your storage environment can support a DataSync agent. For more information, see Virtual machine requirements. To deploy an agent on KVM 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. Deploying your agent on VMware 20 AWS DataSync User Guide 2. 3. In the left navigation pane, choose Agents, and then choose Create agent. For Hypervisor, choose Kernel-based Virtual Machine (KVM), and then choose Download the image. The agent downloads in a .zip file that contains a .qcow2 image file. 4. To minimize network latency, deploy the agent as close as possible to the storage system that DataSync needs to access (the same local network if possible). For more information, see Network requirements for on-premises, self-managed, other cloud, and edge storage. 5. Run the following command to install your .qcow2 image. virt-install \ --name "datasync" \ --description "DataSync agent" \ --os-type=generic \ --ram=32768 \ --vcpus=4 \ --disk path=datasync-yyyymmdd-x86_64.qcow2,bus=virtio,size=80 \ --network default,model=virtio \ --graphics none \ --virt-type kvm \ --import For information about how to manage this VM and your KVM host, see your hypervisor's documentation. 6. Power on your hypervisor, log in to your VM, and get the IP address of the agent. You need this IP address to activate the agent. The agent VM's default credentials are login admin and password password. If needed, change the password through the VM's local console. Next step: Choosing a service endpoint for your AWS DataSync agent Deploying your agent on Microsoft Hyper-V You can download an agent from the DataSync console and deploy it in your Microsoft Hyper-V environment. Before you begin: Make sure that your storage environment can support a DataSync agent. For more information, see Virtual machine requirements. Deploying your agent on Microsoft Hyper-V 21 AWS DataSync To deploy an agent on Hyper-V User Guide 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. 3. In the left navigation pane, choose Agents, and then choose Create agent. For Hypervisor, choose Microsoft Hyper-V, and then choose Download the image. The agent downloads in a .zip file that contains a .vhdx image file. 4. To minimize network latency, deploy the agent as close as possible to the storage system that DataSync needs to access (the same local network if possible). For more information, see
|
sync-dg-012
|
sync-dg.pdf
| 12 |
more information, see Virtual machine requirements. Deploying your agent on Microsoft Hyper-V 21 AWS DataSync To deploy an agent on Hyper-V User Guide 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. 3. In the left navigation pane, choose Agents, and then choose Create agent. For Hypervisor, choose Microsoft Hyper-V, and then choose Download the image. The agent downloads in a .zip file that contains a .vhdx image file. 4. To minimize network latency, deploy the agent as close as possible to the storage system that DataSync needs to access (the same local network if possible). For more information, see Network requirements for on-premises, self-managed, other cloud, and edge storage. If needed, see your hypervisor's documentation on how to deploy a .vhdx file in a Hyper-V host. Warning You may notice poor network performance if you enable virtual machine queue (VMQ) on a Hyper-V host that's using a Broadcom network adapter. For information about a workaround, see the Microsoft documentation. 5. Power on your hypervisor, log in to your VM, and get the IP address of the agent. You need this IP address to activate the agent. The agent VM's default credentials are login admin and password password. If needed, change the password through the VM's local console. Next step: Choosing a service endpoint for your AWS DataSync agent Deploying your Amazon EC2 agent You might deploy a DataSync agent as an Amazon EC2 instance when transferring data between: • A self-managed cloud storage system (for example, an NFS file server in AWS) and an AWS storage service. • A cloud storage provider (such as Microsoft Azure Blob Storage or Google Cloud Storage) and an AWS storage service. • An S3 bucket in a commercial AWS Region and an S3 bucket in an AWS GovCloud (US) Region. Deploying your Amazon EC2 agent 22 AWS DataSync User Guide • Amazon S3 on AWS Outposts and an AWS storage service. Warning We don't recommend using an Amazon EC2 agent with on-premises storage because of increased network latency. Instead, deploy the agent as a VMware, KVM, or Hyper-V virtual machine in your data center as close to your on-premises storage as possible. Deploying your EC2 agent To choose the agent AMI for your AWS Region 1. Open a terminal and copy the following AWS CLI command to get the latest DataSync Amazon Machine Image (AMI) ID for the Region where you want to deploy your Amazon EC2 agent. aws ssm get-parameter --name /aws/service/datasync/ami --region your-region 2. Run the command. In the output, take note of the "Value" property with the DataSync AMI ID. Example Example command and output aws ssm get-parameter --name /aws/service/datasync/ami --region us-east-1 { "Parameter": { "Name": "/aws/service/datasync/ami", "Type": "String", "Value": "ami-1234567890abcdef0", "Version": 6, "LastModifiedDate": 1569946277.996, "ARN": "arn:aws:ssm:us-east-1::parameter/aws/service/datasync/ami" } } Deploying your Amazon EC2 agent 23 AWS DataSync User Guide To deploy your Amazon EC2 agent Tip To avoid charges for transferring across Availability Zones, deploy your agent in a way that it doesn't require network traffic between Availability Zones. (To learn more about data transfer prices for all AWS Regions, see Amazon EC2 Data Transfer pricing.) For example, deploy your agent in the Availability Zone where your self-managed cloud storage system is located. 1. Copy the following URL: https://console.aws.amazon.com/ec2/v2/home?region=agent- region#LaunchInstanceWizard:ami=ami-id • Replace agent-region with the Region where you want to deploy your agent. • Replace ami-id with the DataSync AMI ID that you obtained. 2. Paste the URL into a browser. The Amazon EC2 instance launch page in the AWS Management Console displays. For Instance type, choose one of the recommended Amazon EC2 instances for DataSync. For Key pair, choose an existing key pair, or create a new one. For Network settings, choose Edit and then do the following: 3. 4. 5. a. b. For VPC, choose a VPC where you want to deploy your agent. For Auto-assign public IP, choose whether you want your agent to be accessible from the public internet. You use the instance's public or private IP address later to activate your agent. c. For Firewall (security groups), create or a select a security group that does the following: • If needed, allows inbound traffic to the Amazon EC2 instance on port 80 (HTTP). Some options for getting an agent activation key require this connection. • Allows inbound and outbound traffic between the Amazon EC2 instance the storage system that you're transferring data to or from. For more information, see Network requirements for on-premises, self-managed, other cloud, and edge storage. Deploying your Amazon EC2 agent 24 AWS DataSync User Guide Note There are additional ports to configure depending on the type of service endpoint that your agent uses. 6. (Recommended) To increase performance when transferring from a cloud-based file system, expand Advanced details and choose a Placement group value where your storage is located. 7. Choose
|
sync-dg-013
|
sync-dg.pdf
| 13 |
for getting an agent activation key require this connection. • Allows inbound and outbound traffic between the Amazon EC2 instance the storage system that you're transferring data to or from. For more information, see Network requirements for on-premises, self-managed, other cloud, and edge storage. Deploying your Amazon EC2 agent 24 AWS DataSync User Guide Note There are additional ports to configure depending on the type of service endpoint that your agent uses. 6. (Recommended) To increase performance when transferring from a cloud-based file system, expand Advanced details and choose a Placement group value where your storage is located. 7. Choose Launch instance to launch your Amazon EC2 instance. 8. Once your instance status is Running, choose the instance. 9. If you configured your instance to be accessible from the public internet, make note of the instance's public IP address. If you didn't, make note of the private IP address. You need this IP address when activating your agent. Examples: Deploying your EC2 agent in an AWS Region The following guidance can help with common scenarios if you deploy an DataSync agent in an AWS Region. Topics • Deploying your agent for transfers between cloud file systems and Amazon S3 • Deploying your agent for transfers between Amazon S3 to AWS file systems Deploying your agent for transfers between cloud file systems and Amazon S3 To transfer data between AWS accounts, or from a cloud file system, the DataSync agent must be located in the same AWS Region and AWS account where the source file system resides. This type of transfer includes the following: • Transfers between Amazon EFS or FSx for Windows File Server file systems to AWS storage in a different AWS account. • Transfers from self-managed file systems to AWS storage services. Deploying your Amazon EC2 agent 25 AWS DataSync Important User Guide Deploy your agent such that it doesn't require network traffic between Availability Zones (to avoid charges for such traffic). • To access your Amazon EFS or FSx for Windows File Server file system, deploy the agent in an Availability Zone that has a mount target to your file system. • For self-managed file systems, deploy the agent in the Availability Zone where your file system resides. To learn more about data transfer prices for all AWS Regions, see Amazon EC2 On-Demand pricing. For example, the following diagram shows a high-level view of the DataSync architecture for transferring data from in-cloud Network File trr System (NFS) to in-cloud NFS or Amazon S3. Remember the following when transferring between AWS storage services across AWS accounts: • When transferring between Amazon EFS file systems, configure your source file system as an NFS location. • When transferring between Amazon FSx for Windows File Server file systems, configure your source file system as an SMB location. Deploying your Amazon EC2 agent 26 AWS DataSync User Guide Deploying your agent for transfers between Amazon S3 to AWS file systems The following diagram provides a high-level view of the DataSync architecture for transferring data from Amazon S3 to an AWS file system, such as Amazon EFS or Amazon FSx. You can use this architecture to transfer data from one AWS account to another, or to transfer data from Amazon S3 to a self-managed in-cloud file system. Deploying your agent on AWS Snowball Edge For more information and instructions, see Creating a DataSync agent in your on-premises storage environment for Amazon S3 compatible storage. Deploying your agent on AWS Outposts You can launch a DataSync Amazon EC2 instance on your Outpost. To learn more about launching an AMI on AWS Outposts, see Launch an instance on your Outpost in the AWS Outposts User Guide. When using DataSync to access Amazon S3 on Outposts, you must launch the agent in a VPC that's allowed to access your Amazon S3 access point, and activate the agent in the parent Region of the Outpost. The agent must also be able to route to the Amazon S3 on Outposts endpoint for the bucket. To learn more about working with Amazon S3 on Outposts endpoints, see Working with Amazon S3 on Outposts in the Amazon S3 User Guide. Deploying your agent on AWS Snowball Edge 27 AWS DataSync User Guide Choosing a service endpoint for your AWS DataSync agent A service endpoint is how your AWS DataSync agent communicates with the DataSync service. DataSync supports the following types of service endpoints: • Public service endpoint – Data is sent over the public internet. • Federal Information Processing Standard (FIPS) service endpoint – Data is sent over the public internet by using processes that comply with FIPS. • Virtual private cloud (VPC) service endpoint – Data is sent through your VPC instead of over the public internet, increasing the security of your transferred data. You need a service endpoint
|
sync-dg-014
|
sync-dg.pdf
| 14 |
User Guide Choosing a service endpoint for your AWS DataSync agent A service endpoint is how your AWS DataSync agent communicates with the DataSync service. DataSync supports the following types of service endpoints: • Public service endpoint – Data is sent over the public internet. • Federal Information Processing Standard (FIPS) service endpoint – Data is sent over the public internet by using processes that comply with FIPS. • Virtual private cloud (VPC) service endpoint – Data is sent through your VPC instead of over the public internet, increasing the security of your transferred data. You need a service endpoint to activate your agent. When choosing a service endpoint, remember the following: • An agent can only use one type of endpoint. If you need to transfer data using different endpoint types, create an agent for each type. • How you connect your storage network to AWS determines what service endpoints you can use. • With DataSync Discovery, you can only use public endpoints. Choosing a public service endpoint If you use a public service endpoint, all communication between your DataSync agent and the DataSync service occurs over the public internet. 1. Determine the DataSync public service endpoint that you want to use. 2. Configure your network to allow the traffic required for using DataSync public service endpoints. Next step: Activating your AWS DataSync agent Choosing a FIPS service endpoint DataSync provides some service endpoints that comply with FIPS. For more information, see FIPS endpoints in the AWS General Reference. 1. Determine the DataSync FIPS service endpoint that you want to use. Choosing a service endpoint for your agent 28 AWS DataSync User Guide 2. Configure your network to allow the traffic required for using DataSync FIPS service endpoints. Next step: Activating your AWS DataSync agent Choosing a VPC service endpoint If you use a VPC service endpoint, your data isn't transferred across the public internet. DataSync instead transfers data through a VPC that's based on the Amazon VPC service. Contents • How DataSync agents work with VPC service endpoints • DataSync limitations with VPCs • Creating a VPC service endpoint for DataSync How DataSync agents work with VPC service endpoints VPC service endpoints are provided by AWS PrivateLink. These types of endpoints let you privately connect supported AWS services to your VPC. When you use a VPC service endpoint with DataSync, all communication between your DataSync agent and the DataSync service remains in your VPC. The VPC service endpoint (along with the network interfaces DataSync creates for data transfer traffic) are private IP addresses that are only accessible from inside your VPC. For more information, see Connecting your network for AWS DataSync transfers. DataSync limitations with VPCs • VPCs that you use with DataSync must have default tenancy. VPCs with dedicated tenancy aren't supported. • DataSync doesn't support shared VPCs. • DataSync VPC service endpoints only support IPv4. IPv6 and dualstack options aren't supported. Creating a VPC service endpoint for DataSync You create a VPC service endpoint for DataSync in a VPC that you manage. Your service endpoint, VPC, and DataSync agent must belong to the same AWS account. Choosing a VPC service endpoint 29 AWS DataSync User Guide The following diagram shows an example of DataSync using a VPC service endpoint for transferring from an on-premises storage system to an Amazon S3 bucket. The numbered callouts correspond to the steps to create a VPC service endpoint. To create a VPC service endpoint for DataSync 1. Create or determine a VPC and subnet where you want to create your VPC service endpoint. If you're transferring to or from storage that's outside AWS, the VPC should extend to that storage environment (for example, your storage environment might be a data center where your on-premises NFS file server is located). You can do this by using routing rules over AWS Direct Connect or VPN. 2. Create a DataSync VPC service endpoint by doing the following: a. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. b. c. d. e. f. In the left navigation pane, choose Endpoints, then choose Create endpoint. For Service category, choose AWS services. For Services, search for datasync and choose the endpoint for the Region you're in (for example, com.amazonaws.us-east-1.datasync). For VPC, choose the VPC where you want to create the VPC service endpoint. Expand Additional settings and clear the Enable Private DNS Name check box to disable this setting. Choosing a VPC service endpoint 30 AWS DataSync User Guide We recommend disabling this setting in case you have agents in the same VPC that need to use a public service endpoint. An agent can't reach a public service endpoint over the network when this setting is enabled. g. For Subnet, choose the subnet where you want to create the VPC service endpoint. Take note of the subnet ARN
|
sync-dg-015
|
sync-dg.pdf
| 15 |
com.amazonaws.us-east-1.datasync). For VPC, choose the VPC where you want to create the VPC service endpoint. Expand Additional settings and clear the Enable Private DNS Name check box to disable this setting. Choosing a VPC service endpoint 30 AWS DataSync User Guide We recommend disabling this setting in case you have agents in the same VPC that need to use a public service endpoint. An agent can't reach a public service endpoint over the network when this setting is enabled. g. For Subnet, choose the subnet where you want to create the VPC service endpoint. Take note of the subnet ARN (you need this when activating your agent). h. Choose Create endpoint. Take note of the endpoint ID (you need this when activating your agent). 3. In your VPC, configure a security group that allows the traffic required for using DataSync VPC service endpoints. Take note of the security group ARN (you need this when activating your agent). The security group must allow your agent to connect with the private IP addresses of the VPC service endpoint and your network interfaces (which get created when you create your task). Next step: Activating your AWS DataSync agent Activating your AWS DataSync agent To finish creating your AWS DataSync agent, you must activate it. This step associates the agent with your AWS account. Note You can't activate an agent in more than one AWS account and AWS Region at a time. Prerequisites To activate your DataSync agent, make sure that you have the following information: • The DataSync service endpoint that you're activating your agent with. If you're using a VPC service endpoint, you need these details: • The VPC service endpoint ID. • The subnet where your VPC service endpoint is located. • The security group that allows the traffic required for using DataSync VPC service endpoints. Activating your agent 31 AWS DataSync User Guide • Your agent's IP address or domain name. How you find this depends on the type of agent that you deploy. For example, if your agent is an Amazon EC2 instance, you can find its IP address by going to the instance's page on the Amazon EC2 console. Getting an activation key You can obtain an activation key for your deployed DataSync agent a few different ways. Some options require access to your agent on port 80 (HTTP). If you use one of these options, DataSync closes the port once you activate the agent. Note Agent activation keys expire in 30 minutes if unused. DataSync console When activating your agent in the DataSync console, DataSync can get the activation key for you by using the Automatically get the activation key from your agent option. To use this option, your browser must be able to reach your agent on port 80. Agent local console Unlike the other options for getting an activation key, this option doesn't require your agent to be accessible on port 80. 1. Log in to the local console of your agent virtual machine (VM) or Amazon EC2 instance. 2. On the AWS DataSync Activation - Configuration main menu, enter 0 to get an activation key. 3. 4. Enter the AWS Region that you're activating your agent in. Enter the type of service endpoint type that your agent is using. 5. Copy the activation key that displays. For example: F0EFT-7FPPR-GG7MC-3I9R3-27DOH You specify this key when activating your agent. Getting an activation key 32 AWS DataSync CLI User Guide With standard Unix tools, you can run a curl request to your agent's IP address to get its activation key. To use this option, your client must be able to reach your agent on port 80. You can run the following command to check: nc -vz agent-ip-address 80 Once you confirm you can reach the agent, run one of the following commands depending on the type of service endpoint that you're using: • Public service endpoints: curl "http://agent-ip-address/?gatewayType=SYNC&activationRegion=your- region&no_redirect" • FIPS service endpoints: curl "http://agent-ip-address/?gatewayType=SYNC&activationRegion=your- region&endpointType=FIPS&no_redirect" • VPC service endpoints: curl "http://agent-ip-address/?gatewayType=SYNC&activationRegion=your- region&privateLinkEndpoint=vpc-endpoint-ip- address&endpointType=PRIVATE_LINK&no_redirect" To find the vpc-endpoint-ip-address, open the Amazon VPC console, choose Endpoints, and select your DataSync VPC service endpoint. On the Subnets tab, locate the IP address for your VPC service endpoint's subnet. This is the endpoint's IP address. This command returns an activation key. For example: F0EFT-7FPPR-GG7MC-3I9R3-27DOH You specify this key when activating your agent. Getting an activation key 33 AWS DataSync Activating your agent User Guide You have several options for activating your DataSync agent. Once activated, AWS manages the agent for you. DataSync console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. 3. In the left navigation pane, choose Agents, and then choose Create agent. In the Service endpoint section, do the following to specify the service endpoint for your agent: • For a public service endpoint, choose Public service
|
sync-dg-016
|
sync-dg.pdf
| 16 |
is the endpoint's IP address. This command returns an activation key. For example: F0EFT-7FPPR-GG7MC-3I9R3-27DOH You specify this key when activating your agent. Getting an activation key 33 AWS DataSync Activating your agent User Guide You have several options for activating your DataSync agent. Once activated, AWS manages the agent for you. DataSync console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. 3. In the left navigation pane, choose Agents, and then choose Create agent. In the Service endpoint section, do the following to specify the service endpoint for your agent: • For a public service endpoint, choose Public service endpoints in your current AWS Region. • For a FIPS service endpoint, choose FIPS service endpoints in your current AWS Region. • For a VPC service endpoint, do the following: • Choose VPC endpoints using AWS PrivateLink. • For VPC endpoint, choose the VPC service endpoint that you want your agent to use. • For Subnet, choose the subnet where your VPC service endpoint is located. • For Security group, choose the security group that allows the traffic required for using DataSync VPC service endpoints. 4. In the Activation key section, do one of the following to specify your agent's activation key: • Choose Automatically get the activation key from your agent for DataSync to get the key for you. • For Agent address, enter your agent's IP address or domain name. • Choose Get key. If activation fails, check your network configuration based on the type of service endpoint you're using. • Choose Manually enter your agent's activation key if you don't want a connection between your browser and agent. • Get the key from the agent local console or by using a curl command. • Back in the DataSync console, enter the key in the Activation key field. Activating your agent 34 AWS DataSync User Guide 5. 6. (Recommended) For Agent name, give your agent a name that you can remember. (Optional) For Tags, enter values for the Key and Value fields to tag your agent. Tags help you manage, filter, and search for your AWS resources. 7. Choose Create agent. 8. On the Agents page, verify that your agent is using the correct service endpoint type. Note At this point, you might notice that your agent is offline. This happens briefly after activating an agent. AWS CLI 1. Once you get your activation key, copy one of the following create-agent commands depending on the type of service endpoint that you're using: • Public or FIPS service endpoint: aws datasync create-agent \ --activation-key activation-key \ --agent-name name-for-agent • VPC service endpoint: aws datasync create-agent \ --activation-key activation-key \ --agent-name name-for-agent \ --vpc-endpoint-id vpc-endpoint-id \ --subnet-arns subnet-arn \ --security-group-arns security-group-arn 2. 3. For --activation-key, specify your agent activation key. (Recommended) For --agent-name, specify a name for your agent that you can remember. 4. If you're using a VPC service endpoint, specify the following options: • For --vpc-endpoint-id, specify the ID of the VPC service endpoint that you're using. Activating your agent 35 AWS DataSync User Guide • For --subnet-arns, specify the ARN of the subnet where your VPC service endpoint is located. • For --security-group-arns, specify the ARN of the security group that allows the traffic required for using DataSync VPC service endpoints. 5. Run the create-agent command. You get a response with the ARN of the agent that you just activated. For example: { "AgentArn": "arn:aws:datasync:us-east-1:111222333444:agent/ agent-0b0addbeef44baca3" } 6. Verify that your agent is activated by running the list-agents command: aws datasync list-agents Note At this point, you might notice that your agent Status is OFFLINE. This happens briefly after activating an agent. DataSync API Once you get your activation key, activate your agent by using the CreateAgent operation. Note When you're done, you might notice that your agent is offline. This happens briefly after activating an agent. Next steps • Verify your agent's connection to your storage system and the DataSync service. • If you run into issues trying to activate your agent, get help with troubleshooting. Next steps 36 AWS DataSync User Guide • Create the DataSync location that you want to use with your agent. This might be an on- premises or other cloud location. Verifying your agent's network connections Once you activate your AWS DataSync agent, make sure that the agent has network connectivity to your storage system and the DataSync service. Accessing your agent's local console How you access your agent's local console depends on the type of agent you're using. Accessing the local console (VMware ESXi, Linux KVM, or Microsoft Hyper-V) For security reasons, you can't remotely connect to the local console of the DataSync agent virtual machine (VM). • If this is your first time using the local console, log in with the default credentials. The default user name is admin
|
sync-dg-017
|
sync-dg.pdf
| 17 |
Verifying your agent's network connections Once you activate your AWS DataSync agent, make sure that the agent has network connectivity to your storage system and the DataSync service. Accessing your agent's local console How you access your agent's local console depends on the type of agent you're using. Accessing the local console (VMware ESXi, Linux KVM, or Microsoft Hyper-V) For security reasons, you can't remotely connect to the local console of the DataSync agent virtual machine (VM). • If this is your first time using the local console, log in with the default credentials. The default user name is admin and the password is password. Note We recommend changing the default password. To do this, on the console main menu enter 5 (or 6 for VMware VMs), then run the passwd command to change the password. Accessing the local console (Amazon EC2) To connect to an Amazon EC2 agent's local console, you must use SSH. Before you begin: Make sure that your EC2 instance's security group allows access with SSH (TCP port 22). 1. Open a terminal and copy the following ssh command: ssh -i /path/key-pair-name.pem instance-user-name@instance-public-ip-address • For /path/key-pair-name, specify the path and file name (.pem) of the private key required to connect to your instance. Verifying your agent's network connections 37 AWS DataSync User Guide • For instance-user-name, specify admin. • For instance-public-ip-address, specify the public IP address of your instance. 2. Run the ssh command to connect to the instance. Once connected, the main menu of the agent's local console displays. Verifying your agent's connection to your storage system Test whether your DataSync agent can connect to your storage system. For more information, see 1. Network connection between your storage system and agent. 1. Access your agent's local console. 2. On the AWS DataSync Activation - Configuration main menu, enter 3. 3. Enter one of the following options: a. b. c. d. e. Enter 1 to test an NFS server connection. Enter 2 to test an SMB server connection. Enter 3 to test an object storage server connection. Enter 4 to test an HDFS connection. Enter 5 to test a Microsoft Azure Blob Storage connection. 4. Enter the storage server's IP address or domain name. Remember the following when entering the IP address or domain name: • Don't include a protocol. For example, enter mystorage.com instead of https:// mystorage.com. • For HDFS, enter the IP address or domain name of the NameNode or DataNode in the Hadoop cluster. 5. If requested, enter the TCP port for connecting to the storage server (for example, 443). See if the connectivity test PASSED or FAILED. Verifying your agent's connection to the DataSync service Test whether your DataSync agent can connect to the DataSync service. For more information, see 2. Network connection between your agent and DataSync service. Verifying your agent's connection to your storage system 38 AWS DataSync User Guide 1. Access your agent's local console. 2. On the AWS DataSync Activation - Configuration main menu, enter 2 to begin testing network connectivity. If your agent is activated, the Test Network Connectivity option can be initiated without any additional user input, because the Region and endpoint type are taken from the activated agent information. 3. Enter the type of DataSync service endpoint that your agent uses: a. b. c. For public service endpoints, enter 1 and the AWS Region where your agent is activated. For FIPS service endpoints, enter 2 and the Region where your agent is activated. For VPC service endpoints, enter 3. You see a PASSED or FAILED message. 4. If you see a FAILED message, check your network configuration. For more information, see AWS DataSync network requirements. Next steps Create the DataSync location that you want to use with your agent. This might be an on-premises or other cloud location. Next steps 39 AWS DataSync User Guide Connecting your network for AWS DataSync transfers If you need an AWS DataSync agent, you must establish several network connections for a data transfer or storage discovery. The following diagram shows the three network connections in a DataSync transfer from a storage system (which could be on premises, in another cloud, or at the edge) to an AWS storage service. 1. Network connection between your storage system and agent Your DataSync agent connects to your on-premises, other cloud, or edge storage system. For more information, see Network requirements for on-premises, self-managed, other cloud, and edge storage. 2. Network connection between your agent and DataSync service There are a few aspects to connecting your agent to the DataSync service. First, you must connect your storage network to AWS. Second, your agent needs a service endpoint to communicate with DataSync. Contents • Connecting your storage network to AWS • Choosing a service endpoint 1. Network connection between your storage system and agent
|
sync-dg-018
|
sync-dg.pdf
| 18 |
1. Network connection between your storage system and agent Your DataSync agent connects to your on-premises, other cloud, or edge storage system. For more information, see Network requirements for on-premises, self-managed, other cloud, and edge storage. 2. Network connection between your agent and DataSync service There are a few aspects to connecting your agent to the DataSync service. First, you must connect your storage network to AWS. Second, your agent needs a service endpoint to communicate with DataSync. Contents • Connecting your storage network to AWS • Choosing a service endpoint 1. Network connection between your storage system and agent 40 AWS DataSync User Guide Connecting your storage network to AWS When using DataSync, consider the following options for connecting your storage network to AWS: • AWS Direct Connect - With Direct Connect, you can create a dedicated connection between your storage network and AWS. From a DataSync perspective, this lets you: • Transfer data over a private path to your virtual private cloud (VPC), which avoids routing over the public internet. • Get a more predictable connection than using a virtual private network (VPN) to connect your storage network to AWS (particularly if your agent is an Amazon EC2 instance). • Use any type of DataSync service endpoint, including public, Federal Information Processing Standard (FIPS), or VPC endpoints. For more information, see DataSync architecture and routing examples with AWS Direct Connect. • VPN - You can connect your storage network to AWS by using a VPN (such as AWS Site-to-Site VPN). • Public internet - You can connect your storage network directly to DataSync over the internet by using a public or FIPS service endpoint. Choosing a service endpoint Your agent uses a service endpoint to communicate with DataSync. For more information, see Choosing a service endpoint for your AWS DataSync agent. 3. Network connection between DataSync service and AWS storage service To connect DataSync to an AWS storage service, you just have to make sure that the DataSync service can access your S3 bucket or file system. For more information, see Network requirements for AWS storage services. Networking when you don't need a DataSync agent For transfers that don't require a DataSync agent, you just have to make sure that the DataSync service can access the AWS storage services you’re transferring between. For more information, see Network requirements for AWS storage services. Connecting your storage network to AWS 41 AWS DataSync User Guide How and where DataSync traffic flows through the network DataSync has data plane and control plane traffic. Knowing how each of these flows through the network is important if you want to separate your DataSync traffic. • Data plane traffic – Includes the file or object data moving between your storage locations. In most cases, data plane traffic routes through network interfaces that DataSync automatically generates and manages when you create a task. Where these network interfaces get created depends on the type of AWS storage service you’re transferring to or from and the service endpoint that your DataSync agent uses. • Control plane traffic – Includes management activities for your DataSync resources. This traffic routes through the service endpoint that your agent uses. Network security for DataSync For information about how your storage data (including metadata) is secured during a transfer, see AWS DataSync encryption in transit. AWS DataSync network requirements Configuring your network is an important step in setting up AWS DataSync. Your network configuration depends on several factors, such as what kind of storage systems you're working with. It's also based on what kind of DataSync service endpoint that you plan to use. Network requirements for on-premises, self-managed, other cloud, and edge storage The following network requirements can apply to on-premises, self-managed, other cloud, and edge storage systems. These are typically storage systems that you manage or might be managed by another cloud provider. Note Depending on your network, you might need to allow traffic on ports other than what's listed here for your DataSync agent to connect with your storage. How and where DataSync traffic flows through the network 42 AWS DataSync User Guide From To Protocol Port DataSync agent NFS file server TCP 2049 (for NFS versions 4.1 and 4.0) 111 and 2049 (for NFS version 3.x) How it's used by DataSync Mounts the NFS file server. DataSync supports NFS versions 3.x, 4.0, and 4.1. DataSync agent SMB file server TCP 139 or 445 Mounts the SMB file server. DataSync supports SMB versions 1.0 and later. For security reasons, we recommend using SMB version 3.0.2 or later. Earlier versions, such as SMB 1.0, contain known security vulnerabilities that attackers can exploit to compromise your data. DataSync agent Object storage TCP 443 (HTTPS) or 80 (HTTP) Accesses your object storage. Network requirements for on-premises, self-managed, other cloud, and edge storage 43
|
sync-dg-019
|
sync-dg.pdf
| 19 |
111 and 2049 (for NFS version 3.x) How it's used by DataSync Mounts the NFS file server. DataSync supports NFS versions 3.x, 4.0, and 4.1. DataSync agent SMB file server TCP 139 or 445 Mounts the SMB file server. DataSync supports SMB versions 1.0 and later. For security reasons, we recommend using SMB version 3.0.2 or later. Earlier versions, such as SMB 1.0, contain known security vulnerabilities that attackers can exploit to compromise your data. DataSync agent Object storage TCP 443 (HTTPS) or 80 (HTTP) Accesses your object storage. Network requirements for on-premises, self-managed, other cloud, and edge storage 43 Note Depending on AWS DataSync User Guide From To Protocol Port How it's used by DataSync your object storage, you might need to allow traffic on nonstanda rd HTTPS and HTTP ports (such as 8443 or 8080). Network requirements for on-premises, self-managed, other cloud, and edge storage 44 AWS DataSync User Guide From To Protocol Port How it's used by DataSync DataSync agent Hadoop cluster TCP NameNode port Accesses the NameNodes in your Hadoop cluster. (default is Specify the port used 8020) when creating an HDFS location. In most clusters, you can find this port number in the core- site .xml file under the fs.defaul t or fs.defaul t.name property (dependin g on the Hadoop distribut ion). Network requirements for on-premises, self-managed, other cloud, and edge storage 45 AWS DataSync User Guide From To Protocol Port How it's used by DataSync DataSync agent Hadoop cluster TCP DataNode port Accesses the DataNodes in your Hadoop cluster. (default is The DataSync agent 50010) automatically determines the port to use. In most clusters, you can find this port number in the hdfs- site .xml file under the dfs.datan ode.addre ss property. DataSync agent Hadoop Key Managemen t Server (KMS) TCP KMS port (default is Accesses the KMS for your Hadoop cluster. DataSync agent Kerberos Key Distribution Center (KDC) server TCP 9600) KDC port (default is 88) Authenticates with the Kerberos realm. This port is used only with HDFS and SMB locations that use Kerberos authentic ation. Network requirements for on-premises, self-managed, other cloud, and edge storage 46 AWS DataSync User Guide From To Protocol Port DataSync agent Storage system's management interface TCP Depends on your network How it's used by DataSync Connects to your storage system. DataSync Discovery uses this connection to collect information about your system. Network requirements for AWS storage services The network ports required for DataSync to connect to an AWS storage service during a transfer vary. From To DataSync service Amazon EFS DataSync service FSx for Windows File Server DataSync service FSx for Lustre DataSync service FSx for OpenZFS Protocol TCP Port 2049 See file system access control for FSx for Windows File Server. See file system access control for FSx for Lustre. See file system access control for FSx for OpenZFS. DataSync service FSx for ONTAP TCP 111, 635, and 2049 (NFS) 445 (SMB) DataSync service Amazon S3 N/A (DataSync connects to S3 buckets on your behalf) Network requirements for AWS storage services 47 AWS DataSync User Guide Network requirements for public or FIPS service endpoints Your DataSync agent requires the following network access when using public or FIPS service endpoints. If you use a firewall or router to filter or limit network traffic, configure your firewall or router to allow these endpoints. From To Protocol Port How it's used Endpoints accessed Your web browser DataSync agent TCP 80 (HTTP) N/A Allows your browser to obtain the DataSync agent's activation key. Once activated , DataSync closes the agent's port 80. Your agent doesn't require port 80 to be publicly accessible. The required level of access to port 80 depends on your network configuration. Note You can get the activatio n key Network requirements for public or FIPS service endpoints 48 AWS DataSync User Guide From To Protocol Port How it's used Endpoints accessed without a connectio n between your browser and agent. For more informati on, see Getting an activatio n key. DataSync agent Amazon CloudFron TCP 443 (HTTPS) Helps bootstrap AWS Regions: t your DataSync agent prior to activation. • d3dvvaliwoko8h.clo udfront.net AWS GovCloud (US) Regions: • s3.us-gov-west-1.a mazonaws.com/fmrse ndpoints-endpoints bucket-go4p5gpna6sk Network requirements for public or FIPS service endpoints 49 AWS DataSync User Guide From To Protocol Port How it's used Endpoints accessed DataSync agent AWS TCP 443 (HTTPS) Activates your DataSync agent The activation-region is the AWS Region where you activate and associate your DataSync agent. s it with your AWS account. You can block the public endpoint after activation. Public endpoint activation: • activation.datasyn c. activation- region.amazonaws.com FIPS endpoint activation: • activation.datasyn c-fips. activation- region.amazonaws.com Network requirements for public or FIPS service endpoints 50 AWS DataSync User Guide From To Protocol Port How it's used Endpoints accessed DataSync agent AWS TCP 443 (HTTPS) Allows communication
|
sync-dg-020
|
sync-dg.pdf
| 20 |
service endpoints 49 AWS DataSync User Guide From To Protocol Port How it's used Endpoints accessed DataSync agent AWS TCP 443 (HTTPS) Activates your DataSync agent The activation-region is the AWS Region where you activate and associate your DataSync agent. s it with your AWS account. You can block the public endpoint after activation. Public endpoint activation: • activation.datasyn c. activation- region.amazonaws.com FIPS endpoint activation: • activation.datasyn c-fips. activation- region.amazonaws.com Network requirements for public or FIPS service endpoints 50 AWS DataSync User Guide From To Protocol Port How it's used Endpoints accessed DataSync agent AWS TCP 443 (HTTPS) Allows communication The activation-region is the AWS Region where you activate between the your DataSync agent. Depending DataSync agent on what you're using DataSync and DataSync for, you might not need to allow service endpoint. For informati on, see Choosing a service endpoint for your AWS DataSync agent. access to every endpoint listed here. DataSync control plane endpoints: • Public endpoint: cp.datasy nc. activation- region.amazonaws.com • FIPS endpoint: cp.datasy nc-fips. activation- region.amazonaws.com DataSync data plane endpoint (for transfer tasks only): • your-task-id .datasync -dp. activation- region.amazonaws.com DataSync Discovery endpoint (for discovery jobs only): • discovery-datasync . activation- region.amazonaws.com Network requirements for public or FIPS service endpoints 51 AWS DataSync User Guide From To Protocol Port How it's used Endpoints accessed Your client AWS TCP 443 (HTTPS) Allows you to make DataSync The activation-region is the AWS Region where you activate API requests. your DataSync agent. Public endpoint: • datasync. activation- region.amazonaws.com FIPS endpoint: • datasync-fips. activatio n-region .amazonaws.com DataSync agent AWS TCP 443 (HTTPS) Allows the DataSync The activation-region is the AWS Region where you activate agent to get your DataSync agent. updates from AWS. For more information, see Managing your AWS DataSync agent. • amazonlinux.defaul t.amazonaws.com • cdn.amazonlinux.com • amazonlinux-2- repos- activatio n-region .s3.duals tack. activation- region.amazonaws.com • amazonlinux-2-repo s- activation- region.s3.activation- region.amazonaws.com • *.s3.activation- region.amazonaws.com Network requirements for public or FIPS service endpoints 52 AWS DataSync User Guide From To Protocol Port How it's used Endpoints accessed DataSync agent Domain Name TCP/ UDP 53 (DNS) Service (DNS) server N/A Allows communication between the DataSync agent and DNS server. DataSync agent AWS TCP 22 (Support Allows AWS Support to channel) access your AWS Support channel: • 54.201.223.107 DataSync agent to help you troubleshoot issues. You don't need this port open for normal operation. Network requirements for public or FIPS service endpoints 53 AWS DataSync User Guide From To Protocol Port How it's used Endpoints accessed DataSync agent Network Time UDP 123 (NTP) Allows local systems to NTP: Protocol (NTP) server synchronize the VM time to the host time. • 0.amazon.pool.ntp.org • 1.amazon.pool.ntp.org • 2.amazon.pool.ntp.org • 3.amazon.pool.ntp.org Note To change the default NTP configuration of your VM agent to use a different NTP server using the local console, see Synchronizing the time on your VMware agent. The following diagram shows the ports required by DataSync when using public or FIPS service endpoints. Network requirements for public or FIPS service endpoints 54 AWS DataSync User Guide Network requirements for VPC service endpoints A virtual private cloud (VPC) endpoint provides a private connection between your agent and AWS that doesn't cross the internet or use public IP addresses. This also helps prevent packets from entering or exiting the network. For more information, see Choosing a VPC service endpoint. DataSync requires the following ports for your agent to use a VPC service endpoint. From To Protocol Port How it's used Your web browser Your DataSync agent TCP 80 (HTTP) Allows your browser to obtain the agent activation key. Once activated, DataSync closes the agent's port 80. Network requirements for VPC service endpoints 55 AWS DataSync User Guide From To Protocol Port How it's used Your agent doesn't require port 80 to be publicly accessible. The required level of access to port 80 depends on your network configura tion. Note You can get the activatio n key without a connection between your browser and agent. For more information, see Getting an activation key. DataSync agent Your DataSync VPC service endpoint TCP 1024-1064 For control plane traffic. To find the endpoint' s IP address, open the Amazon VPC console, choose Endpoints, and select your DataSync VPC service endpoint. On the Subnets tab, locate the IP address for your VPC service endpoint' s subnet. This is the endpoint's IP address. Network requirements for VPC service endpoints 56 AWS DataSync User Guide From To Protocol Port How it's used DataSync agent Your DataSync task's network interfaces TCP 443 (HTTPS) For data plane traffic. To find the IP addresses of these interfaces, see Viewing your network interfaces. DataSync agent Your DataSync VPC service endpoint TCP 22 (Support channel) To allow AWS Support to access your DataSync agent for troublesh ooting. You don't need this port open for normal operation. The
|
sync-dg-021
|
sync-dg.pdf
| 21 |
endpoint. On the Subnets tab, locate the IP address for your VPC service endpoint' s subnet. This is the endpoint's IP address. Network requirements for VPC service endpoints 56 AWS DataSync User Guide From To Protocol Port How it's used DataSync agent Your DataSync task's network interfaces TCP 443 (HTTPS) For data plane traffic. To find the IP addresses of these interfaces, see Viewing your network interfaces. DataSync agent Your DataSync VPC service endpoint TCP 22 (Support channel) To allow AWS Support to access your DataSync agent for troublesh ooting. You don't need this port open for normal operation. The following diagram shows the ports required by DataSync when using VPC service endpoints. Network requirements for VPC service endpoints 57 AWS DataSync User Guide Network interfaces for AWS DataSync transfers For every task you create, AWS DataSync automatically generates and manages network interfaces for data transfer traffic. How many network interfaces DataSync creates and where they’re created depends on the following details about your transfer task: • Whether your task requires a DataSync agent. • Your source and destination locations (where you’re copying data from and to). • The type of service endpoint that your agent uses. Each network interface uses a single IP address in your subnet (the more network interfaces there are, the more IP addresses you need). Use the following tables to make sure your subnet has enough IP addresses for your task. Network interfaces for transfers with agents In general, you need a DataSync agent when copying data between an AWS storage service and storage system that isn't AWS. Location Network interfaces created by default Where network interfaces are Where network interfaces are Amazon S3 Amazon EFS Amazon FSx for Windows File Server 4 4 4 created when using created when using a public or FIPS a private (VPC) endpoint endpoint N/A1 The subnet you specify when activating your DataSync agent. The subnet you specify when creating the Amazon EFS location. The same subnet as the file system's preferred file server. Network interfaces for data transfers 58 AWS DataSync Location Amazon FSx for Lustre Amazon FSx for OpenZFS Amazon FSx for NetApp ONTAP Network interfaces created by default Where network interfaces are Where network interfaces are User Guide created when using created when using a public or FIPS a private (VPC) endpoint endpoint The same subnet as the file system. The same subnet as the file system. The same subnet as the file system. 4 4 4 1 Network interfaces aren't needed because the DataSync service communicates directly with the S3 bucket. Network interfaces for transfers without agents You don’t need a DataSync agent when copying data between AWS services. The total number of network interfaces depends on the DataSync locations in your transfer. For example, transferring between Amazon EFS and FSx for Lustre file systems requires four network interfaces. Meanwhile, transferring between FSx for Windows File Server and an S3 bucket requires two network interfaces. Location Amazon S3 Amazon EFS Network interfaces created by default Where network interfaces are created N/A1 2 N/A1 The subnet you specify when creating the Amazon EFS location. Network interfaces for transfers without agents 59 AWS DataSync Location FSx for Windows File Server FSx for Lustre FSx for OpenZFS FSx for ONTAP Network interfaces created by default Where network interfaces are created User Guide 2 2 2 2 The same subnet as the preferred file server for the file system. The same subnet as the file system. The same subnet as the file system. The same subnet as the file system. 1 Network interfaces aren't needed because the DataSync service communicates directly with the S3 bucket. Viewing your network interfaces To see the network interfaces allocated to your DataSync transfer task, do one of the following: • Use the DescribeTask operation. The operation returns SourceNetworkInterfaceArns and DestinationNetworkInterfaceArns with responses that look like this: arn:aws:ec2:your-region:your-account-id:network-interface/eni-f012345678abcdef0 In this example, the network interface ID is eni-f012345678abcdef0. • In the Amazon EC2 console, search for your task ID (such as task-f012345678abcdef0) to find its network interfaces. DataSync architecture and routing examples with AWS Direct Connect Consider the following network architectures when using AWS Direct Connect with your AWS DataSync transfers. Viewing your network interfaces 60 AWS DataSync Tip User Guide If your network uses a transit gateway, we recommend separating your DataSync transfer's logical path to optimize costs (particularly if you're migrating a large amount of data). For example, if you use AWS Transit Gateway for normal traffic between your on-premises networks and virtual private clouds (VPCs), you can configure your network so that DataSync traffic bypasses the transit gateway and its data processing charges. Using Direct Connect with a DataSync VPC service endpoint If your DataSync agent uses a VPC service endpoint, you need a Direct Connect gateway to connect to your VPC.
|
sync-dg-022
|
sync-dg.pdf
| 22 |
60 AWS DataSync Tip User Guide If your network uses a transit gateway, we recommend separating your DataSync transfer's logical path to optimize costs (particularly if you're migrating a large amount of data). For example, if you use AWS Transit Gateway for normal traffic between your on-premises networks and virtual private clouds (VPCs), you can configure your network so that DataSync traffic bypasses the transit gateway and its data processing charges. Using Direct Connect with a DataSync VPC service endpoint If your DataSync agent uses a VPC service endpoint, you need a Direct Connect gateway to connect to your VPC. Contents • Direct Connect architecture with VPC endpoint and S3 destination • Direct Connect architecture with VPC endpoint and file system destination in same subnet • Direct Connect architecture with VPC endpoint and file system destination in different subnets Direct Connect architecture with VPC endpoint and S3 destination The following Direct Connect architecture shows a DataSync transfer from an on-premises storage system to an S3 bucket. Using Direct Connect with a DataSync VPC service endpoint 61 AWS DataSync User Guide 1. The DataSync agent routes DataSync traffic from the on-premises storage system (source location) to the Direct Connect connection. 2. DataSync traffic routes to a Direct Connect gateway that’s used for your transfer. To set this up, you must: a. Associate the Direct Connect gateway with a virtual private gateway for the VPC. This is the VPC where the DataSync VPC endpoint is located and where the DataSync task creates network interfaces. b. Create a private virtual interface that connects this VPC to the Direct Connect gateway. 3. DataSync traffic (control plane) routes through the DataSync VPC endpoint. 4. DataSync traffic (data plane) routes through the DataSync network interfaces in the subnet that you specify when creating the DataSync agent. 5. DataSync traffic routes through the DataSync service to the S3 bucket (destination location). Direct Connect architecture with VPC endpoint and file system destination in same subnet When transferring to or from an Amazon EFS or Amazon FSx file system, your file system and DataSync VPC endpoint can be in the same subnet. The following Direct Connect architecture shows a DataSync transfer from an on-premises storage system to an Amazon EFS or Amazon FSx file system. Using Direct Connect with a DataSync VPC service endpoint 62 AWS DataSync User Guide 1. The DataSync agent routes DataSync traffic from the on-premises storage system (source location) to the Direct Connect connection. 2. DataSync traffic routes to a Direct Connect gateway that's used for your transfer. To set this up, you must: a. Associate the Direct Connect gateway with a virtual private gateway for the VPC. This is the VPC where the DataSync VPC endpoint is located and where the DataSync task creates network interfaces for the file system (destination location). b. Create a private virtual interface that connects this VPC to the Direct Connect gateway. 3. DataSync traffic (control plane) routes through the DataSync VPC endpoint. 4. DataSync traffic (data plane) routes through the DataSync network interfaces in the file system's subnet. This is the same subnet where the DataSync VPC endpoint is located. 5. DataSync traffic routes through the DataSync service to the file system (destination location). Direct Connect architecture with VPC endpoint and file system destination in different subnets When transferring to or from an Amazon EFS or Amazon FSx file system, your file system and DataSync VPC endpoint can be in different subnets. The following Direct Connect architecture shows a DataSync transfer from an on-premises storage system to an Amazon EFS or Amazon FSx file system. Using Direct Connect with a DataSync VPC service endpoint 63 AWS DataSync User Guide 1. The DataSync agent routes DataSync traffic from the on-premises storage system (source location) to the Direct Connect connection. 2. DataSync traffic routes to a Direct Connect gateway that's used for your transfer. To set this up, you must: a. Associate the Direct Connect gateway with a virtual private gateway for the VPC. This is the VPC where the DataSync VPC endpoint is located and where the DataSync task creates network interfaces for the file system (destination location). b. Create a private virtual interface that connects these VPCs to the Direct Connect gateway. 3. DataSync traffic (control plane) routes through the DataSync VPC endpoint. 4. DataSync traffic (data plane) routes through the DataSync network interfaces in the file system's subnet. This is a different subnet than where the DataSync VPC endpoint is located. 5. DataSync traffic routes through the DataSync service to the file system (destination location). Using Direct Connect with a DataSync public or FIPS service endpoint If your DataSync agent uses a public or Federal Information Processing Standard (FIPS) service endpoint, you can route your data transfer traffic through a Direct Connect connection by using a public
|
sync-dg-023
|
sync-dg.pdf
| 23 |
the Direct Connect gateway. 3. DataSync traffic (control plane) routes through the DataSync VPC endpoint. 4. DataSync traffic (data plane) routes through the DataSync network interfaces in the file system's subnet. This is a different subnet than where the DataSync VPC endpoint is located. 5. DataSync traffic routes through the DataSync service to the file system (destination location). Using Direct Connect with a DataSync public or FIPS service endpoint If your DataSync agent uses a public or Federal Information Processing Standard (FIPS) service endpoint, you can route your data transfer traffic through a Direct Connect connection by using a public virtual interface. While Direct Connect advertises all local and remote AWS Region prefixes by default, you can use BGP community tags to control the scope (Regional or global) and route preference of traffic on the public virtual interface. You must advertise at least one public prefix to create your DataSync agent. The following Direct Connect architecture shows a DataSync transfer from an on-premises storage system through a public or FIPS endpoint to an S3 bucket. Using Direct Connect with a DataSync public or FIPS service endpoint 64 AWS DataSync User Guide 1. The DataSync agent routes DataSync traffic from the on-premises storage system (source location) to the Direct Connect connection. 2. DataSync traffic routes to the DataSync service through a public virtual interface. 3. DataSync traffic to the S3 bucket (destination location). Next steps If you need a DataSync agent and haven't created one yet, deploy the agent, choose a service endpoint for the agent, and then activate the agent. Once you create the agent, you can configure your network for DataSync. Configuring your AWS DataSync agent for multiple NICs If you configure your AWS DataSync agent to use multiple network adapters (NICs), the agent can be accessed by more than one IP address. You might want to do this in the following situations: • Maximizing throughput – You might want to maximize throughput to an agent when network adapters are a bottleneck. • Network isolation – Your Network File System (NFS), Server Message Block (SMB), Hadoop Distributed File System (HDFS), or object storage server might reside on a virtual LAN (VLAN) that lacks internet connectivity for security reasons. In a typical multiple-adapter use case, one adapter is configured as the route by which the agent communicates with AWS (as the default agent). Except for this one adapter, NFS, SMB, HDFS, or self-managed object storage locations must be in the same subnet as the adapter that connects to them. Otherwise, communication with the intended NFS, SMB, HDFS, or object storage locations might not be possible. In some cases, you might configure an NFS, SMB, HDFS, or object storage location on the same adapter that's used for communication with AWS. In these cases, NFS, SMB, HDFS, or object storage traffic for that server and AWS traffic flows through the same adapter. In some cases, you might configure one adapter to connect to the AWS DataSync console and then add a second adapter. In such a case, DataSync automatically configures the route table to use the second adapter as the preferred route. Next steps 65 AWS DataSync User Guide Transferring your data with AWS DataSync With AWS DataSync, you can transfer data to or from storage that's on-premises, in AWS, in other clouds, and on the edge. Setting up a DataSync transfer generally involves the following steps: 1. Determine if DataSync supports your transfer. 2. If you need a DataSync agent for your transfer, deploy and activate an agent as close as possible to one of your storage systems. For example, if you're transferring from an on-premises Network File System (NFS) file server, deploy the agent as close as you can to that file server. 3. Provide DataSync access to your storage system. DataSync needs permission to read from or write to your storage (depending on whether your storage is a source or destination location). For example, learn how to provide DataSync access to NFS file servers. 4. Connect your network for traffic between your storage system and DataSync. 5. Create a location for your storage system by using the DataSync console, AWS CLI, or DataSync API. For example, learn how to create an NFS location or Amazon S3 location. 6. Repeat steps 3-5 to create your transfer's other location. 7. Create and start a DataSync transfer task that includes your source and destination locations. Topics • Where can I transfer my data with AWS DataSync? • Transferring to or from on-premises storage with AWS DataSync • Transferring to or from AWS storage with AWS DataSync • Transferring to or from other cloud storage with AWS DataSync • Transferring to or from S3 compatible storage on Snowball Edge • Creating a task for transferring your data • Starting a task to transfer
|
sync-dg-024
|
sync-dg.pdf
| 24 |
create an NFS location or Amazon S3 location. 6. Repeat steps 3-5 to create your transfer's other location. 7. Create and start a DataSync transfer task that includes your source and destination locations. Topics • Where can I transfer my data with AWS DataSync? • Transferring to or from on-premises storage with AWS DataSync • Transferring to or from AWS storage with AWS DataSync • Transferring to or from other cloud storage with AWS DataSync • Transferring to or from S3 compatible storage on Snowball Edge • Creating a task for transferring your data • Starting a task to transfer your data 66 AWS DataSync User Guide Where can I transfer my data with AWS DataSync? Where you can transfer your data with AWS DataSync depends on the following factors: • Your transfer's source and destination locations • If your locations are in different AWS accounts • If your locations are in different AWS Regions Supported transfers in the same AWS account DataSync supports transfers between the following storage resources that are associated with the same AWS account. Source (from) Destination (to) • NFS • SMB • HDFS • Object storage • Amazon S3 (in AWS Regions) • Amazon EFS • FSx for Windows File Server • FSx for Lustre • FSx for OpenZFS • FSx for ONTAP • Amazon S3 (in AWS Regions) • Amazon EFS • Amazon FSx for Windows File Server • FSx for Lustre • FSx for OpenZFS • FSx for ONTAP • NFS • SMB • HDFS • Object storage • Google Cloud Storage • Amazon S3 (in AWS Regions) • Microsoft Azure Blob Storage • Amazon EFS • Microsoft Azure Files • Wasabi Cloud Storage • DigitalOcean Spaces • Amazon FSx for Windows File Server • FSx for Lustre • FSx for OpenZFS Where can I transfer my data? 67 AWS DataSync Source (from) Destination (to) • Oracle Cloud Infrastructure Object • FSx for ONTAP User Guide Storage • Cloudflare R2 Storage • Backblaze B2 Cloud Storage • NAVER Cloud Object Storage • Alibaba Cloud Object Storage Service • IBM Cloud Object Storage • Seagate Lyve Cloud • Amazon S3 (in AWS Regions) • Google Cloud Storage • Amazon EFS • Microsoft Azure Blob Storage • Amazon FSx for Windows File Server • Microsoft Azure Files • FSx for Lustre • FSx for OpenZFS • FSx for ONTAP • Wasabi Cloud Storage • DigitalOcean Spaces • Oracle Cloud Infrastructure Object Storage • Cloudflare R2 Storage • Backblaze B2 Cloud Storage • NAVER Cloud Object Storage • Alibaba Cloud Object Storage Service • IBM Cloud Object Storage • Seagate Lyve Cloud • Amazon S3 compatible storage on AWS • Amazon S3 (in AWS Regions) Snowball Edge • Amazon EFS • Amazon FSx for Windows File Server • FSx for Lustre • FSx for OpenZFS • FSx for ONTAP Supported transfers in the same AWS account 68 AWS DataSync Source (from) Destination (to) User Guide • Amazon S3 (in AWS Regions) • Amazon S3 compatible storage on Snowball Edge • Amazon EFS • FSx for Windows File Server • FSx for Lustre • FSx for OpenZFS • FSx for ONTAP • Amazon S3 (in AWS Regions) • Amazon S3 (in AWS Regions) • Amazon EFS • Amazon EFS • FSx for Windows File Server • FSx for Windows File Server • FSx for Lustre • FSx for OpenZFS • FSx for ONTAP • FSx for Lustre • FSx for OpenZFS • FSx for ONTAP • Amazon S3 (in AWS Regions) • Amazon S3 on AWS Outposts • Amazon S3 on AWS Outposts • Amazon S3 (in AWS Regions) Supported transfers across AWS accounts DataSync supports some transfers between storage resources that are associated with different AWS accounts. Source (from) Destination (to) • Amazon EFS1 • Amazon EFS • FSx for Windows File • FSx for Windows File Server Server2 • FSx for Lustre • FSx for OpenZFS • FSx for ONTAP Supported transfers across AWS accounts 69 AWS DataSync User Guide Source (from) Destination (to) • Amazon S3 (in AWS • Amazon S3 (in AWS Regions) Regions) • Amazon EFS • FSx for Windows File Server • FSx for Lustre • FSx for OpenZFS • FSx for ONTAP • Amazon S3 (in AWS • Amazon S3 (in AWS Regions) Regions) • Amazon EFS • FSx for Windows File Server • FSx for Lustre • FSx for OpenZFS • FSx for ONTAP • NFS • SMB • HDFS • Object storage 1 Configured as an NFS location. 2 Configured as an SMB location. • Amazon S3 (in AWS Regions) Supported transfers in the same AWS Region There are no restrictions when transferring data within the same AWS Region (including opt-in Regions). For more information, see AWS Regions supported by DataSync. Supported transfers between AWS Regions Note the
|
sync-dg-025
|
sync-dg.pdf
| 25 |
for ONTAP • Amazon S3 (in AWS • Amazon S3 (in AWS Regions) Regions) • Amazon EFS • FSx for Windows File Server • FSx for Lustre • FSx for OpenZFS • FSx for ONTAP • NFS • SMB • HDFS • Object storage 1 Configured as an NFS location. 2 Configured as an SMB location. • Amazon S3 (in AWS Regions) Supported transfers in the same AWS Region There are no restrictions when transferring data within the same AWS Region (including opt-in Regions). For more information, see AWS Regions supported by DataSync. Supported transfers between AWS Regions Note the following when transferring data between AWS Regions supported by DataSync: Supported transfers in the same AWS Region 70 AWS DataSync User Guide • When transferring between AWS storage services in different AWS Regions, one of the two locations must be in the Region where you're using DataSync. • You can't transfer across Regions with an NFS, SMB, HDFS, or object storage location. In these situations, both of your transfer locations must be in the same Region where you activate your DataSync agent. • With AWS GovCloud (US) Regions, you can: • Transfer between the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions. • Transfer between an AWS GovCloud (US) Region and commercial AWS Region, such as US East (N. Virginia). This type of transfer requires an agent. Important You pay for data transferred between AWS Regions. This transfer is billed as data transfer out from the source to destination Region. For more information, see AWS DataSync Pricing. Determining if your transfer requires a DataSync agent Depending on your transfer scenario, you might need a DataSync agent. For more information, see Do I need an AWS DataSync agent? Transferring to or from on-premises storage with AWS DataSync With AWS DataSync, you can transfer files and objects between a number of on-premises or self- managed storage systems and the following AWS storage services: • Amazon S3 • Amazon EFS • Amazon FSx for Windows File Server • Amazon FSx for Lustre • Amazon FSx for OpenZFS • Amazon FSx for NetApp ONTAP Determining if your transfer requires a DataSync agent 71 AWS DataSync Topics • Configuring AWS DataSync transfers with an NFS file server • Configuring AWS DataSync transfers with an SMB file server • Configuring AWS DataSync transfers with an HDFS cluster • Configuring DataSync transfers with an object storage system User Guide Configuring AWS DataSync transfers with an NFS file server With AWS DataSync, you can transfer data between your Network File System (NFS) file server and one of the following AWS storage services: • Amazon S3 • Amazon EFS • Amazon FSx for Windows File Server • Amazon FSx for Lustre • Amazon FSx for OpenZFS • Amazon FSx for NetApp ONTAP To set up this kind of transfer, you create a location for your NFS file server. You can use this location as a transfer source or destination. Providing DataSync access to NFS file servers For DataSync to access your NFS file server, you need a DataSync agent. The agent mounts an export on your file server by using the NFS protocol. Topics • Configuring your NFS export • Supported NFS versions Configuring your NFS export The export that DataSync needs for your transfer depends on if your NFS file server is a source or destination location and how your file server's permissions are configured. Configuring transfers with an NFS file server 72 AWS DataSync User Guide If your file server is a source location, DataSync just has to read and traverse your files and folders. If it's a destination location, DataSync needs root access to write to the location and set ownership, permissions, and other metadata on the files and folders that you're copying. You can use the no_root_squash option to allow root access for your export. The following examples describe how to configure an NFS export that provides access to DataSync. When your NFS file server is a source location (root access) Configure your export by using the following command, which provides DataSync read-only permissions (ro) and root access ( no_root_squash): export-path datasync-agent-ip-address(ro,no_root_squash) When your NFS file server is a destination location Configure your export by using the following command, which provides DataSync write permissions (rw) and root access ( no_root_squash): export-path datasync-agent-ip-address(rw,no_root_squash) When your NFS file server is a source location (no root access) Configure your export by using the following command, which specifies the POSIX user ID (UID) and group ID (GID) that you know would provide DataSync read-only permissions on the export: export-path datasync-agent-ip-address(ro,all_squash,anonuid=uid,anongid=gid) Supported NFS versions By default, DataSync uses NFS version 4.1. DataSync also supports NFS 4.0 and 3.x. Configuring your network for NFS transfers For your DataSync transfer, you must configure traffic for a few network connections: 1. Allow traffic
|
sync-dg-026
|
sync-dg.pdf
| 26 |
using the following command, which provides DataSync write permissions (rw) and root access ( no_root_squash): export-path datasync-agent-ip-address(rw,no_root_squash) When your NFS file server is a source location (no root access) Configure your export by using the following command, which specifies the POSIX user ID (UID) and group ID (GID) that you know would provide DataSync read-only permissions on the export: export-path datasync-agent-ip-address(ro,all_squash,anonuid=uid,anongid=gid) Supported NFS versions By default, DataSync uses NFS version 4.1. DataSync also supports NFS 4.0 and 3.x. Configuring your network for NFS transfers For your DataSync transfer, you must configure traffic for a few network connections: 1. Allow traffic on the following ports from your DataSync agent to your NFS file server: • For NFS version 4.1 and 4.0 – TCP port 2049 • For NFS version 3.x – TCP ports 111 and 2049 Configuring transfers with an NFS file server 73 AWS DataSync User Guide Other NFS clients in your network should be able to mount the NFS export that you're using to transfer data. The export must also be accessible without Kerberos authentication. 2. Configure traffic for your service endpoint connection (such as a VPC, public, or FIPS endpoint). 3. Allow traffic from the DataSync service to the AWS storage service you're transferring to or from. Creating your NFS transfer location Before you begin, note the following: • You need an NFS file server that you want to transfer data from. • You need a DataSync agent that can access your file server. • DataSync doesn't support copying NFS version 4 access control lists (ACLs). Using the DataSync console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. 3. 4. 5. In the left navigation pane, expand Data transfer, then choose Locations and Create location. For Location type, choose Network File System (NFS). For Agents, choose the DataSync agent that can connect to your NFS file server. You can choose more than one agent. For more information, see Using multiple DataSync agents. For NFS server, enter the Domain Name System (DNS) name or IP address of the NFS file server that your DataSync agent connects to. 6. For Mount path, enter the NFS export path that you want DataSync to mount. This path (or a subdirectory of the path) is where DataSync transfers data to or from. For more information, see Configuring your NFS export. 7. (Optional) Expand Additional settings and choose a specific NFS version for DataSync to use when accessing your file server. For more information, see Supported NFS versions. 8. (Optional) Choose Add tag to tag your NFS location. Configuring transfers with an NFS file server 74 AWS DataSync User Guide Tags are key-value pairs that help you manage, filter, and search for your locations. We recommend creating at least a name tag for your location. 9. Choose Create location. Using the AWS CLI • Use the following command to create an NFS location. aws datasync create-location-nfs \ --server-hostname nfs-server-address \ --on-prem-config AgentArns=datasync-agent-arns \ --subdirectory nfs-export-path For more information on creating the location, see Providing DataSync access to NFS file servers. DataSync automatically chooses the NFS version that it uses to read from an NFS location. To specify an NFS version, use the optional Version parameter in the NfsMountOptions API operation. This command returns the Amazon Resource Name (ARN) of the NFS location, similar to the ARN shown following. { "LocationArn": "arn:aws:datasync:us-east-1:111222333444:location/ loc-0f01451b140b2af49" } To make sure that the directory can be mounted, you can connect to any computer that has the same network configuration as your agent and run the following command. mount -t nfs -o nfsvers=<nfs-server-version <nfs-server-address:<nfs-export-path <test- folder The following is an example of the command. Configuring transfers with an NFS file server 75 AWS DataSync User Guide mount -t nfs -o nfsvers=3 198.51.100.123:/path_for_sync_to_read_from / temp_folder_to_test_mount_on_local_machine Configuring AWS DataSync transfers with an SMB file server With AWS DataSync, you can transfer data between your Server Message Block (SMB) file server and one of the following AWS storage services: • Amazon S3 • Amazon EFS • Amazon FSx for Windows File Server • Amazon FSx for Lustre • Amazon FSx for OpenZFS • Amazon FSx for NetApp ONTAP To set up this kind of transfer, you create a location for your SMB file server. You can use this as a transfer source or destination. Providing DataSync access to SMB file servers DataSync connects to your file server using the SMB protocol and can authenticate with NTLM or Kerberos. Topics • Supported SMB versions • Using NTLM authentication • Using Kerberos authentication • Required permissions • DFS Namespaces Supported SMB versions By default, DataSync automatically chooses a version of the SMB protocol based on negotiation with your SMB file server. Configuring transfers with an SMB file server 76 AWS DataSync User Guide You also can configure DataSync to use a specific SMB version,
|
sync-dg-027
|
sync-dg.pdf
| 27 |
file server. You can use this as a transfer source or destination. Providing DataSync access to SMB file servers DataSync connects to your file server using the SMB protocol and can authenticate with NTLM or Kerberos. Topics • Supported SMB versions • Using NTLM authentication • Using Kerberos authentication • Required permissions • DFS Namespaces Supported SMB versions By default, DataSync automatically chooses a version of the SMB protocol based on negotiation with your SMB file server. Configuring transfers with an SMB file server 76 AWS DataSync User Guide You also can configure DataSync to use a specific SMB version, but we recommend doing this only if DataSync has trouble negotiating with the SMB file server automatically. DataSync supports SMB versions 1.0 and later. For security reasons, we recommend using SMB version 3.0.2 or later. Earlier versions, such as SMB 1.0, contain known security vulnerabilities that attackers can exploit to compromise your data. See the following table for a list of options in the DataSync console and API: Console option API option Description Automatic AUTOMATIC DataSync and the SMB file server negotiate the highest version of SMB that they mutually support between 2.1 and 3.1.1. This is the default and recommended option. If you instead choose a specific version that your file server doesn't support, you may get an Operation Not Supported error. SMB 3.0.2 SMB3 Restricts the protocol negotiation to only SMB version 3.0.2. SMB 2.1 SMB2 Restricts the protocol negotiation to only SMB version 2.1. SMB 2.0 SMB2_0 Restricts the protocol negotiation to only SMB version 2.0. SMB 1.0 SMB1 Restricts the protocol negotiation to only SMB version 1.0. Using NTLM authentication To use NTLM authentication, you provide a user name and password that allows DataSync to access the SMB file server that you're transferring to or from. The user can be a local user on your file server or a domain user in your Microsoft Active Directory. Configuring transfers with an SMB file server 77 AWS DataSync Using Kerberos authentication User Guide To use Kerberos authentication, you provide a Kerberos principal, Kerberos key table (keytab) file, and Kerberos configuration file that allows DataSync to access the SMB file server that you're transferring to or from. Topics • Prerequisites • DataSync configuration options for Kerberos Prerequisites You need to create a couple Kerberos artifacts and configure your network so that DataSync can access your SMB file server. • Create a Kerberos keytab file by using the ktpass or kutil utility. The following example creates a keytab file by using ktpass. The Kerberos realm that you specify (MYDOMAIN.ORG) must be upper case. ktpass /out C:\YOUR_KEYTAB.keytab /princ HOST/[email protected] /mapuser kerberosuser /pass * /crypto AES256-SHA1 /ptype KRB5_NT_PRINCIPAL • Prepare a simplified version of the Kerberos configuration file (krb5.conf). Include information about the realm, the location of the domain admin servers, and mappings of hostnames onto a Kerberos realm. Verify that the krb5.conf content is formatted with the correct mixed casing for the realms and domain realm names. For example: [libdefaults] dns_lookup_realm = true dns_lookup_kdc = true forwardable = true default_realm = MYDOMAIN.ORG [realms] MYDOMAIN.ORG = { kdc = mydomain.org admin_server = mydomain.org Configuring transfers with an SMB file server 78 AWS DataSync } [domain_realm] .mydomain.org = MYDOMAIN.ORG mydomain.org = MYDOMAIN.ORG User Guide • In your network configuration, make sure that your Kerberos Key Distribution Center (KDC) server port is open. The KDC port is typically TCP port 88. DataSync configuration options for Kerberos When creating an SMB location that uses Kerberos, you configure the following options. Console option API option Description SMB server ServerHostName Kerberos principal KerberosPrincipal Keytab file KerberosKeytab The domain name of the SMB file server that your DataSync agent will mount. For Kerberos, you can't specify the file server's IP address. An identity in your Kerberos realm that has permission to access the files, folders, and file metadata in your SMB file server. A Kerberos principal might look like HOST/kerb erosuser@MYDOMAIN. ORG . Principal names are case sensitive. A Kerberos key table (keytab) file, which includes mappings Configuring transfers with an SMB file server 79 AWS DataSync User Guide Console option API option Description Kerberos configuration file KerberosKrbConf DNS IP addresses (optional) DnsIpAddresses between your Kerberos principal and encryption keys. A krb5.conf file that defines your Kerberos realm configuration. The IPv4 addresses for the DNS servers that your SMB file server belongs to. If you have multiple domains in your environment, configuring this makes sure that DataSync connects to the right SMB file server. Required permissions The identity that you provide DataSync must have permission to mount and access your SMB file server's files, folders, and file metadata. If you provide an identity in your Active Directory, it must be a member of an Active Directory group with one or both of the following user rights (depending the
|
sync-dg-028
|
sync-dg.pdf
| 28 |
encryption keys. A krb5.conf file that defines your Kerberos realm configuration. The IPv4 addresses for the DNS servers that your SMB file server belongs to. If you have multiple domains in your environment, configuring this makes sure that DataSync connects to the right SMB file server. Required permissions The identity that you provide DataSync must have permission to mount and access your SMB file server's files, folders, and file metadata. If you provide an identity in your Active Directory, it must be a member of an Active Directory group with one or both of the following user rights (depending the metadata that you want DataSync to copy): User right Description Restore files and directories (SE_RESTOR E_NAME ) Allows DataSync to copy object ownership, permissions, file metadata, and NTFS discretio nary access lists (DACLs). This user right is usually granted to members of the Domain Admins and Backup Operators groups (both of which are default Active Directory groups). Configuring transfers with an SMB file server 80 AWS DataSync User right Description User Guide Manage auditing and security log (SE_SECURITY_NAME ) Allows DataSync to copy NTFS system access control lists (SACLs). This user right is usually granted to members of the Domain Admins group. If you want to copy Windows ACLs and are transferring between an SMB file server and another storage system that uses SMB (such as Amazon FSx for Windows File Server or FSx for ONTAP), the identity that you provide DataSync must belong to the same Active Directory domain or have an Active Directory trust relationship between their domains. DFS Namespaces DataSync doesn't support Microsoft Distributed File System (DFS) Namespaces. We recommend specifying an underlying file server or share instead when creating your DataSync location. Creating your SMB transfer location Before you begin, you need an SMB file server that you want to transfer data from. Using the DataSync console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. 3. In the left navigation pane, expand Data transfer, then choose Locations and Create location. For Location type, choose Server Message Block (SMB). You configure this location as a source or destination later. 4. For Agents, choose the DataSync agent that can connect to your SMB file server. You can choose more than one agent. For more information, see Using multiple DataSync agents. 5. For SMB server, enter the domain name or IP address of the SMB file server that your DataSync agent will mount. Remember the following with this setting: • You can't specify an IP version 6 (IPv6) address. Configuring transfers with an SMB file server 81 AWS DataSync User Guide • If you're using Kerberos authentication, you must specify a domain name. 6. For Share name, enter the name of the share exported by your SMB file server where DataSync will read or write data. You can include a subdirectory in the share path (for example, /path/to/subdirectory). Make sure that other SMB clients in your network can also mount this path. To copy all the data in the subdirectory, DataSync must be able to mount the SMB share and access all of its data. For more information, see Required permissions. 7. (Optional) Expand Additional settings and choose an SMB Version for DataSync to use when accessing your file server. By default, DataSync automatically chooses a version based on negotiation with the SMB file server. For information, see Supported SMB versions. 8. For Authentication type, choose NTLM or Kerberos. 9. Do one of the following depending on your authentication type: NTLM • For User, enter a user name that can mount your SMB file server and has permission to access the files and folders involved in your transfer. For more information, see Required permissions. • For Password, enter the password of the user who can mount your SMB file server and has permission to access the files and folders involved in your transfer. • (Optional) For Domain, enter the Windows domain name that your SMB file server belongs to. If you have multiple domains in your environment, configuring this setting makes sure that DataSync connects to the right SMB file server. Kerberos • For Kerberos principal, specify a principal in your Kerberos realm that has permission to access the files, folders, and file metadata in your SMB file server. A Kerberos principal might look like HOST/[email protected]. Configuring transfers with an SMB file server 82 AWS DataSync User Guide Principal names are case sensitive. Your DataSync task execution will fail if the principal that you specify for this setting doesn’t exactly match the principal that you use to create the keytab file. • For Keytab file, upload a keytab file that includes mappings between your Kerberos principal and encryption keys. • For Kerberos configuration file, upload a krb5.conf file that defines your Kerberos realm configuration. • (Optional) For DNS
|
sync-dg-029
|
sync-dg.pdf
| 29 |
the files, folders, and file metadata in your SMB file server. A Kerberos principal might look like HOST/[email protected]. Configuring transfers with an SMB file server 82 AWS DataSync User Guide Principal names are case sensitive. Your DataSync task execution will fail if the principal that you specify for this setting doesn’t exactly match the principal that you use to create the keytab file. • For Keytab file, upload a keytab file that includes mappings between your Kerberos principal and encryption keys. • For Kerberos configuration file, upload a krb5.conf file that defines your Kerberos realm configuration. • (Optional) For DNS IP addresses, specify up to two IPv4 addresses for the DNS servers that your SMB file server belongs to. If you have multiple domains in your environment, configuring this parameter makes sure that DataSync connects to the right SMB file server. 10. (Optional) Choose Add tag to tag your SMB location. Tags are key-value pairs that help you manage, filter, and search for your locations. We recommend creating at least a name tag for your location. 11. Choose Create location. Using the AWS CLI The following instructions describe how to create SMB locations with NTLM or Kerberos authentication. NTLM 1. Copy the following create-location-smb command. aws datasync create-location-smb \ --agent-arns datasync-agent-arns \ --server-hostname smb-server-address \ --subdirectory smb-export-path \ --authentication-type "NTLM" \ --user user-who-can-mount-share \ --password user-password \ --domain windows-domain-of-smb-server 2. For --agent-arns, specify the DataSync agent that can connect to your SMB file server. Configuring transfers with an SMB file server 83 AWS DataSync User Guide You can choose more than one agent. For more information, see Using multiple DataSync agents. 3. 4. 5. 6. 7. For --server-hostname, specify the domain name or IPv4 address of the SMB file server that your DataSync agent will mount. For --subdirectory, specify the name of the share exported by your SMB file server where DataSync will read or write data. You can include a subdirectory in the share path (for example, /path/to/subdirectory). Make sure that other SMB clients in your network can also mount this path. To copy all the data in the subdirectory, DataSync must be able to mount the SMB share and access all of its data. For more information, see Required permissions. For --user, specify a user name that can mount your SMB file server and has permission to access the files and folders involved in your transfer. For more information, see Required permissions. For --password, specify the password of the user who can mount your SMB file server and has permission to access the files and folders involved in your transfer. (Optional) For --domain, specify the Windows domain name that your SMB file server belongs to. If you have multiple domains in your environment, configuring this setting makes sure that DataSync connects to the right SMB file server. 8. (Optional) Add the --version option if you want DataSync to use a specific SMB version. For more information, see Supported SMB versions. 9. Run the create-location-smb command. If the command is successful, you get a response that shows you the ARN of the location that you created. For example: { "arn:aws:datasync:us-east-1:123456789012:location/loc-01234567890example" } Configuring transfers with an SMB file server 84 AWS DataSync Kerberos User Guide 1. Copy the following create-location-smb command. aws datasync create-location-smb \ --agent-arns datasync-agent-arns \ --server-hostname smb-server-address \ --subdirectory smb-export-path \ --authentication-type "KERBEROS" \ --kerberos-principal "HOST/[email protected]" \ --kerberos-keytab "fileb://path/to/file.keytab" \ --kerberos-krb5-conf "file://path/to/krb5.conf" \ --dns-ip-addresses array-of-ipv4-addresses 2. For --agent-arns, specify the DataSync agent that can connect to your SMB file server. You can choose more than one agent. For more information, see Using multiple DataSync agents. 3. 4. For --server-hostname, specify the domain name of the SMB file server that your DataSync agent will mount. For --subdirectory, specify the name of the share exported by your SMB file server where DataSync will read or write data. You can include a subdirectory in the share path (for example, /path/to/subdirectory). Make sure that other SMB clients in your network can also mount this path. To copy all the data in the subdirectory, DataSync must be able to mount the SMB share and access all of its data. For more information, see Required permissions. 5. For the Kerberos options, do the following: • --kerberos-principal: Specify a principal in your Kerberos realm that has permission to access the files, folders, and file metadata in your SMB file server. A Kerberos principal might look like HOST/[email protected]. Principal names are case sensitive. Your DataSync task execution will fail if the principal that you specify for this option doesn’t exactly match the principal that you use to create the keytab file. • --kerberos-keytab: Specify a keytab file that includes mappings between your Kerberos principal and encryption keys. Configuring transfers with an SMB file server 85 AWS DataSync User Guide • --kerberos-krb5-conf: Specify a krb5.conf file
|
sync-dg-030
|
sync-dg.pdf
| 30 |
following: • --kerberos-principal: Specify a principal in your Kerberos realm that has permission to access the files, folders, and file metadata in your SMB file server. A Kerberos principal might look like HOST/[email protected]. Principal names are case sensitive. Your DataSync task execution will fail if the principal that you specify for this option doesn’t exactly match the principal that you use to create the keytab file. • --kerberos-keytab: Specify a keytab file that includes mappings between your Kerberos principal and encryption keys. Configuring transfers with an SMB file server 85 AWS DataSync User Guide • --kerberos-krb5-conf: Specify a krb5.conf file that defines your Kerberos realm configuration. • (Optional) --dns-ip-addresses: Specify up to two IPv4 addresses for the DNS servers that your SMB file server belongs to. If you have multiple domains in your environment, configuring this parameter makes sure that DataSync connects to the right SMB file server. 6. (Optional) Add the --version option if you want DataSync to use a specific SMB version. For more information, see Supported SMB versions. 7. Run the create-location-smb command. If the command is successful, you get a response that shows you the ARN of the location that you created. For example: { "arn:aws:datasync:us-east-1:123456789012:location/loc-01234567890example" } Configuring AWS DataSync transfers with an HDFS cluster With AWS DataSync, you can transfer data between your Hadoop Distributed File System (HDFS) cluster and one of the following AWS storage services: • Amazon S3 • Amazon EFS • Amazon FSx for Windows File Server • Amazon FSx for Lustre • Amazon FSx for OpenZFS • Amazon FSx for NetApp ONTAP To set up this kind of transfer, you create a location for your HDFS cluster. You can use this location as a transfer source or destination. Configuring transfers with an HDFS cluster 86 AWS DataSync User Guide Providing DataSync access to HDFS clusters To connect to your HDFS cluster, DataSync uses an agent that you deploy as close as possible to your HDFS cluster. The DataSync agent acts as an HDFS client and communicates with the NameNodes and DataNodes in your cluster. When you start a transfer task, DataSync queries the NameNode for locations of files and folders on the cluster. If you configure your HDFS location as a source location, DataSync reads files and folder data from the DataNodes in your cluster and copies that data to the destination. If you configure your HDFS location as a destination location, then DataSync writes files and folders from the source to the DataNodes in your cluster. Authentication When connecting to an HDFS cluster, DataSync supports simple authentication or Kerberos authentication. To use simple authentication, provide the user name of a user with rights to read and write to the HDFS cluster. To use Kerberos authentication, provide a Kerberos configuration file, a Kerberos key table (keytab) file, and a Kerberos principal name. The credentials of the Kerberos principal must be in the provided keytab file. Encryption When using Kerberos authentication, DataSync supports encryption of data as it's transmitted between the DataSync agent and your HDFS cluster. Encrypt your data by using the Quality of Protection (QOP) configuration settings on your HDFS cluster and by specifying the QOP settings when creating your HDFS location. The QOP configuration includes settings for data transfer protection and Remote Procedure Call (RPC) protection. DataSync supports the following Kerberos encryption types: • des-cbc-crc • des-cbc-md4 • des-cbc-md5 • des3-cbc-sha1 • arcfour-hmac • arcfour-hmac-exp • aes128-cts-hmac-sha1-96 • aes256-cts-hmac-sha1-96 Configuring transfers with an HDFS cluster 87 AWS DataSync User Guide • aes128-cts-hmac-sha256-128 • aes256-cts-hmac-sha384-192 • camellia128-cts-cmac • camellia256-cts-cmac You can also configure HDFS clusters for encryption at rest using Transparent Data Encryption (TDE). When using simple authentication, DataSync reads and writes to TDE-enabled clusters. If you're using DataSync to copy data to a TDE-enabled cluster, first configure the encryption zones on the HDFS cluster. DataSync doesn't create encryption zones. Unsupported HDFS features The following HDFS capabilities aren't currently supported by DataSync: • Transparent Data Encryption (TDE) when using Kerberos authentication • Configuring multiple NameNodes • Hadoop HDFS over HTTP (HttpFS) • POSIX access control lists (ACLs) • HDFS extended attributes (xattrs) • HDFS clusters using Apache HBase Creating your HDFS transfer location You can use your location as a source or destination for your DataSync transfer. Before you begin: Verify network connectivity between your agent and Hadoop cluster by doing the following: • Test access to the TCP ports listed in Network requirements for on-premises, self-managed, other cloud, and edge storage. • Test access between your local agent and your Hadoop cluster. For instructions, see Verifying your agent's connection to your storage system. Using the DataSync console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. Configuring transfers with an HDFS cluster 88 AWS DataSync User Guide 2. 3. In the left navigation pane, expand Data transfer, then choose Locations
|
sync-dg-031
|
sync-dg.pdf
| 31 |
source or destination for your DataSync transfer. Before you begin: Verify network connectivity between your agent and Hadoop cluster by doing the following: • Test access to the TCP ports listed in Network requirements for on-premises, self-managed, other cloud, and edge storage. • Test access between your local agent and your Hadoop cluster. For instructions, see Verifying your agent's connection to your storage system. Using the DataSync console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. Configuring transfers with an HDFS cluster 88 AWS DataSync User Guide 2. 3. In the left navigation pane, expand Data transfer, then choose Locations and Create location. For Location type, choose Hadoop Distributed File System (HDFS). You can configure this location as a source or destination later. 4. For Agents, choose the agent that can connect to your HDFS cluster. You can choose more than one agent. For more information, see Using multiple DataSync agents. For NameNode, provide the domain name or IP address of your HDFS cluster's primary NameNode. For Folder, enter a folder on your HDFS cluster that you want DataSync to use for the data transfer. 5. 6. If your HDFS location is a source, DataSync copies the files in this folder to the destination. If your location is a destination, DataSync writes files to this folder. 7. To set the Block size or Replication factor, choose Additional settings. The default block size is 128 MiB. The block sizes that you provide must be a multiple of 512 bytes. The default replication factor is three DataNodes when transferring to the HDFS cluster. 8. In the Security section, choose the Authentication type used on your HDFS cluster. • Simple – For User, specify the user name with the following permissions on the HDFS cluster (depending on your use case): • If you plan to use this location as a source location, specify a user that only has read permissions. • If you plan to use this location as a destination location, specify a user that has read and write permissions. Optionally, specify the URI of the Key Management Server (KMS) of your HDFS cluster. • Kerberos – Specify the Kerberos Principal with access to your HDFS cluster. Next, provide the KeyTab file that contains the provided Kerberos principal. Then, provide the Kerberos configuration file. Finally, specify the type of encryption in transit protection in the RPC protection and Data transfer protection dropdown lists. 9. (Optional) Choose Add tag to tag your HDFS location. Configuring transfers with an HDFS cluster 89 AWS DataSync User Guide Tags are key-value pairs that help you manage, filter, and search for your locations. We recommend creating at least a name tag for your location. 10. Choose Create location. Using the AWS CLI 1. Copy the following create-location-hdfs command. aws datasync create-location-hdfs --name-nodes [{"Hostname":"host1", "Port": 8020}] \ --authentication-type "SIMPLE|KERBEROS" \ --agent-arns [arn:aws:datasync:us-east-1:123456789012:agent/ agent-01234567890example] \ --subdirectory "/path/to/my/data" 2. 3. For the --name-nodes parameter, specify the hostname or IP address of your HDFS cluster's primary NameNode and the TCP port that the NameNode is listening on. For the --authentication-type parameter, specify the type of authentication to use when connecting to the Hadoop cluster. You can specify SIMPLE or KERBEROS. If you use SIMPLE authentication, use the --simple-user parameter to specify the user name of the user. If you use KERBEROS authentication, use the --kerberos-principal, -- kerberos-keytab, and --kerberos-krb5-conf parameters. For more information, see create-location-hdfs. 4. For the --agent-arns parameter, specify the ARN of the DataSync agent that can connect to your HDFS cluster. You can choose more than one agent. For more information, see Using multiple DataSync agents. 5. (Optional) For the --subdirectory parameter, specify a folder on your HDFS cluster that you want DataSync to use for the data transfer. If your HDFS location is a source, DataSync copies the files in this folder to the destination. If your location is a destination, DataSync writes files to this folder. 6. Run the create-location-hdfs command. Configuring transfers with an HDFS cluster 90 AWS DataSync User Guide If the command is successful, you get a response that shows you the ARN of the location that you created. For example: { "arn:aws:datasync:us-east-1:123456789012:location/loc-01234567890example" } Configuring DataSync transfers with an object storage system With AWS DataSync, you can transfer data between your object storage system and one of the following AWS storage services: • Amazon S3 • Amazon EFS • Amazon FSx for Windows File Server • Amazon FSx for Lustre • Amazon FSx for OpenZFS • Amazon FSx for NetApp ONTAP To set up this kind of transfer, you create a location for your object storage system. You can use this location as a transfer source or destination. Prerequisites Your object storage system must be compatible with the following Amazon S3 API operations for DataSync to connect to it: • AbortMultipartUpload • CompleteMultipartUpload
|
sync-dg-032
|
sync-dg.pdf
| 32 |
AWS DataSync, you can transfer data between your object storage system and one of the following AWS storage services: • Amazon S3 • Amazon EFS • Amazon FSx for Windows File Server • Amazon FSx for Lustre • Amazon FSx for OpenZFS • Amazon FSx for NetApp ONTAP To set up this kind of transfer, you create a location for your object storage system. You can use this location as a transfer source or destination. Prerequisites Your object storage system must be compatible with the following Amazon S3 API operations for DataSync to connect to it: • AbortMultipartUpload • CompleteMultipartUpload • CopyObject • CreateMultipartUpload • DeleteObject • DeleteObjects • DeleteObjectTagging Configuring transfers with an object storage system 91 User Guide AWS DataSync • GetBucketLocation • GetObject • GetObjectTagging • HeadBucket • HeadObject • ListObjectsV2 • PutObject • PutObjectTagging • UploadPart Creating your object storage transfer location Before you begin, you need an object storage system that you plan to transfer data to or from. Using the DataSync console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. 3. In the left navigation pane, expand Data transfer, then choose Locations and Create location. For Location type, choose Object storage. You configure this location as a source or destination later. 4. For Agents, choose the DataSync agent that can connect to your object storage system. 5. 6. 7. 8. You can choose more than one agent. For more information, see Using multiple DataSync agents. For Server, provide the domain name or IP address of the object storage server. For Bucket name, enter the name of the object storage bucket involved in the transfer. For Folder, enter an object prefix. DataSync only copies objects with this prefix. To configure the connection to the object storage server, expand Additional settings and do the following: a. For Server protocol, choose HTTP or HTTPS. Configuring transfers with an object storage system 92 AWS DataSync User Guide b. For Server port, use a default port (80 for HTTP or 443 for HTTPS) or specify a custom port if needed. c. For Certificate, if your object storage system uses a private or self-signed certificate authority (CA), select Choose file and specify a single .pem file with a full certificate chain. The certificate chain might include: • The object storage system's certificate • All intermediate certificates (if there are any) • The root certificate of the signing CA You can concatenate your certificates into a .pem file (which can be up to 32768 bytes before base64 encoding). The following example cat command creates an object_storage_certificates.pem file that includes three certificates: cat object_server_certificate.pem intermediate_certificate.pem ca_root_certificate.pem > object_storage_certificates.pem 9. If credentials are required to access the object storage server, select Requires credentials and enter the Access key and Secret key for accessing the bucket. The access key and secret key can be a user name and password, respectively. 10. (Optional) Choose Add tag to tag your object storage location. Tags are key-value pairs that help you manage, filter, and search for your locations. We recommend creating at least a name tag for your location. 11. Choose Create location. Using the AWS CLI 1. Copy the following create-location-object-storage command: aws datasync create-location-object-storage \ --server-hostname object-storage-server.example.com \ --bucket-name your-bucket \ --agent-arns arn:aws:datasync:us-east-1:123456789012:agent/ agent-01234567890deadfb Configuring transfers with an object storage system 93 AWS DataSync User Guide 2. Specify the following required parameters in the command: • --server-hostname – Specify the domain name or IP address of your object storage server. • --bucket-name – Specify the name of the bucket on your object storage server that you're transferring to or from. • --agent-arns – Specify the DataSync agents that you want to connect to your object storage server. 3. (Optional) Add any of the following parameters to the command: • --server-port – Specifies the port that your object storage server accepts inbound network traffic on (for example, port 443). • --server-protocol – Specifies the protocol (HTTP or HTTPS) which your object storage server uses to communicate. • --access-key – Specifies the access key (for example, a user name) if credentials are required to authenticate with the object storage server. • --secret-key – Specifies the secret key (for example, a password) if credentials are required to authenticate with the object storage server. • --server-certificate – Specifies a certificate chain for DataSync to authenticate with your object storage system if the system uses a private or self-signed certificate authority (CA). You must specify a single .pem file with a full certificate chain (for example, file:/// home/user/.ssh/object_storage_certificates.pem). The certificate chain might include: • The object storage system's certificate • All intermediate certificates (if there are any) • The root certificate of the signing CA You can concatenate your certificates into a .pem file (which can be up to 32768 bytes before base64 encoding). The following example cat command
|
sync-dg-033
|
sync-dg.pdf
| 33 |
authenticate with the object storage server. • --server-certificate – Specifies a certificate chain for DataSync to authenticate with your object storage system if the system uses a private or self-signed certificate authority (CA). You must specify a single .pem file with a full certificate chain (for example, file:/// home/user/.ssh/object_storage_certificates.pem). The certificate chain might include: • The object storage system's certificate • All intermediate certificates (if there are any) • The root certificate of the signing CA You can concatenate your certificates into a .pem file (which can be up to 32768 bytes before base64 encoding). The following example cat command creates an object_storage_certificates.pem file that includes three certificates: cat object_server_certificate.pem intermediate_certificate.pem ca_root_certificate.pem > object_storage_certificates.pem • --subdirectory – Specifies the object prefix for your object storage server. Configuring transfers with an object storage system 94 AWS DataSync User Guide DataSync only copies objects with this prefix. • --tags – Specifies the key-value pair that represents a tag that you want to add to the location resource. Tags can help you manage, filter, and search for your resources. We recommend creating a name tag for your location. 4. Run the create-location-object-storage command. You get a response that shows you the location ARN that you just created. { "LocationArn": "arn:aws:datasync:us-east-1:123456789012:location/ loc-01234567890abcdef" } Transferring to or from AWS storage with AWS DataSync With AWS DataSync, you can transfer data to or from a number of AWS storage services. For more information, see Where can I transfer my data with DataSync? Topics • Configuring AWS DataSync transfers with Amazon S3 • Configuring AWS DataSync transfers with Amazon EFS • Configuring transfers with FSx for Windows File Server • Configuring DataSync transfers with FSx for Lustre • Configuring DataSync transfers with Amazon FSx for OpenZFS • Configuring transfers with Amazon FSx for NetApp ONTAP Configuring AWS DataSync transfers with Amazon S3 To transfer data to or from your Amazon S3 bucket, you create an AWS DataSync transfer location. DataSync can use this location as a source or destination for transferring data. Transferring to or from AWS storage 95 AWS DataSync User Guide Providing DataSync access to S3 buckets DataSync needs access to the S3 bucket that you're transferring to or from. To do this, you must create an AWS Identity and Access Management (IAM) role that DataSync assumes with the permissions required to access the bucket. You then specify this role when creating your Amazon S3 location for DataSync. Contents • Required permissions • Creating an IAM role for DataSync to access your Amazon S3 location • Accessing S3 buckets using server-side encryption • Accessing restricted S3 buckets • Accessing S3 buckets with restricted VPC access Required permissions The permissions that your IAM role needs can depend on whether bucket is a DataSync source or destination location. Amazon S3 on Outposts requires a different set of permissions. Amazon S3 (source location) { "Version": "2012-10-17", "Statement": [ { "Action": [ "s3:GetBucketLocation", "s3:ListBucket", "s3:ListBucketMultipartUploads" ], "Effect": "Allow", "Resource": "arn:aws:s3:::amzn-s3-demo-bucket" }, { "Action": [ "s3:AbortMultipartUpload", "s3:GetObject", "s3:GetObjectTagging", "s3:GetObjectVersion", "s3:GetObjectVersionTagging", Configuring transfers with Amazon S3 96 AWS DataSync User Guide "s3:ListMultipartUploadParts" ], "Effect": "Allow", "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*" } ] } Amazon S3 (destination location) { "Version": "2012-10-17", "Statement": [ { "Action": [ "s3:GetBucketLocation", "s3:ListBucket", "s3:ListBucketMultipartUploads" ], "Effect": "Allow", "Resource": "arn:aws:s3:::amzn-s3-demo-bucket" "Condition": { "StringEquals": { "aws:ResourceAccount": "123456789012" } } }, { "Action": [ "s3:AbortMultipartUpload", "s3:DeleteObject", "s3:GetObject", "s3:GetObjectTagging", "s3:GetObjectVersion", "s3:GetObjectVersionTagging", "s3:ListMultipartUploadParts", "s3:PutObject", "s3:PutObjectTagging" ], "Effect": "Allow", "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*" "Condition": { "StringEquals": { Configuring transfers with Amazon S3 97 AWS DataSync User Guide "aws:ResourceAccount": "123456789012" } } } ] } Amazon S3 on Outposts { "Version": "2012-10-17", "Statement": [{ "Action": [ "s3-outposts:ListBucket", "s3-outposts:ListBucketMultipartUploads" ], "Effect": "Allow", "Resource": [ "arn:aws:s3-outposts:region:account-id:outpost/outpost-id/ bucket/amzn-s3-demo-bucket", "arn:aws:s3-outposts:region:account-id:outpost/outpost-id/ accesspoint/bucket-access-point-name" ] }, { "Action": [ "s3-outposts:AbortMultipartUpload", "s3-outposts:DeleteObject", "s3-outposts:GetObject", "s3-outposts:GetObjectTagging", "s3-outposts:GetObjectVersion", "s3-outposts:GetObjectVersionTagging", "s3-outposts:ListMultipartUploadParts", "s3-outposts:PutObject", "s3-outposts:PutObjectTagging" ], "Effect": "Allow", "Resource": [ "arn:aws:s3-outposts:region:account-id:outpost/outpost-id/ bucket/amzn-s3-demo-bucket/*", "arn:aws:s3-outposts:region:account-id:outpost/outpost-id/ accesspoint/bucket-access-point-name/*" ] Configuring transfers with Amazon S3 98 AWS DataSync }, User Guide { "Action": "s3-outposts:GetAccessPoint", "Effect": "Allow", "Resource": "arn:aws:s3-outposts:region:account- id:outpost/outpost-id/accesspoint/bucket-access-point-name" } ] } Creating an IAM role for DataSync to access your Amazon S3 location When creating your Amazon S3 location in the console, DataSync can automatically create and assume an IAM role that normally has the right permissions to access your S3 bucket. In some situations, you might need to create this role manually (for example, accessing buckets with extra layers of security or transferring to or from a bucket in a different AWS accounts). Manually creating an IAM role for DataSync 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. In the left navigation pane, under Access management, choose Roles, and then choose Create role. 3. On the Select trusted entity page, for Trusted entity type, choose AWS service. 4. For Use case, choose DataSync in the dropdown list and select DataSync. Choose Next. 5. On the Add permissions page, choose Next. Give your role a name and
|
sync-dg-034
|
sync-dg.pdf
| 34 |
need to create this role manually (for example, accessing buckets with extra layers of security or transferring to or from a bucket in a different AWS accounts). Manually creating an IAM role for DataSync 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. In the left navigation pane, under Access management, choose Roles, and then choose Create role. 3. On the Select trusted entity page, for Trusted entity type, choose AWS service. 4. For Use case, choose DataSync in the dropdown list and select DataSync. Choose Next. 5. On the Add permissions page, choose Next. Give your role a name and choose Create role. 6. On the Roles page, search for the role that you just created and choose its name. 7. On the role's details page, choose the Permissions tab. Choose Add permissions then Create inline policy. 8. Choose the JSON tab and add the permissions required to access your bucket into the policy editor. 9. Choose Next. Give your policy a name and choose Create policy. 10. (Recommended) To prevent the cross-service confused deputy problem, do the following: a. On the role's details page, choose the Trust relationships tab. Choose Edit trust policy. b. Update the trust policy by using the following example, which includes the aws:SourceArn and aws:SourceAccount global condition context keys: Configuring transfers with Amazon S3 99 AWS DataSync User Guide { "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": { "Service": "datasync.amazonaws.com" }, "Action": "sts:AssumeRole", "Condition": { "StringEquals": { "aws:SourceAccount": "account-id" }, "StringLike": { "aws:SourceArn": "arn:aws:datasync:region:account-id:*" } } }] } c. Choose Update policy. You can specify this role when creating your Amazon S3 location. Accessing S3 buckets using server-side encryption DataSync can transfer data to or from S3 buckets that use server-side encryption. The type of encryption key a bucket uses can determine if you need a custom policy allowing DataSync to access the bucket. When using DataSync with S3 buckets that use server-side encryption, remember the following: • If your S3 bucket is encrypted with an AWS managed key – DataSync can access the bucket's objects by default if all your resources are in the same AWS account. • If your S3 bucket is encrypted with a customer managed AWS Key Management Service (AWS KMS) key (SSE-KMS) – The key's policy must include the IAM role that DataSync uses to access the bucket. • If your S3 bucket is encrypted with a customer managed SSE-KMS key and in a different AWS account – DataSync needs permission to access the bucket in the other AWS account. You can set up this up by doing the following: Configuring transfers with Amazon S3 100 AWS DataSync User Guide • In the IAM role that DataSync uses, you must specify the cross-account bucket's SSE-KMS key by using the key's fully qualified Amazon Resource Name (ARN). This is the same key ARN that you use to configure the bucket's default encryption. You can't specify a key ID, alias name, or alias ARN in this situation. Here's an example key ARN: arn:aws:kms:us- west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab For more information on specifying KMS keys in IAM policy statements, see the AWS Key Management Service Developer Guide. • In the SSE-KMS key policy, specify the IAM role used by DataSync. • If your S3 bucket is encrypted with a customer managed AWS KMS key (DSSE-KMS) for dual- layer server-side encryption – The key's policy must include the IAM role that DataSync uses to access the bucket. (Keep in mind that DSSE-KMS doesn't support S3 Bucket Keys, which can reduce AWS KMS request costs.) • If your S3 bucket is encrypted with a customer-provided encryption key (SSE-C) – DataSync can't access this bucket. Example: SSE-KMS key policy for DataSync The following example is a key policy for a customer-managed SSE-KMS key. The policy is associated with an S3 bucket that uses server-side encryption. If you want to use this example, replace the following values with your own: • account-id – Your AWS account. • admin-role-name – The name of the IAM role that can administer the key. • datasync-role-name – The name of the IAM role that allows DataSync to use the key when accessing the bucket. { "Id": "key-consolepolicy-3", "Version": "2012-10-17", "Statement": [ { Configuring transfers with Amazon S3 101 AWS DataSync User Guide "Sid": "Enable IAM Permissions", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::account-id:root" }, "Action": "kms:*", "Resource": "*" }, { "Sid": "Allow access for Key Administrators", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::account-id:role/admin-role-name" }, "Action": [ "kms:Create*", "kms:Describe*", "kms:Enable*", "kms:List*", "kms:Put*", "kms:Update*", "kms:Revoke*", "kms:Disable*", "kms:Get*", "kms:Delete*", "kms:TagResource", "kms:UntagResource", "kms:ScheduleKeyDeletion", "kms:CancelKeyDeletion" ], "Resource": "*" }, { "Sid": "Allow use of the key", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::account-id:role/datasync-role-name" }, "Action": [ "kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*", "kms:GenerateDataKey*" ], Configuring transfers with Amazon S3 102 AWS DataSync "Resource": "*" } ] } Accessing restricted S3 buckets User Guide If you
|
sync-dg-035
|
sync-dg.pdf
| 35 |
with Amazon S3 101 AWS DataSync User Guide "Sid": "Enable IAM Permissions", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::account-id:root" }, "Action": "kms:*", "Resource": "*" }, { "Sid": "Allow access for Key Administrators", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::account-id:role/admin-role-name" }, "Action": [ "kms:Create*", "kms:Describe*", "kms:Enable*", "kms:List*", "kms:Put*", "kms:Update*", "kms:Revoke*", "kms:Disable*", "kms:Get*", "kms:Delete*", "kms:TagResource", "kms:UntagResource", "kms:ScheduleKeyDeletion", "kms:CancelKeyDeletion" ], "Resource": "*" }, { "Sid": "Allow use of the key", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::account-id:role/datasync-role-name" }, "Action": [ "kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*", "kms:GenerateDataKey*" ], Configuring transfers with Amazon S3 102 AWS DataSync "Resource": "*" } ] } Accessing restricted S3 buckets User Guide If you need to transfer to or from an S3 bucket that typically denies all access, you can edit the bucket policy so that DataSync can access the bucket only for your transfer. Example: Allowing access based on IAM roles 1. Copy the following S3 bucket policy. { "Version": "2012-10-17", "Statement": [{ "Sid": "Deny-access-to-bucket", "Effect": "Deny", "Principal": "*", "Action": "s3:*", "Resource": [ "arn:aws:s3:::amzn-s3-demo-bucket", "arn:aws:s3:::amzn-s3-demo-bucket/*" ], "Condition": { "StringNotLike": { "aws:userid": [ "datasync-iam-role-id:*", "your-iam-role-id" ] } } }] } 2. In the policy, replace the following values: • amzn-s3-demo-bucket – Specify the name of the restricted S3 bucket. • datasync-iam-role-id – Specify the ID of the IAM role that DataSync uses to access the bucket. Run the following AWS CLI command to get the IAM role ID: Configuring transfers with Amazon S3 103 AWS DataSync User Guide aws iam get-role --role-name datasync-iam-role-name In the output, look for the RoleId value: "RoleId": "ANPAJ2UCCR6DPCEXAMPLE" • your-iam-role-id – Specify the ID of the IAM role that you use to create your DataSync location for the bucket. Run the following command to get the IAM role ID: aws iam get-role --role-name your-iam-role-name In the output, look for the RoleId value: "RoleId": "AIDACKCEVSQ6C2EXAMPLE" 3. Add this policy to your S3 bucket policy. 4. When you're done using DataSync with the restricted bucket, remove the conditions for both IAM roles from the bucket policy. Accessing S3 buckets with restricted VPC access An Amazon S3 bucket that limits access to specific virtual private cloud (VPC) endpoints or VPCs will deny DataSync from transferring to or from that bucket. To enable transfers in these situations, you can update the bucket's policy to include the IAM role that you specify with your DataSync location. Option 1: Allowing access based on DataSync location role ARN In the S3 bucket policy, you can specify the Amazon Resource Name (ARN) of your DataSync location IAM role. The following example is an S3 bucket policy that denies access from all but two VPCs (vpc-1234567890abcdef0 and vpc-abcdef01234567890). However, the policy also includes the ArnNotLikeIfExists condition and aws:PrincipalArn condition key, which allow the ARN of a DataSync location role to access the bucket. { "Version": "2012-10-17", "Statement": [ Configuring transfers with Amazon S3 104 AWS DataSync { User Guide "Sid": "Access-to-specific-VPCs-only", "Effect": "Deny", "Principal": "*", "Action": "s3:*", "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*", "Condition": { "StringNotEqualsIfExists": { "aws:SourceVpc": [ "vpc-1234567890abcdef0", "vpc-abcdef01234567890" ] }, "ArnNotLikeIfExists": { "aws:PrincipalArn": [ "arn:aws:iam::account-id:role/datasync-location-role-name" ] } } } ] } Option 2: Allowing access based on DataSync location role tag In the S3 bucket policy, you can specify a tag attached to your DataSync location IAM role. The following example is an S3 bucket policy that denies access from all but two VPCs (vpc-1234567890abcdef0 and vpc-abcdef01234567890). However, the policy also includes the StringNotEqualsIfExists condition and aws:PrincipalTag condition key, which allow a principal with the tag key exclude-from-vpc-restriction and value true. You can try a similar approach in your bucket policy by specifying a tag attached to your DataSync location role. { "Version": "2012-10-17", "Statement": [ { "Sid": "Access-to-specific-VPCs-only", "Effect": "Deny", "Principal": "*", "Action": "s3:*", "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*", Configuring transfers with Amazon S3 105 AWS DataSync User Guide "Condition": { "StringNotEqualsIfExists": { "aws:SourceVpc": [ "vpc-1234567890abcdef0", "vpc-abcdef01234567890" ], "aws:PrincipalTag/exclude-from-vpc-restriction": "true" } } } ] } Storage class considerations with Amazon S3 transfers When Amazon S3 is your destination location, DataSync can transfer your data directly into a specific Amazon S3 storage class. Some storage classes have behaviors that can affect your Amazon S3 storage costs. When using storage classes that can incur additional charges for overwriting, deleting, or retrieving objects, changes to object data or metadata result in such charges. For more information, see Amazon S3 pricing. Important New objects transferred to your Amazon S3 destination location are stored using the storage class that you specify when creating your location. By default, DataSync preserves the storage class of existing objects in your destination location unless you configure your task to transfer all data. In those situations, the storage class that you specify when creating your location is used for all objects. Amazon S3 storage class Considerations S3 Standard Choose S3 Standard to store your frequently accessed files redundant ly in multiple Availability Zones that are geographically separated. This is the default
|
sync-dg-036
|
sync-dg.pdf
| 36 |
more information, see Amazon S3 pricing. Important New objects transferred to your Amazon S3 destination location are stored using the storage class that you specify when creating your location. By default, DataSync preserves the storage class of existing objects in your destination location unless you configure your task to transfer all data. In those situations, the storage class that you specify when creating your location is used for all objects. Amazon S3 storage class Considerations S3 Standard Choose S3 Standard to store your frequently accessed files redundant ly in multiple Availability Zones that are geographically separated. This is the default if you don't specify a storage class. Configuring transfers with Amazon S3 106 AWS DataSync Amazon S3 storage class S3 Intelligent-Tiering Considerations User Guide Choose S3 Intelligent-Tiering to optimize storage costs by automatic ally moving data to the most cost-effective storage access tier. You pay a monthly charge per object stored in the S3 Intelligent- Tiering storage class. This Amazon S3 charge includes monitoring data access patterns and moving objects between tiers. S3 Standard-IA Choose S3 Standard-IA to store your infrequently accessed objects redundantly in multiple Availability Zones that are geographically separated. Objects stored in the S3 Standard-IA storage class can incur additiona l charges for overwriting, deleting, or retrieving. Consider how often these objects change, how long you plan to keep these objects, and how often you need to access them. Changes to object data or metadata are equivalent to deleting an object and creating a new one to replace it. This results in additional charges for objects stored in the S3 Standard-IA storage class. Objects less than 128 KB are smaller than the minimum capacity charge per object in the S3 Standard-IA storage class. These objects are stored in the S3 Standard storage class. Configuring transfers with Amazon S3 107 AWS DataSync User Guide Amazon S3 storage class Considerations S3 One Zone-IA Choose S3 One Zone-IA to store your infrequently accessed objects in a single Availability Zone. Objects stored in the S3 One Zone-IA storage class can incur additiona l charges for overwriting, deleting, or retrieving. Consider how often these objects change, how long you plan to keep these objects, and how often you need to access them. Changes to object data or metadata are equivalent to deleting an object and creating a new one to replace it. This results in additional charges for objects stored in the S3 One Zone-IA storage class. Objects less than 128 KB are smaller than the minimum capacity charge per object in the S3 One Zone-IA storage class. These objects are stored in the S3 Standard storage class. S3 Glacier Instant Retrieval Choose S3 Glacier Instant Retrieval to archive objects that are rarely accessed but require retrieval in milliseconds. Data stored in the S3 Glacier Instant Retrieval storage class offers cost savings compared to the S3 Standard-IA storage class with the same latency and throughput performance. S3 Glacier Instant Retrieval has higher data access costs than S3 Standard-IA, though. Objects stored in S3 Glacier Instant Retrieval can incur additional charges for overwriting, deleting, or retrieving. Consider how often these objects change, how long you plan to keep these objects, and how often you need to access them. Changes to object data or metadata are equivalent to deleting an object and creating a new one to replace it. This results in additional charges for objects stored in the S3 Glacier Instant Retrieval storage class. Objects less than 128 KB are smaller than the minimum capacity charge per object in the S3 Glacier Instant Retrieval storage class. These objects are stored in the S3 Standard storage class. Configuring transfers with Amazon S3 108 AWS DataSync User Guide Amazon S3 storage class Considerations S3 Glacier Flexible Retrieval Choose S3 Glacier Flexible Retrieval for more active archives. Objects stored in S3 Glacier Flexible Retrieval can incur additional charges for overwriting, deleting, or retrieving. Consider how often these objects change, how long you plan to keep these objects, and how often you need to access them. Changes to object data or metadata are equivalent to deleting an object and creating a new one to replace it. This results in additional charges for objects stored in the S3 Glacier Flexible Retrieval storage class. The S3 Glacier Flexible Retrieval storage class requires 40 KB of additional metadata for each archived object. DataSync puts objects that are less than 40 KB in the S3 Standard storage class. You must restore objects archived in this storage class before DataSync can read them. For information, see Working with archived objects in the Amazon S3 User Guide. When using S3 Glacier Flexible Retrieval, choose the Verify only the data transferred task option to compare data and metadata checksums at the end of the transfer. You can't use the Verify all data in the
|
sync-dg-037
|
sync-dg.pdf
| 37 |
Glacier Flexible Retrieval storage class. The S3 Glacier Flexible Retrieval storage class requires 40 KB of additional metadata for each archived object. DataSync puts objects that are less than 40 KB in the S3 Standard storage class. You must restore objects archived in this storage class before DataSync can read them. For information, see Working with archived objects in the Amazon S3 User Guide. When using S3 Glacier Flexible Retrieval, choose the Verify only the data transferred task option to compare data and metadata checksums at the end of the transfer. You can't use the Verify all data in the destination option for this storage class because it requires retrieving all existing objects from the destination. Configuring transfers with Amazon S3 109 AWS DataSync User Guide Amazon S3 storage class Considerations S3 Glacier Deep Archive Choose S3 Glacier Deep Archive to archive your objects for long-term data retention and digital preservation where data is accessed once or twice a year. Objects stored in S3 Glacier Deep Archive can incur additional charges for overwriting, deleting, or retrieving. Consider how often these objects change, how long you plan to keep these objects, and how often you need to access them. Changes to object data or metadata are equivalent to deleting an object and creating a new one to replace it. This results in additional charges for objects stored in the S3 Glacier Deep Archive storage class. The S3 Glacier Deep Archive storage class requires 40 KB of additional metadata for each archived object. DataSync puts objects that are less than 40 KB in the S3 Standard storage class. You must restore objects archived in this storage class before DataSync can read them. For information, see Working with archived objects in the Amazon S3 User Guide. When using S3 Glacier Deep Archive, choose the Verify only the data transferred task option to compare data and metadata checksums at the end of the transfer. You can't use the Verify all data in the destination option for this storage class because it requires retrieving all existing objects from the destination. S3 Outposts The storage class for Amazon S3 on Outposts. Evaluating S3 request costs when using DataSync With Amazon S3 locations, you incur costs related to S3 API requests made by DataSync. This section can help you understand how DataSync uses these requests and how they might affect your Amazon S3 costs. Topics Configuring transfers with Amazon S3 110 AWS DataSync • S3 requests made by DataSync • Cost considerations S3 requests made by DataSync User Guide The following table describes the S3 requests that DataSync can make when you’re copying data to or from an Amazon S3 location. S3 request ListObjectV2 HeadObject GetObject GetObjectTagging PutObject How DataSync uses it DataSync makes at least one LIST request for every object ending in a forward slash (/) to list the objects that start with that prefix. This request is called during a task’s preparing phase. DataSync makes HEAD requests to retrieve object metadata during a task’s preparing and verifying phases. There can be multiple HEAD requests per object depending on how you want DataSync to verify the integrity of the data it transfers. DataSync makes GET requests to read data from an object during a task’s transferring phase. There can be multiple GET requests for large objects. If you configure your task to copy object tags, DataSync makes these GET requests to check for object tags during the task's preparing and transferring phases. DataSync makes PUT requests to create objects and prefixes in a destination S3 bucket during a task’s transferring phase. Since DataSync uses the Amazon S3 multipart upload feature, there can be multiple PUT Configuring transfers with Amazon S3 111 AWS DataSync S3 request PutObjectTagging CopyObject Cost considerations User Guide How DataSync uses it requests for large objects. To help minimize storage costs, we recommend using a lifecycle configuration to stop incomplete multipart uploads. If your source objects have tags and you configure your task to copy object tags, DataSync makes these PUT requests when transferring those tags. DataSync makes a COPY request to create a copy of an object only if that object’s metadata changes. This can happen if you originally copied data to the S3 bucket using another service or tool that didn’t carry over its metadata. DataSync makes S3 requests on S3 buckets every time you run your task. This can lead to charges adding up in certain situations. For example: • You’re frequently transferring objects to or from an S3 bucket. • You may not be transferring much data, but your S3 bucket has lots of objects in it. You can still see high charges in this scenario because DataSync makes S3 requests on each of the bucket's objects. • You're transferring between S3 buckets, so DataSync is making S3
|
sync-dg-038
|
sync-dg.pdf
| 38 |
to the S3 bucket using another service or tool that didn’t carry over its metadata. DataSync makes S3 requests on S3 buckets every time you run your task. This can lead to charges adding up in certain situations. For example: • You’re frequently transferring objects to or from an S3 bucket. • You may not be transferring much data, but your S3 bucket has lots of objects in it. You can still see high charges in this scenario because DataSync makes S3 requests on each of the bucket's objects. • You're transferring between S3 buckets, so DataSync is making S3 requests on the source and destination. To help minimize S3 request costs related to DataSync, consider the following: Topics • What S3 storage classes am I using? • How often do I need to transfer my data? Configuring transfers with Amazon S3 112 AWS DataSync User Guide What S3 storage classes am I using? S3 request charges can vary based on the Amazon S3 storage class your objects are using, particularly for classes that archive objects (such as S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive). Here are some scenarios in which storage classes can affect your S3 request charges when using DataSync: • Each time you run a task, DataSync makes HEAD requests to retrieve object metadata. These requests result in charges even if you aren’t moving any objects. How much these requests affect your bill depends on the storage class your objects are using along with the number of objects that DataSync scans. • If you moved objects into the S3 Glacier Instant Retrieval storage class (either directly or through a bucket lifecycle configuration), requests on objects in this class are more expensive than objects in other storage classes. • If you configure your DataSync task to verify that your source and destination locations are fully synchronized, there will be GET requests for each object in all storage classes (except S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive). • In addition to GET requests, you incur data retrieval costs for objects in the S3 Standard-IA, S3 One Zone-IA, or S3 Glacier Instant Retrieval storage class. For more information, see Amazon S3 pricing. How often do I need to transfer my data? If you need to move data on a recurring basis, think about a schedule that doesn't run more tasks than you need. You may also consider limiting the scope of your transfers. For example, you can configure DataSync to focus on objects in certain prefixes or filter what data gets transferred. These options can help reduce the number of S3 requests made each time you run your DataSync task. Object considerations with Amazon S3 transfers • If you're transferring from an S3 bucket, use S3 Storage Lens to determine how many objects you're moving. • When transferring between S3 buckets, we recommend using Enhanced task mode because you aren't subject to DataSync task quotas. Configuring transfers with Amazon S3 113 AWS DataSync User Guide • DataSync might not transfer an object with nonstandard characters in its name. For more information, see the object key naming guidelines in the Amazon S3 User Guide. • When using DataSync with an S3 bucket that uses versioning, remember the following: • When transferring to an S3 bucket, DataSync creates a new version of an object if that object is modified at the source. This results in additional charges. • An object has different version IDs in the source and destination buckets. • After initially transferring data from an S3 bucket to a file system (for example, NFS or Amazon FSx), subsequent runs of the same DataSync task won't include objects that have been modified but are the same size they were during the first transfer. Creating your transfer location for an Amazon S3 general purpose bucket To create a location for your transfer, you need an existing S3 general purpose bucket. If you don't have one, see the Amazon S3 User Guide. Important Before you create your location, make sure that you read the following sections: • Storage class considerations with Amazon S3 transfers • Evaluating S3 request costs when using DataSync Using the DataSync console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. 3. 4. In the left navigation pane, expand Data transfer, then choose Locations and Create location. For Location type, choose Amazon S3, and then choose General purpose bucket. For S3 URI, enter or choose the bucket and prefix that you want to use for your location. Warning DataSync can't transfer objects with a prefix that begins with a slash (/) or includes //, /./, or /../ patterns. For example: • /photos Configuring transfers with Amazon S3 114 AWS DataSync User Guide • photos//2006/January • photos/./2006/February • photos/../2006/March 5. For S3
|
sync-dg-039
|
sync-dg.pdf
| 39 |
console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. 3. 4. In the left navigation pane, expand Data transfer, then choose Locations and Create location. For Location type, choose Amazon S3, and then choose General purpose bucket. For S3 URI, enter or choose the bucket and prefix that you want to use for your location. Warning DataSync can't transfer objects with a prefix that begins with a slash (/) or includes //, /./, or /../ patterns. For example: • /photos Configuring transfers with Amazon S3 114 AWS DataSync User Guide • photos//2006/January • photos/./2006/February • photos/../2006/March 5. For S3 storage class when used as a destination, choose a storage class that you want your objects to use when Amazon S3 is a transfer destination. For more information, see Storage class considerations with Amazon S3 transfers. 6. For IAM role, do one of the following: • Choose Autogenerate for DataSync to automatically create an IAM role with the permissions required to access the S3 bucket. If DataSync previously created an IAM role for this S3 bucket, that role is chosen by default. • Choose a custom IAM role that you created. For more information, see Creating an IAM role for DataSync to access your Amazon S3 location. 7. (Optional) Choose Add new tag to tag your Amazon S3 location. Tags can help you manage, filter, and search for your resources. We recommend creating a name tag for your location. 8. Choose Create location. Using the AWS CLI 1. Copy the following create-location-s3 command: aws datasync create-location-s3 \ --s3-bucket-arn 'arn:aws:s3:::amzn-s3-demo-bucket' \ --s3-storage-class 'your-S3-storage-class' \ --s3-config 'BucketAccessRoleArn=arn:aws:iam::account-id:role/role-allowing- datasync-operations' \ --subdirectory /your-prefix-name 2. 3. For --s3-bucket-arn, specify the ARN of the S3 bucket that you want to use as a location. For --s3-storage-class, specify a storage class that you want your objects to use when Amazon S3 is a transfer destination. 4. For --s3-config, specify the ARN of the IAM role that DataSync needs to access your bucket. Configuring transfers with Amazon S3 115 AWS DataSync User Guide For more information, see Creating an IAM role for DataSync to access your Amazon S3 location. 5. For --subdirectory, specify a prefix in the S3 bucket that DataSync reads from or writes to (depending on whether the bucket is a source or destination location). Warning DataSync can't transfer objects with a prefix that begins with a slash (/) or includes //, /./, or /../ patterns. For example: • /photos • photos//2006/January • photos/./2006/February • photos/../2006/March 6. Run the create-location-s3 command. If the command is successful, you get a response that shows you the ARN of the location that you created. For example: { "LocationArn": "arn:aws:datasync:us-east-1:111222333444:location/ loc-0b3017fc4ba4a2d8d" } You can use this location as a source or destination for your DataSync task. Creating your transfer location for an S3 on Outposts bucket To create a location for your transfer, you need an existing Amazon S3 on Outposts bucket. If you don't have one, see the Amazon S3 on Outposts User Guide. You also need a DataSync agent. For more information, see Deploying your agent on AWS Outposts. When transferring from an S3 on Outposts bucket prefix that contains a large dataset (such as hundreds of thousands or millions of objects), your DataSync task might time out. To avoid this, Configuring transfers with Amazon S3 116 AWS DataSync User Guide consider using a DataSync manifest, which lets you specify the exact objects that you need to transfer. Using the DataSync console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. 3. 4. 5. 6. 7. In the left navigation pane, expand Data transfer, then choose Locations and Create location. For Location type, choose Amazon S3, and then choose Outposts bucket. For S3 bucket, choose an Amazon S3 access point that can access your S3 on Outposts bucket. For more information, see the Amazon S3 User Guide. For S3 storage class when used as a destination, choose a storage class that you want your objects to use when Amazon S3 is a transfer destination. For more information, see Storage class considerations with Amazon S3 transfers. DataSync by default uses the S3 Outposts storage class for Amazon S3 on Outposts. For Agents, specify the Amazon Resource Name (ARN) of the DataSync agent on your Outpost. For Folder, enter a prefix in the S3 bucket that DataSync reads from or writes to (depending on whether the bucket is a source or destination location). Warning DataSync can't transfer objects with a prefix that begins with a slash (/) or includes //, /./, or /../ patterns. For example: • /photos • photos//2006/January • photos/./2006/February • photos/../2006/March 8. For IAM role, do one of the following: • Choose Autogenerate for DataSync to automatically create an IAM role with the permissions required to access the S3 bucket. If DataSync previously created an IAM role
|
sync-dg-040
|
sync-dg.pdf
| 40 |
the DataSync agent on your Outpost. For Folder, enter a prefix in the S3 bucket that DataSync reads from or writes to (depending on whether the bucket is a source or destination location). Warning DataSync can't transfer objects with a prefix that begins with a slash (/) or includes //, /./, or /../ patterns. For example: • /photos • photos//2006/January • photos/./2006/February • photos/../2006/March 8. For IAM role, do one of the following: • Choose Autogenerate for DataSync to automatically create an IAM role with the permissions required to access the S3 bucket. If DataSync previously created an IAM role for this S3 bucket, that role is chosen by default. Configuring transfers with Amazon S3 117 AWS DataSync User Guide • Choose a custom IAM role that you created. For more information, see Creating an IAM role for DataSync to access your Amazon S3 location. 9. (Optional) Choose Add new tag to tag your Amazon S3 location. Tags can help you manage, filter, and search for your resources. We recommend creating a name tag for your location. 10. Choose Create location. Using the AWS CLI 1. Copy the following create-location-s3 command: aws datasync create-location-s3 \ --s3-bucket-arn 'bucket-access-point' \ --s3-storage-class 'your-S3-storage-class' \ --s3-config 'BucketAccessRoleArn=arn:aws:iam::account-id:role/role-allowing- datasync-operations' \ --subdirectory /your-folder \ --agent-arns 'arn:aws:datasync:your-region:account-id::agent/agent-agent-id' 2. 3. For --s3-bucket-arn, specify the ARN an Amazon S3 access point that can access your S3 on Outposts bucket. For more information, see the Amazon S3 User Guide. For --s3-storage-class, specify a storage class that you want your objects to use when Amazon S3 is a transfer destination. For more information, see Storage class considerations with Amazon S3 transfers. DataSync by default uses the S3 Outposts storage class for S3 on Outposts. 4. For --s3-config, specify the ARN of the IAM role that DataSync needs to access your bucket. For more information, see Creating an IAM role for DataSync to access your Amazon S3 location. 5. For --subdirectory, specify a prefix in the S3 bucket that DataSync reads from or writes to (depending on whether the bucket is a source or destination location). Configuring transfers with Amazon S3 118 AWS DataSync User Guide Warning DataSync can't transfer objects with a prefix that begins with a slash (/) or includes //, /./, or /../ patterns. For example: • /photos • photos//2006/January • photos/./2006/February • photos/../2006/March 6. For --agent-arns, specify the ARN of the DataSync agent on your Outpost. 7. Run the create-location-s3 command. If the command is successful, you get a response that shows you the ARN of the location that you created. For example: { "LocationArn": "arn:aws:datasync:us-east-1:111222333444:location/ loc-0b3017fc4ba4a2d8d" } You can use this location as a source or destination for your DataSync task. Amazon S3 transfers across AWS accounts With DataSync, you can move data to or from S3 buckets in different AWS accounts. For more information, see the following tutorials: • Transferring data from on-premises storage to Amazon S3 across AWS accounts • Transferring data from Amazon S3 to Amazon S3 across AWS accounts Amazon S3 transfers between commercial and AWS GovCloud (US) Regions By default, DataSync doesn't transfer between S3 buckets in commercial and AWS GovCloud (US) Regions. You can still set up this kind of transfer, though, by creating an object storage location for one of the S3 buckets in your transfer. This type of location requires a DataSync agent. Configuring transfers with Amazon S3 119 AWS DataSync User Guide Before you begin: Make sure that you understand the cost implications of transferring between Regions. For more information, see AWS DataSync Pricing. Contents • Providing DataSync access to your object storage location's bucket • Creating your DataSync agent • Creating an object storage location for your S3 bucket Providing DataSync access to your object storage location's bucket When creating the object storage location for this transfer, you must provide DataSync the credentials of an IAM user with permission to access the location's S3 bucket. For more information, see Required permissions. Warning IAM users have long-term credentials, which presents a security risk. To help mitigate this risk, we recommend that you provide these users with only the permissions they require to perform the task and that you remove these users when they are no longer needed. Creating your DataSync agent Since you're transferring between a commercial and AWS GovCloud (US) Region, you deploy your DataSync agent as an Amazon EC2 instance in one of the Regions. We recommend that your agent use a VPC service endpoint to avoid data transfer charges out to the public internet. For more information, see Amazon EC2 Data Transfer pricing. Choose one of the following scenarios that describe how to create an agent based on the Region where you plan to run your DataSync task. When running a DataSync task in a commercial Region The following diagram shows a transfer where your
|
sync-dg-041
|
sync-dg.pdf
| 41 |
your DataSync agent Since you're transferring between a commercial and AWS GovCloud (US) Region, you deploy your DataSync agent as an Amazon EC2 instance in one of the Regions. We recommend that your agent use a VPC service endpoint to avoid data transfer charges out to the public internet. For more information, see Amazon EC2 Data Transfer pricing. Choose one of the following scenarios that describe how to create an agent based on the Region where you plan to run your DataSync task. When running a DataSync task in a commercial Region The following diagram shows a transfer where your DataSync task and agent are in the commercial Region. Configuring transfers with Amazon S3 120 AWS DataSync User Guide Reference Description 1 2 3 In the commercial Region where you're running a DataSync task, data transfers from the source S3 bucket. The source bucket is configured as an Amazon S3 location in the commercial Region. Data transfers through the DataSync agent, which is in the same VPC and subnet where the VPC service endpoint and network interfaces are located. Data transfers to the destination S3 bucket in the AWS GovCloud (US) Region. The destination bucket is configured as an object storage location in the commercial Region. You can use this same setup to transfer the opposite direction, too, from the AWS GovCloud (US) Region to the commercial Region. To create your DataSync agent 1. Deploy an Amazon EC2 agent in your commercial Region. 2. Configure your agent to use a VPC service endpoint. Configuring transfers with Amazon S3 121 AWS DataSync 3. Activate your agent. User Guide When running a DataSync task in a GovCloud (US) Region The following diagram shows a transfer where your DataSync task and agent are in the AWS GovCloud (US) Region. Reference Description 1 2 3 Data transfers from the source S3 bucket in the commercial Region to the AWS GovCloud (US) Region where you're running a DataSync task. The source bucket is configured as an object storage location in the AWS GovCloud (US) Region. In the AWS GovCloud (US) Region, data transfers through the DataSync agent in the same VPC and subnet where the VPC service endpoint and network interfaces are located. Data transfers to the destination S3 bucket in the AWS GovCloud (US) Region. The destination bucket is configured as an Amazon S3 location in the AWS GovCloud (US) Region. Configuring transfers with Amazon S3 122 AWS DataSync User Guide You can use this same setup to transfer the opposite direction, too, from the AWS GovCloud (US) Region to the commercial Region. To create your DataSync agent 1. Deploy an Amazon EC2 agent in your AWS GovCloud (US) Region. 2. Configure your agent to use a VPC service endpoint. 3. Activate your agent. If your dataset is highly compressible, you might see reduced costs by instead creating your agent in a commercial Region while running a task in an AWS GovCloud (US) Region. There's more setup than normal for creating this agent, including preparing the agent for use in a commercial Region. For information about creating an agent for this setup, see the Move data in and out of AWS GovCloud (US) with AWS DataSync blog. Creating an object storage location for your S3 bucket You need an object storage location for the S3 bucket that's in the Region where you aren't running your DataSync task. Using the DataSync console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. Make sure that you're in the same Region where you plan to run your task. 3. 4. 5. 6. 7. 8. In the left navigation pane, expand Data transfer, then choose Locations and Create location. For Location type, choose Object storage. For Agents, choose the DataSync agent that you created for this transfer. For Server, enter an Amazon S3 endpoint for your bucket by using one of the following formats: • Commercial Region bucket: s3.your-region.amazonaws.com • AWS GovCloud (US) Region bucket: s3.your-gov-region.amazonaws.com For a list of Amazon S3 endpoints, see the AWS General Reference. For Bucket name, enter the name of the S3 bucket. For Folder, enter a prefix in the S3 bucket that DataSync reads from or writes to (depending on whether the bucket is a source or destination location). Configuring transfers with Amazon S3 123 AWS DataSync User Guide Warning DataSync can't transfer objects with a prefix that begins with a slash (/) or includes //, /./, or /../ patterns. For example: • /photos • photos//2006/January • photos/./2006/February • photos/../2006/March 9. Select Requires credentials and do the following: • For Access key, enter the access key for an IAM user that can access the bucket. • For Secret key, enter the same IAM user’s secret key. 10. (Optional) Choose Add tag to tag your location. Tags can help you manage, filter, and
|
sync-dg-042
|
sync-dg.pdf
| 42 |
bucket is a source or destination location). Configuring transfers with Amazon S3 123 AWS DataSync User Guide Warning DataSync can't transfer objects with a prefix that begins with a slash (/) or includes //, /./, or /../ patterns. For example: • /photos • photos//2006/January • photos/./2006/February • photos/../2006/March 9. Select Requires credentials and do the following: • For Access key, enter the access key for an IAM user that can access the bucket. • For Secret key, enter the same IAM user’s secret key. 10. (Optional) Choose Add tag to tag your location. Tags can help you manage, filter, and search for your resources. We recommend creating a name tag for your location. 11. Choose Create location. Using the AWS CLI 1. Copy the following create-location-object-storage command: aws datasync create-location-object-storage \ --server-hostname s3-endpoint \ --bucket-name amzn-s3-demo-bucket \ --agent-arns arn:aws:datasync:your-region:123456789012:agent/ agent-01234567890deadfb 2. For the --server-hostname parameter, specify an Amazon S3 endpoint for your bucket by using one of the following formats: • Commercial Region bucket: s3.your-region.amazonaws.com • AWS GovCloud (US) Region bucket: s3.your-gov-region.amazonaws.com Configuring transfers with Amazon S3 124 AWS DataSync User Guide For the Region in the endpoint, make sure that you specify the same Region where you plan to run your task. 3. 4. 5. 6. 7. For a list of Amazon S3 endpoints, see the AWS General Reference. For the --bucket-name parameter, specify the name of the S3 bucket. For the --agent-arns parameter, specify the DataSync agent that you created for this transfer. For the --access-key parameter, specify the access key for an IAM user that can access the bucket. For the --secret-key parameter, enter the same IAM user's secret key. (Optional) For the --subdirectory parameter, specify a prefix in the S3 bucket that DataSync reads from or writes to (depending on whether the bucket is a source or destination location). Warning DataSync can't transfer objects with a prefix that begins with a slash (/) or includes //, /./, or /../ patterns. For example: • /photos • photos//2006/January • photos/./2006/February • photos/../2006/March 8. (Optional) For the --tags parameter, specify key-value pairs that represent tags for the location resource. Tags can help you manage, filter, and search for your resources. We recommend creating a name tag for your location. 9. Run the create-location-object-storage command. You get a response that shows you the location ARN that you just created. { Configuring transfers with Amazon S3 125 AWS DataSync User Guide "LocationArn": "arn:aws:datasync:us-east-1:123456789012:location/ loc-01234567890abcdef" } You can use this location as a source or destination for your DataSync task. For the other S3 bucket in this transfer, create an Amazon S3 location. Next steps Some possible next steps include: 1. If needed, create your other location. For more information, see Where can I transfer my data with AWS DataSync? 2. Configure DataSync task settings, such as what files to transfer, how to handle metadata, among other options. 3. Set a schedule for your DataSync task. 4. Configure monitoring for your DataSync task. 5. Start your task. Configuring AWS DataSync transfers with Amazon EFS To transfer data to or from your Amazon EFS file system, you must create an AWS DataSync transfer location. DataSync can use this location as a source or destination for transferring data. Providing DataSync access to Amazon EFS file systems Creating a location involves understanding how DataSync can access your storage. For Amazon EFS, DataSync mounts your file system as a root user from your virtual private cloud (VPC) using network interfaces. Contents • Determining the subnet and security groups for your mount target • Accessing restricted file systems • Creating a DataSync IAM role for file system access • Example file system policy allowing DataSync access Configuring transfers with Amazon EFS 126 AWS DataSync User Guide Determining the subnet and security groups for your mount target When creating your location, you specify the subnet and security groups that allow DataSync to connect to one of your Amazon EFS file system's mount targets. The subnet that you specify must be located: • In the same VPC as your file system. • In the same Availability Zone as at least one mount target for your file system. Note You don't need to specify a subnet that includes a file system mount target. The security groups that you specify must allow inbound traffic on Network File System (NFS) port 2049. For information on creating and updating security groups for your mount targets, see the Amazon EFS User Guide. Specifying security groups associated with a mount target You can specify a security group that's associated with one of your file system's mount targets. We recommend this approach from a network management standpoint. Specifying security groups that aren't associated with a mount target You also can specify a security group that isn't associated with one of your file system's mount targets. However,
|
sync-dg-043
|
sync-dg.pdf
| 43 |
system mount target. The security groups that you specify must allow inbound traffic on Network File System (NFS) port 2049. For information on creating and updating security groups for your mount targets, see the Amazon EFS User Guide. Specifying security groups associated with a mount target You can specify a security group that's associated with one of your file system's mount targets. We recommend this approach from a network management standpoint. Specifying security groups that aren't associated with a mount target You also can specify a security group that isn't associated with one of your file system's mount targets. However, this security group must be able to communicate with a mount target's security group. For example, here's how you might create a relationship between security group D (for DataSync) and security group M (for the mount target): • Security group D, which you specify when creating your location, must have a rule that allows outbound connections on NFS port 2049 to security group M. • Security group M, which you associate with the mount target, must allow inbound access on NFS port 2049 from security group S. Configuring transfers with Amazon EFS 127 AWS DataSync User Guide To find a mount target's security group The following instructions can help you identify the security group of an Amazon EFS file system mount target that you want DataSync to use for your transfer. 1. In the AWS CLI, run the following describe-mount-targets command. aws efs describe-mount-targets \ --region file-system-region \ --file-system-id file-system-id This command returns information about your file system's mount targets (similar to the following example output). { "MountTargets": [ { "OwnerId": "111222333444", "MountTargetId": "fsmt-22334a10", "FileSystemId": "fs-123456ab", "SubnetId": "subnet-f12a0e34", "LifeCycleState": "available", "IpAddress": "11.222.0.123", "NetworkInterfaceId": "eni-1234a044" } ] } 2. Take note of the MountTargetId value that you want to use. 3. Run the following describe-mount-target-security-groups command using the MountTargetId to see the security group of your mount target. aws efs describe-mount-target-security-groups \ --region file-system-region \ --mount-target-id mount-target-id You specify this security group when creating your location. Configuring transfers with Amazon EFS 128 AWS DataSync Accessing restricted file systems User Guide DataSync can transfer to or from Amazon EFS file systems that restrict access through access points and IAM policies. Note If DataSync accesses a destination file system through an access point that enforces user identity, the POSIX user and group IDs for your source data aren't preserved if you configure your DataSync task to copy ownership. Instead, the transferred files and folders are set to the access point's user and group IDs. When this happens, task verification fails because DataSync detects a mismatch between metadata in the source and destination locations. Contents • Creating a DataSync IAM role for file system access • Example file system policy allowing DataSync access Creating a DataSync IAM role for file system access If you have an Amazon EFS file system that restricts access through an IAM policy, you can create an IAM role that provides DataSync permission to read from or write data to the file system. You then might need to specify that role in your file system policy. To create the DataSync IAM role 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. In the left navigation pane, under Access management, choose Roles, and then choose Create role. 3. On the Select trusted entity page, for Trusted entity type, choose Custom trust policy. 4. Paste the following JSON into the policy editor: { "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": { Configuring transfers with Amazon EFS 129 AWS DataSync User Guide "Service": "datasync.amazonaws.com" }, "Action": "sts:AssumeRole" }] } 5. Choose Next. On the Add permissions page, choose Next. 6. Give your role a name and choose Create role. You specify this role when creating your location. Example file system policy allowing DataSync access The following example file system policy shows how access to an Amazon EFS file system (identified in the policy as fs-1234567890abcdef0) is restricted but still allows access to DataSync through an IAM role named MyDataSyncRole: { "Version": "2012-10-17", "Id": "ExampleEFSFileSystemPolicy", "Statement": [{ "Sid": "AccessEFSFileSystem", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::111122223333:role/MyDataSyncRole" }, "Action": [ "elasticfilesystem:ClientMount", "elasticfilesystem:ClientWrite", "elasticfilesystem:ClientRootAccess" ], "Resource": "arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/ fs-1234567890abcdef0", "Condition": { "Bool": { "aws:SecureTransport": "true" }, "StringEquals": { "elasticfilesystem:AccessPointArn": "arn:aws:elasticfilesystem:us- east-1:111122223333:access-point/fsap-abcdef01234567890" } } Configuring transfers with Amazon EFS 130 AWS DataSync }] } User Guide • Principal – Specifies an IAM role that gives DataSync permission to access the file system. • Action – Gives DataSync root access and allows it to read from and write to the file system. • aws:SecureTransport – Requires NFS clients to use TLS when connecting to the file system. • elasticfilesystem:AccessPointArn – Allows access to the file system only through a specific access point. Network considerations with Amazon EFS transfers VPCs that you use with DataSync must have default tenancy. VPCs with dedicated
|
sync-dg-044
|
sync-dg.pdf
| 44 |
"arn:aws:elasticfilesystem:us- east-1:111122223333:access-point/fsap-abcdef01234567890" } } Configuring transfers with Amazon EFS 130 AWS DataSync }] } User Guide • Principal – Specifies an IAM role that gives DataSync permission to access the file system. • Action – Gives DataSync root access and allows it to read from and write to the file system. • aws:SecureTransport – Requires NFS clients to use TLS when connecting to the file system. • elasticfilesystem:AccessPointArn – Allows access to the file system only through a specific access point. Network considerations with Amazon EFS transfers VPCs that you use with DataSync must have default tenancy. VPCs with dedicated tenancy aren't supported. Performance considerations with Amazon EFS transfers Your Amazon EFS file system's throughput mode can affect transfer duration and file system performance during the transfer. Consider the following: • For best results, we recommend using Elastic throughput mode. If you don't use Elastic throughput mode, your transfer might take longer. • If you use Bursting throughput mode, the performance of your file system's applications might be affected because DataSync consumes file system burst credits. • How you configure DataSync to verify your transferred data can affect file system performance and data access costs. For more information, see Amazon EFS performance in the Amazon Elastic File System User Guide and the Amazon EFS Pricing page. Creating your Amazon EFS transfer location To create the transfer location, you need an existing Amazon EFS file system. If you don't have one, see Getting started with Amazon EFS in the Amazon Elastic File System User Guide. Using the DataSync console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. Configuring transfers with Amazon EFS 131 AWS DataSync User Guide 2. 3. 4. 5. In the left navigation pane, expand Data transfer, then choose Locations and Create location. For Location type, choose Amazon EFS file system. You configure this location as a source or destination later. For File system, choose the Amazon EFS file system that you want to use as a location. For Mount path, enter a mount path for your Amazon EFS file system. This specifies where DataSync reads or writes data (depending on if this is a source or destination location) on your file system. By default, DataSync uses the root directory (or access point if you provide one for the EFS access point setting). You can also specify subdirectories using forward slashes (for example, / path/to/directory). 6. For Subnet choose a subnet where you want DataSync to create the network interfaces for managing your data transfer traffic. The subnet must be located: • In the same VPC as your file system. • In the same Availability Zone as at least one file system mount target. Note You don't need to specify a subnet that includes a file system mount target. 7. For Security groups, choose the security group associated with your Amazon EFS file system's mount target. You can choose more than one security group. Note The security groups that you specify must allow inbound traffic on NFS port 2049. For more information, see Determining the subnet and security groups for your mount target. 8. For In-transit encryption, choose whether you want DataSync to use Transport Layer Security (TLS) encryption when it transfers data to or from your file system. Configuring transfers with Amazon EFS 132 AWS DataSync Note User Guide You must enable this setting to configure an access point, IAM role, or both with your Amazon EFS location. 9. (Optional) For EFS access point, choose an access point that DataSync can use to mount your file system. For more information, see Accessing restricted file systems. 10. (Optional) For IAM role, specify a role that allows DataSync to access your file system. For information on creating this role, see Creating a DataSync IAM role for file system access. 11. (Optional) Select Add tag to tag your file system. A tag is a key-value pair that helps you manage, filter, and search for your locations. 12. Choose Create location. Using the AWS CLI 1. Copy the following create-location-efs command: aws datasync create-location-efs \ --efs-filesystem-arn 'arn:aws:elasticfilesystem:region:account-id:file- system/file-system-id' \ --subdirectory /path/to/your/subdirectory \ --ec2-config SecurityGroupArns='arn:aws:ec2:region:account-id:security- group/security-group-id',SubnetArn='arn:aws:ec2:region:account-id:subnet/subnet-id' \ --in-transit-encryption TLS1_2 \ --access-point-arn 'arn:aws:elasticfilesystem:region:account-id:access- point/access-point-id' \ --file-system-access-role-arn 'arn:aws:iam::account-id:role/datasync-efs- access-role 2. For --efs-filesystem-arn, specify the Amazon Resource Name (ARN) of the Amazon EFS file system that you're transferring to or from. 3. For --subdirectory, specify a mount path for your file system. Configuring transfers with Amazon EFS 133 AWS DataSync User Guide This is where DataSync reads or writes data (depending on if this is a source or destination location) on your file system. By default, DataSync uses the root directory (or access point if you provide one with -- access-point-arn). You can also specify subdirectories using forward slashes (for example, /path/to/directory). 4. For --ec2-config, do the following: • For SecurityGroupArns, specify the ARN of the
|
sync-dg-045
|
sync-dg.pdf
| 45 |
Resource Name (ARN) of the Amazon EFS file system that you're transferring to or from. 3. For --subdirectory, specify a mount path for your file system. Configuring transfers with Amazon EFS 133 AWS DataSync User Guide This is where DataSync reads or writes data (depending on if this is a source or destination location) on your file system. By default, DataSync uses the root directory (or access point if you provide one with -- access-point-arn). You can also specify subdirectories using forward slashes (for example, /path/to/directory). 4. For --ec2-config, do the following: • For SecurityGroupArns, specify the ARN of the security group associated with your file system's mount target. You can specify more than one security group. Note The security groups that you specify must allow inbound traffic on NFS port 2049. For more information, see Determining the subnet and security groups for your mount target. • For SubnetArn, specify the ARN of the subnet where you want DataSync to create the network interfaces for managing your data transfer traffic. The subnet must be located: • In the same VPC as your file system. • In the same Availability Zone as at least one file system mount target. Note You don't need to specify a subnet that includes a file system mount target. 5. For --in-transit-encryption, specify whether you want DataSync to use Transport Layer Security (TLS) encryption when it transfers data to or from your file system. Note You must set this to TLS1_2 to configure an access point, IAM role, or both with your Amazon EFS location. Configuring transfers with Amazon EFS 134 AWS DataSync User Guide 6. 7. (Optional) For --access-point-arn, specify the ARN of an access point that DataSync can use to mount your file system. For more information, see Accessing restricted file systems. (Optional) For --file-system-access-role-arn, specify the ARN of an IAM role that allows DataSync to access your file system. For information on creating this role, see Creating a DataSync IAM role for file system access. 8. Run the create-location-efs command. If the command is successful, you get a response that shows you the ARN of the location that you created. For example: { "LocationArn": "arn:aws:datasync:us-east-1:111222333444:location/ loc-0b3017fc4ba4a2d8d" } Configuring transfers with FSx for Windows File Server To transfer data to or from your Amazon FSx for Windows File Server file system, you must create an AWS DataSync transfer location. DataSync can use this location as a source or destination for transferring data. Providing DataSync access to FSx for Windows File Server file systems DataSync connects to your FSx for Windows File Server file system with the Server Message Block (SMB) protocol and mounts it from your virtual private cloud (VPC) using network interfaces. Note VPCs that you use with DataSync must have default tenancy. VPCs with dedicated tenancy aren't supported. Topics • Required permissions • Required authentication protocols Configuring transfers with FSx for Windows File Server 135 AWS DataSync • DFS Namespaces Required permissions User Guide You must provide DataSync a user with the necessary rights to mount and access your FSx for Windows File Server files, folders, and file metadata. We recommend that this user belong to a Microsoft Active Directory group for administering your file system. The specifics of this group depends on your Active Directory setup: • If you're using AWS Directory Service for Microsoft Active Directory with FSx for Windows File Server, the user must be a member of the AWS Delegated FSx Administrators group. • If you're using self-managed Active Directory with FSx for Windows File Server, the user must be a member of one of two groups: • The Domain Admins group, which is the default delegated administrators group. • A custom delegated administrators group with user rights that allow DataSync to copy object ownership permissions and Windows access control lists (ACLs). Important You can't change the delegated administrators group after the file system has been deployed. You must either redeploy the file system or restore it from a backup to use the custom delegated administrator group with the following user rights that DataSync needs to copy metadata. User right Description Restore files and directories (SE_RESTOR E_NAME ) Allows DataSync to copy object ownership , permissions, file metadata, and NTFS discretionary access lists (DACLs). This user right is usually granted to members of the Domain Admins and Backup Operators groups (both of which are default Active Directory groups). Configuring transfers with FSx for Windows File Server 136 AWS DataSync User right Description User Guide Manage auditing and security log (SE_SECURITY_NAME ) Allows DataSync to copy NTFS system access control lists (SACLs). This user right is usually granted to members of the Domain Admins group. • If you want to copy Windows ACLs and are transferring between an SMB file server and FSx for Windows File Server file system
|
sync-dg-046
|
sync-dg.pdf
| 46 |
and NTFS discretionary access lists (DACLs). This user right is usually granted to members of the Domain Admins and Backup Operators groups (both of which are default Active Directory groups). Configuring transfers with FSx for Windows File Server 136 AWS DataSync User right Description User Guide Manage auditing and security log (SE_SECURITY_NAME ) Allows DataSync to copy NTFS system access control lists (SACLs). This user right is usually granted to members of the Domain Admins group. • If you want to copy Windows ACLs and are transferring between an SMB file server and FSx for Windows File Server file system or between FSx for Windows File Server file systems, the users that you provide DataSync must belong to the same Active Directory domain or have an Active Directory trust relationship between their domains. Warning Your FSx for Windows File Server file system's SYSTEM user must have Full control permissions on all folders in your file system. Do not change the NTFS ACL permissions for this user on your folders. If you do, DataSync can change your file system's permissions in a way that makes your file share inaccessible and prevents file system backups from being usable. For more information on file- and folder-level access, see the Amazon FSx for Windows File Server User Guide. Required authentication protocols Your FSx for Windows File Server must use NTLM authentication for DataSync to access it. DataSync can't access a file server that uses Kerberos authentication. DFS Namespaces DataSync doesn't support Microsoft Distributed File System (DFS) Namespaces. We recommend specifying an underlying file server or share instead when creating your DataSync location. For more information, see Grouping multiple file systems with DFS Namespaces in the Amazon FSx for Windows File Server User Guide. Configuring transfers with FSx for Windows File Server 137 AWS DataSync User Guide Creating your FSx for Windows File Server transfer location Before you begin, make sure that you have an existing FSx for Windows File Server in your AWS Region. For more information, see Getting started with Amazon FSx in the Amazon FSx for Windows File Server User Guide. Using the DataSync console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. 3. 4. 5. In the left navigation pane, expand Data transfer, then choose Locations and Create location. For Location type, choose Amazon FSx. For FSx file system, choose the FSx for Windows File Server file system that you want to use as a location. For Share name, enter a mount path for your FSx for Windows File Server using forward slashes. This specifies the path where DataSync reads or writes data (depending on if this is a source or destination location). You can also include subdirectories (for example, /path/to/directory). 6. For Security groups, choose up to five Amazon EC2 security groups that provide access to your file system's preferred subnet. The security groups that you choose must be able to communicate with your file system's security groups. For information about configuring security groups for file system access, see the Amazon FSx for Windows File Server User Guide. Note If you choose a security group that doesn't allow connections from within itself, do one of the following: • Configure the security group to allow it to communicate within itself. • Choose a different security group that can communicate with the mount target's security group. 7. For User, enter the name of a user that can access your FSx for Windows File Server. Configuring transfers with FSx for Windows File Server 138 AWS DataSync User Guide 8. 9. For more information, see Required permissions. For Password, enter password of the user name. (Optional) For Domain, enter the name of the Windows domain that your FSx for Windows File Server file system belongs to. If you have multiple Active Directory domains in your environment, configuring this setting makes sure that DataSync connects to the right file system. 10. (Optional) Enter values for the Key and Value fields to tag the FSx for Windows File Server. Tags help you manage, filter, and search for your AWS resources. We recommend creating at least a name tag for your location. 11. Choose Create location. Using the AWS CLI To create an FSx for Windows File Server location by using the AWS CLI • Use the following command to create an Amazon FSx location. aws datasync create-location-fsx-windows \ --fsx-filesystem-arn arn:aws:fsx:region:account-id:file-system/filesystem-id \ --security-group-arns arn:aws:ec2:region:account-id:security-group/group-id \ --user smb-user --password password In the create-location-fsx-windows command, do the following: • fsx-filesystem-arn – Specify the Amazon Resource Name (ARN) of the file system that you want to transfer to or from. • security-group-arns – Specify the ARNs of up to five Amazon EC2 security groups that provide access to your file system's preferred subnet. The security groups that you specify must be able to communicate with your file system's security
|
sync-dg-047
|
sync-dg.pdf
| 47 |
File Server location by using the AWS CLI • Use the following command to create an Amazon FSx location. aws datasync create-location-fsx-windows \ --fsx-filesystem-arn arn:aws:fsx:region:account-id:file-system/filesystem-id \ --security-group-arns arn:aws:ec2:region:account-id:security-group/group-id \ --user smb-user --password password In the create-location-fsx-windows command, do the following: • fsx-filesystem-arn – Specify the Amazon Resource Name (ARN) of the file system that you want to transfer to or from. • security-group-arns – Specify the ARNs of up to five Amazon EC2 security groups that provide access to your file system's preferred subnet. The security groups that you specify must be able to communicate with your file system's security groups. For information about configuring security groups for file system access, see the Amazon FSx for Windows File Server User Guide. Configuring transfers with FSx for Windows File Server 139 AWS DataSync Note User Guide If you choose a security group that doesn't allow connections from within itself, do one of the following: • Configure the security group to allow it to communicate within itself. • Choose a different security group that can communicate with the mount target's security group. • The AWS Region – The Region that you specify is the one where your target Amazon FSx file system is located. The preceding command returns a location ARN similar to the one shown following. { "LocationArn": "arn:aws:datasync:us-west-2:111222333444:location/ loc-07db7abfc326c50fb" } Configuring DataSync transfers with FSx for Lustre To transfer data to or from your Amazon FSx for Lustre file system, you must create an AWS DataSync transfer location. DataSync can use this location as a source or destination for transferring data. Providing DataSync access to FSx for Lustre file systems DataSync accesses your FSx for Lustre file system using the Lustre client. DataSync requires access to all data on your FSx for Lustre file system. To have this level of access, DataSync mounts your file system as the root user using a user ID (UID) and group ID (GID) of 0. DataSync mounts your file system from your virtual private cloud (VPC) using network interfaces. DataSync fully manages the creation, the use, and the deletion of these network interfaces on your behalf. Configuring transfers with FSx for Lustre 140 AWS DataSync Note User Guide VPCs that you use with DataSync must have default tenancy. VPCs with dedicated tenancy aren't supported. Creating your FSx for Lustre transfer location To create the transfer location, you need an existing FSx for Lustre file system. For more information, see Getting started with Amazon FSx for Lustre in the Amazon FSx for Lustre User Guide. Using the DataSync console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. 3. 4. 5. In the left navigation pane, expand Data transfer, then choose Locations and Create location. For Location type, choose Amazon FSx. You configure this location as a source or destination later. For FSx file system, choose the FSx for Lustre file system that you want to use as a location. For Mount path, enter the mount path for your FSx for Lustre file system. The path can include a subdirectory. When the location is used as a source, DataSync reads data from the mount path. When the location is used as a destination, DataSync writes all data to the mount path. If a subdirectory isn't provided, DataSync uses the root directory (/). 6. For Security groups, choose up to five security groups that provide access to your FSx for Lustre file system. The security groups must be able to access the file system's ports. The file system must also allow access from the security groups. For more information about security groups, see File System Access Control with Amazon VPC in the Amazon FSx for Lustre User Guide. 7. (Optional) Enter values for the Key and Value fields to tag the FSx for Lustre file system. Tags help you manage, filter, and search for your AWS resources. We recommend creating at least a name tag for your location. 8. Choose Create location. Configuring transfers with FSx for Lustre 141 AWS DataSync Using the AWS CLI User Guide To create an FSx for Lustre location by using the AWS CLI • Use the following command to create an FSx for Lustre location. aws datasync create-location-fsx-lustre \ --fsx-filesystem-arn arn:aws:fsx:region:account-id:file-system:filesystem-id \ --security-group-arns arn:aws:ec2:region:account-id:security-group/group-id The following parameters are required in the create-location-fsx-lustre command. • fsx-filesystem-arn – The fully qualified Amazon Resource Name (ARN) of the file system that you want to read from or write to. • security-group-arns – The ARN of an Amazon EC2 security group to apply to the network interfaces of the file system's preferred subnet. The preceding command returns a location ARN similar to the following. { "LocationArn": "arn:aws:datasync:us-west-2:111222333444:location/ loc-07sb7abfc326c50fb" } Configuring DataSync transfers with Amazon FSx for OpenZFS To transfer data to or from your Amazon FSx for OpenZFS file system, you must create an
|
sync-dg-048
|
sync-dg.pdf
| 48 |
arn:aws:fsx:region:account-id:file-system:filesystem-id \ --security-group-arns arn:aws:ec2:region:account-id:security-group/group-id The following parameters are required in the create-location-fsx-lustre command. • fsx-filesystem-arn – The fully qualified Amazon Resource Name (ARN) of the file system that you want to read from or write to. • security-group-arns – The ARN of an Amazon EC2 security group to apply to the network interfaces of the file system's preferred subnet. The preceding command returns a location ARN similar to the following. { "LocationArn": "arn:aws:datasync:us-west-2:111222333444:location/ loc-07sb7abfc326c50fb" } Configuring DataSync transfers with Amazon FSx for OpenZFS To transfer data to or from your Amazon FSx for OpenZFS file system, you must create an AWS DataSync transfer location. DataSync can use this location as a source or destination for transferring data. Providing DataSync access to FSx for OpenZFS file systems DataSync mounts your FSx for OpenZFS file system from your virtual private cloud (VPC) using network interfaces. DataSync fully manages the creation, the use, and the deletion of these network interfaces on your behalf. Configuring transfers with FSx for OpenZFS 142 AWS DataSync Note User Guide VPCs that you use with DataSync must have default tenancy. VPCs with dedicated tenancy aren't supported. Configuring FSx for OpenZFS file system authorization DataSync accesses your FSx for OpenZFS file system as an NFS client, mounting the file system as a root user with a user ID (UID) and group ID (GID) of 0. For DataSync to copy all of your file metadata, you must configure the NFS export settings on your file system volumes using no_root_squash. However, you can limit this level of access to only a specific DataSync task. For more information, see Volume properties in the Amazon FSx for OpenZFS User Guide. Configuring NFS exports specific to DataSync (recommended) You can configure an NFS export specific to each volume that’s accessed only by your DataSync task. Do this for the most recent ancestor volume of the mount path that you specify when creating your FSx for OpenZFS location. To configure an NFS export specific to DataSync 1. Create your DataSync task. This creates the task’s network interfaces that you specify in your NFS export settings. 2. Locate the private IP addresses of the task's network interfaces by using the Amazon EC2 console or AWS CLI. 3. For your FSx for OpenZFS file system volume, configure the following NFS export settings for each of the task’s network interfaces: • Client address: Enter the network interface’s private IP address (for example, 10.24.34.0). • NFS options: Enter rw,no_root_squash. Configuring NFS exports for all clients You can specify an NFS export that allows root access to all clients. Configuring transfers with FSx for OpenZFS 143 AWS DataSync User Guide To configure an NFS export for all clients • For your FSx for OpenZFS file system volume, configure the following NFS export settings: • Client address: Enter *. • NFS options: Enter rw,no_root_squash. Creating your FSx for OpenZFS transfer location To create the location, you need an existing FSx for OpenZFS file system. If you don't have one, see Getting started with Amazon FSx for OpenZFS in the Amazon FSx for OpenZFS User Guide. Using the DataSync console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. 3. 4. 5. In the left navigation pane, choose Locations, and then choose Create location. For Location type, choose Amazon FSx. You configure this location as a source or destination later. For FSx file system, choose the FSx for OpenZFS file system that you want to use as a location. For Mount path, enter the mount path for your FSx for OpenZFS file system. The path must begin with /fsx and can be any existing directory path in the file system. When the location is used as a source, DataSync reads data from the mount path. When the location is used as a destination, DataSync writes all data to the mount path. If a subdirectory isn't provided, DataSync uses the root volume directory (for example, /fsx). 6. For Security groups, choose up to five security groups that provide network access to your FSx for OpenZFS file system. The security groups must provide access to the network ports that are used by the FSx for OpenZFS file system. The file system must allow network access from the security groups. For more information about security groups, see File system access control with Amazon VPC in the Amazon FSx for OpenZFS User Guide. 7. (Optional) Expand Additional settings and for NFS version choose the NFS version that DataSync uses to access your file system. By default, DataSync uses NFS version 4.1. Configuring transfers with FSx for OpenZFS 144 AWS DataSync User Guide 8. (Optional) Enter values for the Key and Value fields to tag the FSx for OpenZFS file system. Tags help you manage, filter, and search for your location. We recommend creating at least a
|
sync-dg-049
|
sync-dg.pdf
| 49 |
access from the security groups. For more information about security groups, see File system access control with Amazon VPC in the Amazon FSx for OpenZFS User Guide. 7. (Optional) Expand Additional settings and for NFS version choose the NFS version that DataSync uses to access your file system. By default, DataSync uses NFS version 4.1. Configuring transfers with FSx for OpenZFS 144 AWS DataSync User Guide 8. (Optional) Enter values for the Key and Value fields to tag the FSx for OpenZFS file system. Tags help you manage, filter, and search for your location. We recommend creating at least a name tag for your location. 9. Choose Create location. Using the AWS CLI To create an FSx for OpenZFS location by using the AWS CLI 1. Copy the following create-location-fsx-openzfs command: aws datasync create-location-fsx-openzfs \ --fsx-filesystem-arn arn:aws:fsx:region:account-id:file-system/filesystem-id \ --security-group-arns arn:aws:ec2:region:account-id:security-group/group-id \ --protocol NFS={} 2. Specify the following required options in the command: • For fsx-filesystem-arn, specify the location file system's fully qualified Amazon Resource Name (ARN). This includes the AWS Region where your file system resides, your AWS account, and the file system ID. • For security-group-arns, specify the ARN of the Amazon EC2 security group that provides access to the network interfaces of your FSx for OpenZFS file system's preferred subnet. This includes the AWS Region where your Amazon EC2 instance resides, your AWS account, and the security group ID. For more information about security groups, see File System Access Control with Amazon VPC in the Amazon FSx for OpenZFS User Guide. • For protocol, specify the protocol that DataSync uses to access your file system. (DataSync currently supports only NFS.) 3. Run the command. You get a response showing the location that you just created. { "LocationArn": "arn:aws:datasync:us-west-2:123456789012:location/loc- abcdef01234567890" } Configuring transfers with FSx for OpenZFS 145 AWS DataSync User Guide Configuring transfers with Amazon FSx for NetApp ONTAP To transfer data to or from your Amazon FSx for NetApp ONTAP file system, you must create an AWS DataSync transfer location. DataSync can use this location as a source or destination for transferring data. Providing DataSync access to FSx for ONTAP file systems To access an FSx for ONTAP file system, DataSync mounts a storage virtual machine (SVM) on your file system using network interfaces in your virtual private cloud (VPC). DataSync creates these network interfaces in your file system’s preferred subnet only when you create a task that includes your FSx for ONTAP location. Note VPCs that you use with DataSync must have default tenancy. VPCs with dedicated tenancy aren't supported. DataSync can connect to an FSx for ONTAP file system's SVM and copy data by using the Network File System (NFS) or Server Message Block (SMB) protocol. Topics • Using the NFS protocol • Using the SMB protocol • Unsupported protocols • Choosing the right protocol • Accessing SnapLock volumes Using the NFS protocol With the NFS protocol, DataSync uses the AUTH_SYS security mechanism with a user ID (UID) and group ID (GID) of 0 to authenticate with your SVM. Note DataSync currently only supports NFS version 3 with FSx for ONTAP locations. Configuring transfers with FSx for ONTAP 146 AWS DataSync Using the SMB protocol User Guide With the SMB protocol, DataSync uses credentials that you provide to authenticate with your SVM. Supported SMB versions By default, DataSync automatically chooses a version of the SMB protocol based on negotiation with your SMB file server. You also can configure DataSync to use a specific version, but we recommend doing this only if DataSync has trouble negotiating with the SMB file server automatically. For security reasons, we recommend using SMB version 3.0.2 or later. See the following table for a list of options in the DataSync console and API for configuring an SMB version with your FSx for ONTAP location: Console option API option Description Automatic AUTOMATIC DataSync and the SMB file server negotiate the highest version of SMB that they mutually support between 2.1 and 3.1.1. This is the default and recommended option. If you instead choose a specific version that your file server doesn't support, you may get an Operation Not Supported error. SMB 3.0.2 SMB3 Restricts the protocol negotiation to only SMB version 3.0.2. SMB 2.1 SMB2 Restricts the protocol negotiation to only SMB version 2.1. SMB 2.0 SMB2_0 Restricts the protocol negotiation to only SMB version 2.0. Required permissions You must provide DataSync a local user in your SVM or a domain user in your Microsoft Active Directory with the necessary rights to mount and access your files, folders, and file metadata. If you provide a user in your Active Directory, note the following: Configuring transfers with FSx for ONTAP 147 AWS DataSync User Guide • If you're using AWS Directory Service for Microsoft Active Directory, the user must be a member of
|
sync-dg-050
|
sync-dg.pdf
| 50 |
SMB 2.1 SMB2 Restricts the protocol negotiation to only SMB version 2.1. SMB 2.0 SMB2_0 Restricts the protocol negotiation to only SMB version 2.0. Required permissions You must provide DataSync a local user in your SVM or a domain user in your Microsoft Active Directory with the necessary rights to mount and access your files, folders, and file metadata. If you provide a user in your Active Directory, note the following: Configuring transfers with FSx for ONTAP 147 AWS DataSync User Guide • If you're using AWS Directory Service for Microsoft Active Directory, the user must be a member of the AWS Delegated FSx Administrators group. • If you're using a self-managed Active Directory, the user must be a member of one of two groups: • The Domain Admins group, which is the default delegated administrators group. • A custom delegated administrators group with user rights that allow DataSync to copy object ownership permissions and Windows access control lists (ACLs). Important You can't change the delegated administrators group after the file system has been deployed. You must either redeploy the file system or restore it from a backup to use the custom delegated administrator group with the following user rights that DataSync needs to copy metadata. User right Description Act as part of the operating system (SE_TCB_NAME ) Allows DataSync to copy object ownership , permissions, file metadata, and NTFS discretionary access lists (DACLs). This user right is usually granted to members of the Domain Admins and Backup Operators groups (both of which are default Active Directory groups). Manage auditing and security log (SE_SECURITY_NAME ) Allows DataSync to copy NTFS system access control lists (SACLs). This user right is usually granted to members of the Domain Admins group. • If you want to copy Windows ACLs and are transferring between FSx for ONTAP file systems using SMB (or other types of file systems using SMB), the users that you provide DataSync must belong to the same Active Directory domain or have an Active Directory trust relationship between their domains. Configuring transfers with FSx for ONTAP 148 AWS DataSync User Guide Required authentication protocols For DataSync to access your SMB share, your FSx for ONTAP file system must use NTLM authentication. DataSync can't access FSx for ONTAP file systems that use Kerberos authentication. DFS Namespaces DataSync doesn't support Microsoft Distributed File System (DFS) Namespaces. We recommend specifying an underlying file server or share instead when creating your DataSync location. Unsupported protocols DataSync can't access FSx for ONTAP file systems using the iSCSI (Internet Small Computer Systems Interface) protocol. Choosing the right protocol To preserve file metadata in FSx for ONTAP migrations, configure your DataSync source and destination locations to use the same protocol. Between the supported protocols, SMB preserves metadata with the highest fidelity (see Understanding how DataSync handles file and object metadata for details). When migrating from a Unix (Linux) server or network-attached storage (NAS) share that serves users through NFS, do the following: 1. Create an NFS location for the Unix (Linux) server or NAS share. (This will be your source location.) 2. Configure the FSx for ONTAP volume you’re transferring data to with the Unix security style. 3. Create a location for your FSx for ONTAP file system that’s configured for NFS. (This will be your destination location.) When migrating from a Windows server or NAS share that serves users through SMB, do the following: 1. Create an SMB location for the Windows server or NAS share. (This will be your source location.) 2. Configure the FSx for ONTAP volume you’re transferring data to with the NTFS security style. 3. Create a location for your FSx for ONTAP file system that’s configured for SMB. (This will be your destination location.) Configuring transfers with FSx for ONTAP 149 AWS DataSync User Guide If your FSx for ONTAP environment uses multiple protocols, we recommend working with an AWS storage specialist. To learn about best practices for multiprotocol access, see Enabling multiprotocol workloads with Amazon FSx for NetApp ONTAP. Accessing SnapLock volumes If you're transferring data to a SnapLock volume on an FSx for ONTAP file system, make sure the SnapLock settings Autocommit and Volume append mode are disabled on the volume during your transfer. You can re-enable these settings when you're done transferring data. Creating your FSx for ONTAP transfer location To create the location, you need an existing FSx for ONTAP file system. If you don't have one, see Getting started with Amazon FSx for NetApp ONTAP in the Amazon FSx for NetApp ONTAP User Guide. Using the DataSync console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. 3. 4. 5. In the left navigation pane, expand Data transfer, then choose Locations and Create location. For Location type, choose Amazon FSx. You configure this location as a source
|
sync-dg-051
|
sync-dg.pdf
| 51 |
volume during your transfer. You can re-enable these settings when you're done transferring data. Creating your FSx for ONTAP transfer location To create the location, you need an existing FSx for ONTAP file system. If you don't have one, see Getting started with Amazon FSx for NetApp ONTAP in the Amazon FSx for NetApp ONTAP User Guide. Using the DataSync console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. 3. 4. 5. In the left navigation pane, expand Data transfer, then choose Locations and Create location. For Location type, choose Amazon FSx. You configure this location as a source or destination later. For FSx file system, choose the FSx for ONTAP file system that you want to use as a location. For Storage virtual machine, choose a storage virtual machine (SVM) in your file system where you want to copy data to or from. 6. For Mount path, specify a path to the file share in that SVM where you'll copy your data. You can specify a junction path (also known as a mount point), qtree path (for NFS file shares), or share name (for SMB file shares). For example, your mount path might be /vol1, /vol1/ tree1, or /share1. Tip Don't specify a path in the SVM's root volume. For more information, see Managing FSx for ONTAP storage virtual machines in the Amazon FSx for NetApp ONTAP User Guide. 7. For Security groups, choose up to five Amazon EC2 security groups that provide access to your file system's preferred subnet. Configuring transfers with FSx for ONTAP 150 AWS DataSync User Guide The security groups must allow outbound traffic on the following ports (depending on the protocol you use): • NFS – TCP ports 111, 635, and 2049 • SMB – TCP port 445 Your file system's security groups must also allow inbound traffic on the same ports. 8. For Protocol, choose the data transfer protocol that DataSync uses to access your file system's SVM. For more information, see Choosing the right protocol. NFS DataSync uses NFS version 3. SMB Configure an SMB version, user, password, and Active Directory domain name (if needed) to access the SVM. • (Optional) Expand Additional settings and choose an SMB version for DataSync to use when accessing your SVM. By default, DataSync automatically chooses a version based on negotiation with the SMB file server. For more information, see Using the SMB protocol. • For User, enter a user name that can mount and access the files, folders, and metadata that you want to transfer in the SVM. For more information, see Using the SMB protocol. • For Password, enter the password of the user that you specified that can access the SVM. • (Optional) For Active Directory domain name, enter the fully qualified domain name (FQDN) of the Active Directory that your SVM belongs to. If you have multiple domains in your environment, configuring this setting makes sure that DataSync connects to the right SVM. 9. (Optional) Enter values for the Key and Value fields to tag the FSx for ONTAP file system. Configuring transfers with FSx for ONTAP 151 AWS DataSync User Guide Tags help you manage, filter, and search for your AWS resources. We recommend creating at least a name tag for your location. 10. Choose Create location. Using the AWS CLI To create an FSx for ONTAP location by using the AWS CLI 1. Copy the following create-location-fsx-ontap command: aws datasync create-location-fsx-ontap \ --storage-virtual-machine-arn arn:aws:fsx:region:account-id:storage-virtual- machine/fs-file-system-id \ --security-group-arns arn:aws:ec2:region:account-id:security-group/group-id \ --protocol data-transfer-protocol={} 2. Specify the following required options in the command: • For storage-virtual-machine-arn, specify the fully qualified Amazon Resource Name (ARN) of a storage virtual machine (SVM) in your file system where you want to copy data to or from. This ARN includes the AWS Region where your file system resides, your AWS account, and the file system and SVM IDs. • For security-group-arns, specify the ARNs of the Amazon EC2 security groups that provide access to the network interfaces of your file system's preferred subnet. This includes the AWS Region where your Amazon EC2 instance resides, your AWS account, and your security group IDs. You can specify up to five security group ARNs. For more information about security groups, see File System Access Control with Amazon VPC in the Amazon FSx for NetApp ONTAP User Guide. • For protocol, configure the protocol that DataSync uses to access your file system's SVM. • For NFS, you can use the default configuration: --protocol NFS={} • For SMB, you must specify a user name and password that can access the SVM: Configuring transfers with FSx for ONTAP 152 AWS DataSync User Guide --protocol SMB={User=smb-user,Password=smb-password} 3. Run the command. You get a response that shows the location that you just created. { "LocationArn": "arn:aws:datasync:us-west-2:123456789012:location/loc- abcdef01234567890" } Transferring to or from other cloud
|
sync-dg-052
|
sync-dg.pdf
| 52 |
groups, see File System Access Control with Amazon VPC in the Amazon FSx for NetApp ONTAP User Guide. • For protocol, configure the protocol that DataSync uses to access your file system's SVM. • For NFS, you can use the default configuration: --protocol NFS={} • For SMB, you must specify a user name and password that can access the SVM: Configuring transfers with FSx for ONTAP 152 AWS DataSync User Guide --protocol SMB={User=smb-user,Password=smb-password} 3. Run the command. You get a response that shows the location that you just created. { "LocationArn": "arn:aws:datasync:us-west-2:123456789012:location/loc- abcdef01234567890" } Transferring to or from other cloud storage with AWS DataSync With AWS DataSync, you can transfer data between some other cloud providers and AWS storage services. For more information, see Where can I transfer my data with DataSync? Topics • Configuring AWS DataSync transfers with Google Cloud Storage • Configuring transfers with Microsoft Azure Blob Storage • Configuring AWS DataSync transfers with Microsoft Azure Files SMB shares • Configuring transfers with other cloud object storage Configuring AWS DataSync transfers with Google Cloud Storage The following tutorial shows how you can use AWS DataSync to migrate objects from a Google Cloud Storage bucket to an Amazon S3 bucket. Overview Because DataSync integrates with the Google Cloud Storage XML API, you can copy objects into Amazon S3 without writing code. How this works depends on where you deploy the DataSync agent that facilitates the transfer. Agent in Google Cloud 1. You deploy a DataSync agent in your Google Cloud environment. Transferring to or from other cloud storage 153 AWS DataSync User Guide 2. The agent reads your Google Cloud Storage bucket by using a Hash-based Message Authentication Code (HMAC) key. 3. The objects from your Google Cloud Storage bucket transfer securely through TLS 1.3 into the AWS Cloud by using a public endpoint. 4. The DataSync service writes the data to your S3 bucket. The following diagram illustrates the transfer. Agent in your VPC 1. You deploy a DataSync agent in a virtual private cloud (VPC) in your AWS environment. 2. The agent reads your Google Cloud Storage bucket by using a Hash-based Message Authentication Code (HMAC) key. 3. The objects from your Google Cloud Storage bucket transfer securely through TLS 1.3 into the AWS Cloud by using a private VPC endpoint. 4. The DataSync service writes the data to your S3 bucket. The following diagram illustrates the transfer. Configuring transfers with Google Cloud Storage 154 AWS DataSync User Guide Costs The fees associated with this migration include: • Running a Google Compute Engine virtual machine (VM) instance (if you deploy your DataSync agent in Google Cloud) • Running an Amazon EC2 instance (if you deploy your DataSync agent in a VPC within AWS) • Transferring the data by using DataSync, including request charges related to Google Cloud Storage and Amazon S3 (if S3 is one of your transfer locations) • Transferring data out of Google Cloud Storage • Storing data in Amazon S3 Prerequisites Before you begin, do the following if you haven’t already: • Create a Google Cloud Storage bucket with the objects that you want to transfer to AWS. • Sign up for an AWS account. • Create an Amazon S3 bucket for storing your objects after they're in AWS. Configuring transfers with Google Cloud Storage 155 AWS DataSync User Guide Creating an HMAC key for your Google Cloud Storage bucket DataSync uses an HMAC key that's associated with your Google service account to authenticate with and read the bucket that you’re transferring data from. (For detailed instructions on how to create HMAC keys, see the Google Cloud Storage documentation.) To create an HMAC key 1. Create an HMAC key for your Google service account. 2. Make sure that your Google service account has at least Storage Object Viewer permissions. 3. Save your HMAC key's access ID and secret in a secure location. You'll need these items later to configure your DataSync source location. Step 2: Configure your network The network requirements for this migration depend on how you want to deploy your DataSync agent. For a DataSync agent in Google Cloud If you want to host your DataSync agent in Google Cloud, configure your network to allow DataSync transfers through a public endpoint. For a DataSync agent in your VPC If you want to host your agent in AWS, you need a VPC with an interface endpoint. DataSync uses the VPC endpoint to facilitate the transfer. To configure your network for a VPC endpoint 1. If you don't have one, create a VPC in the same AWS Region as your S3 bucket. 2. Create a private subnet for your VPC. 3. Create a VPC service endpoint for DataSync. 4. Configure your network to allow DataSync transfers through a VPC service endpoint. To do this,
|
sync-dg-053
|
sync-dg.pdf
| 53 |
network to allow DataSync transfers through a public endpoint. For a DataSync agent in your VPC If you want to host your agent in AWS, you need a VPC with an interface endpoint. DataSync uses the VPC endpoint to facilitate the transfer. To configure your network for a VPC endpoint 1. If you don't have one, create a VPC in the same AWS Region as your S3 bucket. 2. Create a private subnet for your VPC. 3. Create a VPC service endpoint for DataSync. 4. Configure your network to allow DataSync transfers through a VPC service endpoint. To do this, modify the security group that's associated with your VPC service endpoint. Configuring transfers with Google Cloud Storage 156 AWS DataSync User Guide Step 3: Create a DataSync agent You need a DataSync agent that can access and read your Google Cloud Storage bucket. For Google Cloud In this scenario, the DataSync agent runs in your Google Cloud environment. Before you begin: Install the Google Cloud CLI. To create the agent for Google Cloud 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. 3. In the left navigation pane, choose Agents, then choose Create agent. For Hypervisor, choose VMware ESXi, then choose Download the image to download a .zip file that contains the agent. 4. Open a terminal. Unzip the image by running the following command: unzip AWS-DataSync-Agent-VMWare.zip 5. Extract the contents of the agent's .ova file beginning with aws-datasync by running the following command: tar -xvf aws-datasync-2.0.1655755445.1-x86_64.xfs.gpt.ova 6. Import the agent's .vmdk file into Google Cloud by running the following Google Cloud CLI command: gcloud compute images import aws-datasync-2-test \ --source-file INCOMPLETE-aws-datasync-2.0.1655755445.1-x86_64.xfs.gpt-disk1.vmdk \ --os centos-7 Note Importing the .vmdk file might take up to two hours. 7. Create and start a VM instance for the agent image that you just imported. Configuring transfers with Google Cloud Storage 157 AWS DataSync User Guide The instance needs the following configurations for your agent. (For detailed instructions on how to create an instance, see the Google Cloud Compute Engine documentation.) • For the machine type, choose one of the following: • e2-standard-8 – For DataSync task executions working with up to 20 million objects. • e2-standard-16 – For DataSync task executions working with more than 20 million objects. • For the boot disk settings, go to the custom images section. Then choose the DataSync agent image that you just imported. • For the service account setting, choose your Google service account (the same account that you used in Step 1). • For the firewall setting, choose the option to allow HTTP (port 80) traffic. To activate your DataSync agent, port 80 must be open on the agent. The port doesn't need to be publicly accessible. Once activated, DataSync closes the port. 8. After the VM instance is running, take note of its public IP address. You'll need this IP address to activate the agent. 9. Go back to the DataSync console. On the Create agent screen where you downloaded the agent image, do the following to activate your agent: • For Endpoint type, choose the public service endpoints option (for example, Public service endpoints in US East Ohio). • For Activation key, choose Automatically get the activation key from your agent. • For Agent address, enter the public IP address of the agent VM instance that you just created. • Choose Get key. 10. Give your agent a name, and then choose Create agent. Your agent is online and ready to transfer data. For your VPC In this scenario, the agent runs as an Amazon EC2 instance in a VPC that's associated with your AWS account. Configuring transfers with Google Cloud Storage 158 AWS DataSync User Guide Before you begin: Set up the AWS Command Line Interface (AWS CLI). To create the agent for your VPC 1. Open a terminal. Make sure to configure your AWS CLI profile to use the account that's associated with your S3 bucket. 2. Copy the following command. Replace vpc-region with the AWS Region where your VPC resides (for example, us-east-1). aws ssm get-parameter --name /aws/service/datasync/ami --region vpc-region 3. Run the command. In the output, take note of the "Value" property. This value is the DataSync Amazon Machine Image (AMI) ID of the Region that you specified. For example, an AMI ID could look like ami-1234567890abcdef0. 4. Copy the following URL. Again, replace vpc-region with the AWS Region where your VPC resides. Then, replace ami-id with the AMI ID that you noted in the previous step. https://console.aws.amazon.com/ec2/v2/home?region=vpc- region#LaunchInstanceWizard:ami=ami-id 5. Paste the URL into a browser. The Amazon EC2 instance launch page in the AWS Management Console displays. For Instance type, choose one of the recommended Amazon EC2 instances for DataSync agents. For Key pair, choose an existing key pair, or create a new one. For Network settings, choose
|
sync-dg-054
|
sync-dg.pdf
| 54 |
Image (AMI) ID of the Region that you specified. For example, an AMI ID could look like ami-1234567890abcdef0. 4. Copy the following URL. Again, replace vpc-region with the AWS Region where your VPC resides. Then, replace ami-id with the AMI ID that you noted in the previous step. https://console.aws.amazon.com/ec2/v2/home?region=vpc- region#LaunchInstanceWizard:ami=ami-id 5. Paste the URL into a browser. The Amazon EC2 instance launch page in the AWS Management Console displays. For Instance type, choose one of the recommended Amazon EC2 instances for DataSync agents. For Key pair, choose an existing key pair, or create a new one. For Network settings, choose the VPC and subnet where you want to deploy the agent. 6. 7. 8. 9. Choose Launch instance. 10. Once the Amazon EC2 instance is running, choose your VPC endpoint. 11. Activate your agent. Step 4: Create a DataSync source location for your Google Cloud Storage bucket To set up a DataSync location for your Google Cloud Storage bucket, you need the access ID and secret for the HMAC key that you created in Step 1. Configuring transfers with Google Cloud Storage 159 AWS DataSync User Guide To create the DataSync source location 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. 3. 4. 5. 6. 7. 8. In the left navigation pane, expand Data transfer, then choose Locations and Create location. For Location type, choose Object storage. For Agents, choose the agent that you created in Step 3. For Server, enter storage.googleapis.com. For Bucket name, enter the name of your Google Cloud Storage bucket. Expand Additional settings. For Server protocol, choose HTTPS. For Server port, choose 443. Scroll down to the Authentication section. Make sure that the Requires credentials check box is selected, and then do the following: • For Access key, enter your HMAC key's access ID. • For Secret key, enter your HMAC key's secret. 9. Choose Create location. Step 5: Create a DataSync destination location for your S3 bucket You need a DataSync location for where you want your data to end up. To create the DataSync destination location 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. In the left navigation pane, expand Data transfer, then choose Locations and Create location. 3. Create a DataSync location for the S3 bucket. If you deployed the DataSync agent in your VPC, this tutorial assumes that the S3 bucket is in the same AWS Region as your VPC and DataSync agent. Step 6: Create and start a DataSync task With your source and destinations locations configured, you can start moving your data into AWS. To create and start the DataSync task 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. Configuring transfers with Google Cloud Storage 160 AWS DataSync User Guide 2. In the left navigation pane, expand Data transfer, then choose Tasks, and then choose Create task. 3. On the Configure source location page, do the following: a. Choose Choose an existing location. b. Choose the source location that you created in Step 4, then choose Next. 4. On the Configure destination location page, do the following: a. Choose Choose an existing location. b. Choose the destination location that you created in Step 5, then choose Next. 5. On the Configure settings page, do the following: a. Under Data transfer configuration, expand Additional settings and clear the Copy object tags check box. Important Because DataSync communicates with Google Cloud Storage by using the Amazon S3 API, there's a limitation that might cause your DataSync task to fail if you try to copy object tags. b. Configure any other task settings that you want, and then choose Next. 6. On the Review page, review your settings, and then choose Create task. 7. On the task's details page, choose Start, and then choose one of the following: • To run the task without modification, choose Start with defaults. • To modify the task before running it, choose Start with overriding options. When your task finishes, you'll see the objects from your Google Cloud Storage bucket in your S3 bucket. Configuring transfers with Microsoft Azure Blob Storage With AWS DataSync, you can transfer data between Microsoft Azure Blob Storage (including Azure Data Lake Storage Gen2 blob storage) and the following AWS storage services: • Amazon S3 Configuring transfers with Microsoft Azure Blob Storage 161 AWS DataSync • Amazon EFS • Amazon FSx for Windows File Server • Amazon FSx for Lustre • Amazon FSx for OpenZFS • Amazon FSx for NetApp ONTAP User Guide To set up this kind of transfer, you create a location for your Azure Blob Storage. You can use this location as a transfer source or destination. Providing DataSync access to your Azure Blob Storage How DataSync accesses your Azure Blob Storage depends on several factors, including whether you're transferring to or from blob storage and what
|
sync-dg-055
|
sync-dg.pdf
| 55 |
storage services: • Amazon S3 Configuring transfers with Microsoft Azure Blob Storage 161 AWS DataSync • Amazon EFS • Amazon FSx for Windows File Server • Amazon FSx for Lustre • Amazon FSx for OpenZFS • Amazon FSx for NetApp ONTAP User Guide To set up this kind of transfer, you create a location for your Azure Blob Storage. You can use this location as a transfer source or destination. Providing DataSync access to your Azure Blob Storage How DataSync accesses your Azure Blob Storage depends on several factors, including whether you're transferring to or from blob storage and what kind of shared access signature (SAS) token you're using. Your objects also must be in an access tier that DataSync can work with. Topics • SAS tokens • Access tiers SAS tokens A SAS token specifies the access permissions for your blob storage. (For more information about SAS, see the Azure Blob Storage documentation.) You can generate SAS tokens to provide different levels of access. DataSync supports tokens with the following access levels: • Account • Container The access permissions that DataSync needs depends on the scope of your token. Not having the correct permissions can cause your transfer to fail. For example, your transfer won't succeed if you're moving objects with tags to Azure Blob Storage but your SAS token doesn't have tag permissions. Topics • SAS token permissions for account-level access Configuring transfers with Microsoft Azure Blob Storage 162 AWS DataSync User Guide • SAS token permissions for container-level access • SAS expiration policies SAS token permissions for account-level access DataSync needs an account-level access token with the following permissions (depending on whether you're transferring to or from Azure Blob Storage). Transfers from blob storage • Allowed services – Blob • Allowed resource types – Container, Object If you don't include these permissions, DataSync can't transfer your object metadata, including object tags. • Allowed permissions – Read, List • Allowed blob index permissions – Read/Write (if you want DataSync to copy object tags) Transfers to blob storage • Allowed services – Blob • Allowed resource types – Container, Object If you don't include these permissions, DataSync can't transfer your object metadata, including object tags. • Allowed permissions – Read, Write, List, Delete (if you want DataSync to remove files that aren't in your transfer source) • Allowed blob index permissions – Read/Write (if you want DataSync to copy object tags) SAS token permissions for container-level access DataSync needs a container-level access token with the following permissions (depending on whether you're transferring to or from Azure Blob Storage). Transfers from blob storage • Read Configuring transfers with Microsoft Azure Blob Storage 163 AWS DataSync • List • Tag (if you want DataSync to copy object tags) User Guide Note You can't add the tag permission when generating a SAS token in the Azure portal. To add the tag permission, instead generate the token by using the Azure Storage Explorer app or generate a SAS token that provides account-level access. Transfers to blob storage • Read • Write • List • Delete (if you want DataSync to remove files that aren't in your transfer source) • Tag (if you want DataSync to copy object tags) Note You can't add the tag permission when generating a SAS token in the Azure portal. To add the tag permission, instead generate the token by using the Azure Storage Explorer app or generate a SAS token that provides account-level access. SAS expiration policies Make sure that your SAS doesn't expire before you expect to finish your transfer. For information about configuring a SAS expiration policy, see the Azure Blob Storage documentation. If the SAS expires during the transfer, DataSync can no longer access your Azure Blob Storage location. (You might see a Failed to open directory error.) If this happens, update your location with a new SAS token and restart your DataSync task. Configuring transfers with Microsoft Azure Blob Storage 164 AWS DataSync Access tiers User Guide When transferring from Azure Blob Storage, DataSync can copy objects in the hot and cool tiers. For objects in the archive access tier, you must rehydrate those objects to the hot or cool tier before you can copy them. When transferring to Azure Blob Storage, DataSync can copy objects into the hot, cool, and archive access tiers. If you're copying objects into the archive access tier, DataSync can't verify the transfer if you're trying to verify all data in the destination. DataSync doesn't support the cold access tier. For more information about access tiers, see the Azure Blob Storage documentation. Considerations with Azure Blob Storage transfers When planning to transfer data to or from Azure Blob Storage with DataSync, there are some things to keep in mind. Topics • Costs • Blob types • AWS Region availability
|
sync-dg-056
|
sync-dg.pdf
| 56 |
copy them. When transferring to Azure Blob Storage, DataSync can copy objects into the hot, cool, and archive access tiers. If you're copying objects into the archive access tier, DataSync can't verify the transfer if you're trying to verify all data in the destination. DataSync doesn't support the cold access tier. For more information about access tiers, see the Azure Blob Storage documentation. Considerations with Azure Blob Storage transfers When planning to transfer data to or from Azure Blob Storage with DataSync, there are some things to keep in mind. Topics • Costs • Blob types • AWS Region availability • Copying object tags • Transferring to Amazon S3 • Deleting directories in a transfer destination • Limitations Costs The fees associated with moving data in or out of Azure Blob Storage can include: • Running an Azure virtual machine (VM) (if you deploy your DataSync agent in Azure) • Running an Amazon EC2 instance (if you deploy your DataSync agent in a VPC within AWS) • Transferring the data by using DataSync, including request charges related to Azure Blob Storage and Amazon S3 (if S3 is one of your transfer locations) • Transferring data in or out of Azure Blob Storage Configuring transfers with Microsoft Azure Blob Storage 165 AWS DataSync User Guide • Storing data in an AWS storage service supported by DataSync Blob types How DataSync works with blob types depends on whether you're transferring to or from Azure Blob Storage. When you're moving data into blob storage, the objects or files that DataSync transfers can only be block blobs. When you're moving data out of blob storage, DataSync can transfer block, page, and append blobs. For more information about blob types, see the Azure Blob Storage documentation. AWS Region availability You can create an Azure Blob Storage transfer location in any AWS Region that's supported by DataSync. Copying object tags The ability for DataSync to preserve object tags when transferring to or from Azure Blob Storage depends on the following factors: • The size of an object's tags – DataSync can't transfer an object with tags that exceed 2 KB. • Whether DataSync is configured to copy object tags – DataSync copies object tags by default. • The namespace that your Azure storage account uses – DataSync can copy object tags if your Azure storage account uses a flat namespace but not if your account uses a hierarchical namespace (a feature of Azure Data Lake Storage Gen2). Your DataSync task will fail if you try to copy object tags and your storage account uses a hierarchical namespace. • Whether your SAS token authorizes tagging – The permissions that you need to copy object tags vary depending on the level of access that your token provides. Your task will fail if you try to copy object tags and your token doesn't have the right permissions for tagging. For more information, check the permission requirements for account-level access tokens or container- level access tokens. Transferring to Amazon S3 When transferring to Amazon S3, DataSync won't transfer Azure Blob Storage objects larger than 5 TB or objects with metadata larger than 2 KB. Configuring transfers with Microsoft Azure Blob Storage 166 AWS DataSync User Guide Deleting directories in a transfer destination When transferring to Azure Blob Storage, DataSync can remove objects in your blob storage that aren't present in your transfer source. (You can configure this option by clearing the Keep deleted files setting in the DataSync console. Your SAS token must also have delete permissions.) When you configure your transfer this way, DataSync won't delete directories in your blob storage if your Azure storage account is using a hierarchical namespace. In this case, you must manually delete the directories (for example, by using Azure Storage Explorer). Limitations Remember the following limitations when transferring data to or from Azure Blob Storage: • DataSync creates some directories in a location to help facilitate your transfer. If Azure Blob Storage is a destination location and your storage account uses a hierarchical namespace, you might notice task-specific subdirectories (such as task-000011112222abcde) in the /.aws- datasync folder. DataSync typically deletes these subdirectories following a transfer. If that doesn't happen, you can delete these task-specific directories yourself as long as a task isn't running. • DataSync doesn't support using a SAS token to access only a specific folder in your Azure Blob Storage container. • You can't provide DataSync a user delegation SAS token for accessing your blob storage. Creating your DataSync agent To get started, you must create a DataSync agent that can connect to your Azure Blob Storage container. This process includes deploying and activating an agent. Tip Although you can deploy your agent on an Amazon EC2 instance, using a Microsoft Hyper- V agent might result in decreased network latency and
|
sync-dg-057
|
sync-dg.pdf
| 57 |
task-specific directories yourself as long as a task isn't running. • DataSync doesn't support using a SAS token to access only a specific folder in your Azure Blob Storage container. • You can't provide DataSync a user delegation SAS token for accessing your blob storage. Creating your DataSync agent To get started, you must create a DataSync agent that can connect to your Azure Blob Storage container. This process includes deploying and activating an agent. Tip Although you can deploy your agent on an Amazon EC2 instance, using a Microsoft Hyper- V agent might result in decreased network latency and more data compression. Microsoft Hyper-V agents You can deploy your DataSync agent directly in Azure with a Microsoft Hyper-V image. Configuring transfers with Microsoft Azure Blob Storage 167 AWS DataSync Tip User Guide Before you continue, consider using a shell script that might help you deploy your Hyper-V agent in Azure quicker. You can get more information and download the code on GitHub. If you use the script, you can skip ahead to the section about Getting your agent's activation key. Topics • Prerequisites • Downloading and preparing your agent • Deploying your agent in Azure • Getting your agent's activation key • Activating your agent Prerequisites To prepare your DataSync agent and deploy it in Azure, you must do the following: • Enable Hyper-V on your local machine. • Install PowerShell (including the Hyper-V Module). • Install the Azure CLI. • Install AzCopy. Downloading and preparing your agent Download an agent from the DataSync console. Before you can deploy the agent in Azure, you must convert it to a fixed-size virtual hard disk (VHD). For more information, see the Azure documentation. To download and prepare your agent 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. 3. In the left navigation pane, choose Agents, and then choose Create agent. For Hypervisor, choose Microsoft Hyper-V, and then choose Download the image. Configuring transfers with Microsoft Azure Blob Storage 168 AWS DataSync User Guide The agent downloads in a .zip file that contains a .vhdx file. 4. Extract the .vhdx file on your local machine. 5. Open PowerShell and do the following: a. Copy the following Convert-VHD cmdlet: Convert-VHD -Path .\local-path-to-vhdx-file\aws-datasync-2.0.1686143940.1- x86_64.xfs.gpt.vhdx ` -DestinationPath .\local-path-to-vhdx-file\aws-datasync-2016861439401- x86_64.vhd -VHDType Fixed b. Replace each instance of local-path-to-vhdx-file with the location of the .vhdx file on your local machine. c. Run the command. Your agent is now a fixed-size VHD (with a .vhd file format) and ready to deploy in Azure. Deploying your agent in Azure Deploying your DataSync agent in Azure involves: • Creating a managed disk in Azure • Uploading your agent to that managed disk • Attaching the managed disk to a Linux virtual machine To deploy your agent in Azure 1. In PowerShell, go to the directory that contains your agent's .vhd file. 2. Run the ls command and save the Length value (for example, 85899346432). This is the size of your agent image in bytes, which you need when creating a managed disk that can hold the image. 3. Do the following to create a managed disk: a. Copy the following Azure CLI command: az disk create -n your-managed-disk ` Configuring transfers with Microsoft Azure Blob Storage 169 AWS DataSync User Guide -g your-resource-group ` -l your-azure-region ` --upload-type Upload ` --upload-size-bytes agent-size-bytes ` --sku standard_lrs b. Replace your-managed-disk with a name for your managed disk. c. Replace your-resource-group with the name of the Azure resource group that your storage account belongs to. d. Replace your-azure-region with the Azure region where your resource group is located. e. Replace agent-size-bytes with the size of your agent image. f. Run the command. This command creates an empty managed disk with a standard SKU where you can upload your DataSync agent. 4. To generate a shared access signature (SAS) that allows write access to the managed disk, do the following: a. Copy the following Azure CLI command: az disk grant-access -n your-managed-disk ` -g your-resource-group ` --access-level Write ` --duration-in-seconds 86400 b. Replace your-managed-disk with the name of the managed disk that you created. c. Replace your-resource-group with the name of the Azure resource group that your storage account belongs to. d. Run the command. In the output, take note of the SAS URI. You need this URI when uploading the agent to Azure. The SAS allows you to write to the disk for up to an hour. This means that you have an hour to upload your agent to the managed disk. To upload your agent to your managed disk in Azure, do the following: 5. Configuring transfers with Microsoft Azure Blob Storage 170 AWS DataSync User Guide a. Copy the following AzCopy command: .\azcopy copy local-path-to-vhd-file sas-uri --blob-type PageBlob b. Replace local-path-to-vhd-file with the location of the agent's .vhd file on
|
sync-dg-058
|
sync-dg.pdf
| 58 |
the command. In the output, take note of the SAS URI. You need this URI when uploading the agent to Azure. The SAS allows you to write to the disk for up to an hour. This means that you have an hour to upload your agent to the managed disk. To upload your agent to your managed disk in Azure, do the following: 5. Configuring transfers with Microsoft Azure Blob Storage 170 AWS DataSync User Guide a. Copy the following AzCopy command: .\azcopy copy local-path-to-vhd-file sas-uri --blob-type PageBlob b. Replace local-path-to-vhd-file with the location of the agent's .vhd file on your local machine. c. Replace sas-uri with the SAS URI that you got when you ran the az disk grant- access command. d. Run the command. 6. When the agent upload finishes, revoke access to your managed disk. To do this, copy the following Azure CLI command: az disk revoke-access -n your-managed-disk -g your-resource-group a. Replace your-resource-group with the name of the Azure resource group that your storage account belongs to. b. Replace your-managed-disk with the name of the managed disk that you created. c. Run the command. 7. Do the following to attach your managed disk to a new Linux VM: a. Copy the following Azure CLI command: az vm create --resource-group your-resource-group ` --location eastus ` --name your-agent-vm ` --size Standard_E4as_v4 ` --os-type linux ` --attach-os-disk your-managed-disk b. Replace your-resource-group with the name of the Azure resource group that your storage account belongs to. c. Replace your-agent-vm with a name for the VM that you can remember. d. Replace your-managed-disk with the name of the managed disk that you're attaching to the VM. e. Run the command. Configuring transfers with Microsoft Azure Blob Storage 171 AWS DataSync User Guide You've deployed your agent. Before you can start configuring your data transfer, you must activate the agent. Getting your agent's activation key To manually get your DataSync agent's activation key, follow these steps. Alternatively, DataSync can automatically get the activation key for you, but this approach requires some network configuration. To get your agent's activation key 1. In the Azure portal, enable boot diagnostics for the VM for your agent by choosing the Enable with custom storage account setting and specifying your Azure storage account. After you've enabled the boot diagnostics for your agent's VM, you can access your agent’s local console to get the activation key. 2. While still in the Azure portal, go to your VM and choose Serial console. 3. In the agent's local console, log in by using the following default credentials: • Username – admin • Password – password We recommend at some point changing at least the agent's password. In the agent's local console, enter 5 on the main menu, then use the passwd command to change the password. 4. 5. Enter 0 to get the agent's activation key. Enter the AWS Region where you're using DataSync (for example, us-east-1). 6. Choose the service endpoint that the agent will use to connect with AWS. 7. Save the value of the Activation key output. Activating your agent After you have the activation key, you can finish creating your DataSync agent. To activate your agent 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. In the left navigation pane, choose Agents, and then choose Create agent. Configuring transfers with Microsoft Azure Blob Storage 172 AWS DataSync User Guide 3. 4. For Hypervisor, choose Microsoft Hyper-V. For Endpoint type, choose the same type of service endpoint that you specified when you got your agent's activation key (for example, choose Public service endpoints in Region name). 5. Configure your network to work with the service endpoint type that your agent is using. For service endpoint network requirements, see the following topics: • VPC endpoints • Public endpoints • Federal Information Processing Standard (FIPS) endpoints 6. For Activation key, do the following: a. b. Choose Manually enter your agent's activation key. Enter the activation key that you got from the agent's local console. 7. Choose Create agent. Your agent is ready to connect with your Azure Blob Storage. For more information, see Creating your Azure Blob Storage transfer location. Amazon EC2 agents You can deploy your DataSync agent on an Amazon EC2 instance. To create an Amazon EC2 agent 1. Deploy an Amazon EC2 agent. 2. Choose a service endpoint that the agent uses to communicate with AWS. In this situation, we recommend using a virtual private cloud (VPC) service endpoint. 3. Configure your network to work with VPC service endpoints. 4. Activate the agent. Creating your Azure Blob Storage transfer location You can configure DataSync to use your Azure Blob Storage as a transfer source or destination. Before you begin Configuring transfers with Microsoft Azure Blob Storage 173 AWS DataSync User Guide Make sure that you know
|
sync-dg-059
|
sync-dg.pdf
| 59 |
on an Amazon EC2 instance. To create an Amazon EC2 agent 1. Deploy an Amazon EC2 agent. 2. Choose a service endpoint that the agent uses to communicate with AWS. In this situation, we recommend using a virtual private cloud (VPC) service endpoint. 3. Configure your network to work with VPC service endpoints. 4. Activate the agent. Creating your Azure Blob Storage transfer location You can configure DataSync to use your Azure Blob Storage as a transfer source or destination. Before you begin Configuring transfers with Microsoft Azure Blob Storage 173 AWS DataSync User Guide Make sure that you know how DataSync accesses Azure Blob Storage and works with access tiers and blob types. You also need a DataSync agent that can connect to your Azure Blob Storage container. Using the DataSync console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. 3. 4. 5. 6. In the left navigation pane, expand Data transfer, then choose Locations and Create location. For Location type, choose Microsoft Azure Blob Storage. For Agents, choose the DataSync agent that can connect with your Azure Blob Storage container. You can choose more than one agent. For more information, see Using multiple DataSync agents. For Container URL, enter the URL of the container that's involved in your transfer. (Optional) For Access tier when used as a destination, choose the access tier that you want your objects or files transferred into. 7. For Folder, enter path segments if you want to limit your transfer to a virtual directory in your container (for example, /my/images). 8. For SAS token, enter the SAS token that allows DataSync to access your blob storage. The token is part of the SAS URI string that comes after the storage resource URI and a question mark (?). A token looks something like this: sp=r&st=2023-12-20T14:54:52Z&se=2023-12-20T22:54:52Z&spr=https&sv=2021-06-08&sr=c&sig=aBBKDWQvyuVcTPH9EBp %2FXTI9E%2F%2Fmq171%2BZU178wcwqU%3D 9. (Optional) Enter values for the Key and Value fields to tag the location. Tags help you manage, filter, and search for your AWS resources. We recommend creating at least a name tag for your location. 10. Choose Create location. Using the AWS CLI 1. Copy the following create-location-azure-blob command: Configuring transfers with Microsoft Azure Blob Storage 174 AWS DataSync User Guide aws datasync create-location-azure-blob \ --container-url "https://path/to/container" \ --authentication-type "SAS" \ --sas-configuration '{ "Token": "your-sas-token" }' \ --agent-arns my-datasync-agent-arn \ --subdirectory "/path/to/my/data" \ --access-tier "access-tier-for-destination" \ --tags [{"Key": "key1","Value": "value1"}] 2. 3. 4. For the --container-url parameter, specify the URL of the Azure Blob Storage container that's involved in your transfer. For the --authentication-type parameter, specify SAS. For the --sas-configuration parameter's Token option, specify the SAS token that allows DataSync to access your blob storage. The token is part of the SAS URI string that comes after the storage resource URI and a question mark (?). A token looks something like this: sp=r&st=2023-12-20T14:54:52Z&se=2023-12-20T22:54:52Z&spr=https&sv=2021-06-08&sr=c&sig=aBBKDWQvyuVcTPH9EBp %2FXTI9E%2F%2Fmq171%2BZU178wcwqU%3D 5. For the --agent-arns parameter, specify the Amazon Resource Name (ARN) of the DataSync agent that can connect to your container. Here's an example agent ARN: arn:aws:datasync:us-east-1:123456789012:agent/ agent-01234567890aaabfb You can specify more than one agent. For more information, see Using multiple DataSync agents. 6. 7. For the --subdirectory parameter, specify path segments if you want to limit your transfer to a virtual directory in your container (for example, /my/images). (Optional) For the --access-tier parameter, specify the access tier (HOT, COOL, or ARCHIVE) that you want your objects or files transferred into. This parameter applies only when you're using this location as a transfer destination. Configuring transfers with Microsoft Azure Blob Storage 175 AWS DataSync User Guide 8. (Optional) For the --tags parameter, specify key-value pairs that can help you manage, filter, and search for your location. We recommend creating a name tag for your location. 9. Run the create-location-azure-blob command. If the command is successful, you get a response that shows you the ARN of the location that you created. For example: { "LocationArn": "arn:aws:datasync:us-east-1:123456789012:location/ loc-12345678abcdefgh" } Viewing your Azure Blob Storage transfer location You can get details about the existing DataSync transfer location for your Azure Blob Storage. Using the DataSync console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. In the left navigation pane, expand Data transfer, then choose Locations. 3. Choose your Azure Blob Storage location. You can see details about your location, including any DataSync transfer tasks that are using it. Using the AWS CLI 1. Copy the following describe-location-azure-blob command: aws datasync describe-location-azure-blob \ --location-arn "your-azure-blob-location-arn" 2. For the --location-arn parameter, specify the ARN for the Azure Blob Storage location that you created (for example, arn:aws:datasync:us-east-1:123456789012:location/ loc-12345678abcdefgh). 3. Run the describe-location-azure-blob command. You get a response that shows you details about your location. For example: Configuring transfers with Microsoft Azure Blob Storage 176 AWS DataSync User Guide { "LocationArn": "arn:aws:datasync:us-east-1:123456789012:location/ loc-12345678abcdefgh", "LocationUri": "azure-blob://my-user.blob.core.windows.net/container-1", "AuthenticationType": "SAS", "Subdirectory": "/my/images", "AgentArns": ["arn:aws:datasync:us-east-1:123456789012:agent/ agent-01234567890deadfb"], } Updating your Azure Blob Storage transfer
|
sync-dg-060
|
sync-dg.pdf
| 60 |
details about your location, including any DataSync transfer tasks that are using it. Using the AWS CLI 1. Copy the following describe-location-azure-blob command: aws datasync describe-location-azure-blob \ --location-arn "your-azure-blob-location-arn" 2. For the --location-arn parameter, specify the ARN for the Azure Blob Storage location that you created (for example, arn:aws:datasync:us-east-1:123456789012:location/ loc-12345678abcdefgh). 3. Run the describe-location-azure-blob command. You get a response that shows you details about your location. For example: Configuring transfers with Microsoft Azure Blob Storage 176 AWS DataSync User Guide { "LocationArn": "arn:aws:datasync:us-east-1:123456789012:location/ loc-12345678abcdefgh", "LocationUri": "azure-blob://my-user.blob.core.windows.net/container-1", "AuthenticationType": "SAS", "Subdirectory": "/my/images", "AgentArns": ["arn:aws:datasync:us-east-1:123456789012:agent/ agent-01234567890deadfb"], } Updating your Azure Blob Storage transfer location If needed, you can modify your location's configuration in the console or by using the AWS CLI. Using the AWS CLI 1. Copy the following update-location-azure-blob command: aws datasync update-location-azure-blob \ --location-arn "your-azure-blob-location-arn" \ --authentication-type "SAS" \ --sas-configuration '{ "Token": "your-sas-token" }' \ --agent-arns my-datasync-agent-arn \ --subdirectory "/path/to/my/data" \ --access-tier "access-tier-for-destination" 2. 3. 4. For the --location-arn parameter, specify the ARN for the Azure Blob Storage location that you're updating (for example, arn:aws:datasync:us- east-1:123456789012:location/loc-12345678abcdefgh). For the --authentication-type parameter, specify SAS. For the --sas-configuration parameter's Token option, specify the SAS token that allows DataSync to access your blob storage. The token is part of the SAS URI string that comes after the storage resource URI and a question mark (?). A token looks something like this: Configuring transfers with Microsoft Azure Blob Storage 177 AWS DataSync User Guide sp=r&st=2022-12-20T14:54:52Z&se=2022-12-20T22:54:52Z&spr=https&sv=2021-06-08&sr=c&sig=qCBKDWQvyuVcTPH9EBp %2FXTI9E%2F%2Fmq171%2BZU178wcwqU%3D 5. For the --agent-arns parameter, specify the Amazon Resource Name (ARN) of the DataSync agent that you want to connect to your container. Here's an example agent ARN: arn:aws:datasync:us-east-1:123456789012:agent/ agent-01234567890aaabfb You can specify more than one agent. For more information, see Using multiple DataSync agents. 6. 7. For the --subdirectory parameter, specify path segments if you want to limit your transfer to a virtual directory in your container (for example, /my/images). (Optional) For the --access-tier parameter, specify the access tier (HOT, COOL, or ARCHIVE) that you want your objects to be transferred into. This parameter applies only when you're using this location as a transfer destination. Next steps After you finish creating a DataSync location for your Azure Blob Storage, you can continue setting up your transfer. Here are some next steps to consider: 1. If you haven't already, create another location where you plan to transfer your data to or from your Azure Blob Storage. 2. Learn how DataSync handles metadata and special files, particularly if your transfer locations don't have a similar metadata structure. 3. Configure how your data gets transferred. For example, you can transfer only a subset of your data or delete files in your blob storage that aren't in your source location (as long as your SAS token has delete permissions). 4. Start your transfer. Configuring transfers with Microsoft Azure Blob Storage 178 AWS DataSync User Guide Configuring AWS DataSync transfers with Microsoft Azure Files SMB shares You can configure AWS DataSync to transfer data to or from a Microsoft Azure Files Server Message Block (SMB) share. Tip For a full walkthrough on moving data from Azure Files SMB shares to AWS, see the AWS Storage Blog. Providing DataSync access to SMB shares DataSync connects to your SMB share using the SMB protocol and authenticates with credentials that you provide it. Topics • Supported SMB protocol versions • Required permissions Supported SMB protocol versions By default, DataSync automatically chooses a version of the SMB protocol based on negotiation with your SMB file server. You also can configure DataSync to use a specific SMB version, but we recommend doing this only if DataSync has trouble negotiating with the SMB file server automatically. DataSync supports SMB versions 1.0 and later. For security reasons, we recommend using SMB version 3.0.2 or later. Earlier versions, such as SMB 1.0, contain known security vulnerabilities that attackers can exploit to compromise your data. See the following table for a list of options in the DataSync console and API: Configuring transfers with Microsoft Azure Files 179 AWS DataSync User Guide Console option API option Description Automatic AUTOMATIC DataSync and the SMB file server negotiate the highest version of SMB that they mutually support between 2.1 and 3.1.1. This is the default and recommended option. If you instead choose a specific version that your file server doesn't support, you may get an Operation Not Supported error. SMB 3.0.2 SMB3 Restricts the protocol negotiation to only SMB version 3.0.2. SMB 2.1 SMB2 Restricts the protocol negotiation to only SMB version 2.1. SMB 2.0 SMB2_0 Restricts the protocol negotiation to only SMB version 2.0. SMB 1.0 SMB1 Restricts the protocol negotiation to only SMB version 1.0. Required permissions DataSync needs a user who has permission to mount and access your SMB location. This can be a local user on your Windows file server or a
|
sync-dg-061
|
sync-dg.pdf
| 61 |
recommended option. If you instead choose a specific version that your file server doesn't support, you may get an Operation Not Supported error. SMB 3.0.2 SMB3 Restricts the protocol negotiation to only SMB version 3.0.2. SMB 2.1 SMB2 Restricts the protocol negotiation to only SMB version 2.1. SMB 2.0 SMB2_0 Restricts the protocol negotiation to only SMB version 2.0. SMB 1.0 SMB1 Restricts the protocol negotiation to only SMB version 1.0. Required permissions DataSync needs a user who has permission to mount and access your SMB location. This can be a local user on your Windows file server or a domain user that's defined in your Microsoft Active Directory. To set object ownership, DataSync requires the SE_RESTORE_NAME privilege, which is usually granted to members of the built-in Active Directory groups Backup Operators and Domain Admins. Providing a user to DataSync with this privilege also helps ensure sufficient permissions to files, folders, and file metadata, except for NTFS system access control lists (SACLs). Additional privileges are required to copy SACLs. Specifically, this requires the Windows SE_SECURITY_NAME privilege, which is granted to members of the Domain Admins group. If you configure your task to copy SACLs, make sure that the user has the required privileges. To learn Configuring transfers with Microsoft Azure Files 180 AWS DataSync User Guide more about configuring a task to copy SACLs, see Configuring how to handle files, objects, and metadata. When you copy data between an SMB file server and Amazon FSx for Windows File Server file system, the source and destination locations must belong to the same Microsoft Active Directory domain or have an Active Directory trust relationship between their domains. Creating your Azure Files transfer location by using the console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. 3. In the left navigation pane, expand Data transfer, then choose Locations and Create location. For Location type, choose Server Message Block (SMB). You configure this location as a source or destination later. 4. For Agents, choose one or more DataSync agents that you want to connect to your SMB share. If you choose more than one agent, make sure you understand using multiple agents for a location. 5. For SMB Server, enter the Domain Name System (DNS) name or IP address of the SMB share that your DataSync agent will mount. Note You can't specify an IP version 6 (IPv6) address. 6. For Share name, enter the name of the share exported by your SMB share where DataSync will read or write data. You can include a subdirectory in the share path (for example, /path/to/subdirectory). Make sure that other SMB clients in your network can also mount this path. To copy all the data in the subdirectory, DataSync must be able to mount the SMB share and access all of its data. For more information, see Required permissions. 7. (Optional) Expand Additional settings and choose an SMB Version for DataSync to use when accessing your SMB share. By default, DataSync automatically chooses a version based on negotiation with the SMB share. For information, see Supported SMB versions. Configuring transfers with Microsoft Azure Files 181 AWS DataSync User Guide 8. 9. For User, enter a user name that can mount your SMB share and has permission to access the files and folders involved in your transfer. For more information, see Required permissions. For Password, enter the password of the user who can mount your SMB share and has permission to access the files and folders involved in your transfer. 10. (Optional) For Domain, enter the Windows domain name that your SMB share belongs to. If you have multiple domains in your environment, configuring this setting makes sure that DataSync connects to the right share. 11. (Optional) Choose Add tag to tag your location. Tags are key-value pairs that help you manage, filter, and search for your locations. We recommend creating at least a name tag for your location. 12. Choose Create location. Configuring transfers with other cloud object storage With AWS DataSync, you can transfer data between AWS storage services and the following cloud object storage providers: • Wasabi Cloud Storage • DigitalOcean Spaces • Oracle Cloud Infrastructure Object Storage • Cloudflare R2 Storage • Backblaze B2 Cloud Storage • NAVER Cloud Object Storage • Alibaba Cloud Object Storage Service • IBM Cloud Object Storage • Seagate Lyve Cloud To set up this kind of transfer, you need to create a DataSync agent that can connect to your cloud object storage. You must also create a transfer location for your cloud object storage (specifically an Object storage location). DataSync can use this location as a source or destination for your transfer. Configuring transfers with other cloud object storage 182 AWS DataSync User Guide Providing DataSync access to your other cloud object storage How DataSync accesses your
|
sync-dg-062
|
sync-dg.pdf
| 62 |
B2 Cloud Storage • NAVER Cloud Object Storage • Alibaba Cloud Object Storage Service • IBM Cloud Object Storage • Seagate Lyve Cloud To set up this kind of transfer, you need to create a DataSync agent that can connect to your cloud object storage. You must also create a transfer location for your cloud object storage (specifically an Object storage location). DataSync can use this location as a source or destination for your transfer. Configuring transfers with other cloud object storage 182 AWS DataSync User Guide Providing DataSync access to your other cloud object storage How DataSync accesses your cloud object storage depends on several factors, including whether your storage is compatible with the Amazon S3 API and the permissions and credentials that DataSync needs to access your storage. Topics • Amazon S3 API compatibility • Storage permissions and endpoints • Storage credentials Amazon S3 API compatibility Your cloud object storage must be compatible with the following Amazon S3 API operations for DataSync to connect to it: • AbortMultipartUpload • CompleteMultipartUpload • CopyObject • CreateMultipartUpload • DeleteObject • DeleteObjects • DeleteObjectTagging • GetBucketLocation • GetObject • GetObjectTagging • HeadBucket • HeadObject • ListObjectsV2 • PutObject • PutObjectTagging • UploadPart Configuring transfers with other cloud object storage 183 AWS DataSync User Guide Storage permissions and endpoints You must configure the permissions that allow DataSync to access your cloud object storage. If your object storage is a source location, DataSync needs read and list permissions for the bucket that you're transferring data from. If your object storage is a destination location, DataSync needs read, list, write, and delete permissions for the bucket. DataSync also needs an endpoint (or server) to connect to your storage. The following table describes the endpoints that DataSync can use to access other cloud object storage: Other cloud provider Endpoint Wasabi Cloud Storage S3.region.wasabisys.com DigitalOcean Spaces region.digitaloceanspaces.com Oracle Cloud Infrastructure Object Storage namespace .compat.objectstor age. region.oraclecloud.com Cloudflare R2 Storage account-id .r2.cloudflarestor age.com Backblaze B2 Cloud Storage S3.region.backblazeb2.com NAVER Cloud Object Storage region.object.ncloudstorage.com (most regions) Alibaba Cloud Object Storage Service region.aliyuncs.com IBM Cloud Object Storage s3.region.cloud-object-stor age.appdomain.cloud Seagate Lyve Cloud s3.region.lyvecloud.seagate.com Important For details on how to configure bucket permissions and updated information on storage endpoints, see your cloud provider's documentation. Configuring transfers with other cloud object storage 184 AWS DataSync Storage credentials User Guide DataSync also needs the credentials to access the object storage bucket involved in your transfer. This might be an access key and secret key or something similar depending on how your cloud storage provider refers to these credentials. For more information, see your cloud provider's documentation. Considerations when transferring from other cloud object storage When planning to transfer objects to or from another cloud storage provider by using DataSync, there are some things to keep in mind. Topics • Costs • Storage classes • Object tags • Transferring to Amazon S3 Costs The fees associated with moving data in and out of another cloud storage provider can include: • Running an Amazon EC2 instance for your DataSync agent • Transferring the data by using DataSync, including request charges related to your cloud object storage and Amazon S3 (if S3 is your transfer destination) • Transferring data in or out of your cloud storage (check your cloud provider's pricing) • Storing data in an AWS storage service supported by DataSync • Storing data in another cloud provider (check your cloud provider's pricing) Storage classes Some cloud storage providers have storage classes (similar to Amazon S3) which DataSync can't read without being restored first. For example, Oracle Cloud Infrastructure Object Storage has an archive storage class. You need to restore objects in that storage class before DataSync can transfer them. For more information, see your cloud provider's documentation. Configuring transfers with other cloud object storage 185 AWS DataSync Object tags User Guide Not all cloud providers support object tags. The ones that do might not allow querying tags through the Amazon S3 API. In either situation, your DataSync transfer task might fail if you try to copy object tags. You can avoid this by clearing the Copy object tags checkbox in the DataSync console when creating, starting, or updating your task. Transferring to Amazon S3 When transferring to Amazon S3, DataSync can't transfer objects larger than 5 TB. DataSync also can only copy object metadata up to 2 KB. Creating your DataSync agent To get started, you need a DataSync agent that can connect to your cloud object storage. This process includes deploying and activating an agent on an Amazon EC2 instance in your virtual private cloud (VPC) in AWS. To create an Amazon EC2 agent 1. Deploy an Amazon EC2 agent. 2. Choose a service endpoint that the agent uses to communicate with AWS. In this situation, we recommend using a VPC service endpoint.
|
sync-dg-063
|
sync-dg.pdf
| 63 |
When transferring to Amazon S3, DataSync can't transfer objects larger than 5 TB. DataSync also can only copy object metadata up to 2 KB. Creating your DataSync agent To get started, you need a DataSync agent that can connect to your cloud object storage. This process includes deploying and activating an agent on an Amazon EC2 instance in your virtual private cloud (VPC) in AWS. To create an Amazon EC2 agent 1. Deploy an Amazon EC2 agent. 2. Choose a service endpoint that the agent uses to communicate with AWS. In this situation, we recommend using a VPC service endpoint. 3. Configure your network to work with VPC service endpoints. 4. Activate the agent. Creating a transfer location for your other cloud object storage You can configure DataSync to use your cloud object storage as a source or destination location. Before you begin Make sure that you know how DataSync accesses your cloud object storage. You also need a DataSync agent that can connect to your cloud object storage. 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. Configuring transfers with other cloud object storage 186 AWS DataSync User Guide 2. 3. 4. In the left navigation pane, expand Data transfer, then choose Locations and Create location. For Location type, choose Object storage. For Agents, choose the DataSync agent that can connect with your cloud object storage. You can choose more than one agent. For more information, see Using multiple DataSync agents. 5. For Server, enter the endpoint that DataSync can use to access your cloud object storage: • Wasabi Cloud Storage – S3.region.wasabisys.com • DigitalOcean Spaces – region.digitaloceanspaces.com • Oracle Cloud Infrastructure Object Storage – namespace.compat.objectstorage.region.oraclecloud.com • Cloudflare R2 Storage – account-id.r2.cloudflarestorage.com • Backblaze B2 Cloud Storage – S3.region.backblazeb2.com • NAVER Cloud Object Storage – region.object.ncloudstorage.com (most regions) • Alibaba Cloud Object Storage Service – region.aliyuncs.com • IBM Cloud Object Storage – s3.region.cloud-object-storage.appdomain.cloud • Seagate Lyve Cloud – s3.region.lyvecloud.seagate.com 6. 7. 8. For Bucket name, enter the name of the object storage bucket that you're transferring data to or from. Expand Additional settings. For Server protocol, choose HTTPS. For Server port, choose 443. Scroll down to the Authentication section. Make sure that the Requires credentials check box is selected, and then provide DataSync your storage credentials. • For Access key, enter the ID to access your cloud object storage. • For Secret key, enter the secret to access your cloud object storage. 9. (Optional) Enter values for the Key and Value fields to tag the location. Tags help you manage, filter, and search for your AWS resources. We recommend creating at least a name tag for your location. 10. Choose Create location. Configuring transfers with other cloud object storage 187 AWS DataSync Next steps User Guide After you finish creating a DataSync location for your cloud object storage, you can continue setting up your transfer. Here are some next steps to consider: 1. If you haven't already, create another location where you plan to transfer your data to or from in AWS. 2. Learn how DataSync handles metadata and special files for object storage locations. 3. Configure how your data gets transferred. For example, maybe you only want to transfer a subset of your data. Important Make sure that you configure how DataSync copies object tags correctly. For more information, see considerations with object tags. 4. Start your transfer. Transferring to or from S3 compatible storage on Snowball Edge With AWS DataSync, you can transfer objects between Amazon S3 compatible storage on an AWS Snowball Edge device or cluster and any of the following AWS storage services: • Amazon S3 • Amazon Elastic File System (Amazon EFS) • Amazon FSx for Windows File Server • Amazon FSx for Lustre • Amazon FSx for OpenZFS • Amazon FSx for NetApp ONTAP Prerequisites Before you get started, make sure that you've done the following: Transferring to or from S3 compatible storage on Snowball Edge 188 AWS DataSync User Guide • Created an AWS storage resource in the AWS Region where you plan to transfer data to or from. For example, this could be an S3 bucket or Amazon EFS file system in US East (N. Virginia). • Established a wide-area network (WAN) connection for traffic into and out of your on-premises storage environment. For example, you can establish this kind of connection with AWS Direct Connect. When you create your DataSync agent, you'll configure this WAN connection so that DataSync can transfer data between your Amazon S3 compatible storage that's on-premises and your storage resource in AWS. • Downloaded and installed the Snowball Edge client. Providing DataSync access to S3 compatible storage To access your Amazon S3 compatible storage bucket, DataSync needs the following: • User credentials on your Snowball Edge device or cluster that can access the bucket that you're transferring
|
sync-dg-064
|
sync-dg.pdf
| 64 |
connection for traffic into and out of your on-premises storage environment. For example, you can establish this kind of connection with AWS Direct Connect. When you create your DataSync agent, you'll configure this WAN connection so that DataSync can transfer data between your Amazon S3 compatible storage that's on-premises and your storage resource in AWS. • Downloaded and installed the Snowball Edge client. Providing DataSync access to S3 compatible storage To access your Amazon S3 compatible storage bucket, DataSync needs the following: • User credentials on your Snowball Edge device or cluster that can access the bucket that you're transferring data to or from. • An HTTPS certificate that allows DataSync to verify the authenticity of the connection between the DataSync agent and the s3api endpoint on your device or cluster. Topics • Getting the user credentials to access your S3 bucket • Getting a certificate for the s3api endpoint connection Getting the user credentials to access your S3 bucket DataSync needs the access key and secret key for a user who can access the bucket that you're working with on your Snowball Edge device or cluster. To get the user credentials to access your bucket 1. Open a terminal and run the Snowball Edge client. For more information about running the Snowball Edge client, see Using the Snowball Edge client in the AWS Snowball Edge Developer Guide. Providing DataSync access to S3 compatible storage 189 AWS DataSync User Guide 2. To get the access keys associated with your device or cluster, run the following snowballEdge command: snowballEdge list-access-keys 3. In the output, locate the access key for the bucket that DataSync will work with (for example, AKIAIOSFODNN7EXAMPLE). 4. To get the secret access key, run the following snowballEdge command. Replace access- key-for-datasync with the access key that you located in the prior step. snowballEdge get-secret-access-key --access-key-id access-key-for-datasync The output includes the access key's corresponding secret key (for example, wJalrXUtnFEMI/ K7MDENG/bPxRfiCYEXAMPLEKEY). 5. Save the access key and secret key somewhere that you can remember. You will need these keys when you're configuring the DataSync source location for your transfer. Getting a certificate for the s3api endpoint connection You need an HTTPS certificate that can verify the authenticity of the connection between your DataSync agent and an s3api endpoint on your Snowball Edge device or cluster. To get a certificate for the s3api endpoint connection 1. In the Snowball Edge client, run the following list-certificates command: snowballEdge list-certificates In the output, take note of the CertificateArn value. This is the certificate's Amazon Resource Name (ARN). You need the ARN to get the certificate's contents. 2. Run the following get-certificate command that specifies the certificate ARN that you just retrieved: snowballEdge get-certificate --certificate-arn arn:aws:snowball- device:::certificate/78EXAMPLE516EXAMPLEf538EXAMPLEa7 Providing DataSync access to S3 compatible storage 190 AWS DataSync User Guide 3. Copy the output, including the BEGIN CERTIFICATE and END CERTIFICATE lines, and save it as a .pem file. Example of get-certificate output: -----BEGIN CERTIFICATE----- Certificate -----END CERTIFICATE----- You specify this .pem file when creating the DataSync source location for your transfer. Creating a DataSync agent in your on-premises storage environment During a transfer, DataSync uses an agent to read from or write to the Amazon S3 compatible storage on your Snowball Edge device or cluster. This agent must be deployed in your on-premises storage environment where it can connect to your device or cluster through your network. For example, you can run the agent on a VMware ESXi hypervisor that has local network access to your cluster. To create a DataSync agent in your on-premises storage environment 1. Make sure that the DataSync agent can run on your hypervisor and that you allocate the agent enough virtual machine (VM) resources. 2. Deploy the agent in your on-premises environment. For instructions, see one of the following topics, depending on the type of hypervisor that you're deploying the agent on: • Deploy your agent on VMware • Deploy your agent on Linux Kernel-based Machine (KVM) • Deploy your agent on Microsoft Hyper-V • Deploy your agent on Amazon EC2 Warning We don't recommend deploying an agent on Amazon EC2 agent to access on- premises storage because of increased network latency. Creating a DataSync agent in your on-premises storage environment 191 AWS DataSync User Guide 3. Configure your network to allow the following traffic between the agent and your Amazon S3 compatible storage: From To Protocol and port DataSync agent A virtual network interface TCP 443 (HTTPS) (VNI) for an s3api endpoint on your device or cluster. If you have a cluster, it can be any s3api endpoint VNI. If you need to find a VNI on your device or cluster, see describing your virtual network interfaces on Snowball Edge. 4. Choose a service endpoint that the agent uses to communicate with the DataSync service. 5. Activate
|
sync-dg-065
|
sync-dg.pdf
| 65 |
on-premises storage environment 191 AWS DataSync User Guide 3. Configure your network to allow the following traffic between the agent and your Amazon S3 compatible storage: From To Protocol and port DataSync agent A virtual network interface TCP 443 (HTTPS) (VNI) for an s3api endpoint on your device or cluster. If you have a cluster, it can be any s3api endpoint VNI. If you need to find a VNI on your device or cluster, see describing your virtual network interfaces on Snowball Edge. 4. Choose a service endpoint that the agent uses to communicate with the DataSync service. 5. Activate your agent. Configuring the source location for your transfer After you create your agent, you can configure the source location for your DataSync transfer. Note The following instructions assume that you're transferring from Amazon S3 compatible storage, but you can also use this location for a transfer destination. To configure the source location by using the DataSync console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. In the left navigation pane, expand Data transfer. Choose Tasks, and then choose Create task. 3. On the Configure source location page, choose Create a new location. 4. 5. For Location type, choose Object storage. For Agents, choose the DataSync agent that you created in your on-premises storage environment. Configuring the source location for your transfer 192 AWS DataSync User Guide 6. For Server, enter the VNI for the s3api endpoint that's used by your Amazon S3 compatible storage. If you have a Snowball Edge cluster instead of a single device, you can specify any of the cluster's s3api endpoint VNIs. 7. For Bucket name, enter the name of the Amazon S3 compatible storage bucket that you're transferring objects from. 8. For Folder, enter an object prefix. DataSync only transfers objects with this prefix. 9. To configure the DataSync connection to the Snowball Edge device or cluster, expand Additional settings and do the following: a. b. c. For Server protocol, choose HTTPS. For Server port, enter 443. For Certificate, choose the certificate file for the s3api endpoint connection. 10. Select Requires credentials, and enter the Access key and Secret key to access the Amazon S3 compatible storage bucket on your Snowball Edge device or cluster. 11. Choose Next. Configuring the destination location for your transfer Your transfer's destination location must be in the same AWS Region and AWS account where you created your agent. Before you begin: Make sure you've configured the source location for your transfer. To configure the destination location for your transfer by using the DataSync console 1. On the Configure destination location page, choose Create a new location or Choose an existing location for the AWS storage resource where you're transferring objects to. If you're creating a new location, see one of the following topics: • Amazon S3 • Amazon EFS • FSx for Windows File Server Configuring the destination location for your transfer 193 AWS DataSync User Guide • FSx for Lustre • FSx for OpenZFS • FSx for ONTAP 2. When you're done configuring the destination location, choose Next. Configuring your transfer settings With DataSync, you can specify a transfer schedule, customize how your data integrity is verified, and specify whether you want to transfer only a subset of objects, among other options. Before you begin: Make sure you've configured the destination location for your transfer. To configure your transfer settings by using the DataSync console 1. On the Configure settings page, change the transfer settings or use the defaults. For more information about these settings, see Choosing what AWS DataSync transfers. 2. Choose Next. 3. Review your transfer details, and then choose Create task. Starting your transfer After you create your transfer task, you're ready to start moving data. For instructions on starting a task by using the DataSync console or AWS CLI, see Starting your task. Limitations • If your source storage system uses the NFS protocol (such as Amazon EFS), DataSync can't transfer files with hard links to a Snowball Edge device. • DataSync can’t transfer objects that are longer than 1,024 bytes from a Snowball Edge device to an S3 bucket. For more information, see the Amazon S3 User Guide. Creating a task for transferring your data A task describes where and how AWS DataSync transfers data. A task consists of the following: • Source location – The storage system or service where DataSync transfers data from. Configuring your transfer settings 194 AWS DataSync User Guide • Destination location – The storage system or service where DataSync transfers data to. • Task options – Settings such as what files to transfer, how data gets verified, when the task runs, and more. • Task executions – When you run a task, it's called a task execution. Creating your task When you create a DataSync
|
sync-dg-066
|
sync-dg.pdf
| 66 |
for transferring your data A task describes where and how AWS DataSync transfers data. A task consists of the following: • Source location – The storage system or service where DataSync transfers data from. Configuring your transfer settings 194 AWS DataSync User Guide • Destination location – The storage system or service where DataSync transfers data to. • Task options – Settings such as what files to transfer, how data gets verified, when the task runs, and more. • Task executions – When you run a task, it's called a task execution. Creating your task When you create a DataSync task, you specify your source and destination locations. You also can customize your task by choosing which files to transfer, how metadata gets handled, setting up a schedule, and more. Before you create your task, make sure that you understand how DataSync transfers work and review the task quotas. Important If you're planning to transfer data to or from an Amazon S3 location, review how DataSync can affect your S3 request charges and the DataSync pricing page before you begin. Using the DataSync console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. Make sure you're in one of the AWS Regions where you plan to transfer data. 3. In the left navigation pane, expand Data transfer, then choose Tasks, and then choose Create task. 4. On the Configure source location page, create or choose a source location, then choose Next. 5. On the Configure destination location page, create or choose a destination location, then choose Next. 6. (Recommended) On the Configure settings page, give your task a name that you can remember. 7. While still on the Configure settings page, choose your task options or use the default settings. You might be interested in some of the following options: • Specify the task mode that you want to use. Creating your task 195 AWS DataSync User Guide • Specify what data to transfer by using a manifest or filters. • Configure how to handle file metadata and verify data integrity. • Monitor your transfer with task reports or Amazon CloudWatch. We recommend setting up some kind of monitoring for your task. When you're done, choose Next. 8. Review your task configuration, then choose Create task. You're ready to start your task. Using the AWS CLI Once you create your DataSync source and destination locations, you can create your task. 1. In your AWS CLI settings, make sure that you're using one of the AWS Regions where you plan to transfer data. 2. Copy the following create-task command: aws datasync create-task \ --source-location-arn "arn:aws:datasync:us-east-1:account-id:location/location- id" \ --destination-location-arn "arn:aws:datasync:us-east-1:account- id:location/location-id" \ --name "task-name" 3. For --source-location-arn, specify the Amazon Resource Name (ARN) of your source location. 4. For --destination-location-arn, specify the ARN of your destination location. If you're transferring across AWS Regions or accounts, make sure that the ARN includes the other Region or account ID. 5. 6. (Recommended) For --name, specify a name for your task that you can remember. Specify other task options as needed. You might be interested in some of the following options: • Specify what data to transfer by using a manifest or filters. • Configure how to handle file metadata and verify data integrity. Creating your task 196 AWS DataSync User Guide • Monitor your transfer with task reports or Amazon CloudWatch. We recommend setting up some kind of monitoring for your task. For more options, see create-task. Here's an example create-task command that specifies several options: aws datasync create-task \ --source-location-arn "arn:aws:datasync:us-east-1:account-id:location/location- id" \ --destination-location-arn "arn:aws:datasync:us-east-1:account- id:location/location-id" \ --cloud-watch-log-group-arn "arn:aws:logs:region:account-id" \ --name "task-name" \ --options VerifyMode=NONE,OverwriteMode=NEVER,Atime=BEST_EFFORT,Mtime=PRESERVE,Uid=INT_VALUE,Gid=INT_VALUE,PreserveDevices=PRESERVE,PosixPermissions=PRESERVE,PreserveDeletedFiles=PRESERVE,TaskQueueing=ENABLED,LogLevel=TRANSFER 7. Run the create-task command. If the command is successful, you get a response that shows you the ARN of the task that you created. For example: { "TaskArn": "arn:aws:datasync:us-east-1:111222333444:task/ task-08de6e6697796f026" } You're ready to start your task. Task statuses When you create a DataSync task, you can check its status to see if it's ready to run. Console status API status Description Available AVAILABLE The task is ready to start transferring data. Running RUNNING A task execution is in progress. For more information, see Task execution statuses. Task statuses 197 AWS DataSync User Guide Console status API status Description Unavailable UNAVAILABLE A DataSync agent used by the task is offline. For more information, see What do I do if my agent is offline? Queued QUEUED Another task execution that uses the same DataSync agent is in progress. For more information, see Knowing when your task is queued. Partitioning large datasets with multiple tasks If you're transferring a large dataset, such as migrating millions of files or objects, we recommend partitioning your dataset with multiple DataSync tasks. Partitioning your source data across multiple tasks (and possibly agents, depending on your locations) helps reduce the time it takes DataSync
|
sync-dg-067
|
sync-dg.pdf
| 67 |
status Description Unavailable UNAVAILABLE A DataSync agent used by the task is offline. For more information, see What do I do if my agent is offline? Queued QUEUED Another task execution that uses the same DataSync agent is in progress. For more information, see Knowing when your task is queued. Partitioning large datasets with multiple tasks If you're transferring a large dataset, such as migrating millions of files or objects, we recommend partitioning your dataset with multiple DataSync tasks. Partitioning your source data across multiple tasks (and possibly agents, depending on your locations) helps reduce the time it takes DataSync to prepare and transfer your data. Consider some of the ways that you can partition a large dataset across several DataSync tasks: • Create tasks that transfer separate folders. For example, you might create two tasks that target / FolderA and /FolderB, respectively, in your source storage. • Create tasks that transfer subsets of files, objects, and folders by using a manifest or filters. Be mindful that this approach can increase the I/O operations on your storage and affect your network bandwidth. For more information, see the blog on How to accelerate your data transfers with DataSync scale out architectures. Segmenting transferred data with multiple tasks If you're transferring different sets of data to the same destination, you can create multiple tasks to help segment the data that you transfer. For example, if you're transferring to the same S3 bucket named MyBucket, you can create different prefixes in the bucket that correspond to each task. This approach prevents file name conflicts the datasets and allows you to set different permissions for each prefix. Here's how you might set this up: 1. Create three prefixes in the destination MyBucket named task1, task2, and task3: Partitioning large datasets with multiple tasks 198 AWS DataSync User Guide • s3://MyBucket/task1 • s3://MyBucket/task2 • s3://MyBucket/task3 2. Create three DataSync tasks named task1, task2, and task3 that transfer to the corresponding prefix in MyBucket. Choosing a task mode for your data transfer Your AWS DataSync task can run in one of the following modes: • Enhanced mode – Transfer virtually unlimited numbers of objects with higher performance than Basic mode. Enhanced mode tasks optimize the data transfer process by listing, preparing, transferring, and verifying data in parallel. Enhanced mode is currently available for transfers between Amazon S3 locations. • Basic mode – Transfer files or objects between AWS storage and all other supported DataSync locations. Basic mode tasks are subject to quotas on the number of files, objects, and directories in a dataset. Basic mode sequentially prepares, transfers, and verifies data, making it slower than Enhanced mode for most workloads. Understanding task mode differences The following information can help you determine which task mode to use. Capability Performance Enhanced mode behavior Basic mode behavior DataSync lists, prepares, transfers, and verifies your data in parallel. Provides higher performance than Basic mode for most workloads (such as transferr ing large objects) DataSync prepares, transfers , and verifies your data sequentially. Performance is slower than Enhanced mode for most workloads Choosing a task mode for your transfer 199 AWS DataSync User Guide Capability Enhanced mode behavior Basic mode behavior Number of items in a dataset that DataSync can work with Virtually unlimited numbers of objects Quotas apply per task execution Data transfer counters and metrics Less counters and metrics than Enhanced mode More counters and metrics than Basic mode, such as the number of objects that DataSync finds at your source location and how many objects are prepared during each task execution Logging Supported locations Structured logs (JSON format) Unstructured logs Currently for transfers between Amazon S3 locations For transfers between all locations that DataSync only supports Data verification options DataSync verifies only transferred data DataSync verifies all data by default Bandwidth limits Not applicable Supported Cost For more information, see the DataSync pricing page For more information, see the DataSync pricing page Choosing a task mode You can choose Enhanced mode only if your DataSync task uses Amazon S3 locations. Otherwise, you must use Basic mode. For example, a transfer from an on-premises NFS location to an S3 location requires Basic mode. Your task options and performance might vary depending on the task mode you choose. Once you create your task, you can't change the task mode. Choosing a task mode for your transfer 200 AWS DataSync Required permissions User Guide To create an Enhanced mode task, the IAM role that you're using DataSync with must have the iam:CreateServiceLinkedRole permission. For your DataSync user permissions, consider using AWSDataSyncFullAccess. This is an AWS managed policy that provides a user full access to DataSync and minimal access to its dependencies. Using the DataSync console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. In the left navigation pane, expand Data
|
sync-dg-068
|
sync-dg.pdf
| 68 |
vary depending on the task mode you choose. Once you create your task, you can't change the task mode. Choosing a task mode for your transfer 200 AWS DataSync Required permissions User Guide To create an Enhanced mode task, the IAM role that you're using DataSync with must have the iam:CreateServiceLinkedRole permission. For your DataSync user permissions, consider using AWSDataSyncFullAccess. This is an AWS managed policy that provides a user full access to DataSync and minimal access to its dependencies. Using the DataSync console 1. Open the AWS DataSync console at https://console.aws.amazon.com/datasync/. 2. In the left navigation pane, expand Data transfer, then choose Tasks, and then choose Create task. 3. Configure your task's source and destination locations. For more information, see Where can I transfer my data with AWS DataSync? 4. For Task mode, choose one of the following options: • Enhanced • Basic For more information, see Understanding task mode differences. 5. While still on the Configure settings page, choose other task options or use the default settings. You might be interested in some of the following options: • Specify what data to transfer by using a manifest or filters. • Configure how to handle file metadata and verify data integrity. • Monitor your transfer with task reports or Amazon CloudWatch Logs. When you're done, choose Next. 6. Review your task configuration, then choose Create task. Choosing a task mode for your transfer 201 AWS DataSync Using the AWS CLI User Guide 1. In your AWS CLI settings, make sure that you're using one of the AWS Regions where you plan to transfer data. 2. Copy the following create-task command: aws datasync create-task \ --source-location-arn "arn:aws:datasync:us-east-1:account-id:location/location- id" \ --destination-location-arn "arn:aws:datasync:us-east-1:account- id:location/location-id" \ --task-mode "ENHANCED-or-BASIC" 3. For --source-location-arn, specify the Amazon Resource Name (ARN) of your source location. 4. For --destination-location-arn, specify the ARN of your destination location. If you're transferring across AWS Regions or accounts, make sure that the ARN includes the other Region or account ID. 5. For --task-mode, specify ENHANCED or BASIC. For more information, see Understanding task mode differences. 6. Specify other task options as needed. You might be interested in some of the following options: • Specify what data to transfer by using a manifest or filters. • Configure how to handle file metadata and verify data integrity. • Monitor your transfer with task reports or Amazon CloudWatch Logs. For more options, see create-task. Here's an example create-task command that specifies Enhanced mode and several other options: aws datasync create-task \ --source-location-arn "arn:aws:datasync:us-east-1:account-id:location/location- id" \ --destination-location-arn "arn:aws:datasync:us-east-1:account- id:location/location-id" \ --name "task-name" \ --task-mode "ENHANCED" \ Choosing a task mode for your transfer 202 AWS DataSync --options User Guide TransferMode=CHANGED,VerifyMode=ONLY_FILES_TRANSFERRED,ObjectTags=PRESERVE,LogLevel=TRANSFER 7. Run the create-task command. If the command is successful, you get a response that shows you the ARN of the task that you created. For example: { "TaskArn": "arn:aws:datasync:us-east-1:111222333444:task/ task-08de6e6697796f026" } Using the DataSync API You can specify the DataSync task mode by configuring the TaskMode parameter in the CreateTask operation. Choosing what AWS DataSync transfers AWS DataSync lets you choose what to transfer and how you want your data handled. Some options include: • Transferring an exact list of files or object by using a manifest. • Including or excluding certain types of data in your transfer by using a filter. • For recurring transfers, moving only the data that's changed since the last transfer • Overwriting data in the destination location to match what's in the source location. • Choosing which file or object metadata to preserve between your storage locations. Topics • Transferring specific files or objects by using a manifest • Transferring specific files, objects, and folders by using filters • Understanding how DataSync handles file and object metadata • Links and directories copied by AWS DataSync • Configuring how to handle files, objects, and metadata Choosing what data to transfer 203 AWS DataSync User Guide Transferring specific files or objects by using a manifest A manifest is a list of files or objects that you want AWS DataSync to transfer. For example, instead of having to transfer everything in an S3 bucket with potentially millions of objects, DataSync transfers only the objects that you list in your manifest. Manifests are similar to filters but let you identify exactly which files or objects to transfer instead of data that matches a filter pattern. Creating your manifest A manifest is a comma-separated values (CSV)-formatted file that lists the files or objects in your source location that you want DataSync to transfer. If your source is an S3 bucket, you can also include which version of an object to transfer. Topics • Guidelines • Example manifests Guidelines Use these guidelines to help you create a manifest that works with DataSync. Do • Specify the full path of each file or object that you
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.