], "Resource": ["arn:aws:iam::*:user/${aws:username}"] }, { "Sid": "NavigateInConsole", "Effect": "Allow", "Action": [ "iam:GetGroupPolicy", "iam:GetPolicyVersion", "iam:GetPolicy", "iam:ListAttachedGroupPolicies", "iam:ListGroupPolicies", "iam:ListPolicyVersions", "iam:ListPolicies", "iam:ListUsers" ], "Resource": "*" } ] } Common operations in Timestream for LiveAnalytics Below are sample IAM policies that allow for common operations in the Timestream for LiveAnalytics service. Topics • Allowing all operations • Allowing SELECT operations • Allowing SELECT operations on multiple resources • Allowing metadata operations • Allowing INSERT operations • Allowing CRUD operations • Cancel queries and select data without specifying resources Identity and access management 544 Amazon Timestream Developer Guide • Create, describe, delete and describe a database • Limit listed databases by tag{"Owner": "${username}"} • List all tables in a database • Create, describe, delete, update and select on a table • Limit a query by table Allowing all operations The following is a sample policy that allows all operations in Timestream for LiveAnalytics. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "timestream:*" ], "Resource": "*" } ] } Allowing SELECT operations The following sample policy allows SELECT-style queries on a specific resource. Note Replace <account_ID> with your Amazon account ID. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "timestream:Select", "timestream:DescribeTable", Identity and access management 545 Amazon Timestream Developer Guide "timestream:ListMeasures" ], "Resource": "arn:aws:timestream:us-east-1:<account_ID>:database/sampleDB/ table/DevOps" }, { "Effect": "Allow", "Action": [ "timestream:DescribeEndpoints", "timestream:SelectValues", "timestream:CancelQuery" ], "Resource": "*" } ] } Allowing SELECT operations on multiple resources The following sample policy allows SELECT-style queries on multiple resources. Note Replace <account_ID> with your Amazon account ID. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "timestream:Select", "timestream:DescribeTable", "timestream:ListMeasures" ], "Resource": [ "arn:aws:timestream:us-east-1:<account_ID>:database/sampleDB/table/ DevOps", "arn:aws:timestream:us-east-1:<account_ID>:database/sampleDB/table/ DevOps1", "arn:aws:timestream:us-east-1:<account_ID>:database/sampleDB/table/ DevOps2" Identity and access management 546 Developer Guide Amazon Timestream ] }, { "Effect": "Allow", "Action": [ "timestream:DescribeEndpoints", "timestream:SelectValues", "timestream:CancelQuery" ], "Resource": "*" } ] } Allowing metadata operations The following sample policy allows the user to perform metadata queries, but does not allow the user to perform operations that read or write actual data in Timestream for LiveAnalytics. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "timestream:DescribeEndpoints", "timestream:DescribeTable", "timestream:ListMeasures", "timestream:SelectValues", "timestream:ListTables", "timestream:ListDatabases", "timestream:CancelQuery" ], "Resource": "*" } ] } Allowing INSERT operations The following sample policy allows a user to perform an INSERT operation on database/ sampleDB/table/DevOps in account <account_id>. Identity and access management 547 Amazon Timestream Note Replace <account_ID> with your Amazon account ID. 
{ "Version": "2012-10-17", "Statement": [ { "Action": [ "timestream:WriteRecords" ], "Resource": [ "arn:aws:timestream:us-east-1:<account_id>:database/sampleDB/table/DevOps" ], "Effect": "Allow" }, { "Action": [ "timestream:DescribeEndpoints" ], "Resource": "*", "Effect": "Allow" } ] } Allowing CRUD operations The following sample policy allows a user to perform CRUD operations in Timestream for LiveAnalytics. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "timestream:DescribeEndpoints", "timestream:CreateTable", "timestream:DescribeTable", "timestream:CreateDatabase", "timestream:DescribeDatabase", "timestream:ListTables", "timestream:ListDatabases", "timestream:DeleteTable", "timestream:DeleteDatabase", "timestream:UpdateTable", "timestream:UpdateDatabase" ], "Resource": "*" } ] } Cancel queries and select data without specifying resources The following sample policy allows a user to cancel queries and perform Select queries on data that does not require resource specification: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "timestream:SelectValues", "timestream:CancelQuery" ], "Resource": "*" } ] } Create, describe, delete and update a database The following sample policy allows a user to create, describe, delete and update database sampleDB: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "timestream:CreateDatabase", "timestream:DescribeDatabase", "timestream:DeleteDatabase", "timestream:UpdateDatabase" ], "Resource": "arn:aws:timestream:us-east-1:<account_ID>:database/sampleDB" } ] } Limit listed databases by tag {"Owner": "${username}"} The following sample policy allows a user to list all databases that are tagged with the key-value pair {"Owner": "${username}"}: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "timestream:ListDatabases" ], "Resource": "arn:aws:timestream:us-east-1:<account_ID>:database/*", "Condition": { "StringEquals": { "aws:ResourceTag/Owner": "${aws:username}" } } } ] } List all tables in a database The following sample policy allows a user to list all tables in database sampleDB: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "timestream:ListTables" ], "Resource": "arn:aws:timestream:us-east-1:<account_ID>:database/sampleDB/" } ] } Create, describe, delete, update and select on a table The following sample policy allows a user to create tables, describe tables, delete tables, update tables, and perform Select queries
on table DevOps in database sampleDB: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "timestream:CreateTable", "timestream:DescribeTable", "timestream:DeleteTable", "timestream:UpdateTable", "timestream:Select" ], "Resource": "arn:aws:timestream:us-east-1:<account_ID>:database/sampleDB/ table/DevOps" } ] } Limit a query by table The following sample policy allows a user to query all tables except DevOps in database sampleDB: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "timestream:Select" Identity and access management 551 Amazon Timestream ], Developer Guide "Resource": "arn:aws:timestream:us-east-1:<account_ID>:database/sampleDB/ table/*" }, { "Effect": "Deny", "Action": [ "timestream:Select" ], "Resource": "arn:aws:timestream:us-east-1:<account_ID>:database/sampleDB/ table/DevOps" } ] } Timestream for LiveAnalytics resource access based on tags You can use conditions in your identity-based policy to control access to Timestream for LiveAnalytics resources based on tags. This section provides some examples. The following example shows how you can create a policy that grants permissions to a user to view a table if the table's Owner contains the value of that user's user name. { "Version": "2012-10-17", "Statement": [ { "Sid": "ReadOnlyAccessTaggedTables", "Effect": "Allow", "Action": "timestream:Select", "Resource": "arn:aws:timestream:us-east-2:111122223333:database/mydatabase/ table/*", "Condition": { "StringEquals": { "aws:ResourceTag/Owner": "${aws:username}" } } } ] } You can attach this policy to the IAM users in your account. If a user named richard- roe attempts to view an Timestream for LiveAnalytics table, the table must be tagged Identity and access management 552 Amazon Timestream Developer Guide Owner=richard-roe or owner=richard-roe. Otherwise, he is denied access. The condition tag key Owner matches both Owner and owner because condition key names are not case-sensitive. For more information, see IAM JSON Policy Elements: Condition in the IAM User Guide. The following policy grants permissions to a user to create tables with tags if the tag passed in request has a key Owner and a value username: { "Version": "2012-10-17", "Statement": [ { "Sid": "CreateTagTableUser", "Effect": "Allow", "Action": [ "timestream:Create", "timestream:TagResource" ], "Resource": "arn:aws:timestream:us-east-2:111122223333:database/mydatabase/ table/*", "Condition": { "ForAnyValue:StringEquals": { "aws:RequestTag/Owner": "${aws:username}" } } } ] } The policy below allows use of the DescribeDatabase API on any Database that has the env tag set to either dev or test: { "Version": "2012-10-17", "Statement": [ { "Sid": "AllowDescribeEndpoints", "Effect": "Allow", "Action": [ "timestream:DescribeEndpoints" ], "Resource": "*" }, { Identity and access management 553 Amazon Timestream Developer Guide "Sid": "AllowDevTestAccess", "Effect": "Allow", "Action": [ "timestream:DescribeDatabase" ], "Resource": "*", "Condition": { "StringEquals": { "timestream:tag/env": [ "dev", "test" ] } } } ] } { "Version": "2012-10-17", "Statement": [ { "Sid": "AllowTagAccessForDevResources", "Effect": "Allow", "Action": [ "timestream:TagResource" ], "Resource": "*", "Condition": { "StringEquals": { "aws:RequestTag/env": [ "test", "dev" ] } } } ] } This policy uses a Condition key to allow a tag that has the key env and a value of test, qa, or dev to be added to a resource. 
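To make these tag-based examples concrete, the following is a minimal Python (boto3) sketch, not part of the guide's sample policies, that applies an Owner tag to an existing table so that a condition on aws:ResourceTag/Owner matches; the Region, account ID, database, and table names are placeholder values.

import boto3

# Placeholder values; replace with your own Region, account ID, database, and table.
TABLE_ARN = ("arn:aws:timestream:us-east-2:111122223333:"
             "database/mydatabase/table/mytable")

write_client = boto3.client("timestream-write", region_name="us-east-2")

# Tag the table so that policies conditioned on aws:ResourceTag/Owner apply to it.
write_client.tag_resource(
    ResourceARN=TABLE_ARN,
    Tags=[{"Key": "Owner", "Value": "richard-roe"}],
)

# Verify that the tag was applied.
print(write_client.list_tags_for_resource(ResourceARN=TABLE_ARN)["Tags"])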
Scheduled queries List, delete, update, execute ScheduledQuery The following sample policy allows a user to list, delete, update and execute scheduled queries. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "timestream:DeleteScheduledQuery", "timestream:ExecuteScheduledQuery", "timestream:UpdateScheduledQuery", "timestream:ListScheduledQueries", "timestream:DescribeEndpoints" ], "Resource": "*" } ] } CreateScheduledQuery using a customer managed KMS key The following sample policy allows a user to create a scheduled query that is encrypted using a customer managed KMS key, <keyid for ScheduledQuery>. { "Version": "2012-10-17", "Statement": [ { "Action": [ "iam:PassRole" ], "Resource": [ "arn:aws:iam::123456789012:role/ScheduledQueryExecutionRole" ], "Effect": "Allow" }, { "Action": [ "timestream:CreateScheduledQuery", "timestream:DescribeEndpoints" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "kms:DescribeKey", "kms:GenerateDataKey" ], "Resource": "arn:aws:kms:us-west-2:123456789012:key/<keyid for ScheduledQuery>", "Effect": "Allow" } ] } DescribeScheduledQuery using a customer managed KMS key The following sample policy allows a user to describe a scheduled query that was created using a customer managed KMS key, <keyid for ScheduledQuery>. { "Version": "2012-10-17", "Statement": [ { "Action": [ "timestream:DescribeScheduledQuery", "timestream:DescribeEndpoints" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "kms:Decrypt" ], "Resource": "arn:aws:kms:us-west-2:123456789012:key/<keyid for ScheduledQuery>", "Effect": "Allow" } ] } Execution role permissions (using a customer managed KMS key for scheduled query and SSE-KMS for error reports) Attach the following sample policy to the IAM role specified in the ScheduledQueryExecutionRoleArn parameter of the CreateScheduledQuery API that uses a customer managed KMS key for the scheduled query encryption and SSE-KMS encryption
for error reports. { "Version": "2012-10-17", "Statement": [ { "Action": [ "kms:GenerateDataKey", ], "Resource": "arn:aws:kms:us-west-2:123456789012:key/<keyid for ScheduledQuery>", "Effect": "Allow" }, { "Action": [ "kms:Decrypt" ], "Resource": [ "arn:aws:kms:us-west-2:123456789012:key/<keyid for database-1>", "arn:aws:kms:us-west-2:123456789012:key/<keyid for database-n>", "arn:aws:kms:us-west-2:123456789012:key/<keyid for ScheduledQuery>" ], "Effect": "Allow" }, { "Action": [ "sns:Publish" ], "Resource": [ "arn:aws:sns:us-west-2:123456789012:scheduled-query-notification-topic- *" ], "Effect": "Allow" }, { "Action": [ "timestream:Select", Identity and access management 557 Amazon Timestream Developer Guide "timestream:SelectValues", "timestream:WriteRecords" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "s3:PutObject", "s3:GetBucketAcl" ], "Resource": [ "arn:aws:s3:::scheduled-query-error-bucket", "arn:aws:s3:::scheduled-query-error-bucket/*" ], "Effect": "Allow" } ] } Execution role trust relationship The following is the trust relationship for the IAM role specified in the ScheduledQueryExecutionRoleArn parameter of the CreateScheduledQuery API. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "timestream.amazonaws.com" ] }, "Action": "sts:AssumeRole" } ] } Identity and access management 558 Amazon Timestream Developer Guide Allow access to all scheduled queries created within an account Attach the following sample policy to the IAM role specified in the ScheduledQueryExecutionRoleArn parameter, of the CreateScheduledQuery API, to allow access to all scheduled queries created within the an account Account_ID. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "timestream.amazonaws.com" }, "Action": "sts:AssumeRole", "Condition": { "StringEquals": { "aws:SourceAccount": "Account_ID" }, "ArnLike": { "aws:SourceArn": "arn:aws:timestream:us- west-2:Account_ID:scheduled-query/*" } } } ] } Allow access to all scheduled queries with a specific name Attach the following sample policy to the IAM role specified in the ScheduledQueryExecutionRoleArn parameter, of the CreateScheduledQuery API, to allow access to all scheduled queries with a name that starts with Scheduled_Query_Name, within account Account_ID. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "timestream.amazonaws.com" Identity and access management 559 Developer Guide Amazon Timestream }, "Action": "sts:AssumeRole", "Condition": { "StringEquals": { "aws:SourceAccount": "Account_ID" }, "ArnLike": { "aws:SourceArn": "arn:aws:timestream:us- west-2:Account_ID:scheduled-query/Scheduled_Query_Name*" } } } ] } Troubleshooting Amazon Timestream for LiveAnalytics identity and access Use the following information to help you diagnose and fix common issues that you might encounter when working with Timestream for LiveAnalytics and IAM. Topics • I am not authorized to perform an action in Timestream for LiveAnalytics • I am not authorized to perform iam:PassRole • I want to allow people outside of my AWS account to access my Timestream for LiveAnalytics resources I am not authorized to perform an action in Timestream for LiveAnalytics If the AWS Management Console tells you that you're not authorized to perform an action, then you must contact your administrator for assistance. Your administrator is the person that provided you with your sign-in credentials. 
The following example error occurs when the mateojackson IAM user tries to use the console to view details about a table but does not have timestream:Select permissions for the table. User: arn:aws:iam::123456789012:user/mateojackson is not authorized to perform: timestream:Select on resource: mytable In this case, Mateo asks his administrator to update his policies to allow him to access the mytable resource using the timestream:Select action. Identity and access management 560 Amazon Timestream Developer Guide I am not authorized to perform iam:PassRole If you receive an error that you're not authorized to perform the iam:PassRole action, your policies must be updated to allow you to pass a role to Timestream for LiveAnalytics. Some AWS services allow you to pass an existing role to that service instead of creating a new service role or service-linked role. To do this, you must have permissions to pass the role to the service. The following example error occurs when an IAM user named marymajor tries to use the console to perform an action in Timestream for LiveAnalytics. However, the action requires the service to have permissions that are granted by a service role. Mary does not have permissions to pass the role to the service. User: arn:aws:iam::123456789012:user/marymajor is not authorized to perform: iam:PassRole In this case, Mary's policies must be updated to allow her to perform the iam:PassRole action. If you need help, contact your AWS administrator. Your administrator is the person who provided you with your sign-in credentials. I want to allow people outside of my AWS account to access my Timestream for LiveAnalytics resources You can create a role that users in other accounts or people outside of your organization can use to access your resources. You can specify who is trusted to assume the role. For services that support resource-based policies or access control lists (ACLs), you can use those policies to grant people access to your resources. To learn more, consult the following: • To learn whether Timestream for LiveAnalytics supports these features, see How Amazon Timestream for LiveAnalytics works with IAM. • To learn how to provide access to your resources across AWS accounts that you own, see Providing access to an IAM user in another AWS account that you own in the IAM User Guide. • To learn how to
your resources. You can specify who is trusted to assume the role. For services that support resource-based policies or access control lists (ACLs), you can use those policies to grant people access to your resources. To learn more, consult the following: • To learn whether Timestream for LiveAnalytics supports these features, see How Amazon Timestream for LiveAnalytics works with IAM. • To learn how to provide access to your resources across AWS accounts that you own, see Providing access to an IAM user in another AWS account that you own in the IAM User Guide. • To learn how to provide access to your resources to third-party AWS accounts, see Providing access to AWS accounts owned by third parties in the IAM User Guide. Identity and access management 561 Amazon Timestream Developer Guide • To learn how to provide access through identity federation, see Providing access to externally authenticated users (identity federation) in the IAM User Guide. • To learn the difference between using roles and resource-based policies for cross-account access, see Cross account resource access in IAM in the IAM User Guide. Logging and monitoring in Timestream for LiveAnalytics Monitoring is an important part of maintaining the reliability, availability, and performance of Timestream for LiveAnalytics and your AWS solutions. You should collect monitoring data from all of the parts of your AWS solution so that you can more easily debug a multi-point failure if one occurs. However, before you start monitoring Timestream for LiveAnalytics, you should create a monitoring plan that includes answers to the following questions: • What are your monitoring goals? • What resources will you monitor? • How often will you monitor these resources? • What monitoring tools will you use? • Who will perform the monitoring tasks? • Who should be notified when something goes wrong? The next step is to establish a baseline for normal Timestream for LiveAnalytics performance in your environment, by measuring performance at various times and under different load conditions. As you monitor Timestream for LiveAnalytics, store historical monitoring data so that you can compare it with current performance data, identify normal performance patterns and performance anomalies, and devise methods to address issues. To establish a baseline, you should, at a minimum, monitor the following items: • System errors, so that you can determine whether any requests resulted in an error. Topics • Monitoring tools • Logging Timestream for LiveAnalytics API calls with AWS CloudTrail Logging and monitoring 562 Amazon Timestream Monitoring tools Developer Guide AWS provides various tools that you can use to monitor Timestream for LiveAnalytics. You can configure some of these tools to do the monitoring for you, while some of the tools require manual intervention. We recommend that you automate monitoring tasks as much as possible. Topics • Automated monitoring tools • Manual monitoring tools Automated monitoring tools You can use the following automated monitoring tools to watch Timestream for LiveAnalytics and report when something is wrong: • Amazon CloudWatch Alarms – Watch a single metric over a time period that you specify, and perform one or more actions based on the value of the metric relative to a given threshold over a number of time periods. The action is a notification sent to an Amazon Simple Notification Service (Amazon SNS) topic or Amazon EC2 Auto Scaling policy. 
CloudWatch alarms do not invoke actions simply because they are in a particular state; the state must have changed and been maintained for a specified number of periods. For more information, see Monitoring with Amazon CloudWatch. Manual monitoring tools Another important part of monitoring Timestream for LiveAnalytics involves manually monitoring those items that the CloudWatch alarms don't cover. The Timestream for LiveAnalytics, CloudWatch, Trusted Advisor, and other AWS Management Console dashboards provide an at-a-glance view of the state of your AWS environment. • The CloudWatch home page shows the following: • Current alarms and status • Graphs of alarms and resources • Service health status In addition, you can use CloudWatch to do the following: • Create customized dashboards to monitor the services you care about • Graph metric data to troubleshoot issues and discover trends • Search and browse
all your AWS resource metrics • Create and edit alarms to be notified of problems Logging Timestream for LiveAnalytics API calls with AWS CloudTrail Timestream for LiveAnalytics is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service in Timestream for LiveAnalytics. CloudTrail captures Data Definition Language (DDL) API calls for Timestream for LiveAnalytics as events. The calls that are captured include calls from the Timestream for LiveAnalytics console and code calls to the Timestream for LiveAnalytics API operations. If you create a trail, you can enable continuous delivery of CloudTrail events to an Amazon Simple Storage Service (Amazon S3) bucket, including events for Timestream for LiveAnalytics. If you don't configure a trail, you can still view the most recent events on the CloudTrail console in Event history. Using the information collected by CloudTrail, you can determine the request that was made to Timestream for LiveAnalytics, the IP address from which the request was made, who made the request, when it was made, and additional details. To learn more about CloudTrail, see the AWS CloudTrail User Guide. Timestream for LiveAnalytics information in CloudTrail CloudTrail is enabled on your AWS account when you create the account. When activity occurs in Timestream for LiveAnalytics, that activity is recorded in a CloudTrail event along with other AWS service events in Event history. You can view, search, and download recent events in your AWS account. For more information, see Viewing Events with CloudTrail Event History. Warning Currently, Timestream for LiveAnalytics generates CloudTrail events for all management and Query API operations, but does not generate events for WriteRecords and DescribeEndpoints APIs. For an ongoing record of events in your AWS account, including events for Timestream for LiveAnalytics, create a trail. A trail enables CloudTrail to deliver log files to an Amazon S3 bucket. Logging and monitoring 564 Amazon Timestream Developer Guide By default, when you create a trail in the console, the trail applies to all AWS Regions. The trail logs events from all Regions in the AWS partition and delivers the log files to the Amazon S3 bucket that you specify. Additionally, you can configure other AWS services to further analyze and act upon the event data collected in CloudTrail logs. For more information, see the following topics in the AWS CloudTrail User Guide: • Overview for Creating a Trail • CloudTrail Supported Services and Integrations • Configuring Amazon SNS Notifications for CloudTrail • Receiving CloudTrail Log Files from Multiple Regions • Receiving CloudTrail Log Files from Multiple Accounts • Logging data events Every event or log entry contains information about who generated the request. The identity information helps you determine the following: • Whether the request was made with root or AWS Identity and Access Management (IAM) user credentials • Whether the request was made with temporary security credentials for a role or federated user • Whether the request was made by another AWS service For more information, see the CloudTrail userIdentity Element. For Query API events: • Create a trail that receives all events or select events with Timestream for LiveAnalytics resource type AWS::Timestream::Database or AWS::Timestream::Table. 
• Query API requests that do not access any database or table or that result in a validation exception due to a malformed query string are recorded in CloudTrail with a resource type AWS::Timestream::Database and an ARN value of: arn:aws:timestream:(region):(accountId):database/NO_RESOURCE_ACCESSED These events are delivered only to trails that receive events with resource type AWS::Timestream::Database. Logging and monitoring 565 Amazon Timestream Developer Guide Resilience in Amazon Timestream Live Analytics The AWS global infrastructure is built around AWS Regions and Availability Zones. AWS Regions provide multiple physically separated and isolated Availability Zones, which are connected with low-latency, high-throughput, and highly redundant networking. With Availability Zones, you can design and operate applications and databases that automatically fail over between zones without interruption. Availability Zones are more highly available, fault tolerant, and scalable than traditional single or multiple data center infrastructures. For more information about AWS Regions and Availability Zones, see AWS Global Infrastructure. For information about data protection functionality for Timestream available through AWS Backup, see Working with AWS Backup. Infrastructure security in Amazon Timestream Live Analytics As a managed service, Amazon Timestream Live Analytics is protected by the AWS global network security procedures that are described in the Amazon Web Services: Overview of Security Processes whitepaper. You use AWS published API calls to access Timestream Live Analytics through the network. Clients must support Transport Layer Security (TLS) 1.0 or later. We recommend TLS 1.2 or later. Clients must also support cipher suites with perfect forward secrecy (PFS) such as Ephemeral Diffie- Hellman (DHE) or Elliptic Curve Ephemeral Diffie-Hellman (ECDHE). Most modern systems such as Java 7 and later support these modes. Additionally, requests must be signed by
a managed service, Amazon Timestream Live Analytics is protected by the AWS global network security procedures that are described in the Amazon Web Services: Overview of Security Processes whitepaper. You use AWS published API calls to access Timestream Live Analytics through the network. Clients must support Transport Layer Security (TLS) 1.0 or later. We recommend TLS 1.2 or later. Clients must also support cipher suites with perfect forward secrecy (PFS) such as Ephemeral Diffie- Hellman (DHE) or Elliptic Curve Ephemeral Diffie-Hellman (ECDHE). Most modern systems such as Java 7 and later support these modes. Additionally, requests must be signed by using an access key ID and a secret access key that is associated with an IAM principal. Or you can use the AWS Security Token Service (AWS STS) to generate temporary security credentials to sign requests. Timestream Live Analytics is architected so that your traffic is isolated to the specific AWS Region that your Timestream Live Analytics instance resides in. Configuration and vulnerability analysis in Timestream Configuration and IT controls are a shared responsibility between AWS and you, our customer. For more information, see the AWS shared responsibility model. In addition to the shared responsibility model, Timestream for LiveAnalytics users should be aware of the following: Resilience 566 Amazon Timestream Developer Guide • It is the customer responsibility to patch their client applications with the relevant client side dependencies. • Customers should consider penetration testing if appropriate (see https://aws.amazon.com/ security/penetration-testing/.) Incident response in Timestream for LiveAnalytics Amazon Timestream for LiveAnalytics service incidents are reported in the Personal Health Dashboard. You can learn more about the dashboard and AWS Health here. Timestream for LiveAnalytics supports reporting using AWS CloudTrail. For more information, see Logging Timestream for LiveAnalytics API calls with AWS CloudTrail. VPC endpoints (AWS PrivateLink) You can establish a private connection between your VPC and Amazon Timestream for LiveAnalytics by creating an interface VPC endpoint. Interface endpoints are powered by AWS PrivateLink, a technology that enables you to privately access Timestream for LiveAnalytics APIs without an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC don't need public IP addresses to communicate with Timestream for LiveAnalytics APIs. Traffic between your VPC and Timestream for LiveAnalytics does not leave the Amazon network. Each interface endpoint is represented by one or more Elastic Network Interfaces in your subnets. For more information on Interface VPC endpoints, see Interface VPC endpoints (AWS PrivateLink) in the Amazon VPC User Guide. To get started with Timestream for LiveAnalytics and VPC endpoints, we've provided information on specific considerations for Timestream for LiveAnalytics with VPC endpoints, creating an interface VPC endpoint for Timestream for LiveAnalytics, creating a VPC endpoint policy for Timestream for LiveAnalytics, and using the Timestream client (for either the Write or Query SDK) with VPC endpoints.. 
Topics • How VPC endpoints work with Timestream • Creating an interface VPC endpoint for Timestream for LiveAnalytics • Creating a VPC endpoint policy for Timestream for LiveAnalytics Incident response 567 Amazon Timestream Developer Guide How VPC endpoints work with Timestream When you create a VPC endpoint to access either the Timestream Write or Timestream Query SDK, all requests are routed to endpoints within the Amazon network and do not access the public internet. More specifically, your requests are routed to the write and query endpoints of the cell that your account has been mapped to for a given region. To learn more about Timestream's cellular architecture and cell-specific endpoints, you can refer to Cellular architecture. For example, suppose that your account has been mapped to cell1 in us-west-2, and you've set up VPC interface endpoints for writes (ingest-cell1.timestream.us-west-2.amazonaws.com) and queries (query-cell1.timestream.us-west-2.amazonaws.com). In this case, any write requests sent using these endpoints will stay entirely within the Amazon network and will not access the public internet. Considerations for Timestream VPC endpoints Consider the following when creating a VPC endpoint for Timestream: • Before you set up an interface VPC endpoint for Timestream for LiveAnalytics, ensure that you review Interface endpoint properties and limitations in the Amazon VPC User Guide. • Timestream for LiveAnalytics supports making calls to all of its API actions from your VPC. • VPC endpoint policies are supported for Timestream for LiveAnalytics. By default, full access to Timestream for LiveAnalytics is allowed through the endpoint. For more information, see Controlling access to services with VPC endpoints in the Amazon VPC User Guide. • Because of Timestream's architecture, access to both Write and Query actions requires the creation of two VPC interface endpoints, one for each SDK. Additionally, you must specify a cell endpoint (you will only be able to create an endpoint for the Timestream cell that you are mapped to). Detailed information can be found in the create an interface VPC endpoint for Timestream for LiveAnalytics section of this
for LiveAnalytics. By default, full access to Timestream for LiveAnalytics is allowed through the endpoint. For more information, see Controlling access to services with VPC endpoints in the Amazon VPC User Guide. • Because of Timestream's architecture, access to both Write and Query actions requires the creation of two VPC interface endpoints, one for each SDK. Additionally, you must specify a cell endpoint (you will only be able to create an endpoint for the Timestream cell that you are mapped to). Detailed information can be found in the create an interface VPC endpoint for Timestream for LiveAnalytics section of this guide. Now that you understand how Timestream for LiveAnalytics works with VPC endpoints, create an interface VPC endpoint for Timestream for LiveAnalytics. Creating an interface VPC endpoint for Timestream for LiveAnalytics You can create an interface VPC endpoint for the Timestream for LiveAnalytics service using either the Amazon VPC console or the AWS Command Line Interface (AWS CLI). To create a VPC endpoint for Timestream, complete the Timestream-specific steps described below. VPC endpoints 568 Amazon Timestream Note Developer Guide Before completing the steps below, ensure that you understand specific considerations for Timestream VPC endpoints. Constructing a VPC endpoint service name using your Timestream cell Because of Timestream's unique architecture, separate VPC interface endpoints must be created for each SDK (Write and Query). Additionally, you must specify a Timestream cell endpoint (you will only be able to create an endpoint for the Timestream cell that you are mapped to). To use Interface VPC Endpoints to directly connect to Timestream from within your VPC, complete the steps below: 1. First, find an available Timestream cell endpoint. To find an available cell endpoint, use the DescribeEndpoints action (available through both the Write and Query APIs) to list the cell endpoints available in your Timestream account. See the example for further details. 2. Once you've selected a cell endpoint to use, create a VPC interface endpoint string for either the Timestream Write or Query API: • For the Write API: com.amazonaws.<region>.timestream.ingest-<cell> • For the Query API: com.amazonaws.<region>.timestream.query-<cell> where <region> is a valid AWS region code and <cell> is one of the cell endpoint addresses (such as cell1 or cell2) returned in the Endpoints object by the DescribeEndpoints action. See the example for further details. 3. Now that you have constructed a VPC endpoint service name, create an interface endpoint. When asked to provide a VPC endpoint service name, use the VPC endpoint service name that you constructed in Step 2. VPC endpoints 569 Amazon Timestream Developer Guide Example: Constructing your VPC endpoint service name In the following example, the DescribeEndpoints action is executed in the AWS CLI using the Write API in the us-west-2 region: aws timestream-write describe-endpoints --region us-west-2 This command will return the following output: { "Endpoints": [ { "Address": "ingest-cell1.timestream.us-west-2.amazonaws.com", "CachePeriodInMinutes": 1440 } ] } In this case, cell1 is the <cell> , and us-west-2 is the <region>. So, the resulting VPC endpoint service name will look like: com.amazonaws.us-west-2.timestream.ingest-cell1 Now that you've created an interface VPC endpoint for Timestream for LiveAnalytics, create a VPC endpoint policy for Timestream for LiveAnalytics. 
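Before moving on, note that the endpoint lookup and service name construction above can also be scripted. The following Python (boto3) sketch is an illustration rather than part of the procedure: it calls DescribeEndpoints on the Write API, derives the cell from the returned address, builds the corresponding VPC endpoint service name, and creates the interface endpoint; the VPC, subnet, and security group IDs are placeholders.

import boto3

REGION = "us-west-2"

# Step 1: find the cell endpoint that your account is mapped to.
write_client = boto3.client("timestream-write", region_name=REGION)
address = write_client.describe_endpoints()["Endpoints"][0]["Address"]
# Example address: ingest-cell1.timestream.us-west-2.amazonaws.com
cell = address.split(".")[0].replace("ingest-", "")

# Step 2: construct the VPC endpoint service name for the Write API.
service_name = f"com.amazonaws.{REGION}.timestream.ingest-{cell}"
print(service_name)  # e.g. com.amazonaws.us-west-2.timestream.ingest-cell1

# Step 3: create the interface endpoint (the IDs below are placeholders).
ec2 = boto3.client("ec2", region_name=REGION)
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName=service_name,
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)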
Creating a VPC endpoint policy for Timestream for LiveAnalytics You can attach an endpoint policy to your VPC endpoint that controls access to Timestream for LiveAnalytics. The policy specifies the following information: • The principal that can perform actions. • The actions that can be performed. • The resources on which actions can be performed. For more information, see Controlling access to services with VPC endpoints in the Amazon VPC User Guide. Example: VPC endpoint policy for Timestream for LiveAnalytics actions The following is an example of an endpoint policy for Timestream for LiveAnalytics. When attached to an endpoint, this policy grants access to the listed Timestream for LiveAnalytics actions (in this case, ListDatabases) for all principals on all resources. { "Statement":[ { "Principal":"*", "Effect":"Allow", "Action":[ "timestream:ListDatabases" ], "Resource":"*" } ] } Security best practices for Amazon Timestream for LiveAnalytics Amazon Timestream for LiveAnalytics provides a number of security features to consider as you develop and implement your own security policies. The following best practices are general guidelines and don't represent a complete security
solution. Because these best practices might not be appropriate or sufficient for your environment, treat them as helpful considerations rather than prescriptions. Topics • Timestream for LiveAnalytics preventative security best practices Timestream for LiveAnalytics preventative security best practices The following best practices can help you anticipate and prevent security incidents in Timestream for LiveAnalytics. Encryption at rest Timestream for LiveAnalytics encrypts at rest all user data stored in tables using encryption keys stored in AWS Key Management Service (AWS KMS). This provides an additional layer of data protection by securing your data from unauthorized access to the underlying storage. Security best practices 571 Amazon Timestream Developer Guide Timestream for LiveAnalytics uses a single service default key (AWS owned CMK) for encrypting all of your tables. If this key doesn't exist, it is created for you. Service default keys can't be disabled. For more information, see Timestream for LiveAnalytics Encryption at Rest. Use IAM roles to authenticate access to Timestream for LiveAnalytics For users, applications, and other AWS services to access Timestream for LiveAnalytics, they must include valid AWS credentials in their AWS API requests. You should not store AWS credentials directly in the application or EC2 instance. These are long-term credentials that are not automatically rotated, and therefore could have significant business impact if they are compromised. An IAM role enables you to obtain temporary access keys that can be used to access AWS services and resources. For more information, see IAM Roles. Use IAM policies for Timestream for LiveAnalytics base authorization When granting permissions, you decide who is getting them, which Timestream for LiveAnalytics APIs they are getting permissions for, and the specific actions you want to allow on those resources. Implementing least privilege is key in reducing security risk and the impact that can result from errors or malicious intent. Attach permissions policies to IAM identities (that is, users, groups, and roles) and thereby grant permissions to perform operations on Timestream for LiveAnalytics resources. You can do this by using the following: • AWS managed (predefined) policies • Customer managed policies • Tag-based authorization Consider client-side encryption If you store sensitive or confidential data in Timestream for LiveAnalytics, you might want to encrypt that data as close as possible to its origin so that your data is protected throughout its lifecycle. Encrypting your sensitive data in transit and at rest helps ensure that your plaintext data isn't available to any third party. Working with other services Amazon Timestream for LiveAnalytics integrates with a variety of AWS services and popular third- party tools. 
Currently, Timestream for LiveAnalytics supports integrations with the following: Working with other services 572 Developer Guide Amazon Timestream Topics • Amazon DynamoDB • AWS Lambda • AWS IoT Core • Amazon Managed Service for Apache Flink • Amazon Kinesis • Amazon MQ • Amazon MSK • Amazon QuickSight • Amazon SageMaker AI • Amazon SQS • Using DBeaver to work with Amazon Timestream • Grafana • Using SquaredUp to work with Amazon Timestream • Open source Telegraf • JDBC • ODBC • VPC endpoints (AWS PrivateLink) Amazon DynamoDB Using EventBridge Pipes to send DynamoDB data to Timestream You can use EventBridge Pipes to send data from a DynamoDB stream to a Amazon Timestream for LiveAnalytics table. Pipes are intended for point-to-point integrations between supported sources and targets, with support for advanced transformations and enrichment. Pipes reduce the need for specialized knowledge and integration code when developing event-driven architectures. To set up a pipe, you choose the source, add optional filtering, define optional enrichment, and choose the target for the event data. Amazon DynamoDB 573 Amazon Timestream Developer Guide For more information on EventBridge Pipes, see EventBridge Pipes in the EventBridge User Guide. For information on configuring a pipe to deliver events to a Amazon Timestream for LiveAnalytics table, see EventBridge Pipes target specifics. AWS Lambda You can create Lambda functions that interact with Timestream for LiveAnalytics. For example, you can create a Lambda function that runs at regular intervals to execute a query on Timestream and send an SNS notification based on the query results satisfying one or more criteria. To learn more about Lambda, see the AWS Lambda documentation. Topics • Build AWS Lambda functions using Amazon Timestream for LiveAnalytics with Python • Build AWS Lambda functions using Amazon Timestream for LiveAnalytics with JavaScript • Build AWS Lambda functions using Amazon Timestream for LiveAnalytics with Go • Build AWS Lambda functions using Amazon Timestream for LiveAnalytics with C# Build AWS Lambda functions using Amazon Timestream for LiveAnalytics with Python To build AWS Lambda functions using Amazon Timestream for LiveAnalytics with Python, follow the steps below. AWS Lambda 574 Amazon Timestream Developer Guide 1. Create an IAM role for Lambda to assume that will grant the required permissions to access the Timestream Service, as outlined in
using Amazon Timestream for LiveAnalytics with Python • Build AWS Lambda functions using Amazon Timestream for LiveAnalytics with JavaScript • Build AWS Lambda functions using Amazon Timestream for LiveAnalytics with Go • Build AWS Lambda functions using Amazon Timestream for LiveAnalytics with C# Build AWS Lambda functions using Amazon Timestream for LiveAnalytics with Python To build AWS Lambda functions using Amazon Timestream for LiveAnalytics with Python, follow the steps below. AWS Lambda 574 Amazon Timestream Developer Guide 1. Create an IAM role for Lambda to assume that will grant the required permissions to access the Timestream Service, as outlined in Provide Timestream for LiveAnalytics access. 2. Edit the trust relationship of the IAM role to add Lambda service. You can use the commands below to update an existing role so that AWS Lambda can assume it: a. Create the trust policy document: cat > Lambda-Role-Trust-Policy.json << EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "lambda.amazonaws.com" ] }, "Action": "sts:AssumeRole" } ] } EOF b. Update the role from previous step with the trust document aws iam update-assume-role-policy --role-name <name_of_the_role_from_step_1> -- policy-document file://Lambda-Role-Trust-Policy.json Related references are at TimestreamWrite and TimestreamQuery. Build AWS Lambda functions using Amazon Timestream for LiveAnalytics with JavaScript To build AWS Lambda functions using Amazon Timestream for LiveAnalytics with JavaScript, follow the instructions outlined here. Related references are at Timestream Write Client - AWS SDK for JavaScript v3 and Timestream Query Client - AWS SDK for JavaScript v3. AWS Lambda 575 Amazon Timestream Developer Guide Build AWS Lambda functions using Amazon Timestream for LiveAnalytics with Go To build AWS Lambda functions using Amazon Timestream for LiveAnalytics with Go, follow the instructions outlined here. Related references are at timestreamwrite and timestreamquery. Build AWS Lambda functions using Amazon Timestream for LiveAnalytics with C# To build AWS Lambda functions using Amazon Timestream for LiveAnalytics with C#, follow the instructions outlined here. Related references are at Amazon.TimestreamWrite and Amazon.TimestreamQuery. AWS IoT Core You can collect data from IoT devices using AWS IoT Core and route the data to Amazon Timestream through IoT Core rule actions. AWS IoT rule actions specify what to do when a rule is triggered. You can define actions to send data to an Amazon Timestream table, an Amazon DynamoDB database, and invoke an AWS Lambda function. The Timestream action in IoT Rules is used to insert data from incoming messages directly into Timestream. The action parses the results of the IoT Core SQL statement and stores data in Timestream. The names of the fields from returned SQL result set are used as the measure::name and the value of the field is the measure::value. 
For example, consider the SQL statement and the sample message payload: SELECT temperature, humidity from 'iot/topic' { "dataFormat": 5, "rssi": -88, "temperature": 24.04, "humidity": 43.605, "pressure": 101082, "accelerationX": 40, "accelerationY": -20, "accelerationZ": 1016, "battery": 3007, AWS IoT Core 576 Amazon Timestream Developer Guide "txPower": 4, "movementCounter": 219, "device_id": 46216, "device_firmware_sku": 46216 } If an IoT Core rule action for Timestream is created with the SQL statement above, two records will be added to Timestream with measure names temperature and humidity and measure values of 24.04 and 43.605, respectively. You can modify the measure name of a record being added to Timestream by using the AS operator in the SELECT statement. The SQL statement below will create a record with the message name temp instead of temperature. The data type of the measure are inferred from the data type of the value of the message payload. JSON data types such as integer, double, boolean, and string are mapped to Timestream data types of BIGINT, DOUBLE, BOOLEAN, and VARCHAR respectively. Data can also be forced to specific data types using the cast() function. You can specify the timestamp of the measure. If the timestamp is left blank, the time that the entry was processed is used. You can refer to the Timestream rules action documentation for additional details To create an IoT Core rule action to store data in Timestream, follow the steps below: Topics • Prerequisites • Using the console • Using the CLI • Sample application • Video tutorial Prerequisites 1. Create a database in Amazon Timestream using the instructions described in Create a database. 2. Create a table in Amazon Timestream using the instructions described in Create a table. AWS IoT Core 577 Amazon Timestream Using the console Developer Guide 1. Use the AWS Management Console for AWS IoT Core to create a rule by clicking on Manage > Messsage routing > Rules followed by Create rule. 2. Set the rule name to a name of your choice and the SQL to the text shown below SELECT temperature as temp, humidity from 'iot/topic' Select Timestream from the Action list Specify the Timestream database, table, and dimension names along with
described in Create a database. 2. Create a table in Amazon Timestream using the instructions described in Create a table. AWS IoT Core 577 Amazon Timestream Using the console Developer Guide 1. Use the AWS Management Console for AWS IoT Core to create a rule by clicking on Manage > Messsage routing > Rules followed by Create rule. 2. Set the rule name to a name of your choice and the SQL to the text shown below SELECT temperature as temp, humidity from 'iot/topic' Select Timestream from the Action list Specify the Timestream database, table, and dimension names along with the role to write data into Timestream. If the role does not exist, you can create one by clicking on Create Roles 3. 4. 5. To test the rule, follow the instructions shown here. Using the CLI If you haven't installed the AWS Command Line Interface (AWS CLI), do so from here. 1. Save the following rule payload in a JSON file called timestream_rule.json. Replace arn:aws:iam::123456789012:role/TimestreamRole with your role arn which grants AWS IoT access to store data in Amazon Timestream { "actions": [ { "timestream": { "roleArn": "arn:aws:iam::123456789012:role/TimestreamRole", "tableName": "devices_metrics", "dimensions": [ { "name": "device_id", "value": "${clientId()}" }, { "name": "device_firmware_sku", "value": "My Static Metadata" } ], "databaseName": "record_devices" } } AWS IoT Core 578 Amazon Timestream ], "sql": "select * from 'iot/topic'", "awsIotSqlVersion": "2016-03-23", "ruleDisabled": false } 2. Create a topic rule using the following command Developer Guide aws iot create-topic-rule --rule-name timestream_test --topic-rule-payload file:// <path/to/timestream_rule.json> --region us-east-1 3. Retrieve details of topic rule using the following command aws iot get-topic-rule --rule-name timestream_test 4. Save the following message payload in a file called timestream_msg.json { "dataFormat": 5, "rssi": -88, "temperature": 24.04, "humidity": 43.605, "pressure": 101082, "accelerationX": 40, "accelerationY": -20, "accelerationZ": 1016, "battery": 3007, "txPower": 4, "movementCounter": 219, "device_id": 46216, "device_firmware_sku": 46216 } 5. Test the rule using the following command aws iot-data publish --topic 'iot/topic' --payload file://<path/to/ timestream_msg.json> AWS IoT Core 579 Amazon Timestream Sample application Developer Guide To help you get started with using Timestream with AWS IoT Core, we've created a fully functional sample application that creates the necessary artifacts in AWS IoT Core and Timestream for creating a topic rule and a sample application for publishing a data to the topic. 1. Clone the GitHub repository for the sample application for AWS IoT Core integration following the instructions from GitHub 2. Follow the instructions in the README to use an AWS CloudFormation template to create the necessary artifacts in Amazon Timestream and AWS IoT Core and to publish sample messages to the topic. Video tutorial This video explains how IoT Core works with Timestream. Amazon Managed Service for Apache Flink You can use Apache Flink to transfer your time series data from Amazon Managed Service for Apache Flink, Amazon MSK, Apache Kafka, and other streaming technologies directly into Amazon Timestream for LiveAnalytics. We've created an Apache Flink sample data connector for Timestream. We've also created a sample application for sending data to Amazon Kinesis so that the data can flow from Kinesis to Managed Service for Apache Flink, and finally on to Amazon Timestream. 
All of these artifacts are available to you in GitHub. This video tutorial describes the setup. Note Java 11 is the recommended version for using the Managed Service for Apache Flink Application. If you have multiple Java versions, ensure that you export Java 11 to your JAVA_HOME environment variable. Topics • Sample application • Video tutorial Amazon Managed Service for Apache Flink 580 Amazon Timestream Sample application To get started, follow the procedure below: Developer Guide 1. Create a database in Timestream with the name kdaflink following the instructions described in Create a database. 2. Create a table in Timestream with the name kinesisdata1 following the instructions described in Create a table. 3. Create an Amazon Kinesis Data Stream with the name TimestreamTestStream following the instructions described in Creating a Stream. 4. Clone the GitHub repository for the Apache Flink data connector for Timestream following the instructions from GitHub. 5. To compile, run and use the sample application, follow the instructions in the Apache Flink sample data connector README. 6. Compile the Managed Service for Apache Flink application following the instructions for Compiling the Application Code. 7. Upload the Managed Service for Apache Flink application binary following the instructions to Upload the Apache Flink Streaming Code. a. After clicking on Create Application, click on the link of the IAM Role for the application. b. Attach the IAM policies for AmazonKinesisReadOnlyAccess and AmazonTimestreamFullAccess. Note The above IAM policies are not restricted to specific resources and are unsuitable for production use. For a production system, consider using policies that restrict access to specific resources. 8. Clone the GitHub repository for the sample application writing data to Kinesis following the instructions from GitHub. 9.
for Compiling the Application Code. 7. Upload the Managed Service for Apache Flink application binary following the instructions to Upload the Apache Flink Streaming Code. a. After clicking on Create Application, click on the link of the IAM Role for the application. b. Attach the IAM policies for AmazonKinesisReadOnlyAccess and AmazonTimestreamFullAccess. Note The above IAM policies are not restricted to specific resources and are unsuitable for production use. For a production system, consider using policies that restrict access to specific resources. 8. Clone the GitHub repository for the sample application writing data to Kinesis following the instructions from GitHub. 9. Follow the instructions in the README to run the sample application for writing data to Kinesis. 10. Run one or more queries in Timestream to ensure that data is being sent from Kinesis to Managed Service for Apache Flink to Timestream following the instructions to Create a table. Amazon Managed Service for Apache Flink 581 Amazon Timestream Video tutorial Developer Guide This video explains how to use Timestream with Managed Service for Apache Flink. Amazon Kinesis Using Amazon Managed Service for Apache Flink You can send data from Kinesis Data Streams to Timestream for LiveAnalytics using the sample Timestream data connector for Managed Service for Apache Flink. Refer to Amazon Managed Service for Apache Flink for Apache Flink for more information. Using EventBridge Pipes to send Kinesis data to Timestream You can use EventBridge Pipes to send data from a Kinesis stream to a Amazon Timestream for LiveAnalytics table. Pipes are intended for point-to-point integrations between supported sources and targets, with support for advanced transformations and enrichment. Pipes reduce the need for specialized knowledge and integration code when developing event-driven architectures. To set up a pipe, you choose the source, add optional filtering, define optional enrichment, and choose the target for the event data. This integration enables you to leverage the power of Timestream's time-series data analysis capabilities, while simplifying your data ingestion pipeline. Using EventBridge Pipes with Timestream offers the following benefits: Amazon Kinesis 582 Amazon Timestream Developer Guide • Real-time Data Ingestion: Stream data from Kinesis directly to Timestream for LiveAnalytics, enabling real-time analytics and monitoring. • Seamless Integration: Utilize EventBridge Pipes to manage the flow of data without the need for complex custom integrations. • Enhanced Filtering and Transformation: Filter or transform Kinesis records before they are stored in Timestream to meet your specific data processing requirements. • Scalability: Handle high-throughput data streams and ensure efficient data processing with built- in parallelism and batching capabilities. Configuration To set up an EventBridge Pipe to stream data from Kinesis to Timestream, follow these steps: 1. Create a Kinesis stream Ensure you have an active Kinesis data stream from which you want to ingest data. 2. Create a Timestream database and table Set up your Timestream database and table where the data will be stored. 3. Configure the EventBridge Pipe: • Source: Select your Kinesis stream as the source. • Target: Choose Timestream as the target. • Batching Settings: Define batching window and batch size to optimize data processing and reduce latency. 
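If you prefer to script this setup rather than use the console, you can also create the pipe with the AWS CLI. The following command is a minimal sketch only: the pipe name, role ARN, stream ARN, table ARN, and parameter files are placeholders to replace with your own values, and the role must be allowed to read from the stream and write to the table. Complete examples of the source and target parameter JSON appear later in this section.

aws pipes create-pipe \
    --name my-kinesis-to-timestream-pipe \
    --role-arn arn:aws:iam::123456789012:role/my-pipe-role \
    --source arn:aws:kinesis:us-east-1:123456789012:stream/my-kinesis-stream \
    --target arn:aws:timestream:us-east-1:123456789012:database/my-database/table/my-table \
    --source-parameters file://kinesis-source-parameters.json \
    --target-parameters file://timestream-target-parameters.json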
Important When setting up a pipe, we recommend testing the correctness of all configurations by ingesting a few records. Please note that successful creation of a pipe does not guarantee that the pipeline is correct and data will flow without errors. There may be runtime errors, such as incorrect table, incorrect dynamic path parameter, or invalid Timestream record after applying mapping, that will be discovered when actual data flows through the pipe. The following configurations determine the rate at which data is ingested: Amazon Kinesis 583 Amazon Timestream Developer Guide • BatchSize: The maximum size of the batch that will be sent to Timestream for LiveAnalytics. Range: 0 - 100. Recommendation is to keep this value as 100 to get maximum throughput. • MaximumBatchingWindowInSeconds: The maximum time to wait to fill the batchSize before the batch is sent to Timestream for LiveAnalytics target. Depending on the rate of incoming events, this configuration will decide the delay of ingestion, recommendation is to keep this value < 10s to keep sending the data to Timestream in near real-time. • ParallelizationFactor: The number of batches to process concurrently from each shard. Recommendation is to use the maximum value of 10 to get maximum throughput and near real- time ingestion. If your stream is read by multiple targets, use enhanced fan-out to provide a dedicated consumer to your pipe to achieve high throughput. For more information, see Developing enhanced fan- out consumers with the Kinesis Data Streams API in the Kinesis Data Streams User Guide. Note The maximum throughput that can be achieved is bounded by concurrent pipe executions per account. The following configuration ensures prevention of data loss: • DeadLetterConfig: Recommendation is to always configure DeadLetterConfig to avoid any data loss
Recommendation is to use the maximum value of 10 to get maximum throughput and near real- time ingestion. If your stream is read by multiple targets, use enhanced fan-out to provide a dedicated consumer to your pipe to achieve high throughput. For more information, see Developing enhanced fan- out consumers with the Kinesis Data Streams API in the Kinesis Data Streams User Guide. Note The maximum throughput that can be achieved is bounded by concurrent pipe executions per account. The following configuration ensures prevention of data loss: • DeadLetterConfig: Recommendation is to always configure DeadLetterConfig to avoid any data loss for cases when events could not be ingested to Timestream for LiveAnalytics due to user errors. Optimize your pipe's performance with the following configuration settings, which helps prevent records from causing slowdowns or blockages. • MaximumRecordAgeInSeconds: Records older than this will not be processed and will directly get moved to DLQ. We recommend setting this value to be no higher than the configured Memory store retention period of the target Timestream table. • MaximumRetryAttempts: The number of retry attempts for a record before the record is sent to DeadLetterQueue. Recommendation is to configure this at 10. This should be able to help address any transient issues and for persistent issues, the record will be moved to DeadLetterQueue and unblock the rest of the stream. Amazon Kinesis 584 Amazon Timestream Developer Guide • OnPartialBatchItemFailure: For sources that support partial batch processing, we recommend you to enable this and configure it as AUTOMATIC_BISECT for additional retry of failed records before dropping/sending to DLQ. Configuration example Here is an example of how to configure an EventBridge Pipe to stream data from a Kinesis stream to a Timestream table: Example IAM policy updates for Timestream { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "timestream:WriteRecords" ], "Resource": [ "arn:aws:timestream:us-east-1:123456789012:database/my-database/table/ my-table" ] }, { "Effect": "Allow", "Action": [ "timestream:DescribeEndpoints" ], "Resource": "*" } ] } Example Kinesis stream configuration { "Source": "arn:aws:kinesis:us-east-1:123456789012:stream/my-kinesis-stream", "SourceParameters": { "KinesisStreamParameters": { "BatchSize": 100, "DeadLetterConfig": { Amazon Kinesis 585 Amazon Timestream Developer Guide "Arn": "arn:aws:sqs:us-east-1:123456789012:my-sqs-queue" }, "MaximumBatchingWindowInSeconds": 5, "MaximumRecordAgeInSeconds": 1800, "MaximumRetryAttempts": 10, "StartingPosition": "LATEST", "OnPartialBatchItemFailure": "AUTOMATIC_BISECT" } } } Example Timestream target configuration { "Target": "arn:aws:timestream:us-east-1:123456789012:database/my-database/table/my- table", "TargetParameters": { "TimestreamParameters": { "DimensionMappings": [ { "DimensionName": "sensor_id", "DimensionValue": "$.data.device_id", "DimensionValueType": "VARCHAR" }, { "DimensionName": "sensor_type", "DimensionValue": "$.data.sensor_type", "DimensionValueType": "VARCHAR" }, { "DimensionName": "sensor_location", "DimensionValue": "$.data.sensor_loc", "DimensionValueType": "VARCHAR" } ], "MultiMeasureMappings": [ { "MultiMeasureName": "readings", "MultiMeasureAttributeMappings": [ { "MultiMeasureAttributeName": "temperature", "MeasureValue": "$.data.temperature", "MeasureValueType": "DOUBLE" Amazon Kinesis 586 Developer Guide Amazon Timestream }, { "MultiMeasureAttributeName": "humidity", "MeasureValue": 
"$.data.humidity", "MeasureValueType": "DOUBLE" }, { "MultiMeasureAttributeName": "pressure", "MeasureValue": "$.data.pressure", "MeasureValueType": "DOUBLE" } ] } ], "SingleMeasureMappings": [], "TimeFieldType": "TIMESTAMP_FORMAT", "TimestampFormat": "yyyy-MM-dd HH:mm:ss.SSS", "TimeValue": "$.data.time", "VersionValue": "$.approximateArrivalTimestamp" } } } Event transformation EventBridge Pipes allow you to transform data before it reaches Timestream. You can define transformation rules to modify the incoming Kinesis records, such as changing field names. Suppose your Kinesis stream contains temperature and humidity data. You can use an EventBridge transformation to rename these fields before inserting them into Timestream. Best practices Batching and Buffering • Configure the batching window and size to balance between write latency and processing efficiency. • Use a batching window to accumulate enough data before processing, reducing the overhead of frequent small batches. Parallel Processing Amazon Kinesis 587 Amazon Timestream Developer Guide Utilize the ParallelizationFactor setting to increase concurrency, especially for high-throughput streams. This ensures that multiple batches from each shard can be processed simultaneously. Data Transformation Leverage the transformation capabilities of EventBridge Pipes to filter and enhance records before storing them in Timestream. This can help in aligning the data with your analytical requirements. Security • Ensure that the IAM roles used for EventBridge Pipes have the necessary permissions to read from Kinesis and write to Timestream. • Use encryption and access control measures to secure data in transit and at rest. Debugging failures • Automatic Disabling of Pipes Pipes will be automatically disabled in about 2 hours if the target does not exist or has permission issues • Throttles Pipes have the capability to automatically back off and retry until the throttles have reduced. • Enabling Logs We recommend you enable Logs at ERROR level and include execution data to get more insights into failed. Upon any failure, these logs will contain request/response sent/received from Timestream. This helps you understand the error associated and if needed reprocess the records after fixing it. Monitoring We recommend you to set up alarms on the following to detect any issues with data flow: • Maximum Age of the Record in Source • GetRecords.IteratorAgeMilliseconds • Failure metrics in Pipes • ExecutionFailed Amazon Kinesis 588 Amazon Timestream • TargetStageFailed • Timestream Write API errors • UserErrors Developer Guide For additional monitoring metrics, see Monitoring EventBridge in the EventBridge User Guide. Amazon MQ
execution data to get more insights into failed. Upon any failure, these logs will contain request/response sent/received from Timestream. This helps you understand the error associated and if needed reprocess the records after fixing it. Monitoring We recommend you to set up alarms on the following to detect any issues with data flow: • Maximum Age of the Record in Source • GetRecords.IteratorAgeMilliseconds • Failure metrics in Pipes • ExecutionFailed Amazon Kinesis 588 Amazon Timestream • TargetStageFailed • Timestream Write API errors • UserErrors Developer Guide For additional monitoring metrics, see Monitoring EventBridge in the EventBridge User Guide. Amazon MQ Using EventBridge Pipes to send Amazon MQ data to Timestream You can use EventBridge Pipes to send data from a Amazon MQ broker to a Amazon Timestream for LiveAnalytics table. Pipes are intended for point-to-point integrations between supported sources and targets, with support for advanced transformations and enrichment. Pipes reduce the need for specialized knowledge and integration code when developing event-driven architectures. To set up a pipe, you choose the source, add optional filtering, define optional enrichment, and choose the target for the event data. For more information on EventBridge Pipes, see EventBridge Pipes in the EventBridge User Guide. For information on configuring a pipe to deliver events to a Amazon Timestream for LiveAnalytics table, see EventBridge Pipes target specifics. Amazon MQ 589 Amazon Timestream Amazon MSK Developer Guide Using Managed Service for Apache Flink to send Amazon MSK data to Timestream for LiveAnalytics You can send data from Amazon MSK to Timestream by building a data connector similar to the sample Timestream data connector for Managed Service for Apache Flink. Refer to Amazon Managed Service for Apache Flink for more information. Using Kafka Connect to send Amazon MSK data to Timestream for LiveAnalytics You can use Kafka Connect to ingest your time series data from Amazon MSK directly into Timestream for LiveAnalytics. We've created a sample Kafka Sink Connector for Timestream. We've also created a sample Apache jMeter test plan for publishing data to a Kafka topic, so that the data can flow from the topic through the Timestream Kafka Sink Connector, to an Timestream for LiveAnalytics table. All of these artifacts are available on GitHub. Note Java 11 is the recommended version for using the Timestream Kafka Sink Connector. If you have multiple Java versions, ensure that you export Java 11 to your JAVA_HOME environment variable. Creating a sample application To get started, follow the procedure below. 1. In Timestream for LiveAnalytics, create a database with the name kafkastream. See the procedure ??? for detailed instructions. 2. In Timestream for LiveAnalytics, create a table with the name purchase_history. See the procedure ??? for detailed instructions. 3. Follow the instructions shared in the to create the following: , and . • An Amazon MSK cluster Amazon MSK 590 Amazon Timestream Developer Guide • An Amazon EC2 instance that is configured as a Kafka producer client machine • A Kafka topic See the prerequisites of the kafka_ingestor project for detailed instructions. 4. Clone the Timestream Kafka Sink Connector repository. See Cloning a repository on GitHub for detailed instructions. 5. Compile the plugin code. See Connector - Build from source on GitHub for detailed instructions. 6. Upload the following files to an S3 bucket: following the instructions described in . 
• The jar file (kafka-connector-timestream->VERSION<-jar-with-dependencies.jar) from the / target directory • The sample json schema file, purchase_history.json. See Uploading objects in the Amazon S3 User Guide for detailed instructions. 7. Create two VPC endpoints. These endpoints would be used by the MSK Connector to access the resources using AWS PrivateLink. • One to access the Amazon S3 bucket • One to access the Timestream for LiveAnalytics table. See VPC Endpoints for detailed instructions. 8. Create a custom plugin with the uploaded jar file. See Plugins in the Amazon MSK Developer Guide for detailed instructions. 9. Create a custom worker configuration with the JSON content described in Worker Configuration parameters. following the instructions described in See Creating a custom worker configuration in the Amazon MSK Developer Guide for detailed instructions. 10. Create a service execution IAM role. See IAM Service Role for detailed instructions. Amazon MSK 591 Amazon Timestream Developer Guide 11. Create an Amazon MSK connector with the custom plugin, custom worker configuration, and service execution IAM role created in the previous steps and with the Sample Connector Configuration. See Creating a connector in the Amazon MSK Developer Guide for detailed instructions. Make sure to update the values of the below configuration parameters with respective values. See Connector Configuration parameters for details. • aws.region • timestream.schema.s3.bucket.name • timestream.ingestion.endpoint The connector creation takes 5–10 minutes to complete. The pipeline is ready when its status changes to Running. 12. Publish a continuous stream of messages for writing data to the Kafka topic created.
Create an Amazon MSK connector with the custom plugin, custom worker configuration, and service execution IAM role created in the previous steps and with the Sample Connector Configuration. See Creating a connector in the Amazon MSK Developer Guide for detailed instructions. Make sure to update the values of the below configuration parameters with respective values. See Connector Configuration parameters for details. • aws.region • timestream.schema.s3.bucket.name • timestream.ingestion.endpoint The connector creation takes 5–10 minutes to complete. The pipeline is ready when its status changes to Running. 12. Publish a continuous stream of messages for writing data to the Kafka topic created. See How to use it for detailed instructions. 13. Run one or more queries to ensure that the data is being sent from Amazon MSK to MSK Connect to the Timestream for LiveAnalytics table. See the procedure ??? for detailed instructions. Additional resources The blog, Real-time serverless data ingestion from your Kafka clusters into Timestream for LiveAnalytics using Kafka Connect explains setting up an end-to-end pipeline using the Timestream for LiveAnalytics Kafka Sink Connector, starting from a Kafka producer client machine that uses the Apache jMeter test plan to publish thousands of sample messages to a Kafka topic to verifying the ingested records in an Timestream for LiveAnalytics table. Amazon QuickSight You can use Amazon QuickSight to analyze and publish data dashboards that contain your Amazon Timestream data. This section describes how you can create a new QuickSight data source connection, modify permissions, create new datasets, and perform an analysis. This video tutorial describes how to work with Timestream and QuickSight. Amazon QuickSight 592 Amazon Timestream Note Developer Guide All datasets in QuickSight are read-only. You can't make any changes to your actual data in Timestream by using QuickSight to remove the data source, dataset, or fields. Topics • Accessing Amazon Timestream from QuickSight • Create a new QuickSight data source connection for Timestream • Edit permissions for the QuickSight data source connection for Timestream • Create a new QuickSight dataset for Timestream • Create a new analysis for Timestream • Video tutorial Accessing Amazon Timestream from QuickSight Before you can proceed, Amazon QuickSight needs to be authorized to connect to Amazon Timestream. If connections are not enabled, you will receive an error when you try to connect. A QuickSight administrator can authorize connections to AWS resources. To authorize a connection from QuickSight to Timestream, follow the procedure at Using Other AWS Services: Scoping Down Access, choosing Amazon Timestream in step 5. Create a new QuickSight data source connection for Timestream Note The connection between Amazon QuickSight and Amazon Timestream is encrypted in transit using SSL (TLS 1.2). You cannot create an unencrypted connection. 1. Ensure you have configured the appropriate permissions for Amazon QuickSight to access Amazon Timestream, as described in Accessing Amazon Timestream from QuickSight. 2. Begin by creating a new dataset. Choose Datasets from the navigation pane, then choose New Dataset. 3. Select the Timestream data source card. Amazon QuickSight 593 Amazon Timestream Developer Guide 4. For Data source name, enter a name for your Timestream data source connection, for example US Timestream Data. 
Note Because you can create many datasets from a connection to Timestream, it's best to keep the name simple. 5. Choose Validate connection to check that you can successfully connect to Timestream. Note Validate connection only validates that you can connect. However, it doesn't validate a specific table or query. 6. Choose Create data source to proceed. 7. For Database, choose Select... to view the list of available options. Choose the one you want to use. 8. Choose Select to continue. 9. Choose one of the following: • To import your data into QuickSight's in-memory engine (called SPICE), choose Import to SPICE for quicker analytics. • To allow QuickSight to run a query against your data each time you refresh the dataset or use the analysis or dashboard, choose Directly query your data. 10. Choose Edit/Preview and then Save to save your dataset and close it. Edit permissions for the QuickSight data source connection for Timestream The following procedure describes how to view, add, and revoke permissions for other QuickSight users so that they can access the same Timestream data source. The people need to be active users in QuickSight before you can add them. Note In QuickSight, data sources have two permissions levels: user and owner. • Choose user to allow read access. Amazon QuickSight 594 Amazon Timestream Developer Guide • Choose owner to allow that user to edit, share, or delete this QuickSight data source. 1. Ensure you have configured the appropriate permissions for Amazon QuickSight to access Amazon Timestream, as described in Accessing Amazon Timestream from QuickSight. 2. Choose Datasets at left, then scroll down to find the data source card for your Timestream connection. For example US Timestream Data.
need to be active users in QuickSight before you can add them. Note In QuickSight, data sources have two permissions levels: user and owner. • Choose user to allow read access. Amazon QuickSight 594 Amazon Timestream Developer Guide • Choose owner to allow that user to edit, share, or delete this QuickSight data source. 1. Ensure you have configured the appropriate permissions for Amazon QuickSight to access Amazon Timestream, as described in Accessing Amazon Timestream from QuickSight. 2. Choose Datasets at left, then scroll down to find the data source card for your Timestream connection. For example US Timestream Data. 3. Choose the Timestream data source card. 4. Choose Share data source. A list of current permissions displays. 5. 6. (Optional) To edit permissions, you can choose user or owner. (Optional) To revoke permissions, choose Revoke access. People you revoke can't create new datasets from this data source. However, their existing datasets will still have access to this data source. 7. To add permissions, choose Invite users, then follow these steps to add a user: a. Add people to allow them to use the same data source. b. For each, choose the Permission that you want to apply. 8. When you are finished, choose Close. Create a new QuickSight dataset for Timestream 1. Ensure you have configured the appropriate permissions for Amazon QuickSight to access Amazon Timestream, as described in Accessing Amazon Timestream from QuickSight. 2. Choose Datasets at left, then scroll down to find the data source card for your Timestream connection. If you have many data sources, you can use the search bar at the top of the page to find it with a partial match on the name. 3. Choose the Timestream data source card. Then choose Create data set. 4. For Database, choose Select to view the list of available options. Choose the database that you want to use. 5. For Tables, choose the table that you want to use. 6. Choose Edit/Preview. 7. (Optional) To add more data, choose Add data at top right. a. Choose Switch data source, and choose a different data source. Amazon QuickSight 595 Amazon Timestream Developer Guide b. c. d. e. Follow the UI prompts to finish adding data. After adding new data to the same dataset, choose Configure this join (the two red dots). Set up a join for each additional table. If you want to add calculated fields, choose Add calculated field. To use Sagemaker, choose Augment with SageMaker. This option is only available in QuickSight Enterprise edition. f. Uncheck any fields you want to omit. g. Update any data types you want to change. 8. When you are done, choose Save to save and close the dataset. Create a new analysis for Timestream 1. Ensure you have configured the appropriate permissions for Amazon QuickSight to access Amazon Timestream, as described in Accessing Amazon Timestream from QuickSight. 2. Choose Analyses at left. 3. Choose one of the following: • To create a new analysis, choose New analysis at right. • To add the Timestream dataset to an existing analysis, open the analysis you want to edit. Choose the pencil icon near at top left, then Add data set. Start the first data visualization by choosing fields on the left. For more information, see Working with Analyses - Amazon QuickSight 4. 5. Video tutorial This video explains how QuickSight works with Timestream. Amazon SageMaker AI You can use Amazon SageMaker Notebooks to integrate your machine learning models with Amazon Timestream. 
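Inside a notebook, you typically read the time series data with the Timestream query API before passing it to a training job. The following is a minimal boto3 sketch, not part of the sample notebook described below; the Region, database name (sampleDB), table name (DevOps), and measure names are placeholders that you would replace with your own.

import boto3

# Query client for Timestream for LiveAnalytics; the Region is a placeholder.
query_client = boto3.client("timestream-query", region_name="us-east-1")

# Placeholder database, table, and measure names; replace with your own schema.
query = """
SELECT hostname, AVG(measure_value::double) AS avg_cpu
FROM "sampleDB"."DevOps"
WHERE measure_name = 'cpu_utilization' AND time > ago(1h)
GROUP BY hostname
"""

# The Query API is paginated; collect all rows before converting them
# into a training data set (for example, a pandas DataFrame).
paginator = query_client.get_paginator("query")
rows = []
for page in paginator.paginate(QueryString=query):
    rows.extend(page["Rows"])

print(f"Fetched {len(rows)} rows from Timestream")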
To help you get started, we have created a sample SageMaker Notebook that processes data from Timestream. The data is inserted into Timestream from a multi-threaded Python application continuously sending data. The source code for the sample SageMaker Notebook and the sample Python application are available in GitHub. Amazon SageMaker AI 596 Amazon Timestream Developer Guide 1. Create a database and table following the instructions described in Create a database and Create a table. 2. Clone the GitHub repository for the multi-threaded Python sample application following the instructions from GitHub. 3. Clone the GitHub repository for the sample Timestream SageMaker Notebook following the instructions from GitHub. 4. Run the application for continuously ingesting data into Timestream following the instructions in the README. 5. Follow the instructions to create an Amazon S3 bucket for Amazon SageMaker as described here. 6. Create an Amazon SageMaker instance with latest boto3 installed: In addition to the instructions described here, follow the steps below: a. On the Create notebook instance page, click on Additional Configuration b. Click on Lifecycle configuration - optional and select Create a new lifecycle configuration c. On the Create lifecycle configuration wizard box, do the following: i. ii. Fill in a desired name to the configuration, e.g. on-start In Start Notebook script, copy-paste the script content from Github iii. Replace
in the README. 5. Follow the instructions to create an Amazon S3 bucket for Amazon SageMaker as described here. 6. Create an Amazon SageMaker instance with latest boto3 installed: In addition to the instructions described here, follow the steps below: a. On the Create notebook instance page, click on Additional Configuration b. Click on Lifecycle configuration - optional and select Create a new lifecycle configuration c. On the Create lifecycle configuration wizard box, do the following: i. ii. Fill in a desired name to the configuration, e.g. on-start In Start Notebook script, copy-paste the script content from Github iii. Replace PACKAGE=scipy with PACKAGE=boto3 in the pasted script. 7. Click on Create configuration 8. Go to the IAM service in the AWS Management Console and find the newly created SageMaker execution role for the notebook instance. 9. Attach the IAM policy for AmazonTimestreamFullAccess to the execution role. Note The AmazonTimestreamFullAccess IAM policy is not restricted to specific resources and is unsuitable for production use. For a production system, consider using policies that restrict access to specific resources. 10. When the status of the notebook instance is InService, choose Open Jupyter to launch a SageMaker Notebook for the instance Amazon SageMaker AI 597 Amazon Timestream Developer Guide 11. Upload the files timestreamquery.py and Timestream_SageMaker_Demo.ipynb into the Notebook by selecting the Upload button 12. Choose Timestream_SageMaker_Demo.ipynb Note If you see a pop up with Kernel not found, choose conda_python3 and click Set Kernel. 13. Modify DB_NAME, TABLE_NAME, bucket, and ENDPOINT to match the database name, table name, S3 bucket name, and region for the training models. 14. Choose the play icon to run the individual cells 15. When you get to the cell Leverage Timestream to find hosts with average CPU utilization across the fleet, ensure that the output returns at least 2 host names. Note If there are less than 2 host names in the output, you may need to rerun the sample Python application ingesting data into Timestream with a larger number of threads and host-scale. 16. When you get to the cell Train a Random Cut Forest (RCF) model using the CPU utilization history, change the train_instance_type based on the resource requirements for your training job 17. When you get to the cell Deploy the model for inference, change the instance_type based on the resource requirements for your inference job Note It may take a few minutes to train the model. When the training is complete, you will see the message Completed - Training job completed in the output of the cell. 18. Run the cell Stop and delete the endpoint to clean up resources. You can also stop and delete the instance from the SageMaker console Amazon SageMaker AI 598 Amazon Timestream Amazon SQS Developer Guide Using EventBridge Pipes to send Amazon SQS data to Timestream You can use EventBridge Pipes to send data from a Amazon SQS queue to a Amazon Timestream for LiveAnalytics table. Pipes are intended for point-to-point integrations between supported sources and targets, with support for advanced transformations and enrichment. Pipes reduce the need for specialized knowledge and integration code when developing event-driven architectures. To set up a pipe, you choose the source, add optional filtering, define optional enrichment, and choose the target for the event data. For more information on EventBridge Pipes, see EventBridge Pipes in the EventBridge User Guide. 
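The overall flow is the same as for the Kinesis source described earlier; only the source side of the pipe changes. The following is a minimal sketch of the SQS source settings with a placeholder queue ARN and account ID; the Timestream target parameters are configured in the same way as in the Kinesis example.

{
    "Source": "arn:aws:sqs:us-east-1:123456789012:my-sqs-queue",
    "SourceParameters": {
        "SqsQueueParameters": {
            "BatchSize": 10,
            "MaximumBatchingWindowInSeconds": 5
        }
    }
}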
For information on configuring a pipe to deliver events to a Amazon Timestream for LiveAnalytics table, see EventBridge Pipes target specifics. Using DBeaver to work with Amazon Timestream DBeaver is a free universal SQL client that can be used to manage any database that has a JDBC driver. It is widely used among developers and database administrators because of its robust data viewing, editing, and management capabilities. Using DBeaver's cloud connectivity options, you can connect DBeaver to Amazon Timestream natively. DBeaver provides a comprehensive and intuitive interface to work with time series data directly from within a DBeaver application. Using your credentials, it also gives you full access to Amazon SQS 599 Amazon Timestream Developer Guide any queries that you could execute from another query interface. It even lets you create graphs for better understanding and visualization of query results. Setting up DBeaver to work with Timestream Take the following steps to set up DBeaver to work with Timestream: 1. Download and install DBeaver on your local machine. 2. Launch DBeaver, navigate to the database selection area, choose Timeseries in the left pane, and then select the Timestream icon in the right pane: DBeaver 600 Amazon Timestream Developer Guide 3. In the Timestream Connection Settings window, enter all the information necessary to connect to your Amazon Timestream database. Please ensure that the user keys you enter have the permissions necessary to access your Timestream database. Also, be sure to keep the information and keys
work with Timestream Take the following steps to set up DBeaver to work with Timestream: 1. Download and install DBeaver on your local machine. 2. Launch DBeaver, navigate to the database selection area, choose Timeseries in the left pane, and then select the Timestream icon in the right pane: DBeaver 600 Amazon Timestream Developer Guide 3. In the Timestream Connection Settings window, enter all the information necessary to connect to your Amazon Timestream database. Please ensure that the user keys you enter have the permissions necessary to access your Timestream database. Also, be sure to keep the information and keys you input into DBeaver safe and private, as with any sensitive information. 4. Test the connection to ensure that everything is set up correctly: DBeaver 601 Amazon Timestream Developer Guide 5. If the connection test is successful, you can now interact with your Amazon Timestream database just as you would with any other database in DBeaver. For example, you can navigate to the SQL editor or to the ER Diagram view to run queries: DBeaver 602 Amazon Timestream Developer Guide 6. DBeaver also provides powerful data visualization tools. To use them, run your query, then select the graph icon to visualize the result set. The graphing tool can help you better understand data trends over time. DBeaver 603 Amazon Timestream Developer Guide Pairing Amazon Timestream with DBeaver creates an effective environment for managing time series data. You can integrate it seamlessly into your existing workflow to enhance productivity and efficiency. Grafana You can visualize your time series data and create alerts using Grafana. To help you get started with data visualization, we have created a sample dashboard in Grafana that visualizes data sent to Timestream from a Python application and a video tutorial that describes the setup. Topics • Sample application • Video tutorial Sample application 1. Create a database and a table in Timestream following the instructions described in Create a database for more information. Note The default database name and table name for the Grafana dashboard are set to grafanaDB and grafanaTable respectively. Use these names to minimize setup. 2. 3. Install Python 3.7 or higher. Install and configure the Timestream Python SDK.s 4. Clone the GitHub repository for the multi-thread Python application continuously ingesting data into Timestream following the instructions from GitHub. 5. Run the application for continuously ingesting data into Timestream following the instructions in the README. 6. Complete Learn how to create and use Amazon Managed Grafana resources or complete Install Grafana. 7. If installing Grafana instead of using Amazon Managed Grafana, complete Installing Amazon Timestream on Grafana Cloud. 8. Open the Grafana dashboard using a browser of your choice. If you've locally installed Grafana, you can follow the instructions described in the Grafana documentation to log in. Grafana 604 Amazon Timestream Developer Guide 9. After launching Grafana, go to Datasources, click on Add Datasource, search for Timestream, and select the Timestream datasource. 10. Configure the Auth Provider and the region and click Save and Test. 11. Set the default macros. a. b. c. Set $__database to the name of your Timestream database (e.g. grafanaDB). Set $__table to the name of your Timestream table (e.g. grafanaTable). Set $__measure to the most commonly used measure from the table. 12. Click Save and Test. 13. Click on the Dashboards tab. 14. 
Click on Import to import the dashboard. 15. Double click the Sample Application Dashboard. 16. Click on the dashboard settings. 17. Select Variables. 18. Change dbName and tableName to match the names of the Timestream database and table. 19. Click Save. 20. Refresh the dashboard. 21. To create alerts, follow the instructions described in the Grafana documentation to Configure Grafana-managed alert rules. 22. To troubleshoot alerts, follow the instructions described in the Grafana documentation for Troubleshooting. 23. For additional information, see the Grafana documentation. Video tutorial This video explains how Grafana works with Timestream. Using SquaredUp to work with Amazon Timestream SquaredUp is an observability platform that integrates with Amazon Timestream. You can use SquaredUp's intuitive dashboard designer to visualize, analyze, and monitor your time-series data. Dashboards can be shared publicly or privately, and notification channels can be created to alert you when the health state of a monitor changes. SquaredUp 605 Amazon Timestream Developer Guide Using SquaredUp with Amazon Timestream 1. Sign up for SquaredUp and get started for free. 2. Add an AWS data source. 3. Create a dashboard tile that uses the Timestream Query data stream. 4. Optionally, enable monitoring for the tile, create a notification channel, or share the dashboard publicly or privately. 5. Optionally create other tiles to see your Timestream data alongside data from your other monitoring and observability tools. Open source Telegraf You can use the Timestream for LiveAnalytics output plugin for Telegraf to write metrics into Timestream for
of a monitor changes. SquaredUp 605 Amazon Timestream Developer Guide Using SquaredUp with Amazon Timestream 1. Sign up for SquaredUp and get started for free. 2. Add an AWS data source. 3. Create a dashboard tile that uses the Timestream Query data stream. 4. Optionally, enable monitoring for the tile, create a notification channel, or share the dashboard publicly or privately. 5. Optionally create other tiles to see your Timestream data alongside data from your other monitoring and observability tools. Open source Telegraf You can use the Timestream for LiveAnalytics output plugin for Telegraf to write metrics into Timestream for LiveAnalytics directly from open source Telegraf. This section provides an explanation of how to install Telegraf with the Timestream for LiveAnalytics output plugin, how to run Telegraf with the Timestream for LiveAnalytics output plugin, and how open source Telegraf works with Timestream for LiveAnalytics. Topics • Installing Telegraf with the Timestream for LiveAnalytics output plugin • Running Telegraf with the Timestream for LiveAnalytics output plugin • Mapping Telegraf/InfluxDB metrics to the Timestream for LiveAnalytics model Installing Telegraf with the Timestream for LiveAnalytics output plugin As of version 1.16, the Timestream for LiveAnalytics output plugin is available in the official Telegraf release. To install the output plugin on most major operating systems, follow the steps outlined in the InfluxData Telegraf Documentation. To install on the Amazon Linux 2 OS, follow the instructions below. Installing Telegraf with the Timestream for LiveAnalytics output plugin on Amazon Linux 2 To install Telegraf with the Timestream Output Plugin on Amazon Linux 2, perform the following steps. 1. Install Telegraf using the yum package manager. Open source Telegraf 606 Amazon Timestream Developer Guide cat <<EOF | sudo tee /etc/yum.repos.d/influxdb.repo [influxdb] name = InfluxDB Repository - RHEL \$releasever baseurl = https://repos.influxdata.com/rhel/\$releasever/\$basearch/stable enabled = 1 gpgcheck = 1 gpgkey = https://repos.influxdata.com/influxdb.key EOF 2. Run the following command. sudo sed -i "s/\$releasever/$(rpm -E %{rhel})/g" /etc/yum.repos.d/influxdb.repo 3. Install and start Telegraf. sudo yum install telegraf sudo service telegraf start Running Telegraf with the Timestream for LiveAnalytics output plugin You can follow the instructions below to run Telegraf with the Timestream for LiveAnalytics plugin. 1. Generate an example configuration using Telegraf. telegraf --section-filter agent:inputs:outputs --input-filter cpu:mem --output- filter timestream config > example.config 2. Create a database in Timestream using the management console, CLI, or SDKs. 3. In the example.config file, add your database name by editing the following key under the [[outputs.timestream]] section. database_name = "yourDatabaseNameHere" 4. By default, Telegraf will create a table. If you wish create a table manually, set create_table_if_not_exists to false and follow the instructions to create a table using the management console, CLI, or SDKs. 5. In the example.config file, configure credentials under the [[outputs.timestream]] section. The credentials should allow the following operations. Open source Telegraf 607 Amazon Timestream Developer Guide timestream:DescribeEndpoints timestream:WriteRecords Note If you leave create_table_if_not_exists set to true, include: timestream:CreateTable Note If you set describe_database_on_start to true, include the following. 
timestream:DescribeDatabase 6. You can edit the rest of the configuration according to your preferences. 7. When you have finished editing the config file, run Telegraf with the following. ./telegraf --config example.config 8. Metrics should appear within a few seconds, depending on your agent configuration. You should also see the new tables, cpu and mem, in the Timestream console. Mapping Telegraf/InfluxDB metrics to the Timestream for LiveAnalytics model When writing data from Telegraf to Timestream for LiveAnalytics, the data is mapped as follows. • The timestamp is written as the time field. • Tags are written as dimensions. • Fields are written as measures. • Measurements are mostly written as table names (more on this below). Open source Telegraf 608 Amazon Timestream Developer Guide The Timestream for LiveAnalytics output plugin for Telegraf offers multiple options for organizing and storing data in Timestream for LiveAnalytics. This can be described with an example which begins with the data in line protocol format. weather,location=us-midwest,season=summer temperature=82,humidity=71 1465839830100400200 airquality,location=us-west no2=5,pm25=16 1465839830100400200 The following describes the data. • The measurement names are weather and airquality. • The tags are location and season. • The fields are temperature, humidity, no2, and pm25. Topics • Storing the data in multiple tables • Storing the data in a single table Storing the data in multiple tables You can choose to create a separate table per measurement and store each field in a separate row per table. The configuration is mapping_mode = "multi-table". • The Timestream for LiveAnalytics adapter will create two tables, namely, weather and airquality. • Each table row will contain a single field only. The resulting Timestream for LiveAnalytics tables, weather and airquality, will look like this. weather time location season measure_name measure_v alue::bigint us-midwest summer temperature 82 2016-06-13 17:43:50 Open source Telegraf 609 Amazon Timestream Developer Guide time location season measure_name measure_v alue::bigint us-midwest summer humidity 71 2016-06-13 17:43:50 airquality
in multiple tables You can choose to create a separate table per measurement and store each field in a separate row per table. The configuration is mapping_mode = "multi-table". • The Timestream for LiveAnalytics adapter will create two tables, namely, weather and airquality. • Each table row will contain a single field only. The resulting Timestream for LiveAnalytics tables, weather and airquality, will look like this. weather time location season measure_name measure_v alue::bigint us-midwest summer temperature 82 2016-06-13 17:43:50 Open source Telegraf 609 Amazon Timestream Developer Guide time location season measure_name measure_v alue::bigint us-midwest summer humidity 71 2016-06-13 17:43:50 airquality time location measure_name measure_value::big int 2016-06-13 17:43:50 us-midwest 2016-06-13 17:43:50 us-midwest no2 pm25 5 16 Storing the data in a single table You can choose to store all the measurements in a single table and store each field in a separate table row. The configuration is mapping_mode = "single-table". There are two addition configurations when using single-table, single_table_name and single_table_dimension_name_for_telegraf_measurement_name. • The Timestream for LiveAnalytics output plugin will create a single table with name <single_table_name> which includes a <single_table_dimension_name_for_telegraf_measurement_name> column. • The table may contain multiple fields in a single table row. The resulting Timestream for LiveAnalytics table will look like this. Open source Telegraf 610 Amazon Timestream weather time location season Developer Guide measure_n ame measure_v alue::bigint <single_t able_dime nsion_nam e_ for_teleg raf_measu rement_na me> us-midwest summer weather temperature 82 us-midwest summer weather humidity 71 us-midwest summer airquality no2 us-midwest summer weather pm25 5 16 2016-06-13 17:43:50 2016-06-13 17:43:50 2016-06-13 17:43:50 2016-06-13 17:43:50 JDBC You can use a Java Database Connectivity (JDBC) connection to connect Timestream for LiveAnalytics to your business intelligence tools and other applications, such as SQL Workbench. The Timestream for LiveAnalytics JDBC driver currently supports SSO with Okta and Microsoft Azure AD. Topics • Configuring the JDBC driver for Timestream for LiveAnalytics • Connection properties • JDBC URL examples • Setting up Timestream for LiveAnalytics JDBC single sign-on authentication with Okta • Setting up Timestream for LiveAnalytics JDBC single sign-on authentication with Microsoft Azure AD JDBC 611 Amazon Timestream Developer Guide Configuring the JDBC driver for Timestream for LiveAnalytics Follow the steps below to configure the JDBC driver. Topics • Timestream for LiveAnalytics JDBC driver JARs • Timestream for LiveAnalytics JDBC driver class and URL format • Sample application Timestream for LiveAnalytics JDBC driver JARs You can obtain the Timestream for LiveAnalytics JDBC driver via direct download or by adding the driver as a Maven dependency. • As a direct download:. To directly download the Timestream for LiveAnalytics JDBC driver, complete the following steps: 1. Navigate to https://github.com/awslabs/amazon-timestream-driver-jdbc/releases 2. You can use amazon-timestream-jdbc-1.0.1-shaded.jar directly with your business intelligence tools and applications 3. Download amazon-timestream-jdbc-1.0.1-javadoc.jar to a directory of your choice. 4. 
In the directory where you have downloaded amazon-timestream-jdbc-1.0.1- javadoc.jar, run the following command to extract the Javadoc HTML files: jar -xvf amazon-timestream-jdbc-1.0.1-javadoc.jar • As a Maven dependency: To add the Timestream for LiveAnalytics JDBC driver as a Maven dependency, complete the following steps: 1. Navigate to and open your application's pom.xml file in an editor of your choice. 2. Add the JDBC driver as a dependency into your application's pom.xml file: <!-- https://mvnrepository.com/artifact/software.amazon.timestream/amazon- timestream-jdbc --> <dependency> <groupId>software.amazon.timestream</groupId> <artifactId>amazon-timestream-jdbc</artifactId> JDBC 612 Amazon Timestream Developer Guide <version>1.0.1</version> </dependency> Timestream for LiveAnalytics JDBC driver class and URL format The driver class for Timestream for LiveAnalytics JDBC driver is: software.amazon.timestream.jdbc.TimestreamDriver The Timestream JDBC driver requires the following JDBC URL format: jdbc:timestream: To specify database properties through the JDBC URL, use the following URL format: jdbc:timestream:// Sample application To help you get started with using Timestream for LiveAnalytics with JDBC, we've created a fully functional sample application in GitHub. 1. Create a database with sample data following the instructions described here. 2. Clone the GitHub repository for the sample application for JDBC following the instructions from GitHub. 3. Follow the instructions in the README to get started with the sample application. Connection properties The Timestream for LiveAnalytics JDBC driver supports the following options: Topics • Basic authentication options • Standard client info option • Driver configuration option JDBC 613 Amazon Timestream • SDK option • Endpoint configuration option • Credential provider options • SAML-based authentication options for Okta • SAML-based authentication options for Azure AD Developer Guide Note If none of the properties are provided, the Timestream for LiveAnalytics JDBC driver will use the default credentials chain to load the credentials. Note All property keys are case-sensitive. Basic authentication options The following table describes the available Basic Authentication options. Option Description Default AccessKeyId The AWS user access key id. NONE SecretAccessKey SessionToken The AWS user secret access key. The temporary session token required to access a database with multi-factor authentic ation (MFA) enabled. NONE NONE Standard client info option The following table describes the Standard Client Info Option. JDBC 614
authentication options for Azure AD Developer Guide Note If none of the properties are provided, the Timestream for LiveAnalytics JDBC driver will use the default credentials chain to load the credentials. Note All property keys are case-sensitive. Basic authentication options The following table describes the available Basic Authentication options. Option Description Default AccessKeyId The AWS user access key id. NONE SecretAccessKey SessionToken The AWS user secret access key. The temporary session token required to access a database with multi-factor authentic ation (MFA) enabled. NONE NONE Standard client info option The following table describes the Standard Client Info Option. JDBC 614 Amazon Timestream Developer Guide Option Description Default ApplicationName The name of the applicati on currently utilizing the The application name detected by the driver. connection. Applicati onName is used for debugging purposes and will not be communicated to the Timestream for LiveAnalytics service. Driver configuration option The following table describes the Driver Configuration Option. Option Description EnableMetaDataPrep aredStatement Enables Timestream for LiveAnalytics JDBC driver to return metadata for PreparedStatements , but this will incur an additional cost with Timestrea m for LiveAnalytics when retrieving the metadata. Default FALSE Region The database's region. us-east-1 SDK option The following table describes the SDK Option. Option Description Default RequestTimeout The time in milliseconds the AWS SDK will wait for a query 0 JDBC 615 Amazon Timestream Developer Guide Option Description Default SocketTimeout MaxRetryCountClient MaxConnections request before timing out. Non-positive value disables request timeout. The time in milliseconds the AWS SDK will wait for data to be transferred over an open connection before timing out. Value must be non-negative. A value of 0 disables socket timeout. The maximum number of retry attempts for retryable errors with 5XX error codes in the SDK. The value must be non-negative. 50000 NONE The maximum number of allowed concurrently opened 50 HTTP connections to the Timestream for LiveAnalytics service. The value must be positive. Endpoint configuration option The following table describes the Endpoint Configuration Option. Option Endpoint Description The endpoint for the Timestream for LiveAnalytics service. Default NONE JDBC 616 Amazon Timestream Credential provider options Developer Guide The following table describes the available Credential Provider options. Option Description Default AwsCredentialsProviderClass One of Propertie NONE CustomCredentialsFilePath sFileCredentialsPr ovider or InstanceP rofileCredentialsP rovider to use for authentication. The path to a properties file containing AWS security credentials accessKey and secretKey . This is only required if AwsCreden tialsProviderClass is specified as Propertie sFileCredentialsPr ovider . NONE SAML-based authentication options for Okta The following table describes the available SAML-based authentication options for Okta. Option IdpName IdpHost JDBC Description The Identity Provider (Idp) name to use for SAML-based authentication. One of Okta or AzureAD. Default NONE The host name of the specified Idp. NONE 617 Amazon Timestream Developer Guide Default NONE NONE NONE Option Description IdpUserName IdpPassword OktaApplicationID The user name for the specified Idp account. The password for the specified Idp account. 
The unique Okta-prov ided ID associated with the Timestream for LiveAnaly tics application. AppId can be found in the entityID field provided in the applicati on metadata. Consider the following example: entityID = http://www.okta.co m//IdpAppID RoleARN IdpARN The Amazon Resource Name (ARN) of the role that the NONE caller is assuming. The Amazon Resource Name (ARN) of the SAML provider in NONE IAM that describes the Idp. SAML-based authentication options for Azure AD The following table describes the available SAML-based authentication options for Azure AD. Option IdpName Description The Identity Provider (Idp) name to use for SAML-based authentication. One of Okta or AzureAD . Default NONE JDBC 618 Developer Guide Amazon Timestream Option IdpHost IdpUserName IdpPassword AADApplicationID AADClientSecret AADTenant IdpARN Description The host name of the specified Idp. The user name for the specified Idp account. The password for the specified Idp account. Default NONE NONE NONE The unique id of the registere d application on Azure AD. NONE The client secret associated with the registered applicati NONE on on Azure AD used to authorize fetching tokens. The Azure AD Tenant ID. NONE The Amazon Resource Name (ARN) of the SAML provider in NONE IAM that describes the Idp. JDBC URL examples This section describes how to create a JDBC connection URL, and provides examples. To specify the optional connection properties, use the following URL format: jdbc:timestream://PropertyName1=value1;PropertyName2=value2... Note All connection properties are optional. All property keys are case-sensitive. Below are some examples of JDBC connection URLs. JDBC 619 Amazon Timestream Developer Guide Example with basic authentication options and region: jdbc:timestream:// AccessKeyId=<myAccessKeyId>;SecretAccessKey=<mySecretAccessKey>;SessionToken=<mySessionToken>;Region=us- east-1 Example with client info, region and SDK options: jdbc:timestream://ApplicationName=MyApp;Region=us- east-1;MaxRetryCountClient=10;MaxConnections=5000;RequestTimeout=20000 Connect using the default credential provider chain with AWS credential set in environment variables: jdbc:timestream Connect using the default credential provider chain with AWS credential set in the connection URL: jdbc:timestream:// AccessKeyId=<myAccessKeyId>;SecretAccessKey=<mySecretAccessKey>;SessionToken=<mySessionToken> Connect using the PropertiesFileCredentialsProvider as the authentication method: jdbc:timestream:// AwsCredentialsProviderClass=PropertiesFileCredentialsProvider;CustomCredentialsFilePath=<path to properties file> Connect using
connection properties, use the following URL format: jdbc:timestream://PropertyName1=value1;PropertyName2=value2... Note All connection properties are optional. All property keys are case-sensitive. Below are some examples of JDBC connection URLs. JDBC 619 Amazon Timestream Developer Guide Example with basic authentication options and region: jdbc:timestream:// AccessKeyId=<myAccessKeyId>;SecretAccessKey=<mySecretAccessKey>;SessionToken=<mySessionToken>;Region=us- east-1 Example with client info, region and SDK options: jdbc:timestream://ApplicationName=MyApp;Region=us- east-1;MaxRetryCountClient=10;MaxConnections=5000;RequestTimeout=20000 Connect using the default credential provider chain with AWS credential set in environment variables: jdbc:timestream Connect using the default credential provider chain with AWS credential set in the connection URL: jdbc:timestream:// AccessKeyId=<myAccessKeyId>;SecretAccessKey=<mySecretAccessKey>;SessionToken=<mySessionToken> Connect using the PropertiesFileCredentialsProvider as the authentication method: jdbc:timestream:// AwsCredentialsProviderClass=PropertiesFileCredentialsProvider;CustomCredentialsFilePath=<path to properties file> Connect using the InstanceProfileCredentialsProvider as the authentication method: jdbc:timestream://AwsCredentialsProviderClass=InstanceProfileCredentialsProvider Connect using the Okta credentials as the authentication method: jdbc:timestream:// IdpName=Okta;IdpHost=<host>;IdpUserName=<name>;IdpPassword=<password>;OktaApplicationID=<id>;RoleARN=<roleARN>;IdpARN=<IdpARN> Connect using the Azure AD credentials as the authentication method: jdbc:timestream:// IdpName=AzureAD;IdpUserName=<name>;IdpPassword=<password>;AADApplicationID=<id>;AADClientSecret=<secret>;AADTenant=<tenantID>;IdpARN=<IdpARN> JDBC 620 Amazon Timestream Connect with a specific endpoint: Developer Guide jdbc:timestream://Endpoint=abc.us-east-1.amazonaws.com;Region=us-east-1 Setting up Timestream for LiveAnalytics JDBC single sign-on authentication with Okta Timestream for LiveAnalytics supports Timestream for LiveAnalytics JDBC single sign-on authentication with Okta. To use Timestream for LiveAnalytics JDBC single sign-on authentication with Okta, complete each of the sections listed below. Topics • Prerequisites • AWS account federation in Okta • Setting up Okta for SAML Prerequisites Ensure that you have met the following prerequisites before using the Timestream for LiveAnalytics JDBC single sign-on authentication with Okta: • Admin permissions in AWS to create the identity provider and the roles. • An Okta account (Go to https://www.okta.com/login/ to create an account). • Access to Amazon Timestream for LiveAnalytics. Now that you have completed the Prerequisites, you may proceed to AWS account federation in Okta. AWS account federation in Okta The Timestream for LiveAnalytics JDBC driver supports AWS Account Federation in Okta. To set up AWS Account Federation in Okta, complete the following steps: 1. Sign in to the Okta Admin dashboard using the following URL: https://<company-domain-name>-admin.okta.com/admin/apps/active JDBC 621 Amazon Timestream Note Developer Guide Replace <company-domain-name> with your domain name. 2. Upon successful sign-in, choose Add Application and search for AWS Account Federation. 3. Choose Add 4. Change the Login URL to the appropriate URL. 5. Choose Next 6. Choose SAML 2.0 As the Sign-On method 7. Choose Identity Provider metadata to open the metadata XML file. Save the file locally. 8. 
Leave all other configuration options blank. 9. Choose Done Now that you have completed AWS Account Federation in Okta, you may proceed to Setting up Okta for SAML. Setting up Okta for SAML 1. Choose the Sign On tab. Choose the View. 2. Choose the Setup Instructions button in the Settings section. Finding the Okta metadata document 1. To find the document, go to: https://<domain>-admin.okta.com/admin/apps/active Note <domain> is your unique domain name for your Okta account. 2. Choose the AWS Account Federation application 3. Choose the Sign On tab JDBC 622 Amazon Timestream Developer Guide Setting up Timestream for LiveAnalytics JDBC single sign-on authentication with Microsoft Azure AD Timestream for LiveAnalytics supports Timestream for LiveAnalytics JDBC single sign-on authentication with Microsoft Azure AD. To use Timestream for LiveAnalytics JDBC single sign-on authentication with Microsoft Azure AD, complete each of the sections listed below. Topics • Prerequisites • Setting up Azure AD • Setting up IAM Identity Provider and roles in AWS Prerequisites Ensure that you have met the following prerequisites before using the Timestream for LiveAnalytics JDBC single sign-on authentication with Microsoft Azure AD: • Admin permissions in AWS to create the identity provider and the roles. • An Azure Active Directory account (Go to https://azure.microsoft.com/en-ca/services/active- directory/ to create an account) • Access to Amazon Timestream for LiveAnalytics. Setting up Azure AD 1. Sign in to Azure Portal 2. Choose Azure Active Directory in the list of Azure services. This will redirect to the Default Directory page. 3. Choose Enterprise Applications under the Manage section on the sidebar 4. Choose + New application. 5. Find and select Amazon Web Services. 6. Choose Single Sign-On under the Manage section in the sidebar 7. Choose SAML as the single sign-on method 8. In the Basic SAML Configuration section, enter the following URL for both the Identifier and the Reply URL: JDBC 623 Amazon Timestream Developer Guide https://signin.aws.amazon.com/saml 9. Choose Save 10.Download the Federation Metadata XML in the SAML Signing Certificate section. This will be used when creating the IAM Identity Provider later 11.Return to the Default Directory page and choose App registrations under Manage. 12.Choose Timestream for LiveAnalytics from the All Applications section. The page will be redirected to the application's Overview page Note Note the Application (client) ID and the Directory (tenant) ID. These values are required for when creating a connection. 13.Choose Certificates and Secrets 14.Under Client secrets, create a new client secret with + New client secret. Note Note the generated client secret, as this is required when creating a connection to Timestream for LiveAnalytics.
Signing Certificate section. This will be used when creating the IAM Identity Provider later 11.Return to the Default Directory page and choose App registrations under Manage. 12.Choose Timestream for LiveAnalytics from the All Applications section. The page will be redirected to the application's Overview page Note Note the Application (client) ID and the Directory (tenant) ID. These values are required for when creating a connection. 13.Choose Certificates and Secrets 14.Under Client secrets, create a new client secret with + New client secret. Note Note the generated client secret, as this is required when creating a connection to Timestream for LiveAnalytics. 15.On the sidebar under Manage, select API permissions 16.In the Configured permissions, use Add a permission to grant Azure AD permission to sign in to Timestream for LiveAnalytics. Choose Microsoft Graph on the Request API permissions page. 17.Choose Delegated permissions and select the User.Read permission 18.Choose Add permissions 19.Choose Grant admin consent for Default Directory Setting up IAM Identity Provider and roles in AWS Complete each section below to set up IAM for Timestream for LiveAnalytics JDBC single sign-on authentication with Microsoft Azure AD: Topics JDBC 624 Amazon Timestream Developer Guide • Create a SAML Identity Provider • Create an IAM role • Create an IAM policy • Provisioning Create a SAML Identity Provider To create a SAML Identity Provider for the Timestream for LiveAnalytics JDBC single sign-on authentication with Microsoft Azure AD, complete the following steps: 1. Sign in to the AWS Management Console 2. Choose Services and select IAM under Security, Identity, & Compliance 3. Choose Identity providers under Access management 4. Choose Create Provider and choose SAML as the provider type. Enter the Provider Name. This example will use AzureADProvider. 5. Upload the previously downloaded Federation Metadata XML file 6. Choose Next, then choose Create. 7. Upon completion, the page will be redirected back to the Identity providers page Create an IAM role To create an IAM role for the Timestream for LiveAnalytics JDBC single sign-on authentication with Microsoft Azure AD, complete the following steps: 1. On the sidebar select Roles under Access management 2. Choose Create role 3. Choose SAML 2.0 federation as the trusted entity 4. Choose the Azure AD provider 5. Choose Allow programmatic and AWS Management Console access 6. Choose Next: Permissions 7. Attach permissions policies or continue to Next:Tags 8. Add optional tags or continue to Next:Review 9. Enter a Role name. This example will use AzureSAMLRole JDBC 625 Amazon Timestream Developer Guide 10.Provide a role description 11.Choose Create Role to complete Create an IAM policy To create an IAM policy for the Timestream for LiveAnalytics JDBC single sign-on authentication with Microsoft Azure AD complete the following steps: 1. On the sidebar, choose Policies under Access management 2. Choose Create policy and select the JSON tab 3. Add the following policy { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "iam:ListRoles", "iam:ListAccountAliases" ], "Resource": "*" } ] } 4. Choose Create policy 5. Enter a policy name. This example will use TimestreamAccessPolicy. 6. Choose Create Policy 7. On the sidebar, choose Roles under Access management. 8. Choose the previously created Azure AD role and choose Attach policies under Permissions. 9. Select the previously created access policy. 
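Once the provisioning steps that follow are complete, a JDBC client can sign in through Azure AD using the connection URL options shown earlier in this section. The sketch below is a minimal illustration only, not the definitive way to connect: it assumes the Timestream JDBC driver is on the classpath, reuses the sample names from this walkthrough (AzureSAMLRole, AzureADProvider), and every bracketed value is a placeholder you must replace with your own Azure AD and account details.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class AzureAdSsoExample {
    public static void main(String[] args) throws Exception {
        // Connection properties follow the Azure AD connection URL example documented earlier.
        String url = "jdbc:timestream://IdpName=AzureAD"
                + ";IdpUserName=<idp-user-name>"                          // placeholder
                + ";IdpPassword=<password>"                               // placeholder
                + ";AADApplicationID=<application-client-id>"             // placeholder
                + ";AADClientSecret=<client-secret>"                      // placeholder
                + ";AADTenant=<directory-tenant-id>"                      // placeholder
                + ";RoleARN=arn:aws:iam::<account-id>:role/AzureSAMLRole"
                + ";IdpARN=arn:aws:iam::<account-id>:saml-provider/AzureADProvider";

        // Run a trivial query to confirm the single sign-on connection works.
        try (Connection connection = DriverManager.getConnection(url);
             Statement statement = connection.createStatement();
             ResultSet resultSet = statement.executeQuery("SELECT 1")) {
            while (resultSet.next()) {
                System.out.println(resultSet.getInt(1));
            }
        }
    }
}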
Provisioning To provision the identity provider for Timestream for LiveAnalytics JDBC single sign-on authentication with Microsoft Azure AD, complete the following steps: JDBC 626 Amazon Timestream 1. Go back to Azure Portal Developer Guide 2. Choose Azure Active Directory in the list of Azure services. This will redirect to the Default Directory page 3. Choose Enterprise Applications under the Manage section on the sidebar 4. Choose Provisioning 5. Choose Automatic mode for the Provisioning Method 6. Under Admin Credentials, enter your AwsAccessKeyID for clientsecret, and SecretAccessKey for Secret Token 7. Set the Provisioning Status to On 8. Choose save. This allows Azure AD to load the necessary IAM Roles 9. Once the Current cycle status is completed, choose Users and groups on the sidebar 10.Choose + Add user 11.Choose the Azure AD user to provide access to Timestream for LiveAnalytics 12.Choose the IAM Azure AD role and the corresponding Azure Identity Provider created in AWS 13.Choose Assign ODBC The open-source ODBC driver for Amazon Timestream for LiveAnalytics provides an SQL- relational interface to Timestream for LiveAnalytics for developers and enables connectivity from business intelligence (BI) tools such as Power BI Desktop and Microsoft Excel. The Timestream for LiveAnalytics ODBC driver is currently available on Windows, macOS and Linux, and also supports SSO with Okta and Microsoft Azure Active Directory (AD). For more information, see Amazon Timestream for LiveAnalytics ODBC driver documentation on GitHub. Topics • Setting up the Timestream for LiveAnalytics ODBC driver • Connection string syntax and options for the ODBC driver • Connection string examples for the Timestream for LiveAnalytics ODBC driver •
for LiveAnalytics provides an SQL- relational interface to Timestream for LiveAnalytics for developers and enables connectivity from business intelligence (BI) tools such as Power BI Desktop and Microsoft Excel. The Timestream for LiveAnalytics ODBC driver is currently available on Windows, macOS and Linux, and also supports SSO with Okta and Microsoft Azure Active Directory (AD). For more information, see Amazon Timestream for LiveAnalytics ODBC driver documentation on GitHub. Topics • Setting up the Timestream for LiveAnalytics ODBC driver • Connection string syntax and options for the ODBC driver • Connection string examples for the Timestream for LiveAnalytics ODBC driver • Troubleshooting connection with the ODBC driver ODBC 627 Amazon Timestream Developer Guide Setting up the Timestream for LiveAnalytics ODBC driver Set up access to Timestream for LiveAnalytics in your AWS account If you haven't already set up your AWS account to work with Timestream for LiveAnalytics, follow the insructions in Accessing Timestream for LiveAnalytics. Install the ODBC driver on your system Download the appropriate Timestream ODBC driver installer for your system from the ODBC GitHub repository, and follow the installation instructions that apply to your system:. • Windows installation guide • MacOS installation guide • Linux installation guide Set up a data source name (DSN) for the ODBC driver Follow the instructions in the DSN configuration guide for your system: • Windows DSN configuration • MacOS DSN configuration • Linux DSN configuration Set up your business intelligence (BI) application to work with the ODBC driver Here are instructions for setting several common BI applications to work with the ODBC driver: • Setting up Microsoft Power BI. • Setting up Microsoft Excel • Setting up Tableau For other applications Connection string syntax and options for the ODBC driver The syntax for specifying connection-string options for the ODBC driver is as follows: ODBC 628 Amazon Timestream Developer Guide DRIVER={Amazon Timestream ODBC Driver};(option)=(value); Available options are as follows: Driver connection options • Driver (required) – The driver being used with ODBC. The default is Amazon Timestream. • DSN – The data source name (DSN) to use for configuring the connection. The default is NONE. • Auth – The authentication mode. This must be one of the following: • AWS_PROFILE – Use the default credential chain. • IAM – Use AWS IAM credentials. • AAD – Use the Azure Active Directory (AD) identity provider. • OKTA – Use the Okta identity provider. The default is AWS_PROFILE. Endpoint configuration options • EndpointOverride – The endpoint override for the Timestream for LiveAnalytics service. This is an advanced option that overrides the region. For example: query-cell2.timestream.us-east-1.amazonaws.com • Region – The signing region for the Timestream for LiveAnalytics service endpoint. The default is us-east-1. Credentials provider option • ProfileName – The profile name in the AWS config file. The default is NONE. ODBC 629 Amazon Timestream Developer Guide AWS IAM authentication options • UID or AccessKeyId – The AWS user access key id. If both UID and AccessKeyId are provided in the connection string, the UID value will be used unless it is empty. The default is NONE. • PWD or SecretKey – The AWS user secret access key. If both PWD and SecretKey are provided in the connection string, the PWD value with will be used unless it's empty. The default is NONE. 
• SessionToken – The temporary session token required to access a database with multi-factor authentication (MFA) enabled. Do not include a trailing = in the input. The default is NONE. SAML-based authentication options for Okta • IdPHost – The hostname of the specified IdP. The default is NONE. • UID or IdPUserName – The user name for the specified IdP account. If both UID and IdPUserName are provided in the connection string, the UID value will be used unless it's empty. The default is NONE. • PWD or IdPPassword – The password for the specified IdP account. If both PWD and IdPPassword are provided in the connection string, the PWD value will be used unless it's empty. The default is NONE. • OktaApplicationID – The unique Okta-provided ID associated with the Timestream for LiveAnalytics application. A place to find the application ID (AppId) is in the entityID field provided in the application metadata. An example is: entityID="http://www.okta.com//(IdPAppID) The default is NONE. • RoleARN – The Amazon Resource Name (ARN) of the role that the caller is assuming. The default is NONE. ODBC 630 Amazon Timestream Developer Guide • IdPARN – The Amazon Resource Name (ARN) of the SAML provider in IAM that describes the IdP. The default is NONE. SAML-based authentication options for Azure Active Directory • UID or IdPUserName – The user name for the specified IdP account.. The default is NONE. • PWD or IdPPassword – The password for the specified IdP account. The default is NONE. • AADApplicationID – The unique
example is: entityID="http://www.okta.com//(IdPAppID) The default is NONE. • RoleARN – The Amazon Resource Name (ARN) of the role that the caller is assuming. The default is NONE. ODBC 630 Amazon Timestream Developer Guide • IdPARN – The Amazon Resource Name (ARN) of the SAML provider in IAM that describes the IdP. The default is NONE. SAML-based authentication options for Azure Active Directory • UID or IdPUserName – The user name for the specified IdP account.. The default is NONE. • PWD or IdPPassword – The password for the specified IdP account. The default is NONE. • AADApplicationID – The unique id of the registered application on Azure AD. The default is NONE. • AADClientSecret – The client secret associated with the registered application on Azure AD used to authorize fetching tokens. The default is NONE. • AADTenant – The Azure AD Tenant ID. The default is NONE. • RoleARN – The Amazon Resource Name (ARN) of the role that the caller is assuming. The default is NONE. • IdPARN – The Amazon Resource Name (ARN) of the SAML provider in IAM that describes the IdP. The default is NONE. AWS SDK (advanced) Options • RequestTimeout – The time in milliseconds that the AWS SDK waits for a query request before timing out. Any non-positive value disables the request timeout. The default is 3000. ODBC 631 Amazon Timestream Developer Guide • ConnectionTimeout – The time in milliseconds that the AWS SDK waits for data to be transferred over an open connection before timing out. A value of 0 disables the connection timeout. This value must not be negative. The default is 1000. • MaxRetryCountClient – The maximum number of retry attempts for retryable errors with 5xx error codes in the SDK. The value must not be negative. The default is 0. • MaxConnections – The maximum number of allowed concurrently open HTTP connections to the Timestream service. The value must be positive. The default is 25. ODBC driver logging Options • LogLevel – The log level for driver logging. Must be one of: • 0 (OFF). • 1 (ERROR). • 2 (WARNING). • 3 (INFO). • 4 (DEBUG). The default is 1 (ERROR). Warning: personal information could be logged by the driver when using the DEBUG logging mode. • LogOutput – Folder in which to store the log file. The default is: • Windows: %USERPROFILE%, or if not available, %HOMEDRIVE%%HOMEPATH%. • macOS and Linux: $HOME, or if not available, the field pw_dir from the function getpwuid(getuid()) return value. SDK logging options ODBC 632 Amazon Timestream Developer Guide The AWS SDK log level is separate from the Timestream for LiveAnalytics ODBC driver log level. Setting one does not affect the other. The SDK Log Level is set using the environment variable TS_AWS_LOG_LEVEL. Valid values are: • OFF • ERROR • WARN • INFO • DEBUG • TRACE • FATAL If TS_AWS_LOG_LEVEL is not set, the SDK log level is set to the default, which is WARN. Connecting through a proxy The ODBC driver supports connecting to Amazon Timestream for LiveAnalytics through a proxy. To use this feature, configure the following environment variables based on your proxy setting: • TS_PROXY_HOST – the proxy host. • TS_PROXY_PORT – The proxy port number. • TS_PROXY_SCHEME – The proxy scheme, either http or https. • TS_PROXY_USER – The user name for proxy authentication. • TS_PROXY_PASSWORD – The user password for proxy authentication. • TS_PROXY_SSL_CERT_PATH – The SSL Certificate file to use for connecting to an HTTPS proxy. • TS_PROXY_SSL_CERT_TYPE – The type of the proxy client SSL certificate. 
• TS_PROXY_SSL_KEY_PATH – The private key file to use for connecting to an HTTPS proxy. • TS_PROXY_SSL_KEY_TYPE – The type of the private key file used to connect to an HTTPS proxy. • TS_PROXY_SSL_KEY_PASSWORD – The passphrase to the private key file used to connect to an HTTPS proxy. ODBC 633 Amazon Timestream Developer Guide Connection string examples for the Timestream for LiveAnalytics ODBC driver Example of connecting to the ODBC driver with IAM credentials Driver={Amazon Timestream ODBC Driver};Auth=IAM;AccessKeyId=(your access key ID);secretKey=(your secret key);SessionToken=(your session token);Region=us-east-2; Example of connecting to the ODBC driver with a profile Driver={Amazon Timestream ODBC Driver};ProfileName=(the profile name);region=us-west-2; The driver will attempt to connect using the credentials provided in ~/.aws/credentials, or if a file is specified in the environment variable AWS_SHARED_CREDENTIALS_FILE, using the credentials in that file. Example of connecting to the ODBC driver with Okta driver={Amazon Timestream ODBC Driver};auth=okta;region=us-west-2;idPHost=(your host at Okta);idPUsername=(your user name);idPPassword=(your password);OktaApplicationID=(your Okta AppId);roleARN=(your role ARN);idPARN=(your Idp ARN); Example of connecting to the ODBC driver with Azure Active Directory (AAD) driver={Amazon Timestream ODBC Driver};auth=aad;region=us-west-2;idPUsername=(your user name);idPPassword=(your password);aadApplicationID=(your AAD AppId);aadClientSecret=(your AAD client secret);aadTenant=(your AAD tenant);roleARN=(your role ARN);idPARN=(your idP ARN); Example of connecting to the ODBC driver with a specified endpoint and a log level of 2 (WARNING) Driver={Amazon Timestream ODBC Driver};Auth=IAM;AccessKeyId=(your
using the credentials provided in ~/.aws/credentials, or if a file is specified in the environment variable AWS_SHARED_CREDENTIALS_FILE, using the credentials in that file. Example of connecting to the ODBC driver with Okta driver={Amazon Timestream ODBC Driver};auth=okta;region=us-west-2;idPHost=(your host at Okta);idPUsername=(your user name);idPPassword=(your password);OktaApplicationID=(your Okta AppId);roleARN=(your role ARN);idPARN=(your Idp ARN); Example of connecting to the ODBC driver with Azure Active Directory (AAD) driver={Amazon Timestream ODBC Driver};auth=aad;region=us-west-2;idPUsername=(your user name);idPPassword=(your password);aadApplicationID=(your AAD AppId);aadClientSecret=(your AAD client secret);aadTenant=(your AAD tenant);roleARN=(your role ARN);idPARN=(your idP ARN); Example of connecting to the ODBC driver with a specified endpoint and a log level of 2 (WARNING) Driver={Amazon Timestream ODBC Driver};Auth=IAM;AccessKeyId=(your access key ID);secretKey=(your secret key);EndpointOverride=ingest.timestream.us- west-2.amazonaws.com;Region=us-east-2;LogLevel=2; ODBC 634 Amazon Timestream Developer Guide Troubleshooting connection with the ODBC driver Note When the username and password are already specified in the DSN, there is no need to specify them again when the ODBC driver manager asks for them. An error code of 01S02 with a message, Re-writing (connection string option) (have you specified it several times? occurs when a connection string option is passed more than once in the connection string. Specifying an option more than once raises an error. When making a connection with a DSN and a connection string, if a connection option is already specified in the DSN, do not specify it again in the connection string. VPC endpoints (AWS PrivateLink) You can establish a private connection between your VPC and Amazon Timestream for LiveAnalytics by creating an interface VPC endpoint. For more information, see VPC endpoints (AWS PrivateLink). Best practices To fully realize the benefits of the Amazon Timestream for LiveAnalytics, follow the best practices described below. Note When running proof-of-concept applications, consider the amount of data your application will accumulate over a few months or years while evaluating the performance and scale of Timestream for LiveAnalytics. As your data grows over time, you'll notice that the performance of Timestream for LiveAnalytics remains mostly unchanged because its serverless architecture can leverage massive amounts of parallelism for processing larger data volumes and automatically scale to match needs of your application. Topics • Data modeling VPC endpoints 635 Developer Guide Amazon Timestream • Security • Configuring Amazon Timestream for LiveAnalytics • Writes • Queries • Scheduled queries • Client applications and supported integrations • General Data modeling Amazon Timestream for LiveAnalytics is designed to collect, store, and analyze time series data from applications and devices emitting a sequence of data with a timestamp. For optimal performance, the data being sent to Timestream for LiveAnalytics must have temporal characteristics and time must be a quintessential component of the data. Timestream for LiveAnalytics provides you the flexibility to model your data in different ways to suit your application's requirements. In this section, we cover several of these patterns and provide guidelines for you to optimize your costs and performance. 
Familiarize yourself with key Amazon Timestream for LiveAnalytics concepts such as dimensions and measures. In this section, you will learn more about the following when deciding whether to create a single table or multiple tables to store data: • Which data to put in the same table vs. when you want to separate data across multiple tables and databases. • How to choose between Timestream for LiveAnalytics multi-measure records compared to single-measure records, and the benefits of modeling using multi-measure records especially when your application is tracking multiple measurements at the same time instant. • Which attributes to model as dimensions or as measures. • How to effectively use the measure name attributes to optimize your query latency. Topics • Single table vs. multiple tables • Multi-measure records vs. single-measure records • Dimensions and measures Data modeling 636 Amazon Timestream Developer Guide • Using measure name with multi-measure records • Recommendations for partitioning multi-measure records Single table vs. multiple tables As you are modeling your data in application, another important aspect is how to model the data into tables and databases. Databases and tables in Timestream for LiveAnalytics are abstractions for access control, specifying KMS keys, retention periods, and so on. Timestream for LiveAnalytics automatically partitions your data and is designed to scale resources to match the ingestion, storage, and query load and requirements for your applications. A table in Timestream for LiveAnalytics can scale to petabytes of data stored and tens of gigabytes per second of data writes. Queries can process hundreds of terabytes per hour. Queries in Timestream for LiveAnalytics can span multiple tables and databases, providing joins and unions to provide seamless access to your data across multiple tables and databases. So scale of data or request volumes are usually not the primary concern when deciding how to organize your data in Timestream for LiveAnalytics. Below are some important considerations when deciding which data to co-locate in the same table compared to in different tables, or tables in different databases.
can scale to petabytes of data stored and tens of gigabytes per second of data writes. Queries can process hundreds of terabytes per hour. Queries in Timestream for LiveAnalytics can span multiple tables and databases, providing joins and unions to provide seamless access to your data across multiple tables and databases. So scale of data or request volumes are usually not the primary concern when deciding how to organize your data in Timestream for LiveAnalytics. Below are some important considerations when deciding which data to co-locate in the same table compared to in different tables, or tables in different databases. • Data retention policies (memory store retention, magnetic store retention, etc.) are supported at the granularity of a table. Therefore, data that requires different retention policies needs to be in different tables. • AWS KMS keys that are used to encrypt your data are configured at the database level. Therefore, different encryption key requirements imply the data will need to be in different databases. • Timestream for LiveAnalytics supports resource-based access control at the granularity of tables and databases. Consider your access control requirements when deciding which data you write to the same table vs. different tables. • Be aware of the limits on the number of dimensions, measure names, and multi-measure attribute names when deciding which data is stored in which table. • Consider your query workload and access patterns when deciding how you organize your data, as the query latency and ease of writing your queries will be dependent on that. • If you store data that you frequently query in the same table, that will generally ease the way you write your queries so that you can often avoid having to write joins, unions, or common table expressions. This also usually results in lower query latency. You can use predicates on dimensions and measure names to filter the data that is relevant to the queries. Data modeling 637 Amazon Timestream Developer Guide For instance, consider a case where you store data from devices located in six continents. If your queries frequently access data from across continents to get a global aggregated view, then storing data from these continents in the same table will result in easier to write queries. On the other hand, if you store data on different tables, you still can combine the data in the same query, however, you will need to write a query to union the data from across tables. • Timestream for LiveAnalytics uses adaptive partitioning and indexing on your data so queries only get charged for data that is relevant to your queries. For instance, if you have a table storing data from a million devices across six continents, if your query has predicates of the form WHERE device_id = 'abcdef' or WHERE continent = 'North America', then queries are only charged for data for the device or for the continent. • Wherever possible, if you use measure name to separate out data in the same table that is not emitted at the same time or not frequently queried, then using predicates such as WHERE measure_name = 'cpu' in your query, not only do you get the metering benefits, Timestream for LiveAnalytics can also effectively eliminate partitions that do not have the measure name used in your query predicate. This enables you to store related data with different measure names in the same table without impacting query latency or costs, and avoids spreading the data into multiple tables. 
The measure name is essentially used to partition the data and prune partitions irrelevant to the query.

Multi-measure records vs. single-measure records

Timestream for LiveAnalytics allows you to write data with multiple measures per record (multi-measure) or a single measure per record (single-measure).

Multi-measure records

In many use cases, a device or an application you are tracking may emit multiple metrics or events at the same timestamp. In such cases, you can store all the metrics emitted at the same timestamp in the same multi-measure record. That is, all the measures stored in the same multi-measure record appear as different columns in the same row of data. Consider, for instance, that your application is emitting metrics such as cpu, memory, and disk_iops from a device measured at the same time instant. The following is an example of such a table where multiple metrics emitted at the same time instant are stored in the same row. You will see that two hosts are emitting the metrics once every second.

Hostname   | measure_name | Time                | cpu | Memory | disk_iops
host-24Gju | metrics      | 2021-12-01 19:00:00 | 35  | 54.9   | 50
host-24Gju | metrics      | 2021-12-01 19:00:01 | 36  | 38.2   | 39
host-28Gju | metrics      | 2021-12-01 19:00:00 | 15  | 58     | 92
host-28Gju | metrics      | 2021-12-01 19:00:01 | 16  | 55     | 40

Single-measure records

Single-measure records are suitable when your devices emit different metrics at different time periods, or when you are using custom processing logic that emits metrics or events at different time periods (for instance, when a device's reading or state changes). Because every measure has a unique timestamp, the measures can be stored in their own records in Timestream for LiveAnalytics. For instance, consider an IoT sensor that tracks soil temperature and moisture and emits a record only when it detects a change from the previously reported entry. The following example shows such data emitted using single-measure records.

device_id     | measure_name | Time                | measure_value::double | measure_value::bigint
sensor-sea478 | temperature  | 2021-12-01 19:22:32 | 35                    | NULL
sensor-sea478 | temperature  | 2021-12-01 18:07:51 | 36                    | NULL
sensor-sea478 | moisture     | 2021-12-01 19:05:30 | NULL                  | 21
sensor-sea478 | moisture     | 2021-12-01 19:00:01 | NULL                  | 23

Comparing single-measure and multi-measure records

Timestream for LiveAnalytics provides you the flexibility to model your data as single-measure or multi-measure records depending on your application's requirements and characteristics. A single table can store both single-measure and multi-measure records if your application requires it. In general, when your application emits multiple measures or events at the same time instant, modeling the data as multi-measure records is usually recommended for performant data access and cost-effective data storage.

For instance, consider a DevOps use case tracking metrics and events from hundreds of thousands of servers, where each server periodically emits 20 metrics and 5 events, and the events and metrics are emitted at the same time instant. That data can be modeled either using single-measure records or using multi-measure records (see the open-sourced data generator for the resulting schema). For this use case, modeling the data using multi-measure records compared to single-measure records results in:

• Ingestion metering - Multi-measure records result in about 40 percent lower ingestion bytes written.
• Ingestion batching - Multi-measure records result in bigger batches of data being sent, which means the clients need fewer threads and less CPU to process the ingestion.
• Storage metering - Multi-measure records result in about 8X lower storage, resulting in significant storage savings for both the memory store and the magnetic store.
• Query latency - Multi-measure records result in lower query latency for most query types when compared to single-measure records.
• Query metered bytes - For queries scanning less than 10 MB of data, single-measure and multi-measure records are comparable. For queries accessing a single measure and scanning > 10 MB of data, single-measure records usually result in lower bytes metered.
For queries referencing three or more measures, multi-measure records result in lower bytes metered. Data modeling 640 Amazon Timestream Developer Guide • Ease of expressing multi-measure queries - When your queries reference multiple measures, modeling your data with multi-measure records results in easier to write and more compact queries. The previous factors will vary depending on how many metrics you are tracking, how many dimensions your data has, etc. While the preceding example provides some concrete data for one example, we see across many application scenarios and use cases where if your application emits multiple measures at the same instant, storing data as multi-measure records is more effective. Moreover, multi-measure records provide you the flexibility of data types and storing multiple other values as context (for example, storing request IDs, and additional timestamps, which is discussed later). Note that a multi-measure record can also model sparse measures such as the previous example for single-measure records: you can use the measure_name to store the name of the measure and use a generic multi-measure attribute name, such as value_double to store DOUBLE measures, value_bigint to store BIGINT measures, value_timestamp to store additional TIMESTAMP values, and so on. Dimensions and measures A table in Timestream for LiveAnalytics allows you to store dimensions (identifying attributes of the device/data you are storing) and measures (the metrics/values you are tracking); see Amazon Timestream for LiveAnalytics concepts for more details. As you are modeling your application on Timestream for LiveAnalytics, how you map your data into dimensions and measures impacts your ingestion and query latency. The following are guidelines on how to model your data as dimensions and measures that you can apply to your use case. Choosing dimensions Data that
BIGINT measures, value_timestamp to store additional TIMESTAMP values, and so on. Dimensions and measures A table in Timestream for LiveAnalytics allows you to store dimensions (identifying attributes of the device/data you are storing) and measures (the metrics/values you are tracking); see Amazon Timestream for LiveAnalytics concepts for more details. As you are modeling your application on Timestream for LiveAnalytics, how you map your data into dimensions and measures impacts your ingestion and query latency. The following are guidelines on how to model your data as dimensions and measures that you can apply to your use case. Choosing dimensions Data that identifies the source that is sending the time series data is a natural fit for dimensions, which are attributes that don't change over time. For instance, if you have a server emitting metrics, then the attributes identifying the server, such as hostname, Region, rack, and Availability Zone, are candidates for dimensions. Similarly, for an IoT device with multiple sensors reporting time series data, attributes such as device ID and sensor ID are candidates for dimensions. If you are writing data as multi-measure records, dimensions and multi-measure attributes appear as columns in the table when you do a DESCRIBE or run a SELECT statement on the table. Therefore, when writing your queries, you can freely use the dimensions and measures in the same Data modeling 641 Amazon Timestream Developer Guide query. However, as you construct your write record to ingest data, keep the following in mind as you choose which attributes are specified as dimensions and which ones are measure values: • The dimension names, dimension values, measure name, and timestamp uniquely identify the time series data. Timestream for LiveAnalytics uses this unique identifier to automatically de- duplicate data. That is, if Timestream for LiveAnalytics receives two data points with the same values of dimension names, dimension values, measure name, and timestamp, and the values have the same version number, then Timestream for LiveAnalytics de-duplicates. If the new write request has a lower version than the data already existing in Timestream for LiveAnalytics, the write request is rejected. If the new write request has a higher version, then the new value overwrites the old value. Therefore, how you choose your dimension values will impact this de- duplication behavior. • Dimension names and values cannot be updated, but measure value can be. Therefore, any data that might need updates is better modeled as measure values. For instance, if you have a machine on the factory floor whose color can change, you can model the color as a measure value, unless you also want to use the color as an identifying attribute that is needed for de- duplication. That is, measure values can be used to store attributes that only slowly change over time. Note that a table in Timestream for LiveAnalytics does not limit the number of unique combinations of dimension names and values. For instance, you can have billions of such unique value combinations stored in a table. However, as you will see with the following examples, careful choice of dimensions and measures can significantly optimize your request latency, especially for queries. 
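To make this guidance concrete, the following sketch writes one multi-measure record with the AWS SDK for Java v2, which is assumed to be available; the database and table names are placeholders, and this is an illustration of the modeling choice rather than the only valid approach. The identifying attributes (hostname, region) are dimensions, and the metrics emitted at the same instant (cpu, memory) are multi-measure values.

import java.time.Instant;
import java.util.List;
import software.amazon.awssdk.services.timestreamwrite.TimestreamWriteClient;
import software.amazon.awssdk.services.timestreamwrite.model.Dimension;
import software.amazon.awssdk.services.timestreamwrite.model.MeasureValue;
import software.amazon.awssdk.services.timestreamwrite.model.MeasureValueType;
import software.amazon.awssdk.services.timestreamwrite.model.Record;
import software.amazon.awssdk.services.timestreamwrite.model.TimeUnit;
import software.amazon.awssdk.services.timestreamwrite.model.WriteRecordsRequest;

public class WriteMultiMeasureExample {
    public static void main(String[] args) {
        try (TimestreamWriteClient client = TimestreamWriteClient.create()) {
            // Identifying attributes that do not change over time go in dimensions;
            // together with measure_name and time they take part in de-duplication.
            List<Dimension> dimensions = List.of(
                Dimension.builder().name("hostname").value("host-24Gju").build(),
                Dimension.builder().name("region").value("us-east-1").build());

            // Metrics emitted at the same time instant go in one multi-measure record.
            Record record = Record.builder()
                .dimensions(dimensions)
                .measureName("metrics")
                .measureValueType(MeasureValueType.MULTI)
                .measureValues(
                    MeasureValue.builder().name("cpu").value("35")
                        .type(MeasureValueType.DOUBLE).build(),
                    MeasureValue.builder().name("memory").value("54.9")
                        .type(MeasureValueType.DOUBLE).build())
                .time(String.valueOf(Instant.now().toEpochMilli()))
                .timeUnit(TimeUnit.MILLISECONDS)
                .build();

            client.writeRecords(WriteRecordsRequest.builder()
                .databaseName("sampleDB")   // placeholder
                .tableName("DevOps")        // placeholder
                .records(record)
                .build());
        }
    }
}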
Unique IDs in dimensions If your application scenario requires you to store a unique identifier for every data point (for example, a request ID, a transaction ID, or a correlation ID), modeling the ID attribute as a measure value will result in significantly better query latency. When modeling your data with multi-measure records, the ID appears in the same row in context with your other dimensions and time series data, so your queries can continue to use them effectively. For instance, considering a DevOps use case where every data point emitted by a server has a unique request ID attribute, modeling the request ID as a measure value results in up to 4x lower query latency across different query types, as opposed to modeling the unique request ID as a dimension. You can use the similar analogy for attributes that are not entirely unique for every data point, but have hundreds of thousands or millions of unique values. You can model those attributes both Data modeling 642 Amazon Timestream Developer Guide as dimensions or measure values. You would want to model it as a dimension if the values are necessary for de-duplication on the write path as discussed earlier or you often use it as a predicate (for example, in the WHERE clause with an equality predicate on a value of that attribute such as device_id = 'abcde' where your application is tracking millions of devices) in your queries. Richness of data types with multi-measure records Multi-measure records provide you the flexibility to effectively model your data. Data that you store in a multi-measure record appear as columns in the table similar to dimensions, thus providing the same ease
to model it as a dimension if the values are necessary for de-duplication on the write path as discussed earlier or you often use it as a predicate (for example, in the WHERE clause with an equality predicate on a value of that attribute such as device_id = 'abcde' where your application is tracking millions of devices) in your queries. Richness of data types with multi-measure records Multi-measure records provide you the flexibility to effectively model your data. Data that you store in a multi-measure record appear as columns in the table similar to dimensions, thus providing the same ease of querying for dimension and measure values. You saw some of these patterns in the examples discussed earlier. Below you will find additional patterns to effectively use multi-measure records to meet your application's use cases. Multi-measure records support attributes of data types DOUBLE, BIGINT, VARCHAR, BOOLEAN, and TIMESTAMP. Therefore, they naturally fit different types of attributes: • Location information: For instance, if you want to track a location (expressed as latitude and longitude), then modeling it as a multi-measure attribute will result in lower query latency compared to storing them as VARCHAR dimensions, especially when you have predicates on the latitudes and longitudes. • Multiple timestamps in a record: If your application scenario requires you to track multiple timestamps for a time series record, you can model them as additional attributes in the multi-measure record. This pattern can be used to store data with future timestamps or past timestamps. Note that every record will still use the timestamp in the time column to partition, index, and uniquely identify a record. In particular, if you have numeric data or timestamps on which you have predicates in the query, modeling those attributes as multi-measure attributes as opposed to dimensions will result in lower query latency. This is because when you model such data using the rich data types supported in multi-measure records, you can express the predicates using native data types instead of casting values from VARCHAR to another data type if you modeled such data as dimensions. Using measure name with multi-measure records Tables in Timestream for LiveAnalytics support a special attribute (or column) called measure name. You specify a value for this attribute for every record you write to Timestream for LiveAnalytics. For single-measure records, it is natural to use the name of your metric (such as CPU or memory for server metrics, or temperature or pressure for sensor metrics). When using multi- Data modeling 643 Amazon Timestream Developer Guide measure records, attributes in a multi-measure record are named and these names become column names in the table. Therefore, cpu, memory, temperature, and pressure can become multi-measure attribute names. A natural question is how to effectively use the measure name. Timestream for LiveAnalytics uses the values in the measure name attribute to partition and index the data. Therefore, if a table has multiple different measure names, and if the queries use those values as query predicates, then Timestream for LiveAnalytics can use its custom partitioning and indexing to prune out data that is not relevant to queries. 
For instance, if your table has cpu and memory measure names, and your query has a predicate WHERE measure_name = 'cpu', Timestream for LiveAnalytics can effectively prune data for measure names not relevant to the query, for example, rows with measure name memory in this example. This pruning applies even when using measure names with multi-measure records. You can use the measure name attribute effectively as a partitioning attribute for a table. Measure name along with dimension names and values, and time are used to partition the data in a Timestream for LiveAnalytics table. Be aware of the limits on the number of unique measure names allowed in a Timestream for LiveAnalytics table. Also note that a measure name is associated with a measure value data type as well. For example, a single measure name can only be associated with one type of measure value. That type can be one of DOUBLE, BIGINT, BOOLEAN, VARCHAR, or MULTI. Multi-measure records stored with a measure name will have the data type of MULTI. Since a single multi-measure record can store multiple metrics with different data types (DOUBLE, BIGINT, VARCHAR, BOOLEAN, and TIMESTAMP), you can associate data of different types in a multi-measure record. The following sections describe a few different examples of how the measure name attribute can be effectively used to group together different types of data in the same table. IoT sensors reporting quality and value Consider you have an application monitoring data from IoT sensors. Each sensor tracks different measures, such as temperature and pressure. In addition to the actual values, the sensors also report the quality of the measurements, which is a measure of how accurate the reading is, and a
types (DOUBLE, BIGINT, VARCHAR, BOOLEAN, and TIMESTAMP), you can associate data of different types in a multi-measure record. The following sections describe a few different examples of how the measure name attribute can be effectively used to group together different types of data in the same table. IoT sensors reporting quality and value Consider you have an application monitoring data from IoT sensors. Each sensor tracks different measures, such as temperature and pressure. In addition to the actual values, the sensors also report the quality of the measurements, which is a measure of how accurate the reading is, and a unit for the measurement. Since quality, unit, and value are emitted together, they can be modeled as multi-measure records, as shown in the example data below where device_id is a dimension, and quality, value, and unit are multi-measure attributes: Data modeling 644 Amazon Timestream Developer Guide device_id sensor-se a478 sensor-se a478 sensor-se a478 sensor-se a478 measure_n ame temperature temperature pressure pressure Time Quality Value Unit 2021-12-01 19:22:32 2021-12-01 18:07:51 2021-12-01 19:05:30 2021-12-01 19:00:01 92 93 98 24 35 34 31 132 c c psi psi This approach allows you to combine the benefits of multi-measure records along with partitioning and pruning data using the values of measure name. If queries reference a single measure, such as temperature, then you can include a measure_name predicate in the query. The following is an example of such a query, which also projects the unit for measurements whose quality is above 90. SELECT device_id, time, value AS temperature, unit FROM db.table WHERE time > ago(1h) AND measure_name = 'temperature' AND quality > 90 Using the measure_name predicate on the query enables Timestream for LiveAnalytics to effectively prune partitions and data that is not relevant to the query, thus improving your query latency. It is also possible to have all of the metrics stored in the same multi-measure record if all the metrics are emitted at the same timestamp and/or multiple metrics are queried together in the same query. For instance, you can construct a multi-measure record with attributes such as temperature_quality, temperature_value, temperature_unit, pressure_quality, pressure_value, and pressure_unit. Many of the points discussed earlier about modeling data using single-measure vs. multi-measure records apply in your decision of how to model the data. Consider your query access patterns and how your data is generated to choose a model that optimizes your cost, ingestion and query latency, and ease of writing your queries. Data modeling 645 Amazon Timestream Developer Guide Different types of metrics in the same table Another use case where you can combine multi-measure records with measure name values is to model different types of data that are independently emitted from the same device. Consider the DevOps monitoring use case where servers are emitting two types of data: regularly emitted metrics and irregular events. An example of this approach is the schema discussed in the data generator modeling a DevOps use case. In this case, you can store the different types of data emitted from the same server in the same table by using different measure names. For instance, all the metrics that are emitted at the same time instant are stored with measure name metrics. All the events that are emitted at a different time instant from the metrics are stored with measure name events. 
The measure schema for the table (for example, output of SHOW MEASURES query) is: measure_name data_type Dimensions events multi metrics multi [{"data_type":"varchar","di mension_name":"availability _zone"},{"data_type":"varch ar","dimension_name":"micro service_name"},{"data_type" :"varchar","dimension_name" :"instance_name"},{"data_ty pe":"varchar","dimension_na me":"process_name"},{"data_ type":"varchar","dimension_ name":"jdk_version"},{"data _type":"varchar","dimension _name":"cell"},{"data_type" :"varchar","dimension_name" :"region"},{"data_type":"va rchar","dimension_name":"si lo"}] [{"data_type":"varchar","di mension_name":"availability _zone"},{"data_type":"varch ar","dimension_name":"micro service_name"},{"data_type" Data modeling 646 Amazon Timestream Developer Guide measure_name data_type Dimensions :"varchar","dimension_name" :"instance_name"},{"data_ty pe":"varchar","dimension_na me":"os_version"},{"data_ty pe":"varchar","dimension_na me":"cell"},{"data_type":"v archar","dimension_name":"r egion"},{"data_type":"varch ar","dimension_name":"silo" },{"data_type":"varchar","d imension_name":"instance_ty pe"}] In this case, you can see that the events and metrics also have different sets of dimensions, where events have different dimensions jdk_version and process_name while metrics have dimensions instance_type and os_version. Using different measure names allow you to write queries with predicates such as WHERE measure_name = 'metrics' to get only the metrics. Also having all the data emitted from the same instance in the same table implies you can also write a simpler query with the instance_name predicate to get all data for that instance. For instance, a predicate of the form WHERE instance_name = 'instance-1234' without a measure_name predicate will return all data for a specific server instance. Recommendations for partitioning multi-measure records Important This section is deprecated! These recommendations are out of date. Partitioning is now better controlled using customer-defined partition keys. Data modeling 647 Amazon Timestream Developer Guide We have seen that there is a growing number of workloads in the time series ecosystem that require ingesting and storing massive amounts of data while simultaneously needing low latency query responses when accessing data by a high cardinality set of
dimension values. Because of such characteristics, the recommendations in this section are useful for customer workloads that have the following:

• Adopted or want to adopt multi-measure records.
• Expect a high volume of data coming into the system that will be stored for long periods.
• Require low latency response times for their main access (query) patterns.
• Know that the most important query patterns involve a filtering condition of some sort in the predicate. This filtering condition is based around a high cardinality dimension. For example, consider events or aggregations by UserId, DeviceId, ServerID, host-name, and so forth.

In these cases, a single name for all the multi-measure measures will not help, since our engine uses multi-measure names to partition the data, and having a single value limits the partitioning advantage that you get. The partitioning for these records is mainly based on two dimensions. Let's say time is on the x-axis and a hash of the dimension names and the measure_name is on the y-axis. The measure_name in these cases works almost like a partitioning key.

Our recommendation is as follows:

• When modeling your data for use cases like the one we mentioned, use a measure_name that is a direct derivative of your main query access pattern. For example:
  • Your use case requires tracking application performance and QoE from the end user point of view. This could also be tracking measurements for a single server or IoT device.
  • If you are querying and filtering by UserId, then you need, at ingestion time, to find the best way to associate measure_name to UserId.
  • Since a multi-measure table can only hold 8,192 different measure names, whatever formula is adopted should not generate more than 8,192 different values.
• One approach that we have applied with success for string values is to apply a hashing algorithm to the string value, and then perform the modulo operation with the absolute value of the hash result and 8,192.

measure_name = getMeasureName(UserId)

int getMeasureName(value) {
    hash_value = abs(hash(value))
    return hash_value % 8192
}

• We also added abs() to remove the sign, eliminating the possibility of values ranging from -8,192 to 8,192. This should be performed prior to the modulo operation.
• By using this method, your queries can run in a fraction of the time that it would take on an unpartitioned data model.
• When querying the data, make sure that you include a filtering condition in the predicate that uses the newly derived value of the measure_name.
For example:

SELECT *
FROM your_database.your_table
WHERE host_name = 'Host-1235'
  AND time BETWEEN '2022-09-01' AND '2022-09-18'
  AND measure_name = (SELECT cast(abs(from_big_endian_64(xxhash64(CAST('HOST-1235' AS varbinary))))%8192 AS varchar))

This minimizes the total number of partitions scanned to retrieve your data, which translates into faster queries over time. Keep in mind that if you want to obtain the benefits of this partition schema, the hash needs to be calculated on the client side and passed to Timestream for LiveAnalytics as a static value in the query. The preceding example provides a way to validate that the generated hash can be resolved by the engine when needed.

time                          | host_name   | location   | server_type | cpu_usage | available_memory | cpu_temp
2022-09-07 21:48:44.000000000 | host-1235   | us-east1   | 5.8xl       | 55        | 16.2             | 78
2022-09-07 21:48:44.000000000 | host-3587   | us-west1   | 5.8xl       | 62        | 18.1             | 81
2022-09-07 21:48:45.000000000 | host-258743 | eu-central | 5.8xl       | 88        | 9.4              | 91
2022-09-07 21:48:45.000000000 | host-35654  | us-east2   | 5.8xl       | 29        | 24               | 54
2022-09-07 21:48:45.000000000 | host-254    | us-west1   | 5.8xl       | 44        | 32               | 48

To generate the associated measure_name following our recommendation, there are two paths that depend on your ingestion pattern.

1. For batch ingestion of historical data — You can add the transformation to your write code if you will use your own code for the batch process. Building on top of the preceding example:

List<String> hosts = new ArrayList<>();
hosts.add("host-1235");
hosts.add("host-3587");
hosts.add("host-258743");
hosts.add("host-35654");
hosts.add("host-254");

for (String h: hosts) {
    ByteBuffer buf2 = ByteBuffer.wrap(h.getBytes());
    partition = abs(hasher.hash(buf2, 0L)) % 8192;
    System.out.println(h + " - " + partition);
}

Output

host-1235 - 6445
host-3587 - 6399
host-258743 - 640
host-35654 - 2093
host-254 - 7051

Resulting dataset

time                          | host_name   | location   | measure_name | server_type | cpu_usage | available_memory | cpu_temp
2022-09-07 21:48:44.000000000 | host-1235   | us-east1   | 6445         | 5.8xl       | 55        | 16.2             | 78
2022-09-07 21:48:44.000000000 | host-3587   | us-west1   | 6399         | 5.8xl       | 62        | 18.1             | 81
2022-09-07 21:48:45.000000000 | host-258743 | eu-central | 640          | 5.8xl       | 88        | 9.4              | 91
2022-09-07 21:48:45.000000000 | host-35654  | us-east2   | 2093         | 5.8xl       | 29        | 24               | 54
2022-09-07 21:48:45.000000000 | host-254    | us-west1   | 7051         | 5.8xl       | 44        | 32               | 48

2. For real-time ingestion — You need to generate the measure_name in flight as data comes in.

In both cases, we recommend that you test your hash-generating algorithm at both ends (ingestion and querying) to make sure you are getting the same results. Here are some code examples to generate the hashed value based on host_name.

Example Python

>>> import xxhash
>>> from bitstring import BitArray
>>> b=xxhash.xxh64('HOST-ID-1235').digest()
>>> BitArray(b).int % 8192  # 3195

Example Go

package main

import (
    "bytes"
    "fmt"
    "github.com/cespare/xxhash"
)

func main() {
    buf := bytes.NewBufferString("HOST-ID-1235")
    x := xxhash.New()
    x.Write(buf.Bytes())
    // convert unsigned integer to signed integer before taking mod
    fmt.Printf("%d\n", abs(int64(x.Sum64())) % 8192)
}

func abs(x int64) int64 {
    if (x < 0) {
        return -x
    }
    return x
}

Example Java

import java.nio.ByteBuffer;

import net.jpountz.xxhash.XXHash64;

public class test {
    public static void main(String[] args) {
        XXHash64 hasher = net.jpountz.xxhash.XXHashFactory.fastestInstance().hash64();
        String host = "HOST-ID-1235";
        ByteBuffer buf = ByteBuffer.wrap(host.getBytes());
        Long result = Math.abs(hasher.hash(buf, 0L));
        Long partition = result % 8192;
        System.out.println(result);
        System.out.println(partition);
    }
}

Example dependency in Maven

<dependency>
    <groupId>net.jpountz.lz4</groupId>
    <artifactId>lz4</artifactId>
    <version>1.3.0</version>
</dependency>

Security

• For continuous access to Timestream for LiveAnalytics, ensure that encryption keys are secured and are not revoked or made inaccessible.
• Monitor API access logs from AWS CloudTrail. Audit and revoke any anomalous access patterns from unauthorized users.
• Follow the additional guidelines described in Security best practices for Amazon Timestream for LiveAnalytics.

Configuring Amazon Timestream for LiveAnalytics

Configure the data retention period for the memory store and the magnetic store to match the data processing, storage, query performance, and cost requirements.
• Set the data retention of the memory store to match your application's requirements for processing late-arriving data. Late-arriving data is incoming data with a timestamp earlier than the current time. It is emitted from resources that batch events for a period of time before sending the data to Timestream for LiveAnalytics, and also from resources with intermittent connectivity, such as an IoT sensor that is online only intermittently.
• If you expect late-arriving data to occasionally arrive with timestamps earlier than the memory store retention, enable magnetic store writes for your table. Once you set EnableMagneticStoreWrites in MagneticStoreWriteProperties for a table, the table accepts data with timestamps earlier than your memory store retention but still within your magnetic store retention period.
• Consider the characteristics of queries that you plan to run on Timestream for LiveAnalytics, such as the types of queries, their frequency, the time range, and performance requirements. The memory store and the magnetic store are optimized for different scenarios: the memory store is optimized for fast point-in-time queries that process small amounts of recent data, and the magnetic store is optimized for fast analytical queries that process medium to large volumes of data.
• Your data retention period should also be influenced by the cost requirements of your system. For example,
consider a scenario where the late-arriving data threshold for your application is 2 hours and your applications send many queries that process a day's-worth, week's-worth, or month's-worth of data. In that case, you may want to configure a smaller retention period for the memory store (2-3 hours) and allow more data to flow to the magnetic store given the magnetic store is optimized for fast analytical queries. Understand the impact of increasing or decreasing the data retention period of the memory store and the magnetic store of an existing table. • When you decrease the retention period of the memory store, the data is moved from the memory store to the magnetic store, and this data transfer is permanent. Timestream for LiveAnalytics does not retrieve data from the magnetic store to populate the memory store. When you decrease the retention period of the magnetic store, the data is deleted from the system, and the data deletion is permanent. Configuring Timestream for LiveAnalytics 654 Amazon Timestream Developer Guide • When you increase the retention period of the memory store or the magnetic store, the change takes effect for data being sent to Timestream for LiveAnalytics from that point onwards. Timestream for LiveAnalytics does not retrieve data from the magnetic store to populate the memory store. For example, if the retention period of the memory store was initially set to 2 hours and then increased to 24 hours, it will take 22 hours for the memory store to contain 24 hours worth of data. Writes • Ensure that the timestamp of the incoming data is not earlier than data retention configured for the memory store and no later than the future ingestion period defined in Quotas. Sending data with a timestamp outside these bounds will result in the data being rejected by Timestream for LiveAnalytics unless you enable magnetic store writes for your table. If you enable magnetic store writes, ensure that the timestamp for incoming data is not earlier than data retention configured for the magnetic store. • If you expect late arriving data, turn on magnetic store writes for your table. This will allow ingestion for data with timestamps that fall outside your memory store retention period but still within your magnetic store retention period. You can set this by updating the EnableMagneticStoreWrites flag in the MagneticStoreWritesProperties for your table. This property is false by default. Note that writes to the magnetic store will not be immediately available to query. They will be available within 6 hours. • Target high throughput workloads to the memory store by ensuring the timestamps of the ingested data fall within the memory store retention bounds. Writes to the magnetic store are limited to a max number of active magnetic store partitions that can receive concurrent ingestion for a database. You can see this ActiveMagneticStorePartitions metric in CloudWatch. To reduce active magnetic store partitions, aim to reduce the number of series and duration of time you ingest into concurrently for magnetic store ingestion. • While sending data to Timestream for LiveAnalytics, batch multiple records in a single request to optimize data ingestion performance. • It is beneficial to batch together records from the same time series and records with the same measure name. • Batch as many records as possible in a single request as long as the requests are within the service limits defined in Quotas. • Use common attributes where possible to reduce data transfer and ingestion costs. 
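To illustrate the batching and common-attributes guidance above, here is a minimal sketch using the AWS SDK for Python (boto3). The database, table, dimension, and measure names are hypothetical; production code would also batch up to the service limits in Quotas and handle RejectedRecordsException.

Example Python

import time
import boto3

write_client = boto3.client("timestream-write", region_name="us-east-1")
now_ms = str(int(time.time() * 1000))

# Attributes shared by every record in the batch are specified once.
common_attributes = {
    "Dimensions": [
        {"Name": "region", "Value": "us-east-1"},
        {"Name": "az", "Value": "1d"},
        {"Name": "vpc", "Value": "vpc-1a2b3c4d"},
    ],
    "MeasureName": "cpu_utilization",
    "MeasureValueType": "DOUBLE",
    "Time": now_ms,
    "TimeUnit": "MILLISECONDS",
}

# Per-record attributes contain only what differs between time series events.
records = [
    {
        "Dimensions": [{"Name": "hostname", "Value": "host-" + str(i)}],
        "MeasureValue": str(30.0 + i),
    }
    for i in range(10)
]

write_client.write_records(
    DatabaseName="sampleDB",
    TableName="DevOps",
    CommonAttributes=common_attributes,
    Records=records,
)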
For more information, see WriteRecords API.
• If you encounter partial client-side failures while writing data to Timestream for LiveAnalytics, you can resend the batch of records that failed ingestion after you've addressed the rejection cause.
• Data ordered by timestamp has better write performance.
• Amazon Timestream for LiveAnalytics is designed to automatically scale to the needs of your application. When Timestream for LiveAnalytics notices spikes in write requests from your application, your application may experience some level of initial memory store throttling. If your application experiences memory store throttling, continue sending data to Timestream for LiveAnalytics at the same (or an increased) rate to enable Timestream for LiveAnalytics to automatically scale to satisfy the needs of your application.
If you see magnetic store throttling, you should decrease your rate of magnetic store ingestion until your number of ActiveMagneticStorePartitions falls. Batch load Best practices for batch load are described in Batch load best practices. Queries Following are suggested best practices for queries with Amazon Timestream for LiveAnalytics. • Include only the measure and dimension names essential to query. Adding extraneous columns will increase data scans, which impacts the performance of queries. • Before deploying your query in production, we recommend that you review query insights to make sure that the spatial and temporal pruning is optimal. For more information, see Using query insights to optimize queries in Amazon Timestream. • Where possible, push the data computation to Timestream for LiveAnalytics using the built-in aggregates and scalar functions in the SELECT clause and WHERE clause as applicable to improve query performance and reduce cost. See SELECT and Aggregate functions. • Where possible, use approximate functions. E.g., use APPROX_DISTINCT instead of COUNT(DISTINCT column_name) to optimize query performance and reduce the query cost. See Aggregate functions. • Use a CASE expression to perform complex aggregations instead of selecting from the same table multiple times. See The CASE statement. Queries 656 Amazon Timestream Developer Guide • Where possible, include a time range in the WHERE clause of your query. This optimizes query performance and costs. For example, if you only need the last one hour of data in your dataset, then include a time predicate such as time > ago(1h). See SELECT and Interval and duration. • When a query accesses a subset of measures in a table, always include the measure names in the WHERE clause of the query. • Where possible, use the equality operator when comparing dimensions and measures in the WHERE clause of a query. An equality predicate on dimensions and measure names allows for improved query performance and reduced query costs. • Wherever possible, avoid using functions in the WHERE clause to optimize for cost. • Refrain from using LIKE clause multiple times. Rather, use regular expressions when you are filtering for multiple values on a string column. See Regular expression functions. • Only use the necessary columns in the GROUP BY clause of a query. • If the query result needs to be in a specific order, explicitly specify that order in the ORDER BY clause of the outermost query. If your query result does not require ordering, avoid using an ORDER BY clause to improve query performance. • Use a LIMIT clause if you only need the first N rows in your query. • If you are using an ORDER BY clause to look at the top or bottom N values, use a LIMIT clause to reduce the query costs. • Use the pagination token from the returned response to retrieve the query results. For more information, see Query. • If you've started running a query and realize that the query will not return the results you're looking for, cancel the query to save cost. For more information, see CancelQuery. • If your application experiences throttling, continue sending data to Amazon Timestream for LiveAnalytics at the same rate to enable Amazon Timestream for LiveAnalytics to auto-scale to the satisfy the query throughput needs of your application. • If the query concurrency requirements of your applications exceed the default limits of Timestream for LiveAnalytics, contact Support for limit increases. 
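To make several of the preceding recommendations concrete, here is a hedged sketch using the AWS SDK for Python (boto3); the table, dimension, and measure names are hypothetical. The query combines a time predicate, equality predicates on measure_name and a dimension, a LIMIT, and pagination of the results with the returned token.

Example Python

import boto3

query_client = boto3.client("timestream-query", region_name="us-east-1")

# The time predicate, measure_name predicate, and equality predicate on a
# dimension all help the engine prune data; LIMIT caps the result size.
query_string = """
    SELECT hostname, bin(time, 1m) AS binned_time,
           AVG(measure_value::double) AS avg_cpu
    FROM "sampleDB"."DevOps"
    WHERE time > ago(1h)
      AND measure_name = 'cpu_utilization'
      AND region = 'us-east-1'
    GROUP BY hostname, bin(time, 1m)
    ORDER BY binned_time DESC
    LIMIT 100
"""

next_token = None
while True:
    kwargs = {"QueryString": query_string}
    if next_token:
        kwargs["NextToken"] = next_token
    response = query_client.query(**kwargs)
    for row in response["Rows"]:
        print(row)  # process each row as needed
    next_token = response.get("NextToken")
    if not next_token:
        break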
Scheduled queries

Scheduled queries help you optimize your dashboards by pre-computing fleet-wide aggregate statistics. A natural question, then, is how to take your use case, identify which results to pre-compute, and use those results, stored in a derived table, to build your dashboard. The first step in this process is to identify which panels to pre-compute. Below are some high-level guidelines:
• Consider the bytes scanned by the queries that are used to populate the panels, the frequency of dashboard reloads, and the number of concurrent users who would load these dashboards. You should start with the dashboards that are loaded most frequently and that scan significant amounts of data. The first two dashboards in the aggregate dashboard example, as well as the aggregate dashboard in the drill
down example are good examples of such dashboards. • Consider which computations are being repeatedly used. While it is possible to create a scheduled query for every panel and every variable value used in the panel, you can significantly optimize your costs and the number of scheduled queries by looking for avenues to use one computation to pre-compute the data necessary for multiple panels. • Consider the frequency of your scheduled queries to refresh the materialized results in the derived table. You would want to analyze how frequently a dashboard is refreshed vs. the time window that is queried in a dashboard vs. the time binning used in the pre-computation as well as the panels in the dashboards. For instance, if a dashboard that is plotting hourly aggregates for the past few days is only refreshed once in a few hours, you might want to configure your scheduled queries to only refresh once every 30 mins or an hour. On the other hand, if you have a dashboard that plots per minute aggregates and is refreshed every minute or so, you would want your scheduled queries to refresh the results every minute or few minutes. • Consider which query patterns can be further optimized (both from a query cost and query latency perspective) using scheduled queries. For instance, when computing the unique dimension values frequently used as variables in dashboards, or returning the last data point emitted from a sensor or the first data point emitted from a sensor after a certain date, etc. Some of these example patterns are discussed in this guide. The preceding considerations will have a significant impact on your savings when you move your dashboard to query the derived tables, the freshness of data in your dashboards, and the cost incurred by the scheduled queries. Client applications and supported integrations Run your client application from the same Region as Timestream for LiveAnalytics to reduce network latencies and data transfer costs. For more information about working with other services, see Working with other services. The following are some other helpful links. • Best Practices for AWS Development with the AWS SDK for Java Client applications and supported integrations 658 Amazon Timestream Developer Guide • Best practices for working with AWS Lambda functions • Best Practices for Amazon Managed Service for Apache Flink • Best practices for creating dashboards in Grafana General • Ensure that you follow the The AWS Well-Architected Framework when using Timestream for LiveAnalytics. This whitepaper provides guidance around best practices in operational excellence, security, reliability, performance efficiency, and cost optimization. Metering and cost optimization With Amazon Timestream for LiveAnalytics, you pay only for what you use. Timestream for LiveAnalytics meters separately for writes, data stored, and data scanned by queries. The price of each metering dimension is specified on the pricing page. You can estimate your monthly bill using the Amazon Timestream for LiveAnalytics Pricing Calculator. This section describes how metering works for writes, storage and queries in Timestream for LiveAnalytics. Example scenarios and calculations are also provided. In addition, a list of best practices for cost optimization is included. 
You can select a topic below:

Topics
• Writes
• Storage
• Queries
• Cost optimization
• Monitoring with Amazon CloudWatch

Writes

The write size of each time series event is calculated as the sum of the size of the timestamp and one or more dimension names, dimension values, measure names, and measure values. The size of the timestamp is 8 bytes. The size of a dimension name, dimension value, or measure name is the length of the UTF-8 encoded bytes of the string representing it. The size of the measure value depends on the data type: it is 1 byte for the boolean data type, 8 bytes for bigint and double, and the length of the UTF-8 encoded bytes for strings. Each write is counted in units of 1 KiB. Two example calculations are provided below:

Topics
• Calculating the write size of a time series event
• Calculating the number of writes

Calculating the write size of a time series event

Consider a time series event representing the CPU
utilization of an EC2 instance as shown below: Time region az vpc Hostname measure_n ame 160298343 523856300 0 us-east-1 1d vpc-1a2b3 c4d host-24Gju cpu_utili zation measure_v alue::dou ble 35.0 The write size of the time series event can be calculated as: • time = 8 bytes • first dimension = 15 bytes (region+us-east-1) • second dimension = 4 bytes (az+1d) • third dimension = 15 bytes (vpc+vpc-1a2b3c4d) • fourth dimension = 18 bytes (hostname+host-24Gju) • name of the measure = 15 bytes (cpu_utilization) • value of the measure = 8 bytes Write size of the time series event = 83 bytes Writes 660 Amazon Timestream Developer Guide Calculating the number of writes Now consider 100 EC2 instances, similar to the instance described in Calculating the write size of a time series event, emitting metrics every 5 seconds. The total monthly writes for the EC2 instances will vary based on how many time series events exist per write and if common attributes are being used while batching time series events. An example of calculating total monthly writes is provided for each of the following scenarios: Topics • One time series event per write • Batching time series events in a write • Batching time series events and using common attributes in a write One time series event per write If each write contains only one time series event, the total monthly writes are calculated as: • 100 time series events = 100 writes every 5 seconds • x 12 writes/minute = 1,200 writes • x 60 minutes/hour = 72,000 writes • x 24 hours/day = 1,728,000 writes • x 30 days/month = 51,840,000 writes Total monthly writes = 51,840,000 Batching time series events in a write Given each write is measured in units of 1 KB, a write can contain a batch of 12 time series events (998 bytes) and the total monthly writes are calculated as: • 100 time series events = 9 writes (12 time series events per write) every 5 seconds • x 12 writes/minute = 108 writes • x 60 minutes/hour = 6,480 writes • x 24 hours/day = 155,520 writes • x 30 days/month = 4,665,600 writes Writes 661 Amazon Timestream Developer Guide Total monthly writes = 4,665,600 Batching time series events and using common attributes in a write If the region, az, vpc, and measure name are common across 100 EC2 instances, the common values can be specified just once per write and are referred to as common attributes. In this case, the size of common attributes is 52 bytes, and the size of the time series events is 27 bytes. Given each write is measured in units of 1 KiB, a write can contain 36 time series events and common attributes, and the total monthly writes are calculated as: • 100 time series events = 3 writes (36 time series events per write) every 5 seconds • x 12 writes/minute = 36 writes • x 60 minutes/hour = 2,160 writes • x 24 hours/day = 51,840 writes • x 30 days/month = 1,555,200 writes Total monthly writes = 1,555,200 Note Due to usage of batching, common attributes and rounding of the writes to units of 1KB, the storage size of the time series events may be different than write size. Storage The storage size of each time series event in the memory store and the magnetic store is calculated as the sum of the size of the timestamp, dimension names, dimension values, measure names, and measure values. The size of the timestamp is 8 bytes. The size of dimension names, dimension values, and measure names are the length of the UTF-8 encoded bytes of each string representing the dimension name, dimension value, and measure name. 
The size of the measure value depends on the data type. It is 1 byte for boolean data types, 8 bytes for bigint and double, and the length of the UTF-8 encoded bytes for strings. Each measure is stored as a separate record in Amazon Timestream for LiveAnalytics, i.e. if your time series event has four measures, there will be four records for that time series event in storage. Storage 662 Amazon Timestream Developer Guide Considering the example of the time series event representing the CPU utilization of an EC2 instance (see Calculating the write size of a time series event), the storage size of the time series event is calculated as: • time = 8 bytes • first dimension = 15 bytes (region+us-east-1) • second dimension = 4 bytes (az+1d) • third dimension = 15 bytes (vpc+vpc-1a2b3c4d) • fourth dimension = 18 bytes (hostname+host-24Gju) • name of the measure = 15 bytes (cpu_utilization) • value of the measure = 8 bytes Storage size of the time series event = 83 bytes Note The memory store is metered in
the time series event representing the CPU utilization of an EC2 instance (see Calculating the write size of a time series event), the storage size of the time series event is calculated as: • time = 8 bytes • first dimension = 15 bytes (region+us-east-1) • second dimension = 4 bytes (az+1d) • third dimension = 15 bytes (vpc+vpc-1a2b3c4d) • fourth dimension = 18 bytes (hostname+host-24Gju) • name of the measure = 15 bytes (cpu_utilization) • value of the measure = 8 bytes Storage size of the time series event = 83 bytes Note The memory store is metered in GB-hour and the magnetic store is metered in GB-month. Queries Queries are charged based on the duration of Timestream compute units (TCUs) used by your application in TCU-hours as specified on the Amazon Timestream pricing page. Amazon Timestream for LiveAnalytics' query engine prunes irrelevant data while processing a query. Queries with projections and predicates including time ranges, measure names, and/or dimension names enable the query processing engine to prune a significant amount of data and help with lowering query costs. Cost optimization To optimize the cost of writes, storage, and queries, use the following best practices with Amazon Timestream for LiveAnalytics: • Batch multiple time series events per write to reduce the number of write requests. • Consider using Multi-measure records, which allows you to write multiple time-series measures in a single write request and stores your data in a more compact manner. This reduces the number of write requests as well as data storage cost and query cost. Queries 663 Amazon Timestream Developer Guide • Use common attributes with batching to batch more time series events per write to further reduce the number of write requests. • Set the data retention of the memory store to match your application's requirements for processing late-arriving data. Late-arriving data is incoming data with a timestamp earlier than the current time and outside the memory store retention period. • Set the data retention of the magnetic store to match your long term data storage requirements. • While writing queries, include only the measure and dimension names essential to query. Adding extraneous columns will increase data scans and therefore will also increase the query cost. We recommend that you review query insights to assess the pruning efficiency of the included dimensions and measures. • Where possible, include a time range in the WHERE clause of your query. For example, if you only need the last one hour of data in your dataset, include a time predicate such as time > ago(1h). • When a query accesses a subset of measures in a table, always include the measure names in the WHERE clause of the query. • If you've started running a query and realize that the query will not return the results you're looking for, cancel the query to save on cost. Monitoring with Amazon CloudWatch You can monitor Timestream for LiveAnalytics using Amazon CloudWatch, which collects and processes raw data from Timestream for LiveAnalytics into readable, near-real-time metrics. It records these statistics for two weeks so that you can access historical information and gain a better perspective on how your web application or service is performing. By default, Timestream for LiveAnalytics metric data is automatically sent to CloudWatch in 1-minute or 15-minute periods. For more information, see What Is Amazon CloudWatch? in the Amazon CloudWatch User Guide. Topics • How do I use Timestream for LiveAnalytics metrics? 
• Timestream for LiveAnalytics metrics and dimensions • Creating CloudWatch alarms to monitor Timestream for LiveAnalytics Monitoring with Amazon CloudWatch 664 Amazon Timestream Developer Guide How do I use Timestream for LiveAnalytics metrics? The metrics reported by Timestream for LiveAnalytics provide information that you can analyze in different ways. The following list shows some common uses for the metrics. These are suggestions to get you started, not a comprehensive list. How can I? Relevant metrics How can I determine if any system errors You can monitor SystemErrors to determine whether any requests resulted in a server error code. Typically, this metric occurred? should be equal to zero. If it isn't, you might want to investiga te. How can I monitor the amount of data in the You can monitor MemoryCumulativeBytesMetered over the specified time period, to monitor the amount of data memory store? stored in memory store in bytes. This metric is emitted every hour and you can track the bytes stored at an account as well as at database granularity. The memory store is metered in GB-hour (the cost of storing 1GB of data for one hour). So multiplying the hourly value of MemoryCumulativeBy tesMetered you the cost incurred per hour. with GB-hour pricing in your Region will give Dimensions: Operation (storage), DatabaseName, Metric name How can I monitor the amount of data in the
You can monitor MemoryCumulativeBytesMetered over the specified time period, to monitor the amount of data memory store? stored in memory store in bytes. This metric is emitted every hour and you can track the bytes stored at an account as well as at database granularity. The memory store is metered in GB-hour (the cost of storing 1GB of data for one hour). So multiplying the hourly value of MemoryCumulativeBy tesMetered you the cost incurred per hour. with GB-hour pricing in your Region will give Dimensions: Operation (storage), DatabaseName, Metric name How can I monitor the amount of data in the You can monitor MagneticCumulativeBytesMetered over the specified time period, to monitor the amount of data magnetic store? stored in magnetic store in bytes. This metric is emitted every hour and you can track the bytes stored at an account as well as at database granularity. The memory store is metered in GB-month (the cost of storing 1GB of data for one month). So multiplying the hourly value of MagneticCumulative with GB-month pricing in your Region will BytesMetered give you the cost incurred per hour. For example, if the value of MagneticCumulativeBytesMetered is 107374182 400 bytes (100GB), then the hourly charge of 1GB of data in magnetic store = (0.03) (us-east-1 pricing) / (30.4*24). Multiplyi Monitoring with Amazon CloudWatch 665 Amazon Timestream Developer Guide How can I? Relevant metrics ng this value with the MagneticCumulativeBytesMete red in GB will give ~$0.004 for that hour. Dimensions: Operation (Storage), DatabaseName, Metric name How can I monitor the data scanned by You can monitor CumulativeBytesMetered over the specified time period, to monitor the data scanned by queries queries? (in bytes) sent to Timestream for LiveAnalytics. This metric is emitted after the query execution and you can track the data scanned at account and database granularity. You can calculate the query cost for a particular period by multiplying the value of the metric with per GB scanned pricing in your Region. The bytes scanned by scheduled queries are accounted for in this metric. Dimensions: Operation (Query), DatabaseName, Metric name How can I monitor the data scanned by You can monitor CumulativeBytesMetered over the specified time period, to monitor the data scanned by scheduled queries? scheduled queries (in bytes) executed by Timestream for LiveAnalytics. This metric is emitted after the query execution and you can track the data scanned at account and database granularity. You can calculate the query cost for a particula r period by multiplying the value of the metric with per GB scanned pricing in your Region. Note The bytes metered are also accounted for in the query CumulativeBytesMetered . Dimensions: Operation (TriggeredScheduledQuery), DatabaseN ame, Metric name Monitoring with Amazon CloudWatch 666 Amazon Timestream Developer Guide How can I? Relevant metrics How can I monitor the number of records You can monitor NumberOfRecords over the specified time period to monitor the number of records ingested. You can ingested? track the bytes stored at an account as well as at database granularity. You can also use this metric to monitor the writes made by Scheduled Queries when query results are written into a separate table. When using the WriteRecords API, the metric is emitted for each WriteRecords request, with the CloudWatch Operation dimension being WriteRecords . 
When using the BatchLoad or ScheduledQuery APIs, the metric is emitted at intervals determined by the service until the task completes . The CloudWatch Operation dimension for this metric is either BatchLoad or ScheduledQuery , depending on which API is used. Dimensions: Operation (WriteRecords, BatchLoad, or Scheduled Query), DatabaseName, Metric name Monitoring with Amazon CloudWatch 667 Amazon Timestream Developer Guide How can I? Relevant metrics How can I monitor the cost of records You can monitor CumulativeBytesMetered to monitor the number of bytes ingested that accrue cost. You can track ingested? the bytes stored at an account as well as at database granulari ty. Ingested records are metered in cumulative bytes. Multiplyi ng the value of CumulativeBytesMetered pricing in your Region gives you the ingestion cost incurred. by Writes When using the WriteRecords API, this metric is emitted for each WriteRecords request, with the CloudWatch Operation dimension being WriteRecords . When using the BatchLoad or ScheduledQuery API, the metric is emitted at intervals determined by the service until the task completes. The CloudWatch Operation dimension for this metric is BatchLoad or ScheduledQuery depending on which API is used.. Dimensions: Operation (WriteRecords, BatchLoad, or Scheduled Query), DatabaseName, Metric name How can I monitor the Timestream Compute You can monitor QueryTCU over the desired time period, to monitor the compute units provisioned in your account. This Units (TCUs) used in metric is emitted every 15-minutes. my account? Units: Count Valid Statistics: Minimum, Maximum Metric: ResourceCount Dimensions: Service: Timestream , Namespace:AWS/ Usage , Resource: QueryTCU, Type: Resource, Class: OnDemand Monitoring with
metric is emitted at intervals determined by the service until the task completes. The CloudWatch Operation dimension for this metric is BatchLoad or ScheduledQuery depending on which API is used.. Dimensions: Operation (WriteRecords, BatchLoad, or Scheduled Query), DatabaseName, Metric name How can I monitor the Timestream Compute You can monitor QueryTCU over the desired time period, to monitor the compute units provisioned in your account. This Units (TCUs) used in metric is emitted every 15-minutes. my account? Units: Count Valid Statistics: Minimum, Maximum Metric: ResourceCount Dimensions: Service: Timestream , Namespace:AWS/ Usage , Resource: QueryTCU, Type: Resource, Class: OnDemand Monitoring with Amazon CloudWatch 668 Amazon Timestream Developer Guide How can I? Relevant metrics How can I monitor the number of provisioned Timestream Compute Units (TCUs) used in my account? Note Provisioned TCU is available only in the Asia Pacific (Mumbai) region. You can monitor QueryTCU to monitor the number of provisioned TCUs used for query workload in the account. This metric is emitted every minute for the during active query workload from the account. Units: Count Valid Statistics: Minimum, Maximum Metric: ResourceCount Dimensions: Service: Timestream , Namespace: AWS/ Usage , Resource: ProvisionedQueryTCU , Class: None Monitoring with Amazon CloudWatch 669 Amazon Timestream Developer Guide How can I? Relevant metrics How can I monitor the provisioned Timestrea Note m Compute Units (TCUs) used in my account? Provisioned TCU is available only in the Asia Pacific (Mumbai) region. You can monitor QueryTCU over the specified time period, to monitor the compute units consumed for query workload in the account. This metric is emitted with maximum and minimum compute units for every minute during active query workload from the account. Units: Count Valid Statistics: Minimum, Maximum Metric: ResourceCount Dimensions: Service: Timestream , Namespace: AWS/ Usage , Resource: QueryTCU, Class: Provisioned Timestream for LiveAnalytics metrics and dimensions When you interact with Timestream for LiveAnalytics, it sends the following metrics and dimensions to Amazon CloudWatch. All metrics are aggregated and reported every minute. You can use the following procedures to view the metrics for Timestream for LiveAnalytics. To view metrics using the CloudWatch console Metrics are grouped first by the service namespace, and then by the various dimension combinations within each namespace. 1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. 2. If necessary, change the Region. On the navigation bar, choose the Region where your AWS resources reside. For more information, see AWS Service Endpoints. 3. In the navigation pane, choose Metrics. Monitoring with Amazon CloudWatch 670 Amazon Timestream Developer Guide 4. Under the All metrics tab, choose AWS/Timestream for LiveAnalytics. To view metrics using the AWS CLI • At a command prompt, use the following command. aws cloudwatch list-metrics --namespace "AWS/Timestream" Dimensions for Timestream for LiveAnalytics metrics The metrics for Timestream for LiveAnalytics are qualified by the values for the account, table name, or operation. You can use the CloudWatch console to retrieve Timestream for LiveAnalytics data along any of the dimensions in the following table: Dimension Description DatabaseName This dimension limits the data to a specific Timestream for LiveAnalytics database. 
This value can be any database in the current Region and the current AWS account Operation This dimension limits the data to one of the Timestream for LiveAnalytics operations, such as Storage, WriteRecords , BatchLoad , or ScheduledQuery . See the Timestream for LiveAnalytics Query API Reference for a list of available values. TableName This dimension limits the data to a specific table in a Timestrea m for LiveAnalyticss database. Important CumulativeBytesMetered, UserErrors and SystemErrors metrics only have the Operation dimension. SuccessfulRequestLatency metrics always have Operation dimension, but may also have the DatabaseName and TableName dimensions too, depending on the value of Operation. This is because Timestream for LiveAnalytics table- Monitoring with Amazon CloudWatch 671 Amazon Timestream Developer Guide level operations have DatabaseName and TableName as dimensions, but account level operations do not. Timestream for LiveAnalytics metrics Note Amazon CloudWatch aggregates all the following Timestream for LiveAnalytics metrics at one-minute intervals. General metrics Metric SuccessfulRequestLatency Description The successful requests to Timestream for LiveAnalytics during the specified time period. SuccessfulRequestLatency can provide two different kinds of information: • The elapsed time for successful requests (Minimum, Maximum,Sum, or Average). • The number of successful requests (SampleCount). SuccessfulRequestLatency reflects activity only within Timestream for LiveAnalytics and does not take into account network latency or client-side activity. Units: Milliseconds Dimensions • DatabaseName • TableName • Operation Monitoring with Amazon CloudWatch 672 Amazon Timestream Metric Developer Guide Description Valid Statistics: • Minimum • Maximum • Average • SampleCount • P10 • p50 • p90 • p95 • p99 Writing and storage metrics Metric Description MagneticStoreRejectedRecordCount The number of magnetic store written records that were rejected asynchronously. This can happen if the new record has a version that is less than the current version or the new record has version equal
reflects activity only within Timestream for LiveAnalytics and does not take into account network latency or client-side activity. Units: Milliseconds Dimensions • DatabaseName • TableName • Operation Monitoring with Amazon CloudWatch 672 Amazon Timestream Metric Developer Guide Description Valid Statistics: • Minimum • Maximum • Average • SampleCount • P10 • p50 • p90 • p95 • p99 Writing and storage metrics Metric Description MagneticStoreRejectedRecordCount The number of magnetic store written records that were rejected asynchronously. This can happen if the new record has a version that is less than the current version or the new record has version equal to the current version but has different data. Units: Count Dimensions • DatabaseName • TableName • Operation Valid Statistics: • Sum Monitoring with Amazon CloudWatch 673 Amazon Timestream Metric MagneticStoreRejectedUpload UserFailures Developer Guide Description • SampleCount The number of magnetic store rejected record reports that were not uploaded due to user errors. This can be due to IAM permissions not configured correctly or a deleted S3 bucket. Units: Count Dimensions • DatabaseName • TableName • Operation Valid Statistics: • Sum • SampleCount MagneticStoreRejectedUpload SystemFailures The number of magnetic store rejected record reports that were not uploaded due to system errors. Units: Count Dimensions • DatabaseName • TableName • Operation Valid Statistics: • Sum • SampleCount Monitoring with Amazon CloudWatch 674 Amazon Timestream Metric Description Developer Guide ActiveMagneticStorePartitions The number of magnetic store partitions actively ingesting data at a given time. Units: Count Dimensions • DatabaseName • Operation Valid Statistics: • Sum • SampleCount Monitoring with Amazon CloudWatch 675 Amazon Timestream Metric Description Developer Guide MagneticStorePendingRecords Latency The oldest write to a magnetic store that is not available for query. Records written to the magnetic store will be available for querying within 6 hours. Units: Milliseconds Dimensions • DatabaseName • TableName • Operation Valid Statistics: • Minimum • Maximum • Average • SampleCount • P10 • p50 • p90 • p95 • p99 MemoryCumulativeBytesMetered The amount of data stored in memory store, in bytes Units: Bytes Dimensions: Operation Valid Statistics: • Average Monitoring with Amazon CloudWatch 676 Amazon Timestream Metric Description Developer Guide MagneticCumulativeBytesMetered The amount of data stored in magnetic store, in bytes Units: Bytes Dimensions: Operation Valid Statistics: • Average CumulativeBytesMetered The amount of data metered by ingestion to Timestream for LiveAnalytics, in bytes. Units: Bytes Dimensions: Operation Valid Statistics: Sum NumberOfRecords The number of records ingested into Timestream for LiveAnalytics. Units: Count Dimensions: Operation Valid Statistics: Sum Description Query metrics Metric CumulativeBytesMetered The amount of data scanned by queries sent to Timestream for LiveAnalytics, in bytes. Units: Bytes Dimensions: Operation Monitoring with Amazon CloudWatch 677 Amazon Timestream Metric ResourceCount Error metrics Metric SystemErrors Developer Guide Description Valid Statistics: • Sum The Timestream Compute Units (TCUs) consumed for query workload in the account. This metric is emitted with maximum and minimum compute units for every minute during active query workload from the account. 
Units: Count Valid Statistics: Minimum, Maximum Dimensions: Service: Timestream , Resource: QueryTCU, Type: Resource, Class: OnDemand Description The requests to Timestream for LiveAnaly tics that generate a SystemError during the specified time period. A SystemError usually indicates an internal service error. Units: Count Dimensions: Operation Valid Statistics: • Sum • SampleCount Monitoring with Amazon CloudWatch 678 Amazon Timestream Metric UserErrors Developer Guide Description Requests to Timestream for LiveAnalytics that generate an InvalidRequest error during the specified time period. An InvalidRequest usually indicates a client-side error, such as an invalid combination of parameters, an attempt to update a nonexistent table, or an incorrect request signature. UserErrors represents the aggregate of invalid requests for the current AWS Region and the current AWS account. Units: Count Dimensions: Operation Valid Statistics: • Sum • SampleCount Important Not all statistics, such as Average or Sum, are applicable for every metric. However, all of these values are available through the Timestream for LiveAnalytics console, or by using the CloudWatch console, AWS CLI, or AWS SDKs for all metrics. Creating CloudWatch alarms to monitor Timestream for LiveAnalytics You can create an Amazon CloudWatch alarm for Timestream for LiveAnalytics that sends an Amazon Simple Notification Service (Amazon SNS) message when the alarm changes state. An alarm watches a single metric over a time period that you specify. It performs one or more actions based on the value of the metric relative to a given threshold over a number of time periods. The action is a notification sent to an Amazon SNS topic or Auto Scaling policy. Monitoring with Amazon CloudWatch 679 Amazon Timestream Developer Guide Alarms invoke actions for sustained state changes only. CloudWatch alarms do not invoke actions simply because they are in a particular state. The state must have changed and been maintained for a specified number of periods. For more information about creating CloudWatch alarms, see Using Amazon CloudWatch Alarms in
that you specify. It performs one or more actions based on the value of the metric relative to a given threshold over a number of time periods. The action is a notification sent to an Amazon SNS topic or Auto Scaling policy. Monitoring with Amazon CloudWatch 679 Amazon Timestream Developer Guide Alarms invoke actions for sustained state changes only. CloudWatch alarms do not invoke actions simply because they are in a particular state. The state must have changed and been maintained for a specified number of periods. For more information about creating CloudWatch alarms, see Using Amazon CloudWatch Alarms in the Amazon CloudWatch User Guide. Troubleshooting This section contains information on troubleshooting Timestream for LiveAnalytics. Topics • Handling WriteRecords throttles • Handling rejected records • Troubleshooting UNLOAD from Timestream for LiveAnalytics • Timestream for LiveAnalytics specific error codes Handling WriteRecords throttles Your memory store write requests to Timestream may be throttled as Timestream scales to adapt to the data ingestion needs of your application. If your applications encounter throttling exceptions, you must continue to send data at the same (or higher) throughput to allow Timestream to automatically scale to your application's needs. Your magnetic store write requests to Timestream may be throttled if the maximum limit of magnetic store partitions receiving ingestion. You will see a throttle message directing you to check the ActiveMagneticStorePartitions Cloudwatch metric for this database. This throttle may take up to 6 hours to resolve. To avoid this throttle, you should use the memory store for any high throughput ingestion workload. For magnetic store ingestion, you can target ingesting into fewer partitions by limiting how many series and the time duration that you ingest into For more information about data ingestion best practices, see Writes. Handling rejected records If Timestream rejects records, you will receive a RejectedRecordsException with details about the rejection. Please refer to Handling write failure for more information on how to extract this information from the WriteRecords response. Troubleshooting 680 Amazon Timestream Developer Guide All rejections will be included in this response with the exception of updates to the magnetic store where the new record's version is less than or equal to the existing record's version. In this case, Timestream will not update the existing record that has the higher version. Timestream will reject the new record with lower or equal version and write these errors asynchronously to your S3 bucket. In order to receive these asynchronous error reports, you should set the MagneticStoreRejectedDataLocation property in MagneticStoreWriteProperties on your table. Troubleshooting UNLOAD from Timestream for LiveAnalytics Following is guidance for troubleshooting related to the UNLOAD command. Category Error message How to troubleshoot S3 Key length UNLOAD result file key when using the S3 prefix [%s] provided in the destination When exporting query results using the UNLOAD statement, the S3 key length, will exceed the S3 allowed comprising of sum of the key length. See documenta length of S3 bucket name and tion for more details. prefix exceeds the maximum UNLOAD result file key when using partitioned_by [%s] will exceed the S3 allowed key length. See documentation for more details. supported S3 key length. We recommend to reduce your prefix or bucket name length. 
When exporting query results using the UNLOAD statement, the S3 Key length using the partitioned_by column exceeds the maximum supported S3 key length. We recommend to partition with an alternate column or reduce the length of the partition ed_column (if feasible). UNLOAD result file key when using the S3 prefix [%s] along with the partitioned_by [%s] When exporting query results using the UNLOAD statement , the S3 Key length, comprisin Troubleshooting UNLOAD 681 Amazon Timestream Developer Guide Category Error message How to troubleshoot will exceed the S3 allowed g of sum of the length of S3 key length. See documenta tion for more details. bucket name, the prefix, and the partitioned_by column The generated S3 object key: %s is too long. See documentation for more details. name exceeds the maximum supported S3 key length. We recommend to reduce your prefix, bucket name length, or use an alternate column to partition your data. While processing your query using the UNLOAD statement , one of the values in the partitioned column exceeds the maximum supported S3 key length. The partition column and value can be found in the object key generated. Troubleshooting UNLOAD 682 Amazon Timestream Category S3 throttles Developer Guide Error message How to troubleshoot We have detected that Amazon S3 is throttling Refer to S3 documentation here. S3 API call rate could the writes from UNLOAD be throttled when multiple command. See Amazon readers/writers access the Timestream documentation same folder. Please audit the for more information call volume to the bucket provided. If you are using same bucket for multiple concurrent UNLOAD queries, try using different buckets for the same.
the maximum supported S3 key length. The partition column and value can be found in the object key generated. Troubleshooting UNLOAD 682 Amazon Timestream Category S3 throttles Developer Guide Error message How to troubleshoot We have detected that Amazon S3 is throttling Refer to S3 documentation here. S3 API call rate could the writes from UNLOAD be throttled when multiple command. See Amazon readers/writers access the Timestream documentation same folder. Please audit the for more information call volume to the bucket provided. If you are using same bucket for multiple concurrent UNLOAD queries, try using different buckets for the same. If you are using same bucket for multiple operations other than Timestream for LiveAnaly tics UNLOAD, consider moving UNLOAD results to separate bucket. Timestream for LiveAnalytics specific error codes This section contains the specific error codes for Timestream for LiveAnalytics. Timestream for LiveAnalytics write API errors InternalServerException HTTP Status Code: 500 ThrottlingException HTTP Status Code: 429 ValidationException HTTP Status Code: 400 ConflictException HTTP Status Code: 409 Timestream for LiveAnalytics specific error codes 683 Amazon Timestream AccessDeniedException You do not have sufficient access to perform this action. Developer Guide HTTP Status Code: 403 ServiceQuotaExceededException HTTP Status Code: 402 ResourceNotFoundException HTTP Status Code: 404 RejectedRecordsException HTTP Status Code: 419 InvalidEndpointException HTTP Status Code: 421 Timestream for LiveAnalytics query API errors ValidationException HTTP Status Code: 400 QueryExecutionException HTTP Status Code: 400 ConflictException HTTP Status Code: 409 ThrottlingException HTTP Status Code: 429 InternalServerException HTTP Status Code: 500 InvalidEndpointException HTTP Status Code: 421 Timestream for LiveAnalytics specific error codes 684 Amazon Timestream Quotas Developer Guide This topic describes current quotas, also referred to as limits, within Amazon Timestream for LiveAnalytics. Each quota applies on a per-Region basis unless otherwise specified. Topics • Default quotas • Service limits • Supported data types • Batch load • Naming constraints • Reserved keywords • System identifiers • UNLOAD Default quotas The following table contains the Timestream for LiveAnalytics quotas and the default values. displayName Description defaultValue Databases per account Tables per account Request rate for CRUD APIs The maximum number of databases you can create per 500 AWS account. The maximum number of tables you can create per AWS account. 50000 The maximum number of Create/Update/Delete requests allowed per second per account, in the current 1 Region. Quotas 685 Amazon Timestream Developer Guide displayName Description defaultValue Request rate for other APIs Scheduled queries per account Maximum count of active magnetic store partitions The maximum number of List/Describe/Prepare/ 5 ExecuteScheduledQueryAPI requests allowed per second per account, in the current Region. 10000 250 The maximum number of scheduled queries you can create per AWS account. The maximum number of active magnetic store partitions per database. A partition might remain active for up to six hours after receiving ingestion. Service limits The following table contains the Timestream for LiveAnalytics service limits and the default values. To edit data retention for a table from the console, see Edit a table. 
displayName Description defaultValue Future ingestion period in minutes 15 The maximum lead time (in minutes) for your time series data compared to the current system time. For example, if the future ingestion period is 15 minutes, then Timestream for LiveAnalytics will accept data that is up to 15 minutes Service limits 686 Amazon Timestream Developer Guide displayName Description defaultValue ahead of the current system time. Minimum retention period for memory store in hours The minimum duration (in hours) for which data must be 1 retained in the memory store per table. Maximum retention period for memory store in hours The maximum duration (in hours) for which data can be 8766 retained in the memory store per table. Minimum retention period for magnetic store in days The minimum duration (in days) for which data must be 1 retained in the magnetic store per table. Maximum retention period for magnetic store in days The maximum duration (in days) for which data can be 73000 retained in the magnetic store. This value is equivalent to 200 years. Default retention period for magnetic store in days The default value (in days) for which data is retained in the 73000 magnetic store per table. This value is equivalent to 200 years. Default retention period for memory store in hours The default duration (in hours) for which data is retained in the memory store. 6 Dimensions per table The maximum number of dimensions per table. 128 Service limits 687 Amazon Timestream Developer Guide displayName Description defaultValue Measure names per table The maximum number of unique measure names per 8192 table. Dimension name dimension value pair size per series The maximum size of dimension name and 2 Kilobytes dimension value pair per series. Maximum record size The maximum size of a record. 2 Kilobytes Records per WriteRecords API request The maximum number of records in a WriteRecords API 100
in hours The default duration (in hours) for which data is retained in the memory store. 6 Dimensions per table The maximum number of dimensions per table. 128 Service limits 687 Amazon Timestream Developer Guide displayName Description defaultValue Measure names per table The maximum number of unique measure names per 8192 table. Dimension name dimension value pair size per series The maximum size of dimension name and 2 Kilobytes dimension value pair per series. Maximum record size The maximum size of a record. 2 Kilobytes Records per WriteRecords API request The maximum number of records in a WriteRecords API 100 request. Dimension name length The maximum number of bytes for a Dimension name. 60 bytes Measure name length The maximum number of bytes for a Measure name. 256 bytes Database name length The maximum number of bytes for a Database name. 256 bytes Table name length The maximum number of bytes for a Table name. 256 bytes QueryString length in KiB Execution duration for queries in hours The maximum length (in KiB) of a query string in UTF-8 encoded characters for a query. The maximum execution duration (in hours) for a query. Queries that take longer will timeout. 256 1 Service limits 688 Amazon Timestream Developer Guide displayName Description defaultValue Query Insights The maximum number of Query API requests allowed 1 with query insights enabled per second per account, in the current Region. Metadata size for query result The maximum metadata size for a query result. 100 Kilobytes Data size for query result The maximum data size for a query result. 5 Gigabytes Measures per multi-measure record The maximum number of measures per multi-measure 256 record. Measure value size per multi- measure record The maximum size of measure values per multi- 2048 measure record. Unique measures across multi-measure records per The unique measures in all the multi-measure records 1024 table defined in a single table. Timestream Compute Units (TCUs) per account The default maximum TCUs per account. 200 Service limits 689 Amazon Timestream Developer Guide displayName Description defaultValue Maximum Provisioned Timestream Compute Units The maximum number of TCUs you can provision in 1000 (TCUs) per account. your account. Note Provisioned TCU is available only in the Asia Pacific (Mumbai) region. maxQueryTCU The maximum query TCUs you can set for your account. 1000 Supported data types The following table describes the supported data types for measure and dimension values. Description Timestream for LiveAnalytics value Supported data types for measure values. Big int, double, string, boolean, MULTI, Timestamp Supported data types for dimension values. String Batch load The current quotas, also referred to as limits, within batch load are as follows. Description Timestream for LiveAnalytics value Max batch load task size Max batch load task size cannot exceed 100 GB. Supported data types 690 Amazon Timestream Developer Guide Description Timestream for LiveAnalytics value Files quantity A batch load task cannot have more than 100 files. Maximum file size Maximum file size in a batch load task cannot exceed 5 GB. CSV file row size A row in a CSV file cannot exceed 16 MB. This is a hard limit which cannot be increased. Active batch load tasks A table cannot have more than 5 active batch load tasks and an account cannot have more than 10 active batch load tasks. Timestream for LiveAnalytics will throttle new batch load tasks until more resources are available. 
Naming constraints The following table describes naming constraints. Description Timestream for LiveAnalytics value The maximum length of a dimension name. 60 bytes The maximum length of a measure name. 256 bytes The maximum length of a table name or database name. 256 bytes Table and Database Name • We recommend you do not use System identifiers. • Can contain a-z A-Z 0-9 _ (underscore) - (dash) . (dot). • All names must be encoded as UTF-8, and are case sensitive. Naming constraints 691 Amazon Timestream Developer Guide Description Timestream for LiveAnalytics value Note Table and database names are compared using UTF-8 binary representation. This means that comparison for ASCII characters is case sensitive. Measure Name • Must not contain System identifiers or colon ':'. • Must not start with a reserved prefix (ts_, measure_value ). Note Table and database names are compared using UTF-8 binary representation. This means that comparison for ASCII characters is case sensitive. Dimension Name • Must not contain System identifiers, colon ':' or double quote ("). • Must not start with a reserved prefix (ts_, measure_value ). • Must not contain Unicode characters [0,31] listed here or "\u2028" or "\u2029". Note Dimension and measure names are compared using UTF-8 binary representation. This means that comparison for ASCII characters is case sensitive. All Column Names Column names can not be duplicated. Since multi-measure records represent dimensions and measures as columns, the name for a dimension
database names are compared using UTF-8 binary representation. This means that comparison for ASCII characters is case sensitive. Dimension Name • Must not contain System identifiers, colon ':' or double quote ("). • Must not start with a reserved prefix (ts_, measure_value ). • Must not contain Unicode characters [0,31] listed here or "\u2028" or "\u2029". Note Dimension and measure names are compared using UTF-8 binary representation. This means that comparison for ASCII characters is case sensitive. All Column Names Column names can not be duplicated. Since multi-measure records represent dimensions and measures as columns, the name for a dimension can not be the same as the name for a measure. Names are case sensitive. Naming constraints 692 Amazon Timestream Reserved keywords All of the following are reserved keywords: Developer Guide • ALTER • AND • AS • BETWEEN • BY • CASE • CAST • CONSTRAINT • CREATE • CROSS • CUBE • CURRENT_DATE • CURRENT_TIME • CURRENT_TIMESTAMP • CURRENT_USER • DEALLOCATE • DELETE • DESCRIBE • DISTINCT • DROP • ELSE • END • ESCAPE • EXCEPT • EXECUTE • EXISTS Reserved keywords 693 Amazon Timestream • EXTRACT • FALSE • FOR • FROM • FULL • GROUP • GROUPING • HAVING • IN • INNER • INSERT • INTERSECT • INTO • IS • JOIN • LEFT • LIKE • LOCALTIME • LOCALTIMESTAMP • NATURAL • NORMALIZE • NOT • NULL • ON • OR • ORDER • OUTER • PREPARE Reserved keywords Developer Guide 694 Developer Guide Amazon Timestream • RECURSIVE • RIGHT • ROLLUP • SELECT • TABLE • THEN • TRUE • UESCAPE • UNION • UNNEST • USING • VALUES • WHEN • WHERE • WITH System identifiers We reserve column names "measure_value", "ts_non_existent_col" and "time" to be Timestream for LiveAnalytics system identifiers. Additionally, column names may not start with "ts_" or "measure_name". System identifiers are case sensitive. Identifiers compared using UTF-8 binary representation. This means that comparison for identifiers is case sensitive. Note System identifiers may not be used for dimension or measure names. We recommend you do not use system identifiers for database or table names. UNLOAD For limits related to the UNLOAD command, see Using UNLOAD to export query results to S3 from Timestream. System identifiers 695 Amazon Timestream Developer Guide Query language reference Note This query language reference includes the following third-party documentation from the Trino Software Foundation (formerly Presto Software Foundation), which is licensed under the Apache License, Version 2.0. You may not use this file except in compliance with this license. To get a copy of the Apache License, Version 2.0, see the Apache website. Timestream for LiveAnalytics supports a rich query language for working with your data. You can see the available data types, operators, functions and constructs below. You can also get started right away with Timestream's query language in the Sample queries section. 
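For instance, a simple aggregation query against the sample database used elsewhere in this guide has the following shape. This is an illustrative sketch only; the database, table, and measure names ("sampleDB".DevOps and cpu_utilization) are assumed to exist in your account.

SELECT hostname, BIN(time, 1m) AS binned_time,
    ROUND(AVG(measure_value::double), 2) AS avg_cpu
FROM "sampleDB".DevOps
WHERE measure_name = 'cpu_utilization'
    AND time > ago(15m)
GROUP BY hostname, BIN(time, 1m)
ORDER BY binned_time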
Topics • Supported data types • Built-in time series functionality • SQL support • Logical operators • Comparison operators • Comparison functions • Conditional expressions • Conversion functions • Mathematical operators • Mathematical functions • String operators • String functions • Array operators • Array functions • Bitwise functions • Regular expression functions • Date / time operators Query language reference 696 Amazon Timestream • Date / time functions • Aggregate functions • Window functions • Sample queries Supported data types Developer Guide Timestream for LiveAnalytics's query language supports the following data types. Note Data types supported for writes are described in Data types. Data type int bigint boolean double Description Represents a 32-bit integer. Represents a 64-bit signed integer. One of the two truth values of logic, True and False. Represents a 64-bit variable-precision data type. Implements IEEE Standard 754 for Binary Floating-Point Arithmetic. Note The query language is for reading data. There are functions for Infinity and NaN double values which can be used in queries. But you cannot write those values to Timestream. varchar Variable length character data with a maximum size of 2KB. array[T,...] Contains one or more elements of a specified data type T, where T can be any of the data types supported in Timestrea m. Supported data types 697 Amazon Timestream Data type row(T,...) date time Developer Guide Description Contains one or more named fields of data type T. The fields may be of any data type supported by Timestream, and are accessed with the dot field reference operator: . Represents a date in the form YYYY-MM-DD. where YYYY is the year, MM is the month, and DD is the day, respectively. The supported range is from 1970-01-01 to 2262-04-11 . Example: 1971-02-03 Represents the time of day in UTC. The time datatype is represented in the form HH.MM.SS.sssssssss . Supports nanosecond precision. Example: 17:02:07.496000000 timestamp Represents an instance in time using nanosecond precision
date time Developer Guide Description Contains one or more named fields of data type T. The fields may be of any data type supported by Timestream, and are accessed with the dot field reference operator: . Represents a date in the form YYYY-MM-DD. where YYYY is the year, MM is the month, and DD is the day, respectively. The supported range is from 1970-01-01 to 2262-04-11 . Example: 1971-02-03 Represents the time of day in UTC. The time datatype is represented in the form HH.MM.SS.sssssssss . Supports nanosecond precision. Example: 17:02:07.496000000 timestamp Represents an instance in time using nanosecond precision time in UTC. YYYY-MM-DD hh:mm:ss.sssssssss Query supports timestamps in the range 1677-09-21 00:12:44.000000000 854775807 . to 2262-04-11 23:47:16. Supported data types 698 Amazon Timestream Data type interval Developer Guide Description Represents an interval of time as a string literal Xt, composed of two parts, X and t. X is an numeric value greater than or equal to 0, and t is a unit of time like second or hour. The unit is not pluralize d. The unit of time t is must be one of the following string literals: • nanosecond • microsecond • millisecond • second • minute • hour • day • ns (same as nanosecond ) • us (same as microsecond ) • ms (same as millisecond ) • s (same as second) • m (same as minute) • h (same as hour) • d (same as day) Examples: 17s 12second 21hour Supported data types 699 Amazon Timestream Developer Guide Data type Description 2d timeseries[row(tim Represents the values of a measure recorded over a time estamp, T,...)] interval as an array composed of row objects. Each row contains a timestamp and one or more measure values of data type T, where T can be any one of bigint, boolean, double, or varchar. Rows are assorted in ascending order by timestamp . The timeseries datatype represents the values of a measure over time. unknown Represents null data. Built-in time series functionality Timestream for LiveAnalytics provides built-in time series functionality that treat time series data as a first class concept. Built-in time series functionality can be divided into two categories: views and functions. You can read about each construct below. Topics • Timeseries views • Time series functions Timeseries views Timestream for LiveAnalytics supports the following functions for transforming your data to the timeseries data type: Topics • CREATE_TIME_SERIES • UNNEST Built-in time series functionality 700 Amazon Timestream CREATE_TIME_SERIES Developer Guide CREATE_TIME_SERIES is an aggregation function that takes all the raw measurements of a time series (time and measure values) and returns a timeseries data type. The syntax of this function is as follows: CREATE_TIME_SERIES(time, measure_value::<data_type>) where <data_type> is the data type of the measure value and can be one of bigint, boolean, double, or varchar. The second parameter cannot be null. Consider the CPU utilization of EC2 instances stored in a table named metrics as shown below: Time region az vpc instance_ id measure_n ame measure_v alue::dou us-east-1 us-east-1d vpc-1a2b3 c4d i-1234567 890abcdef cpu_utili zation 0 ble 35.0 us-east-1 us-east-1d vpc-1a2b3 c4d i-1234567 890abcdef cpu_utili zation 38.2 0 us-east-1 us-east-1d vpc-1a2b3 c4d i-1234567 890abcdef cpu_utili zation 45.3 us-east-1 us-east-1d us-east-1 us-east-1d vpc-1a2b3 c4d vpc-1a2b3 c4d 0 i-1234567 890abcdef 1 i-1234567 890abcdef 1 cpu_utili zation 54.1 cpu_utili zation 42.5 2019-12-0 4 19:00:00. 000000000 2019-12-0 4 19:00:01. 
000000000 2019-12-0 4 19:00:02. 000000000 2019-12-0 4 19:00:00. 000000000 2019-12-0 4 19:00:01. 000000000 Built-in time series functionality 701 Amazon Timestream Developer Guide Time region az vpc instance_ id measure_n ame measure_v alue::dou 2019-12-0 4 19:00:02. 000000000 us-east-1 us-east-1d vpc-1a2b3 c4d i-1234567 890abcdef cpu_utili zation 1 ble 33.7 Running the query: SELECT region, az, vpc, instance_id, CREATE_TIME_SERIES(time, measure_value::double) as cpu_utilization FROM metrics WHERE measure_name=’cpu_utilization’ GROUP BY region, az, vpc, instance_id will return all series that have cpu_utilization as a measure value. In this case, we have two series: region az vpc instance_id cpu_utilization us-east-1 us-east-1d vpc-1a2b3c4d i-1234567 890abcdef0 [{time: 2019-12-0 4 19:00:00. 000000000 , measure_v alue::double: 35.0}, {time: 2019-12-0 4 19:00:01. 000000000 , measure_v alue::double: 38.2}, {time: 2019-12-0 4 19:00:02. 000000000 , measure_v Built-in time series functionality 702 Amazon Timestream Developer Guide region az vpc instance_id cpu_utilization us-east-1 us-east-1d vpc-1a2b3c4d i-1234567 890abcdef1 alue::double: 45.3}] [{time: 2019-12-0 4 19:00:00. 000000000 , measure_v alue::double: 35.1}, {time: 2019-12-0 4 19:00:01. 000000000 , measure_v alue::double: 38.5}, {time: 2019-12-0 4 19:00:02. 000000000 , measure_v alue::double: 45.7}] UNNEST UNNEST is a table function that enables you to transform timeseries data into the flat model. The syntax is as follows: UNNEST transforms a timeseries into two columns, namely, time and value. You can also use aliases with UNNEST as shown below: UNNEST(timeseries) AS <alias_name> (time_alias, value_alias) where <alias_name> is the alias for the flat table, time_alias is the alias for the time column and value_alias is the alias for the value
[{time: 2019-12-0 4 19:00:00. 000000000 , measure_v alue::double: 35.1}, {time: 2019-12-0 4 19:00:01. 000000000 , measure_v alue::double: 38.5}, {time: 2019-12-0 4 19:00:02. 000000000 , measure_v alue::double: 45.7}] UNNEST UNNEST is a table function that enables you to transform timeseries data into the flat model. The syntax is as follows: UNNEST transforms a timeseries into two columns, namely, time and value. You can also use aliases with UNNEST as shown below: UNNEST(timeseries) AS <alias_name> (time_alias, value_alias) where <alias_name> is the alias for the flat table, time_alias is the alias for the time column and value_alias is the alias for the value column. Built-in time series functionality 703 Amazon Timestream Developer Guide For example, consider the scenario where some of the EC2 instances in your fleet are configured to emit metrics at a 5 second interval, others emit metrics at a 15 second interval, and you need the average metrics for all instances at a 10 second granularity for the past 6 hours. To get this data, you transform your metrics to the time series model using CREATE_TIME_SERIES. You can then use INTERPOLATE_LINEAR to get the missing values at 10 second granularity. Next, you transform the data back to the flat model using UNNEST, and then use AVG to get the average metrics across all instances. WITH interpolated_timeseries AS ( SELECT region, az, vpc, instance_id, INTERPOLATE_LINEAR( CREATE_TIME_SERIES(time, measure_value::double), SEQUENCE(ago(6h), now(), 10s)) AS interpolated_cpu_utilization FROM timestreamdb.metrics WHERE measure_name= ‘cpu_utilization’ AND time >= ago(6h) GROUP BY region, az, vpc, instance_id ) SELECT region, az, vpc, instance_id, avg(t.cpu_util) FROM interpolated_timeseries CROSS JOIN UNNEST(interpolated_cpu_utilization) AS t (time, cpu_util) GROUP BY region, az, vpc, instance_id The query above demonstrates the use of UNNEST with an alias. Below is an example of the same query without using an alias for UNNEST: WITH interpolated_timeseries AS ( SELECT region, az, vpc, instance_id, INTERPOLATE_LINEAR( CREATE_TIME_SERIES(time, measure_value::double), SEQUENCE(ago(6h), now(), 10s)) AS interpolated_cpu_utilization FROM timestreamdb.metrics WHERE measure_name= ‘cpu_utilization’ AND time >= ago(6h) GROUP BY region, az, vpc, instance_id ) SELECT region, az, vpc, instance_id, avg(value) FROM interpolated_timeseries CROSS JOIN UNNEST(interpolated_cpu_utilization) GROUP BY region, az, vpc, instance_id Built-in time series functionality 704 Amazon Timestream Time series functions Developer Guide Amazon Timestream for LiveAnalytics supports timeseries functions, such as derivatives, integrals, and correlations, as well as others, to derive deeper insights from your time series data. This section provides usage information for each of these functions, as well as sample queries. Select a topic below to learn more. Topics • Interpolation functions • Derivatives functions • Integral functions • Correlation functions • Filter and reduce functions Interpolation functions If your time series data is missing values for events at certain points in time, you can estimate the values of those missing events using interpolation. Amazon Timestream supports four variants of interpolation: linear interpolation, cubic spline interpolation, last observation carried forward (locf) interpolation, and constant interpolation. This section provides usage information for the Timestream for LiveAnalytics interpolation functions, as well as sample queries. 
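For example, the constant-fill variant can be used in the same way as the linear and last-observation-carried-forward examples shown later in this section. The following query is an illustrative sketch, not part of the original guide; it assumes the same sampleDB.DevOps table and host used in the other interpolation examples and fills gaps with the constant value 0.0:

WITH binned_timeseries AS (
    SELECT hostname, BIN(time, 30s) AS binned_timestamp,
        ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization
    FROM "sampleDB".DevOps
    WHERE measure_name = 'cpu_utilization'
        AND hostname = 'host-Hovjv'
        AND time > ago(2h)
    GROUP BY hostname, BIN(time, 30s)
), interpolated_timeseries AS (
    SELECT hostname,
        INTERPOLATE_FILL(
            CREATE_TIME_SERIES(binned_timestamp, avg_cpu_utilization),
            SEQUENCE(min(binned_timestamp), max(binned_timestamp), 15s),
            0.0) AS interpolated_avg_cpu_utilization
    FROM binned_timeseries
    GROUP BY hostname
)
SELECT time, ROUND(value, 2) AS interpolated_cpu
FROM interpolated_timeseries
CROSS JOIN UNNEST(interpolated_avg_cpu_utilization)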
Usage information Function Output data type Description interpolate_linear timeseries (timeseries, array[timestamp]) interpolate_linear double (timeseries, timestamp) Fills in missing data using linear interpolation. Fills in missing data using linear interpolation. Built-in time series functionality 705 Amazon Timestream Developer Guide Function Output data type Description interpolate_spline timeseries _cubic(timeseries, array[timestamp]) interpolate_spline double _cubic(timeseries, timestamp) interpolate_locf(t timeseries imeseries, array[tim estamp]) interpolate_locf(t double imeseries, timestamp ) interpolate_fill(t timeseries imeseries, array[tim estamp], double) interpolate_fill(t double imeseries, timestamp , double) Query examples Example Fills in missing data using cubic spline interpolation. Fills in missing data using cubic spline interpolation. Fills in missing data using the last sampled value. Fills in missing data using the last sampled value. Fills in missing data using a constant value. Fills in missing data using a constant value. Find the average CPU utilization binned at 30 second intervals for a specific EC2 host over the past 2 hours, filling in the missing values using linear interpolation: WITH binned_timeseries AS ( SELECT hostname, BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization FROM "sampleDB".DevOps WHERE measure_name = 'cpu_utilization' AND hostname = 'host-Hovjv' Built-in time series functionality 706 Amazon Timestream Developer Guide AND time > ago(2h) GROUP BY hostname, BIN(time, 30s) ), interpolated_timeseries AS ( SELECT hostname, INTERPOLATE_LINEAR( CREATE_TIME_SERIES(binned_timestamp, avg_cpu_utilization), SEQUENCE(min(binned_timestamp), max(binned_timestamp), 15s)) AS interpolated_avg_cpu_utilization FROM binned_timeseries GROUP BY hostname ) SELECT time, ROUND(value, 2) AS interpolated_cpu FROM interpolated_timeseries CROSS JOIN UNNEST(interpolated_avg_cpu_utilization) Example Find the average CPU utilization binned at 30 second intervals for a specific EC2 host over the past 2 hours, filling in the missing values using interpolation based on the last observation carried forward: WITH binned_timeseries AS ( SELECT hostname, BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization FROM "sampleDB".DevOps WHERE measure_name = 'cpu_utilization' AND hostname = 'host-Hovjv' AND time > ago(2h) GROUP BY hostname, BIN(time, 30s) ), interpolated_timeseries AS ( SELECT hostname, INTERPOLATE_LOCF( CREATE_TIME_SERIES(binned_timestamp, avg_cpu_utilization), SEQUENCE(min(binned_timestamp), max(binned_timestamp), 15s)) AS interpolated_avg_cpu_utilization FROM binned_timeseries GROUP BY
BY hostname ) SELECT time, ROUND(value, 2) AS interpolated_cpu FROM interpolated_timeseries CROSS JOIN UNNEST(interpolated_avg_cpu_utilization) Example Find the average CPU utilization binned at 30 second intervals for a specific EC2 host over the past 2 hours, filling in the missing values using interpolation based on the last observation carried forward: WITH binned_timeseries AS ( SELECT hostname, BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization FROM "sampleDB".DevOps WHERE measure_name = 'cpu_utilization' AND hostname = 'host-Hovjv' AND time > ago(2h) GROUP BY hostname, BIN(time, 30s) ), interpolated_timeseries AS ( SELECT hostname, INTERPOLATE_LOCF( CREATE_TIME_SERIES(binned_timestamp, avg_cpu_utilization), SEQUENCE(min(binned_timestamp), max(binned_timestamp), 15s)) AS interpolated_avg_cpu_utilization FROM binned_timeseries GROUP BY hostname ) SELECT time, ROUND(value, 2) AS interpolated_cpu FROM interpolated_timeseries CROSS JOIN UNNEST(interpolated_avg_cpu_utilization) Built-in time series functionality 707 Amazon Timestream Derivatives functions Developer Guide Derivatives are used calculate the rate of change for a given metric and can be used to proactively respond to an event. For example, suppose you calculate the derivative of the CPU utilization of EC2 instances over the past 5 minutes, and you notice a significant positive derivative. This can be indicative of increased demand on your workload, so you may decide want to spin up more EC2 instances to better handle your workload. Amazon Timestream supports two variants of derivative functions. This section provides usage information for the Timestream for LiveAnalytics derivative functions, as well as sample queries. Usage information Function Output data type Description derivative_linear( timeseries timeseries, interval) non_negative_deriv timeseries ative_linear(times eries, interval) Query examples Example Calculates the derivativ e of each point in the timeseries for the specified interval. Same as derivativ e_linear(timeserie s, interval) returns positive values. , but only Find the rate of change in the CPU utilization every 5 minutes over the past 1 hour: SELECT DERIVATIVE_LINEAR(CREATE_TIME_SERIES(time, measure_value::double), 5m) AS result FROM “sampleDB”.DevOps WHERE measure_name = 'cpu_utilization' AND hostname = 'host-Hovjv' and time > ago(1h) GROUP BY hostname, measure_name Built-in time series functionality 708 Amazon Timestream Example Developer Guide Calculate the rate of increase in errors generated by one or more microservices: WITH binned_view as ( SELECT bin(time, 5m) as binned_timestamp, ROUND(AVG(measure_value::double), 2) as value FROM “sampleDB”.DevOps WHERE micro_service = 'jwt' AND time > ago(1h) AND measure_name = 'service_error' GROUP BY bin(time, 5m) ) SELECT non_negative_derivative_linear(CREATE_TIME_SERIES(binned_timestamp, value), 1m) as rateOfErrorIncrease FROM binned_view Integral functions You can use integrals to find the area under the curve per unit of time for your time series events. As an example, suppose you're tracking the volume of requests received by your application per unit of time. In this scenario, you can use the integral function to determine the total volume of requests served per specified interval over a specific time period. Amazon Timestream supports one variant of integral functions. This section provides usage information for the Timestream for LiveAnalytics integral function, as well as sample queries. 
Usage information Function Output data type Description integral_trapezoid double al(timeseries(doub le)) integral_trapezoid al(timeseries(doub le), interval day to second) Approximates the integral per the specified interval day to second for the timeseries provided, using the trapezoidal rule. The interval day to second parameter is optional and the default is 1s. For more Built-in time series functionality 709 Amazon Timestream Developer Guide Function Output data type Description information about intervals, see Interval and duration. integral_trapezoid al(timeseries(bigi nt)) integral_trapezoid al(timeseries(bigi nt), interval day to second) integral_trapezoid al(timeseries(inte ger), interval day to second) integral_trapezoid al(timeseries(inte ger)) Query examples Example Calculate the total volume of requests served per five minutes over the past hour by a specific host: SELECT INTEGRAL_TRAPEZOIDAL(CREATE_TIME_SERIES(time, measure_value::double), 5m) AS result FROM sample.DevOps WHERE measure_name = 'request' AND hostname = 'host-Hovjv' AND time > ago (1h) GROUP BY hostname, measure_name Correlation functions Given two similar length time series, correlation functions provide a correlation coefficient, which explains how the two time series trend over time. The correlation coefficient ranges from -1.0 to 1.0. -1.0 indicates that the two time series trend in opposite directions at the same rate. whereas 1.0 indicates that the two timeseries trend in the same direction at the same rate. A value of 0 Built-in time series functionality 710 Amazon Timestream Developer Guide indicates no correlation between the two time series. For example, if the price of oil increases, and the stock price of an oil company increases, the trend of the price increase of oil and the price increase of the oil company will have a positive correlation coefficient. A high positive correlation coefficient would indicate that the two prices trend at a similar rate. Similarly, the correlation coefficient between bond prices and bond yields is negative, indicating that these two values trends in the opposite direction over time. Amazon Timestream supports two variants of correlation functions. This section provides usage information for the Timestream for LiveAnalytics correlation functions, as well as sample queries. Usage information Function Output data type Description correlate_pearson( double timeseries, timeseries) correlate_spearman
increases, the trend of the price increase of oil and the price increase of the oil company will have a positive correlation coefficient. A high positive correlation coefficient would indicate that the two prices trend at a similar rate. Similarly, the correlation coefficient between bond prices and bond yields is negative, indicating that these two values trends in the opposite direction over time. Amazon Timestream supports two variants of correlation functions. This section provides usage information for the Timestream for LiveAnalytics correlation functions, as well as sample queries. Usage information Function Output data type Description correlate_pearson( double timeseries, timeseries) correlate_spearman double (timeseries, timeseries) Calculates Pearson's correlati on coefficient for the two timeseries . The timeserie s must have the same timestamps. Calculates Spearman's correlation coefficient for the two timeseries . The timeseries must have the same timestamps. Query examples Example WITH cte_1 AS ( SELECT INTERPOLATE_LINEAR( CREATE_TIME_SERIES(time, measure_value::double), SEQUENCE(min(time), max(time), 10m)) AS result FROM sample.DevOps WHERE measure_name = 'cpu_utilization' AND hostname = 'host-Hovjv' AND time > ago(1h) GROUP BY hostname, measure_name Built-in time series functionality 711 Developer Guide Amazon Timestream ), cte_2 AS ( SELECT INTERPOLATE_LINEAR( CREATE_TIME_SERIES(time, measure_value::double), SEQUENCE(min(time), max(time), 10m)) AS result FROM sample.DevOps WHERE measure_name = 'cpu_utilization' AND hostname = 'host-Hovjv' AND time > ago(1h) GROUP BY hostname, measure_name ) SELECT correlate_pearson(cte_1.result, cte_2.result) AS result FROM cte_1, cte_2 Filter and reduce functions Amazon Timestream supports functions for performing filter and reduce operations on time series data. This section provides usage information for the Timestream for LiveAnalytics filter and reduce functions, as well as sample queries. Usage information Function Output data type Description filter(timeseries( timeseries(T) T), function(T, Boolean)) reduce(timeseries( R T), initialState S, inputFunction(S, T, S), outputFunction(S, R)) Constructs a time series from an the input time series, using values for which the passed function returns true. Returns a single value, reduced from the time series. The inputFunction will be invoked on each element in timeseries in order. In addition to taking the current element, inputFunction takes the current state (initiall y initialState ) and returns the new state. The outputFunction will be Built-in time series functionality 712 Amazon Timestream Developer Guide Function Output data type Description invoked to turn the final state into the result value. The outputFunction can be an identity function. 
Query examples Example Construct a time series of CPU utilization of a host and filter points with measurement greater than 70: WITH time_series_view AS ( SELECT INTERPOLATE_LINEAR( CREATE_TIME_SERIES(time, ROUND(measure_value::double,2)), SEQUENCE(ago(15m), ago(1m), 10s)) AS cpu_user FROM sample.DevOps WHERE hostname = 'host-Hovjv' and measure_name = 'cpu_utilization' AND time > ago(30m) GROUP BY hostname ) SELECT FILTER(cpu_user, x -> x.value > 70.0) AS cpu_above_threshold from time_series_view Example Construct a time series of CPU utilization of a host and determine the sum squared of the measurements: WITH time_series_view AS ( SELECT INTERPOLATE_LINEAR( CREATE_TIME_SERIES(time, ROUND(measure_value::double,2)), SEQUENCE(ago(15m), ago(1m), 10s)) AS cpu_user FROM sample.DevOps WHERE hostname = 'host-Hovjv' and measure_name = 'cpu_utilization' AND time > ago(30m) GROUP BY hostname ) SELECT REDUCE(cpu_user, Built-in time series functionality 713 Amazon Timestream DOUBLE '0.0', (s, x) -> x.value * x.value + s, s -> s) from time_series_view Example Developer Guide Construct a time series of CPU utilization of a host and determine the fraction of samples that are above the CPU threshold: WITH time_series_view AS ( SELECT INTERPOLATE_LINEAR( CREATE_TIME_SERIES(time, ROUND(measure_value::double,2)), SEQUENCE(ago(15m), ago(1m), 10s)) AS cpu_user FROM sample.DevOps WHERE hostname = 'host-Hovjv' and measure_name = 'cpu_utilization' AND time > ago(30m) GROUP BY hostname ) SELECT ROUND( REDUCE(cpu_user, -- initial state CAST(ROW(0, 0) AS ROW(count_high BIGINT, count_total BIGINT)), -- function to count the total points and points above a certain threshold (s, x) -> CAST(ROW(s.count_high + IF(x.value > 70.0, 1, 0), s.count_total + 1) AS ROW(count_high BIGINT, count_total BIGINT)), -- output function converting the counts to fraction above threshold s -> IF(s.count_total = 0, NULL, CAST(s.count_high AS DOUBLE) / s.count_total)), 4) AS fraction_cpu_above_threshold from time_series_view SQL support Timestream for LiveAnalytics supports some common SQL constructs. You can read more below. Topics • SELECT • Subquery support • SHOW statements • DESCRIBE statements SQL support 714 Amazon Timestream • UNLOAD SELECT Developer Guide SELECT statements can be used to retrieve data from one or more tables. Timestream's query language supports the following syntax for SELECT statements: [ WITH with_query [, ...] ] SELECT [ ALL | DISTINCT ] select_expr [, ...] [ function (expression) OVER ( [ PARTITION BY partition_expr_list ] [ ORDER BY order_list ] [ frame_clause ] ) [ FROM from_item [, ...] ] [ WHERE condition ] [ GROUP BY [ ALL | DISTINCT ] grouping_element [, ...] ] [ HAVING condition] [ { UNION | INTERSECT | EXCEPT } [ ALL | DISTINCT ] select ] [ ORDER BY order_list ] [ LIMIT [ count | ALL ] ] where • function (expression) is one of the supported window functions. • partition_expr_list is: expression
with_query [, ...] ] SELECT [ ALL | DISTINCT ] select_expr [, ...] [ function (expression) OVER ( [ PARTITION BY partition_expr_list ] [ ORDER BY order_list ] [ frame_clause ] ) [ FROM from_item [, ...] ] [ WHERE condition ] [ GROUP BY [ ALL | DISTINCT ] grouping_element [, ...] ] [ HAVING condition] [ { UNION | INTERSECT | EXCEPT } [ ALL | DISTINCT ] select ] [ ORDER BY order_list ] [ LIMIT [ count | ALL ] ] where • function (expression) is one of the supported window functions. • partition_expr_list is: expression | column_name [, expr_list ] • order_list is: expression | column_name [ ASC | DESC ] [ NULLS FIRST | NULLS LAST ] [, order_list ] • frame_clause is: ROWS | RANGE { UNBOUNDED PRECEDING | expression PRECEDING | CURRENT ROW } | {BETWEEN { UNBOUNDED PRECEDING | expression { PRECEDING | FOLLOWING } | CURRENT ROW} SQL support 715 Amazon Timestream AND { UNBOUNDED FOLLOWING | expression { PRECEDING | FOLLOWING } | CURRENT ROW }} • from_item is one of: Developer Guide table_name [ [ AS ] alias [ ( column_alias [, ...] ) ] ] from_item join_type from_item [ ON join_condition | USING ( join_column [, ...] ) ] • join_type is one of: [ INNER ] JOIN LEFT [ OUTER ] JOIN RIGHT [ OUTER ] JOIN FULL [ OUTER ] JOIN • grouping_element is one of: () expression Subquery support Timestream supports subqueries in EXISTS and IN predicates. The EXISTS predicate determines if a subquery returns any rows. The IN predicate determines if values produced by the subquery match the values or expression of in IN clause. The Timestream query language supports correlated and other subqueries. SELECT t.c1 FROM (VALUES 1, 2, 3, 4, 5) AS t(c1) WHERE EXISTS (SELECT t.c2 FROM (VALUES 1, 2, 3) AS t(c2) WHERE t.c1= t.c2 ) ORDER BY t.c1 c1 1 SQL support 716 Amazon Timestream Developer Guide c1 2 3 SELECT t.c1 FROM (VALUES 1, 2, 3, 4, 5) AS t(c1) WHERE t.c1 IN (SELECT t.c2 FROM (VALUES 2, 3, 4) AS t(c2) ) ORDER BY t.c1 c1 2 3 4 SHOW statements You can view all the databases in an account by using the SHOW DATABASES statement. The syntax is as follows: SHOW DATABASES [LIKE pattern] where the LIKE clause can be used to filter database names. You can view all the tables in an account by using the SHOW TABLES statement. The syntax is as follows: SHOW TABLES [FROM database] [LIKE pattern] where the FROM clause can be used to filter database names and the LIKE clause can be used to filter table names. SQL support 717 Amazon Timestream Developer Guide You can view all the measures for a table by using the SHOW MEASURES statement. The syntax is as follows: SHOW MEASURES FROM database.table [LIKE pattern] where the FROM clause will be used to specify the database and table name and the LIKE clause can be used to filter measure names. DESCRIBE statements You can view the metadata for a table by using the DESCRIBE statement. The syntax is as follows: DESCRIBE database.table where table contains the table name. The describe statement returns the column names and data types for the table. UNLOAD Timestream for LiveAnalytics supports an UNLOAD command as an extension to its SQL support. Data types supported by UNLOAD are described in Supported data types. The time and unknown types do not apply to UNLOAD. UNLOAD (SELECT statement) TO 's3://bucket-name/folder' WITH ( option = expression [, ...] 
) where option is { partitioned_by = ARRAY[ col_name[,…] ] | format = [ '{ CSV | PARQUET }' ] | compression = [ '{ GZIP | NONE }' ] | encryption = [ '{ SSE_KMS | SSE_S3 }' ] | kms_key = '<string>' | field_delimiter ='<character>' | escaped_by = '<character>' | include_header = ['{true, false}'] | max_file_size = '<value>' } SQL support 718 Amazon Timestream SELECT statement Developer Guide The query statement used to select and retrieve data from one or more Timestream for LiveAnalytics tables. (SELECT column 1, column 2, column 3 from database.table where measure_name = "ABC" and timestamp between ago (1d) and now() ) TO clause TO 's3://bucket-name/folder' or TO 's3://access-point-alias/folder' The TO clause in the UNLOAD statement specifies the destination for the output of the query results. You need to provide the full path, including either Amazon S3 bucket-name or Amazon S3 access-point-alias with folder location on Amazon S3 where Timestream for LiveAnalytics writes the output file objects. The S3 bucket should be owned by the same account and in the same region. In addition to the query result set, Timestream for LiveAnalytics writes the manifest and metadata files to specified destination folder. PARTITIONED_BY clause partitioned_by = ARRAY [col_name[,…] ,
and now() ) TO clause TO 's3://bucket-name/folder' or TO 's3://access-point-alias/folder' The TO clause in the UNLOAD statement specifies the destination for the output of the query results. You need to provide the full path, including either Amazon S3 bucket-name or Amazon S3 access-point-alias with folder location on Amazon S3 where Timestream for LiveAnalytics writes the output file objects. The S3 bucket should be owned by the same account and in the same region. In addition to the query result set, Timestream for LiveAnalytics writes the manifest and metadata files to specified destination folder. PARTITIONED_BY clause partitioned_by = ARRAY [col_name[,…] , (default: none) The partitioned_by clause is used in queries to group and analyze data at a granular level. When you export your query results to the S3 bucket, you can choose to partition the data based on one or more columns in the select query. When partitioning the data, the exported data is divided into subsets based on the partition column and each subset is stored in a separate folder. Within the results folder that contains your exported data, a sub-folder folder/results/partition column = partition value/ is automatically created. However, note that partitioned columns are not included in the output file. partitioned_by is not a mandatory clause in the syntax. If you choose to export the data without any partitioning, you can exclude the clause in the syntax. SQL support 719 Amazon Timestream Example Developer Guide Assuming you are monitoring clickstream data of your website and have 5 channels of traffic namely direct, Social Media, Organic Search, Other, and Referral. When exporting the data, you can choose to partition the data using the column Channel. Within your data folder, s3://bucketname/results, you will have five folders each with their respective channel name, for instance, s3://bucketname/results/channel=Social Media/. Within this folder you will find the data of all the customers that landed on your website through the Social Media channel. Similarly, you will have other folders for the remaining channels. Exported data partitioned by Channel column FORMAT format = [ '{ CSV | PARQUET }' , default: CSV The keywords to specify the format of the query results written to your S3 bucket. You can export the data either as a comma separated value (CSV) using a comma (,) as the default delimiter or in the Apache Parquet format, an efficient open columnar storage format for analytics. COMPRESSION compression = [ '{ GZIP | NONE }' ], default: GZIP You can compress the exported data using compression algorithm GZIP or have it uncompressed by specifying the NONE option. SQL support 720 Amazon Timestream ENCRYPTION Developer Guide encryption = [ '{ SSE_KMS | SSE_S3 }' ], default: SSE_S3 The output files on Amazon S3 are encrypted using your selected encryption option. In addition to your data, the manifest and metadata files are also encrypted based on your selected encryption option. We currently support SSE_S3 and SSE_KMS encryption. SSE_S3 is a server- side encryption with Amazon S3 encrypting the data using 256-bit advanced encryption standard (AES) encryption. SSE_KMS is a server-side encryption to encrypt data using customer- managed keys. KMS_KEY kms_key = '<string>' KMS Key is a customer-defined key to encrypt exported query results. KMS Key is securely managed by AWS Key Management Service (AWS KMS) and used to encrypt data files on Amazon S3. 
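Putting several of these options together, an UNLOAD statement that exports partitioned Parquet output encrypted with a customer managed key might look like the following sketch. The database, table, column names, bucket, and KMS key ARN are placeholders for illustration only, and the partition column is kept as the last column of the SELECT list:

UNLOAD (
    SELECT user_id, time, channel
    FROM sampleDB.clickstream
    WHERE time BETWEEN ago(1d) AND now()
)
TO 's3://amzn-s3-demo-bucket/results'
WITH (
    partitioned_by = ARRAY['channel'],
    format = 'PARQUET',
    encryption = 'SSE_KMS',
    kms_key = 'arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab'
)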
FIELD_DELIMITER field_delimiter ='<character>' , default: (,) When exporting the data in CSV format, this field specifies a single ASCII character that is used to separate fields in the output file, such as pipe character (|), a comma (,), or tab (/t). The default delimiter for CSV files is a comma character. If a value in your data contains the chosen delimiter, the delimiter will be quoted with a quote character. For instance, if the value in your data contains Time,stream, then this value will be quoted as "Time,stream" in the exported data. The quote character used by Timestream for LiveAnalytics is double quotes ("). Avoid specifying the carriage return character (ASCII 13, hex 0D, text '\r') or the line break character (ASCII 10, hex 0A, text '\n') as the FIELD_DELIMITER if you want to include headers in the CSV, since that will prevent many parsers from being able to parse the headers correctly in the resulting CSV output. ESCAPED_BY escaped_by = '<character>', default: (\) SQL support 721 Amazon Timestream Developer Guide When exporting the data in CSV format, this field specifies the character that should be treated as an escape character in the data file written to S3 bucket. Escaping happens in the following scenarios: 1. If the value itself contains the quote character (") then it will be escaped using an escape character. For example, if the value is Time"stream, where (\) is the configured escape character, then it will be escaped as Time\"stream. 2. If the value contains
the headers correctly in the resulting CSV output. ESCAPED_BY escaped_by = '<character>', default: (\) SQL support 721 Amazon Timestream Developer Guide When exporting the data in CSV format, this field specifies the character that should be treated as an escape character in the data file written to S3 bucket. Escaping happens in the following scenarios: 1. If the value itself contains the quote character (") then it will be escaped using an escape character. For example, if the value is Time"stream, where (\) is the configured escape character, then it will be escaped as Time\"stream. 2. If the value contains the configured escape character, it will be escaped. For example, if the value is Time\stream, then it will be escaped as Time\\stream. Note If the exported output contains complex data type in the like Arrays, Rows or Timeseries, it will be serialized as a JSON string. Following is an example. Data type Actual value How the value is escaped in CSV format [serialized JSON string] Array Row [ 23,24,25 ] "[23,24,25]" ( x=23.0, y=hello ) "{\"x\":23.0,\"y\": \"hello\"}" Timeseries [ ( time=1970-01-01 "[{\"time\":\"1970 00:00:00.000000010 -01-01 00:00:00. , value=100.0 ), 000000010Z\",\"val ( time=1970-01-01 ue\":100.0},{\"tim 00:00:00.000000012, e\":\"1970-01-01 value=120.0 ) ] 00:00:00.000000012 Z\",\"value\":120. 0}]" INCLUDE_HEADER include_header = 'true' , default: 'false' SQL support 722 Amazon Timestream Developer Guide When exporting the data in CSV format, this field lets you include column names as the first row of the exported CSV data files. The accepted values are 'true' and 'false' and the default value is 'false'. Text transformation options such as escaped_by and field_delimiter apply to headers as well. Note When including headers, it is important that you not select a carriage return character (ASCII 13, hex 0D, text '\r') or a line break character (ASCII 10, hex 0A, text '\n') as the FIELD_DELIMITER, since that will prevent many parsers from being able to parse the headers correctly in the resulting CSV output. MAX_FILE_SIZE max_file_size = 'X[MB|GB]' , default: '78GB' This field specifies the maximum size of the files that the UNLOAD statement creates in Amazon S3. The UNLOAD statement can create multiple files but the maximum size of each file written to Amazon S3 will be approximately what is specified in this field. The value of the field must be between 16 MB and 78 GB, inclusive. You can specify it in integer such as 12GB, or in decimals such as 0.5GB or 24.7MB. The default value is 78 GB. The actual file size is approximated when the file is being written, so the actual maximum size may not be exactly equal to the number you specify. Logical operators Timestream for LiveAnalytics supports the following logical operators. Operator Description Example AND OR NOT True if both values are true a AND b True if either value is true a OR b True if the value is false NOT a Logical operators 723 Amazon Timestream Developer Guide • The result of an AND comparison may be NULL if one or both sides of the expression are NULL. • If at least one side of an AND operator is FALSE the expression evaluates to FALSE. • The result of an OR comparison may be NULL if one or both sides of the expression are NULL. • If at least one side of an OR operator is TRUE the expression evaluates to TRUE. • The logical complement of NULL is NULL. 
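For example, the following query illustrates this NULL handling; the CAST is used only to give the NULL literal an explicit boolean type, and the expected results are consistent with the truth tables that follow.

SELECT (CAST(null AS boolean) AND false) AS a_and_b,
    (CAST(null AS boolean) OR true) AS a_or_b,
    (CAST(null AS boolean) AND true) AS a_and_b_null

Here the first expression evaluates to false, the second to true, and the third to null.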
The following truth table demonstrates the handling of NULL in AND and OR: A null false null true null false true false true B null null false null true false false true true A and b A or b null false false null null false false false true null null null true true false true true true The following truth table demonstrates the handling of NULL in NOT: A null true false Not a null false true Logical operators 724 Amazon Timestream Comparison operators Developer Guide Timestream for LiveAnalytics supports the following comparison operators. Operator < > <= >= = <> != Note Description Less than Greater than Less than or equal to Greater than or equal to Equal Not equal Not equal • The BETWEEN operator tests if a value is within a specified range. The syntax is as follows: BETWEEN min AND max The presence of NULL in a BETWEEN or NOT BETWEEN statement will result in the statement evaluating to NULL. • IS NULL and IS NOT NULL operators test whether a value is null (undefined). Using NULL with IS NULL evaluates to true. • In SQL, a NULL value signifies an unknown value. Comparison functions Timestream for LiveAnalytics supports the following comparison functions. Topics Comparison operators 725 Amazon Timestream • greatest() • least() • ALL(), ANY() and SOME() greatest() Developer Guide The greatest() function returns the largest of the provided values. It
The syntax is as follows: BETWEEN min AND max The presence of NULL in a BETWEEN or NOT BETWEEN statement will result in the statement evaluating to NULL. • IS NULL and IS NOT NULL operators test whether a value is null (undefined). Using NULL with IS NULL evaluates to true. • In SQL, a NULL value signifies an unknown value. Comparison functions Timestream for LiveAnalytics supports the following comparison functions. Topics Comparison operators 725 Amazon Timestream • greatest() • least() • ALL(), ANY() and SOME() greatest() Developer Guide The greatest() function returns the largest of the provided values. It returns NULL if any of the provided values are NULL. The syntax is as follows. greatest(value1, value2, ..., valueN) least() The least() function returns the smallest of the provided values. It returns NULL if any of the provided values are NULL. The syntax is as follows. least(value1, value2, ..., valueN) ALL(), ANY() and SOME() The ALL, ANY and SOME quantifiers can be used together with comparison operators in the following way. Expression A = ALL(...) A <> ALL(...) A < ALL(...) A = ANY(...) A <> ANY(...) Meaning Evaluates to true when A is equal to all values. Evaluates to true when A does not match any value. Evaluates to true when A is smaller than the smallest value. Evaluates to true when A is equal to any of the values. Evaluates to true when A does not match one or more values. Comparison functions 726 Amazon Timestream Expression A < ANY(...) Examples and usage notes Note Developer Guide Meaning Evaluates to true when A is smaller than the biggest value. When using ALL, ANY or SOME, the keyword VALUES should be used if the comparison values are a list of literals. Example: ANY() An example of ANY() in a query statement as follows. SELECT 11.7 = ANY (VALUES 12.0, 13.5, 11.7) An alternative syntax for the same operation is as follows. SELECT 11.7 = ANY (SELECT 12.0 UNION ALL SELECT 13.5 UNION ALL SELECT 11.7) In this case, ANY() evaluates to True. Example: ALL() An example of ALL() in a query statement as follows. SELECT 17 < ALL (VALUES 19, 20, 15); An alternative syntax for the same operation is as follows. SELECT 17 < ALL (SELECT 19 UNION ALL SELECT 20 UNION ALL SELECT 15); In this case, ALL() evaluates to False. Example: SOME() An example of SOME() in a query statement as follows. Comparison functions 727 Amazon Timestream Developer Guide SELECT 50 >= SOME (VALUES 53, 77, 27); An alternative syntax for the same operation is as follows. SELECT 50 >= SOME (SELECT 53 UNION ALL SELECT 77 UNION ALL SELECT 27); In this case, SOME() evaluates to True. Conditional expressions Timestream for LiveAnalytics supports the following conditional expressions. Topics • The CASE statement • The IF statement • The COALESCE statement • The NULLIF statement • The TRY statement The CASE statement The CASE statement searches each value expression from left to right until it finds one that equals expression. If it finds a match, the result for the matching value is returned. If no match is found, the result from the ELSE clause is returned if it exists; otherwise null is returned. The syntax is as follows: CASE expression WHEN value THEN result [ WHEN ... ] [ ELSE result ] END Timestream also supports the following syntax for CASE statements. In this syntax, the "searched" form evaluates each boolean condition from left to right until one is true and returns the matching result. If no conditions are true, the result from the ELSE clause is returned if it exists; otherwise null is returned. 
See below for the alternate syntax: Conditional expressions 728 Amazon Timestream Developer Guide CASE WHEN condition THEN result [ WHEN ... ] [ ELSE result ] END The IF statement The IF statement evaluates a condition to be true or false and returns the appropriate value. Timestream supports the following two syntax representations for IF: if(condition, true_value) This syntax evaluates and returns true_value if condition is true; otherwise null is returned and true_value is not evaluated. if(condition, true_value, false_value) This syntax evaluates and returns true_value if condition is true, otherwise evaluates and returns false_value. Examples SELECT if(true, 'example 1'), if(false, 'example 2'), if(true, 'example 3 true', 'example 3 false'), if(false, 'example 4 true', 'example 4 false') _col0 example 1 _col1 - null The COALESCE statement _col2 _col3 example 3 true example 4 false COALESCE returns the first non-null value in an argument list. The syntax is as follows: Conditional expressions 729 Amazon Timestream Developer Guide coalesce(value1, value2[,...]) The NULLIF statement The IF statement evaluates a condition to be true or false and returns the appropriate value. Timestream supports the following two syntax representations for IF: NULLIF returns null if value1 equals
and returns false_value. Examples SELECT if(true, 'example 1'), if(false, 'example 2'), if(true, 'example 3 true', 'example 3 false'), if(false, 'example 4 true', 'example 4 false') _col0 example 1 _col1 - null The COALESCE statement _col2 _col3 example 3 true example 4 false COALESCE returns the first non-null value in an argument list. The syntax is as follows: Conditional expressions 729 Amazon Timestream Developer Guide coalesce(value1, value2[,...]) The NULLIF statement The IF statement evaluates a condition to be true or false and returns the appropriate value. Timestream supports the following two syntax representations for IF: NULLIF returns null if value1 equals value2; otherwise it returns value1. The syntax is as follows: nullif(value1, value2) The TRY statement The TRY function evaluates an expression and handles certain types of errors by returning null. The syntax is as follows: try(expression) Conversion functions Timestream for LiveAnalytics supports the following conversion functions. Topics • cast() • try_cast() cast() The syntax of the cast function to explicitly cast a value as a type is as follows. cast(value AS type) try_cast() Timestream for LiveAnalytics also supports the try_cast function that is similar to cast but returns null if cast fails. The syntax is as follows. Conversion functions 730 Amazon Timestream Developer Guide try_cast(value AS type) Mathematical operators Timestream for LiveAnalytics supports the following mathematical operators. Operator + - * / % Description Addition Subtraction Multiplication Division (integer division performs truncation) Modulus (remainder) Mathematical functions Timestream for LiveAnalytics supports the following mathematical functions. Function abs(x) Output data type Description [same as input] Returns the absolute value of x. cbrt(x) double Returns the cube root of x. ceiling(x) or ceil(x) [same as input] degrees(x) e() double double Returns x rounded up to the nearest integer. Converts angle x in radians to degrees. Returns the constant Euler's number. Mathematical operators 731 Amazon Timestream Function exp(x) double Output data type Description Developer Guide floor(x) [same as input] from_base(string,radix) bigint ln(x) log2(x) log10(x) double double double mod(n,m) [same as input] Returns Euler's number raised to the power of x. Returns x rounded down to the nearest integer. Returns the value of string interpreted as a base-radix number. Returns the natural logarithm of x. Returns the base 2 logarithm of x. Returns the base 10 logarithm of x. Returns the modulus (remainder) of n divided by m. pi() double Returns the constant Pi. pow(x, p) or power(x, p) double radians(x) double rand() or random() double random(n) [same as input] Returns x raised to the power of p. Converts angle x in degrees to radians. Returns a pseudo-random value in the range 0.0 1.0. Returns a pseudo-random number between 0 and n (exclusive). Mathematical functions 732 Amazon Timestream Function round(x) Output data type Description Developer Guide [same as input] round(x,d) [same as input] sign(x) [same as input] sqrt(x) to_base(x, radix) double varchar truncate(x) double acos(x) asin(x) double double Returns x rounded to the nearest integer. Returns x rounded to d decimal places. Returns the signum function of x, that is: • 0 if the argument is 0 • 1 if the argument is greater than 0 • -1 if the argument is less than 0. 
For double arguments, the function additionally returns: • NaN if the argument is NaN • 1 if the argument is +Infinity • -1 if the argument is - Infinity. Returns the square root of x. Returns the base-radi x representation of x. Returns x rounded to integer by dropping digits after decimal point. Returns the arc cosine of x. Returns the arc sine of x. Mathematical functions 733 Amazon Timestream Function atan(x) atan2(y, x) cos(x) cosh(x) sin(x) tan(x) tanh(x) infinity() is_finite(x) is_infinite(x) is_nan(x) nan() Output data type Description Developer Guide double double double double double double double double boolean boolean boolean double Returns the arc tangent of x. Returns the arc tangent of y / x. Returns the cosine of x. Returns the hyperbolic cosine of x. Returns the sine of x. Returns the tangent of x. Returns the hyperbolic tangent of x. Returns the constant representing positive infinity. Determine if x is finite. Determine if x is infinite. Determine if x is not-a-num ber. Returns the constant representing not-a-number. String operators Timestream for LiveAnalytics supports the || operator for concatenating one or more strings. String operators 734 Amazon Timestream String functions Note Developer Guide The input data type of these functions is assumed to be varchar unless otherwise specified. Function chr(n) varchar Output data type Description codepoint(x) integer concat(x1, ..., xN) varchar hamming_distance(x1,x2) bigint length(x) bigint levenshtein_distance(x1, x2) bigint Returns the Unicode code point n as a varchar. Returns the Unicode code point of the only character of str. Returns the concatenation of x1, x2, ..., xN. Returns the Hamming distance of x1 and x2, i.e. the number of positions at which the corresponding character s are different. Note
concatenating one or more strings. String operators 734 Amazon Timestream String functions Note Developer Guide The input data type of these functions is assumed to be varchar unless otherwise specified. Function chr(n) varchar Output data type Description codepoint(x) integer concat(x1, ..., xN) varchar hamming_distance(x1,x2) bigint length(x) bigint levenshtein_distance(x1, x2) bigint Returns the Unicode code point n as a varchar. Returns the Unicode code point of the only character of str. Returns the concatenation of x1, x2, ..., xN. Returns the Hamming distance of x1 and x2, i.e. the number of positions at which the corresponding character s are different. Note that the two varchar inputs must have the same length. Returns the length of x in characters. Returns the Levenshtein edit distance of x1 and x2, i.e. the minimum number of single- character edits (insertions, deletions or substitutions) needed to change x1 into x2. lower(x) varchar Converts x to lowercase. String functions 735 Amazon Timestream Developer Guide Function Output data type Description lpad(x1, bigint size, x2) varchar ltrim(x) varchar replace(x1, x2) varchar replace(x1, x2, x3) varchar Reverse(x) varchar rpad(x1, bigint size, x2) varchar rtrim(x) varchar split(x1, x2) array(varchar) Left pads x1 to size character s with x2. If size is less than the length of x1, the result is truncated to size characters. size must not be negative and x2 must be non-empty. Removes leading whitespace from x. Removes all instances of x2 from x1. Replaces all instances of x2 with x3 in x1. Returns x with the characters in reverse order. Right pads x1 to size characters with x2. If size is less than the length of x1, the result is truncated to size characters. size must not be negative and x2 must be non- empty. Removes trailing whitespace from x. Splits x1 on delimiter x2 and returns an array. String functions 736 Amazon Timestream Developer Guide Function Output data type Description split(x1, x2, bigint limit) array(varchar) split_part(x1, x2, bigint pos) varchar strpos(x1, x2) bigint strpos(x1, x2,bigint instance) bigint strrpos(x1, x2) bigint strrpos(x1, x2, bigint instance) bigint Splits x1 on delimiter x2 and returns an array. The last element in the array always contain everything left in the x1. limit must be a positive number. Splits x1 on delimiter x2 and returns the varchar field at pos. Field indexes start with 1. If pos is larger than the number of fields, then null is returned. Returns the starting position of the first instance of x2 in x1. Positions start with 1. If not found, 0 is returned. Returns the position of the Nth instance of x2 in x1. Instance must be a positive number. Positions start with 1. If not found, 0 is returned. Returns the starting position of the last instance of x2 in x1. Positions start with 1. If not found, 0 is returned. Returns the position of the Nth instance of x2 in x1 starting from the end of x1. instance must be a positive number. Positions start with 1. If not found, 0 is returned. String functions 737 Amazon Timestream Developer Guide Function Output data type Description position(x2 IN x1) bigint substr(x, bigint start) varchar substr(x, bigint start, bigint len) varchar trim(x) upper(x) varchar varchar Array operators Returns the starting position of the first instance of x2 in x1. Positions start with 1. If not found, 0 is returned. Returns the rest of x from the starting position start. Positions start with 1. 
A negative starting position is interpreted as being relative to the end of x. Returns a substring from x of length len from the starting position start. Positions start with 1. A negative starting position is interpreted as being relative to the end of x. Removes leading and trailing whitespace from x. Converts x to uppercase. Timestream for LiveAnalytics supports the following array operators. Operator Description [] || Access an element of an array where the first index starts at 1. Concatenate an array with another array or element of the same type. Array operators 738 Amazon Timestream Array functions Developer Guide Timestream for LiveAnalytics supports the following array functions. Function Output data type Description array_distinct(x) array array_intersect(x, y) array array_union(x, y) array array_except(x, y) array Remove duplicate values from the array x. SELECT array_dis tinct(ARRAY[1,2,2,3]) Example result: [ 1,2,3 ] Returns an array of the elements in the intersection of x and y, without duplicates. SELECT array_int ersect(ARRAY[1,2,3], ARRAY[3,4,5]) Example result: [ 3 ] Returns an array of the elements in the union of x and y, without duplicates. SELECT array_uni on(ARRAY[1,2,3], ARRAY[3,4,5]) Example result: [ 1,2,3,4,5 ] Returns an array of elements in x but not in y, without duplicates. Array functions 739 Amazon Timestream Developer Guide Function Output data type Description array_join(x, delimiter, null_replacement) varchar array_max(x) same as array elements array_min(x) same as array elements SELECT array_exc ept(ARRAY[1,2,3], ARRAY[3,4,5])
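The || string operator and the array operators described above can be illustrated without any table. This is a minimal sketch; the expected results follow from the concatenation and subscript semantics described above.

SELECT 'Time' || 'stream'

Expected result: Timestream

SELECT (ARRAY[1, 2, 3] || 4)[2]

Expected result: 2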
x. SELECT array_dis tinct(ARRAY[1,2,2,3]) Example result: [ 1,2,3 ] Returns an array of the elements in the intersection of x and y, without duplicates. SELECT array_int ersect(ARRAY[1,2,3], ARRAY[3,4,5]) Example result: [ 3 ] Returns an array of the elements in the union of x and y, without duplicates. SELECT array_uni on(ARRAY[1,2,3], ARRAY[3,4,5]) Example result: [ 1,2,3,4,5 ] Returns an array of elements in x but not in y, without duplicates. Array functions 739 Amazon Timestream Developer Guide Function Output data type Description array_join(x, delimiter, null_replacement) varchar array_max(x) same as array elements array_min(x) same as array elements SELECT array_exc ept(ARRAY[1,2,3], ARRAY[3,4,5]) Example result: [ 1,2 ] Concatenates the elements of the given array using the delimiter and an optional string to replace nulls. SELECT array_joi n(ARRAY[1,2,3], ';', '') Example result: 1;2;3 Returns the maximum value of input array. SELECT array_max (ARRAY[1,2,3]) Example result: 3 Returns the minimum value of input array. SELECT array_min (ARRAY[1,2,3]) Example result: 1 Array functions 740 Amazon Timestream Developer Guide Function Output data type Description array_position(x, element) bigint array_remove(x, element) array array_sort(x) array Returns the position of the first occurrence of the element in array x (or 0 if not found). SELECT array_pos ition(ARRAY[3,4,5,9], 5) Example result: 3 Remove all elements that equal element from array x. SELECT array_rem ove(ARRAY[3,4,5,9], 4) Example result: [ 3,5,9 ] Sorts and returns the array x. The elements of x must be orderable. Null elements will be placed at the end of the returned array. SELECT array_sor t(ARRAY[6,8,2,9,3]) Example result: [ 2,3,6,8,9 ] Array functions 741 Amazon Timestream Developer Guide Function Output data type Description arrays_overlap(x, y) boolean cardinality(x) bigint concat(array1, array2, ..., arrayN) array Tests if arrays x and y have any non-null elements in common. Returns null if there are no non-null elements in common but either array contains null. SELECT arrays_ov erlap(ARRAY[6,8,2, 9,3], ARRAY[6,8]) Example result: true Returns the size of the array x. SELECT cardinali ty(ARRAY[6,8,2,9,3]) Example result: 5 Concatenates the arrays array1, array2, ..., arrayN. SELECT concat(AR RAY[6,8,2,9,3], ARRAY[11,32], ARRAY[6,8,2,0,14]) Example result: [ 6,8,2,9,3,11,32,6, 8,2,0,14 ] Array functions 742 Amazon Timestream Developer Guide Function Output data type Description element_at(array(E), index) E repeat(element, count) array reverse(x) array Returns element of array at given index. If index < 0, element_at accesses elements from the last to the first. SELECT element_a t(ARRAY[6,8,2,9,3], 1) Example result: 6 Repeat element for count times. SELECT repeat(1, 3) Example result: [ 1,1,1 ] Returns an array which has the reversed order of array x. SELECT reverse(A RRAY[6,8,2,9,3]) Example result: [ 3,9,2,8,6 ] Array functions 743 Amazon Timestream Developer Guide Function Output data type Description sequence(start, stop) array(bigint) sequence(start, stop, step) array(bigint) Generate a sequence of integers from start to stop, incrementing by 1 if start is less than or equal to stop, otherwise -1. SELECT sequence(3, 8) Example result: [ 3,4,5,6,7,8 ] Generate a sequence of integers from start to stop, incrementing by step. 
SELECT sequence(3, 15, 2) Example result: [ 3,5,7,9,11,13,15 ] Array functions 744 Amazon Timestream Developer Guide Function Output data type Description sequence(start, stop) array(timestamp) Generate a sequence of timestamps from start date to stop date, incrementing by 1 day. SELECT sequence( '2023-04-02 19:26:12. 941000000', '2023-04- 06 19:26:12.941000000 ', 1d) Example result: [ 2023-04-02 19:26:12.941000000 ,2023-04-03 19:26:12.941000000 ,2023-04-04 19:26:12.941000000 ,2023-04-05 19:26:12.941000000 ,2023-04-06 19:26:12.941000000 ] Array functions 745 Amazon Timestream Developer Guide Function Output data type Description sequence(start, stop, step) array(timestamp) Generate a sequence of timestamps from start to stop, incrementing by step. The data type of step is interval. SELECT sequence( '2023-04-02 19:26:12. 941000000', '2023-04- 10 19:26:12.941000000 ', 2d) Example result: [ 2023-04-02 19:26:12.941000000 ,2023-04-04 19:26:12.941000000 ,2023-04-06 19:26:12.941000000 ,2023-04-08 19:26:12.941000000 ,2023-04-10 19:26:12.941000000 ] shuffle(x) array Generate a random permutati on of the given array x. SELECT shuffle(A RRAY[6,8,2,9,3]) Example result: [ 6,3,2,9,8 ] Array functions 746 Amazon Timestream Developer Guide Function Output data type Description slice(x, start, length) array zip(array1, array2[, ...]) array(row) Subsets array x starting from index start (or starting from the end if start is negative) with a length of length. SELECT slice(ARR AY[6,8,2,9,3], 1, 3) Example result: [ 6,8,2 ] Merges the given arrays, element-wise, into a single array of rows. If the arguments have an uneven length, missing values are filled with NULL. SELECT zip(ARRAY [6,8,2,9,3], ARRAY[15, 24]) Example result: [ ( 6, 15 ),( 8, 24 ),( 2, - ),( 9, - ),( 3, - ) ] Bitwise functions Timestream for LiveAnalytics supports the following bitwise functions. Function Output data type Description bit_count(bigint, bigint) bigint (two's complement) Returns the count of bits in the first bigint parameter where the second parameter Bitwise functions 747 Amazon Timestream Developer Guide Function Output data type Description is a bit signed integer such as 8 or 64. SELECT bit_count(19, 8) Example result: 3 SELECT bit_count(19, 2) Example result: Number must be represent able with the bits specified. 19
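The arrays produced by sequence are often expanded into individual rows with CROSS JOIN UNNEST, the same pattern used in the interpolation examples later in this guide. The following is a minimal sketch; it assumes a table-less inner SELECT is accepted, as it is for the scalar examples in this section, and the aliases g, s, t, and n are arbitrary:

WITH g AS (
    SELECT sequence(2, 6, 2) AS s
)
SELECT n
FROM g
CROSS JOIN UNNEST(s) AS t(n)

Expected result: three rows containing 2, 4, and 6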
[6,8,2,9,3], ARRAY[15, 24]) Example result: [ ( 6, 15 ),( 8, 24 ),( 2, - ),( 9, - ),( 3, - ) ] Bitwise functions Timestream for LiveAnalytics supports the following bitwise functions. Function Output data type Description bit_count(bigint, bigint) bigint (two's complement) Returns the count of bits in the first bigint parameter where the second parameter Bitwise functions 747 Amazon Timestream Developer Guide Function Output data type Description is a bit signed integer such as 8 or 64. SELECT bit_count(19, 8) Example result: 3 SELECT bit_count(19, 2) Example result: Number must be represent able with the bits specified. 19 can not be represented with 2 bits Returns the bitwise AND of the bigint parameters. SELECT bitwise_and(12, 7) Example result: 4 Returns the bitwise NOT of the bigint parameter. SELECT bitwise_not(12) Example result: -13 bitwise_and(bigint, bigint) bigint (two's complement) bitwise_not(bigint) bigint (two's complement) Bitwise functions 748 Amazon Timestream Developer Guide Function Output data type Description bitwise_or(bigint, bigint) bigint (two's complement) bitwise_xor(bigint, bigint) bigint (two's complement) Returns the bitwise OR of the bigint parameters. SELECT bitwise_or(12, 7) Example result: 15 Returns the bitwise XOR of the bigint parameters. SELECT bitwise_xor(12, 7) Example result: 11 Regular expression functions The regular expression functions in Timestream for LiveAnalytics support the Java pattern syntax. Timestream for LiveAnalytics supports the following regular expression functions. Function Output data type Description regexp_extract_all(string, pattern) array(varchar) Returns the substring(s) matched by the regular expression pattern in string. SELECT regexp_ex tract_all('example expect complex', 'ex \w') Example result: [ exa,exp ] Regular expression functions 749 Amazon Timestream Developer Guide Function Output data type Description regexp_extract_all(string, pattern, group) array(varchar) regexp_extract(string, pattern) varchar regexp_extract(string, pattern, group) varchar Finds all occurrences of the regular expression pattern in string and returns the capturing group number group. SELECT regexp_ex tract_all('example expect complex', '(ex) (\w)', 2) Example result: [ a,p ] Returns the first substring matched by the regular expression pattern in string. SELECT regexp_ex tract('example expect', 'ex\w') Example result: exa Finds the first occurrence of the regular expression pattern in string and returns the capturing group number group. SELECT regexp_ex tract('example expect', '(ex)(\w)', 2) Example result: a Regular expression functions 750 Amazon Timestream Developer Guide Function Output data type Description regexp_like(string, pattern) boolean regexp_replace(string, pattern) varchar Evaluates the regular expression pattern and determines if it is contained within string. This function is similar to the LIKE operator, except that the pattern only needs to be contained within string, rather than needing to match all of string. In other words, this performs a contains operation rather than a match operation. You can match the entire string by anchoring the pattern using ^ and $. SELECT regexp_li ke('example', 'ex') Example result: true Removes every instance of the substring matched by the regular expression pattern from string. 
SELECT regexp_re place('example expect', 'expect') Example result: example Regular expression functions 751 Amazon Timestream Developer Guide Function Output data type Description regexp_replace(string, pattern, replacement) varchar Replaces every instance of the substring matched by the regex pattern in string with replacement. Capturing groups can be referenced in replacement using $g for a numbered group or ${name} for a named group. A dollar sign ($) may be included in the replacement by escaping it with a backslash (\$). SELECT regexp_re place('example expect', 'expect', 'surprise') Example result: example surprise Regular expression functions 752 Amazon Timestream Developer Guide Function Output data type Description regexp_replace(string, pattern, function) varchar regexp_split(string, pattern) array(varchar) Replaces every instance of the substring matched by the regular expression pattern in string using function. The lambda expression function is invoked for each match with the capturing groups passed as an array. Capturing group numbers start at one; there is no group for the entire match (if you need this, surround the entire expression with parenthesis). SELECT regexp_re place('example', '(\w)', x -> upper(x[1 ])) Example result: EXAMPLE Splits string using the regular expression pattern and returns an array. Trailing empty strings are preserved. SELECT regexp_sp lit('example', 'x') Example result: [ e,ample ] Regular expression functions 753 Amazon Timestream Date / time operators Note Developer Guide Timestream for LiveAnalytics does not support negative time values. Any operation resulting in negative time results in error. Timestream for LiveAnalytics supports the following operations on timestamps, dates, and intervals. Description Addition Subtraction Operator + - Topics • Operations • Addition • Subtraction Operations The result type of an operation is based on the operands. Interval literals such as 1day and 3s can be used. SELECT date '2022-05-21' + interval '2' day SELECT date '2022-05-21' + 2d SELECT date '2022-05-21' + 2day Example result for each: 2022-05-23 Date / time operators 754 Amazon Timestream Developer Guide Interval units include second, minute, hour, day, week, month, and year. But in some cases not all are applicable. For example seconds, minutes, and hours can not be added to or subtracted from a date.
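Finer-grained units can still be added to or subtracted from a timestamp. A quick illustration; the expected result follows standard interval arithmetic, shown with the nanosecond display precision used elsewhere in this guide:

SELECT timestamp '2022-05-21 00:00:00' + 1h

Expected result: 2022-05-21 01:00:00.000000000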
Addition Subtraction Operator + - Topics • Operations • Addition • Subtraction Operations The result type of an operation is based on the operands. Interval literals such as 1day and 3s can be used. SELECT date '2022-05-21' + interval '2' day SELECT date '2022-05-21' + 2d SELECT date '2022-05-21' + 2day Example result for each: 2022-05-23 Date / time operators 754 Amazon Timestream Developer Guide Interval units include second, minute, hour, day, week, month, and year. But in some cases not all are applicable. For example seconds, minutes, and hours can not be added to or subtracted from a date. SELECT interval '4' year + interval '2' month Example result: 4-2 SELECT typeof(interval '4' year + interval '2' month) Example result: interval year to month Result type of interval operations may be 'interval year to month' or 'interval day to second' depending on the operands. Intervals can be added to or subtracted from dates and timestamps. But a date or timestamp cannot be added to or subtracted from a date or timestamp. To find intervals or durations related to dates or timestamps, see date_diff and related functions in Interval and duration. Addition Example SELECT date '2022-05-21' + interval '2' day Example result: 2022-05-23 Example SELECT typeof(date '2022-05-21' + interval '2' day) Example result: date Example SELECT interval '2' year + interval '4' month Example result: 2-4 Date / time operators 755 Amazon Timestream Example Developer Guide SELECT typeof(interval '2' year + interval '4' month) Example result: interval year to month Subtraction Example SELECT timestamp '2022-06-17 01:00' - interval '7' hour Example result: 2022-06-16 18:00:00.000000000 Example SELECT typeof(timestamp '2022-06-17 01:00' - interval '7' hour) Example result: timestamp Example SELECT interval '6' day - interval '4' hour Example result: 5 20:00:00.000000000 Example SELECT typeof(interval '6' day - interval '4' hour) Example result: interval day to second Date / time functions Note Timestream for LiveAnalytics does not support negative time values. Any operation resulting in negative time results in error. Date / time functions 756 Amazon Timestream Developer Guide Timestream for LiveAnalytics uses UTC timezone for date and time. Timestream supports the following functions for date and time. Topics • General and conversion • Interval and duration • Formatting and parsing • Extraction General and conversion Timestream for LiveAnalytics supports the following general and conversion functions for date and time. Function Output data type Description current_date date current_time time Returns current date in UTC. No parentheses used. SELECT current_date Example result: 2022-07-0 7 Note This is also a reserved keyword. For a list of reserved keywords, see Reserved keywords. Returns current time in UTC. No parentheses used. SELECT current_time Date / time functions 757 Amazon Timestream Developer Guide Function Output data type Description current_timestamp or now() timestamp Example result: 17:41:52. 827000000 Note This is also a reserved keyword. For a list of reserved keywords, see Reserved keywords. Returns current timestamp in UTC. SELECT current_t imestamp Example result: 2022-07-0 7 17:42:32.939000000 Note This is also a reserved keyword. For a list of reserved keywords, see Reserved keywords. Date / time functions 758 Amazon Timestream Developer Guide Function Output data type Description current_timezone() varchar The value will be 'UTC.' 
date(varchar(x)), date(time stamp) date last_day_of_month( timestamp), last_day_ of_month(date) date from_iso8601_timestamp(stri ng) timestamp Timestream uses UTC timezone for date and time. SELECT current_t imezone() Example result: UTC SELECT date(TIMESTAMP '2022-07-07 17:44:43. 771000000') Example result: 2022-07-0 7 SELECT last_day_ of_month(TIMESTAMP '2022-07-07 17:44:43. 771000000') Example result: 2022-07-3 1 Parses the ISO 8601 timestamp into internal timestamp format. SELECT from_iso8 601_timestamp('202 2-06-17T08:04:05.0 00000000+05:00') Example result: 2022-06-1 7 03:04:05.000000000 Date / time functions 759 Amazon Timestream Developer Guide Function Output data type Description from_iso8601_date(string) date to_iso8601(timestamp), to_iso8601(date) varchar from_milliseconds(bigint) timestamp Parses the ISO 8601 date string into internal timestamp format for UTC 00:00:00 of the specified date. SELECT from_iso8 601_date('2022-07- 17') Example result: 2022-07-1 7 Returns an ISO 8601 formatted string for the input. SELECT to_iso860 1(from_iso8601_dat e('2022-06-17')) Example result: 2022-06-1 7 SELECT from_mill iseconds(1) Example result: 1970-01-0 1 00:00:00.001000000 Date / time functions 760 Amazon Timestream Developer Guide Function Output data type Description from_nanoseconds(bigint) timestamp from_unixtime(double) timestamp localtime time select from_nano seconds(300000001) Example result: 1970-01-0 1 00:00:00.300000001 Returns a timestamp which corresponds to the provided unixtime. SELECT from_unixtime(1) Example result: 1970-01-0 1 00:00:01.000000000 Returns current time in UTC. No parentheses used. SELECT localtime Example result: 17:58:22. 654000000 Note This is also a reserved keyword. For a list of reserved keywords, see Reserved keywords. Date / time functions 761 Amazon Timestream Developer Guide Function Output data type Description localtimestamp timestamp to_milliseconds(interval day to second), to_milliseconds(ti bigint mestamp) Returns current timestamp in UTC. No parentheses used. SELECT localtimestamp Example result: 2022-07-0 7 17:59:04.368000000 Note This is also a reserved keyword. For a list of reserved keywords, see Reserved keywords. SELECT to_millis econds(INTERVAL '2' DAY + INTERVAL '3' HOUR) Example result: 183600000 SELECT
1970-01-0 1 00:00:01.000000000 Returns current time in UTC. No parentheses used. SELECT localtime Example result: 17:58:22. 654000000 Note This is also a reserved keyword. For a list of reserved keywords, see Reserved keywords. Date / time functions 761 Amazon Timestream Developer Guide Function Output data type Description localtimestamp timestamp to_milliseconds(interval day to second), to_milliseconds(ti bigint mestamp) Returns current timestamp in UTC. No parentheses used. SELECT localtimestamp Example result: 2022-07-0 7 17:59:04.368000000 Note This is also a reserved keyword. For a list of reserved keywords, see Reserved keywords. SELECT to_millis econds(INTERVAL '2' DAY + INTERVAL '3' HOUR) Example result: 183600000 SELECT to_millis econds(TIMESTAMP '2022-06-17 17:44:43. 771000000') Example result: 165548788 3771 Date / time functions 762 Amazon Timestream Developer Guide Function Output data type Description to_nanoseconds(interval day to second), to_nanose conds(timestamp) bigint to_unixtime(timestamp) double SELECT to_nanose conds(INTERVAL '2' DAY + INTERVAL '3' HOUR) Example result: 183600000 000000 SELECT to_nanose conds(TIMESTAMP '2022-06-17 17:44:43. 771000678') Example result: 165548788 3771000678 Returns unixtime for the provided timestamp. SELECT to_unixti me('2022-06-17 17:44:43.771000000') Example result: 1.6554878 837710001E9 Date / time functions 763 Amazon Timestream Developer Guide Function Output data type Description date_trunc(unit, timestamp) timestamp Returns the timestamp truncated to unit, where unit is one of [second, minute, hour, day, week, month, quarter, or year]. SELECT date_trun c('minute', TIMESTAMP '2022-06-17 17:44:43. 771000000') Example result: 2022-06-1 7 17:44:00.000000000 Interval and duration Timestream for LiveAnalytics supports the following interval and duration functions for date and time. Function Output data type Description date_add(unit, bigint, date), date_add(unit, bigint, time), date_add(varchar(x), bigint, timestamp) timestamp Adds a bigint of units, where unit is one of [second, minute, hour, day, week, month, quarter, or year]. SELECT date_add('hour', 9, TIMESTAMP '2022-06- 17 00:00:00') Example result: 2022-06-1 7 09:00:00.000000000 Date / time functions 764 Amazon Timestream Developer Guide Function Output data type Description date_diff(unit, date, date) , date_diff(unit, time, time) , date_diff(unit, timestamp, timestamp) bigint parse_duration(string) interval Returns a difference, where unit is one of [second, minute, hour, day, week, month, quarter, or year]. SELECT date_diff('day', DATE '2020-03-01', DATE '2020-03-02') Example result: 1 Parses the input string to return an interval equivalent. SELECT parse_dur ation('42.8ms') Example result: 0 00:00:00.042800000 SELECT typeof(pa rse_duration('42.8 ms')) Example result: interval day to second Date / time functions 765 Amazon Timestream Developer Guide Function Output data type Description bin(timestamp, interval) timestamp Rounds down the timestamp parameter's integer value to the nearest multiple of the interval parameter's integer value. The meaning of this return value may not be obvious. It is calculated using integer arithmetic first by dividing the timestamp integer by the interval integer and then by multiplying the result by the interval integer. Keeping in mind that a timestamp specifies a UTC point in time as a number of fractions of a second elapsed since the POSIX epoch (January 1, 1970), the return value will seldom align with calendar units. 
For example, if you specify an interval of 30 days, all the days since the epoch are divided into 30-day increments, and the start of the most recent 30- day increment is returned, which has no relationship to calendar months. Here are some examples: bin(TIMESTAMP '2022-06- 17 10:15:20', 5m) Date / time functions 766 Amazon Timestream Developer Guide Function Output data type Description ago(interval) timestamp interval literals such as 1h, 1d, and 30m interval ==> 2022-06-17 10:15:00.000000000 bin(TIMESTAMP '2022-06- 17 10:15:20', 1d) ==> 2022-06-17 00:00:00.000000000 bin(TIMESTAMP '2022-06- 17 10:15:20', 10day) ==> 2022-06-17 00:00:00.000000000 bin(TIMESTAMP '2022-06- 17 10:15:20', 30day) ==> 2022-05-28 00:00:00.000000000 Returns the value correspon ding to current_timestamp interval. SELECT ago(1d) Example result: 2022-07-0 6 21:08:53.245000000 Interval literals are a convenience for parse_dur ation(string). For example, 1d is the same as parse_dur ation('1d') . This allows the use of the literals wherever an interval is used. For example, ago(1d) and bin(<timestamp> , 1m). Date / time functions 767 Amazon Timestream Developer Guide Some interval literals act as shorthand for parse_duration. For example, parse_duration('1day'), 1day, parse_duration('1d'), and 1d each return 1 00:00:00.000000000 where the type is interval day to second. Space is allowed in the format provided to parse_duration. For example parse_duration('1day') also returns 00:00:00.000000000. But 1 day is not an interval literal. The units related to interval day to second are ns, nanosecond, us, microsecond, ms, millisecond, s, second, m, minute, h, hour, d, and day. There is also interval year to month. The units related to interval year to month are y, year, and month. For example, SELECT 1year returns 1-0. SELECT 12month also returns 1-0. SELECT 8month returns 0-8. Although the unit of quarter is also available for some functions such as date_trunc and date_add, quarter is not available as part of an interval literal. Formatting and parsing Timestream for LiveAnalytics supports the following formatting and parsing functions for date and time. Function Output data type Description date_format(timestamp, varchar(x)) varchar For more information about the format specifiers used by this
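The ago and bin functions described earlier in this section are commonly combined to filter recent data and group it into fixed windows. The following is a sketch that assumes the "sampleDB".DevOps table and the cpu_utilization measure used in the sample queries later in this guide:

SELECT bin(time, 1h) AS binned_timestamp,
    COUNT(*) AS data_points
FROM "sampleDB".DevOps
WHERE measure_name = 'cpu_utilization'
    AND time > ago(6h)
GROUP BY bin(time, 1h)
ORDER BY binned_timestamp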
hour, d, and day. There is also interval year to month. The units related to interval year to month are y, year, and month. For example, SELECT 1year returns 1-0. SELECT 12month also returns 1-0. SELECT 8month returns 0-8. Although the unit of quarter is also available for some functions such as date_trunc and date_add, quarter is not available as part of an interval literal. Formatting and parsing Timestream for LiveAnalytics supports the following formatting and parsing functions for date and time. Function Output data type Description date_format(timestamp, varchar(x)) varchar For more information about the format specifiers used by this function, see https:// trino.io/docs/current/fu nctions/datetime.html#mysq l-date-functions SELECT date_form at(TIMESTAMP '2019-10- 20 10:20:20', '%Y-%m- %d %H:%i:%s') Example result: 2019-10-2 0 10:20:20 Date / time functions 768 Amazon Timestream Developer Guide Function Output data type Description date_parse(varchar(x), varchar(y)) timestamp format_datetime(timestamp, varchar(x)) varchar For more information about the format specifiers used by this function, see https:// trino.io/docs/current/fu nctions/datetime.html#mysq l-date-functions SELECT date_pars e('2019-10-20 10:20:20', '%Y-%m-%d %H:%i:%s') Example result: 2019-10-2 0 10:20:20.000000000 For more information about the format string used by this function, see http://j oda-time.sourceforge.net/ apidocs/org/joda/time/fo rmat/DateTimeFormat.html SELECT format_da tetime(parse_datet ime('1968-01-13 12', 'yyyy-MM-dd HH'), 'yyyy-MM-dd HH') Example result: 1968-01-1 3 12 Date / time functions 769 Amazon Timestream Developer Guide Function Output data type Description parse_datetime(varchar(x), varchar(y)) timestamp For more information about the format string used by this function, see http://j oda-time.sourceforge.net/ apidocs/org/joda/time/fo rmat/DateTimeFormat.html SELECT parse_dat etime('2019-12-29 10:10 PST', 'uuuu-LL- dd HH:mm z') Example result: 2019-12-2 9 18:10:00.000000000 Extraction Timestream for LiveAnalytics supports the following extraction functions for date and time. The extract function is the basis for the remaining convenience functions. Function extract Output data type Description bigint Extracts a field from a timestamp, where field is one of [YEAR, QUARTER, MONTH, WEEK, DAY, DAY_OF_MO NTH, DAY_OF_WEEK, DOW, DAY_OF_YEAR, DOY, YEAR_OF_WEEK, YOW, HOUR, MINUTE, or SECOND]. 
SELECT extract(YEAR FROM '2019-10-12 23:10:34.000000000') Date / time functions 770 Amazon Timestream Developer Guide Function Output data type Description day(timestamp), day(date), day(interval day to second) bigint day_of_month(timestamp), day_of_month(date), day_of_month(interval day to bigint second) day_of_week(timestamp), day_of_week(date) bigint day_of_year(timestamp), day_of_year(date) bigint Example result: 2019 SELECT day('2019-10-12 23:10:34.000000000') Example result: 12 SELECT day_of_mo nth('2019-10-12 23:10:34.000000000') Example result: 12 SELECT day_of_we ek('2019-10-12 23:10:34.000000000') Example result: 6 SELECT day_of_ye ar('2019-10-12 23:10:34.000000000') Example result: 285 dow(timestamp), dow(date) bigint Alias for day_of_week doy(timestamp), doy(date) bigint Alias for day_of_year hour(timestamp), hour(time), hour(interval day to second) bigint SELECT hour('2019-10-12 23:10:34.000000000') Example result: 23 Date / time functions 771 Amazon Timestream Developer Guide Function Output data type Description millisecond(timestamp), millisecond(time), milliseco nd(interval day to second) bigint minute(timestamp), minute(ti me), minute(interval day to bigint second) month(timestamp), month(date), month(interval bigint year to month) nanosecond(timestamp), nanosecond(time), nanosecon bigint d(interval day to second) quarter(timestamp), quarter(d ate) bigint second(timestamp), second(ti me), second(interval day to second) bigint SELECT milliseco nd('2019-10-12 23:10:34.000000000') Example result: 0 SELECT minute('2 019-10-12 23:10:34. 000000000') Example result: 10 SELECT month('20 19-10-12 23:10:34. 000000000') Example result: 10 SELECT nanosecon d(current_timestamp) Example result: 162000000 SELECT quarter(' 2019-10-12 23:10:34. 000000000') Example result: 4 SELECT second('2 019-10-12 23:10:34. 000000000') Example result: 34 Date / time functions 772 Amazon Timestream Developer Guide Function Output data type Description week(timestamp), week(date) bigint week_of_year(timestamp), week_of_year(date) year(timestamp), year(date), year(interval year to month) bigint bigint year_of_week(timestamp), year_of_week(date) bigint SELECT week('2019-10-12 23:10:34.000000000') Example result: 41 Alias for week SELECT year('2019-10-12 23:10:34.000000000') Example result: 2019 SELECT year_of_w eek('2019-10-12 23:10:34.000000000') Example result: 2019 yow(timestamp), yow(date) bigint Alias for year_of_week Aggregate functions Timestream for LiveAnalytics supports the following aggregate functions. Function arbitrary(x) Output data type Description [same as input] Returns an arbitrary non-null value of x, if one exists. SELECT arbitrary(t.c) FROM (VALUES 1, 2, 3, 4) AS t(c) Example result: 1 Aggregate functions 773 Amazon Timestream Developer Guide Function Output data type Description array_agg(x) array<[same as input] avg(x) double bool_and(boolean) every(boo lean) boolean Returns an array created from the input x elements. SELECT array_agg(t.c) FROM (VALUES 1, 2, 3, 4) AS t(c) Example result: [ 1,2,3,4 ] Returns the average (arithmet ic mean) of all input values. SELECT avg(t.c) FROM (VALUES 1, 2, 3, 4) AS t(c) Example result: 2.5 Returns TRUE if every input value is TRUE, otherwise FALSE. 
SELECT bool_and(t.c) FROM (VALUES true, true, false, true) AS t(c) Example result: false Aggregate functions 774 Amazon Timestream Developer Guide Function Output data type Description bool_or(boolean) boolean count(*) count(x) bigint count_if(x) bigint Returns TRUE if any input value is TRUE, otherwise FALSE. SELECT bool_or(t.c) FROM (VALUES true, true, false, true) AS t(c) Example result: true count(*) returns the number of input rows. count(x) returns the number of non-null input values. SELECT count(t.c) FROM (VALUES true, true, false, true) AS t(c) Example result: 4 Returns the number of TRUE input values. SELECT count_if(t.c) FROM (VALUES true, true, false, true) AS t(c) Example result: 3 Aggregate functions 775 Amazon Timestream Developer Guide Function Output data type Description geometric_mean(x) double max_by(x, y) [same as x]
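Aggregates such as count_if are shown above with literal VALUES lists, but they are more commonly applied to measure values. A short sketch, again assuming the "sampleDB".DevOps table and cpu_utilization measure used later in this guide; the 90.0 threshold is arbitrary:

SELECT COUNT_IF(measure_value::double > 90.0) AS high_cpu_points,
    COUNT(*) AS total_points
FROM "sampleDB".DevOps
WHERE measure_name = 'cpu_utilization'
    AND time > ago(1h)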
data type Description bool_or(boolean) boolean count(*) count(x) bigint count_if(x) bigint Returns TRUE if any input value is TRUE, otherwise FALSE. SELECT bool_or(t.c) FROM (VALUES true, true, false, true) AS t(c) Example result: true count(*) returns the number of input rows. count(x) returns the number of non-null input values. SELECT count(t.c) FROM (VALUES true, true, false, true) AS t(c) Example result: 4 Returns the number of TRUE input values. SELECT count_if(t.c) FROM (VALUES true, true, false, true) AS t(c) Example result: 3 Aggregate functions 775 Amazon Timestream Developer Guide Function Output data type Description geometric_mean(x) double max_by(x, y) [same as x] max_by(x, y, n) array<[same as x]> Returns the geometric mean of all input values. SELECT geometric _mean(t.c) FROM (VALUES 1, 2, 3, 4) AS t(c) Example result: 2.2133638 39400643 Returns the value of x associated with the maximum value of y over all input values. SELECT max_by(t.c1, t.c2) FROM (VALUES (('a', 1)), (('b', 2)), (('c', 3)), (('d', 4))) AS t(c1, c2) Example result: d Returns n values of x associated with the n largest of all input values of y in descending order of y. SELECT max_by(t.c1, t.c2, 2) FROM (VALUES (('a', 1)), (('b', 2)), (('c', 3)), (('d', 4))) AS t(c1, c2) Example result: [ d,c ] Aggregate functions 776 Amazon Timestream Developer Guide Function Output data type Description min_by(x, y) [same as x] min_by(x, y, n) array<[same as x]> max(x) [same as input] Returns the value of x associated with the minimum value of y over all input values. SELECT min_by(t.c1, t.c2) FROM (VALUES (('a', 1)), (('b', 2)), (('c', 3)), (('d', 4))) AS t(c1, c2) Example result: a Returns n values of x associated with the n smallest of all input values of y in ascending order of y. SELECT min_by(t.c1, t.c2, 2) FROM (VALUES (('a', 1)), (('b', 2)), (('c', 3)), (('d', 4))) AS t(c1, c2) Example result: [ a,b ] Returns the maximum value of all input values. SELECT max(t.c) FROM (VALUES 1, 2, 3, 4) AS t(c) Example result: 4 Aggregate functions 777 Amazon Timestream Function max(x, n) Developer Guide Output data type Description array<[same as x]> Returns n largest values of all input values of x. min(x) [same as input] min(x, n) array<[same as x]> sum(x) [same as input] SELECT max(t.c, 2) FROM (VALUES 1, 2, 3, 4) AS t(c) Example result: [ 4,3 ] Returns the minimum value of all input values. SELECT min(t.c) FROM (VALUES 1, 2, 3, 4) AS t(c) Example result: 1 Returns n smallest values of all input values of x. SELECT min(t.c, 2) FROM (VALUES 1, 2, 3, 4) AS t(c) Example result: [ 1,2 ] Returns the sum of all input values. SELECT sum(t.c) FROM (VALUES 1, 2, 3, 4) AS t(c) Example result: 10 Aggregate functions 778 Amazon Timestream Developer Guide Function Output data type Description bitwise_and_agg(x) bigint bitwise_or_agg(x) bigint Returns the bitwise AND of all input values in 2s complemen t representation. SELECT bitwise_a nd_agg(t.c) FROM (VALUES 1, -3) AS t(c) Example result: 1 Returns the bitwise OR of all input values in 2s complemen t representation. SELECT bitwise_o r_agg(t.c) FROM (VALUES 1, -3) AS t(c) Example result: -3 Aggregate functions 779 Amazon Timestream Developer Guide Function Output data type Description approx_distinct(x) bigint Returns the approxima te number of distinct input values. This function provides an approximation of count(DISTINCT x). Zero is returned if all input values are null. 
This function should produce a standard error of 2.3%, which is the standard deviation of the (approxim ately normal) error distribut ion over all possible sets. It does not guarantee an upper bound on the error for any specific input set. SELECT approx_di stinct(t.c) FROM (VALUES 1, 2, 3, 4, 8) AS t(c) Example result: 5 Aggregate functions 780 Amazon Timestream Developer Guide Function Output data type Description approx_distinct(x, e) bigint Returns the approxima te number of distinct input values. This function provides an approximation of count(DISTINCT x). Zero is returned if all input values are null. This function should produce a standard error of no more than e, which is the standard deviation of the (approximately normal) error distribution over all possible sets. It does not guarantee an upper bound on the error for any specific input set. The current implementation of this function requires that e be in the range of [0.004062 5, 0.26000]. SELECT approx_di stinct(t.c, 0.2) FROM (VALUES 1, 2, 3, 4, 8) AS t(c) Example result: 5 Aggregate functions 781 Amazon Timestream Developer Guide Function Output data type Description approx_percentile(x, percentage) [same as x] approx_percentile(x, percentages) array<[same as x]> Returns the approximate percentile for all input values of x at the given percentage. The value of percentage must be between zero and one and must be constant for all input rows. SELECT approx_pe rcentile(t.c, 0.4) FROM (VALUES 1, 2, 3, 4) AS t(c)
set. The current implementation of this function requires that e be in the range of [0.004062 5, 0.26000]. SELECT approx_di stinct(t.c, 0.2) FROM (VALUES 1, 2, 3, 4, 8) AS t(c) Example result: 5 Aggregate functions 781 Amazon Timestream Developer Guide Function Output data type Description approx_percentile(x, percentage) [same as x] approx_percentile(x, percentages) array<[same as x]> Returns the approximate percentile for all input values of x at the given percentage. The value of percentage must be between zero and one and must be constant for all input rows. SELECT approx_pe rcentile(t.c, 0.4) FROM (VALUES 1, 2, 3, 4) AS t(c) Example result: 2 Returns the approximate percentile for all input values of x at each of the specified percentages. Each element of the percentages array must be between zero and one, and the array must be constant for all input rows. SELECT approx_pe rcentile(t.c, ARRAY[0.1, 0.8, 0.8]) FROM (VALUES 1, 2, 3, 4) AS t(c) Example result: [ 1,4,4 ] Aggregate functions 782 Amazon Timestream Developer Guide Function Output data type Description approx_percentile(x, w, percentage) [same as x] Returns the approximate weighed percentile for all input values of x using the per-item weight w at the percentage p. The weight must be an integer value of at least one. It is effectively a replication count for the value x in the percentile set. The value of p must be between zero and one and must be constant for all input rows. SELECT approx_pe rcentile(t.c, 1, 0.1) FROM (VALUES 1, 2, 3, 4) AS t(c) Example result: 1 Aggregate functions 783 Amazon Timestream Developer Guide Function Output data type Description approx_percentile(x, w, percentages) array<[same as x]> Returns the approximate weighed percentile for all input values of x using the per-item weight w at each of the given percentages specified in the array. The weight must be an integer value of at least one. It is effectively a replication count for the value x in the percentil e set. Each element of the array must be between zero and one, and the array must be constant for all input rows. SELECT approx_pe rcentile(t.c, 1, ARRAY[0.1, 0.8, 0.8]) FROM (VALUES 1, 2, 3, 4) AS t(c) Example result: [ 1,4,4 ] Aggregate functions 784 Amazon Timestream Developer Guide Function Output data type Description approx_percentile(x, w, percentage, accuracy) [same as x] corr(y, x) double Returns the approxima te weighed percentile for all input values of x using the per-item weight w at the percentage p, with a maximum rank error of accuracy. The weight must be an integer value of at least one. It is effectively a replicati on count for the value x in the percentile set. The value of p must be between zero and one and must be constant for all input rows. The accuracy must be a value greater than zero and less than one, and it must be constant for all input rows. SELECT approx_pe rcentile(t.c, 1, 0.1, 0.5) FROM (VALUES 1, 2, 3, 4) AS t(c) Example result: 1 Returns correlation coefficient of input values. SELECT corr(t.c1, t.c2) FROM (VALUES ((1, 1)), ((2, 2)), ((3, 3)), ((4, 4))) AS t(c1, c2) Example result: 1.0 Aggregate functions 785 Amazon Timestream Developer Guide Function Output data type Description covar_pop(y, x) double covar_samp(y, x) double regr_intercept(y, x) double Returns the population covariance of input values. SELECT covar_pop(t.c1, t.c2) FROM (VALUES ((1, 1)), ((2, 2)), ((3, 3)), ((4, 4))) AS t(c1, c2) Example result: 1.25 Returns the sample covarianc e of input values. 
SELECT covar_samp(t.c1, t.c2) FROM (VALUES ((1, 1)), ((2, 2)), ((3, 3)), ((4, 4))) AS t(c1, c2) Example result: 1.6666666 666666667 Returns linear regression intercept of input values. y is the dependent value. x is the independent value. SELECT regr_inte rcept(t.c1, t.c2) FROM (VALUES ((1, 1)), ((2, 2)), ((3, 3)), ((4, 4))) AS t(c1, c2) Example result: 0.0 Aggregate functions 786 Amazon Timestream Developer Guide Function Output data type Description regr_slope(y, x) double skewness(x) double stddev_pop(x) double Returns linear regression slope of input values. y is the dependent value. x is the independent value. SELECT regr_slope(t.c1, t.c2) FROM (VALUES ((1, 1)), ((2, 2)), ((3, 3)), ((4, 4))) AS t(c1, c2) Example result: 1.0 Returns the skewness of all input values. SELECT skewness(t.c1) FROM (VALUES 1, 2, 3, 4, 8) AS t(c1) Example result: 0.8978957 037987335 Returns the population standard deviation of all input values. SELECT stddev_pop(t.c1) FROM (VALUES 1, 2, 3, 4, 8) AS t(c1) Example result: 2.4166091 947189146 Aggregate functions 787 Amazon Timestream Developer Guide Function Output data type Description stddev_samp(x) stddev(x) double var_pop(x) double var_samp(x) variance(x) double Returns the sample standard deviation of all input values. SELECT stddev_sa mp(t.c1) FROM (VALUES 1, 2, 3, 4, 8) AS t(c1) Example result: 2.7018512 17221259 Returns the population variance of all input values. SELECT var_pop(t.c1) FROM (VALUES 1, 2, 3, 4,
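The regression aggregates described above can be applied to time series data by converting time to a numeric value with to_unixtime. A sketch assuming the "sampleDB".DevOps table, estimating each host's CPU trend (change per second) over the past hour:

SELECT hostname,
    REGR_SLOPE(measure_value::double, to_unixtime(time)) AS cpu_trend_per_second
FROM "sampleDB".DevOps
WHERE measure_name = 'cpu_utilization'
    AND time > ago(1h)
GROUP BY hostname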
SELECT skewness(t.c1) FROM (VALUES 1, 2, 3, 4, 8) AS t(c1) Example result: 0.8978957 037987335 Returns the population standard deviation of all input values. SELECT stddev_pop(t.c1) FROM (VALUES 1, 2, 3, 4, 8) AS t(c1) Example result: 2.4166091 947189146 Aggregate functions 787 Amazon Timestream Developer Guide Function Output data type Description stddev_samp(x) stddev(x) double var_pop(x) double var_samp(x) variance(x) double Returns the sample standard deviation of all input values. SELECT stddev_sa mp(t.c1) FROM (VALUES 1, 2, 3, 4, 8) AS t(c1) Example result: 2.7018512 17221259 Returns the population variance of all input values. SELECT var_pop(t.c1) FROM (VALUES 1, 2, 3, 4, 8) AS t(c1) Example result: 5.8400000 00000001 Returns the sample variance of all input values. SELECT var_samp(t.c1) FROM (VALUES 1, 2, 3, 4, 8) AS t(c1) Example result: 7.3000000 00000001 Window functions Window functions perform calculations across rows of the query result. They run after the HAVING clause but before the ORDER BY clause. Invoking a window function requires special syntax using the OVER clause to specify the window. A window has three components: Window functions 788 Amazon Timestream Developer Guide • The partition specification, which separates the input rows into different partitions. This is analogous to how the GROUP BY clause separates rows into different groups for aggregate functions. • The ordering specification, which determines the order in which input rows will be processed by the window function. • The window frame, which specifies a sliding window of rows to be processed by the function for a given row. If the frame is not specified, it defaults to RANGE UNBOUNDED PRECEDING, which is the same as RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. This frame contains all rows from the start of the partition up to the last peer of the current row. All Aggregate Functions can be used as window functions by adding the OVER clause. The aggregate function is computed for each row over the rows within the current row's window frame. In addition to aggregate functions, Timestream for LiveAnalytics supports the following ranking and value functions. Function cume_dist() bigint Output data type Description Returns the cumulative distribution of a value in a group of values. The result is the number of rows preceding or peer with the row in the window ordering of the window partition divided by the total number of rows in the window partition. Thus, any tie values in the ordering will evaluate to the same distribution value. Returns the rank of a value in a group of values. This is similar to rank(), except that tie values do not produce gaps in the sequence. dense_rank() bigint Window functions 789 Amazon Timestream Function ntile(n) bigint Output data type Description Developer Guide percent_rank() double rank() bigint Divides the rows for each window partition into n buckets ranging from 1 to at most n. Bucket values will differ by at most 1. If the number of rows in the partition does not divide evenly into the number of buckets, then the remainder values are distributed one per bucket, starting with the first bucket. Returns the percentage ranking of a value in group of values. The result is (r - 1) / (n - 1) where r is the rank() of the row and n is the total number of rows in the window partition. Returns the rank of a value in a group of values. The rank is one plus the number of rows preceding the row that are not peer with the row. 
Thus, tie values in the ordering will produce gaps in the sequence. The ranking is performed for each window partition. Window functions 790 Amazon Timestream Developer Guide Function Output data type Description row_number() bigint first_value(x) [same as input] last_value(x) [same as input] nth_value(x, offset) [same as input] Returns a unique, sequential number for each row, starting with one, according to the ordering of rows within the window partition. Returns the first value of the window. This function is scoped to the window frame. The function takes an expression or target as its parameter. Returns the last value of the window. This function is scoped to the window frame. The function takes an expression or target as its parameter. Returns the value at the specified offset from beginning the window. Offsets start at 1. The offset can be any scalar expression. If the offset is null or greater than the number of values in the window, null is returned. It is an error for the offset to be zero or negative. The function takes an expression or target as its first parameter . Window functions 791 Amazon Timestream Developer Guide Function Output data type Description lead(x[, offset[, default_v alue]]) [same as input] lag(x[, offset[, default_v alue]]) [same as input] Returns the value at offset rows after the current row in the window.
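The ranking and value functions take effect through the OVER clause described at the start of this section. The following sketch assumes the "sampleDB".DevOps table used in the sample queries later in this guide; it ranks each host's CPU readings within its partition and uses lag to compare every reading with the previous one:

SELECT hostname, time,
    measure_value::double AS cpu_utilization,
    RANK() OVER (PARTITION BY hostname ORDER BY measure_value::double DESC) AS cpu_rank,
    LAG(measure_value::double, 1) OVER (PARTITION BY hostname ORDER BY time) AS previous_cpu
FROM "sampleDB".DevOps
WHERE measure_name = 'cpu_utilization'
    AND time > ago(15m)
ORDER BY hostname, time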
at the specified offset from beginning the window. Offsets start at 1. The offset can be any scalar expression. If the offset is null or greater than the number of values in the window, null is returned. It is an error for the offset to be zero or negative. The function takes an expression or target as its first parameter . Window functions 791 Amazon Timestream Developer Guide Function Output data type Description lead(x[, offset[, default_v alue]]) [same as input] lag(x[, offset[, default_v alue]]) [same as input] Returns the value at offset rows after the current row in the window. Offsets start at 0, which is the current row. The offset can be any scalar expression. The default offset is 1. If the offset is null or larger than the window, the default_value is returned, or if it is not specified null is returned. The function takes an expression or target as its first parameter. Returns the value at offset rows before the current row in the window Offsets start at 0, which is the current row. The offset can be any scalar expression. The default offset is 1. If the offset is null or larger than the window, the default_value is returned, or if it is not specified null is returned. The function takes an expression or target as its first parameter. Sample queries This section includes example use cases of Timestream for LiveAnalytics's query language. Topics • Simple queries Sample queries 792 Amazon Timestream Developer Guide • Queries with time series functions • Queries with aggregate functions Simple queries The following gets the 10 most recently added data points for a table. SELECT * FROM <database_name>.<table_name> ORDER BY time DESC LIMIT 10 The following gets the 5 oldest data points for a specific measure. SELECT * FROM <database_name>.<table_name> WHERE measure_name = '<measure_name>' ORDER BY time ASC LIMIT 5 The following works with nanosecond granularity timestamps. SELECT now() AS time_now , now() - (INTERVAL '12' HOUR) AS twelve_hour_earlier -- Compatibility with ANSI SQL , now() - 12h AS also_twelve_hour_earlier -- Convenient time interval literals , ago(12h) AS twelve_hours_ago -- More convenience with time functionality , bin(now(), 10m) AS time_binned -- Convenient time binning support , ago(50ns) AS fifty_ns_ago -- Nanosecond support , now() + (1h + 50ns) AS hour_fifty_ns_future Measure values for multi-measure records are identified by column name. Measure values for single-measure records are identified by measure_value::<data_type>, where <data_type> is one of double, bigint, boolean, or varchar as described in Supported data types. For more information about how measure values are modeled, see Single table vs. multiple tables. The following retrieves values for a measure called speed from multi-measure records with a measure_name of IoTMulti-stats. SELECT speed FROM <database_name>.<table_name> where measure_name = 'IoTMulti-stats' The following retrieves double values from single-measure records with a measure_name of load. Sample queries 793 Amazon Timestream Developer Guide SELECT measure_value::double FROM <database_name>.<table_name> WHERE measure_name = 'load' Queries with time series functions Topics • Example dataset and queries Example dataset and queries You can use Timestream for LiveAnalytics to understand and improve the performance and availability of your services and applications. Below is an example table and sample queries run on that table. The table ec2_metrics stores telemetry data, such as CPU utilization and other metrics from EC2 instances. 
You can view the table below. Time region az Hostname measure_n ame us-east-1 us-east-1a frontend0 1 cpu_utili zation measure_v alue::dou measure_v alue::big ble 35.1 int null 2019-12-0 4 19:00:00. 000000000 2019-12-0 4 19:00:00. 000000000 2019-12-0 4 19:00:00. 000000000 2019-12-0 4 19:00:00. 000000000 us-east-1 us-east-1a frontend0 1 memory_ut ilization 55.3 null us-east-1 us-east-1a frontend0 1 network_b ytes_in null 1,500 us-east-1 us-east-1a frontend0 1 network_b ytes_out null 6,700 Sample queries 794 Amazon Timestream Developer Guide Time region az Hostname measure_n ame measure_v alue::dou measure_v alue::big ble 38.5 int null us-east-1 us-east-1b frontend0 2 cpu_utili zation 2019-12-0 4 19:00:00. 000000000 2019-12-0 4 19:00:00. 000000000 2019-12-0 4 19:00:00. 000000000 2019-12-0 4 19:00:00. 000000000 2019-12-0 4 19:00:00. 000000000 2019-12-0 4 19:00:00. 000000000 2019-12-0 4 19:00:00. 000000000 2019-12-0 4 19:00:00. 000000000 2019-12-0 4 19:00:05. 000000000 us-east-1 us-east-1b frontend0 2 memory_ut ilization 58.4 null us-east-1 us-east-1b frontend0 2 network_b ytes_in null 23,000 us-east-1 us-east-1b frontend0 2 network_b ytes_out null 12,000 us-east-1 us-east-1c frontend0 3 cpu_utili zation 45.0 null us-east-1 us-east-1c frontend0 3 memory_ut ilization 65.8 null us-east-1 us-east-1c frontend0 3 network_b ytes_in null 15,000 us-east-1 us-east-1c frontend0 3 network_b ytes_out null 836,000 us-east-1 us-east-1a frontend0 1 cpu_utili zation 55.2 null Sample queries 795 Amazon Timestream Developer Guide Time region az Hostname measure_n ame measure_v alue::dou measure_v alue::big ble 75.0 int null us-east-1 us-east-1a frontend0 1 memory_ut ilization 2019-12-0 4 19:00:05. 000000000 2019-12-0 4 19:00:05. 000000000 2019-12-0 4 19:00:05. 000000000 2019-12-0 4 19:00:08. 000000000 2019-12-0 4 19:00:08. 000000000 2019-12-0 4 19:00:08. 000000000 2019-12-0 4 19:00:08. 000000000 2019-12-0 4 19:00:20. 000000000 2019-12-0 4 19:00:20.
frontend0 3 cpu_utili zation 45.0 null us-east-1 us-east-1c frontend0 3 memory_ut ilization 65.8 null us-east-1 us-east-1c frontend0 3 network_b ytes_in null 15,000 us-east-1 us-east-1c frontend0 3 network_b ytes_out null 836,000 us-east-1 us-east-1a frontend0 1 cpu_utili zation 55.2 null Sample queries 795 Amazon Timestream Developer Guide Time region az Hostname measure_n ame measure_v alue::dou measure_v alue::big ble 75.0 int null us-east-1 us-east-1a frontend0 1 memory_ut ilization 2019-12-0 4 19:00:05. 000000000 2019-12-0 4 19:00:05. 000000000 2019-12-0 4 19:00:05. 000000000 2019-12-0 4 19:00:08. 000000000 2019-12-0 4 19:00:08. 000000000 2019-12-0 4 19:00:08. 000000000 2019-12-0 4 19:00:08. 000000000 2019-12-0 4 19:00:20. 000000000 2019-12-0 4 19:00:20. 000000000 us-east-1 us-east-1a frontend0 1 network_b ytes_in null 1,245 us-east-1 us-east-1a frontend0 1 network_b ytes_out null 68,432 us-east-1 us-east-1b frontend0 2 cpu_utili zation 65.6 null us-east-1 us-east-1b frontend0 2 memory_ut ilization 85.3 null us-east-1 us-east-1b frontend0 2 network_b ytes_in null 1,245 us-east-1 us-east-1b frontend0 2 network_b ytes_out null 68,432 us-east-1 us-east-1c frontend0 3 cpu_utili zation 12.1 null us-east-1 us-east-1c frontend0 3 memory_ut ilization 32.0 null Sample queries 796 Amazon Timestream Developer Guide Time region az Hostname measure_n ame measure_v alue::dou measure_v alue::big ble null int 1,400 us-east-1 us-east-1c frontend0 3 network_b ytes_in 2019-12-0 4 19:00:20. 000000000 2019-12-0 4 19:00:20. 000000000 2019-12-0 4 19:00:10. 000000000 2019-12-0 4 19:00:10. 000000000 2019-12-0 4 19:00:10. 000000000 2019-12-0 4 19:00:10. 000000000 2019-12-0 4 19:00:16. 000000000 2019-12-0 4 19:00:16. 000000000 2019-12-0 4 19:00:16. 000000000 us-east-1 us-east-1c frontend0 3 network_b ytes_out null 345 us-east-1 us-east-1a frontend0 1 cpu_utili zation 15.3 null us-east-1 us-east-1a frontend0 1 memory_ut ilization 35.4 null us-east-1 us-east-1a frontend0 1 network_b ytes_in null 23 us-east-1 us-east-1a frontend0 1 network_b ytes_out null 0 us-east-1 us-east-1b frontend0 2 cpu_utili zation 44.0 null us-east-1 us-east-1b frontend0 2 memory_ut ilization 64.2 null us-east-1 us-east-1b frontend0 2 network_b ytes_in null 1,450 Sample queries 797 Amazon Timestream Developer Guide Time region az Hostname measure_n ame measure_v alue::dou measure_v alue::big ble null int 200 us-east-1 us-east-1b frontend0 2 network_b ytes_out 2019-12-0 4 19:00:16. 000000000 2019-12-0 4 19:00:40. 000000000 2019-12-0 4 19:00:40. 000000000 2019-12-0 4 19:00:40. 000000000 2019-12-0 4 19:00:40. 
000000000 us-east-1 us-east-1c frontend0 3 cpu_utili zation 66.4 null us-east-1 us-east-1c frontend0 3 memory_ut ilization 86.3 null us-east-1 us-east-1c frontend0 3 network_b ytes_in null 300 us-east-1 us-east-1c frontend0 3 network_b ytes_out null 423 Find the average, p90, p95, and p99 CPU utilization for a specific EC2 host over the past 2 hours: SELECT region, az, hostname, BIN(time, 15s) AS binned_timestamp, ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization, ROUND(APPROX_PERCENTILE(measure_value::double, 0.9), 2) AS p90_cpu_utilization, ROUND(APPROX_PERCENTILE(measure_value::double, 0.95), 2) AS p95_cpu_utilization, ROUND(APPROX_PERCENTILE(measure_value::double, 0.99), 2) AS p99_cpu_utilization FROM "sampleDB".DevOps WHERE measure_name = 'cpu_utilization' AND hostname = 'host-Hovjv' AND time > ago(2h) GROUP BY region, hostname, az, BIN(time, 15s) ORDER BY binned_timestamp ASC Sample queries 798 Amazon Timestream Developer Guide Identify EC2 hosts with CPU utilization that is higher by 10 % or more compared to the average CPU utilization of the entire fleet for the past 2 hours: WITH avg_fleet_utilization AS ( SELECT COUNT(DISTINCT hostname) AS total_host_count, AVG(measure_value::double) AS fleet_avg_cpu_utilization FROM "sampleDB".DevOps WHERE measure_name = 'cpu_utilization' AND time > ago(2h) ), avg_per_host_cpu AS ( SELECT region, az, hostname, AVG(measure_value::double) AS avg_cpu_utilization FROM "sampleDB".DevOps WHERE measure_name = 'cpu_utilization' AND time > ago(2h) GROUP BY region, az, hostname ) SELECT region, az, hostname, avg_cpu_utilization, fleet_avg_cpu_utilization FROM avg_fleet_utilization, avg_per_host_cpu WHERE avg_cpu_utilization > 1.1 * fleet_avg_cpu_utilization ORDER BY avg_cpu_utilization DESC Find the average CPU utilization binned at 30 second intervals for a specific EC2 host over the past 2 hours: SELECT BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization FROM "sampleDB".DevOps WHERE measure_name = 'cpu_utilization' AND hostname = 'host-Hovjv' AND time > ago(2h) GROUP BY hostname, BIN(time, 30s) ORDER BY binned_timestamp ASC Find the average CPU utilization binned at 30 second intervals for a specific EC2 host over the past 2 hours, filling in the missing values using linear interpolation: WITH binned_timeseries AS ( SELECT hostname, BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization FROM "sampleDB".DevOps WHERE measure_name = 'cpu_utilization' AND hostname = 'host-Hovjv' Sample queries 799 Amazon Timestream Developer Guide AND time > ago(2h) GROUP BY hostname, BIN(time, 30s) ), interpolated_timeseries AS ( SELECT hostname, INTERPOLATE_LINEAR( CREATE_TIME_SERIES(binned_timestamp, avg_cpu_utilization), SEQUENCE(min(binned_timestamp), max(binned_timestamp), 15s)) AS interpolated_avg_cpu_utilization FROM binned_timeseries GROUP BY hostname ) SELECT time, ROUND(value, 2) AS interpolated_cpu FROM interpolated_timeseries CROSS JOIN UNNEST(interpolated_avg_cpu_utilization) Find the average CPU utilization binned at 30 second intervals for a specific EC2 host over the past 2 hours, filling in the missing values using interpolation based on the last observation carried forward: WITH binned_timeseries AS ( SELECT hostname, BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization FROM "sampleDB".DevOps WHERE measure_name = 'cpu_utilization' AND hostname = 'host-Hovjv' AND time > ago(2h) GROUP BY hostname, BIN(time, 30s) ), 
interpolated_timeseries AS ( SELECT hostname, INTERPOLATE_LOCF( CREATE_TIME_SERIES(binned_timestamp, avg_cpu_utilization), SEQUENCE(min(binned_timestamp), max(binned_timestamp), 15s)) AS interpolated_avg_cpu_utilization FROM binned_timeseries GROUP BY hostname ) SELECT time, ROUND(value, 2) AS interpolated_cpu FROM interpolated_timeseries CROSS JOIN UNNEST(interpolated_avg_cpu_utilization)
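These sample queries can also be run programmatically through the Query API. The following is a minimal sketch, not one of the guide's code samples, that assumes the AWS SDK for Python (boto3), the sample "sampleDB".DevOps table, and timestream:Select permissions; it runs the 30 second binning query shown above and pages through the result set.

import boto3

# Sketch only: assumes the sampleDB.DevOps sample table exists in us-east-1
# and that the caller is allowed to run SELECT queries against it.
query_client = boto3.client("timestream-query", region_name="us-east-1")

QUERY = """
SELECT BIN(time, 30s) AS binned_timestamp,
       ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization
FROM "sampleDB".DevOps
WHERE measure_name = 'cpu_utilization'
  AND hostname = 'host-Hovjv'
  AND time > ago(2h)
GROUP BY hostname, BIN(time, 30s)
ORDER BY binned_timestamp ASC
"""

# The Query API returns results in pages; the paginator feeds NextToken back in.
paginator = query_client.get_paginator("query")
for page in paginator.paginate(QueryString=QUERY):
    column_names = [column["Name"] for column in page["ColumnInfo"]]
    for row in page["Rows"]:
        values = [datum.get("ScalarValue") for datum in row["Data"]]
        print(dict(zip(column_names, values)))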
CPU utilization binned at 30 second intervals for a specific EC2 host over the past 2 hours, filling in the missing values using interpolation based on the last observation carried forward: WITH binned_timeseries AS ( SELECT hostname, BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization FROM "sampleDB".DevOps WHERE measure_name = 'cpu_utilization' AND hostname = 'host-Hovjv' AND time > ago(2h) GROUP BY hostname, BIN(time, 30s) ), interpolated_timeseries AS ( SELECT hostname, INTERPOLATE_LOCF( CREATE_TIME_SERIES(binned_timestamp, avg_cpu_utilization), SEQUENCE(min(binned_timestamp), max(binned_timestamp), 15s)) AS interpolated_avg_cpu_utilization FROM binned_timeseries GROUP BY hostname ) SELECT time, ROUND(value, 2) AS interpolated_cpu FROM interpolated_timeseries CROSS JOIN UNNEST(interpolated_avg_cpu_utilization) Queries with aggregate functions Below is an example IoT scenario example data set to illustrate queries with aggregate functions. Sample queries 800 Amazon Timestream Topics • Example data • Example queries Example data Developer Guide Timestream enables you to store and analyze IoT sensor data such as the location, fuel consumption, speed, and load capacity of one or more fleets of trucks to enable effective fleet management. Below is the schema and some of the data of a table iot_trucks that stores telemetry such as location, fuel consumption, speed, and load capacity of trucks. Time truck_id Make Model Fleet fuel_capa city load_capa city measure_n ame measure_v alue::dou ble measure_v alue::var char 2019-12-0 4 123456781GMC 19:00:00. 000000000 2019-12-0 4 123456781GMC 19:00:00. 000000000 2019-12-0 4 123456781GMC 19:00:00. 000000000 123456781GMC 2019-12-0 4 19:00:00. 000000000 Astro Alpha 100 500 fuel_read ing 65.2 null Astro Alpha 100 500 load 400.0 null Astro Alpha 100 500 speed 90.2 null Astro Alpha 100 500 location null 47.6062 N, 122.3321 W 2019-12-0 4 123456782KenworthW900 Alpha 150 1000 fuel_read ing 10.1 null 19:00:00. Sample queries 801 Amazon Timestream Developer Guide Time truck_id Make Model Fleet fuel_capa city load_capa city measure_n ame measure_v alue::dou measure_v alue::var ble char Alpha 150 1000 load 950.3 null Alpha 150 1000 speed 50.8 null 000000000 2019-12-0 4 123456782KenworthW900 19:00:00. 000000000 2019-12-0 4 123456782KenworthW900 19:00:00. 000000000 2019-12-0 4 123456782KenworthW900 Alpha 150 1000 location null 19:00:00. 000000000 Example queries 40.7128 degrees N, 74.0060 degrees W Get a list of all the sensor attributes and values being monitored for each truck in the fleet. SELECT truck_id, fleet, fuel_capacity, model, load_capacity, make, measure_name FROM "sampleDB".IoT GROUP BY truck_id, fleet, fuel_capacity, model, load_capacity, make, measure_name Get the most recent fuel reading of each truck in the fleet in the past 24 hours. 
Sample queries 802 Amazon Timestream Developer Guide WITH latest_recorded_time AS ( SELECT truck_id, max(time) as latest_time FROM "sampleDB".IoT WHERE measure_name = 'fuel-reading' AND time >= ago(24h) GROUP BY truck_id ) SELECT b.truck_id, b.fleet, b.make, b.model, b.time, b.measure_value::double as last_reported_fuel_reading FROM latest_recorded_time a INNER JOIN "sampleDB".IoT b ON a.truck_id = b.truck_id AND b.time = a.latest_time WHERE b.measure_name = 'fuel-reading' AND b.time > ago(24h) ORDER BY b.truck_id Identify trucks that have been running on low fuel(less than 10 %) in the past 48 hours: WITH low_fuel_trucks AS ( SELECT time, truck_id, fleet, make, model, (measure_value::double/ cast(fuel_capacity as double)*100) AS fuel_pct FROM "sampleDB".IoT WHERE time >= ago(48h) AND (measure_value::double/cast(fuel_capacity as double)*100) < 10 AND measure_name = 'fuel-reading' ), other_trucks AS ( SELECT time, truck_id, (measure_value::double/cast(fuel_capacity as double)*100) as remaining_fuel FROM "sampleDB".IoT WHERE time >= ago(48h) AND truck_id IN (SELECT truck_id FROM low_fuel_trucks) AND (measure_value::double/cast(fuel_capacity as double)*100) >= 10 AND measure_name = 'fuel-reading' ), trucks_that_refuelled AS ( Sample queries 803 Developer Guide Amazon Timestream SELECT a.truck_id FROM low_fuel_trucks a JOIN other_trucks b ON a.truck_id = b.truck_id AND b.time >= a.time ) SELECT DISTINCT truck_id, fleet, make, model, fuel_pct FROM low_fuel_trucks WHERE truck_id NOT IN ( SELECT truck_id FROM trucks_that_refuelled ) Find the average load and max speed for each truck for the past week: SELECT bin(time, 1d) as binned_time, fleet, truck_id, make, model, AVG( CASE WHEN measure_name = 'load' THEN measure_value::double ELSE NULL END ) AS avg_load_tons, MAX( CASE WHEN measure_name = 'speed' THEN measure_value::double ELSE NULL END ) AS max_speed_mph FROM "sampleDB".IoT WHERE time >= ago(7d) AND measure_name IN ('load', 'speed') GROUP BY fleet, truck_id, make, model, bin(time, 1d) ORDER BY truck_id Get the load efficiency for each truck for the past week: WITH average_load_per_truck AS ( SELECT truck_id, avg(measure_value::double) AS avg_load FROM "sampleDB".IoT WHERE measure_name = 'load' AND time >= ago(7d) GROUP BY truck_id, fleet, load_capacity, make, model ), truck_load_efficiency AS ( SELECT Sample queries 804 Developer Guide Amazon Timestream a.truck_id, fleet, load_capacity, make, model, avg_load, measure_value::double, time, (measure_value::double*100)/avg_load as load_efficiency -- , approx_percentile(avg_load_pct, DOUBLE '0.9') FROM "sampleDB".IoT a JOIN average_load_per_truck b ON a.truck_id = b.truck_id WHERE a.measure_name = 'load' ) SELECT truck_id, time, load_efficiency FROM truck_load_efficiency ORDER BY truck_id, time API reference This section contains the API Reference documentation for Amazon Timestream. Timestream has two APIs: Query and Write. • The Write API allows you to perform operations like table creation, resource tagging, and writing of records to Timestream. • The Query API allows you to perform query operations. Note Both APIs include
queries 804 Developer Guide Amazon Timestream a.truck_id, fleet, load_capacity, make, model, avg_load, measure_value::double, time, (measure_value::double*100)/avg_load as load_efficiency -- , approx_percentile(avg_load_pct, DOUBLE '0.9') FROM "sampleDB".IoT a JOIN average_load_per_truck b ON a.truck_id = b.truck_id WHERE a.measure_name = 'load' ) SELECT truck_id, time, load_efficiency FROM truck_load_efficiency ORDER BY truck_id, time API reference This section contains the API Reference documentation for Amazon Timestream. Timestream has two APIs: Query and Write. • The Write API allows you to perform operations like table creation, resource tagging, and writing of records to Timestream. • The Query API allows you to perform query operations. Note Both APIs include the DescribeEndpoints action. For both Query and Write, the DescribeEndpoints action are identical. You can read more about each API below, along with data types, common errors and parameters. API reference 805 Amazon Timestream Note Developer Guide For error codes common to all AWS services, see the AWS Support section. Topics • Actions • Data Types • Common Errors • Common Parameters Actions The following actions are supported by Amazon Timestream Write: • CreateBatchLoadTask • CreateDatabase • CreateTable • DeleteDatabase • DeleteTable • DescribeBatchLoadTask • DescribeDatabase • DescribeEndpoints • DescribeTable • ListBatchLoadTasks • ListDatabases • ListTables • ListTagsForResource • ResumeBatchLoadTask • TagResource • UntagResource • UpdateDatabase Actions 806 Amazon Timestream • UpdateTable • WriteRecords Developer Guide The following actions are supported by Amazon Timestream Query: • CancelQuery • CreateScheduledQuery • DeleteScheduledQuery • DescribeAccountSettings • DescribeEndpoints • DescribeScheduledQuery • ExecuteScheduledQuery • ListScheduledQueries • ListTagsForResource • PrepareQuery • Query • TagResource • UntagResource • UpdateAccountSettings • UpdateScheduledQuery Amazon Timestream Write The following actions are supported by Amazon Timestream Write: • CreateBatchLoadTask • CreateDatabase • CreateTable • DeleteDatabase • DeleteTable • DescribeBatchLoadTask • DescribeDatabase Actions 807 Developer Guide Amazon Timestream • DescribeEndpoints • DescribeTable • ListBatchLoadTasks • ListDatabases • ListTables • ListTagsForResource • ResumeBatchLoadTask • TagResource • UntagResource • UpdateDatabase • UpdateTable • WriteRecords Actions 808 Amazon Timestream Developer Guide CreateBatchLoadTask Service: Amazon Timestream Write Creates a new Timestream batch load task. A batch load task processes data from a CSV source in an S3 location and writes to a Timestream table. A mapping from source to target is defined in a batch load task. Errors and events are written to a report at an S3 location. For the report, if the AWS KMS key is not specified, the report will be encrypted with an S3 managed key when SSE_S3 is the option. Otherwise an error is thrown. For more information, see AWS managed keys. Service quotas apply. For details, see code sample. 
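The following is a minimal sketch of starting such a task with the AWS SDK for Python (boto3). It is not the guide's official code sample: the bucket names, key prefixes, and source column names are placeholders, and the complete set of request fields is shown in the Request Syntax that follows.

import boto3

# Sketch only: all S3 locations and column names below are placeholders.
write_client = boto3.client("timestream-write", region_name="us-east-1")

response = write_client.create_batch_load_task(
    TargetDatabaseName="sampleDB",
    TargetTableName="DevOps",
    DataSourceConfiguration={
        "DataSourceS3Configuration": {
            "BucketName": "example-source-bucket",
            "ObjectKeyPrefix": "batch-load/devops/",
        },
        "DataFormat": "CSV",
    },
    ReportConfiguration={
        # Error reports are written here; SSE_S3 avoids having to pass a KMS key.
        "ReportS3Configuration": {
            "BucketName": "example-report-bucket",
            "EncryptionOption": "SSE_S3",
        }
    },
    DataModelConfiguration={
        "DataModel": {
            "TimeColumn": "time",
            "TimeUnit": "MILLISECONDS",
            "DimensionMappings": [
                {"SourceColumn": "region", "DestinationColumn": "region"},
                {"SourceColumn": "az", "DestinationColumn": "az"},
                {"SourceColumn": "hostname", "DestinationColumn": "hostname"},
            ],
            "MultiMeasureMappings": {
                "TargetMultiMeasureName": "metrics",
                "MultiMeasureAttributeMappings": [
                    {"SourceColumn": "cpu_utilization", "MeasureValueType": "DOUBLE"}
                ],
            },
        }
    },
)
print("Started batch load task:", response["TaskId"])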
Request Syntax { "ClientToken": "string", "DataModelConfiguration": { "DataModel": { "DimensionMappings": [ { "DestinationColumn": "string", "SourceColumn": "string" } ], "MeasureNameColumn": "string", "MixedMeasureMappings": [ { "MeasureName": "string", "MeasureValueType": "string", "MultiMeasureAttributeMappings": [ { "MeasureValueType": "string", "SourceColumn": "string", "TargetMultiMeasureAttributeName": "string" } ], "SourceColumn": "string", "TargetMeasureName": "string" } ], "MultiMeasureMappings": { "MultiMeasureAttributeMappings": [ { "MeasureValueType": "string", "SourceColumn": "string", Actions 809 Amazon Timestream Developer Guide "TargetMultiMeasureAttributeName": "string" } ], "TargetMultiMeasureName": "string" }, "TimeColumn": "string", "TimeUnit": "string" }, "DataModelS3Configuration": { "BucketName": "string", "ObjectKey": "string" } }, "DataSourceConfiguration": { "CsvConfiguration": { "ColumnSeparator": "string", "EscapeChar": "string", "NullValue": "string", "QuoteChar": "string", "TrimWhiteSpace": boolean }, "DataFormat": "string", "DataSourceS3Configuration": { "BucketName": "string", "ObjectKeyPrefix": "string" } }, "RecordVersion": number, "ReportConfiguration": { "ReportS3Configuration": { "BucketName": "string", "EncryptionOption": "string", "KmsKeyId": "string", "ObjectKeyPrefix": "string" } }, "TargetDatabaseName": "string", "TargetTableName": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. Actions 810 Amazon Timestream Developer Guide The request accepts the following data in JSON format. ClientToken Type: String Length Constraints: Minimum length of 1. Maximum length of 64. Required: No DataModelConfiguration Type: DataModelConfiguration object Required: No DataSourceConfiguration Defines configuration details about the data source for a batch load task. Type: DataSourceConfiguration object Required: Yes RecordVersion Type: Long Required: No ReportConfiguration Report configuration for a batch load task. This contains details about where error reports are stored. Type: ReportConfiguration object Required: Yes TargetDatabaseName Target Timestream database for a batch load task. Actions 811 Developer Guide Amazon Timestream Type: String Pattern: [a-zA-Z0-9_.-]+ Required: Yes TargetTableName Target Timestream table for a batch load task. Type: String Pattern: [a-zA-Z0-9_.-]+ Required: Yes Response Syntax { "TaskId": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. TaskId The ID of the batch load task. Type: String Length Constraints: Minimum length of 3. Maximum length of 32. Pattern: [A-Z0-9]+ Errors For information about the errors that are common to all actions, see Common Errors. Actions 812 Amazon Timestream AccessDeniedException You are not authorized to perform this action. HTTP Status Code: 400 ConflictException Developer Guide Timestream was unable to process this request because it contains resource that already exists. HTTP Status Code: 400 InternalServerException Timestream was unable to fully process this request because of an internal server error.
following data is returned in JSON format by the service. TaskId The ID of the batch load task. Type: String Length Constraints: Minimum length of 3. Maximum length of 32. Pattern: [A-Z0-9]+ Errors For information about the errors that are common to all actions, see Common Errors. Actions 812 Amazon Timestream AccessDeniedException You are not authorized to perform this action. HTTP Status Code: 400 ConflictException Developer Guide Timestream was unable to process this request because it contains resource that already exists. HTTP Status Code: 400 InternalServerException Timestream was unable to fully process this request because of an internal server error. HTTP Status Code: 500 InvalidEndpointException The requested endpoint was not valid. HTTP Status Code: 400 ResourceNotFoundException The operation tried to access a nonexistent resource. The resource might not be specified correctly, or its status might not be ACTIVE. HTTP Status Code: 400 ServiceQuotaExceededException The instance quota of resource exceeded for this account. HTTP Status Code: 400 ThrottlingException Too many requests were made by a user and they exceeded the service quotas. The request was throttled. HTTP Status Code: 400 ValidationException An invalid or malformed request. Actions 813 Amazon Timestream HTTP Status Code: 400 See Also Developer Guide For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Actions 814 Amazon Timestream Developer Guide CreateDatabase Service: Amazon Timestream Write Creates a new Timestream database. If the AWS KMS key is not specified, the database will be encrypted with a Timestream managed AWS KMS key located in your account. For more information, see AWS managed keys. Service quotas apply. For details, see code sample. Request Syntax { "DatabaseName": "string", "KmsKeyId": "string", "Tags": [ { "Key": "string", "Value": "string" } ] } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. DatabaseName The name of the Timestream database. Type: String Length Constraints: Minimum length of 3. Maximum length of 256. Pattern: [a-zA-Z0-9_.-]+ Required: Yes KmsKeyId The AWS KMS key for the database. If the AWS KMS key is not specified, the database will be encrypted with a Timestream managed AWS KMS key located in your account. For more information, see AWS managed keys. Actions 815 Amazon Timestream Type: String Developer Guide Length Constraints: Minimum length of 1. Maximum length of 2048. Required: No Tags A list of key-value pairs to label the table. Type: Array of Tag objects Array Members: Minimum number of 0 items. Maximum number of 200 items. Required: No Response Syntax { "Database": { "Arn": "string", "CreationTime": number, "DatabaseName": "string", "KmsKeyId": "string", "LastUpdatedTime": number, "TableCount": number } } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. Database The newly created Timestream database. Type: Database object Errors For information about the errors that are common to all actions, see Common Errors. 
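As a quick illustration of the request and response shapes above, the following is a minimal sketch using the AWS SDK for Python (boto3). It is not the guide's code sample; the tag values and the commented-out KMS alias are placeholders.

import boto3

# Sketch only: creates a database named sampleDB with a single cost-allocation tag.
write_client = boto3.client("timestream-write", region_name="us-east-1")

database = write_client.create_database(
    DatabaseName="sampleDB",
    # KmsKeyId is optional; if omitted, Timestream uses its managed AWS KMS key.
    # KmsKeyId="alias/ExampleAlias",
    Tags=[{"Key": "Owner", "Value": "devops-team"}],
)["Database"]
print("Created database:", database["Arn"])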
Actions 816 Amazon Timestream AccessDeniedException You are not authorized to perform this action. HTTP Status Code: 400 ConflictException Developer Guide Timestream was unable to process this request because it contains resource that already exists. HTTP Status Code: 400 InternalServerException Timestream was unable to fully process this request because of an internal server error. HTTP Status Code: 500 InvalidEndpointException The requested endpoint was not valid. HTTP Status Code: 400 InvalidEndpointException The requested endpoint was not valid. HTTP Status Code: 400 ServiceQuotaExceededException The instance quota of resource exceeded for this account. HTTP Status Code: 400 ThrottlingException Too many requests were made by a user and they exceeded the service quotas. The request was throttled. HTTP Status Code: 400 ValidationException An invalid or malformed request. HTTP Status Code: 400 Actions 817 Amazon Timestream See Also Developer Guide For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Actions 818 Amazon Timestream Developer Guide CreateTable Service: Amazon Timestream Write Adds a new table to an existing database in your account. In an AWS account, table names must be at least unique within each Region if they are in the same database. You might have identical table
Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Actions 818 Amazon Timestream Developer Guide CreateTable Service: Amazon Timestream Write Adds a new table to an existing database in your account. In an AWS account, table names must be at least unique within each Region if they are in the same database. You might have identical table names in the same Region if the tables are in separate databases. While creating the table, you must specify the table name, database name, and the retention properties. Service quotas apply. See code sample for details. Request Syntax { "DatabaseName": "string", "MagneticStoreWriteProperties": { "EnableMagneticStoreWrites": boolean, "MagneticStoreRejectedDataLocation": { "S3Configuration": { "BucketName": "string", "EncryptionOption": "string", "KmsKeyId": "string", "ObjectKeyPrefix": "string" } } }, "RetentionProperties": { "MagneticStoreRetentionPeriodInDays": number, "MemoryStoreRetentionPeriodInHours": number }, "Schema": { "CompositePartitionKey": [ { "EnforcementInRecord": "string", "Name": "string", "Type": "string" } ] }, "TableName": "string", "Tags": [ { "Key": "string", "Value": "string" } Actions 819 Amazon Timestream ] } Request Parameters Developer Guide For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. DatabaseName The name of the Timestream database. Type: String Length Constraints: Minimum length of 3. Maximum length of 256. Pattern: [a-zA-Z0-9_.-]+ Required: Yes MagneticStoreWriteProperties Contains properties to set on the table when enabling magnetic store writes. Type: MagneticStoreWriteProperties object Required: No RetentionProperties The duration for which your time-series data must be stored in the memory store and the magnetic store. Type: RetentionProperties object Required: No Schema The schema of the table. Type: Schema object Required: No Actions 820 Amazon Timestream TableName The name of the Timestream table. Type: String Length Constraints: Minimum length of 3. Maximum length of 256. Developer Guide Pattern: [a-zA-Z0-9_.-]+ Required: Yes Tags A list of key-value pairs to label the table. Type: Array of Tag objects Array Members: Minimum number of 0 items. Maximum number of 200 items. Required: No Response Syntax { "Table": { "Arn": "string", "CreationTime": number, "DatabaseName": "string", "LastUpdatedTime": number, "MagneticStoreWriteProperties": { "EnableMagneticStoreWrites": boolean, "MagneticStoreRejectedDataLocation": { "S3Configuration": { "BucketName": "string", "EncryptionOption": "string", "KmsKeyId": "string", "ObjectKeyPrefix": "string" } } }, "RetentionProperties": { "MagneticStoreRetentionPeriodInDays": number, Actions 821 Amazon Timestream Developer Guide "MemoryStoreRetentionPeriodInHours": number }, "Schema": { "CompositePartitionKey": [ { "EnforcementInRecord": "string", "Name": "string", "Type": "string" } ] }, "TableName": "string", "TableStatus": "string" } } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. Table The newly created Timestream table. Type: Table object Errors For information about the errors that are common to all actions, see Common Errors. AccessDeniedException You are not authorized to perform this action. 
HTTP Status Code: 400 ConflictException Timestream was unable to process this request because it contains resource that already exists. HTTP Status Code: 400 Actions 822 Amazon Timestream InternalServerException Developer Guide Timestream was unable to fully process this request because of an internal server error. HTTP Status Code: 500 InvalidEndpointException The requested endpoint was not valid. HTTP Status Code: 400 InvalidEndpointException The requested endpoint was not valid. HTTP Status Code: 400 ResourceNotFoundException The operation tried to access a nonexistent resource. The resource might not be specified correctly, or its status might not be ACTIVE. HTTP Status Code: 400 ServiceQuotaExceededException The instance quota of resource exceeded for this account. HTTP Status Code: 400 ThrottlingException Too many requests were made by a user and they exceeded the service quotas. The request was throttled. HTTP Status Code: 400 ValidationException An invalid or malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: Actions 823 Developer Guide Amazon Timestream • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Actions 824 Amazon Timestream Developer Guide DeleteDatabase Service: Amazon Timestream Write Deletes a given Timestream database. This is an irreversible operation. After a database is deleted, the time-series data from its tables cannot be recovered. Note All tables in the database must be deleted first, or a ValidationException error will be thrown. Due to the nature of distributed retries, the operation can return either success or a ResourceNotFoundException. Clients should consider them equivalent. See code sample for details. Request Syntax { "DatabaseName": "string" } Request Parameters For information about the parameters that are common
• AWS SDK for Ruby V3 Actions 824 Amazon Timestream Developer Guide DeleteDatabase Service: Amazon Timestream Write Deletes a given Timestream database. This is an irreversible operation. After a database is deleted, the time-series data from its tables cannot be recovered. Note All tables in the database must be deleted first, or a ValidationException error will be thrown. Due to the nature of distributed retries, the operation can return either success or a ResourceNotFoundException. Clients should consider them equivalent. See code sample for details. Request Syntax { "DatabaseName": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. DatabaseName The name of the Timestream database to be deleted. Type: String Length Constraints: Minimum length of 3. Maximum length of 256. Required: Yes Response Elements If the action is successful, the service sends back an HTTP 200 response with an empty HTTP body. Actions 825 Amazon Timestream Errors Developer Guide For information about the errors that are common to all actions, see Common Errors. AccessDeniedException You are not authorized to perform this action. HTTP Status Code: 400 InternalServerException Timestream was unable to fully process this request because of an internal server error. HTTP Status Code: 500 InvalidEndpointException The requested endpoint was not valid. HTTP Status Code: 400 ResourceNotFoundException The operation tried to access a nonexistent resource. The resource might not be specified correctly, or its status might not be ACTIVE. HTTP Status Code: 400 ThrottlingException Too many requests were made by a user and they exceeded the service quotas. The request was throttled. HTTP Status Code: 400 ValidationException An invalid or malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: Actions 826 Developer Guide Amazon Timestream • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Actions 827 Amazon Timestream Developer Guide DeleteTable Service: Amazon Timestream Write Deletes a given Timestream table. This is an irreversible operation. After a Timestream database table is deleted, the time-series data stored in the table cannot be recovered. Note Due to the nature of distributed retries, the operation can return either success or a ResourceNotFoundException. Clients should consider them equivalent. See code sample for details. Request Syntax { "DatabaseName": "string", "TableName": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. DatabaseName The name of the database where the Timestream database is to be deleted. Type: String Length Constraints: Minimum length of 3. Maximum length of 256. Required: Yes TableName The name of the Timestream table to be deleted. Type: String Actions 828 Amazon Timestream Developer Guide Length Constraints: Minimum length of 3. Maximum length of 256. Required: Yes Response Elements If the action is successful, the service sends back an HTTP 200 response with an empty HTTP body. Errors For information about the errors that are common to all actions, see Common Errors. 
AccessDeniedException You are not authorized to perform this action. HTTP Status Code: 400 InternalServerException Timestream was unable to fully process this request because of an internal server error. HTTP Status Code: 500 InvalidEndpointException The requested endpoint was not valid. HTTP Status Code: 400 ResourceNotFoundException The operation tried to access a nonexistent resource. The resource might not be specified correctly, or its status might not be ACTIVE. HTTP Status Code: 400 ThrottlingException Too many requests were made by a user and they exceeded the service quotas. The request was throttled. HTTP Status Code: 400 ValidationException An invalid or malformed request. Actions 829 Amazon Timestream HTTP Status Code: 400 See Also Developer Guide For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Actions 830 Amazon Timestream Developer Guide DescribeBatchLoadTask Service: Amazon Timestream Write Returns information about the batch load task, including configurations, mappings, progress, and other details. Service quotas apply. See code sample for details. Request Syntax { "TaskId": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON
for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Actions 830 Amazon Timestream Developer Guide DescribeBatchLoadTask Service: Amazon Timestream Write Returns information about the batch load task, including configurations, mappings, progress, and other details. Service quotas apply. See code sample for details. Request Syntax { "TaskId": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. TaskId The ID of the batch load task. Type: String Length Constraints: Minimum length of 3. Maximum length of 32. Pattern: [A-Z0-9]+ Required: Yes Response Syntax { "BatchLoadTaskDescription": { "CreationTime": number, "DataModelConfiguration": { "DataModel": { "DimensionMappings": [ { "DestinationColumn": "string", "SourceColumn": "string" } ], Actions 831 Amazon Timestream Developer Guide "MeasureNameColumn": "string", "MixedMeasureMappings": [ { "MeasureName": "string", "MeasureValueType": "string", "MultiMeasureAttributeMappings": [ { "MeasureValueType": "string", "SourceColumn": "string", "TargetMultiMeasureAttributeName": "string" } ], "SourceColumn": "string", "TargetMeasureName": "string" } ], "MultiMeasureMappings": { "MultiMeasureAttributeMappings": [ { "MeasureValueType": "string", "SourceColumn": "string", "TargetMultiMeasureAttributeName": "string" } ], "TargetMultiMeasureName": "string" }, "TimeColumn": "string", "TimeUnit": "string" }, "DataModelS3Configuration": { "BucketName": "string", "ObjectKey": "string" } }, "DataSourceConfiguration": { "CsvConfiguration": { "ColumnSeparator": "string", "EscapeChar": "string", "NullValue": "string", "QuoteChar": "string", "TrimWhiteSpace": boolean }, "DataFormat": "string", "DataSourceS3Configuration": { Actions 832 Amazon Timestream Developer Guide "BucketName": "string", "ObjectKeyPrefix": "string" } }, "ErrorMessage": "string", "LastUpdatedTime": number, "ProgressReport": { "BytesMetered": number, "FileFailures": number, "ParseFailures": number, "RecordIngestionFailures": number, "RecordsIngested": number, "RecordsProcessed": number }, "RecordVersion": number, "ReportConfiguration": { "ReportS3Configuration": { "BucketName": "string", "EncryptionOption": "string", "KmsKeyId": "string", "ObjectKeyPrefix": "string" } }, "ResumableUntil": number, "TargetDatabaseName": "string", "TargetTableName": "string", "TaskId": "string", "TaskStatus": "string" } } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. BatchLoadTaskDescription Description of the batch load task. Type: BatchLoadTaskDescription object Actions 833 Amazon Timestream Errors Developer Guide For information about the errors that are common to all actions, see Common Errors. AccessDeniedException You are not authorized to perform this action. HTTP Status Code: 400 InternalServerException Timestream was unable to fully process this request because of an internal server error. HTTP Status Code: 500 InvalidEndpointException The requested endpoint was not valid. HTTP Status Code: 400 ResourceNotFoundException The operation tried to access a nonexistent resource. The resource might not be specified correctly, or its status might not be ACTIVE. HTTP Status Code: 400 ThrottlingException Too many requests were made by a user and they exceeded the service quotas. 
The request was throttled. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ Actions 834 Amazon Timestream • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Developer Guide Actions 835 Amazon Timestream Developer Guide DescribeDatabase Service: Amazon Timestream Write Returns information about the database, including the database name, time that the database was created, and the total number of tables found within the database. Service quotas apply. See code sample for details. Request Syntax { "DatabaseName": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. DatabaseName The name of the Timestream database. Type: String Length Constraints: Minimum length of 3. Maximum length of 256. Required: Yes Response Syntax { "Database": { "Arn": "string", "CreationTime": number, "DatabaseName": "string", "KmsKeyId": "string", "LastUpdatedTime": number, "TableCount": number } } Actions 836 Amazon Timestream Response Elements Developer Guide If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. Database The name of the Timestream table. Type: Database object Errors For information about the errors that are common to all actions, see Common Errors. AccessDeniedException You are not authorized to perform this action. HTTP Status Code: 400 InternalServerException Timestream was unable to fully process this request because of an internal server error. HTTP Status Code: 500 InvalidEndpointException The requested endpoint was not valid. HTTP Status Code: 400 ResourceNotFoundException The operation tried to access a nonexistent resource. The resource might not be specified correctly, or its status might not be ACTIVE. HTTP Status Code: 400 ThrottlingException Too many requests were made by a user and they exceeded the service quotas. The request was throttled. Actions 837 Amazon Timestream HTTP Status Code: 400 ValidationException An invalid or malformed request. HTTP Status Code: 400 See Also Developer Guide For more information
Timestream was unable to fully process this request because of an internal server error. HTTP Status Code: 500 InvalidEndpointException The requested endpoint was not valid. HTTP Status Code: 400 ResourceNotFoundException The operation tried to access a nonexistent resource. The resource might not be specified correctly, or its status might not be ACTIVE. HTTP Status Code: 400 ThrottlingException Too many requests were made by a user and they exceeded the service quotas. The request was throttled. Actions 837 Amazon Timestream HTTP Status Code: 400 ValidationException An invalid or malformed request. HTTP Status Code: 400 See Also Developer Guide For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Actions 838 Amazon Timestream Developer Guide DescribeEndpoints Service: Amazon Timestream Write Returns a list of available endpoints to make Timestream API calls against. This API operation is available through both the Write and Query APIs. Because the Timestream SDKs are designed to transparently work with the service’s architecture, including the management and mapping of the service endpoints, we don't recommend that you use this API operation unless: • You are using VPC endpoints (AWS PrivateLink) with Timestream • Your application uses a programming language that does not yet have SDK support • You require better control over the client-side implementation For detailed information on how and when to use and implement DescribeEndpoints, see The Endpoint Discovery Pattern. Response Syntax { "Endpoints": [ { "Address": "string", "CachePeriodInMinutes": number } ] } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. Endpoints An Endpoints object is returned when a DescribeEndpoints request is made. Type: Array of Endpoint objects Actions 839 Amazon Timestream Errors Developer Guide For information about the errors that are common to all actions, see Common Errors. InternalServerException Timestream was unable to fully process this request because of an internal server error. HTTP Status Code: 500 ThrottlingException Too many requests were made by a user and they exceeded the service quotas. The request was throttled. HTTP Status Code: 400 ValidationException An invalid or malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Actions 840 Amazon Timestream Developer Guide DescribeTable Service: Amazon Timestream Write Returns information about the table, including the table name, database name, retention duration of the memory store and the magnetic store. Service quotas apply. See code sample for details. Request Syntax { "DatabaseName": "string", "TableName": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. DatabaseName The name of the Timestream database. 
Type: String Length Constraints: Minimum length of 3. Maximum length of 256. Required: Yes TableName The name of the Timestream table. Type: String Length Constraints: Minimum length of 3. Maximum length of 256. Required: Yes Response Syntax { "Table": { Actions 841 Amazon Timestream Developer Guide "Arn": "string", "CreationTime": number, "DatabaseName": "string", "LastUpdatedTime": number, "MagneticStoreWriteProperties": { "EnableMagneticStoreWrites": boolean, "MagneticStoreRejectedDataLocation": { "S3Configuration": { "BucketName": "string", "EncryptionOption": "string", "KmsKeyId": "string", "ObjectKeyPrefix": "string" } } }, "RetentionProperties": { "MagneticStoreRetentionPeriodInDays": number, "MemoryStoreRetentionPeriodInHours": number }, "Schema": { "CompositePartitionKey": [ { "EnforcementInRecord": "string", "Name": "string", "Type": "string" } ] }, "TableName": "string", "TableStatus": "string" } } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. Table The Timestream table. Type: Table object Actions 842 Amazon Timestream Errors Developer Guide For information about the errors that are common to all actions, see Common Errors. AccessDeniedException You are not authorized to perform this action. HTTP Status Code: 400 InternalServerException Timestream was unable to fully process this request because of an internal server error. HTTP Status Code: 500 InvalidEndpointException The requested endpoint was not valid. HTTP Status Code: 400 ResourceNotFoundException The operation tried to access a nonexistent resource. The resource might not be specified correctly, or its status might not be ACTIVE. HTTP Status Code: 400 ThrottlingException Too many requests were made by
Table object Actions 842 Amazon Timestream Errors Developer Guide For information about the errors that are common to all actions, see Common Errors. AccessDeniedException You are not authorized to perform this action. HTTP Status Code: 400 InternalServerException Timestream was unable to fully process this request because of an internal server error. HTTP Status Code: 500 InvalidEndpointException The requested endpoint was not valid. HTTP Status Code: 400 ResourceNotFoundException The operation tried to access a nonexistent resource. The resource might not be specified correctly, or its status might not be ACTIVE. HTTP Status Code: 400 ThrottlingException Too many requests were made by a user and they exceeded the service quotas. The request was throttled. HTTP Status Code: 400 ValidationException An invalid or malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: Actions 843 Developer Guide Amazon Timestream • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Actions 844 Amazon Timestream Developer Guide ListBatchLoadTasks Service: Amazon Timestream Write Provides a list of batch load tasks, along with the name, status, when the task is resumable until, and other details. See code sample for details. Request Syntax { "MaxResults": number, "NextToken": "string", "TaskStatus": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. MaxResults The total number of items to return in the output. If the total number of items available is more than the value specified, a NextToken is provided in the output. To resume pagination, provide the NextToken value as argument of a subsequent API invocation. Type: Integer Valid Range: Minimum value of 1. Maximum value of 100. Required: No NextToken A token to specify where to start paginating. This is the NextToken from a previously truncated response. Type: String Required: No TaskStatus Status of the batch load task. Actions 845 Amazon Timestream Type: String Developer Guide Valid Values: CREATED | IN_PROGRESS | FAILED | SUCCEEDED | PROGRESS_STOPPED | PENDING_RESUME Required: No Response Syntax { "BatchLoadTasks": [ { "CreationTime": number, "DatabaseName": "string", "LastUpdatedTime": number, "ResumableUntil": number, "TableName": "string", "TaskId": "string", "TaskStatus": "string" } ], "NextToken": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. BatchLoadTasks A list of batch load task details. Type: Array of BatchLoadTask objects NextToken A token to specify where to start paginating. Provide the next ListBatchLoadTasksRequest. Type: String Actions 846 Amazon Timestream Errors Developer Guide For information about the errors that are common to all actions, see Common Errors. AccessDeniedException You are not authorized to perform this action. HTTP Status Code: 400 InternalServerException Timestream was unable to fully process this request because of an internal server error. HTTP Status Code: 500 InvalidEndpointException The requested endpoint was not valid. 
HTTP Status Code: 400 ThrottlingException Too many requests were made by a user and they exceeded the service quotas. The request was throttled. HTTP Status Code: 400 ValidationException An invalid or malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 Actions 847 Amazon Timestream • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Developer Guide Actions 848 Amazon Timestream Developer Guide ListDatabases Service: Amazon Timestream Write Returns a list of your Timestream databases. Service quotas apply. See code sample for details. Request Syntax { "MaxResults": number, "NextToken": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. MaxResults The total number of items to return in the output. If the total number of items available is more than the value specified, a NextToken is provided in the output. To resume pagination, provide the NextToken value as argument of a subsequent API invocation. Type: Integer Valid Range: Minimum value of 1. Maximum value of 20. Required: No NextToken The pagination token. To resume pagination, provide the NextToken value as argument of a subsequent API invocation. Type: String Required: No Response Syntax { Actions 849 Developer Guide Amazon Timestream "Databases": [ { "Arn": "string", "CreationTime":
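The following is a minimal sketch, assuming the AWS SDK for Python (boto3), of walking all pages of ListDatabases by passing each response's NextToken back into the next invocation, as described above.

import boto3

# Sketch only: lists every database in the account/Region, 20 per page.
write_client = boto3.client("timestream-write", region_name="us-east-1")

kwargs = {"MaxResults": 20}
while True:
    page = write_client.list_databases(**kwargs)
    for database in page["Databases"]:
        print(database["DatabaseName"], "tables:", database.get("TableCount"))
    next_token = page.get("NextToken")
    if next_token is None:
        break
    kwargs["NextToken"] = next_token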
data in JSON format. MaxResults The total number of items to return in the output. If the total number of items available is more than the value specified, a NextToken is provided in the output. To resume pagination, provide the NextToken value as argument of a subsequent API invocation. Type: Integer Valid Range: Minimum value of 1. Maximum value of 20. Required: No NextToken The pagination token. To resume pagination, provide the NextToken value as argument of a subsequent API invocation. Type: String Required: No Response Syntax { Actions 849 Developer Guide Amazon Timestream "Databases": [ { "Arn": "string", "CreationTime": number, "DatabaseName": "string", "KmsKeyId": "string", "LastUpdatedTime": number, "TableCount": number } ], "NextToken": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. Databases A list of database names. Type: Array of Database objects NextToken The pagination token. This parameter is returned when the response is truncated. Type: String Errors For information about the errors that are common to all actions, see Common Errors. AccessDeniedException You are not authorized to perform this action. HTTP Status Code: 400 InternalServerException Timestream was unable to fully process this request because of an internal server error. Actions 850 Amazon Timestream HTTP Status Code: 500 InvalidEndpointException The requested endpoint was not valid. HTTP Status Code: 400 ThrottlingException Developer Guide Too many requests were made by a user and they exceeded the service quotas. The request was throttled. HTTP Status Code: 400 ValidationException An invalid or malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Actions 851 Amazon Timestream Developer Guide ListTables Service: Amazon Timestream Write Provides a list of tables, along with the name, status, and retention properties of each table. See code sample for details. Request Syntax { "DatabaseName": "string", "MaxResults": number, "NextToken": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. DatabaseName The name of the Timestream database. Type: String Length Constraints: Minimum length of 3. Maximum length of 256. Required: No MaxResults The total number of items to return in the output. If the total number of items available is more than the value specified, a NextToken is provided in the output. To resume pagination, provide the NextToken value as argument of a subsequent API invocation. Type: Integer Valid Range: Minimum value of 1. Maximum value of 20. Required: No Actions 852 Amazon Timestream NextToken Developer Guide The pagination token. To resume pagination, provide the NextToken value as argument of a subsequent API invocation. 
Type: String Required: No Response Syntax { "NextToken": "string", "Tables": [ { "Arn": "string", "CreationTime": number, "DatabaseName": "string", "LastUpdatedTime": number, "MagneticStoreWriteProperties": { "EnableMagneticStoreWrites": boolean, "MagneticStoreRejectedDataLocation": { "S3Configuration": { "BucketName": "string", "EncryptionOption": "string", "KmsKeyId": "string", "ObjectKeyPrefix": "string" } } }, "RetentionProperties": { "MagneticStoreRetentionPeriodInDays": number, "MemoryStoreRetentionPeriodInHours": number }, "Schema": { "CompositePartitionKey": [ { "EnforcementInRecord": "string", "Name": "string", "Type": "string" } ] }, Actions 853 Amazon Timestream Developer Guide "TableName": "string", "TableStatus": "string" } ] } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. NextToken A token to specify where to start paginating. This is the NextToken from a previously truncated response. Type: String Tables A list of tables. Type: Array of Table objects Errors For information about the errors that are common to all actions, see Common Errors. AccessDeniedException You are not authorized to perform this action. HTTP Status Code: 400 InternalServerException Timestream was unable to fully process this request because of an internal server error. HTTP Status Code: 500 InvalidEndpointException The requested endpoint was not valid. Actions 854 Amazon Timestream HTTP Status Code: 400 ResourceNotFoundException Developer Guide The operation tried to access a nonexistent resource. The resource might not be specified correctly, or its status might not be ACTIVE. HTTP Status Code: 400 ThrottlingException Too many requests were made by a user and they exceeded the service quotas. The request was throttled. HTTP Status Code: 400 ValidationException An invalid or malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go
Developer Guide The operation tried to access a nonexistent resource. The resource might not be specified correctly, or its status might not be ACTIVE. HTTP Status Code: 400 ThrottlingException Too many requests were made by a user and they exceeded the service quotas. The request was throttled. HTTP Status Code: 400 ValidationException An invalid or malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Actions 855 Amazon Timestream Developer Guide ListTagsForResource Service: Amazon Timestream Write Lists all tags on a Timestream resource. Request Syntax { "ResourceARN": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. ResourceARN The Timestream resource with tags to be listed. This value is an Amazon Resource Name (ARN). Type: String Length Constraints: Minimum length of 1. Maximum length of 1011. Required: Yes Response Syntax { "Tags": [ { "Key": "string", "Value": "string" } ] } Response Elements If the action is successful, the service sends back an HTTP 200 response. Actions 856 Amazon Timestream Developer Guide The following data is returned in JSON format by the service. Tags The tags currently associated with the Timestream resource. Type: Array of Tag objects Array Members: Minimum number of 0 items. Maximum number of 200 items. Errors For information about the errors that are common to all actions, see Common Errors. InvalidEndpointException The requested endpoint was not valid. HTTP Status Code: 400 ResourceNotFoundException The operation tried to access a nonexistent resource. The resource might not be specified correctly, or its status might not be ACTIVE. HTTP Status Code: 400 ThrottlingException Too many requests were made by a user and they exceeded the service quotas. The request was throttled. HTTP Status Code: 400 ValidationException An invalid or malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: Actions 857 Developer Guide Amazon Timestream • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Actions 858 Amazon Timestream Developer Guide ResumeBatchLoadTask Service: Amazon Timestream Write Request Syntax { "TaskId": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. TaskId The ID of the batch load task to resume. Type: String Length Constraints: Minimum length of 3. Maximum length of 32. Pattern: [A-Z0-9]+ Required: Yes Response Elements If the action is successful, the service sends back an HTTP 200 response with an empty HTTP body. Errors For information about the errors that are common to all actions, see Common Errors. AccessDeniedException You are not authorized to perform this action. 
HTTP Status Code: 400 Actions 859 Amazon Timestream InternalServerException Developer Guide Timestream was unable to fully process this request because of an internal server error. HTTP Status Code: 500 InvalidEndpointException The requested endpoint was not valid. HTTP Status Code: 400 ResourceNotFoundException The operation tried to access a nonexistent resource. The resource might not be specified correctly, or its status might not be ACTIVE. HTTP Status Code: 400 ThrottlingException Too many requests were made by a user and they exceeded the service quotas. The request was throttled. HTTP Status Code: 400 ValidationException An invalid or malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 Actions 860 Amazon Timestream • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Developer Guide Actions 861 Amazon Timestream Developer Guide TagResource Service: Amazon Timestream Write Associates a set of tags with a Timestream resource. You can then activate these user-defined tags so that they appear on the Billing and Cost Management console for cost allocation tracking. Request Syntax { "ResourceARN": "string", "Tags": [ { "Key": "string", "Value": "string" } ]
SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 Actions 860 Amazon Timestream • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Developer Guide Actions 861 Amazon Timestream Developer Guide TagResource Service: Amazon Timestream Write Associates a set of tags with a Timestream resource. You can then activate these user-defined tags so that they appear on the Billing and Cost Management console for cost allocation tracking. Request Syntax { "ResourceARN": "string", "Tags": [ { "Key": "string", "Value": "string" } ] } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. ResourceARN Identifies the Timestream resource to which tags should be added. This value is an Amazon Resource Name (ARN). Type: String Length Constraints: Minimum length of 1. Maximum length of 1011. Required: Yes Tags The tags to be assigned to the Timestream resource. Type: Array of Tag objects Array Members: Minimum number of 0 items. Maximum number of 200 items. Required: Yes Actions 862 Amazon Timestream Response Elements Developer Guide If the action is successful, the service sends back an HTTP 200 response with an empty HTTP body. Errors For information about the errors that are common to all actions, see Common Errors. InvalidEndpointException The requested endpoint was not valid. HTTP Status Code: 400 ResourceNotFoundException The operation tried to access a nonexistent resource. The resource might not be specified correctly, or its status might not be ACTIVE. HTTP Status Code: 400 ServiceQuotaExceededException The instance quota of resource exceeded for this account. HTTP Status Code: 400 ThrottlingException Too many requests were made by a user and they exceeded the service quotas. The request was throttled. HTTP Status Code: 400 ValidationException An invalid or malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: Actions 863 Developer Guide Amazon Timestream • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Actions 864 Amazon Timestream Developer Guide UntagResource Service: Amazon Timestream Write Removes the association of tags from a Timestream resource. Request Syntax { "ResourceARN": "string", "TagKeys": [ "string" ] } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. ResourceARN The Timestream resource that the tags will be removed from. This value is an Amazon Resource Name (ARN). Type: String Length Constraints: Minimum length of 1. Maximum length of 1011. Required: Yes TagKeys A list of tags keys. Existing tags of the resource whose keys are members of this list will be removed from the Timestream resource. Type: Array of strings Array Members: Minimum number of 0 items. Maximum number of 200 items. Length Constraints: Minimum length of 1. Maximum length of 128. Required: Yes Actions 865 Amazon Timestream Response Elements Developer Guide If the action is successful, the service sends back an HTTP 200 response with an empty HTTP body. Errors For information about the errors that are common to all actions, see Common Errors. 
InvalidEndpointException The requested endpoint was not valid. HTTP Status Code: 400 ResourceNotFoundException The operation tried to access a nonexistent resource. The resource might not be specified correctly, or its status might not be ACTIVE. HTTP Status Code: 400 ServiceQuotaExceededException The instance quota of resource exceeded for this account. HTTP Status Code: 400 ThrottlingException Too many requests were made by a user and they exceeded the service quotas. The request was throttled. HTTP Status Code: 400 ValidationException An invalid or malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for
C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Actions 867 Amazon Timestream Developer Guide UpdateDatabase Service: Amazon Timestream Write Modifies the AWS KMS key for an existing database. While updating the database, you must specify the database name and the identifier of the new AWS KMS key to be used (KmsKeyId). If there are any concurrent UpdateDatabase requests, first writer wins. See code sample for details. Request Syntax { "DatabaseName": "string", "KmsKeyId": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. DatabaseName The name of the database. Type: String Length Constraints: Minimum length of 3. Maximum length of 256. Required: Yes KmsKeyId The identifier of the new AWS KMS key (KmsKeyId) to be used to encrypt the data stored in the database. If the KmsKeyId currently registered with the database is the same as the KmsKeyId in the request, there will not be any update. You can specify the KmsKeyId using any of the following: • Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab • Key ARN: arn:aws:kms:us- east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab • Alias name: alias/ExampleAlias Actions 868 Amazon Timestream Developer Guide • Alias ARN: arn:aws:kms:us-east-1:111122223333:alias/ExampleAlias Type: String Length Constraints: Minimum length of 1. Maximum length of 2048. Required: Yes Response Syntax { "Database": { "Arn": "string", "CreationTime": number, "DatabaseName": "string", "KmsKeyId": "string", "LastUpdatedTime": number, "TableCount": number } } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. Database A top-level container for a table. Databases and tables are the fundamental management concepts in Amazon Timestream. All tables in a database are encrypted with the same AWS KMS key. Type: Database object Errors For information about the errors that are common to all actions, see Common Errors. AccessDeniedException You are not authorized to perform this action. Actions 869 Amazon Timestream HTTP Status Code: 400 InternalServerException Developer Guide Timestream was unable to fully process this request because of an internal server error. HTTP Status Code: 500 InvalidEndpointException The requested endpoint was not valid. HTTP Status Code: 400 ResourceNotFoundException The operation tried to access a nonexistent resource. The resource might not be specified correctly, or its status might not be ACTIVE. HTTP Status Code: 400 ServiceQuotaExceededException The instance quota of resource exceeded for this account. HTTP Status Code: 400 ThrottlingException Too many requests were made by a user and they exceeded the service quotas. The request was throttled. HTTP Status Code: 400 ValidationException An invalid or malformed request. 
HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 UpdateTable Service: Amazon Timestream Write Modifies the retention duration of the memory store and magnetic store for your Timestream table. Note that the change in retention duration takes effect immediately. For example, if the retention period of the memory store was initially set to 2 hours and then changed to 24 hours, the memory store will be capable of holding 24 hours of data, but will be populated with 24 hours of data 22 hours after this change was made. Timestream does not retrieve data from the magnetic store to populate the memory store. See code sample for details. Request Syntax { "DatabaseName": "string", "MagneticStoreWriteProperties": { "EnableMagneticStoreWrites": boolean, "MagneticStoreRejectedDataLocation": { "S3Configuration": { "BucketName": "string", "EncryptionOption": "string", "KmsKeyId": "string", "ObjectKeyPrefix": "string" } } }, "RetentionProperties": { "MagneticStoreRetentionPeriodInDays": number, "MemoryStoreRetentionPeriodInHours": number }, "Schema": { "CompositePartitionKey": [ { "EnforcementInRecord": "string", "Name": "string", "Type": "string" } ] }, "TableName": "string" } Request Parameters Developer
Guide For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. DatabaseName The name of the Timestream database. Type: String Length Constraints: Minimum length of 3. Maximum length of 256. Required: Yes MagneticStoreWriteProperties Contains properties to set on the table when enabling magnetic store writes. Type: MagneticStoreWriteProperties object Required: No RetentionProperties The retention duration of the memory store and the magnetic store. Type: RetentionProperties object Required: No Schema The schema of the table. Type: Schema object Required: No TableName The name of the Timestream table. Type: String Length Constraints: Minimum length of 3. Maximum length of 256. Actions 873 Developer Guide Amazon Timestream Required: Yes Response Syntax { "Table": { "Arn": "string", "CreationTime": number, "DatabaseName": "string", "LastUpdatedTime": number, "MagneticStoreWriteProperties": { "EnableMagneticStoreWrites": boolean, "MagneticStoreRejectedDataLocation": { "S3Configuration": { "BucketName": "string", "EncryptionOption": "string", "KmsKeyId": "string", "ObjectKeyPrefix": "string" } } }, "RetentionProperties": { "MagneticStoreRetentionPeriodInDays": number, "MemoryStoreRetentionPeriodInHours": number }, "Schema": { "CompositePartitionKey": [ { "EnforcementInRecord": "string", "Name": "string", "Type": "string" } ] }, "TableName": "string", "TableStatus": "string" } } Response Elements If the action is successful, the service sends back an HTTP 200 response. Actions 874 Amazon Timestream Developer Guide The following data is returned in JSON format by the service. Table The updated Timestream table. Type: Table object Errors For information about the errors that are common to all actions, see Common Errors. AccessDeniedException You are not authorized to perform this action. HTTP Status Code: 400 InternalServerException Timestream was unable to fully process this request because of an internal server error. HTTP Status Code: 500 InvalidEndpointException The requested endpoint was not valid. HTTP Status Code: 400 ResourceNotFoundException The operation tried to access a nonexistent resource. The resource might not be specified correctly, or its status might not be ACTIVE. HTTP Status Code: 400 ThrottlingException Too many requests were made by a user and they exceeded the service quotas. The request was throttled. HTTP Status Code: 400 ValidationException An invalid or malformed request. Actions 875 Amazon Timestream HTTP Status Code: 400 See Also Developer Guide For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Actions 876 Amazon Timestream Developer Guide WriteRecords Service: Amazon Timestream Write Enables you to write your time-series data into Timestream. You can specify a single data point or a batch of data points to be inserted into the system. Timestream offers you a flexible schema that auto detects the column names and data types for your Timestream tables based on the dimension names and data types of the data points you specify when invoking writes into the database. Timestream supports eventual consistency read semantics. 
This means that when you query data immediately after writing a batch of data into Timestream, the query results might not reflect the results of a recently completed write operation. The results may also include some stale data. If you repeat the query request after a short time, the results should return the latest data. Service quotas apply. See code sample for details. Upserts You can use the Version parameter in a WriteRecords request to update data points. Timestream tracks a version number with each record. Version defaults to 1 when it's not specified for the record in the request. Timestream updates an existing record's measure value along with its Version when it receives a write request with a higher Version number for that record. When it receives an update request where the measure value is the same as that of the existing record, Timestream still updates Version if it is greater than the existing value of Version. You can update a data point as many times as desired, as long as the value of Version continuously increases. For example, suppose you write a new record without indicating Version in the request. Timestream stores this record and sets Version to 1.
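In SDK terms, that first write might look like the following boto3 sketch. The database, table, dimension, and measure values are hypothetical, and Version is deliberately omitted so that it defaults to 1; resending the same record later with a higher Version and a new MeasureValue performs the upsert described here.

import time
import boto3

write_client = boto3.client("timestream-write", region_name="us-east-1")

# Hypothetical single record: one CPU measurement for one host.
# No Version is supplied, so Timestream stores the record with Version 1.
write_client.write_records(
    DatabaseName="sampleDB",
    TableName="DevOps",
    Records=[
        {
            "Dimensions": [{"Name": "host", "Value": "host-1"}],
            "MeasureName": "cpu_utilization",
            "MeasureValue": "35.2",
            "MeasureValueType": "DOUBLE",
            "Time": str(int(time.time() * 1000)),
            "TimeUnit": "MILLISECONDS",
        }
    ],
)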
Now, suppose you try to update this record with a WriteRecords request of the same record with a different measure value but, like before, do not provide Version. In this case, Timestream will reject this update with a RejectedRecordsException since the updated record’s version is not greater than the existing value of Version. However, if you were to resend the update request with Version set to 2, Timestream would then succeed in updating the record’s value, and the Version would be set to 2. Next, suppose you sent a WriteRecords request with this same record and an identical measure value, but with Version set to 3. In this case, Timestream would only update Version to 3. Any further updates would need to send a version number greater than 3, or the update requests would receive a RejectedRecordsException. Actions 877 Amazon Timestream Request Syntax { "CommonAttributes": { "Dimensions": [ { "DimensionValueType": "string", "Name": "string", "Value": "string" } ], "MeasureName": "string", "MeasureValue": "string", "MeasureValues": [ { "Name": "string", "Type": "string", "Value": "string" } ], "MeasureValueType": "string", "Time": "string", "TimeUnit": "string", "Version": number }, "DatabaseName": "string", "Records": [ { "Dimensions": [ { "DimensionValueType": "string", "Name": "string", "Value": "string" } ], "MeasureName": "string", "MeasureValue": "string", "MeasureValues": [ { "Name": "string", "Type": "string", "Value": "string" } ], Actions Developer Guide 878 Amazon Timestream Developer Guide "MeasureValueType": "string", "Time": "string", "TimeUnit": "string", "Version": number } ], "TableName": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. CommonAttributes A record that contains the common measure, dimension, time, and version attributes shared across all the records in the request. The measure and dimension attributes specified will be merged with the measure and dimension attributes in the records object when the data is written into Timestream. Dimensions may not overlap, or a ValidationException will be thrown. In other words, a record must contain dimensions with unique names. Type: Record object Required: No DatabaseName The name of the Timestream database. Type: String Length Constraints: Minimum length of 3. Maximum length of 256. Required: Yes Records An array of records that contain the unique measure, dimension, time, and version attributes for each time-series data point. Type: Array of Record objects Array Members: Minimum number of 1 item. Maximum number of 100 items. Actions 879 Amazon Timestream Required: Yes TableName The name of the Timestream table. Type: String Length Constraints: Minimum length of 3. Maximum length of 256. Developer Guide Required: Yes Response Syntax { "RecordsIngested": { "MagneticStore": number, "MemoryStore": number, "Total": number } } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. RecordsIngested Information on the records ingested by this request. Type: RecordsIngested object Errors For information about the errors that are common to all actions, see Common Errors. AccessDeniedException You are not authorized to perform this action. 
HTTP Status Code: 400 InternalServerException Timestream was unable to fully process this request because of an internal server error. HTTP Status Code: 500 InvalidEndpointException The requested endpoint was not valid. HTTP Status Code: 400 RejectedRecordsException WriteRecords would throw this exception in the following cases: • Records with duplicate data where there are multiple records with the same dimensions, timestamps, and measure names but: • Measure values are different • Version is not present in the request or the value of version in the new record is equal to or lower than the existing value In this case, if Timestream rejects data, the ExistingVersion field in the RejectedRecords response will indicate the current record's version. To force an update, you can resend the request with a version for the record set to a value greater than the ExistingVersion. • Records with timestamps that lie outside the retention duration of the memory store. • Records with dimensions or measures that exceed the Timestream defined limits. For more information, see Quotas in the Amazon Timestream Developer Guide.
HTTP Status Code: 400 ThrottlingException Too many requests were made by a user and they exceeded the service quotas. The request was throttled. Actions 881 Amazon Timestream HTTP Status Code: 400 ValidationException An invalid or malformed request. HTTP Status Code: 400 See Also Developer Guide For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Amazon Timestream Query The following actions are supported by Amazon Timestream Query: • CancelQuery • CreateScheduledQuery • DeleteScheduledQuery • DescribeAccountSettings • DescribeEndpoints • DescribeScheduledQuery • ExecuteScheduledQuery Actions 882 Developer Guide Amazon Timestream • ListScheduledQueries • ListTagsForResource • PrepareQuery • Query • TagResource • UntagResource • UpdateAccountSettings • UpdateScheduledQuery Actions 883 Amazon Timestream Developer Guide CancelQuery Service: Amazon Timestream Query Cancels a query that has been issued. Cancellation is provided only if the query has not completed running before the cancellation request was issued. Because cancellation is an idempotent operation, subsequent cancellation requests will return a CancellationMessage, indicating that the query has already been canceled. See code sample for details. Request Syntax { "QueryId": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. QueryId The ID of the query that needs to be cancelled. QueryID is returned as part of the query result. Type: String Length Constraints: Minimum length of 1. Maximum length of 64. Pattern: [a-zA-Z0-9]+ Required: Yes Response Syntax { "CancellationMessage": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. Actions 884 Amazon Timestream CancellationMessage Developer Guide A CancellationMessage is returned when a CancelQuery request for the query specified by QueryId has already been issued. Type: String Errors For information about the errors that are common to all actions, see Common Errors. AccessDeniedException You do not have the necessary permissions to access the account settings. HTTP Status Code: 400 InternalServerException An internal server error occurred while processing the request. HTTP Status Code: 400 InvalidEndpointException The requested endpoint is invalid. HTTP Status Code: 400 ThrottlingException The request was throttled due to excessive requests. HTTP Status Code: 400 ValidationException Invalid or malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: Actions 885 Developer Guide Amazon Timestream • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Actions 886 Amazon Timestream Developer Guide CreateScheduledQuery Service: Amazon Timestream Query Create a scheduled query that will be run on your behalf at the configured schedule. 
Timestream assumes the execution role provided as part of the ScheduledQueryExecutionRoleArn parameter to run the query. You can use the NotificationConfiguration parameter to configure notification for your scheduled query operations. Request Syntax { "ClientToken": "string", "ErrorReportConfiguration": { "S3Configuration": { "BucketName": "string", "EncryptionOption": "string", "ObjectKeyPrefix": "string" } }, "KmsKeyId": "string", "Name": "string", "NotificationConfiguration": { "SnsConfiguration": { "TopicArn": "string" } }, "QueryString": "string", "ScheduleConfiguration": { "ScheduleExpression": "string" }, "ScheduledQueryExecutionRoleArn": "string", "Tags": [ { "Key": "string", "Value": "string" } ], "TargetConfiguration": { "TimestreamConfiguration": { "DatabaseName": "string", "DimensionMappings": [ { "DimensionValueType": "string", Actions 887 Amazon Timestream Developer Guide "Name": "string" } ], "MeasureNameColumn": "string", "MixedMeasureMappings": [ { "MeasureName": "string", "MeasureValueType": "string", "MultiMeasureAttributeMappings": [ { "MeasureValueType": "string", "SourceColumn": "string", "TargetMultiMeasureAttributeName": "string" } ], "SourceColumn": "string", "TargetMeasureName": "string" } ], "MultiMeasureMappings": { "MultiMeasureAttributeMappings": [ { "MeasureValueType": "string", "SourceColumn": "string", "TargetMultiMeasureAttributeName": "string" } ], "TargetMultiMeasureName": "string" }, "TableName": "string", "TimeColumn": "string" } } } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. Actions 888 Amazon Timestream ClientToken Developer Guide Using a ClientToken makes the call to CreateScheduledQuery idempotent, in other words, making the same request repeatedly will produce the same result. Making multiple identical CreateScheduledQuery requests has the same effect as making a single request. • If CreateScheduledQuery is called without a ClientToken, the Query SDK generates a ClientToken on your behalf. • After 8 hours, any request with the same ClientToken is treated as a new request. Type: String Length Constraints: Minimum length of 32. Maximum length of 128. Required: No ErrorReportConfiguration Configuration
for error reporting. Error reports will be generated when a problem is encountered when writing the query results. Type: ErrorReportConfiguration object Required: Yes KmsKeyId The Amazon KMS key used to encrypt the scheduled query resource, at rest. If the Amazon KMS key is not specified, the scheduled query resource will be encrypted with a Timestream owned Amazon KMS key. To specify a KMS key, use the key ID, key ARN, alias name, or alias ARN. When using an alias name, prefix the name with alias/. If ErrorReportConfiguration uses SSE_KMS as the encryption type, the same KmsKeyId is used to encrypt the error report at rest. Type: String Length Constraints: Minimum length of 1. Maximum length of 2048. Required: No Name Name of the scheduled query. Type: String Length Constraints: Minimum length of 1. Maximum length of 64. Pattern: [a-zA-Z0-9|!\-_*'\(\)]([a-zA-Z0-9]|[!\-_*'\(\)\/.])+ Required: Yes NotificationConfiguration Notification configuration for the scheduled query. A notification is sent by Timestream when a query run finishes, when the state is updated, or when you delete it. Type: NotificationConfiguration object Required: Yes QueryString The query string to run. Parameter names can be specified in the query string by using the @ character followed by an identifier. The named parameter @scheduled_runtime is reserved and can be used in the query to get the time at which the query is scheduled to run. The timestamp calculated according to the ScheduleConfiguration parameter will be the value of the @scheduled_runtime parameter for each query run. For example, consider an instance of a scheduled query executing on 2021-12-01 00:00:00. For this instance, the @scheduled_runtime parameter is initialized to the timestamp 2021-12-01 00:00:00 when invoking the query. Type: String Length Constraints: Minimum length of 1. Maximum length of 262144. Required: Yes ScheduleConfiguration The schedule configuration for the query. Type: ScheduleConfiguration object Required: Yes ScheduledQueryExecutionRoleArn The ARN for the IAM role that Timestream will assume when running the scheduled query. Type: String Length Constraints: Minimum length of 1. Maximum length of 2048. Required: Yes Tags A list of key-value pairs to label the scheduled query. Type: Array of Tag objects Array Members: Minimum number of 0 items. Maximum number of 200 items. Required: No TargetConfiguration Configuration used for writing the result of a query. Type: TargetConfiguration object Required: No Response Syntax { "Arn": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. Arn ARN for the created scheduled query. Type: String Length Constraints: Minimum length of 1. Maximum length of 2048.
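To show how the request parameters above fit together, here is a hedged boto3 sketch of CreateScheduledQuery. Every ARN, bucket, database, and table name, and the aggregation query itself, are hypothetical placeholders chosen for illustration; only the parameter names come from this reference.

import boto3

query_client = boto3.client("timestream-query", region_name="us-east-1")

# Illustrative query: hourly average CPU per region, anchored on @scheduled_runtime.
query_string = (
    'SELECT region, bin(time, 1h) AS binned_timestamp, '
    'AVG(measure_value::double) AS avg_cpu '
    'FROM "sampleDB"."DevOps" '
    "WHERE measure_name = 'cpu_utilization' "
    'AND time BETWEEN @scheduled_runtime - 1h AND @scheduled_runtime '
    'GROUP BY region, bin(time, 1h)'
)

response = query_client.create_scheduled_query(
    Name="hourly-avg-cpu",
    QueryString=query_string,
    ScheduleConfiguration={"ScheduleExpression": "rate(1 hour)"},
    NotificationConfiguration={
        "SnsConfiguration": {
            "TopicArn": "arn:aws:sns:us-east-1:123456789012:scheduled-query-status"
        }
    },
    ScheduledQueryExecutionRoleArn="arn:aws:iam::123456789012:role/TimestreamSQRole",
    ErrorReportConfiguration={
        "S3Configuration": {
            "BucketName": "amzn-s3-demo-bucket",
            "ObjectKeyPrefix": "scheduled-query-errors/",
        }
    },
    TargetConfiguration={
        "TimestreamConfiguration": {
            "DatabaseName": "sampleDB",
            "TableName": "DevOpsHourly",
            "TimeColumn": "binned_timestamp",
            "DimensionMappings": [
                {"Name": "region", "DimensionValueType": "VARCHAR"}
            ],
            "MultiMeasureMappings": {
                "TargetMultiMeasureName": "cpu_stats",
                "MultiMeasureAttributeMappings": [
                    {"SourceColumn": "avg_cpu", "MeasureValueType": "DOUBLE"}
                ],
            },
        }
    },
)
print(response["Arn"])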
Errors For information about the errors that are common to all actions, see Common Errors. AccessDeniedException You do not have the necessary permissions to access the account settings. HTTP Status Code: 400 ConflictException Unable to poll results for a cancelled query. HTTP Status Code: 400 InternalServerException An internal server error occurred while processing the request. HTTP Status Code: 400 InvalidEndpointException The requested endpoint is invalid. HTTP Status Code: 400 ServiceQuotaExceededException You have exceeded the service quota. HTTP Status Code: 400 ThrottlingException The request was throttled due to excessive requests. HTTP Status Code: 400 ValidationException Invalid or malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK
for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Actions 893 Amazon Timestream Developer Guide DeleteScheduledQuery Service: Amazon Timestream Query Deletes a given scheduled query. This is an irreversible operation. Request Syntax { "ScheduledQueryArn": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. ScheduledQueryArn The ARN of the scheduled query. Type: String Length Constraints: Minimum length of 1. Maximum length of 2048. Required: Yes Response Elements If the action is successful, the service sends back an HTTP 200 response with an empty HTTP body. Errors For information about the errors that are common to all actions, see Common Errors. AccessDeniedException You do not have the necessary permissions to access the account settings. HTTP Status Code: 400 InternalServerException An internal server error occurred while processing the request. Actions 894 Developer Guide Amazon Timestream HTTP Status Code: 400 InvalidEndpointException The requested endpoint is invalid. HTTP Status Code: 400 ResourceNotFoundException The requested resource could not be found. HTTP Status Code: 400 ThrottlingException The request was throttled due to excessive requests. HTTP Status Code: 400 ValidationException Invalid or malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Actions 895 Amazon Timestream Developer Guide Actions 896 Amazon Timestream Developer Guide DescribeAccountSettings Service: Amazon Timestream Query Describes the settings for your account that include the query pricing model and the configured maximum TCUs the service can use for your query workload. You're charged only for the duration of compute units used for your workloads. Response Syntax { "MaxQueryTCU": number, "QueryCompute": { "ComputeMode": "string", "ProvisionedCapacity": { "ActiveQueryTCU": number, "LastUpdate": { "Status": "string", "StatusMessage": "string", "TargetQueryTCU": number }, "NotificationConfiguration": { "RoleArn": "string", "SnsConfiguration": { "TopicArn": "string" } } } }, "QueryPricingModel": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. MaxQueryTCU The maximum number of Timestream compute units (TCUs) the service will use at any point in time to serve your queries. To run queries, you must set a minimum capacity of 4 TCU. You Actions 897 Amazon Timestream Developer Guide can set the maximum number of TCU in multiples of 4, for example, 4, 8, 16, 32, and so on. This configuration is applicable only for on-demand usage of (TCUs). Type: Integer QueryCompute An object that contains the usage settings for Timestream Compute Units (TCUs) in your account for the query workload. QueryCompute is available only in the Asia Pacific (Mumbai) region. Type: QueryComputeResponse object QueryPricingModel The pricing model for queries in your account. Note The QueryPricingModel parameter is used by several Timestream operations; however, the UpdateAccountSettings API operation doesn't recognize any values other than COMPUTE_UNITS. 
Type: String Valid Values: BYTES_SCANNED | COMPUTE_UNITS Errors For information about the errors that are common to all actions, see Common Errors. AccessDeniedException You do not have the necessary permissions to access the account settings. HTTP Status Code: 400 InternalServerException An internal server error occurred while processing the request. HTTP Status Code: 400 InvalidEndpointException The requested endpoint is invalid. HTTP Status Code: 400 ThrottlingException The request was throttled due to excessive requests. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 DescribeEndpoints
Service: Amazon Timestream Query DescribeEndpoints returns a list of available endpoints to make Timestream API calls against. This API is available through both Write and Query. Because the Timestream SDKs are designed to transparently work with the service’s architecture, including the management and mapping of the service endpoints, it is not recommended that you use this API unless: • You are using VPC endpoints (AWS PrivateLink) with Timestream • Your application uses a programming language that does not yet have SDK support • You require better control over the client-side implementation For detailed information on how and when to use and implement DescribeEndpoints, see The Endpoint Discovery Pattern. Response Syntax { "Endpoints": [ { "Address": "string", "CachePeriodInMinutes": number } ] } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. Endpoints An Endpoints object is returned when a DescribeEndpoints request is made. Type: Array of Endpoint objects Actions 900 Amazon Timestream Errors Developer Guide For information about the errors that are common to all actions, see Common Errors. InternalServerException An internal server error occurred while processing the request. HTTP Status Code: 400 ThrottlingException The request was throttled due to excessive requests. HTTP Status Code: 400 ValidationException Invalid or malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Actions 901 Amazon Timestream Developer Guide DescribeScheduledQuery Service: Amazon Timestream Query Provides detailed information about a scheduled query. Request Syntax { "ScheduledQueryArn": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. ScheduledQueryArn The ARN of the scheduled query. Type: String Length Constraints: Minimum length of 1. Maximum length of 2048. 
Required: Yes Response Syntax { "ScheduledQuery": { "Arn": "string", "CreationTime": number, "ErrorReportConfiguration": { "S3Configuration": { "BucketName": "string", "EncryptionOption": "string", "ObjectKeyPrefix": "string" } }, "KmsKeyId": "string", "LastRunSummary": { "ErrorReportLocation": { Actions 902 Amazon Timestream Developer Guide "S3ReportLocation": { "BucketName": "string", "ObjectKey": "string" } }, "ExecutionStats": { "BytesMetered": number, "CumulativeBytesScanned": number, "DataWrites": number, "ExecutionTimeInMillis": number, "QueryResultRows": number, "RecordsIngested": number }, "FailureReason": "string", "InvocationTime": number, "QueryInsightsResponse": { "OutputBytes": number, "OutputRows": number, "QuerySpatialCoverage": { "Max": { "PartitionKey": [ "string" ], "TableArn": "string", "Value": number } }, "QueryTableCount": number, "QueryTemporalRange": { "Max": { "TableArn": "string", "Value": number } } }, "RunStatus": "string", "TriggerTime": number }, "Name": "string", "NextInvocationTime": number, "NotificationConfiguration": { "SnsConfiguration": { "TopicArn": "string" } }, "PreviousInvocationTime": number, Actions 903 Amazon Timestream Developer Guide "QueryString": "string", "RecentlyFailedRuns": [ { "ErrorReportLocation": { "S3ReportLocation": { "BucketName": "string", "ObjectKey": "string" } }, "ExecutionStats": { "BytesMetered": number, "CumulativeBytesScanned": number, "DataWrites": number, "ExecutionTimeInMillis": number, "QueryResultRows": number, "RecordsIngested": number }, "FailureReason": "string", "InvocationTime": number, "QueryInsightsResponse": { "OutputBytes": number, "OutputRows": number, "QuerySpatialCoverage": { "Max": { "PartitionKey": [ "string" ], "TableArn": "string", "Value": number } }, "QueryTableCount": number, "QueryTemporalRange": { "Max": { "TableArn": "string", "Value": number } } }, "RunStatus": "string", "TriggerTime": number } ], "ScheduleConfiguration": { "ScheduleExpression": "string" }, Actions 904 Amazon Timestream Developer Guide "ScheduledQueryExecutionRoleArn": "string", "State": "string", "TargetConfiguration": { "TimestreamConfiguration": { "DatabaseName": "string", "DimensionMappings": [ { "DimensionValueType": "string", "Name": "string" } ], "MeasureNameColumn": "string", "MixedMeasureMappings": [ { "MeasureName": "string", "MeasureValueType": "string", "MultiMeasureAttributeMappings": [ { "MeasureValueType": "string", "SourceColumn": "string", "TargetMultiMeasureAttributeName": "string" } ], "SourceColumn": "string", "TargetMeasureName": "string" } ], "MultiMeasureMappings": { "MultiMeasureAttributeMappings": [ { "MeasureValueType": "string", "SourceColumn": "string", "TargetMultiMeasureAttributeName": "string" } ], "TargetMultiMeasureName": "string" }, "TableName": "string", "TimeColumn": "string" } } } } Actions 905 Amazon Timestream Response Elements Developer Guide If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. ScheduledQuery The scheduled query. Type: ScheduledQueryDescription object Errors For information about the errors that are common to all actions, see Common Errors. AccessDeniedException You do not have the necessary permissions to access the account settings. HTTP Status Code: 400 InternalServerException An internal server error occurred while processing the request. HTTP Status Code: 400 InvalidEndpointException The requested endpoint is invalid. 
HTTP Status Code: 400 ResourceNotFoundException The requested resource could not be found. HTTP Status Code: 400 ThrottlingException The request was throttled due to excessive requests. HTTP Status Code: 400 Actions 906 Amazon Timestream ValidationException Invalid or malformed request. HTTP Status Code: 400 See Also Developer Guide For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET
• AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 ExecuteScheduledQuery Service: Amazon Timestream Query You can use this API to run a scheduled query manually. If you enabled QueryInsights, this API also returns insights and metrics related to the query that you executed as part of an Amazon SNS notification. QueryInsights helps with performance tuning of your query. For more information about QueryInsights, see Using query insights to optimize queries in Amazon Timestream. Request Syntax { "ClientToken": "string", "InvocationTime": number, "QueryInsights": { "Mode": "string" }, "ScheduledQueryArn": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. ClientToken Not used. Type: String Length Constraints: Minimum length of 32. Maximum length of 128. Required: No InvocationTime The timestamp in UTC. Query will be run as if it was invoked at this timestamp. Type: Timestamp Required: Yes QueryInsights Encapsulates settings for enabling QueryInsights. Enabling QueryInsights returns insights and metrics as a part of the Amazon SNS notification for the query that you executed. You can use QueryInsights to tune your query performance and cost. Type: ScheduledQueryInsights object Required: No ScheduledQueryArn ARN of the scheduled query. Type: String Length Constraints: Minimum length of 1. Maximum length of 2048. Required: Yes Response Elements If the action is successful, the service sends back an HTTP 200 response with an empty HTTP body. Errors For information about the errors that are common to all actions, see Common Errors. AccessDeniedException You do not have the necessary permissions to access the account settings. HTTP Status Code: 400 InternalServerException An internal server error occurred while processing the request. HTTP Status Code: 400 InvalidEndpointException The requested endpoint is invalid.
"SuccessNotificationMessage": { "type": "MANUAL_TRIGGER_SUCCESS", "arn": "arn:aws:timestream:<Region>:<Account>:scheduled-query/sq-test-49c6ed55- c2e7-4cc2-9956-4a0ecea13420-80e05b035236a4c3", "scheduledQueryRunSummary": { "invocationEpochSecond": 1723710546, "triggerTimeMillis": 1723710547490, "runStatus": "MANUAL_TRIGGER_SUCCESS", "executionStats": { "executionTimeInMillis": 17343, "dataWrites": 1024, "bytesMetered": 0, "cumulativeBytesScanned": 600, "recordsIngested": 1, "queryResultRows": 1 }, "queryInsightsResponse": { "querySpatialCoverage": { Actions 910 Amazon Timestream "max": { Developer Guide "value": 1.0, "tableArn": "arn:aws:timestream:<Region>:<Account>:database/BaseDb/ table/BaseTable", "partitionKey": [ "measure_name" ] } }, "queryTemporalRange": { "max": { "value": 2399999999999, "tableArn": "arn:aws:timestream:<Region>:<Account>:database/BaseDb/ table/BaseTable" } }, "queryTableCount": 1, "outputRows": 1, "outputBytes": 59 } } } Scheduled query notification message for the DISABLED mode The following example shows a successful scheduled query notification message for the DISABLED mode of the QueryInsights parameter. "SuccessNotificationMessage": { "type": "MANUAL_TRIGGER_SUCCESS", "arn": "arn:aws:timestream:<Region>:<Account>:scheduled-query/sq-test- fa109d9e-6528-4a0d-ac40-482fa05e657f-140faaeecdc5b2a7", "scheduledQueryRunSummary": { "invocationEpochSecond": 1723711401, "triggerTimeMillis": 1723711402144, "runStatus": "MANUAL_TRIGGER_SUCCESS", "executionStats": { "executionTimeInMillis": 17992, "dataWrites": 1024, "bytesMetered": 0, "cumulativeBytesScanned": 600, "recordsIngested": 1, "queryResultRows": 1 Actions 911 Amazon Timestream } } } Developer Guide Failure notification message for the ENABLED_WITH_RATE_CONTROL mode The following example shows a failed scheduled query notification message for the ENABLED_WITH_RATE_CONTROL mode of the QueryInsights parameter. "FailureNotificationMessage": { "type": "MANUAL_TRIGGER_FAILURE", "arn": "arn:aws:timestream:<Region>:<Account>:scheduled-query/sq-test- b261670d-790c-4116-9db5-0798071b18b1-b7e27a1d79be226d", "scheduledQueryRunSummary": { "invocationEpochSecond": 1727915513, "triggerTimeMillis": 1727915513894, "runStatus": "MANUAL_TRIGGER_FAILURE", "executionStats": { "executionTimeInMillis": 10777, "dataWrites": 0, "bytesMetered": 0, "cumulativeBytesScanned": 0, "recordsIngested": 0, "queryResultRows": 4 }, "errorReportLocation": { "s3ReportLocation": { "bucketName": "amzn-s3-demo-bucket", "objectKey": "4my-organization-f7a3c5d065a1a95e/1727915513/ MANUAL/1727915513894/5e14b3df-b147-49f4-9331-784f749b68ae" } }, "failureReason": "Schedule encountered some errors and is incomplete. Please take a look at error report for further details" } } Failure notification message for the DISABLED mode The following example shows a failed scheduled query notification message for the DISABLED mode of the QueryInsights parameter. 
"FailureNotificationMessage": { Actions 912 Amazon Timestream Developer Guide "type": "MANUAL_TRIGGER_FAILURE", "arn": "arn:aws:timestream:<Region>:<Account>:scheduled-query/sq-test- b261670d-790c-4116-9db5-0798071b18b1-b7e27a1d79be226d", "scheduledQueryRunSummary": { "invocationEpochSecond": 1727915194, "triggerTimeMillis": 1727915195119, "runStatus": "MANUAL_TRIGGER_FAILURE", "executionStats": { "executionTimeInMillis": 10777, "dataWrites": 0, "bytesMetered": 0, "cumulativeBytesScanned": 0, "recordsIngested": 0, "queryResultRows": 4 }, "errorReportLocation": { "s3ReportLocation": { "bucketName": "amzn-s3-demo-bucket", "objectKey": "4my-organization-b7e27a1d79be226d/1727915194/ MANUAL/1727915195119/08dea9f5-9a0a-4e63-a5f7-ded23247bb98" } }, "failureReason":
"bucketName": "amzn-s3-demo-bucket", "objectKey": "4my-organization-f7a3c5d065a1a95e/1727915513/ MANUAL/1727915513894/5e14b3df-b147-49f4-9331-784f749b68ae" } }, "failureReason": "Schedule encountered some errors and is incomplete. Please take a look at error report for further details" } } Failure notification message for the DISABLED mode The following example shows a failed scheduled query notification message for the DISABLED mode of the QueryInsights parameter. "FailureNotificationMessage": { Actions 912 Amazon Timestream Developer Guide "type": "MANUAL_TRIGGER_FAILURE", "arn": "arn:aws:timestream:<Region>:<Account>:scheduled-query/sq-test- b261670d-790c-4116-9db5-0798071b18b1-b7e27a1d79be226d", "scheduledQueryRunSummary": { "invocationEpochSecond": 1727915194, "triggerTimeMillis": 1727915195119, "runStatus": "MANUAL_TRIGGER_FAILURE", "executionStats": { "executionTimeInMillis": 10777, "dataWrites": 0, "bytesMetered": 0, "cumulativeBytesScanned": 0, "recordsIngested": 0, "queryResultRows": 4 }, "errorReportLocation": { "s3ReportLocation": { "bucketName": "amzn-s3-demo-bucket", "objectKey": "4my-organization-b7e27a1d79be226d/1727915194/ MANUAL/1727915195119/08dea9f5-9a0a-4e63-a5f7-ded23247bb98" } }, "failureReason": "Schedule encountered some errors and is incomplete. Please take a look at error report for further details" } } See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 Actions 913 Amazon Timestream • AWS SDK for Python • AWS SDK for Ruby V3 Developer Guide Actions 914 Amazon Timestream Developer Guide ListScheduledQueries Service: Amazon Timestream Query Gets a list of all scheduled queries in the caller's Amazon account and Region. ListScheduledQueries is eventually consistent. Request Syntax { "MaxResults": number, "NextToken": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. MaxResults The maximum number of items to return in the output. If the total number of items available is more than the value specified, a NextToken is provided in the output. To resume pagination, provide the NextToken value as the argument to the subsequent call to ListScheduledQueriesRequest. Type: Integer Valid Range: Minimum value of 1. Maximum value of 1000. Required: No NextToken A pagination token to resume pagination. Type: String Required: No Response Syntax { Actions 915 Amazon Timestream Developer Guide "NextToken": "string", "ScheduledQueries": [ { "Arn": "string", "CreationTime": number, "ErrorReportConfiguration": { "S3Configuration": { "BucketName": "string", "EncryptionOption": "string", "ObjectKeyPrefix": "string" } }, "LastRunStatus": "string", "Name": "string", "NextInvocationTime": number, "PreviousInvocationTime": number, "State": "string", "TargetDestination": { "TimestreamDestination": { "DatabaseName": "string", "TableName": "string" } } } ] } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. NextToken A token to specify where to start paginating. This is the NextToken from a previously truncated response. Type: String ScheduledQueries A list of scheduled queries. 
Errors For information about the errors that are common to all actions, see Common Errors. AccessDeniedException You do not have the necessary permissions to access the account settings. HTTP Status Code: 400 InternalServerException An internal server error occurred while processing the request. HTTP Status Code: 400 InvalidEndpointException The requested endpoint is invalid. HTTP Status Code: 400 ThrottlingException The request was throttled due to excessive requests. HTTP Status Code: 400 ValidationException Invalid or malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 ListTagsForResource Service: Amazon Timestream Query List all tags on a Timestream query resource. Request Syntax { "MaxResults": number, "NextToken": "string", "ResourceARN": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. MaxResults The maximum number of tags to return. Type: Integer Valid Range: Minimum value of 1. Maximum value of 200. Required: No NextToken A pagination token to resume pagination. Type: String Required: No ResourceARN The Timestream resource with tags
to be listed. This value is an Amazon Resource Name (ARN). Type: String Length Constraints: Minimum length of 1. Maximum length of 2048. Actions 919 Developer Guide Amazon Timestream Required: Yes Response Syntax { "NextToken": "string", "Tags": [ { "Key": "string", "Value": "string" } ] } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. NextToken A pagination token to resume pagination with a subsequent call to ListTagsForResourceResponse. Type: String Tags The tags currently associated with the Timestream resource. Type: Array of Tag objects Array Members: Minimum number of 0 items. Maximum number of 200 items. Errors For information about the errors that are common to all actions, see Common Errors. InvalidEndpointException The requested endpoint is invalid. Actions 920 Developer Guide Amazon Timestream HTTP Status Code: 400 ResourceNotFoundException The requested resource could not be found. HTTP Status Code: 400 ThrottlingException The request was throttled due to excessive requests. HTTP Status Code: 400 ValidationException Invalid or malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Actions 921 Amazon Timestream Developer Guide PrepareQuery Service: Amazon Timestream Query A synchronous operation that allows you to submit a query with parameters to be stored by Timestream for later running. Timestream only supports using this operation with ValidateOnly set to true. Request Syntax { "QueryString": "string", "ValidateOnly": boolean } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. QueryString The Timestream query string that you want to use as a prepared statement. Parameter names can be specified in the query string @ character followed by an identifier. Type: String Length Constraints: Minimum length of 1. Maximum length of 262144. Required: Yes ValidateOnly By setting this value to true, Timestream will only validate that the query string is a valid Timestream query, and not store the prepared query for later use. Type: Boolean Required: No Response Syntax { "Columns": [ Actions 922 Amazon Timestream Developer Guide { "Aliased": boolean, "DatabaseName": "string", "Name": "string", "TableName": "string", "Type": { "ArrayColumnInfo": { "Name": "string", "Type": "Type" }, "RowColumnInfo": [ { "Name": "string", "Type": "Type" } ], "ScalarType": "string", "TimeSeriesMeasureValueColumnInfo": { "Name": "string", "Type": "Type" } } } ], "Parameters": [ { "Name": "string", "Type": { "ArrayColumnInfo": { "Name": "string", "Type": "Type" }, "RowColumnInfo": [ { "Name": "string", "Type": "Type" } ], "ScalarType": "string", "TimeSeriesMeasureValueColumnInfo": { "Name": "string", "Type": "Type" } } Actions 923 Amazon Timestream } ], "QueryString": "string" } Response Elements Developer Guide If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. Columns A list of SELECT clause columns of the submitted query string. 
Type: Array of SelectColumn objects Parameters A list of parameters used in the submitted query string. Type: Array of ParameterMapping objects QueryString The query string that you want to prepare. Type: String Length Constraints: Minimum length of 1. Maximum length of 262144. Errors For information about the errors that are common to all actions, see Common Errors. AccessDeniedException You do not have the necessary permissions to access the account settings. HTTP Status Code: 400 InternalServerException An internal server error occurred while processing the request. HTTP Status Code: 400 InvalidEndpointException The requested endpoint is invalid. HTTP Status Code: 400 ThrottlingException The request was throttled due to excessive requests. HTTP Status Code: 400 ValidationException Invalid or malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2
timestream-223
timestream.pdf
223
• AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Actions 925 Amazon Timestream Developer Guide Query Service: Amazon Timestream Query Query is a synchronous operation that enables you to run a query against your Amazon Timestream data. If you enabled QueryInsights, this API also returns insights and metrics related to the query that you executed. QueryInsights helps with performance tuning of your query. For more information about QueryInsights, see Using query insights to optimize queries in Amazon Timestream. Note The maximum number of Query API requests you're allowed to make with QueryInsights enabled is 1 query per second (QPS). If you exceed this query rate, it might result in throttling. Query will time out after 60 seconds. You must update the default timeout in the SDK to support a timeout of 60 seconds. See the code sample for details. Your query request will fail in the following cases: • If you submit a Query request with the same client token outside of the 5-minute idempotency window. • If you submit a Query request with the same client token, but change other parameters, within the 5-minute idempotency window. • If the size of the row (including the query metadata) exceeds 1 MB, then the query will fail with the following error message: Query aborted as max page response size has been exceeded by the output result row • If the IAM principal of the query initiator and the result reader are not the same and/or the query initiator and the result reader do not have the same query string in the query requests, the query will fail with an Invalid pagination token error. Actions 926 Developer Guide Amazon Timestream Request Syntax { "ClientToken": "string", "MaxRows": number, "NextToken": "string", "QueryInsights": { "Mode": "string" }, "QueryString": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. ClientToken Unique, case-sensitive string of up to 64 ASCII characters specified when a Query request is made. Providing a ClientToken makes the call to Query idempotent. This means that running the same query repeatedly will produce the same result. In other words, making multiple identical Query requests has the same effect as making a single request. When using ClientToken in a query, note the following: • If the Query API is instantiated without a ClientToken, the Query SDK generates a ClientToken on your behalf. • If the Query invocation only contains the ClientToken but does not include a NextToken, that invocation of Query is assumed to be a new query run. • If the invocation contains NextToken, that particular invocation is assumed to be a subsequent invocation of a prior call to the Query API, and a result set is returned. • After 4 hours, any request with the same ClientToken is treated as a new request. Type: String Length Constraints: Minimum length of 32. Maximum length of 128. Required: No Actions 927 Amazon Timestream MaxRows Developer Guide The total number of rows to be returned in the Query output. The initial run of Query with a MaxRows value specified will return the result set of the query in two cases: • The size of the result is less than 1MB. • The number of rows in the result set is less than the value of maxRows. Otherwise, the initial invocation of Query only returns a NextToken, which can then be used in subsequent calls to fetch the result set. 
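As a point of reference, the 60-second timeout noted above and the pagination behavior of MaxRows and NextToken can be exercised with the AWS SDK for Python (Boto3) roughly as in the following sketch; the query string, database, and table names are assumptions for illustration only.

import boto3
from botocore.config import Config

# Raise the SDK read timeout to 60 seconds, matching the Query timeout
# described in the note above.
query_client = boto3.client(
    "timestream-query",
    config=Config(read_timeout=60, retries={"max_attempts": 10}),
)

query = 'SELECT * FROM "sampleDB"."DevOps" WHERE time > ago(15m)'

rows = []
next_token = None
while True:
    kwargs = {"QueryString": query, "MaxRows": 100}
    if next_token:
        kwargs["NextToken"] = next_token
    page = query_client.query(**kwargs)
    rows.extend(page["Rows"])
    next_token = page.get("NextToken")
    if not next_token:
        break

print(f"Fetched {len(rows)} rows")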
To resume pagination, provide the NextToken value in the subsequent command. If the row size is large (e.g. a row has many columns), Timestream may return fewer rows to keep the response size from exceeding the 1 MB limit. If MaxRows is not provided, Timestream will send the necessary number of rows to meet the 1 MB limit. Type: Integer Valid Range: Minimum value of 1. Maximum value of 1000. Required: No NextToken A pagination token used to return a set of results. When the Query API is invoked using NextToken, that particular invocation is
assumed to be a subsequent invocation of a prior call to Query, and a result set is returned. However, if the Query invocation only contains the ClientToken, that invocation of Query is assumed to be a new query run. Note the following when using NextToken in a query: • A pagination token can be used for up to five Query invocations, OR for a duration of up to 1 hour – whichever comes first. • Using the same NextToken will return the same set of records. To keep paginating through the result set, you must to use the most recent nextToken. • Suppose a Query invocation returns two NextToken values, TokenA and TokenB. If TokenB is used in a subsequent Query invocation, then TokenA is invalidated and cannot be reused. • To request a previous result set from a query after pagination has begun, you must re-invoke the Query API. • The latest NextToken should be used to paginate until null is returned, at which point a new NextToken should be used. Actions 928 Amazon Timestream Developer Guide • If the IAM principal of the query initiator and the result reader are not the same and/or the query initiator and the result reader do not have the same query string in the query requests, the query will fail with an Invalid pagination token error. Type: String Length Constraints: Minimum length of 1. Maximum length of 2048. Required: No QueryInsights Encapsulates settings for enabling QueryInsights. Enabling QueryInsights returns insights and metrics in addition to query results for the query that you executed. You can use QueryInsights to tune your query performance. Type: QueryInsights object Required: No QueryString The query to be run by Timestream. Type: String Length Constraints: Minimum length of 1. Maximum length of 262144. Required: Yes Response Syntax { "ColumnInfo": [ { "Name": "string", "Type": { "ArrayColumnInfo": "ColumnInfo", "RowColumnInfo": [ "ColumnInfo" ], "ScalarType": "string", "TimeSeriesMeasureValueColumnInfo": "ColumnInfo" } Actions 929 Developer Guide Amazon Timestream } ], "NextToken": "string", "QueryId": "string", "QueryInsightsResponse": { "OutputBytes": number, "OutputRows": number, "QuerySpatialCoverage": { "Max": { "PartitionKey": [ "string" ], "TableArn": "string", "Value": number } }, "QueryTableCount": number, "QueryTemporalRange": { "Max": { "TableArn": "string", "Value": number } }, "UnloadPartitionCount": number, "UnloadWrittenBytes": number, "UnloadWrittenRows": number }, "QueryStatus": { "CumulativeBytesMetered": number, "CumulativeBytesScanned": number, "ProgressPercentage": number }, "Rows": [ { "Data": [ { "ArrayValue": [ "Datum" ], "NullValue": boolean, "RowValue": "Row", "ScalarValue": "string", "TimeSeriesValue": [ { "Time": "string", "Value": "Datum" Actions 930 Amazon Timestream } ] } ] } ] } Response Elements Developer Guide If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. ColumnInfo The column data types of the returned result set. Type: Array of ColumnInfo objects NextToken A pagination token that can be used again on a Query call to get the next set of results. Type: String Length Constraints: Minimum length of 1. Maximum length of 2048. QueryId A unique ID for the given query. Type: String Length Constraints: Minimum length of 1. Maximum length of 64. Pattern: [a-zA-Z0-9]+ QueryInsightsResponse Encapsulates QueryInsights containing insights and metrics related to the query that you executed. 
Type: QueryInsightsResponse object Actions 931 Amazon Timestream QueryStatus Developer Guide Information about the status of the query, including progress and bytes scanned. Type: QueryStatus object Rows The result set rows returned by the query. Type: Array of Row objects Errors For information about the errors that are common to all actions, see Common Errors. AccessDeniedException You do not have the necessary permissions to access the account settings. HTTP Status Code: 400 ConflictException Unable to poll results for a cancelled query. HTTP Status Code: 400 InternalServerException An internal server error occurred while processing the request. HTTP Status Code: 400 InvalidEndpointException The requested endpoint is invalid. HTTP Status Code: 400 QueryExecutionException Timestream was unable to run the query successfully. HTTP Status Code: 400 Actions 932 Amazon Timestream ThrottlingException The request was throttled due to excessive requests. Developer Guide HTTP Status Code: 400 ValidationException Invalid or malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for
.NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Actions 933 Amazon Timestream Developer Guide TagResource Service: Amazon Timestream Query Associate a set of tags with a Timestream resource. You can then activate these user-defined tags so that they appear on the Billing and Cost Management console for cost allocation tracking. Request Syntax { "ResourceARN": "string", "Tags": [ { "Key": "string", "Value": "string" } ] } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. ResourceARN Identifies the Timestream resource to which tags should be added. This value is an Amazon Resource Name (ARN). Type: String Length Constraints: Minimum length of 1. Maximum length of 2048. Required: Yes Tags The tags to be assigned to the Timestream resource. Type: Array of Tag objects Array Members: Minimum number of 0 items. Maximum number of 200 items. Required: Yes Actions 934 Amazon Timestream Response Elements Developer Guide If the action is successful, the service sends back an HTTP 200 response with an empty HTTP body. Errors For information about the errors that are common to all actions, see Common Errors. InvalidEndpointException The requested endpoint is invalid. HTTP Status Code: 400 ResourceNotFoundException The requested resource could not be found. HTTP Status Code: 400 ServiceQuotaExceededException You have exceeded the service quota. HTTP Status Code: 400 ThrottlingException The request was throttled due to excessive requests. HTTP Status Code: 400 ValidationException Invalid or malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET Actions 935 Developer Guide Amazon Timestream • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Actions 936 Amazon Timestream Developer Guide UntagResource Service: Amazon Timestream Query Removes the association of tags from a Timestream query resource. Request Syntax { "ResourceARN": "string", "TagKeys": [ "string" ] } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. ResourceARN The Timestream resource that the tags will be removed from. This value is an Amazon Resource Name (ARN). Type: String Length Constraints: Minimum length of 1. Maximum length of 2048. Required: Yes TagKeys A list of tags keys. Existing tags of the resource whose keys are members of this list will be removed from the Timestream resource. Type: Array of strings Array Members: Minimum number of 0 items. Maximum number of 200 items. Length Constraints: Minimum length of 1. Maximum length of 128. Required: Yes Actions 937 Amazon Timestream Response Elements Developer Guide If the action is successful, the service sends back an HTTP 200 response with an empty HTTP body. Errors For information about the errors that are common to all actions, see Common Errors. InvalidEndpointException The requested endpoint is invalid. HTTP Status Code: 400 ResourceNotFoundException The requested resource could not be found. 
HTTP Status Code: 400 ThrottlingException The request was throttled due to excessive requests. HTTP Status Code: 400 ValidationException Invalid or malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 Actions 938 Amazon Timestream • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Developer Guide Actions 939 Amazon Timestream Developer Guide UpdateAccountSettings Service: Amazon Timestream Query Transitions your account to use TCUs for query pricing and modifies the maximum query compute units
that you've configured. If you reduce the value of MaxQueryTCU to a desired configuration, the new value can take up to 24 hours to be effective. Note After you've transitioned your account to use TCUs for query pricing, you can't transition to using bytes scanned for query pricing. Request Syntax { "MaxQueryTCU": number, "QueryCompute": { "ComputeMode": "string", "ProvisionedCapacity": { "NotificationConfiguration": { "RoleArn": "string", "SnsConfiguration": { "TopicArn": "string" } }, "TargetQueryTCU": number } }, "QueryPricingModel": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. MaxQueryTCU The maximum number of compute units the service will use at any point in time to serve your queries. To run queries, you must set a minimum capacity of 4 TCU. You can set the maximum Actions 940 Amazon Timestream Developer Guide number of TCU in multiples of 4, for example, 4, 8, 16, 32, and so on. The maximum value supported for MaxQueryTCU is 1000. To request an increase to this soft limit, contact AWS Support. For information about the default quota for maxQueryTCU, see Default quotas. This configuration is applicable only for on-demand usage of Timestream Compute Units (TCUs). The maximum value supported for MaxQueryTCU is 1000. To request an increase to this soft limit, contact AWS Support. For information about the default quota for maxQueryTCU, see Default quotas. Type: Integer Required: No QueryCompute Modifies the query compute settings configured in your account, including the query pricing model and provisioned Timestream Compute Units (TCUs) in your account. QueryCompute is available only in the Asia Pacific (Mumbai) region. Note This API is idempotent, meaning that making the same request multiple times will have the same effect as making the request once. Type: QueryComputeRequest object Required: No QueryPricingModel The pricing model for queries in an account. Note The QueryPricingModel parameter is used by several Timestream operations; however, the UpdateAccountSettings API operation doesn't recognize any values other than COMPUTE_UNITS. Type: String Actions 941 Amazon Timestream Developer Guide Valid Values: BYTES_SCANNED | COMPUTE_UNITS Required: No Response Syntax { "MaxQueryTCU": number, "QueryCompute": { "ComputeMode": "string", "ProvisionedCapacity": { "ActiveQueryTCU": number, "LastUpdate": { "Status": "string", "StatusMessage": "string", "TargetQueryTCU": number }, "NotificationConfiguration": { "RoleArn": "string", "SnsConfiguration": { "TopicArn": "string" } } } }, "QueryPricingModel": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. MaxQueryTCU The configured maximum number of compute units the service will use at any point in time to serve your queries. Type: Integer Actions 942 Amazon Timestream QueryCompute Developer Guide Confirms the updated account settings for querying data in your account. QueryCompute is available only in the Asia Pacific (Mumbai) region. Type: QueryComputeResponse object QueryPricingModel The pricing model for an account. Type: String Valid Values: BYTES_SCANNED | COMPUTE_UNITS Errors For information about the errors that are common to all actions, see Common Errors. AccessDeniedException You do not have the necessary permissions to access the account settings. 
HTTP Status Code: 400 InternalServerException An internal server error occurred while processing the request. HTTP Status Code: 400 InvalidEndpointException The requested endpoint is invalid. HTTP Status Code: 400 ThrottlingException The request was throttled due to excessive requests. HTTP Status Code: 400 ValidationException Invalid or malformed request. Actions 943 Amazon Timestream HTTP Status Code: 400 See Also Developer Guide For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Actions 944 Amazon Timestream Developer Guide UpdateScheduledQuery Service: Amazon Timestream Query Update a scheduled query. Request Syntax { "ScheduledQueryArn": "string", "State": "string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. ScheduledQueryArn ARN of the scheduled query. Type: String Length Constraints: Minimum length of 1. Maximum length of 2048.
Required: Yes State State of the scheduled query. Type: String Valid Values: ENABLED | DISABLED Required: Yes Response Elements If the action is successful, the service sends back an HTTP 200 response with an empty HTTP body. Errors For information about the errors that are common to all actions, see Common Errors. Actions 945 Amazon Timestream AccessDeniedException Developer Guide You do not have the necessary permissions to access the account settings. HTTP Status Code: 400 InternalServerException An internal server error occurred while processing the request. HTTP Status Code: 400 InvalidEndpointException The requested endpoint is invalid. HTTP Status Code: 400 ResourceNotFoundException The requested resource could not be found. HTTP Status Code: 400 ThrottlingException The request was throttled due to excessive requests. HTTP Status Code: 400 ValidationException Invalid or malformed request. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS Command Line Interface • AWS SDK for .NET • AWS SDK for C++ • AWS SDK for Go v2 Actions 946 Amazon Timestream • AWS SDK for Java V2 • AWS SDK for JavaScript V3 • AWS SDK for Kotlin • AWS SDK for PHP V3 • AWS SDK for Python • AWS SDK for Ruby V3 Data Types The following data types are supported by Amazon Timestream Write: • BatchLoadProgressReport • BatchLoadTask • BatchLoadTaskDescription • CsvConfiguration • Database • DataModel • DataModelConfiguration • DataModelS3Configuration • DataSourceConfiguration • DataSourceS3Configuration • Dimension • DimensionMapping • Endpoint • MagneticStoreRejectedDataLocation • MagneticStoreWriteProperties • MeasureValue • MixedMeasureMapping • MultiMeasureAttributeMapping • MultiMeasureMappings • PartitionKey • Record Data Types Developer Guide 947 Developer Guide Amazon Timestream • RecordsIngested • RejectedRecord • ReportConfiguration • ReportS3Configuration • RetentionProperties • S3Configuration • Schema • Table • Tag The following data types are supported by Amazon Timestream Query: • AccountSettingsNotificationConfiguration • ColumnInfo • Datum • DimensionMapping • Endpoint • ErrorReportConfiguration • ErrorReportLocation • ExecutionStats • LastUpdate • MixedMeasureMapping • MultiMeasureAttributeMapping • MultiMeasureMappings • NotificationConfiguration • ParameterMapping • ProvisionedCapacityRequest • ProvisionedCapacityResponse • QueryComputeRequest • QueryComputeResponse • QueryInsights Data Types 948 Developer Guide Amazon Timestream • QueryInsightsResponse • QuerySpatialCoverage • QuerySpatialCoverageMax • QueryStatus • QueryTemporalRange • QueryTemporalRangeMax • Row • S3Configuration • S3ReportLocation • ScheduleConfiguration • ScheduledQuery • ScheduledQueryDescription • ScheduledQueryInsights • ScheduledQueryInsightsResponse • ScheduledQueryRunSummary • SelectColumn • SnsConfiguration • Tag • TargetConfiguration • TargetDestination • TimeSeriesDataPoint • TimestreamConfiguration • TimestreamDestination • Type Amazon Timestream Write The following data types are supported by Amazon Timestream Write: • BatchLoadProgressReport • BatchLoadTask • BatchLoadTaskDescription Data Types 949 Developer Guide Amazon Timestream • CsvConfiguration • Database • DataModel • DataModelConfiguration • DataModelS3Configuration • DataSourceConfiguration • DataSourceS3Configuration • Dimension • DimensionMapping • Endpoint • MagneticStoreRejectedDataLocation • MagneticStoreWriteProperties • MeasureValue • MixedMeasureMapping • 
MultiMeasureAttributeMapping • MultiMeasureMappings • PartitionKey • Record • RecordsIngested • RejectedRecord • ReportConfiguration • ReportS3Configuration • RetentionProperties • S3Configuration • Schema • Table • Tag Data Types 950 Amazon Timestream Developer Guide BatchLoadProgressReport Service: Amazon Timestream Write Details about the progress of a batch load task. Contents BytesMetered Type: Long Required: No FileFailures Type: Long Required: No ParseFailures Type: Long Required: No RecordIngestionFailures Type: Long Required: No RecordsIngested Type: Long Required: No RecordsProcessed Type: Long Required: No Data Types 951 Amazon Timestream See Also Developer Guide For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 952 Amazon Timestream Developer Guide BatchLoadTask Service: Amazon Timestream Write Details about a batch load task. Contents CreationTime The time when the Timestream batch load task was created. Type: Timestamp Required: No DatabaseName Database name for the database into which a batch load task loads data. Type: String Required: No LastUpdatedTime The time when the Timestream batch load task was last updated. Type: Timestamp Required: No ResumableUntil Type: Timestamp Required: No TableName Table name for the table into which a batch load task loads data. Type: String Required: No Data Types 953 Amazon Timestream TaskId The ID of the batch load task. Type: String Length Constraints: Minimum length of 3. Maximum length of 32. Developer Guide Pattern: [A-Z0-9]+ Required: No TaskStatus Status of the batch load task. Type:
String Valid Values: CREATED | IN_PROGRESS | FAILED | SUCCEEDED | PROGRESS_STOPPED | PENDING_RESUME Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 954 Amazon Timestream Developer Guide BatchLoadTaskDescription Service: Amazon Timestream Write Details about a batch load task. Contents CreationTime The time when the Timestream batch load task was created. Type: Timestamp Required: No DataModelConfiguration Data model configuration for a batch load task. This contains details about where a data model for a batch load task is stored. Type: DataModelConfiguration object Required: No DataSourceConfiguration Configuration details about the data source for a batch load task. Type: DataSourceConfiguration object Required: No ErrorMessage Type: String Length Constraints: Minimum length of 1. Maximum length of 2048. Required: No LastUpdatedTime The time when the Timestream batch load task was last updated. Type: Timestamp Data Types 955 Developer Guide Amazon Timestream Required: No ProgressReport Type: BatchLoadProgressReport object Required: No RecordVersion Type: Long Required: No ReportConfiguration Report configuration for a batch load task. This contains details about where error reports are stored. Type: ReportConfiguration object Required: No ResumableUntil Type: Timestamp Required: No TargetDatabaseName Type: String Required: No TargetTableName Type: String Required: No TaskId The ID of the batch load task. Data Types 956 Amazon Timestream Type: String Developer Guide Length Constraints: Minimum length of 3. Maximum length of 32. Pattern: [A-Z0-9]+ Required: No TaskStatus Status of the batch load task. Type: String Valid Values: CREATED | IN_PROGRESS | FAILED | SUCCEEDED | PROGRESS_STOPPED | PENDING_RESUME Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 957 Amazon Timestream Developer Guide CsvConfiguration Service: Amazon Timestream Write A delimited data format where the column separator can be a comma and the record separator is a newline character. Contents ColumnSeparator Column separator can be one of comma (','), pipe ('|), semicolon (';'), tab('/t'), or blank space (' '). Type: String Length Constraints: Fixed length of 1. Required: No EscapeChar Escape character can be one of Type: String Length Constraints: Fixed length of 1. Required: No NullValue Can be blank space (' '). Type: String Length Constraints: Minimum length of 1. Maximum length of 256. Required: No QuoteChar Can be single quote (') or double quote ("). Type: String Length Constraints: Fixed length of 1. Data Types 958 Amazon Timestream Required: No TrimWhiteSpace Specifies to trim leading and trailing white space. Developer Guide Type: Boolean Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 959 Amazon Timestream Developer Guide Database Service: Amazon Timestream Write A top-level container for a table. Databases and tables are the fundamental management concepts in Amazon Timestream. All tables in a database are encrypted with the same AWS KMS key. Contents Arn The Amazon Resource Name that uniquely identifies this database. 
Type: String Required: No CreationTime The time when the database was created, calculated from the Unix epoch time. Type: Timestamp Required: No DatabaseName The name of the Timestream database. Type: String Length Constraints: Minimum length of 3. Maximum length of 256. Required: No KmsKeyId The identifier of the AWS KMS key used to encrypt the data stored in the database. Type: String Length Constraints: Minimum length of 1. Maximum length of 2048. Required: No LastUpdatedTime The last time that this database was updated. Data Types 960 Amazon Timestream Type: Timestamp Required: No TableCount The total number of tables found within a Timestream database. Developer Guide Type: Long Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 961 Amazon Timestream Developer Guide DataModel Service: Amazon Timestream Write Data model for a batch
load task. Contents DimensionMappings Source to target mappings for dimensions. Type: Array of DimensionMapping objects Array Members: Minimum number of 1 item. Required: Yes MeasureNameColumn Type: String Length Constraints: Minimum length of 1. Maximum length of 256. Required: No MixedMeasureMappings Source to target mappings for measures. Type: Array of MixedMeasureMapping objects Array Members: Minimum number of 1 item. Required: No MultiMeasureMappings Source to target mappings for multi-measure records. Type: MultiMeasureMappings object Required: No TimeColumn Source column to be mapped to time. Data Types 962 Amazon Timestream Type: String Developer Guide Length Constraints: Minimum length of 1. Maximum length of 256. Required: No TimeUnit The granularity of the timestamp unit. It indicates if the time value is in seconds, milliseconds, nanoseconds, or other supported values. Default is MILLISECONDS. Type: String Valid Values: MILLISECONDS | SECONDS | MICROSECONDS | NANOSECONDS Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 963 Amazon Timestream Developer Guide DataModelConfiguration Service: Amazon Timestream Write Contents DataModel Type: DataModel object Required: No DataModelS3Configuration Type: DataModelS3Configuration object Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 964 Amazon Timestream Developer Guide DataModelS3Configuration Service: Amazon Timestream Write Contents BucketName Type: String Length Constraints: Minimum length of 3. Maximum length of 63. Pattern: [a-z0-9][\.\-a-z0-9]{1,61}[a-z0-9] Required: No ObjectKey Type: String Length Constraints: Minimum length of 1. Maximum length of 1024. Pattern: [a-zA-Z0-9|!\-_*'\(\)]([a-zA-Z0-9]|[!\-_*'\(\)\/.])+ Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 965 Amazon Timestream Developer Guide DataSourceConfiguration Service: Amazon Timestream Write Defines configuration details about the data source. Contents DataFormat This is currently CSV. Type: String Valid Values: CSV Required: Yes DataSourceS3Configuration Configuration of an S3 location for a file which contains data to load. Type: DataSourceS3Configuration object Required: Yes CsvConfiguration A delimited data format where the column separator can be a comma and the record separator is a newline character. Type: CsvConfiguration object Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 966 Amazon Timestream Developer Guide DataSourceS3Configuration Service: Amazon Timestream Write Contents BucketName The bucket name of the customer S3 bucket. Type: String Length Constraints: Minimum length of 3. Maximum length of 63. Pattern: [a-z0-9][\.\-a-z0-9]{1,61}[a-z0-9] Required: Yes ObjectKeyPrefix Type: String Length Constraints: Minimum length of 1. Maximum length of 1024. 
Pattern: [a-zA-Z0-9|!\-_*'\(\)]([a-zA-Z0-9]|[!\-_*'\(\)\/.])+ Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 967 Amazon Timestream Developer Guide Dimension Service: Amazon Timestream Write Represents the metadata attributes of the time series. For example, the name and Availability Zone of an EC2 instance or the name of the manufacturer of a wind turbine are dimensions. Contents Name Dimension represents the metadata attributes of the time series. For example, the name and Availability Zone of an EC2 instance or the name of the manufacturer of a wind turbine are dimensions. For constraints on dimension names, see Naming Constraints. Type: String Length Constraints: Minimum length of 1. Maximum length of 60. Required: Yes Value The value of the dimension. Type: String Required: Yes DimensionValueType The data type of the dimension for the time-series data point. Type: String Valid Values: VARCHAR Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: Data Types 968 Amazon Timestream • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Developer Guide Data Types 969 Amazon Timestream
Developer Guide DimensionMapping Service: Amazon Timestream Write Contents DestinationColumn Type: String Length Constraints: Minimum length of 1. Required: No SourceColumn Type: String Length Constraints: Minimum length of 1. Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 970 Amazon Timestream Developer Guide Endpoint Service: Amazon Timestream Write Represents an available endpoint against which to make API calls against, as well as the TTL for that endpoint. Contents Address An endpoint address. Type: String Required: Yes CachePeriodInMinutes The TTL for the endpoint, in minutes. Type: Long Required: Yes See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 971 Amazon Timestream Developer Guide MagneticStoreRejectedDataLocation Service: Amazon Timestream Write The location to write error reports for records rejected, asynchronously, during magnetic store writes. Contents S3Configuration Configuration of an S3 location to write error reports for records rejected, asynchronously, during magnetic store writes. Type: S3Configuration object Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 972 Amazon Timestream Developer Guide MagneticStoreWriteProperties Service: Amazon Timestream Write The set of properties on a table for configuring magnetic store writes. Contents EnableMagneticStoreWrites A flag to enable magnetic store writes. Type: Boolean Required: Yes MagneticStoreRejectedDataLocation The location to write error reports for records rejected asynchronously during magnetic store writes. Type: MagneticStoreRejectedDataLocation object Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 973 Amazon Timestream Developer Guide MeasureValue Service: Amazon Timestream Write Represents the data attribute of the time series. For example, the CPU utilization of an EC2 instance or the RPM of a wind turbine are measures. MeasureValue has both name and value. MeasureValue is only allowed for type MULTI. Using MULTI type, you can pass multiple data attributes associated with the same time series in a single record Contents Name The name of the MeasureValue. For constraints on MeasureValue names, see Naming Constraints in the Amazon Timestream Developer Guide. Type: String Length Constraints: Minimum length of 1. Required: Yes Type Contains the data type of the MeasureValue for the time-series data point. Type: String Valid Values: DOUBLE | BIGINT | VARCHAR | BOOLEAN | TIMESTAMP | MULTI Required: Yes Value The value for the MeasureValue. For information, see Data types. Type: String Length Constraints: Minimum length of 1. Maximum length of 2048. 
Required: Yes Data Types 974 Amazon Timestream See Also Developer Guide For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 975 Amazon Timestream Developer Guide MixedMeasureMapping Service: Amazon Timestream Write Contents MeasureValueType Type: String Valid Values: DOUBLE | BIGINT | VARCHAR | BOOLEAN | TIMESTAMP | MULTI Required: Yes MeasureName Type: String Length Constraints: Minimum length of 1. Required: No MultiMeasureAttributeMappings Type: Array of MultiMeasureAttributeMapping objects Array Members: Minimum number of 1 item. Required: No SourceColumn Type: String Length Constraints: Minimum length of 1. Required: No TargetMeasureName Type: String Length Constraints: Minimum length of 1. Data Types 976 Amazon Timestream Required: No See Also Developer Guide For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 977 Amazon Timestream Developer Guide MultiMeasureAttributeMapping Service: Amazon Timestream Write Contents SourceColumn Type: String Length Constraints: Minimum length of 1. Required: Yes MeasureValueType Type: String Valid Values: DOUBLE | BIGINT | BOOLEAN | VARCHAR | TIMESTAMP Required: No TargetMultiMeasureAttributeName Type: String Length Constraints: Minimum
length of 1. Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 978 Amazon Timestream Developer Guide MultiMeasureMappings Service: Amazon Timestream Write Contents MultiMeasureAttributeMappings Type: Array of MultiMeasureAttributeMapping objects Array Members: Minimum number of 1 item. Required: Yes TargetMultiMeasureName Type: String Length Constraints: Minimum length of 1. Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 979 Amazon Timestream Developer Guide PartitionKey Service: Amazon Timestream Write An attribute used in partitioning data in a table. A dimension key partitions data using the values of the dimension specified by the dimension-name as partition key, while a measure key partitions data using measure names (values of the 'measure_name' column). Contents Type The type of the partition key. Options are DIMENSION (dimension key) and MEASURE (measure key). Type: String Valid Values: DIMENSION | MEASURE Required: Yes EnforcementInRecord The level of enforcement for the specification of a dimension key in ingested records. Options are REQUIRED (dimension key must be specified) and OPTIONAL (dimension key does not have to be specified). Type: String Valid Values: REQUIRED | OPTIONAL Required: No Name The name of the attribute used for a dimension key. Type: String Length Constraints: Minimum length of 1. Required: No Data Types 980 Amazon Timestream See Also Developer Guide For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 981 Amazon Timestream Developer Guide Record Service: Amazon Timestream Write Represents a time-series data point being written into Timestream. Each record contains an array of dimensions. Dimensions represent the metadata attributes of a time-series data point, such as the instance name or Availability Zone of an EC2 instance. A record also contains the measure name, which is the name of the measure being collected (for example, the CPU utilization of an EC2 instance). Additionally, a record contains the measure value and the value type, which is the data type of the measure value. Also, the record contains the timestamp of when the measure was collected and the timestamp unit, which represents the granularity of the timestamp. Records have a Version field, which is a 64-bit long that you can use for updating data points. Writes of a duplicate record with the same dimension, timestamp, and measure name but different measure value will only succeed if the Version attribute of the record in the write request is higher than that of the existing record. Timestream defaults to a Version of 1 for records without the Version field. Contents Dimensions Contains the list of dimensions for time-series data points. Type: Array of Dimension objects Array Members: Maximum number of 128 items. Required: No MeasureName Measure represents the data attribute of the time series. For example, the CPU utilization of an EC2 instance or the RPM of a wind turbine are measures. Type: String Length Constraints: Minimum length of 1. Maximum length of 256. Required: No MeasureValue Contains the measure value for the time-series data point. 
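For illustration, a Record carrying multi-measure values might be written with the AWS SDK for Python (Boto3) as in the following sketch; the database, table, dimension, and measure names are assumed values chosen for the example, not defaults of the service.

import time
import boto3

# Assumed database, table, dimension, and measure names, for illustration only.
write_client = boto3.client("timestream-write")

now_ms = str(int(time.time() * 1000))

record = {
    "Dimensions": [
        {"Name": "region", "Value": "us-east-1"},
        {"Name": "hostname", "Value": "host-1"},
    ],
    "MeasureName": "metrics",
    "MeasureValueType": "MULTI",
    "MeasureValues": [
        {"Name": "cpu_utilization", "Value": "13.5", "Type": "DOUBLE"},
        {"Name": "memory_utilization", "Value": "40", "Type": "DOUBLE"},
    ],
    "Time": now_ms,
    "TimeUnit": "MILLISECONDS",
    # Version is optional; a later write with a higher Version updates this point.
    "Version": 1,
}

write_client.write_records(
    DatabaseName="sampleDB",
    TableName="DevOps",
    Records=[record],
)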
Data Types 982 Amazon Timestream Type: String Developer Guide Length Constraints: Minimum length of 1. Maximum length of 2048. Required: No MeasureValues Contains the list of MeasureValue for time-series data points. This is only allowed for type MULTI. For scalar values, use MeasureValue attribute of the record directly. Type: Array of MeasureValue objects Required: No MeasureValueType Contains the data type of the measure value for the time-series data point. Default type is DOUBLE. For more information, see Data types. Type: String Valid Values: DOUBLE | BIGINT | VARCHAR | BOOLEAN | TIMESTAMP | MULTI Required: No Time Contains the time at which the measure value for the data point was collected. The time value plus the unit provides the time elapsed since the epoch. For example, if the time value is 12345 and the
unit is ms, then 12345 ms have elapsed since the epoch. Type: String Length Constraints: Minimum length of 1. Maximum length of 256. Required: No TimeUnit The granularity of the timestamp unit. It indicates if the time value is in seconds, milliseconds, nanoseconds, or other supported values. Default is MILLISECONDS. Type: String Data Types 983 Amazon Timestream Developer Guide Valid Values: MILLISECONDS | SECONDS | MICROSECONDS | NANOSECONDS Required: No Version 64-bit attribute used for record updates. Write requests for duplicate data with a higher version number will update the existing measure value and version. In cases where the measure value is the same, Version will still be updated. Default value is 1. Note Version must be 1 or greater, or you will receive a ValidationException error. Type: Long Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 984 Amazon Timestream Developer Guide RecordsIngested Service: Amazon Timestream Write Information on the records ingested by this request. Contents MagneticStore Count of records ingested into the magnetic store. Type: Integer Required: No MemoryStore Count of records ingested into the memory store. Type: Integer Required: No Total Total count of successfully ingested records. Type: Integer Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 985 Amazon Timestream Developer Guide RejectedRecord Service: Amazon Timestream Write Represents records that were not successfully inserted into Timestream due to data validation issues that must be resolved before reinserting time-series data into the system. Contents ExistingVersion The existing version of the record. This value is populated in scenarios where an identical record exists with a higher version than the version in the write request. Type: Long Required: No Reason The reason why a record was not successfully inserted into Timestream. Possible causes of failure include: • Records with duplicate data where there are multiple records with the same dimensions, timestamps, and measure names but: • Measure values are different • Version is not present in the request, or the value of version in the new record is equal to or lower than the existing value If Timestream rejects data for this case, the ExistingVersion field in the RejectedRecords response will indicate the current record’s version. To force an update, you can resend the request with a version for the record set to a value greater than the ExistingVersion. • Records with timestamps that lie outside the retention duration of the memory store. Note When the retention window is updated, you will receive a RejectedRecords exception if you immediately try to ingest data within the new window. To avoid a RejectedRecords exception, wait until the duration of the new window to ingest new data. For further information, see Best Practices for Configuring Timestream and the explanation of how storage works in Timestream. Data Types 986 Amazon Timestream Developer Guide • Records with dimensions or measures that exceed the Timestream defined limits. For more information, see Access Management in the Timestream Developer Guide. Type: String Required: No RecordIndex The index of the record in the input request for WriteRecords. Indexes begin with 0. 
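A minimal AWS SDK for Python (Boto3) sketch of handling rejected records is shown below; the database, table, and sample record are assumptions for illustration. It inspects RecordIndex and Reason for each rejection and, when ExistingVersion is reported, bumps the Version past it before a retry.

import time
import boto3

# Assumed names and record content, for illustration only.
write_client = boto3.client("timestream-write")

records = [
    {
        "Dimensions": [{"Name": "hostname", "Value": "host-1"}],
        "MeasureName": "cpu_utilization",
        "MeasureValue": "13.5",
        "MeasureValueType": "DOUBLE",
        "Time": str(int(time.time() * 1000)),
        "Version": 1,
    }
]

try:
    write_client.write_records(
        DatabaseName="sampleDB", TableName="DevOps", Records=records
    )
except write_client.exceptions.RejectedRecordsException as err:
    for rejected in err.response["RejectedRecords"]:
        print(f"Record {rejected['RecordIndex']} rejected: {rejected['Reason']}")
        # ExistingVersion, when present, is the version already stored;
        # resending with a higher Version forces the update.
        if "ExistingVersion" in rejected:
            records[rejected["RecordIndex"]]["Version"] = rejected["ExistingVersion"] + 1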
Type: Integer Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 987 Amazon Timestream Developer Guide ReportConfiguration Service: Amazon Timestream Write Report configuration for a batch load task. This contains details about where error reports are stored. Contents ReportS3Configuration Configuration of an S3 location to write error reports and events for a batch load. Type: ReportS3Configuration object Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 988 Amazon Timestream Developer Guide ReportS3Configuration Service:
Amazon Timestream Write Contents BucketName Type: String Length Constraints: Minimum length of 3. Maximum length of 63. Pattern: [a-z0-9][\.\-a-z0-9]{1,61}[a-z0-9] Required: Yes EncryptionOption Type: String Valid Values: SSE_S3 | SSE_KMS Required: No KmsKeyId Type: String Length Constraints: Minimum length of 1. Maximum length of 2048. Required: No ObjectKeyPrefix Type: String Length Constraints: Minimum length of 1. Maximum length of 928. Pattern: [a-zA-Z0-9|!\-_*'\(\)]([a-zA-Z0-9]|[!\-_*'\(\)\/.])+ Required: No Data Types 989 Amazon Timestream See Also Developer Guide For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 990 Amazon Timestream Developer Guide RetentionProperties Service: Amazon Timestream Write Retention properties contain the duration for which your time-series data must be stored in the magnetic store and the memory store. Contents MagneticStoreRetentionPeriodInDays The duration for which data must be stored in the magnetic store. Type: Long Valid Range: Minimum value of 1. Maximum value of 73000. Required: Yes MemoryStoreRetentionPeriodInHours The duration for which data must be stored in the memory store. Type: Long Valid Range: Minimum value of 1. Maximum value of 8766. Required: Yes See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 991 Amazon Timestream Developer Guide S3Configuration Service: Amazon Timestream Write The configuration that specifies an S3 location. Contents BucketName The bucket name of the customer S3 bucket. Type: String Length Constraints: Minimum length of 3. Maximum length of 63. Pattern: [a-z0-9][\.\-a-z0-9]{1,61}[a-z0-9] Required: No EncryptionOption The encryption option for the customer S3 location. Options are S3 server-side encryption with an S3 managed key or AWS managed key. Type: String Valid Values: SSE_S3 | SSE_KMS Required: No KmsKeyId The AWS KMS key ID for the customer S3 location when encrypting with an AWS managed key. Type: String Length Constraints: Minimum length of 1. Maximum length of 2048. Required: No ObjectKeyPrefix The object key preview for the customer S3 location. Type: String Data Types 992 Amazon Timestream Developer Guide Length Constraints: Minimum length of 1. Maximum length of 928. Pattern: [a-zA-Z0-9|!\-_*'\(\)]([a-zA-Z0-9]|[!\-_*'\(\)\/.])+ Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 993 Amazon Timestream Developer Guide Schema Service: Amazon Timestream Write A Schema specifies the expected data model of the table. Contents CompositePartitionKey A non-empty list of partition keys defining the attributes used to partition the table data. The order of the list determines the partition hierarchy. The name and type of each partition key as well as the partition key order cannot be changed after the table is created. However, the enforcement level of each partition key can be changed. Type: Array of PartitionKey objects Array Members: Minimum number of 1 item. 
Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 994 Amazon Timestream Developer Guide Table Service: Amazon Timestream Write Represents a database table in Timestream. Tables contain one or more related time series. You can modify the retention duration of the memory store and the magnetic store for a table. Contents Arn The Amazon Resource Name that uniquely identifies this table. Type: String Required: No CreationTime The time when the Timestream table was created. Type: Timestamp Required: No DatabaseName The name of the Timestream database that contains this table. Type: String Length Constraints: Minimum length of 3. Maximum length of 256. Required: No LastUpdatedTime The time when the Timestream table was last updated. Type: Timestamp Required: No MagneticStoreWriteProperties Contains properties to set on the table when enabling magnetic store writes. Type: MagneticStoreWriteProperties object Data Types 995 Amazon Timestream Required: No RetentionProperties The retention duration for the memory store and magnetic store. Type: RetentionProperties object Developer Guide Required: No Schema The schema of the table. Type: Schema object Required: No TableName The name of the Timestream table. Type: String Length Constraints: Minimum length of 3. Maximum length of 256. Required: No TableStatus The current state of the table: • DELETING - The table is being deleted. • ACTIVE - The table is ready for use. Type: String Valid Values: ACTIVE | DELETING | RESTORING Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: Data Types 996 Amazon Timestream • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Developer
Tag Service: Amazon Timestream Write A tag is a label that you assign to a Timestream database and/or table. Each tag consists of a key and an optional value, both of which you define. With tags, you can categorize databases and/or tables, for example, by purpose, owner, or environment. Contents Key The key of the tag. Tag keys are case sensitive. Type: String Length Constraints: Minimum length of 1. Maximum length of 128. Required: Yes Value The value of the tag. Tag values are case sensitive and can be null. Type: String Length Constraints: Minimum length of 0. Maximum length of 256. Required: Yes See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3
Amazon Timestream Query The following data types are supported by Amazon Timestream Query: • AccountSettingsNotificationConfiguration • ColumnInfo • Datum • DimensionMapping • Endpoint • ErrorReportConfiguration • ErrorReportLocation • ExecutionStats • LastUpdate • MixedMeasureMapping • MultiMeasureAttributeMapping • MultiMeasureMappings • NotificationConfiguration • ParameterMapping • ProvisionedCapacityRequest • ProvisionedCapacityResponse • QueryComputeRequest • QueryComputeResponse • QueryInsights • QueryInsightsResponse • QuerySpatialCoverage • QuerySpatialCoverageMax • QueryStatus • QueryTemporalRange • QueryTemporalRangeMax • Row • S3Configuration • S3ReportLocation • ScheduleConfiguration • ScheduledQuery • ScheduledQueryDescription • ScheduledQueryInsights • ScheduledQueryInsightsResponse • ScheduledQueryRunSummary • SelectColumn • SnsConfiguration • Tag • TargetConfiguration • TargetDestination • TimeSeriesDataPoint • TimestreamConfiguration • TimestreamDestination • Type
AccountSettingsNotificationConfiguration Service: Amazon Timestream Query Configuration settings for notifications related to account settings. Contents RoleArn An Amazon Resource Name (ARN) that grants Timestream permission to publish notifications. This field is only visible if an SNS topic is provided when updating the account settings. Type: String Length Constraints: Minimum length of 1. Maximum length of 2048. Required: Yes SnsConfiguration Details on SNS that are required to send the notification. Type: SnsConfiguration object Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3
ColumnInfo Service: Amazon Timestream Query Contains the metadata for query results, such as the column names, data types, and other attributes. Contents Type The data type of the result set column. The data type can be a scalar or complex.
Scalar data types are integers, strings, doubles, Booleans, and others. Complex data types are types such as arrays, rows, and others. Type: Type object Required: Yes Name The name of the result set column. The name of the result set is available for columns of all data types except for arrays. Type: String Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 1002 Amazon Timestream Developer Guide Datum Service: Amazon Timestream Query Datum represents a single data point in a query result. Contents ArrayValue Indicates if the data point is an array. Type: Array of Datum objects Required: No NullValue Indicates if the data point is null. Type: Boolean Required: No RowValue Indicates if the data point is a row. Type: Row object Required: No ScalarValue Indicates if the data point is a scalar value such as integer, string, double, or Boolean. Type: String Required: No TimeSeriesValue Indicates if the data point is a timeseries data type. Type: Array of TimeSeriesDataPoint objects Required: No Data Types 1003 Amazon Timestream See Also Developer Guide For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 1004 Amazon Timestream Developer Guide DimensionMapping Service: Amazon Timestream Query This type is used to map column(s) from the query result to a dimension in the destination table. Contents DimensionValueType Type for the dimension. Type: String Valid Values:
VARCHAR Required: Yes Name Column name from the query result. Type: String Required: Yes See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3
Endpoint Service: Amazon Timestream Query Represents an available endpoint against which to make API calls, as well as the TTL for that endpoint. Contents Address An endpoint address. Type: String Required: Yes CachePeriodInMinutes The TTL for the endpoint, in minutes. Type: Long Required: Yes See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3
ErrorReportConfiguration Service: Amazon Timestream Query Configuration required for error reporting. Contents S3Configuration The S3 configuration for the error reports. Type: S3Configuration object Required: Yes See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3
ErrorReportLocation Service: Amazon Timestream Query This contains the location of the error report for a single scheduled query call. Contents S3ReportLocation The S3 location where error reports are written. Type: S3ReportLocation object Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3
ExecutionStats Service: Amazon Timestream Query Statistics for a single scheduled query run. Contents BytesMetered Bytes metered for a single scheduled query run. Type: Long Required: No CumulativeBytesScanned Bytes scanned for a single scheduled query run. Type: Long Required: No DataWrites Data writes metered for records ingested in a single scheduled query run. Type: Long Required: No ExecutionTimeInMillis Total time, measured in milliseconds, that was needed for the scheduled query run to complete. Type: Long Required: No QueryResultRows Number of rows present in the output from running a query before ingestion to the destination data source. Type: Long Required: No RecordsIngested The number of records ingested for a single scheduled query run.
Developer Guide Type: Long Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 1010 Amazon Timestream Developer Guide LastUpdate Service: Amazon Timestream Query Configuration object that contains the most recent account settings update, visible only if settings have been updated previously. Contents Status The status of the last update. Can be either PENDING, FAILED, or SUCCEEDED. Type: String Valid Values: PENDING | FAILED | SUCCEEDED Required: No StatusMessage Error message describing the last account settings update status, visible only if an error occurred. Type: String Required: No TargetQueryTCU The number of TimeStream Compute Units (TCUs) requested in the last account settings update. Type: Integer Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 Data Types 1011 Amazon Timestream • AWS SDK for Ruby V3 Developer Guide Data Types 1012 Amazon Timestream Developer Guide MixedMeasureMapping Service: Amazon Timestream Query MixedMeasureMappings are mappings that can be used to ingest data into a mixture of narrow and multi measures in the derived table. Contents MeasureValueType Type of the value that is to be read from sourceColumn. If the mapping is for MULTI, use MeasureValueType.MULTI. Type: String Valid Values: BIGINT | BOOLEAN | DOUBLE | VARCHAR | MULTI Required: Yes MeasureName Refers to the value of measure_name in a result row. This field is required if MeasureNameColumn is provided. Type: String Required: No MultiMeasureAttributeMappings Required when measureValueType is MULTI. Attribute mappings for MULTI value measures. Type: Array of MultiMeasureAttributeMapping objects Array Members: Minimum number of 1 item.
Required: No SourceColumn This field refers to the source column from which the measure value is to be read for result materialization. Type: String Required: No TargetMeasureName Target measure name to be used. If not provided, the target measure name defaults to the measure name, if provided, or to sourceColumn otherwise. Type: String Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3
MultiMeasureAttributeMapping Service: Amazon Timestream Query Attribute mapping for MULTI value measures. Contents MeasureValueType Type of the attribute to be read from the source column. Type: String Valid Values: BIGINT | BOOLEAN | DOUBLE | VARCHAR | TIMESTAMP Required: Yes SourceColumn Source column from which the attribute value is to be read. Type: String Required: Yes TargetMultiMeasureAttributeName Custom name to be used for the attribute name in the derived table. If not provided, the source column name is used. Type: String Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3
MultiMeasureMappings Service: Amazon Timestream Query Only one of MixedMeasureMappings or MultiMeasureMappings is to be provided. MultiMeasureMappings can be used to ingest data as multi measures in the derived table. Contents MultiMeasureAttributeMappings Required. Attribute mappings to be used for mapping query results to ingest data for multi-measure attributes. Type: Array of MultiMeasureAttributeMapping objects Array Members: Minimum number of 1 item. Required: Yes TargetMultiMeasureName The name of the target multi-measure in the derived table. This input is required when measureNameColumn is not provided. If MeasureNameColumn is provided, the value from that column will be used as the multi-measure name. Type: String Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3
NotificationConfiguration Service: Amazon Timestream Query Notification configuration for a scheduled query. A notification is sent by Timestream when a scheduled query is created, its state is updated, or when it is deleted. Contents SnsConfiguration Details about the Amazon Simple Notification Service (SNS) configuration. This field is visible only when an SNS topic is provided when updating the account settings.
Type: SnsConfiguration object Required: Yes See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 1017 Amazon Timestream Developer Guide ParameterMapping Service: Amazon Timestream Query Mapping for named parameters. Contents Name Parameter name. Type: String Required: Yes Type Contains the data type of a column in a query result set. The data type can be scalar or complex. The supported scalar data types are integers, Boolean, string, double, timestamp, date, time, and intervals. The supported complex data types are arrays, rows, and timeseries. Type: Type object Required: Yes See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 1018 Amazon Timestream Developer Guide ProvisionedCapacityRequest Service: Amazon Timestream Query A request to update the provisioned capacity settings for querying data. Contents TargetQueryTCU The target compute capacity for querying data, specified in Timestream Compute Units (TCUs). Type: Integer Required: Yes NotificationConfiguration Configuration settings for notifications related to the provisioned capacity update. Type: AccountSettingsNotificationConfiguration object Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 1019 Amazon Timestream Developer Guide ProvisionedCapacityResponse Service: Amazon Timestream Query The response to a request to update the provisioned capacity settings for querying data.
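The capacity request and response shapes above are used by the Query service's account settings operations (DescribeAccountSettings and UpdateAccountSettings). The following is a minimal, illustrative sketch using the AWS SDK for Python (boto3). It assumes that provisioned query compute is available in the target Region (the QueryCompute types below note Asia Pacific (Mumbai) only) and that your SDK version exposes these settings; the TCU target, role ARN, and SNS topic ARN are placeholders.

import boto3

query_client = boto3.client("timestream-query", region_name="ap-south-1")  # placeholder Region

# ProvisionedCapacityRequest requires TargetQueryTCU; the notification settings mirror
# the AccountSettingsNotificationConfiguration type described earlier in this section.
query_client.update_account_settings(
    QueryCompute={
        "ComputeMode": "PROVISIONED",
        "ProvisionedCapacity": {
            "TargetQueryTCU": 32,  # placeholder target capacity
            "NotificationConfiguration": {
                "RoleArn": "arn:aws:iam::111122223333:role/TimestreamSnsRole",      # placeholder
                "SnsConfiguration": {
                    "TopicArn": "arn:aws:sns:ap-south-1:111122223333:tcu-updates"   # placeholder
                },
            },
        },
    }
)

# DescribeAccountSettings returns the QueryComputeResponse / ProvisionedCapacityResponse
# shapes, including ActiveQueryTCU and the LastUpdate status.
settings = query_client.describe_account_settings()
print(settings.get("QueryCompute", {}).get("ProvisionedCapacity", {}).get("ActiveQueryTCU"))

The fields of the ProvisionedCapacityResponse returned by these calls are listed next.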
Contents ActiveQueryTCU The number of Timestream Compute Units (TCUs) provisioned in the account. This field is only visible when the compute mode is PROVISIONED. Type: Integer Required: No LastUpdate Information about the last update to the provisioned capacity settings. Type: LastUpdate object Required: No NotificationConfiguration An object that contains settings for notifications that are sent whenever the provisioned capacity settings are modified. This field is only visible when the compute mode is PROVISIONED. Type: AccountSettingsNotificationConfiguration object Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3
QueryComputeRequest Service: Amazon Timestream Query A request to retrieve or update the compute capacity settings for querying data. QueryCompute is available only in the Asia Pacific (Mumbai) region. Contents ComputeMode The mode in which Timestream Compute Units (TCUs) are allocated and utilized within an account. Note that in the Asia Pacific (Mumbai) region, the API operation only recognizes the value PROVISIONED. Type: String Valid Values: ON_DEMAND | PROVISIONED Required: No ProvisionedCapacity Configuration object that contains settings for provisioned Timestream Compute Units (TCUs) in your account. Type: ProvisionedCapacityRequest object Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3
QueryComputeResponse Service: Amazon Timestream Query The response to a request to retrieve or update the compute capacity settings for querying data. QueryCompute is available only in the Asia Pacific (Mumbai) region. Contents ComputeMode The mode in which Timestream Compute Units (TCUs) are allocated and utilized within an account. Note that in the Asia Pacific (Mumbai) region, the API operation only recognizes the value PROVISIONED. Type: String Valid Values: ON_DEMAND | PROVISIONED Required: No ProvisionedCapacity Configuration object that contains settings for provisioned Timestream Compute Units (TCUs) in your account.
Type: ProvisionedCapacityResponse object Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following: • AWS SDK for C++ • AWS SDK for Java V2 • AWS SDK for Ruby V3 Data Types 1023 Amazon Timestream Developer Guide QueryInsights Service: Amazon Timestream Query QueryInsights is a performance tuning feature that helps you optimize your queries, reducing costs and improving performance. With QueryInsights, you can assess the pruning efficiency of your queries and identify areas for improvement to enhance query performance. With QueryInsights, you can also analyze the effectiveness of your queries in terms of temporal and spatial pruning, and identify opportunities to improve performance. Specifically, you can evaluate how well your queries use time-based and partition key-based indexing strategies to optimize data retrieval. To optimize query performance, it's essential that you fine-tune both the temporal and spatial parameters that govern query execution. The key metrics provided by QueryInsights are QuerySpatialCoverage and QueryTemporalRange. QuerySpatialCoverage indicates how much of the spatial axis the query scans, with lower values being more efficient. QueryTemporalRange shows the time range scanned, with narrower ranges being more performant. Benefits of QueryInsights The following are the key benefits of using QueryInsights: • Identifying inefficient queries – QueryInsights provides information on the time-based and attribute-based pruning of the tables accessed by the query. This information helps you identify the tables that are sub-optimally accessed. • Optimizing your data model and partitioning – You can use the QueryInsights information to access and fine-tune your data model and partitioning strategy. • Tuning queries – QueryInsights highlights opportunities to use indexes more effectively. Note The maximum number of Query API requests you're allowed to make with QueryInsights enabled is 1 query per second (QPS). If you exceed this query rate, it might result in throttling. Data Types 1024 Amazon Timestream Contents Mode Developer