id | source | chunk_id | text |
---|---|---|---|
timestream-038 | timestream.pdf | 38 |
writeClient.CreateTableAsync(createTableRequest); Console.WriteLine($"Table {Constants.TABLE_NAME} created"); } catch (ConflictException) { Console.WriteLine("Table already exists."); } catch (Exception e) { Console.WriteLine("Create table failed:" + e.ToString()); } } Describe table You can use the following code snippets to get information about the attributes of your table. Describe table 115 Amazon Timestream Note Developer Guide These code snippets are based on full sample applications on GitHub. For more information about how to get started with the sample applications, see Sample application. Java public void describeTable() { System.out.println("Describing table"); final DescribeTableRequest describeTableRequest = new DescribeTableRequest(); describeTableRequest.setDatabaseName(DATABASE_NAME); describeTableRequest.setTableName(TABLE_NAME); try { DescribeTableResult result = amazonTimestreamWrite.describeTable(describeTableRequest); String tableId = result.getTable().getArn(); System.out.println("Table " + TABLE_NAME + " has id " + tableId); } catch (final Exception e) { System.out.println("Table " + TABLE_NAME + " doesn't exist = " + e); throw e; } } Java v2 public void describeTable() { System.out.println("Describing table"); final DescribeTableRequest describeTableRequest = DescribeTableRequest.builder() .databaseName(DATABASE_NAME).tableName(TABLE_NAME).build(); try { DescribeTableResponse response = timestreamWriteClient.describeTable(describeTableRequest); String tableId = response.table().arn(); System.out.println("Table " + TABLE_NAME + " has id " + tableId); } catch (final Exception e) { System.out.println("Table " + TABLE_NAME + " doesn't exist = " + e); throw e; } } Describe table 116 Amazon Timestream Go Developer Guide // Describe table. describeTableInput := ×treamwrite.DescribeTableInput{ DatabaseName: aws.String(*databaseName), TableName: aws.String(*tableName), } describeTableOutput, err := writeSvc.DescribeTable(describeTableInput) if err != nil { fmt.Println("Error:") fmt.Println(err) } else { fmt.Println("Describe table is successful, below is the output:") fmt.Println(describeTableOutput) } Python def describe_table(self): print("Describing table") try: result = self.client.describe_table(DatabaseName=Constant.DATABASE_NAME, TableName=Constant.TABLE_NAME) print("Table [%s] has id [%s]" % (Constant.TABLE_NAME, result['Table'] ['Arn'])) except self.client.exceptions.ResourceNotFoundException: print("Table doesn't exist") except Exception as err: print("Describe table failed:", err) Node.js The following snippet uses AWS SDK for JavaScript v3. For more information about how to install the client and usage, see Timestream Write Client - AWS SDK for JavaScript v3. Also see Class DescribeTableCommand and DescribeTable. import { TimestreamWriteClient, DescribeTableCommand } from "@aws-sdk/client- timestream-write"; const writeClient = new TimestreamWriteClient({ region: "us-east-1" }); Describe table 117 Amazon Timestream Developer Guide const params = { DatabaseName: "testDbFromNode", TableName: "testTableFromNode" }; const command = new DescribeTableCommand(params); try { const data = await writeClient.send(command); console.log(`Table ${data.Table.TableName} has id ${data.Table.Arn}`); } catch (error) { if (error.code === 'ResourceNotFoundException') { console.log("Table or Database doesn't exist."); } else { console.log("Describe table failed.", error); throw error; } } The following snippet uses the AWS SDK for JavaScript V2 style. 
It is based on the sample application at Node.js sample Amazon Timestream for LiveAnalytics application on GitHub. async function describeTable() { console.log("Describing Table"); const params = { DatabaseName: constants.DATABASE_NAME, TableName: constants.TABLE_NAME }; const promise = writeClient.describeTable(params).promise(); await promise.then( (data) => { console.log(`Table ${data.Table.TableName} has id ${data.Table.Arn}`); }, (err) => { if (err.code === 'ResourceNotFoundException') { console.log("Table or Database doesn't exists."); } else { console.log("Describe table failed.", err); throw err; } } Describe table 118 Amazon Timestream ); } .NET Developer Guide public async Task DescribeTable() { Console.WriteLine("Describing Table"); try { var describeTableRequest = new DescribeTableRequest { DatabaseName = Constants.DATABASE_NAME, TableName = Constants.TABLE_NAME }; DescribeTableResponse response = await writeClient.DescribeTableAsync(describeTableRequest); Console.WriteLine($"Table {Constants.TABLE_NAME} has id: {response.Table.Arn}"); } catch (ResourceNotFoundException) { Console.WriteLine("Table does not exist."); } catch (Exception e) { Console.WriteLine("Describe table failed:" + e.ToString()); } } Update table You can use the following code snippets to update a table. Note These code snippets are based on full sample applications on GitHub. For more information about how to get started with the sample applications, see Sample application. Update table 119 Amazon Timestream Java Developer Guide public void updateTable() { System.out.println("Updating table"); UpdateTableRequest updateTableRequest = new UpdateTableRequest(); updateTableRequest.setDatabaseName(DATABASE_NAME); updateTableRequest.setTableName(TABLE_NAME); final RetentionProperties retentionProperties = new RetentionProperties() .withMemoryStoreRetentionPeriodInHours(HT_TTL_HOURS) .withMagneticStoreRetentionPeriodInDays(CT_TTL_DAYS); updateTableRequest.setRetentionProperties(retentionProperties); amazonTimestreamWrite.updateTable(updateTableRequest); System.out.println("Table updated"); } Java v2 public void updateTable() { System.out.println("Updating table"); final RetentionProperties retentionProperties = RetentionProperties.builder() .memoryStoreRetentionPeriodInHours(HT_TTL_HOURS) .magneticStoreRetentionPeriodInDays(CT_TTL_DAYS).build(); final UpdateTableRequest updateTableRequest = UpdateTableRequest.builder() .databaseName(DATABASE_NAME).tableName(TABLE_NAME).retentionProperties(retentionProperties).build(); timestreamWriteClient.updateTable(updateTableRequest); System.out.println("Table updated"); } Go // Update table. 
magneticStoreRetentionPeriodInDays := int64(7 * 365) memoryStoreRetentionPeriodInHours := int64(24) updateTableInput := ×treamwrite.UpdateTableInput{ DatabaseName: aws.String(*databaseName), Update table 120 Amazon Timestream Developer Guide TableName: aws.String(*tableName), RetentionProperties: ×treamwrite.RetentionProperties{ MagneticStoreRetentionPeriodInDays: &magneticStoreRetentionPeriodInDays, MemoryStoreRetentionPeriodInHours: &memoryStoreRetentionPeriodInHours, }, } updateTableOutput, err := writeSvc.UpdateTable(updateTableInput) if err != nil { fmt.Println("Error:") fmt.Println(err) } else { fmt.Println("Update table is successful, below is the output:") fmt.Println(updateTableOutput) } Python def update_table(self): print("Updating table") retention_properties = { 'MemoryStoreRetentionPeriodInHours': Constant.HT_TTL_HOURS, 'MagneticStoreRetentionPeriodInDays': Constant.CT_TTL_DAYS } try: self.client.update_table(DatabaseName=Constant.DATABASE_NAME, TableName=Constant.TABLE_NAME, RetentionProperties=retention_properties) print("Table updated.") except Exception as err: print("Update table failed:", err) Node.js The following snippet uses AWS SDK for JavaScript v3. For more information about how to install the client and usage, see Timestream Write Client - AWS SDK for JavaScript v3. Also see Class UpdateTableCommand and UpdateTable. import { TimestreamWriteClient, UpdateTableCommand } from "@aws-sdk/client- timestream-write"; const writeClient = new TimestreamWriteClient({ region: "us-east-1" }); Update table 121 Amazon Timestream Developer Guide const params = { DatabaseName: "testDbFromNode", TableName: "testTableFromNode", RetentionProperties: { MemoryStoreRetentionPeriodInHours: 24, MagneticStoreRetentionPeriodInDays: 180 } }; const command = new UpdateTableCommand(params); try { const data = await writeClient.send(command); console.log("Table updated") } catch (error) { console.log("Error updating
|
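The Python snippets in these chunks call methods on an already-constructed `self.client` without showing how that client is built. A minimal sketch of how the boto3 write client might be created, assuming boto3 is installed; the configuration values and the database/table names below are placeholders, not values from the guide:

```python
import boto3
from botocore.config import Config

DATABASE_NAME = "devops"      # placeholder; the guide's Constant.DATABASE_NAME is defined elsewhere
TABLE_NAME = "host_metrics"   # placeholder; the guide's Constant.TABLE_NAME is defined elsewhere

# Write client equivalent to the self.client used by the Python snippets
write_client = boto3.client(
    "timestream-write",
    config=Config(read_timeout=20, retries={"max_attempts": 10}),
)

result = write_client.describe_table(DatabaseName=DATABASE_NAME, TableName=TABLE_NAME)
print("Table %s has id %s" % (TABLE_NAME, result["Table"]["Arn"]))
```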
timestream-039 | timestream.pdf | 39 |
print("Update table failed:", err) Node.js The following snippet uses AWS SDK for JavaScript v3. For more information about how to install the client and usage, see Timestream Write Client - AWS SDK for JavaScript v3. Also see Class UpdateTableCommand and UpdateTable. import { TimestreamWriteClient, UpdateTableCommand } from "@aws-sdk/client- timestream-write"; const writeClient = new TimestreamWriteClient({ region: "us-east-1" }); Update table 121 Amazon Timestream Developer Guide const params = { DatabaseName: "testDbFromNode", TableName: "testTableFromNode", RetentionProperties: { MemoryStoreRetentionPeriodInHours: 24, MagneticStoreRetentionPeriodInDays: 180 } }; const command = new UpdateTableCommand(params); try { const data = await writeClient.send(command); console.log("Table updated") } catch (error) { console.log("Error updating table. ", error); } The following snippet uses the AWS SDK for JavaScript V2 style. It is based on the sample application at Node.js sample Amazon Timestream for LiveAnalytics application on GitHub. async function updateTable() { console.log("Updating Table"); const params = { DatabaseName: constants.DATABASE_NAME, TableName: constants.TABLE_NAME, RetentionProperties: { MemoryStoreRetentionPeriodInHours: constants.HT_TTL_HOURS, MagneticStoreRetentionPeriodInDays: constants.CT_TTL_DAYS } }; const promise = writeClient.updateTable(params).promise(); await promise.then( (data) => { console.log("Table updated") }, (err) => { console.log("Error updating table. ", err); throw err; } ); Update table 122 Amazon Timestream } .NET Developer Guide public async Task UpdateTable() { Console.WriteLine("Updating Table"); try { var updateTableRequest = new UpdateTableRequest { DatabaseName = Constants.DATABASE_NAME, TableName = Constants.TABLE_NAME, RetentionProperties = new RetentionProperties { MagneticStoreRetentionPeriodInDays = Constants.CT_TTL_DAYS, MemoryStoreRetentionPeriodInHours = Constants.HT_TTL_HOURS } }; UpdateTableResponse response = await writeClient.UpdateTableAsync(updateTableRequest); Console.WriteLine($"Table {Constants.TABLE_NAME} updated"); } catch (ResourceNotFoundException) { Console.WriteLine("Table does not exist."); } catch (Exception e) { Console.WriteLine("Update table failed:" + e.ToString()); } } Delete table You can use the following code snippets to delete a table. Delete table 123 Amazon Timestream Note Developer Guide These code snippets are based on full sample applications on GitHub. For more information about how to get started with the sample applications, see Sample application. 
Java public void deleteTable() { System.out.println("Deleting table"); final DeleteTableRequest deleteTableRequest = new DeleteTableRequest(); deleteTableRequest.setDatabaseName(DATABASE_NAME); deleteTableRequest.setTableName(TABLE_NAME); try { DeleteTableResult result = amazonTimestreamWrite.deleteTable(deleteTableRequest); System.out.println("Delete table status: " + result.getSdkHttpMetadata().getHttpStatusCode()); } catch (final ResourceNotFoundException e) { System.out.println("Table " + TABLE_NAME + " doesn't exist = " + e); throw e; } catch (final Exception e) { System.out.println("Could not delete table " + TABLE_NAME + " = " + e); throw e; } } Java v2 public void deleteTable() { System.out.println("Deleting table"); final DeleteTableRequest deleteTableRequest = DeleteTableRequest.builder() .databaseName(DATABASE_NAME).tableName(TABLE_NAME).build(); try { DeleteTableResponse response = timestreamWriteClient.deleteTable(deleteTableRequest); System.out.println("Delete table status: " + response.sdkHttpResponse().statusCode()); } catch (final ResourceNotFoundException e) { System.out.println("Table " + TABLE_NAME + " doesn't exist = " + e); throw e; } catch (final Exception e) { Delete table 124 Amazon Timestream Developer Guide System.out.println("Could not delete table " + TABLE_NAME + " = " + e); throw e; } } Go deleteTableInput := ×treamwrite.DeleteTableInput{ DatabaseName: aws.String(*databaseName), TableName: aws.String(*tableName), } _, err = writeSvc.DeleteTable(deleteTableInput) if err != nil { fmt.Println("Error:") fmt.Println(err) } else { fmt.Println("Table deleted", *tableName) } Python def delete_table(self): print("Deleting Table") try: result = self.client.delete_table(DatabaseName=Constant.DATABASE_NAME, TableName=Constant.TABLE_NAME) print("Delete table status [%s]" % result['ResponseMetadata'] ['HTTPStatusCode']) except self.client.exceptions.ResourceNotFoundException: print("Table [%s] doesn't exist" % Constant.TABLE_NAME) except Exception as err: print("Delete table failed:", err) Node.js The following snippet uses AWS SDK for JavaScript v3. For more information about how to install the client and usage, see Timestream Write Client - AWS SDK for JavaScript v3. Also see Class DeleteTableCommand and DeleteTable. import { TimestreamWriteClient, DeleteTableCommand } from "@aws-sdk/client- timestream-write"; Delete table 125 Amazon Timestream Developer Guide const writeClient = new TimestreamWriteClient({ region: "us-east-1" }); const params = { DatabaseName: "testDbFromNode", TableName: "testTableFromNode" }; const command = new DeleteTableCommand(params); try { const data = await writeClient.send(command); console.log("Deleted table"); } catch (error) { if (error.code === 'ResourceNotFoundException') { console.log(`Table ${params.TableName} or Database ${params.DatabaseName} doesn't exist.`); } else { console.log("Delete table failed.", error); throw error; } } The following snippet uses the AWS SDK for JavaScript V2 style. It is based on the sample application at Node.js sample Amazon Timestream for LiveAnalytics application on GitHub. 
async function deleteTable() { console.log("Deleting Table"); const params = { DatabaseName: constants.DATABASE_NAME, TableName: constants.TABLE_NAME }; const promise = writeClient.deleteTable(params).promise(); await promise.then( function (data) { console.log("Deleted table"); }, function(err) { if (err.code === 'ResourceNotFoundException') { console.log(`Table ${params.TableName} or Database ${params.DatabaseName} doesn't exists.`); } else { Delete table 126 Amazon Timestream Developer Guide console.log("Delete table failed.", err); throw err; } } ); } .NET public async Task DeleteTable() { Console.WriteLine("Deleting table"); try { var deleteTableRequest = new DeleteTableRequest { DatabaseName = Constants.DATABASE_NAME, TableName = Constants.TABLE_NAME }; DeleteTableResponse response = await writeClient.DeleteTableAsync(deleteTableRequest); Console.WriteLine($"Table {Constants.TABLE_NAME} delete request status: {response.HttpStatusCode}"); } catch (ResourceNotFoundException) { Console.WriteLine($"Table {Constants.TABLE_NAME} does not exists"); } catch (Exception e) { Console.WriteLine("Exception while deleting table:" + e.ToString()); } } List tables You can use the following code snippets to list tables. List tables 127 Amazon Timestream Note Developer Guide These code snippets are based on full sample applications on GitHub. For more information about how to get started with the sample applications,
|
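The update-table snippets above reference retention constants (`HT_TTL_HOURS`, `CT_TTL_DAYS`) that are defined elsewhere in the sample applications. A hedged boto3 sketch of the same call with illustrative values; the specific numbers and resource names are assumptions:

```python
import boto3

# Illustrative retention values only; the sample's actual HT_TTL_HOURS / CT_TTL_DAYS
# live in its constants module and may differ.
MEMORY_STORE_RETENTION_HOURS = 24    # memory ("hot") store retention, in hours
MAGNETIC_STORE_RETENTION_DAYS = 365  # magnetic ("cold") store retention, in days

client = boto3.client("timestream-write")
client.update_table(
    DatabaseName="devops",       # placeholder
    TableName="host_metrics",    # placeholder
    RetentionProperties={
        "MemoryStoreRetentionPeriodInHours": MEMORY_STORE_RETENTION_HOURS,
        "MagneticStoreRetentionPeriodInDays": MAGNETIC_STORE_RETENTION_DAYS,
    },
)
print("Table updated")
```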
timestream-040 | timestream.pdf | 40 |
} .NET public async Task DeleteTable() { Console.WriteLine("Deleting table"); try { var deleteTableRequest = new DeleteTableRequest { DatabaseName = Constants.DATABASE_NAME, TableName = Constants.TABLE_NAME }; DeleteTableResponse response = await writeClient.DeleteTableAsync(deleteTableRequest); Console.WriteLine($"Table {Constants.TABLE_NAME} delete request status: {response.HttpStatusCode}"); } catch (ResourceNotFoundException) { Console.WriteLine($"Table {Constants.TABLE_NAME} does not exists"); } catch (Exception e) { Console.WriteLine("Exception while deleting table:" + e.ToString()); } } List tables You can use the following code snippets to list tables. List tables 127 Amazon Timestream Note Developer Guide These code snippets are based on full sample applications on GitHub. For more information about how to get started with the sample applications, see Sample application. Java public void listTables() { System.out.println("Listing tables"); ListTablesRequest request = new ListTablesRequest(); request.setDatabaseName(DATABASE_NAME); ListTablesResult result = amazonTimestreamWrite.listTables(request); printTables(result.getTables()); String nextToken = result.getNextToken(); while (nextToken != null && !nextToken.isEmpty()) { request.setNextToken(nextToken); ListTablesResult nextResult = amazonTimestreamWrite.listTables(request); printTables(nextResult.getTables()); nextToken = nextResult.getNextToken(); } } private void printTables(List<Table> tables) { for (Table table : tables) { System.out.println(table.getTableName()); } } Java v2 public void listTables() { System.out.println("Listing tables"); ListTablesRequest request = ListTablesRequest.builder().databaseName(DATABASE_NAME).maxResults(2).build(); ListTablesIterable listTablesIterable = timestreamWriteClient.listTablesPaginator(request); for(ListTablesResponse listTablesResponse : listTablesIterable) { final List<Table> tables = listTablesResponse.tables(); tables.forEach(table -> System.out.println(table.tableName())); List tables 128 Amazon Timestream } } Go Developer Guide listTablesMaxResult := int64(15) listTablesInput := ×treamwrite.ListTablesInput{ DatabaseName: aws.String(*databaseName), MaxResults: &listTablesMaxResult, } listTablesOutput, err := writeSvc.ListTables(listTablesInput) if err != nil { fmt.Println("Error:") fmt.Println(err) } else { fmt.Println("List tables is successful, below is the output:") fmt.Println(listTablesOutput) } Python def list_tables(self): print("Listing tables") try: result = self.client.list_tables(DatabaseName=Constant.DATABASE_NAME, MaxResults=5) self.__print_tables(result['Tables']) next_token = result.get('NextToken', None) while next_token: result = self.client.list_tables(DatabaseName=Constant.DATABASE_NAME, NextToken=next_token, MaxResults=5) self.__print_tables(result['Tables']) next_token = result.get('NextToken', None) except Exception as err: print("List tables failed:", err) Node.js The following snippet uses AWS SDK for JavaScript v3. For more information about how to install the client and usage, see Timestream Write Client - AWS SDK for JavaScript v3. List tables 129 Amazon Timestream Developer Guide Also see Class ListTablesCommand and ListTables. 
import { TimestreamWriteClient, ListTablesCommand } from "@aws-sdk/client- timestream-write"; const writeClient = new TimestreamWriteClient({ region: "us-east-1" }); const params = { DatabaseName: "testDbFromNode", MaxResults: 15 }; const command = new ListTablesCommand(params); getTablesList(null); async function getTablesList(nextToken) { if (nextToken) { params.NextToken = nextToken; } try { const data = await writeClient.send(command); data.Tables.forEach(function (table) { console.log(table.TableName); }); if (data.NextToken) { return getTablesList(data.NextToken); } } catch (error) { console.log("Error while listing tables", error); } } The following snippet uses the AWS SDK for JavaScript V2 style. It is based on the sample application at Node.js sample Amazon Timestream for LiveAnalytics application on GitHub. async function listTables() { console.log("Listing tables:"); const tables = await getTablesList(null); tables.forEach(function(table){ console.log(table.TableName); List tables 130 Amazon Timestream }); } Developer Guide function getTablesList(nextToken, tables = []) { var params = { DatabaseName: constants.DATABASE_NAME, MaxResults: 15 }; if(nextToken) { params.NextToken = nextToken; } return writeClient.listTables(params).promise() .then( (data) => { tables.push.apply(tables, data.Tables); if (data.NextToken) { return getTablesList(data.NextToken, tables); } else { return tables; } }, (err) => { console.log("Error while listing databases", err); }); } .NET public async Task ListTables() { Console.WriteLine("Listing Tables"); try { var listTablesRequest = new ListTablesRequest { MaxResults = 5, DatabaseName = Constants.DATABASE_NAME }; ListTablesResponse response = await writeClient.ListTablesAsync(listTablesRequest); List tables 131 Amazon Timestream Developer Guide PrintTables(response.Tables); string nextToken = response.NextToken; while (nextToken != null) { listTablesRequest.NextToken = nextToken; response = await writeClient.ListTablesAsync(listTablesRequest); PrintTables(response.Tables); nextToken = response.NextToken; } } catch (Exception e) { Console.WriteLine("List table failed:" + e.ToString()); } } private void PrintTables(List<Table> tables) { foreach (Table table in tables) Console.WriteLine($"Table: {table.TableName}"); } Write data (inserts and upserts) Topics • Writing batches of records • Writing batches of records with common attributes • Upserting records • Multi-measure attribute example • Handling write failures Writing batches of records You can use the following code snippets to write data into an Amazon Timestream table. Writing data in batches helps to optimize the cost of writes. See Calculating the number of writes for more information. Write data 132 Amazon Timestream Note Developer Guide These code snippets are based on full sample applications on GitHub. For more information about how to get started with the sample applications, see Sample application. 
Java public void writeRecords() { System.out.println("Writing records"); // Specify repeated values for all records List<Record> records = new ArrayList<>(); final long time = System.currentTimeMillis(); List<Dimension> dimensions = new ArrayList<>(); final Dimension region = new Dimension().withName("region").withValue("us- east-1"); final Dimension az = new Dimension().withName("az").withValue("az1"); final Dimension hostname = new Dimension().withName("hostname").withValue("host1"); dimensions.add(region); dimensions.add(az); dimensions.add(hostname); Record cpuUtilization = new Record() .withDimensions(dimensions) .withMeasureName("cpu_utilization") .withMeasureValue("13.5") .withMeasureValueType(MeasureValueType.DOUBLE) .withTime(String.valueOf(time)); Record memoryUtilization = new Record() .withDimensions(dimensions) .withMeasureName("memory_utilization") .withMeasureValue("40") .withMeasureValueType(MeasureValueType.DOUBLE) .withTime(String.valueOf(time)); records.add(cpuUtilization); records.add(memoryUtilization); WriteRecordsRequest writeRecordsRequest = new WriteRecordsRequest() .withDatabaseName(DATABASE_NAME) Write data 133 Amazon Timestream Developer Guide .withTableName(TABLE_NAME) .withRecords(records); try { WriteRecordsResult writeRecordsResult = amazonTimestreamWrite.writeRecords(writeRecordsRequest); System.out.println("WriteRecords Status: " + writeRecordsResult.getSdkHttpMetadata().getHttpStatusCode()); } catch (RejectedRecordsException e) { System.out.println("RejectedRecords: " + e); for (RejectedRecord rejectedRecord : e.getRejectedRecords()) { System.out.println("Rejected Index " + rejectedRecord.getRecordIndex() + ": " + rejectedRecord.getReason()); } System.out.println("Other records were written successfully. "); }
|
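The list-table snippets above page through results with `NextToken` one database at a time. As a complementary sketch that is not part of the guide, the same manual pagination can be combined with `ListDatabases` to enumerate every table visible in the caller's Region; credentials and Region resolution are assumed to come from the default boto3 environment:

```python
import boto3

client = boto3.client("timestream-write")

def list_all_tables():
    """Return (database, table) pairs for every table visible to the caller,
    following NextToken manually for both list APIs."""
    pairs = []
    db_kwargs = {"MaxResults": 20}
    while True:
        db_page = client.list_databases(**db_kwargs)
        for db in db_page["Databases"]:
            tbl_kwargs = {"DatabaseName": db["DatabaseName"], "MaxResults": 20}
            while True:
                tbl_page = client.list_tables(**tbl_kwargs)
                pairs.extend((db["DatabaseName"], t["TableName"]) for t in tbl_page["Tables"])
                if "NextToken" not in tbl_page:
                    break
                tbl_kwargs["NextToken"] = tbl_page["NextToken"]
        if "NextToken" not in db_page:
            break
        db_kwargs["NextToken"] = db_page["NextToken"]
    return pairs

for database, table in list_all_tables():
    print(f"{database}.{table}")
```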
timestream-041 | timestream.pdf | 41 |
= new Dimension().withName("region").withValue("us- east-1"); final Dimension az = new Dimension().withName("az").withValue("az1"); final Dimension hostname = new Dimension().withName("hostname").withValue("host1"); dimensions.add(region); dimensions.add(az); dimensions.add(hostname); Record cpuUtilization = new Record() .withDimensions(dimensions) .withMeasureName("cpu_utilization") .withMeasureValue("13.5") .withMeasureValueType(MeasureValueType.DOUBLE) .withTime(String.valueOf(time)); Record memoryUtilization = new Record() .withDimensions(dimensions) .withMeasureName("memory_utilization") .withMeasureValue("40") .withMeasureValueType(MeasureValueType.DOUBLE) .withTime(String.valueOf(time)); records.add(cpuUtilization); records.add(memoryUtilization); WriteRecordsRequest writeRecordsRequest = new WriteRecordsRequest() .withDatabaseName(DATABASE_NAME) Write data 133 Amazon Timestream Developer Guide .withTableName(TABLE_NAME) .withRecords(records); try { WriteRecordsResult writeRecordsResult = amazonTimestreamWrite.writeRecords(writeRecordsRequest); System.out.println("WriteRecords Status: " + writeRecordsResult.getSdkHttpMetadata().getHttpStatusCode()); } catch (RejectedRecordsException e) { System.out.println("RejectedRecords: " + e); for (RejectedRecord rejectedRecord : e.getRejectedRecords()) { System.out.println("Rejected Index " + rejectedRecord.getRecordIndex() + ": " + rejectedRecord.getReason()); } System.out.println("Other records were written successfully. "); } catch (Exception e) { System.out.println("Error: " + e); } } Java v2 public void writeRecords() { System.out.println("Writing records"); // Specify repeated values for all records List<Record> records = new ArrayList<>(); final long time = System.currentTimeMillis(); List<Dimension> dimensions = new ArrayList<>(); final Dimension region = Dimension.builder().name("region").value("us- east-1").build(); final Dimension az = Dimension.builder().name("az").value("az1").build(); final Dimension hostname = Dimension.builder().name("hostname").value("host1").build(); dimensions.add(region); dimensions.add(az); dimensions.add(hostname); Record cpuUtilization = Record.builder() .dimensions(dimensions) .measureValueType(MeasureValueType.DOUBLE) Write data 134 Amazon Timestream Developer Guide .measureName("cpu_utilization") .measureValue("13.5") .time(String.valueOf(time)).build(); Record memoryUtilization = Record.builder() .dimensions(dimensions) .measureValueType(MeasureValueType.DOUBLE) .measureName("memory_utilization") .measureValue("40") .time(String.valueOf(time)).build(); records.add(cpuUtilization); records.add(memoryUtilization); WriteRecordsRequest writeRecordsRequest = WriteRecordsRequest.builder() .databaseName(DATABASE_NAME).tableName(TABLE_NAME).records(records).build(); try { WriteRecordsResponse writeRecordsResponse = timestreamWriteClient.writeRecords(writeRecordsRequest); System.out.println("WriteRecords Status: " + writeRecordsResponse.sdkHttpResponse().statusCode()); } catch (RejectedRecordsException e) { System.out.println("RejectedRecords: " + e); for (RejectedRecord rejectedRecord : e.rejectedRecords()) { System.out.println("Rejected Index " + rejectedRecord.recordIndex() + ": " + rejectedRecord.reason()); } System.out.println("Other records were written successfully. 
"); } catch (Exception e) { System.out.println("Error: " + e); } } Go now := time.Now() currentTimeInSeconds := now.Unix() writeRecordsInput := ×treamwrite.WriteRecordsInput{ DatabaseName: aws.String(*databaseName), TableName: aws.String(*tableName), Records: []*timestreamwrite.Record{ ×treamwrite.Record{ Write data 135 Amazon Timestream Developer Guide Dimensions: []*timestreamwrite.Dimension{ ×treamwrite.Dimension{ Name: aws.String("region"), Value: aws.String("us-east-1"), }, ×treamwrite.Dimension{ Name: aws.String("az"), Value: aws.String("az1"), }, ×treamwrite.Dimension{ Name: aws.String("hostname"), Value: aws.String("host1"), }, }, MeasureName: aws.String("cpu_utilization"), MeasureValue: aws.String("13.5"), MeasureValueType: aws.String("DOUBLE"), Time: aws.String(strconv.FormatInt(currentTimeInSeconds, 10)), TimeUnit: aws.String("SECONDS"), }, ×treamwrite.Record{ Dimensions: []*timestreamwrite.Dimension{ ×treamwrite.Dimension{ Name: aws.String("region"), Value: aws.String("us-east-1"), }, ×treamwrite.Dimension{ Name: aws.String("az"), Value: aws.String("az1"), }, ×treamwrite.Dimension{ Name: aws.String("hostname"), Value: aws.String("host1"), }, }, MeasureName: aws.String("memory_utilization"), MeasureValue: aws.String("40"), MeasureValueType: aws.String("DOUBLE"), Time: aws.String(strconv.FormatInt(currentTimeInSeconds, 10)), TimeUnit: aws.String("SECONDS"), }, }, } Write data 136 Amazon Timestream Developer Guide _, err = writeSvc.WriteRecords(writeRecordsInput) if err != nil { fmt.Println("Error:") fmt.Println(err) } else { fmt.Println("Write records is successful") } Python def write_records(self): print("Writing records") current_time = self._current_milli_time() dimensions = [ {'Name': 'region', 'Value': 'us-east-1'}, {'Name': 'az', 'Value': 'az1'}, {'Name': 'hostname', 'Value': 'host1'} ] cpu_utilization = { 'Dimensions': dimensions, 'MeasureName': 'cpu_utilization', 'MeasureValue': '13.5', 'MeasureValueType': 'DOUBLE', 'Time': current_time } memory_utilization = { 'Dimensions': dimensions, 'MeasureName': 'memory_utilization', 'MeasureValue': '40', 'MeasureValueType': 'DOUBLE', 'Time': current_time } records = [cpu_utilization, memory_utilization] try: result = self.client.write_records(DatabaseName=Constant.DATABASE_NAME, TableName=Constant.TABLE_NAME, Records=records, CommonAttributes={}) Write data 137 Amazon Timestream Developer Guide print("WriteRecords Status: [%s]" % result['ResponseMetadata'] ['HTTPStatusCode']) except self.client.exceptions.RejectedRecordsException as err: self._print_rejected_records_exceptions(err) except Exception as err: print("Error:", err) @staticmethod def _print_rejected_records_exceptions(err): print("RejectedRecords: ", err) for rr in err.response["RejectedRecords"]: print("Rejected Index " + str(rr["RecordIndex"]) + ": " + rr["Reason"]) if "ExistingVersion" in rr: print("Rejected record existing version: ", rr["ExistingVersion"]) @staticmethod def _current_milli_time(): return str(int(round(time.time() * 1000))) Node.js The following snippet uses the AWS SDK for JavaScript V2 style. It is based on the sample application at Node.js sample Amazon Timestream for LiveAnalytics application on GitHub. 
async function writeRecords() { console.log("Writing records"); const currentTime = Date.now().toString(); // Unix time in milliseconds const dimensions = [ {'Name': 'region', 'Value': 'us-east-1'}, {'Name': 'az', 'Value': 'az1'}, {'Name': 'hostname', 'Value': 'host1'} ]; const cpuUtilization = { 'Dimensions': dimensions, 'MeasureName': 'cpu_utilization', 'MeasureValue': '13.5', 'MeasureValueType': 'DOUBLE', 'Time': currentTime.toString() }; const memoryUtilization = { Write data 138 Amazon Timestream Developer Guide 'Dimensions': dimensions, 'MeasureName': 'memory_utilization', 'MeasureValue': '40', 'MeasureValueType': 'DOUBLE', 'Time': currentTime.toString() }; const records = [cpuUtilization, memoryUtilization]; const params = { DatabaseName: constants.DATABASE_NAME, TableName: constants.TABLE_NAME, Records: records }; const request = writeClient.writeRecords(params); await request.promise().then( (data) => { console.log("Write records successful"); }, (err) => { console.log("Error writing records:", err); if (err.code === 'RejectedRecordsException') { const responsePayload = JSON.parse(request.response.httpResponse.body.toString()); console.log("RejectedRecords: ", responsePayload.RejectedRecords); console.log("Other records were written successfully. "); } } ); } .NET public async Task WriteRecords() { Console.WriteLine("Writing records"); DateTimeOffset now = DateTimeOffset.UtcNow; string currentTimeString = (now.ToUnixTimeMilliseconds()).ToString(); List<Dimension> dimensions = new List<Dimension>{ Write data 139 Amazon Timestream Developer Guide new Dimension { Name = "region", Value = "us-east-1" }, new Dimension { Name = "az", Value = "az1" }, new Dimension { Name = "hostname", Value = "host1" } }; var cpuUtilization = new Record { Dimensions = dimensions, MeasureName = "cpu_utilization", MeasureValue = "13.6", MeasureValueType = MeasureValueType.DOUBLE, Time = currentTimeString }; var memoryUtilization = new Record { Dimensions = dimensions, MeasureName = "memory_utilization", MeasureValue = "40", MeasureValueType = MeasureValueType.DOUBLE, Time = currentTimeString }; List<Record> records = new List<Record> { cpuUtilization, memoryUtilization }; try { var writeRecordsRequest = new WriteRecordsRequest { DatabaseName = Constants.DATABASE_NAME, TableName = Constants.TABLE_NAME, Records = records }; WriteRecordsResponse response = await writeClient.WriteRecordsAsync(writeRecordsRequest); Console.WriteLine($"Write records status code: {response.HttpStatusCode.ToString()}"); } catch (RejectedRecordsException e) { Console.WriteLine("RejectedRecordsException:" + e.ToString()); Write data 140 Amazon Timestream Developer Guide foreach
|
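The batch-write snippets above send two records per request. When writing larger volumes, a single `WriteRecords` call accepts only a limited number of records (100 per request at the time of writing; verify against the current service quotas), so callers typically split their data into batches, which is also how the cost optimization described in "Calculating the number of writes" is realized. A hedged Python sketch of that splitting, with placeholder database and table names:

```python
import time
import boto3

client = boto3.client("timestream-write")

def write_in_batches(database, table, records, batch_size=100):
    """Split a large record list into WriteRecords-sized batches and
    surface any per-record rejections."""
    for start in range(0, len(records), batch_size):
        batch = records[start:start + batch_size]
        try:
            client.write_records(DatabaseName=database, TableName=table,
                                 Records=batch, CommonAttributes={})
        except client.exceptions.RejectedRecordsException as err:
            for rr in err.response["RejectedRecords"]:
                print("Rejected index", start + rr["RecordIndex"], ":", rr["Reason"])

# Example payload: 250 synthetic cpu_utilization points, one per second
now_ms = int(time.time() * 1000)
records = [{
    "Dimensions": [{"Name": "hostname", "Value": "host1"}],
    "MeasureName": "cpu_utilization",
    "MeasureValue": str(10 + (i % 5)),
    "MeasureValueType": "DOUBLE",
    "Time": str(now_ms - i * 1000),
} for i in range(250)]

write_in_batches("devops", "host_metrics", records)  # placeholder names
```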
timestream-042 | timestream.pdf | 42 |
"host1" } }; var cpuUtilization = new Record { Dimensions = dimensions, MeasureName = "cpu_utilization", MeasureValue = "13.6", MeasureValueType = MeasureValueType.DOUBLE, Time = currentTimeString }; var memoryUtilization = new Record { Dimensions = dimensions, MeasureName = "memory_utilization", MeasureValue = "40", MeasureValueType = MeasureValueType.DOUBLE, Time = currentTimeString }; List<Record> records = new List<Record> { cpuUtilization, memoryUtilization }; try { var writeRecordsRequest = new WriteRecordsRequest { DatabaseName = Constants.DATABASE_NAME, TableName = Constants.TABLE_NAME, Records = records }; WriteRecordsResponse response = await writeClient.WriteRecordsAsync(writeRecordsRequest); Console.WriteLine($"Write records status code: {response.HttpStatusCode.ToString()}"); } catch (RejectedRecordsException e) { Console.WriteLine("RejectedRecordsException:" + e.ToString()); Write data 140 Amazon Timestream Developer Guide foreach (RejectedRecord rr in e.RejectedRecords) { Console.WriteLine("RecordIndex " + rr.RecordIndex + " : " + rr.Reason); } Console.WriteLine("Other records were written successfully. "); } catch (Exception e) { Console.WriteLine("Write records failure:" + e.ToString()); } } Writing batches of records with common attributes If your time series data has measures and/or dimensions that are common across many data points, you can also use the following optimized version of the writeRecords API to insert data into Timestream for LiveAnalytics. Using common attributes with batching can further optimize the cost of writes as described in Calculating the number of writes. Note These code snippets are based on full sample applications on GitHub. For more information about how to get started with the sample applications, see Sample application. 
Java public void writeRecordsWithCommonAttributes() { System.out.println("Writing records with extracting common attributes"); // Specify repeated values for all records List<Record> records = new ArrayList<>(); final long time = System.currentTimeMillis(); List<Dimension> dimensions = new ArrayList<>(); final Dimension region = new Dimension().withName("region").withValue("us- east-1"); final Dimension az = new Dimension().withName("az").withValue("az1"); final Dimension hostname = new Dimension().withName("hostname").withValue("host1"); dimensions.add(region); dimensions.add(az); Write data 141 Amazon Timestream Developer Guide dimensions.add(hostname); Record commonAttributes = new Record() .withDimensions(dimensions) .withMeasureValueType(MeasureValueType.DOUBLE) .withTime(String.valueOf(time)); Record cpuUtilization = new Record() .withMeasureName("cpu_utilization") .withMeasureValue("13.5"); Record memoryUtilization = new Record() .withMeasureName("memory_utilization") .withMeasureValue("40"); records.add(cpuUtilization); records.add(memoryUtilization); WriteRecordsRequest writeRecordsRequest = new WriteRecordsRequest() .withDatabaseName(DATABASE_NAME) .withTableName(TABLE_NAME) .withCommonAttributes(commonAttributes); writeRecordsRequest.setRecords(records); try { WriteRecordsResult writeRecordsResult = amazonTimestreamWrite.writeRecords(writeRecordsRequest); System.out.println("writeRecordsWithCommonAttributes Status: " + writeRecordsResult.getSdkHttpMetadata().getHttpStatusCode()); } catch (RejectedRecordsException e) { System.out.println("RejectedRecords: " + e); for (RejectedRecord rejectedRecord : e.getRejectedRecords()) { System.out.println("Rejected Index " + rejectedRecord.getRecordIndex() + ": " + rejectedRecord.getReason()); } System.out.println("Other records were written successfully. 
"); } catch (Exception e) { System.out.println("Error: " + e); } } Write data 142 Amazon Timestream Java v2 Developer Guide public void writeRecordsWithCommonAttributes() { System.out.println("Writing records with extracting common attributes"); // Specify repeated values for all records List<Record> records = new ArrayList<>(); final long time = System.currentTimeMillis(); List<Dimension> dimensions = new ArrayList<>(); final Dimension region = Dimension.builder().name("region").value("us- east-1").build(); final Dimension az = Dimension.builder().name("az").value("az1").build(); final Dimension hostname = Dimension.builder().name("hostname").value("host1").build(); dimensions.add(region); dimensions.add(az); dimensions.add(hostname); Record commonAttributes = Record.builder() .dimensions(dimensions) .measureValueType(MeasureValueType.DOUBLE) .time(String.valueOf(time)).build(); Record cpuUtilization = Record.builder() .measureName("cpu_utilization") .measureValue("13.5").build(); Record memoryUtilization = Record.builder() .measureName("memory_utilization") .measureValue("40").build(); records.add(cpuUtilization); records.add(memoryUtilization); WriteRecordsRequest writeRecordsRequest = WriteRecordsRequest.builder() .databaseName(DATABASE_NAME) .tableName(TABLE_NAME) .commonAttributes(commonAttributes) .records(records).build(); try { WriteRecordsResponse writeRecordsResponse = timestreamWriteClient.writeRecords(writeRecordsRequest); Write data 143 Amazon Timestream Developer Guide System.out.println("writeRecordsWithCommonAttributes Status: " + writeRecordsResponse.sdkHttpResponse().statusCode()); } catch (RejectedRecordsException e) { System.out.println("RejectedRecords: " + e); for (RejectedRecord rejectedRecord : e.rejectedRecords()) { System.out.println("Rejected Index " + rejectedRecord.recordIndex() + ": " + rejectedRecord.reason()); } System.out.println("Other records were written successfully. 
"); } catch (Exception e) { System.out.println("Error: " + e); } } Go now = time.Now() currentTimeInSeconds = now.Unix() writeRecordsCommonAttributesInput := ×treamwrite.WriteRecordsInput{ DatabaseName: aws.String(*databaseName), TableName: aws.String(*tableName), CommonAttributes: ×treamwrite.Record{ Dimensions: []*timestreamwrite.Dimension{ ×treamwrite.Dimension{ Name: aws.String("region"), Value: aws.String("us-east-1"), }, ×treamwrite.Dimension{ Name: aws.String("az"), Value: aws.String("az1"), }, ×treamwrite.Dimension{ Name: aws.String("hostname"), Value: aws.String("host1"), }, }, MeasureValueType: aws.String("DOUBLE"), Time: aws.String(strconv.FormatInt(currentTimeInSeconds, 10)), TimeUnit: aws.String("SECONDS"), }, Records: []*timestreamwrite.Record{ ×treamwrite.Record{ MeasureName: aws.String("cpu_utilization"), Write data 144 Amazon Timestream Developer Guide MeasureValue: aws.String("13.5"), }, ×treamwrite.Record{ MeasureName: aws.String("memory_utilization"), MeasureValue: aws.String("40"), }, }, } _, err = writeSvc.WriteRecords(writeRecordsCommonAttributesInput) if err != nil { fmt.Println("Error:") fmt.Println(err) } else { fmt.Println("Ingest records is successful") } Python def write_records_with_common_attributes(self): print("Writing records extracting common attributes") current_time = self._current_milli_time() dimensions = [ {'Name': 'region', 'Value': 'us-east-1'}, {'Name': 'az', 'Value': 'az1'}, {'Name': 'hostname', 'Value': 'host1'} ] common_attributes = { 'Dimensions': dimensions, 'MeasureValueType': 'DOUBLE', 'Time': current_time } cpu_utilization = { 'MeasureName': 'cpu_utilization', 'MeasureValue': '13.5' } memory_utilization = { 'MeasureName': 'memory_utilization', Write data 145 Amazon Timestream Developer Guide 'MeasureValue': '40' } records = [cpu_utilization, memory_utilization] try: result = self.client.write_records(DatabaseName=Constant.DATABASE_NAME, TableName=Constant.TABLE_NAME, Records=records, CommonAttributes=common_attributes) print("WriteRecords Status: [%s]" % result['ResponseMetadata'] ['HTTPStatusCode']) except self.client.exceptions.RejectedRecordsException as err: self._print_rejected_records_exceptions(err) except Exception as err: print("Error:", err) @staticmethod def _print_rejected_records_exceptions(err): print("RejectedRecords: ", err) for rr in err.response["RejectedRecords"]: print("Rejected Index " + str(rr["RecordIndex"]) + ": " + rr["Reason"]) if "ExistingVersion" in rr: print("Rejected record existing version: ", rr["ExistingVersion"]) @staticmethod def _current_milli_time(): return str(int(round(time.time() * 1000))) Node.js The following snippet uses the AWS SDK for JavaScript V2 style. It is based on the sample application at Node.js sample Amazon Timestream for LiveAnalytics application on GitHub. async function writeRecordsWithCommonAttributes() { console.log("Writing records with common attributes"); const currentTime = Date.now().toString(); // Unix time in milliseconds const dimensions = [ {'Name': 'region', 'Value': 'us-east-1'}, {'Name': 'az', 'Value': 'az1'}, {'Name': 'hostname', 'Value': 'host1'} ]; Write data
|
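To make the common-attributes optimization concrete: the service merges `CommonAttributes` into each record in the request, with record-level fields taking precedence over the shared ones. A small illustrative sketch of that merge; the dictionaries are examples rather than the sample application's data:

```python
common_attributes = {
    "Dimensions": [{"Name": "hostname", "Value": "host1"}],
    "MeasureValueType": "DOUBLE",
    "Time": "1700000000000",
}
records = [
    {"MeasureName": "cpu_utilization", "MeasureValue": "13.5"},
    {"MeasureName": "memory_utilization", "MeasureValue": "40"},
]

# Roughly what the service sees after merging; a field present in a record
# overrides the same field in CommonAttributes.
expanded = [{**common_attributes, **record} for record in records]
for r in expanded:
    print(r["MeasureName"], r["MeasureValue"], r["Time"])
```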
timestream-043 | timestream.pdf | 43 |
print("RejectedRecords: ", err) for rr in err.response["RejectedRecords"]: print("Rejected Index " + str(rr["RecordIndex"]) + ": " + rr["Reason"]) if "ExistingVersion" in rr: print("Rejected record existing version: ", rr["ExistingVersion"]) @staticmethod def _current_milli_time(): return str(int(round(time.time() * 1000))) Node.js The following snippet uses the AWS SDK for JavaScript V2 style. It is based on the sample application at Node.js sample Amazon Timestream for LiveAnalytics application on GitHub. async function writeRecordsWithCommonAttributes() { console.log("Writing records with common attributes"); const currentTime = Date.now().toString(); // Unix time in milliseconds const dimensions = [ {'Name': 'region', 'Value': 'us-east-1'}, {'Name': 'az', 'Value': 'az1'}, {'Name': 'hostname', 'Value': 'host1'} ]; Write data 146 Amazon Timestream Developer Guide const commonAttributes = { 'Dimensions': dimensions, 'MeasureValueType': 'DOUBLE', 'Time': currentTime.toString() }; const cpuUtilization = { 'MeasureName': 'cpu_utilization', 'MeasureValue': '13.5' }; const memoryUtilization = { 'MeasureName': 'memory_utilization', 'MeasureValue': '40' }; const records = [cpuUtilization, memoryUtilization]; const params = { DatabaseName: constants.DATABASE_NAME, TableName: constants.TABLE_NAME, Records: records, CommonAttributes: commonAttributes }; const request = writeClient.writeRecords(params); await request.promise().then( (data) => { console.log("Write records successful"); }, (err) => { console.log("Error writing records:", err); if (err.code === 'RejectedRecordsException') { const responsePayload = JSON.parse(request.response.httpResponse.body.toString()); console.log("RejectedRecords: ", responsePayload.RejectedRecords); console.log("Other records were written successfully. "); } } ); } Write data 147 Amazon Timestream .NET Developer Guide public async Task WriteRecordsWithCommonAttributes() { Console.WriteLine("Writing records with common attributes"); DateTimeOffset now = DateTimeOffset.UtcNow; string currentTimeString = (now.ToUnixTimeMilliseconds()).ToString(); List<Dimension> dimensions = new List<Dimension>{ new Dimension { Name = "region", Value = "us-east-1" }, new Dimension { Name = "az", Value = "az1" }, new Dimension { Name = "hostname", Value = "host1" } }; var commonAttributes = new Record { Dimensions = dimensions, MeasureValueType = MeasureValueType.DOUBLE, Time = currentTimeString }; var cpuUtilization = new Record { MeasureName = "cpu_utilization", MeasureValue = "13.6" }; var memoryUtilization = new Record { MeasureName = "memory_utilization", MeasureValue = "40" }; List<Record> records = new List<Record>(); records.Add(cpuUtilization); records.Add(memoryUtilization); try { var writeRecordsRequest = new WriteRecordsRequest { DatabaseName = Constants.DATABASE_NAME, Write data 148 Amazon Timestream Developer Guide TableName = Constants.TABLE_NAME, Records = records, CommonAttributes = commonAttributes }; WriteRecordsResponse response = await writeClient.WriteRecordsAsync(writeRecordsRequest); Console.WriteLine($"Write records status code: {response.HttpStatusCode.ToString()}"); } catch (RejectedRecordsException e) { Console.WriteLine("RejectedRecordsException:" + e.ToString()); foreach (RejectedRecord rr in e.RejectedRecords) { Console.WriteLine("RecordIndex " + rr.RecordIndex + " : " + rr.Reason); } Console.WriteLine("Other records were written successfully. 
"); } catch (Exception e) { Console.WriteLine("Write records failure:" + e.ToString()); } } Upserting records While the default writes in Amazon Timestream follow the first writer wins semantics, where data is stored as append only and duplicate records are rejected, there are applications that require the ability to write data into Amazon Timestream using the last writer wins semantics, where the record with the highest version is stored in the system. There are also applications that require the ability to update existing records. To address these scenarios, Amazon Timestream provides the ability to upsert data. Upsert is an operation that inserts a record in to the system when the record does not exist or updates the record, when one exists. You can upsert records by including the Version in record definition while sending a WriteRecords request. Amazon Timestream will store the record with the record with highest Version. The code sample below shows how you can upsert data: Note These code snippets are based on full sample applications on GitHub. For more information about how to get started with the sample applications, see Sample application. Write data 149 Amazon Timestream Java Developer Guide public void writeRecordsWithUpsert() { System.out.println("Writing records with upsert"); // Specify repeated values for all records List<Record> records = new ArrayList<>(); final long time = System.currentTimeMillis(); // To achieve upsert (last writer wins) semantic, one example is to use current time as the version if you are writing directly from the data source long version = System.currentTimeMillis(); List<Dimension> dimensions = new ArrayList<>(); final Dimension region = new Dimension().withName("region").withValue("us- east-1"); final Dimension az = new Dimension().withName("az").withValue("az1"); final Dimension hostname = new Dimension().withName("hostname").withValue("host1"); dimensions.add(region); dimensions.add(az); dimensions.add(hostname); Record commonAttributes = new Record() .withDimensions(dimensions) .withMeasureValueType(MeasureValueType.DOUBLE) .withTime(String.valueOf(time)) .withVersion(version); Record cpuUtilization = new Record() .withMeasureName("cpu_utilization") .withMeasureValue("13.5"); Record memoryUtilization = new Record() .withMeasureName("memory_utilization") .withMeasureValue("40"); records.add(cpuUtilization); records.add(memoryUtilization); WriteRecordsRequest writeRecordsRequest = new WriteRecordsRequest() .withDatabaseName(DATABASE_NAME) .withTableName(TABLE_NAME) .withCommonAttributes(commonAttributes); writeRecordsRequest.setRecords(records); Write data 150 Amazon Timestream Developer Guide // write records for first time try { WriteRecordsResult writeRecordsResult = amazonTimestreamWrite.writeRecords(writeRecordsRequest); System.out.println("WriteRecords Status for first time: " + writeRecordsResult.getSdkHttpMetadata().getHttpStatusCode()); } catch (RejectedRecordsException e) { printRejectedRecordsException(e); } catch (Exception e) { System.out.println("Error: " + e); } // Successfully retry same writeRecordsRequest with same records and versions, because writeRecords API is idempotent. 
try { WriteRecordsResult writeRecordsResult = amazonTimestreamWrite.writeRecords(writeRecordsRequest); System.out.println("WriteRecords Status for retry: " + writeRecordsResult.getSdkHttpMetadata().getHttpStatusCode()); } catch (RejectedRecordsException e) { printRejectedRecordsException(e); } catch (Exception e) { System.out.println("Error: " + e); } // upsert with lower version, this would fail because a higher version is required to update the measure value. version -= 1; commonAttributes.setVersion(version); cpuUtilization.setMeasureValue("14.5"); memoryUtilization.setMeasureValue("50"); List<Record> upsertedRecords = new ArrayList<>();
|
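Because the full Python version of the upsert walkthrough is split across the following chunks, here is a condensed, hedged sketch of the same idea: write a record with a `Version`, then rewrite the same time series point with a higher `Version` to update its measure value. Database and table names are placeholders:

```python
import time
import boto3

client = boto3.client("timestream-write")
DATABASE, TABLE = "devops", "host_metrics"   # placeholders

now_ms = str(int(time.time() * 1000))
common = {
    "Dimensions": [{"Name": "hostname", "Value": "host1"}],
    "MeasureValueType": "DOUBLE",
    "Time": now_ms,
    "Version": 1,   # any positive version works for the first write
}

# First write
client.write_records(DatabaseName=DATABASE, TableName=TABLE,
                     CommonAttributes=common,
                     Records=[{"MeasureName": "cpu_utilization", "MeasureValue": "13.5"}])

# Upsert: same dimensions and time, higher version, new value.
# A lower version is rejected with RejectedRecordsException, while retrying
# the identical request (same version, same values) is idempotent.
common["Version"] = 2
client.write_records(DatabaseName=DATABASE, TableName=TABLE,
                     CommonAttributes=common,
                     Records=[{"MeasureName": "cpu_utilization", "MeasureValue": "14.5"}])
print("Upserted cpu_utilization with version", common["Version"])
```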
timestream-044 | timestream.pdf | 44 |
= amazonTimestreamWrite.writeRecords(writeRecordsRequest); System.out.println("WriteRecords Status for first time: " + writeRecordsResult.getSdkHttpMetadata().getHttpStatusCode()); } catch (RejectedRecordsException e) { printRejectedRecordsException(e); } catch (Exception e) { System.out.println("Error: " + e); } // Successfully retry same writeRecordsRequest with same records and versions, because writeRecords API is idempotent. try { WriteRecordsResult writeRecordsResult = amazonTimestreamWrite.writeRecords(writeRecordsRequest); System.out.println("WriteRecords Status for retry: " + writeRecordsResult.getSdkHttpMetadata().getHttpStatusCode()); } catch (RejectedRecordsException e) { printRejectedRecordsException(e); } catch (Exception e) { System.out.println("Error: " + e); } // upsert with lower version, this would fail because a higher version is required to update the measure value. version -= 1; commonAttributes.setVersion(version); cpuUtilization.setMeasureValue("14.5"); memoryUtilization.setMeasureValue("50"); List<Record> upsertedRecords = new ArrayList<>(); upsertedRecords.add(cpuUtilization); upsertedRecords.add(memoryUtilization); WriteRecordsRequest writeRecordsUpsertRequest = new WriteRecordsRequest() .withDatabaseName(DATABASE_NAME) .withTableName(TABLE_NAME) .withCommonAttributes(commonAttributes); writeRecordsUpsertRequest.setRecords(upsertedRecords); try { Write data 151 Amazon Timestream Developer Guide WriteRecordsResult writeRecordsUpsertResult = amazonTimestreamWrite.writeRecords(writeRecordsUpsertRequest); System.out.println("WriteRecords Status for upsert with lower version: " + writeRecordsUpsertResult.getSdkHttpMetadata().getHttpStatusCode()); } catch (RejectedRecordsException e) { System.out.println("WriteRecords Status for upsert with lower version: "); printRejectedRecordsException(e); } catch (Exception e) { System.out.println("Error: " + e); } // upsert with higher version as new data in generated version = System.currentTimeMillis(); commonAttributes.setVersion(version); writeRecordsUpsertRequest = new WriteRecordsRequest() .withDatabaseName(DATABASE_NAME) .withTableName(TABLE_NAME) .withCommonAttributes(commonAttributes); writeRecordsUpsertRequest.setRecords(upsertedRecords); try { WriteRecordsResult writeRecordsUpsertResult = amazonTimestreamWrite.writeRecords(writeRecordsUpsertRequest); System.out.println("WriteRecords Status for upsert with higher version: " + writeRecordsUpsertResult.getSdkHttpMetadata().getHttpStatusCode()); } catch (RejectedRecordsException e) { printRejectedRecordsException(e); } catch (Exception e) { System.out.println("Error: " + e); } } Java v2 public void writeRecordsWithUpsert() { System.out.println("Writing records with upsert"); // Specify repeated values for all records List<Record> records = new ArrayList<>(); final long time = System.currentTimeMillis(); // To achieve upsert (last writer wins) semantic, one example is to use current time as the version if you are writing directly from the data source long version = System.currentTimeMillis(); Write data 152 Amazon Timestream Developer Guide List<Dimension> dimensions = new ArrayList<>(); final Dimension region = Dimension.builder().name("region").value("us- east-1").build(); final Dimension az = Dimension.builder().name("az").value("az1").build(); final Dimension hostname = Dimension.builder().name("hostname").value("host1").build(); dimensions.add(region); dimensions.add(az); dimensions.add(hostname); Record commonAttributes = Record.builder() .dimensions(dimensions) 
.measureValueType(MeasureValueType.DOUBLE) .time(String.valueOf(time)) .version(version) .build(); Record cpuUtilization = Record.builder() .measureName("cpu_utilization") .measureValue("13.5").build(); Record memoryUtilization = Record.builder() .measureName("memory_utilization") .measureValue("40").build(); records.add(cpuUtilization); records.add(memoryUtilization); WriteRecordsRequest writeRecordsRequest = WriteRecordsRequest.builder() .databaseName(DATABASE_NAME) .tableName(TABLE_NAME) .commonAttributes(commonAttributes) .records(records).build(); // write records for first time try { WriteRecordsResponse writeRecordsResponse = timestreamWriteClient.writeRecords(writeRecordsRequest); System.out.println("WriteRecords Status for first time: " + writeRecordsResponse.sdkHttpResponse().statusCode()); } catch (RejectedRecordsException e) { printRejectedRecordsException(e); } catch (Exception e) { Write data 153 Amazon Timestream Developer Guide System.out.println("Error: " + e); } // Successfully retry same writeRecordsRequest with same records and versions, because writeRecords API is idempotent. try { WriteRecordsResponse writeRecordsResponse = timestreamWriteClient.writeRecords(writeRecordsRequest); System.out.println("WriteRecords Status for retry: " + writeRecordsResponse.sdkHttpResponse().statusCode()); } catch (RejectedRecordsException e) { printRejectedRecordsException(e); } catch (Exception e) { System.out.println("Error: " + e); } // upsert with lower version, this would fail because a higher version is required to update the measure value. version -= 1; commonAttributes = Record.builder() .dimensions(dimensions) .measureValueType(MeasureValueType.DOUBLE) .time(String.valueOf(time)) .version(version) .build(); cpuUtilization = Record.builder() .measureName("cpu_utilization") .measureValue("14.5").build(); memoryUtilization = Record.builder() .measureName("memory_utilization") .measureValue("50").build(); List<Record> upsertedRecords = new ArrayList<>(); upsertedRecords.add(cpuUtilization); upsertedRecords.add(memoryUtilization); WriteRecordsRequest writeRecordsUpsertRequest = WriteRecordsRequest.builder() .databaseName(DATABASE_NAME) .tableName(TABLE_NAME) .commonAttributes(commonAttributes) .records(upsertedRecords).build(); try { Write data 154 Amazon Timestream Developer Guide WriteRecordsResponse writeRecordsResponse = timestreamWriteClient.writeRecords(writeRecordsUpsertRequest); System.out.println("WriteRecords Status for upsert with lower version: " + writeRecordsResponse.sdkHttpResponse().statusCode()); } catch (RejectedRecordsException e) { System.out.println("WriteRecords Status for upsert with lower version: "); printRejectedRecordsException(e); } catch (Exception e) { System.out.println("Error: " + e); } // upsert with higher version as new data in generated version = System.currentTimeMillis(); commonAttributes = Record.builder() .dimensions(dimensions) .measureValueType(MeasureValueType.DOUBLE) .time(String.valueOf(time)) .version(version) .build(); writeRecordsUpsertRequest = WriteRecordsRequest.builder() .databaseName(DATABASE_NAME) .tableName(TABLE_NAME) .commonAttributes(commonAttributes) .records(upsertedRecords).build(); try { WriteRecordsResponse writeRecordsUpsertResponse = timestreamWriteClient.writeRecords(writeRecordsUpsertRequest); System.out.println("WriteRecords Status for upsert with higher version: " + writeRecordsUpsertResponse.sdkHttpResponse().statusCode()); } catch (RejectedRecordsException e) { printRejectedRecordsException(e); } catch (Exception 
e) { System.out.println("Error: " + e); } } Go // Below code will ingest and upsert cpu_utilization and memory_utilization metric for a host on // region=us-east-1, az=az1, and hostname=host1 Write data 155 Amazon Timestream Developer Guide fmt.Println("Ingesting records and set version as currentTimeInMills, hit enter to continue") reader.ReadString('\n') // Get current time in seconds. now = time.Now() currentTimeInSeconds = now.Unix() // To achieve upsert (last writer wins) semantic, one example is to use current time as the version if you are writing directly from the data source version := time.Now().Round(time.Millisecond).UnixNano() / 1e6 // set version as currentTimeInMills writeRecordsCommonAttributesUpsertInput := ×treamwrite.WriteRecordsInput{ DatabaseName: aws.String(*databaseName), TableName: aws.String(*tableName), CommonAttributes: ×treamwrite.Record{ Dimensions: []*timestreamwrite.Dimension{ ×treamwrite.Dimension{ Name: aws.String("region"), Value: aws.String("us-east-1"), }, ×treamwrite.Dimension{ Name: aws.String("az"), Value: aws.String("az1"), }, ×treamwrite.Dimension{ Name: aws.String("hostname"), Value: aws.String("host1"), }, }, MeasureValueType: aws.String("DOUBLE"), Time: aws.String(strconv.FormatInt(currentTimeInSeconds, 10)), TimeUnit: aws.String("SECONDS"), Version: &version, }, Records: []*timestreamwrite.Record{ ×treamwrite.Record{ MeasureName: aws.String("cpu_utilization"), MeasureValue: aws.String("13.5"), }, ×treamwrite.Record{ MeasureName: aws.String("memory_utilization"), MeasureValue: aws.String("40"), }, Write data 156 Amazon Timestream }, } Developer Guide // write records for first time _, err = writeSvc.WriteRecords(writeRecordsCommonAttributesUpsertInput) if err != nil { fmt.Println("Error:") fmt.Println(err) } else { fmt.Println("Frist-time write records is successful") } fmt.Println("Retry same writeRecordsRequest with same records and versions. Because writeRecords API is idempotent, this will success. hit enter to continue") reader.ReadString('\n') _, err = writeSvc.WriteRecords(writeRecordsCommonAttributesUpsertInput) if err != nil { fmt.Println("Error:") fmt.Println(err) } else { fmt.Println("Retry write records for
same request is successful") } fmt.Println("Upsert with lower version, this would fail because a higher version is required to update the measure value. hit enter to continue") reader.ReadString('\n') version -= 1 writeRecordsCommonAttributesUpsertInput.CommonAttributes.Version = &version updated_cpu_utilization := &timestreamwrite.Record{ MeasureName: aws.String("cpu_utilization"), MeasureValue: aws.String("14.5"), } updated_memory_utilization := &timestreamwrite.Record{ MeasureName: aws.String("memory_utilization"), MeasureValue: aws.String("50"), } writeRecordsCommonAttributesUpsertInput.Records = []*timestreamwrite.Record{ updated_cpu_utilization, updated_memory_utilization, } _, err = writeSvc.WriteRecords(writeRecordsCommonAttributesUpsertInput) if err != nil { fmt.Println("Error:") fmt.Println(err) } else { fmt.Println("Write records with lower version is successful") } fmt.Println("Upsert with higher version as new data is generated, this would succeed.
hit enter to continue") reader.ReadString('\n') version = time.Now().Round(time.Millisecond).UnixNano() / 1e6 // set version as currentTimeInMills writeRecordsCommonAttributesUpsertInput.CommonAttributes.Version = &version _, err = writeSvc.WriteRecords(writeRecordsCommonAttributesUpsertInput) if err != nil { fmt.Println("Error:") fmt.Println(err) } else { fmt.Println("Write records with higher version is successful") } Python def write_records_with_upsert(self): print("Writing records with upsert") current_time = self._current_milli_time() # To achieve upsert (last writer wins) semantic, one example is to use current time as the version if you are writing directly from the data source version = int(self._current_milli_time()) dimensions = [ {'Name': 'region', 'Value': 'us-east-1'}, {'Name': 'az', 'Value': 'az1'}, {'Name': 'hostname', 'Value': 'host1'} ] Write data 158 Amazon Timestream Developer Guide common_attributes = { 'Dimensions': dimensions, 'MeasureValueType': 'DOUBLE', 'Time': current_time, 'Version': version } cpu_utilization = { 'MeasureName': 'cpu_utilization', 'MeasureValue': '13.5' } memory_utilization = { 'MeasureName': 'memory_utilization', 'MeasureValue': '40' } records = [cpu_utilization, memory_utilization] # write records for first time try: result = self.client.write_records(DatabaseName=Constant.DATABASE_NAME, TableName=Constant.TABLE_NAME, Records=records, CommonAttributes=common_attributes) print("WriteRecords Status for first time: [%s]" % result['ResponseMetadata'] ['HTTPStatusCode']) except self.client.exceptions.RejectedRecordsException as err: self._print_rejected_records_exceptions(err) except Exception as err: print("Error:", err) # Successfully retry same writeRecordsRequest with same records and versions, because writeRecords API is idempotent. try: result = self.client.write_records(DatabaseName=Constant.DATABASE_NAME, TableName=Constant.TABLE_NAME, Records=records, CommonAttributes=common_attributes) print("WriteRecords Status for retry: [%s]" % result['ResponseMetadata'] ['HTTPStatusCode']) except self.client.exceptions.RejectedRecordsException as err: self._print_rejected_records_exceptions(err) except Exception as err: print("Error:", err) Write data 159 Amazon Timestream Developer Guide # upsert with lower version, this would fail because a higher version is required to update the measure value. 
version -= 1 common_attributes["Version"] = version cpu_utilization["MeasureValue"] = '14.5' memory_utilization["MeasureValue"] = '50' upsertedRecords = [cpu_utilization, memory_utilization] try: upsertedResult = self.client.write_records(DatabaseName=Constant.DATABASE_NAME, TableName=Constant.TABLE_NAME, Records=upsertedRecords, CommonAttributes=common_attributes) print("WriteRecords Status for upsert with lower version: [%s]" % upsertedResult['ResponseMetadata']['HTTPStatusCode']) except self.client.exceptions.RejectedRecordsException as err: self._print_rejected_records_exceptions(err) except Exception as err: print("Error:", err) # upsert with higher version as new data is generated version = int(self._current_milli_time()) common_attributes["Version"] = version try: upsertedResult = self.client.write_records(DatabaseName=Constant.DATABASE_NAME, TableName=Constant.TABLE_NAME, Records=upsertedRecords, CommonAttributes=common_attributes) print("WriteRecords Upsert Status: [%s]" % upsertedResult['ResponseMetadata'] ['HTTPStatusCode']) except self.client.exceptions.RejectedRecordsException as err: self._print_rejected_records_exceptions(err) except Exception as err: print("Error:", err) @staticmethod def _current_milli_time(): Write data 160 Amazon Timestream Developer Guide return str(int(round(time.time() * 1000))) Node.js The following snippet uses the AWS SDK for JavaScript V2 style. It is based on the sample application at Node.js sample Amazon Timestream for LiveAnalytics application on GitHub. async function writeRecordsWithUpsert() { console.log("Writing records with upsert"); const currentTime = Date.now().toString(); // Unix time in milliseconds // To achieve upsert (last writer wins) semantic, one example is to use current time as the version if you are writing directly from the data source let version = Date.now(); const dimensions = [ {'Name': 'region', 'Value': 'us-east-1'}, {'Name': 'az', 'Value': 'az1'}, {'Name': 'hostname', 'Value': 'host1'} ]; const commonAttributes = { 'Dimensions': dimensions, 'MeasureValueType': 'DOUBLE', 'Time': currentTime.toString(), 'Version': version }; const cpuUtilization = { 'MeasureName': 'cpu_utilization', 'MeasureValue': '13.5' }; const memoryUtilization = { 'MeasureName': 'memory_utilization', 'MeasureValue': '40' }; const records = [cpuUtilization, memoryUtilization]; const params = { DatabaseName: constants.DATABASE_NAME, TableName: constants.TABLE_NAME, Records: records, Write data 161 Amazon Timestream Developer Guide CommonAttributes: commonAttributes }; const request = writeClient.writeRecords(params); // write records for first time await request.promise().then( (data) => { console.log("Write records successful for first time."); }, (err) => { console.log("Error writing records:", err); if (err.code === 'RejectedRecordsException') { printRejectedRecordsException(request); } } ); // Successfully retry same writeRecordsRequest with same records and versions, because writeRecords API is idempotent. await request.promise().then( (data) => { console.log("Write records successful for retry."); }, (err) => { console.log("Error writing records:", err); if (err.code === 'RejectedRecordsException') { printRejectedRecordsException(request); } } ); // upsert with lower version, this would fail because a higher version is required to update the measure value. 
version--; const commonAttributesWithLowerVersion = { 'Dimensions': dimensions, 'MeasureValueType': 'DOUBLE', 'Time': currentTime.toString(), 'Version': version }; const updatedCpuUtilization = { Write data
first time."); }, (err) => { console.log("Error writing records:", err); if (err.code === 'RejectedRecordsException') { printRejectedRecordsException(request); } } ); // Successfully retry same writeRecordsRequest with same records and versions, because writeRecords API is idempotent. await request.promise().then( (data) => { console.log("Write records successful for retry."); }, (err) => { console.log("Error writing records:", err); if (err.code === 'RejectedRecordsException') { printRejectedRecordsException(request); } } ); // upsert with lower version, this would fail because a higher version is required to update the measure value. version--; const commonAttributesWithLowerVersion = { 'Dimensions': dimensions, 'MeasureValueType': 'DOUBLE', 'Time': currentTime.toString(), 'Version': version }; const updatedCpuUtilization = { Write data 162 Amazon Timestream Developer Guide 'MeasureName': 'cpu_utilization', 'MeasureValue': '14.5' }; const updatedMemoryUtilization = { 'MeasureName': 'memory_utilization', 'MeasureValue': '50' }; const upsertedRecords = [updatedCpuUtilization, updatedMemoryUtilization]; const upsertedParamsWithLowerVersion = { DatabaseName: constants.DATABASE_NAME, TableName: constants.TABLE_NAME, Records: upsertedRecords, CommonAttributes: commonAttributesWithLowerVersion }; const upsertRequestWithLowerVersion = writeClient.writeRecords(upsertedParamsWithLowerVersion); await upsertRequestWithLowerVersion.promise().then( (data) => { console.log("Write records for upsert with lower version successful"); }, (err) => { console.log("Error writing records:", err); if (err.code === 'RejectedRecordsException') { printRejectedRecordsException(upsertRequestWithLowerVersion); } } ); // upsert with higher version as new data in generated version = Date.now(); const commonAttributesWithHigherVersion = { 'Dimensions': dimensions, 'MeasureValueType': 'DOUBLE', 'Time': currentTime.toString(), 'Version': version }; const upsertedParamsWithHigherVerion = { Write data 163 Amazon Timestream Developer Guide DatabaseName: constants.DATABASE_NAME, TableName: constants.TABLE_NAME, Records: upsertedRecords, CommonAttributes: commonAttributesWithHigherVersion }; const upsertRequestWithHigherVersion = writeClient.writeRecords(upsertedParamsWithHigherVerion); await upsertRequestWithHigherVersion.promise().then( (data) => { console.log("Write records upsert successful with higher version"); }, (err) => { console.log("Error writing records:", err); if (err.code === 'RejectedRecordsException') { printRejectedRecordsException(upsertedParamsWithHigherVerion); } } ); } .NET public async Task WriteRecordsWithUpsert() { Console.WriteLine("Writing records with upsert"); DateTimeOffset now = DateTimeOffset.UtcNow; string currentTimeString = (now.ToUnixTimeMilliseconds()).ToString(); // To achieve upsert (last writer wins) semantic, one example is to use current time as the version if you are writing directly from the data source long version = now.ToUnixTimeMilliseconds(); List<Dimension> dimensions = new List<Dimension>{ new Dimension { Name = "region", Value = "us-east-1" }, new Dimension { Name = "az", Value = "az1" }, new Dimension { Name = "hostname", Value = "host1" } }; var commonAttributes = new Record { Write data 164 Amazon Timestream Developer Guide Dimensions = dimensions, MeasureValueType = MeasureValueType.DOUBLE, Time = currentTimeString, Version = version }; var cpuUtilization = new Record { MeasureName = "cpu_utilization", MeasureValue = "13.6" }; var memoryUtilization = new Record { 
MeasureName = "memory_utilization", MeasureValue = "40" }; List<Record> records = new List<Record>(); records.Add(cpuUtilization); records.Add(memoryUtilization); // write records for first time try { var writeRecordsRequest = new WriteRecordsRequest { DatabaseName = Constants.DATABASE_NAME, TableName = Constants.TABLE_NAME, Records = records, CommonAttributes = commonAttributes }; WriteRecordsResponse response = await writeClient.WriteRecordsAsync(writeRecordsRequest); Console.WriteLine($"WriteRecords Status for first time: {response.HttpStatusCode.ToString()}"); } catch (RejectedRecordsException e) { PrintRejectedRecordsException(e); } catch (Exception e) { Console.WriteLine("Write records failure:" + e.ToString()); Write data 165 Amazon Timestream } Developer Guide // Successfully retry same writeRecordsRequest with same records and versions, because writeRecords API is idempotent. try { var writeRecordsRequest = new WriteRecordsRequest { DatabaseName = Constants.DATABASE_NAME, TableName = Constants.TABLE_NAME, Records = records, CommonAttributes = commonAttributes }; WriteRecordsResponse response = await writeClient.WriteRecordsAsync(writeRecordsRequest); Console.WriteLine($"WriteRecords Status for retry: {response.HttpStatusCode.ToString()}"); } catch (RejectedRecordsException e) { PrintRejectedRecordsException(e); } catch (Exception e) { Console.WriteLine("Write records failure:" + e.ToString()); } // upsert with lower version, this would fail because a higher version is required to update the measure value. version--; Type recordType = typeof(Record); recordType.GetProperty("Version").SetValue(commonAttributes, version); recordType.GetProperty("MeasureValue").SetValue(cpuUtilization, "14.6"); recordType.GetProperty("MeasureValue").SetValue(memoryUtilization, "50"); List<Record> upsertedRecords = new List<Record> { cpuUtilization, memoryUtilization }; try { var writeRecordsUpsertRequest = new WriteRecordsRequest { DatabaseName = Constants.DATABASE_NAME, Write data 166 Amazon Timestream Developer Guide TableName = Constants.TABLE_NAME, Records = upsertedRecords, CommonAttributes = commonAttributes }; WriteRecordsResponse upsertResponse = await writeClient.WriteRecordsAsync(writeRecordsUpsertRequest); Console.WriteLine($"WriteRecords Status for upsert with lower version: {upsertResponse.HttpStatusCode.ToString()}"); } catch (RejectedRecordsException e) { PrintRejectedRecordsException(e); } catch (Exception e) { Console.WriteLine("Write records failure:" + e.ToString()); } // upsert with higher version as new data in generated now = DateTimeOffset.UtcNow; version = now.ToUnixTimeMilliseconds(); recordType.GetProperty("Version").SetValue(commonAttributes, version); try { var writeRecordsUpsertRequest = new WriteRecordsRequest { DatabaseName = Constants.DATABASE_NAME, TableName = Constants.TABLE_NAME, Records = upsertedRecords, CommonAttributes = commonAttributes }; WriteRecordsResponse upsertResponse = await writeClient.WriteRecordsAsync(writeRecordsUpsertRequest); Console.WriteLine($"WriteRecords Status for upsert with higher version: {upsertResponse.HttpStatusCode.ToString()}"); } catch (RejectedRecordsException e) { PrintRejectedRecordsException(e); } catch (Exception e) { Console.WriteLine("Write records failure:" + e.ToString()); } Write data 167 Amazon Timestream } Multi-measure attribute example Developer Guide This example illustrates writing multi-mearure attributes. 
writing multi-mearure attributes. Multi-measure attributes are useful when a device or an application you are tracking emits multiple metrics or events at the same timestamp.. Note These code snippets are based on full sample applications on GitHub. For more information about how to get started with the sample applications, see Sample application. Java package com.amazonaws.services.timestream; import static com.amazonaws.services.timestream.Main.DATABASE_NAME; import static com.amazonaws.services.timestream.Main.REGION; import static com.amazonaws.services.timestream.Main.TABLE_NAME; import java.util.ArrayList; import java.util.List; import com.amazonaws.services.timestreamwrite.AmazonTimestreamWrite; import com.amazonaws.services.timestreamwrite.model.Dimension; import com.amazonaws.services.timestreamwrite.model.MeasureValue; import com.amazonaws.services.timestreamwrite.model.MeasureValueType; import com.amazonaws.services.timestreamwrite.model.Record; import com.amazonaws.services.timestreamwrite.model.RejectedRecordsException; import com.amazonaws.services.timestreamwrite.model.WriteRecordsRequest; import com.amazonaws.services.timestreamwrite.model.WriteRecordsResult; public class multimeasureAttributeExample { AmazonTimestreamWrite timestreamWriteClient; public multimeasureAttributeExample(AmazonTimestreamWrite client) { this.timestreamWriteClient = client; } Write data 168 Amazon Timestream Developer Guide public void writeRecordsMultiMeasureValueSingleRecord() { System.out.println("Writing records with multi value attributes"); List<Record> records = new ArrayList<>(); final long time = System.currentTimeMillis(); long version = System.currentTimeMillis(); List<Dimension> dimensions = new ArrayList<>(); final Dimension region = new Dimension().withName("region").withValue(REGION); final Dimension az = new Dimension().withName("az").withValue("az1"); final Dimension hostname = new Dimension().withName("hostname").withValue("host1"); dimensions.add(region); dimensions.add(az); dimensions.add(hostname); Record commonAttributes = new Record() .withDimensions(dimensions) .withTime(String.valueOf(time)) .withVersion(version); MeasureValue cpuUtilization = new MeasureValue() .withName("cpu_utilization") .withType(MeasureValueType.DOUBLE) .withValue("13.5"); MeasureValue memoryUtilization = new MeasureValue() .withName("memory_utilization") .withType(MeasureValueType.DOUBLE) .withValue("40"); Record computationalResources = new Record() .withMeasureName("cpu_memory") .withMeasureValues(cpuUtilization, memoryUtilization) .withMeasureValueType(MeasureValueType.MULTI); records.add(computationalResources); WriteRecordsRequest writeRecordsRequest = new WriteRecordsRequest() .withDatabaseName(DATABASE_NAME) .withTableName(TABLE_NAME) .withCommonAttributes(commonAttributes) .withRecords(records); Write data 169 Amazon Timestream Developer Guide // write records for first time try { WriteRecordsResult writeRecordResult = timestreamWriteClient.writeRecords(writeRecordsRequest); System.out.println( "WriteRecords Status for multi value attributes: " + writeRecordResult .getSdkHttpMetadata().getHttpStatusCode()); } catch (RejectedRecordsException e) { printRejectedRecordsException(e); } catch (Exception e) { System.out.println("Error: " + e); } } public void writeRecordsMultiMeasureValueMultipleRecords() { System.out.println( "Writing records with multi value attributes mixture type"); List<Record> records = new ArrayList<>(); final long time = System.currentTimeMillis(); long version = 
System.currentTimeMillis(); List<Dimension> dimensions = new ArrayList<>(); final Dimension region = new Dimension().withName("region").withValue(REGION); final Dimension az = new Dimension().withName("az").withValue("az1"); final Dimension hostname = new Dimension().withName("hostname").withValue("host1"); dimensions.add(region); dimensions.add(az); dimensions.add(hostname); Record commonAttributes = new Record() .withDimensions(dimensions) .withTime(String.valueOf(time)) .withVersion(version); MeasureValue cpuUtilization = new MeasureValue() .withName("cpu_utilization") .withType(MeasureValueType.DOUBLE) .withValue("13"); MeasureValue memoryUtilization =new MeasureValue() .withName("memory_utilization") .withType(MeasureValueType.DOUBLE) Write data 170 Amazon Timestream Developer Guide .withValue("40"); MeasureValue activeCores = new MeasureValue() .withName("active_cores") .withType(MeasureValueType.BIGINT) .withValue("4"); Record computationalResources = new Record() .withMeasureName("computational_utilization") .withMeasureValues(cpuUtilization, memoryUtilization, activeCores) .withMeasureValueType(MeasureValueType.MULTI); records.add(computationalResources); WriteRecordsRequest writeRecordsRequest = new WriteRecordsRequest() .withDatabaseName(DATABASE_NAME) .withTableName(TABLE_NAME) .withCommonAttributes(commonAttributes) .withRecords(records); // write records for first time try { WriteRecordsResult writeRecordResult = timestreamWriteClient.writeRecords(writeRecordsRequest); System.out.println( "WriteRecords Status for multi value attributes: " + writeRecordResult .getSdkHttpMetadata().getHttpStatusCode()); } catch (RejectedRecordsException e) { printRejectedRecordsException(e); } catch (Exception e) { System.out.println("Error: " + e); } } private void printRejectedRecordsException(RejectedRecordsException e) { System.out.println("RejectedRecords: " + e); e.getRejectedRecords().forEach(System.out::println); } } Java v2 package com.amazonaws.services.timestream; Write data 171 Amazon Timestream Developer Guide import java.util.ArrayList; import java.util.List; import software.amazon.awssdk.services.timestreamwrite.TimestreamWriteClient; import software.amazon.awssdk.services.timestreamwrite.model.Dimension; import software.amazon.awssdk.services.timestreamwrite.model.MeasureValue; import software.amazon.awssdk.services.timestreamwrite.model.MeasureValueType; import software.amazon.awssdk.services.timestreamwrite.model.Record; import software.amazon.awssdk.services.timestreamwrite.model.RejectedRecordsException; import software.amazon.awssdk.services.timestreamwrite.model.WriteRecordsRequest; import software.amazon.awssdk.services.timestreamwrite.model.WriteRecordsResponse; import static com.amazonaws.services.timestream.Main.DATABASE_NAME; import static com.amazonaws.services.timestream.Main.TABLE_NAME; public class multimeasureAttributeExample { TimestreamWriteClient timestreamWriteClient; public multimeasureAttributeExample(TimestreamWriteClient client) { this.timestreamWriteClient = client; } public void writeRecordsMultiMeasureValueSingleRecord() { System.out.println("Writing records with multi value attributes"); List<Record> records = new ArrayList<>(); final long time = System.currentTimeMillis(); long version = System.currentTimeMillis(); List<Dimension> dimensions = new ArrayList<>(); final Dimension region = Dimension.builder().name("region").value("us-east-1").build(); final Dimension az = Dimension.builder().name("az").value("az1").build(); final Dimension hostname = 
Dimension.builder().name("hostname").value("host1").build(); dimensions.add(region); dimensions.add(az); dimensions.add(hostname); Write data 172 Amazon Timestream Developer Guide Record commonAttributes = Record.builder() .dimensions(dimensions) .time(String.valueOf(time)) .version(version) .build(); MeasureValue cpuUtilization = MeasureValue.builder() .name("cpu_utilization") .type(MeasureValueType.DOUBLE) .value("13.5").build(); MeasureValue memoryUtilization = MeasureValue.builder() .name("memory_utilization") .type(MeasureValueType.DOUBLE) .value("40").build(); Record computationalResources = Record .builder() .measureName("cpu_memory") .measureValues(cpuUtilization, memoryUtilization) .measureValueType(MeasureValueType.MULTI) .build(); records.add(computationalResources); WriteRecordsRequest writeRecordsRequest = WriteRecordsRequest.builder() .databaseName(DATABASE_NAME) .tableName(TABLE_NAME) .commonAttributes(commonAttributes) .records(records).build(); // write records for first time try { WriteRecordsResponse writeRecordsResponse = timestreamWriteClient.writeRecords(writeRecordsRequest); System.out.println( "WriteRecords Status for multi value attributes: " + writeRecordsResponse .sdkHttpResponse() .statusCode()); } catch (RejectedRecordsException e) { printRejectedRecordsException(e); } catch (Exception e) { System.out.println("Error: " + e); } } Write data 173 Amazon Timestream Developer Guide public void writeRecordsMultiMeasureValueMultipleRecords() { System.out.println( "Writing records with multi value attributes mixture type"); List<Record> records = new ArrayList<>(); final long time = System.currentTimeMillis(); long version = System.currentTimeMillis(); List<Dimension> dimensions = new ArrayList<>(); final Dimension region = Dimension.builder().name("region").value("us-east-1").build(); final Dimension az = Dimension.builder().name("az").value("az1").build(); final Dimension hostname = Dimension.builder().name("hostname").value("host1").build(); dimensions.add(region); dimensions.add(az); dimensions.add(hostname); Record commonAttributes = Record.builder() .dimensions(dimensions) .time(String.valueOf(time)) .version(version) .build(); MeasureValue cpuUtilization = MeasureValue.builder() .name("cpu_utilization") .type(MeasureValueType.DOUBLE) .value("13.5").build(); MeasureValue memoryUtilization = MeasureValue.builder() .name("memory_utilization") .type(MeasureValueType.DOUBLE) .value("40").build(); MeasureValue activeCores = MeasureValue.builder() .name("active_cores") .type(MeasureValueType.BIGINT) .value("4").build(); Record computationalResources = Record .builder() .measureName("computational_utilization") .measureValues(cpuUtilization, memoryUtilization, activeCores) .measureValueType(MeasureValueType.MULTI) Write data 174 Amazon Timestream .build(); records.add(computationalResources); Developer Guide WriteRecordsRequest writeRecordsRequest = WriteRecordsRequest.builder() .databaseName(DATABASE_NAME) .tableName(TABLE_NAME) .commonAttributes(commonAttributes) .records(records).build(); // write records for first time try { WriteRecordsResponse writeRecordsResponse = timestreamWriteClient.writeRecords(writeRecordsRequest); System.out.println( "WriteRecords Status for multi value attributes: " + writeRecordsResponse .sdkHttpResponse() .statusCode()); } catch (RejectedRecordsException e) { printRejectedRecordsException(e); } catch (Exception e) { System.out.println("Error: " + e); } } private void printRejectedRecordsException(RejectedRecordsException e) { 
System.out.println("RejectedRecords: " + e); e.rejectedRecords().forEach(System.out::println); } } Go now := time.Now() currentTimeInSeconds := now.Unix() writeRecordsInput := ×treamwrite.WriteRecordsInput{ DatabaseName: aws.String(*databaseName), TableName: aws.String(*tableName), Records: []*timestreamwrite.Record{ ×treamwrite.Record{ Dimensions: []*timestreamwrite.Dimension{ ×treamwrite.Dimension{ Name: aws.String("region"), Write data 175 Amazon Timestream Developer Guide Value: aws.String("us-east-1"), }, ×treamwrite.Dimension{ Name: aws.String("az"), Value: aws.String("az1"), }, ×treamwrite.Dimension{
WriteRecordsRequest writeRecordsRequest = WriteRecordsRequest.builder() .databaseName(DATABASE_NAME) .tableName(TABLE_NAME) .commonAttributes(commonAttributes) .records(records).build(); // write records for first time try { WriteRecordsResponse writeRecordsResponse = timestreamWriteClient.writeRecords(writeRecordsRequest); System.out.println( "WriteRecords Status for multi value attributes: " + writeRecordsResponse .sdkHttpResponse() .statusCode()); } catch (RejectedRecordsException e) { printRejectedRecordsException(e); } catch (Exception e) { System.out.println("Error: " + e); } } private void printRejectedRecordsException(RejectedRecordsException e) { System.out.println("RejectedRecords: " + e); e.rejectedRecords().forEach(System.out::println); } } Go now := time.Now() currentTimeInSeconds := now.Unix() writeRecordsInput := ×treamwrite.WriteRecordsInput{ DatabaseName: aws.String(*databaseName), TableName: aws.String(*tableName), Records: []*timestreamwrite.Record{ ×treamwrite.Record{ Dimensions: []*timestreamwrite.Dimension{ ×treamwrite.Dimension{ Name: aws.String("region"), Write data 175 Amazon Timestream Developer Guide Value: aws.String("us-east-1"), }, ×treamwrite.Dimension{ Name: aws.String("az"), Value: aws.String("az1"), }, ×treamwrite.Dimension{ Name: aws.String("hostname"), Value: aws.String("host1"), }, }, MeasureName: aws.String("metrics"), MeasureValueType: aws.String("MULTI"), Time: aws.String(strconv.FormatInt(currentTimeInSeconds, 10)), TimeUnit: aws.String("SECONDS"), MeasureValues: []*timestreamwrite.MeasureValue{ ×treamwrite.MeasureValue{ Name: aws.String("cpu_utilization"), Value: aws.String("13.5"), Type: aws.String("DOUBLE"), }, ×treamwrite.MeasureValue{ Name: aws.String("memory_utilization"), Value: aws.String("40"), Type: aws.String("DOUBLE"), }, }, }, }, } _, err = writeSvc.WriteRecords(writeRecordsInput) if err != nil { fmt.Println("Error:") fmt.Println(err) } else { fmt.Println("Write records is successful") } Python import time Write data 176 Developer Guide Amazon Timestream import boto3 import psutil import os from botocore.config import Config DATABASE_NAME = os.environ['DATABASE_NAME'] TABLE_NAME = os.environ['TABLE_NAME'] COUNTRY = "UK" CITY = "London" HOSTNAME = "MyHostname" # You can make it dynamic using socket.gethostname() INTERVAL = 1 # Seconds def prepare_common_attributes(): common_attributes = { 'Dimensions': [ {'Name': 'country', 'Value': COUNTRY}, {'Name': 'city', 'Value': CITY}, {'Name': 'hostname', 'Value': HOSTNAME} ], 'MeasureName': 'utilization', 'MeasureValueType': 'MULTI' } return common_attributes def prepare_record(current_time): record = { 'Time': str(current_time), 'MeasureValues': [] } return record def prepare_measure(measure_name, measure_value): measure = { 'Name': measure_name, 'Value': str(measure_value), 'Type': 'DOUBLE' } return measure Write data 177 Amazon Timestream Developer Guide def write_records(records, common_attributes): try: result = write_client.write_records(DatabaseName=DATABASE_NAME, TableName=TABLE_NAME, CommonAttributes=common_attributes, Records=records) status = result['ResponseMetadata']['HTTPStatusCode'] print("Processed %d records. 
WriteRecords HTTPStatusCode: %s" % (len(records), status)) except Exception as err: print("Error:", err) if __name__ == '__main__': print("writing data to database {} table {}".format( DATABASE_NAME, TABLE_NAME)) session = boto3.Session() write_client = session.client('timestream-write', config=Config( read_timeout=20, max_pool_connections=5000, retries={'max_attempts': 10})) query_client = session.client('timestream-query') # Not used common_attributes = prepare_common_attributes() records = [] while True: current_time = int(time.time() * 1000) cpu_utilization = psutil.cpu_percent() memory_utilization = psutil.virtual_memory().percent swap_utilization = psutil.swap_memory().percent disk_utilization = psutil.disk_usage('/').percent record = prepare_record(current_time) record['MeasureValues'].append(prepare_measure('cpu', cpu_utilization)) record['MeasureValues'].append(prepare_measure('memory', memory_utilization)) record['MeasureValues'].append(prepare_measure('swap', swap_utilization)) record['MeasureValues'].append(prepare_measure('disk', disk_utilization)) records.append(record) Write data 178 Amazon Timestream Developer Guide print("records {} - cpu {} - memory {} - swap {} - disk {}".format( len(records), cpu_utilization, memory_utilization, swap_utilization, disk_utilization)) if len(records) == 100: write_records(records, common_attributes) records = [] time.sleep(INTERVAL) Node.js The following snippet uses the AWS SDK for JavaScript V2 style. It is based on the sample application at Node.js sample Amazon Timestream for LiveAnalytics application on GitHub. async function writeRecords() { console.log("Writing records"); const currentTime = Date.now().toString(); // Unix time in milliseconds const dimensions = [ {'Name': 'region', 'Value': 'us-east-1'}, {'Name': 'az', 'Value': 'az1'}, {'Name': 'hostname', 'Value': 'host1'} ]; const record = { 'Dimensions': dimensions, 'MeasureName': 'metrics', 'MeasureValues': [ { 'Name': 'cpu_utilization', 'Value': '40', 'Type': 'DOUBLE', }, { 'Name': 'memory_utilization', 'Value': '13.5', 'Type': 'DOUBLE', }, ], 'MeasureValueType': 'MULTI', 'Time': currentTime.toString() } Write data 179 Amazon Timestream Developer Guide const records = [record]; const params = { DatabaseName: 'DatabaseName', TableName: 'TableName', Records: records }; const response = await writeClient.writeRecords(params); console.log(response); } .NET using System; using System.IO; using System.Collections.Generic; using Amazon.TimestreamWrite; using Amazon.TimestreamWrite.Model; using System.Threading.Tasks; namespace TimestreamDotNetSample { static class MultiMeasureValueConstants { public const string MultiMeasureValueSampleDb = "multiMeasureValueSampleDb"; public const string MultiMeasureValueSampleTable = "multiMeasureValueSampleTable"; } public class MultiValueAttributesExample { private readonly AmazonTimestreamWriteClient writeClient; public MultiValueAttributesExample(AmazonTimestreamWriteClient writeClient) { this.writeClient = writeClient; } public async Task WriteRecordsMultiMeasureValueSingleRecord() { Write data 180 Amazon Timestream Developer Guide Console.WriteLine("Writing records with multi value attributes"); DateTimeOffset now = DateTimeOffset.UtcNow; string currentTimeString = (now.ToUnixTimeMilliseconds()).ToString(); List<Dimension> dimensions = new List<Dimension>{ new Dimension { Name = "region", Value = "us-east-1" }, new Dimension { Name = "az", Value = "az1" }, new Dimension { Name = "hostname", Value = "host1" } }; var commonAttributes = new Record 
{ Dimensions = dimensions, Time = currentTimeString }; var cpuUtilization = new MeasureValue { Name = "cpu_utilization", Value = "13.6", Type = "DOUBLE" }; var memoryUtilization = new MeasureValue { Name = "memory_utilization", Value = "40", Type = "DOUBLE" }; var computationalRecord = new Record { MeasureName = "cpu_memory", MeasureValues = new List<MeasureValue> {cpuUtilization, memoryUtilization}, MeasureValueType = "MULTI" }; List<Record> records = new List<Record>(); records.Add(computationalRecord); try
{ Write data 181 Amazon Timestream Developer Guide var writeRecordsRequest = new WriteRecordsRequest { DatabaseName = MultiMeasureValueConstants.MultiMeasureValueSampleDb, TableName = MultiMeasureValueConstants.MultiMeasureValueSampleTable, Records = records, CommonAttributes = commonAttributes }; WriteRecordsResponse response = await writeClient.WriteRecordsAsync(writeRecordsRequest); Console.WriteLine($"Write records status code: {response.HttpStatusCode.ToString()}"); } catch (Exception e) { Console.WriteLine("Write records failure:" + e.ToString()); } } public async Task WriteRecordsMultiMeasureValueMultipleRecords() { Console.WriteLine("Writing records with multi value attributes mixture type"); DateTimeOffset now = DateTimeOffset.UtcNow; string currentTimeString = (now.ToUnixTimeMilliseconds()).ToString(); List<Dimension> dimensions = new List<Dimension>{ new Dimension { Name = "region", Value = "us-east-1" }, new Dimension { Name = "az", Value = "az1" }, new Dimension { Name = "hostname", Value = "host1" } }; var commonAttributes = new Record { Dimensions = dimensions, Time = currentTimeString }; var cpuUtilization = new MeasureValue { Name = "cpu_utilization", Value = "13.6", Type = "DOUBLE" }; Write data 182 Amazon Timestream Developer Guide var memoryUtilization = new MeasureValue { Name = "memory_utilization", Value = "40", Type = "DOUBLE" }; var activeCores = new MeasureValue { Name = "active_cores", Value = "4", Type = "BIGINT" }; var computationalRecord = new Record { MeasureName = "computational_utilization", MeasureValues = new List<MeasureValue> {cpuUtilization, memoryUtilization, activeCores}, MeasureValueType = "MULTI" }; var aliveRecord = new Record { MeasureName = "is_healthy", MeasureValue = "true", MeasureValueType = "BOOLEAN" }; List<Record> records = new List<Record>(); records.Add(computationalRecord); records.Add(aliveRecord); try { var writeRecordsRequest = new WriteRecordsRequest { DatabaseName = MultiMeasureValueConstants.MultiMeasureValueSampleDb, TableName = MultiMeasureValueConstants.MultiMeasureValueSampleTable, Records = records, CommonAttributes = commonAttributes }; WriteRecordsResponse response = await writeClient.WriteRecordsAsync(writeRecordsRequest); Write data 183 Amazon Timestream Developer Guide Console.WriteLine($"Write records status code: {response.HttpStatusCode.ToString()}"); } catch (Exception e) { Console.WriteLine("Write records failure:" + e.ToString()); } } } } Handling write failures Writes in Amazon Timestream can fail for one or more of the following reasons: • There are records with timestamps that lie outside the retention duration of the memory store. • There are records containing dimensions and/or measures that exceed the Timestream defined limits. • Amazon Timestream has detected duplicate records. Records are marked as duplicate, when there are multiple records with the same dimensions, timestamps, and measure names but: • Measure values are different. • Version is not present in the request or the value of version in the new record is equal to or lower than the existing value. If Amazon Timestream rejects data for this reason, the ExistingVersion field in the RejectedRecords will contain the record's current version as stored in Amazon Timestream. To force an update, you can resend the request with a version for the record set to a value greater than the ExistingVersion. For more information about errors and rejected records, see Errors and RejectedRecord. 
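As an illustration of that retry path, the following is a minimal sketch, not taken from the sample applications, that resends rejected records with a version greater than the ExistingVersion values Timestream returns. It assumes a boto3 timestream-write client and the records and common_attributes structures from the earlier Python upsert example; the helper name write_with_version_retry is hypothetical, and ExistingVersion is only present when a record was rejected for version reasons.

def write_with_version_retry(write_client, database_name, table_name, records, common_attributes):
    try:
        result = write_client.write_records(DatabaseName=database_name,
                                            TableName=table_name,
                                            Records=records,
                                            CommonAttributes=common_attributes)
        print("WriteRecords Status: [%s]" % result['ResponseMetadata']['HTTPStatusCode'])
    except write_client.exceptions.RejectedRecordsException as err:
        rejected = err.response.get("RejectedRecords", [])
        # ExistingVersion is returned only for version-related rejections.
        existing_versions = [rr["ExistingVersion"] for rr in rejected if "ExistingVersion" in rr]
        if not existing_versions:
            raise  # rejected for another reason, for example a timestamp outside the retention window
        # Resend with a version greater than every ExistingVersion to force the update.
        common_attributes["Version"] = max(existing_versions) + 1
        retry_result = write_client.write_records(DatabaseName=database_name,
                                                  TableName=table_name,
                                                  Records=records,
                                                  CommonAttributes=common_attributes)
        print("WriteRecords retry Status: [%s]" % retry_result['ResponseMetadata']['HTTPStatusCode'])

A caller would construct the client the same way the sample application does, for example write_client = boto3.Session().client('timestream-write'), before passing it to this helper.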
If your application receives a RejectedRecordsException when attempting to write records to Timestream, you can parse the rejected records to learn more about the write failures as shown below. Note These code snippets are based on full sample applications on GitHub. For more information about how to get started with the sample applications, see Sample application. Write data 184 Amazon Timestream Java try { Developer Guide WriteRecordsResult writeRecordsResult = amazonTimestreamWrite.writeRecords(writeRecordsRequest); System.out.println("WriteRecords Status: " + writeRecordsResult.getSdkHttpMetadata().getHttpStatusCode()); } catch (RejectedRecordsException e) { System.out.println("RejectedRecords: " + e); for (RejectedRecord rejectedRecord : e.getRejectedRecords()) { System.out.println("Rejected Index " + rejectedRecord.getRecordIndex() + ": " + rejectedRecord.getReason()); } System.out.println("Other records were written successfully. "); } catch (Exception e) { System.out.println("Error: " + e); } Java v2 try { WriteRecordsResponse writeRecordsResponse = timestreamWriteClient.writeRecords(writeRecordsRequest); System.out.println("writeRecordsWithCommonAttributes Status: " + writeRecordsResponse.sdkHttpResponse().statusCode()); } catch (RejectedRecordsException e) { System.out.println("RejectedRecords: " + e); for (RejectedRecord rejectedRecord : e.rejectedRecords()) { System.out.println("Rejected Index " + rejectedRecord.recordIndex() + ": " + rejectedRecord.reason()); } System.out.println("Other records were written successfully. "); } catch (Exception e) { System.out.println("Error: " + e); } Go _, err = writeSvc.WriteRecords(writeRecordsInput) if err != nil { fmt.Println("Error:") fmt.Println(err) Write data 185 Amazon Timestream } else { fmt.Println("Write records is successful") } Developer Guide Python try: result = self.client.write_records(DatabaseName=Constant.DATABASE_NAME, TableName=Constant.TABLE_NAME, Records=records, CommonAttributes=common_attributes) print("WriteRecords Status: [%s]" % result['ResponseMetadata']['HTTPStatusCode']) except self.client.exceptions.RejectedRecordsException as err: print("RejectedRecords: ", err) for rr in err.response["RejectedRecords"]: print("Rejected Index " + str(rr["RecordIndex"]) + ": " + rr["Reason"]) print("Other records were written successfully. ") except Exception as err: print("Error:", err) Node.js The following snippet uses the AWS SDK for JavaScript V2 style. It is based on the sample application at Node.js sample Amazon Timestream for LiveAnalytics application on GitHub. await request.promise().then( (data) => { console.log("Write records successful"); }, (err) => { console.log("Error writing records:", err); if (err.code === 'RejectedRecordsException') { const responsePayload = JSON.parse(request.response.httpResponse.body.toString()); console.log("RejectedRecords: ", responsePayload.RejectedRecords); console.log("Other records were written successfully. 
"); } } ); .NET try { Write data 186 Amazon Timestream Developer Guide var writeRecordsRequest = new WriteRecordsRequest { DatabaseName = Constants.DATABASE_NAME, TableName = Constants.TABLE_NAME, Records = records, CommonAttributes = commonAttributes }; WriteRecordsResponse response = await writeClient.WriteRecordsAsync(writeRecordsRequest); Console.WriteLine($"Write records status code: {response.HttpStatusCode.ToString()}"); } catch (RejectedRecordsException e) { Console.WriteLine("RejectedRecordsException:" + e.ToString()); foreach (RejectedRecord rr in e.RejectedRecords) { Console.WriteLine("RecordIndex " +
rr.RecordIndex + " : " + rr.Reason); } Console.WriteLine("Other records were written successfully. "); } catch (Exception e) { Console.WriteLine("Write records failure:" + e.ToString()); }
Run query
Topics
• Paginating results
• Parsing result sets
• Accessing the query status
Paginating results
When you run a query, Timestream returns the result set in a paginated manner to optimize the responsiveness of your applications. The code snippet below shows how you can paginate through the result set. You must loop through all the result set pages until you encounter a null value. Pagination tokens expire 3 hours after being issued by Timestream for LiveAnalytics.
Note
These code snippets are based on full sample applications on GitHub. For more information about how to get started with the sample applications, see Sample application.
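Before the language-specific examples, here is a minimal sketch of that loop written with an explicit NextToken. It is not part of the sample applications; it assumes a boto3 timestream-query client named query_client and a parse_query_result helper such as the ones shown later under Parsing result sets, and the function name run_query_all_pages is hypothetical.

def run_query_all_pages(query_client, query_string, parse_query_result):
    params = {'QueryString': query_string}
    while True:
        response = query_client.query(**params)
        parse_query_result(response)
        next_token = response.get('NextToken')
        if next_token is None:
            break  # no more pages in the result set
        params['NextToken'] = next_token

A caller would create the client as in the sample application's setup, for example query_client = boto3.Session().client('timestream-query'). The boto3 paginator used in the Python example below wraps this same NextToken handling for you.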
Java private void runQuery(String queryString) { try { QueryRequest queryRequest = new QueryRequest(); queryRequest.setQueryString(queryString); QueryResult queryResult = queryClient.query(queryRequest); while (true) { parseQueryResult(queryResult); if (queryResult.getNextToken() == null) { break; } queryRequest.setNextToken(queryResult.getNextToken()); queryResult = queryClient.query(queryRequest); } } catch (Exception e) { // Some queries might fail with 500 if the result of a sequence function has more than 10000 entries e.printStackTrace(); } } Java v2 private void runQuery(String queryString) { try { QueryRequest queryRequest = QueryRequest.builder().queryString(queryString).build(); final QueryIterable queryResponseIterator = timestreamQueryClient.queryPaginator(queryRequest); for(QueryResponse queryResponse : queryResponseIterator) { parseQueryResult(queryResponse); } } catch (Exception e) { // Some queries might fail with 500 if the result of a sequence function has more than 10000 entries Run query 188 Amazon Timestream Developer Guide e.printStackTrace(); } } Go func runQuery(queryPtr *string, querySvc *timestreamquery.TimestreamQuery, f *os.File) { queryInput := ×treamquery.QueryInput{ QueryString: aws.String(*queryPtr), } fmt.Println("QueryInput:") fmt.Println(queryInput) // execute the query err := querySvc.QueryPages(queryInput, func(page *timestreamquery.QueryOutput, lastPage bool) bool { // process query response queryStatus := page.QueryStatus fmt.Println("Current query status:", queryStatus) // query response metadata // includes column names and types metadata := page.ColumnInfo // fmt.Println("Metadata:") fmt.Println(metadata) header := "" for i := 0; i < len(metadata); i++ { header += *metadata[i].Name if i != len(metadata)-1 { header += ", " } } write(f, header) // query response data fmt.Println("Data:") // process rows rows := page.Rows for i := 0; i < len(rows); i++ { data := rows[i].Data value := processRowType(data, metadata) fmt.Println(value) write(f, value) } Run query 189 Amazon Timestream Developer Guide fmt.Println("Number of rows:", len(page.Rows)) return true }) if err != nil { fmt.Println("Error:") fmt.Println(err) } } Python def run_query(self, query_string): try: page_iterator = self.paginator.paginate(QueryString=query_string) for page in page_iterator: self._parse_query_result(page) except Exception as err: print("Exception while running query:", err) Node.js The following snippet uses the AWS SDK for JavaScript V2 style. It is based on the sample application at Node.js sample Amazon Timestream for LiveAnalytics application on GitHub. 
async function getAllRows(query, nextToken) { const params = { QueryString: query }; if (nextToken) { params.NextToken = nextToken; } await queryClient.query(params).promise() .then( (response) => { parseQueryResult(response); if (response.NextToken) { getAllRows(query, response.NextToken); } }, (err) => { console.error("Error while querying:", err); Run query 190 Amazon Timestream }); } .NET Developer Guide private async Task RunQueryAsync(string queryString) { try { QueryRequest queryRequest = new QueryRequest(); queryRequest.QueryString = queryString; QueryResponse queryResponse = await queryClient.QueryAsync(queryRequest); while (true) { ParseQueryResult(queryResponse); if (queryResponse.NextToken == null) { break; } queryRequest.NextToken = queryResponse.NextToken; queryResponse = await queryClient.QueryAsync(queryRequest); } } catch(Exception e) { // Some queries might fail with 500 if the result of a sequence function has more than 10000 entries Console.WriteLine(e.ToString()); } } Parsing result sets You can use the following code snippets to extract data from the result set. Query results are accessible for up to 24 hours after a query completes. Note These code snippets are based on full sample applications on GitHub. For more information about how to get started with the sample applications, see Sample application. Run query 191 Amazon Timestream Java Developer Guide private static final DateTimeFormatter TIMESTAMP_FORMATTER = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSSSSSSSS"); private static final DateTimeFormatter DATE_FORMATTER = DateTimeFormatter.ofPattern("yyyy-MM-dd"); private static final DateTimeFormatter TIME_FORMATTER = DateTimeFormatter.ofPattern("HH:mm:ss.SSSSSSSSS"); private static final long ONE_GB_IN_BYTES = 1073741824L; private void parseQueryResult(QueryResult response) { final QueryStatus currentStatusOfQuery = queryResult.getQueryStatus(); System.out.println("Query progress so far: " + currentStatusOfQuery.getProgressPercentage() + "%"); double bytesScannedSoFar = ((double) currentStatusOfQuery.getCumulativeBytesScanned() / ONE_GB_IN_BYTES); System.out.println("Bytes scanned so far: " + bytesScannedSoFar + " GB"); double bytesMeteredSoFar = ((double) currentStatusOfQuery.getCumulativeBytesMetered() / ONE_GB_IN_BYTES);
full sample applications on GitHub. For more information about how to get started with the sample applications, see Sample application. Run query 191 Amazon Timestream Java Developer Guide private static final DateTimeFormatter TIMESTAMP_FORMATTER = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSSSSSSSS"); private static final DateTimeFormatter DATE_FORMATTER = DateTimeFormatter.ofPattern("yyyy-MM-dd"); private static final DateTimeFormatter TIME_FORMATTER = DateTimeFormatter.ofPattern("HH:mm:ss.SSSSSSSSS"); private static final long ONE_GB_IN_BYTES = 1073741824L; private void parseQueryResult(QueryResult response) { final QueryStatus currentStatusOfQuery = queryResult.getQueryStatus(); System.out.println("Query progress so far: " + currentStatusOfQuery.getProgressPercentage() + "%"); double bytesScannedSoFar = ((double) currentStatusOfQuery.getCumulativeBytesScanned() / ONE_GB_IN_BYTES); System.out.println("Bytes scanned so far: " + bytesScannedSoFar + " GB"); double bytesMeteredSoFar = ((double) currentStatusOfQuery.getCumulativeBytesMetered() / ONE_GB_IN_BYTES); System.out.println("Bytes metered so far: " + bytesMeteredSoFar + " GB"); List<ColumnInfo> columnInfo = response.getColumnInfo(); List<Row> rows = response.getRows(); System.out.println("Metadata: " + columnInfo); System.out.println("Data: "); // iterate every row for (Row row : rows) { System.out.println(parseRow(columnInfo, row)); } } private String parseRow(List<ColumnInfo> columnInfo, Row row) { List<Datum> data = row.getData(); List<String> rowOutput = new ArrayList<>(); // iterate every column per row for (int j = 0; j < data.size(); j++) { ColumnInfo info = columnInfo.get(j); Datum datum = data.get(j); Run query 192 Amazon Timestream Developer Guide rowOutput.add(parseDatum(info, datum)); } return String.format("{%s}", rowOutput.stream().map(Object::toString).collect(Collectors.joining(","))); } private String parseDatum(ColumnInfo info, Datum datum) { if (datum.isNullValue() != null && datum.isNullValue()) { return info.getName() + "=" + "NULL"; } Type columnType = info.getType(); // If the column is of TimeSeries Type if (columnType.getTimeSeriesMeasureValueColumnInfo() != null) { return parseTimeSeries(info, datum); } // If the column is of Array Type else if (columnType.getArrayColumnInfo() != null) { List<Datum> arrayValues = datum.getArrayValue(); return info.getName() + "=" + parseArray(info.getType().getArrayColumnInfo(), arrayValues); } // If the column is of Row Type else if (columnType.getRowColumnInfo() != null) { List<ColumnInfo> rowColumnInfo = info.getType().getRowColumnInfo(); Row rowValues = datum.getRowValue(); return parseRow(rowColumnInfo, rowValues); } // If the column is of Scalar Type else { return parseScalarType(info, datum); } } private String parseTimeSeries(ColumnInfo info, Datum datum) { List<String> timeSeriesOutput = new ArrayList<>(); for (TimeSeriesDataPoint dataPoint : datum.getTimeSeriesValue()) { timeSeriesOutput.add("{time=" + dataPoint.getTime() + ", value=" + parseDatum(info.getType().getTimeSeriesMeasureValueColumnInfo(), dataPoint.getValue()) + "}"); } return String.format("[%s]", timeSeriesOutput.stream().map(Object::toString).collect(Collectors.joining(","))); } Run query 193 Amazon Timestream Developer Guide private String parseScalarType(ColumnInfo info, Datum datum) { switch (ScalarType.fromValue(info.getType().getScalarType())) { case VARCHAR: return parseColumnName(info) + datum.getScalarValue(); case BIGINT: Long longValue = Long.valueOf(datum.getScalarValue()); return 
parseColumnName(info) + longValue; case INTEGER: Integer intValue = Integer.valueOf(datum.getScalarValue()); return parseColumnName(info) + intValue; case BOOLEAN: Boolean booleanValue = Boolean.valueOf(datum.getScalarValue()); return parseColumnName(info) + booleanValue; case DOUBLE: Double doubleValue = Double.valueOf(datum.getScalarValue()); return parseColumnName(info) + doubleValue; case TIMESTAMP: return parseColumnName(info) + LocalDateTime.parse(datum.getScalarValue(), TIMESTAMP_FORMATTER); case DATE: return parseColumnName(info) + LocalDate.parse(datum.getScalarValue(), DATE_FORMATTER); case TIME: return parseColumnName(info) + LocalTime.parse(datum.getScalarValue(), TIME_FORMATTER); case INTERVAL_DAY_TO_SECOND: case INTERVAL_YEAR_TO_MONTH: return parseColumnName(info) + datum.getScalarValue(); case UNKNOWN: return parseColumnName(info) + datum.getScalarValue(); default: throw new IllegalArgumentException("Given type is not valid: " + info.getType().getScalarType()); } } private String parseColumnName(ColumnInfo info) { return info.getName() == null ? "" : info.getName() + "="; } private String parseArray(ColumnInfo arrayColumnInfo, List<Datum> arrayValues) { List<String> arrayOutput = new ArrayList<>(); for (Datum datum : arrayValues) { arrayOutput.add(parseDatum(arrayColumnInfo, datum)); Run query 194 Amazon Timestream } Developer Guide return String.format("[%s]", arrayOutput.stream().map(Object::toString).collect(Collectors.joining(","))); } Java v2 private static final long ONE_GB_IN_BYTES = 1073741824L; private void parseQueryResult(QueryResponse response) { final QueryStatus currentStatusOfQuery = response.queryStatus(); System.out.println("Query progress so far: " + currentStatusOfQuery.progressPercentage() + "%"); double bytesScannedSoFar = ((double) currentStatusOfQuery.cumulativeBytesScanned() / ONE_GB_IN_BYTES); System.out.println("Bytes scanned so far: " + bytesScannedSoFar + " GB"); double bytesMeteredSoFar = ((double) currentStatusOfQuery.cumulativeBytesMetered() / ONE_GB_IN_BYTES); System.out.println("Bytes metered so far: " + bytesMeteredSoFar + " GB"); List<ColumnInfo> columnInfo = response.columnInfo(); List<Row> rows = response.rows(); System.out.println("Metadata: " + columnInfo); System.out.println("Data: "); // iterate every row for (Row row : rows) { System.out.println(parseRow(columnInfo, row)); } } private String parseRow(List<ColumnInfo> columnInfo, Row row) { List<Datum> data = row.data(); List<String> rowOutput = new ArrayList<>(); // iterate every column per row for (int j = 0; j < data.size(); j++) { ColumnInfo info = columnInfo.get(j); Datum datum = data.get(j); rowOutput.add(parseDatum(info, datum)); Run query 195 Amazon Timestream } Developer Guide return String.format("{%s}", rowOutput.stream().map(Object::toString).collect(Collectors.joining(","))); } private String parseDatum(ColumnInfo info, Datum datum) { if (datum.nullValue() != null && datum.nullValue()) { return info.name() + "=" + "NULL"; } Type columnType = info.type(); // If the column is of TimeSeries Type if (columnType.timeSeriesMeasureValueColumnInfo() != null) { return parseTimeSeries(info, datum); } // If the column is of Array Type else if (columnType.arrayColumnInfo() != null) { List<Datum> arrayValues = datum.arrayValue(); return info.name() + "=" + parseArray(info.type().arrayColumnInfo(), arrayValues); } // If the column is of Row Type else if (columnType.rowColumnInfo() != null && columnType.rowColumnInfo().size() > 0) { List<ColumnInfo> rowColumnInfo = 
+ "=" + parseArray(info.type().arrayColumnInfo(), arrayValues); } // If the column is of Row Type else if (columnType.rowColumnInfo() != null && columnType.rowColumnInfo().size() > 0) { List<ColumnInfo> rowColumnInfo = info.type().rowColumnInfo(); Row rowValues = datum.rowValue(); return parseRow(rowColumnInfo, rowValues); } // If the column is of Scalar Type else { return parseScalarType(info, datum); } } private String parseTimeSeries(ColumnInfo info, Datum datum) { List<String> timeSeriesOutput = new ArrayList<>(); for (TimeSeriesDataPoint dataPoint : datum.timeSeriesValue()) { timeSeriesOutput.add("{time=" + dataPoint.time() + ", value=" + parseDatum(info.type().timeSeriesMeasureValueColumnInfo(), dataPoint.value()) + "}"); } return String.format("[%s]", timeSeriesOutput.stream().map(Object::toString).collect(Collectors.joining(","))); } Run query 196 Amazon Timestream Developer Guide private String parseScalarType(ColumnInfo info, Datum datum) { return parseColumnName(info) + datum.scalarValue(); } private String parseColumnName(ColumnInfo info) { return info.name() == null ? "" : info.name() + "="; } private String parseArray(ColumnInfo arrayColumnInfo, List<Datum> arrayValues) { List<String> arrayOutput = new ArrayList<>(); for (Datum datum : arrayValues) { arrayOutput.add(parseDatum(arrayColumnInfo, datum)); } return String.format("[%s]", arrayOutput.stream().map(Object::toString).collect(Collectors.joining(","))); } Go func processScalarType(data *timestreamquery.Datum) string { return *data.ScalarValue } func processTimeSeriesType(data []*timestreamquery.TimeSeriesDataPoint, columnInfo *timestreamquery.ColumnInfo) string { value := "" for k := 0; k < len(data); k++ { time := data[k].Time value += *time + ":" if columnInfo.Type.ScalarType != nil { value += processScalarType(data[k].Value) } else if columnInfo.Type.ArrayColumnInfo != nil { value += processArrayType(data[k].Value.ArrayValue, columnInfo.Type.ArrayColumnInfo) } else if columnInfo.Type.RowColumnInfo != nil { value += processRowType(data[k].Value.RowValue.Data, columnInfo.Type.RowColumnInfo) } else { fail("Bad data type") } if k != len(data)-1 { value += ", " } Run query 197 Amazon Timestream } return value } Developer Guide func processArrayType(datumList []*timestreamquery.Datum, columnInfo *timestreamquery.ColumnInfo) string { value := "" for k := 0; k < len(datumList); k++ { if columnInfo.Type.ScalarType != nil { value += processScalarType(datumList[k]) } else if columnInfo.Type.TimeSeriesMeasureValueColumnInfo != nil { value += processTimeSeriesType(datumList[k].TimeSeriesValue, columnInfo.Type.TimeSeriesMeasureValueColumnInfo) } else if columnInfo.Type.ArrayColumnInfo != nil { value += "[" value += processArrayType(datumList[k].ArrayValue, columnInfo.Type.ArrayColumnInfo) value += "]" } else if columnInfo.Type.RowColumnInfo != nil { value += "[" value += processRowType(datumList[k].RowValue.Data, columnInfo.Type.RowColumnInfo) value += "]" } else { fail("Bad column type") } if k != len(datumList)-1 { value += ", " } } return value } func processRowType(data []*timestreamquery.Datum, metadata []*timestreamquery.ColumnInfo) string { value := "" for j := 0; j < len(data); j++ { if metadata[j].Type.ScalarType != nil { // process simple data types value += processScalarType(data[j]) } else if metadata[j].Type.TimeSeriesMeasureValueColumnInfo != nil { // fmt.Println("Timeseries measure value column info") // fmt.Println(metadata[j].Type.TimeSeriesMeasureValueColumnInfo.Type) Run query 198 Amazon Timestream Developer 
Guide datapointList := data[j].TimeSeriesValue value += "[" value += processTimeSeriesType(datapointList, metadata[j].Type.TimeSeriesMeasureValueColumnInfo) value += "]" } else if metadata[j].Type.ArrayColumnInfo != nil { columnInfo := metadata[j].Type.ArrayColumnInfo // fmt.Println("Array column info") // fmt.Println(columnInfo) datumList := data[j].ArrayValue value += "[" value += processArrayType(datumList, columnInfo) value += "]" } else if metadata[j].Type.RowColumnInfo != nil { columnInfo := metadata[j].Type.RowColumnInfo datumList := data[j].RowValue.Data value += "[" value += processRowType(datumList, columnInfo) value += "]" } else { panic("Bad column type") } // comma seperated column values if j != len(data)-1 { value += ", " } } return value } Python def _parse_query_result(self, query_result): query_status = query_result["QueryStatus"] progress_percentage = query_status["ProgressPercentage"] print(f"Query progress so far: {progress_percentage}%") bytes_scanned = float(query_status["CumulativeBytesScanned"]) / ONE_GB_IN_BYTES print(f"Data scanned so far: {bytes_scanned} GB") Run query 199 Amazon Timestream Developer Guide bytes_metered = float(query_status["CumulativeBytesMetered"]) / ONE_GB_IN_BYTES print(f"Data metered so far: {bytes_metered} GB") column_info = query_result['ColumnInfo'] print("Metadata: %s" % column_info) print("Data: ") for row in query_result['Rows']: print(self._parse_row(column_info, row)) def _parse_row(self, column_info, row): data = row['Data'] row_output = [] for j in range(len(data)): info = column_info[j] datum = data[j] row_output.append(self._parse_datum(info, datum)) return "{%s}" % str(row_output) def _parse_datum(self, info, datum): if datum.get('NullValue', False): return "%s=NULL" % info['Name'], column_type = info['Type'] # If the column is of TimeSeries Type if 'TimeSeriesMeasureValueColumnInfo' in column_type: return self._parse_time_series(info, datum) # If the column is of Array Type elif 'ArrayColumnInfo' in column_type: array_values = datum['ArrayValue'] return "%s=%s" % (info['Name'], self._parse_array(info['Type'] ['ArrayColumnInfo'], array_values)) # If the column is of Row Type elif 'RowColumnInfo' in column_type: row_column_info = info['Type']['RowColumnInfo'] row_values = datum['RowValue'] return self._parse_row(row_column_info, row_values) # If the column is of Scalar Type Run query 200 Amazon Timestream else: Developer Guide return self._parse_column_name(info) + datum['ScalarValue'] def _parse_time_series(self, info, datum): time_series_output = [] for data_point in datum['TimeSeriesValue']: time_series_output.append("{time=%s, value=%s}" % (data_point['Time'], self._parse_datum(info['Type'] ['TimeSeriesMeasureValueColumnInfo'], data_point['Value']))) return "[%s]" % str(time_series_output) def _parse_array(self, array_column_info, array_values): array_output = [] for datum in array_values: array_output.append(self._parse_datum(array_column_info, datum)) return "[%s]" % str(array_output) @staticmethod def _parse_column_name(info): if 'Name' in info: return info['Name'] + "=" else: return "" Node.js The following snippet uses the AWS SDK for JavaScript V2 style. It is based on the sample application at Node.js sample Amazon Timestream for LiveAnalytics application on GitHub. 
return info['Name'] + "=" else: return "" Node.js The following snippet uses the AWS SDK for JavaScript V2 style. It is based on the sample application at Node.js sample Amazon Timestream for LiveAnalytics application on GitHub. function parseQueryResult(response) { const queryStatus = response.QueryStatus; console.log("Current query status: " + JSON.stringify(queryStatus)); const columnInfo = response.ColumnInfo; const rows = response.Rows; console.log("Metadata: " + JSON.stringify(columnInfo)); console.log("Data: "); rows.forEach(function (row) { Run query 201 Amazon Timestream Developer Guide console.log(parseRow(columnInfo, row)); }); } function parseRow(columnInfo, row) { const data = row.Data; const rowOutput = []; var i; for ( i = 0; i < data.length; i++ ) { info = columnInfo[i]; datum = data[i]; rowOutput.push(parseDatum(info, datum)); } return `{${rowOutput.join(", ")}}` } function parseDatum(info, datum) { if (datum.NullValue != null && datum.NullValue === true) { return `${info.Name}=NULL`; } const columnType = info.Type; // If the column is of TimeSeries Type if (columnType.TimeSeriesMeasureValueColumnInfo != null) { return parseTimeSeries(info, datum); } // If the column is of Array Type else if (columnType.ArrayColumnInfo != null) { const arrayValues = datum.ArrayValue; return `${info.Name}=${parseArray(info.Type.ArrayColumnInfo, arrayValues)}`; } // If the column is of Row Type else if (columnType.RowColumnInfo != null) { const rowColumnInfo = info.Type.RowColumnInfo; const rowValues = datum.RowValue; return parseRow(rowColumnInfo, rowValues); } // If the column is of Scalar Type else { return parseScalarType(info, datum); } Run query 202 Amazon Timestream } Developer Guide function parseTimeSeries(info, datum) { const timeSeriesOutput = []; datum.TimeSeriesValue.forEach(function (dataPoint) { timeSeriesOutput.push(`{time=${dataPoint.Time}, value= ${parseDatum(info.Type.TimeSeriesMeasureValueColumnInfo, dataPoint.Value)}}`) }); return `[${timeSeriesOutput.join(", ")}]` } function parseScalarType(info, datum) { return parseColumnName(info) + datum.ScalarValue; } function parseColumnName(info) { return info.Name == null ? 
"" : `${info.Name}=`; } function parseArray(arrayColumnInfo, arrayValues) { const arrayOutput = []; arrayValues.forEach(function (datum) { arrayOutput.push(parseDatum(arrayColumnInfo, datum)); }); return `[${arrayOutput.join(", ")}]` } .NET private void ParseQueryResult(QueryResponse response) { List<ColumnInfo> columnInfo = response.ColumnInfo; var options = new JsonSerializerOptions { IgnoreNullValues = true }; List<String> columnInfoStrings = columnInfo.ConvertAll(x => JsonSerializer.Serialize(x, options)); List<Row> rows = response.Rows; QueryStatus queryStatus = response.QueryStatus; Run query 203 Amazon Timestream Developer Guide Console.WriteLine("Current Query status:" + JsonSerializer.Serialize(queryStatus, options)); Console.WriteLine("Metadata:" + string.Join(",", columnInfoStrings)); Console.WriteLine("Data:"); foreach (Row row in rows) { Console.WriteLine(ParseRow(columnInfo, row)); } } private string ParseRow(List<ColumnInfo> columnInfo, Row row) { List<Datum> data = row.Data; List<string> rowOutput = new List<string>(); for (int j = 0; j < data.Count; j++) { ColumnInfo info = columnInfo[j]; Datum datum = data[j]; rowOutput.Add(ParseDatum(info, datum)); } return $"{{{string.Join(",", rowOutput)}}}"; } private string ParseDatum(ColumnInfo info, Datum datum) { if (datum.NullValue) { return $"{info.Name}=NULL"; } Amazon.TimestreamQuery.Model.Type columnType = info.Type; if (columnType.TimeSeriesMeasureValueColumnInfo != null) { return ParseTimeSeries(info, datum); } else if (columnType.ArrayColumnInfo != null) { List<Datum> arrayValues = datum.ArrayValue; return $"{info.Name}={ParseArray(info.Type.ArrayColumnInfo, arrayValues)}"; } Run query 204 Amazon Timestream Developer Guide else if (columnType.RowColumnInfo != null && columnType.RowColumnInfo.Count > 0) { List<ColumnInfo> rowColumnInfo = info.Type.RowColumnInfo; Row rowValue = datum.RowValue; return ParseRow(rowColumnInfo, rowValue); } else { return ParseScalarType(info, datum); } } private string ParseTimeSeries(ColumnInfo info, Datum datum) { var timeseriesString = datum.TimeSeriesValue .Select(value => $"{{time={value.Time}, value={ParseDatum(info.Type.TimeSeriesMeasureValueColumnInfo, value.Value)}}}") .Aggregate((current, next) => current + "," + next); return $"[{timeseriesString}]"; } private string ParseScalarType(ColumnInfo info, Datum datum) { return ParseColumnName(info) + datum.ScalarValue; } private string ParseColumnName(ColumnInfo info) { return info.Name == null ? "" : (info.Name + "="); } private string ParseArray(ColumnInfo arrayColumnInfo, List<Datum> arrayValues) { return $"[{arrayValues.Select(value => ParseDatum(arrayColumnInfo, value)).Aggregate((current, next) => current + "," + next)}]"; } Run query 205 Amazon Timestream Accessing the query status Developer Guide You can access the query status through QueryResponse, which contains information about progress of a query, the bytes scanned by a query and the bytes metered by a query. The bytesMetered and bytesScanned values are cumulative and continuously updated while paging query results. You can use this information to understand the bytes scanned by an individual query and also use it to make certain decisions. For example, assuming that the query price is $0.01 per GB scanned, you may want to cancel queries that exceed $25 per query, or X GB. The code snippet below shows how this can be done. Note These code snippets are based on full sample applications on GitHub. 
For more information about how to get started with the sample applications, see Sample application. Java private static final long ONE_GB_IN_BYTES = 1073741824L; private static final double QUERY_COST_PER_GB_IN_DOLLARS = 0.01; // Assuming the price of query is $0.01 per GB
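// Worked example (not part of the GitHub sample): at the assumed price of $0.01 per GB
// scanned, a $25 per-query budget corresponds to 25 / 0.01 = 2,500 GB metered. A
// hypothetical guard expressed with the constants above would be:
//   if (bytesMeteredSoFar * QUERY_COST_PER_GB_IN_DOLLARS > 25.0) { cancelQuery(queryResult); break; }
// The snippets below use a 1 cent ($0.01) threshold instead.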
public void cancelQueryBasedOnQueryStatus() { System.out.println("Starting query: " + SELECT_ALL_QUERY); QueryRequest queryRequest = new QueryRequest(); queryRequest.setQueryString(SELECT_ALL_QUERY); QueryResult queryResult = queryClient.query(queryRequest); while (true) { final QueryStatus currentStatusOfQuery = queryResult.getQueryStatus(); System.out.println("Query progress so far: " + currentStatusOfQuery.getProgressPercentage() + "%"); double bytesMeteredSoFar = ((double) currentStatusOfQuery.getCumulativeBytesMetered() / ONE_GB_IN_BYTES); System.out.println("Bytes metered so far: " + bytesMeteredSoFar + " GB"); // Cancel query if its costing more than 1 cent if (bytesMeteredSoFar * QUERY_COST_PER_GB_IN_DOLLARS > 0.01) { cancelQuery(queryResult); break; } Run query 206 Amazon Timestream Developer Guide if (queryResult.getNextToken() == null) { break; } queryRequest.setNextToken(queryResult.getNextToken()); queryResult = queryClient.query(queryRequest); } } Java v2 private static final long ONE_GB_IN_BYTES = 1073741824L; private static final double QUERY_COST_PER_GB_IN_DOLLARS = 0.01; // Assuming the price of query is $0.01 per GB public void cancelQueryBasedOnQueryStatus() { System.out.println("Starting query: " + SELECT_ALL_QUERY); QueryRequest queryRequest = QueryRequest.builder().queryString(SELECT_ALL_QUERY).build(); final QueryIterable queryResponseIterator = timestreamQueryClient.queryPaginator(queryRequest); for(QueryResponse queryResponse : queryResponseIterator) { final QueryStatus currentStatusOfQuery = queryResponse.queryStatus(); System.out.println("Query progress so far: " + currentStatusOfQuery.progressPercentage() + "%"); double bytesMeteredSoFar = ((double) currentStatusOfQuery.cumulativeBytesMetered() / ONE_GB_IN_BYTES); System.out.println("Bytes metered so far: " + bytesMeteredSoFar + "GB"); // Cancel query if its costing more than 1 cent if (bytesMeteredSoFar * QUERY_COST_PER_GB_IN_DOLLARS > 0.01) { cancelQuery(queryResponse); break; } } } Go const OneGbInBytes = 1073741824 // Assuming the price of query is $0.01 per GB const QueryCostPerGbInDollars = 0.01 Run query 207 Amazon Timestream Developer Guide func cancelQueryBasedOnQueryStatus(queryPtr *string, querySvc *timestreamquery.TimestreamQuery, f *os.File) { queryInput := ×treamquery.QueryInput{ QueryString: aws.String(*queryPtr), } fmt.Println("QueryInput:") fmt.Println(queryInput) // execute the query err := querySvc.QueryPages(queryInput, func(page *timestreamquery.QueryOutput, lastPage bool) bool { // process query response queryStatus := page.QueryStatus fmt.Println("Current query status:", queryStatus) bytes_metered := float64(*queryStatus.CumulativeBytesMetered) / float64(ONE_GB_IN_BYTES) if bytes_metered * QUERY_COST_PER_GB_IN_DOLLARS > 0.01 { cancelQuery(page, querySvc) return true } // query response metadata // includes column names and types metadata := page.ColumnInfo // fmt.Println("Metadata:") fmt.Println(metadata) header := "" for i := 0; i < len(metadata); i++ { header += *metadata[i].Name if i != len(metadata)-1 { header += ", " } } write(f, header) // query response data fmt.Println("Data:") // process rows rows := page.Rows for i := 0; i < len(rows); i++ { data := rows[i].Data value := processRowType(data, metadata) fmt.Println(value) write(f, value) } fmt.Println("Number of rows:", len(page.Rows)) Run query 208 Amazon Timestream Developer Guide return true }) if err != nil { fmt.Println("Error:") fmt.Println(err) } } Python ONE_GB_IN_BYTES = 1073741824 # Assuming the price of query is $0.01 
per GB QUERY_COST_PER_GB_IN_DOLLARS = 0.01 def cancel_query_based_on_query_status(self): try: print("Starting query: " + self.SELECT_ALL) page_iterator = self.paginator.paginate(QueryString=self.SELECT_ALL) for page in page_iterator: query_status = page["QueryStatus"] progress_percentage = query_status["ProgressPercentage"] print("Query progress so far: " + str(progress_percentage) + "%") bytes_metered = query_status["CumulativeBytesMetered"] / self.ONE_GB_IN_BYTES print("Bytes Metered so far: " + str(bytes_metered) + " GB") if bytes_metered * self.QUERY_COST_PER_GB_IN_DOLLARS > 0.01: self.cancel_query_for(page) break except Exception as err: print("Exception while running query:", err) traceback.print_exc(file=sys.stderr) Node.js The following snippet uses the AWS SDK for JavaScript V2 style. It is based on the sample application at Node.js sample Amazon Timestream for LiveAnalytics application on GitHub. function parseQueryResult(response) { const queryStatus = response.QueryStatus; console.log("Current query status: " + JSON.stringify(queryStatus)); const columnInfo = response.ColumnInfo; const rows = response.Rows; Run query 209 Amazon Timestream Developer Guide console.log("Metadata: " + JSON.stringify(columnInfo)); console.log("Data: "); rows.forEach(function (row) { console.log(parseRow(columnInfo, row)); }); } function parseRow(columnInfo, row) { const data = row.Data; const rowOutput = []; var i; for ( i = 0; i < data.length; i++ ) { info = columnInfo[i]; datum = data[i]; rowOutput.push(parseDatum(info, datum)); } return `{${rowOutput.join(", ")}}` } function parseDatum(info, datum) { if (datum.NullValue != null && datum.NullValue === true) { return `${info.Name}=NULL`; } const columnType = info.Type; // If the column is of TimeSeries Type if (columnType.TimeSeriesMeasureValueColumnInfo != null) { return parseTimeSeries(info, datum); } // If the column is of Array Type else if (columnType.ArrayColumnInfo != null) { const arrayValues = datum.ArrayValue; return `${info.Name}=${parseArray(info.Type.ArrayColumnInfo, arrayValues)}`; } // If the column is of Row Type else if (columnType.RowColumnInfo != null) { const rowColumnInfo = info.Type.RowColumnInfo; const rowValues = datum.RowValue; return parseRow(rowColumnInfo, rowValues); Run query 210 Amazon Timestream } // If the column is of Scalar Type else { return parseScalarType(info, datum); } } function parseTimeSeries(info, datum) { const timeSeriesOutput = []; Developer Guide datum.TimeSeriesValue.forEach(function (dataPoint) { timeSeriesOutput.push(`{time=${dataPoint.Time}, value= ${parseDatum(info.Type.TimeSeriesMeasureValueColumnInfo, dataPoint.Value)}}`) }); return `[${timeSeriesOutput.join(", ")}]` } function parseScalarType(info, datum) { return parseColumnName(info) + datum.ScalarValue; } function parseColumnName(info) { return info.Name == null ? 
"" : `${info.Name}=`; } function parseArray(arrayColumnInfo, arrayValues) { const arrayOutput = []; arrayValues.forEach(function (datum) { arrayOutput.push(parseDatum(arrayColumnInfo, datum)); }); return `[${arrayOutput.join(", ")}]` } .NET private static readonly long ONE_GB_IN_BYTES = 1073741824L; private static readonly double QUERY_COST_PER_GB_IN_DOLLARS = 0.01; // Assuming the price of query is $0.01 per GB private async Task CancelQueryBasedOnQueryStatus(string queryString) { try { Run query 211 Amazon Timestream Developer Guide QueryRequest queryRequest = new QueryRequest(); queryRequest.QueryString = queryString; QueryResponse queryResponse = await queryClient.QueryAsync(queryRequest); while (true) { QueryStatus queryStatus = queryResponse.QueryStatus; double bytesMeteredSoFar = ((double) queryStatus.CumulativeBytesMetered / ONE_GB_IN_BYTES); // Cancel query if
parseColumnName(info) { return info.Name == null ? "" : `${info.Name}=`; } function parseArray(arrayColumnInfo, arrayValues) { const arrayOutput = []; arrayValues.forEach(function (datum) { arrayOutput.push(parseDatum(arrayColumnInfo, datum)); }); return `[${arrayOutput.join(", ")}]` } .NET private static readonly long ONE_GB_IN_BYTES = 1073741824L; private static readonly double QUERY_COST_PER_GB_IN_DOLLARS = 0.01; // Assuming the price of query is $0.01 per GB private async Task CancelQueryBasedOnQueryStatus(string queryString) { try { Run query 211 Amazon Timestream Developer Guide QueryRequest queryRequest = new QueryRequest(); queryRequest.QueryString = queryString; QueryResponse queryResponse = await queryClient.QueryAsync(queryRequest); while (true) { QueryStatus queryStatus = queryResponse.QueryStatus; double bytesMeteredSoFar = ((double) queryStatus.CumulativeBytesMetered / ONE_GB_IN_BYTES); // Cancel query if its costing more than 1 cent if (bytesMeteredSoFar * QUERY_COST_PER_GB_IN_DOLLARS > 0.01) { await CancelQuery(queryResponse); break; } ParseQueryResult(queryResponse); if (queryResponse.NextToken == null) { break; } queryRequest.NextToken = queryResponse.NextToken; queryResponse = await queryClient.QueryAsync(queryRequest); } } catch(Exception e) { // Some queries might fail with 500 if the result of a sequence function has more than 10000 entries Console.WriteLine(e.ToString()); } } For additional details on how to cancel a query, see Cancel query. Run UNLOAD query The following code examples call an UNLOAD query. For information about UNLOAD, see Using UNLOAD to export query results to S3 from Timestream for LiveAnalytics. For examples of UNLOAD queries, see Example use case for UNLOAD from Timestream for LiveAnalytics. Topics • Build and run an UNLOAD query Run UNLOAD query 212 Amazon Timestream Developer Guide • Parse UNLOAD response, and get row count, manifest link, and metadata link • Read and parse manifest content • Read and parse metadata content Build and run an UNLOAD query Java // When you have a SELECT like below String QUERY_1 = "SELECT user_id, ip_address, event, session_id, measure_name, time, query, quantity, product_id, channel FROM " + DATABASE_NAME + "." 
+ UNLOAD_TABLE_NAME + " WHERE time BETWEEN ago(2d) AND now()"; // You can construct UNLOAD query as follows UnloadQuery unloadQuery = UnloadQuery.builder() .selectQuery(QUERY_1) .bucketName("timestream-sample-<region>-<accountId>") .resultsPrefix("without_partition") .format(CSV) .compression(UnloadQuery.Compression.GZIP) .build(); QueryResult unloadResult = runQuery(unloadQuery.getUnloadQuery()); // Run UNLOAD query (Similar to how you run SELECT query) // https://docs.aws.amazon.com/timestream/latest/developerguide/code-samples.run- query.html#code-samples.run-query.pagination private QueryResult runQuery(String queryString) { QueryResult queryResult = null; try { QueryRequest queryRequest = new QueryRequest(); queryRequest.setQueryString(queryString); queryResult = queryClient.query(queryRequest); while (true) { parseQueryResult(queryResult); if (queryResult.getNextToken() == null) { break; } queryRequest.setNextToken(queryResult.getNextToken()); queryResult = queryClient.query(queryRequest); } Run UNLOAD query 213 Amazon Timestream Developer Guide } catch (Exception e) { // Some queries might fail with 500 if the result of a sequence function has more than 10000 entries e.printStackTrace(); } return queryResult; } // Utility that helps to construct UNLOAD query @Builder static class UnloadQuery { private String selectQuery; private String bucketName; private String resultsPrefix; private Format format; private Compression compression; private EncryptionType encryptionType; private List<String> partitionColumns; private String kmsKey; private Character csvFieldDelimiter; private Character csvEscapeCharacter; public String getUnloadQuery() { String destination = constructDestination(); String withClause = constructOptionalParameters(); return String.format("UNLOAD (%s) TO '%s' %s", selectQuery, destination, withClause); } private String constructDestination() { return "s3://" + this.bucketName + "/" + this.resultsPrefix + "/"; } private String constructOptionalParameters() { boolean isOptionalParametersPresent = Objects.nonNull(format) || Objects.nonNull(compression) || Objects.nonNull(encryptionType) || Objects.nonNull(partitionColumns) || Objects.nonNull(kmsKey) || Objects.nonNull(csvFieldDelimiter) || Objects.nonNull(csvEscapeCharacter); String withClause = ""; Run UNLOAD query 214 Amazon Timestream Developer Guide if (isOptionalParametersPresent) { StringJoiner optionalParameters = new StringJoiner(","); if (Objects.nonNull(format)) { optionalParameters.add("format = '" + format + "'"); } if (Objects.nonNull(compression)) { optionalParameters.add("compression = '" + compression + "'"); } if (Objects.nonNull(encryptionType)) { optionalParameters.add("encryption = '" + encryptionType + "'"); } if (Objects.nonNull(kmsKey)) { optionalParameters.add("kms_key = '" + kmsKey + "'"); } if (Objects.nonNull(csvFieldDelimiter)) { optionalParameters.add("field_delimiter = '" + csvFieldDelimiter + "'"); } if (Objects.nonNull(csvEscapeCharacter)) { optionalParameters.add("escaped_by = '" + csvEscapeCharacter + "'"); } if (Objects.nonNull(partitionColumns) && !partitionColumns.isEmpty()) { final StringJoiner partitionedByList = new StringJoiner(","); partitionColumns.forEach(column -> partitionedByList.add("'" + column + "'")); optionalParameters.add(String.format("partitioned_by = ARRAY[%s]", partitionedByList)); } withClause = String.format("WITH (%s)", optionalParameters); } return withClause; } public enum Format { CSV, PARQUET } public enum Compression { GZIP, NONE } public enum EncryptionType 
{ SSE_S3, SSE_KMS } Run UNLOAD query 215 Amazon Timestream Developer Guide @Override public String toString() { return getUnloadQuery(); } } Java v2 // When you have a SELECT like below String QUERY_1 = "SELECT user_id, ip_address, event, session_id, measure_name, time, query, quantity, product_id, channel FROM " + DATABASE_NAME + "." + UNLOAD_TABLE_NAME + " WHERE
time BETWEEN ago(2d) AND now()"; //You can construct UNLOAD query as follows UnloadQuery unloadQuery = UnloadQuery.builder() .selectQuery(QUERY_1) .bucketName("timestream-sample-<region>-<accountId>") .resultsPrefix("without_partition") .format(CSV) .compression(UnloadQuery.Compression.GZIP) .build(); QueryResponse unloadResponse = runQuery(unloadQuery.getUnloadQuery()); // Run UNLOAD query (Similar to how you run SELECT query) // https://docs.aws.amazon.com/timestream/latest/developerguide/code-samples.run- query.html#code-samples.run-query.pagination private QueryResponse runQuery(String queryString) { QueryResponse finalResponse = null; try { QueryRequest queryRequest = QueryRequest.builder().queryString(queryString).build(); final QueryIterable queryResponseIterator = timestreamQueryClient.queryPaginator(queryRequest); for(QueryResponse queryResponse : queryResponseIterator) { parseQueryResult(queryResponse); finalResponse = queryResponse; } } catch (Exception e) { Run UNLOAD query 216 Amazon Timestream Developer Guide // Some queries might fail with 500 if the result of a sequence function has more than 10000 entries e.printStackTrace(); } return finalResponse; } // Utility that helps to construct UNLOAD query @Builder static class UnloadQuery { private String selectQuery; private String bucketName; private String resultsPrefix; private Format format; private Compression compression; private EncryptionType encryptionType; private List<String> partitionColumns; private String kmsKey; private Character csvFieldDelimiter; private Character csvEscapeCharacter; public String getUnloadQuery() { String destination = constructDestination(); String withClause = constructOptionalParameters(); return String.format("UNLOAD (%s) TO '%s' %s", selectQuery, destination, withClause); } private String constructDestination() { return "s3://" + this.bucketName + "/" + this.resultsPrefix + "/"; } private String constructOptionalParameters() { boolean isOptionalParametersPresent = Objects.nonNull(format) || Objects.nonNull(compression) || Objects.nonNull(encryptionType) || Objects.nonNull(partitionColumns) || Objects.nonNull(kmsKey) || Objects.nonNull(csvFieldDelimiter) || Objects.nonNull(csvEscapeCharacter); String withClause = ""; if (isOptionalParametersPresent) { StringJoiner optionalParameters = new StringJoiner(","); Run UNLOAD query 217 Amazon Timestream Developer Guide if (Objects.nonNull(format)) { optionalParameters.add("format = '" + format + "'"); } if (Objects.nonNull(compression)) { optionalParameters.add("compression = '" + compression + "'"); } if (Objects.nonNull(encryptionType)) { optionalParameters.add("encryption = '" + encryptionType + "'"); } if (Objects.nonNull(kmsKey)) { optionalParameters.add("kms_key = '" + kmsKey + "'"); } if (Objects.nonNull(csvFieldDelimiter)) { optionalParameters.add("field_delimiter = '" + csvFieldDelimiter + "'"); } if (Objects.nonNull(csvEscapeCharacter)) { optionalParameters.add("escaped_by = '" + csvEscapeCharacter + "'"); } if (Objects.nonNull(partitionColumns) && !partitionColumns.isEmpty()) { final StringJoiner partitionedByList = new StringJoiner(","); partitionColumns.forEach(column -> partitionedByList.add("'" + column + "'")); optionalParameters.add(String.format("partitioned_by = ARRAY[%s]", partitionedByList)); } withClause = String.format("WITH (%s)", optionalParameters); } return withClause; } public enum Format { CSV, PARQUET } public enum Compression { GZIP, NONE } public enum EncryptionType { SSE_S3, SSE_KMS } @Override Run UNLOAD query 218 
Amazon Timestream Developer Guide public String toString() { return getUnloadQuery(); } } Go // When you have a SELECT like below var Query = "SELECT user_id, ip_address, event, session_id, measure_name, time, query, quantity, product_id, channel FROM " + *databaseName + "." + *tableName + " WHERE time BETWEEN ago(2d) AND now()" // You can construct UNLOAD query as follows var unloadQuery = UnloadQuery{ Query: "SELECT user_id, ip_address, session_id, measure_name, time, query, quantity, product_id, channel, event FROM " + *databaseName + "." + *tableName + " WHERE time BETWEEN ago(2d) AND now()", Partitioned_by: []string{}, Compression: "GZIP", Format: "CSV", S3Location: bucketName, ResultPrefix: "without_partition", } // Run UNLOAD query (Similar to how you run SELECT query) // https://docs.aws.amazon.com/timestream/latest/developerguide/code-samples.run- query.html#code-samples.run-query.pagination queryInput := ×treamquery.QueryInput{ QueryString: build_query(unloadQuery), } err := querySvc.QueryPages(queryInput, func(page *timestreamquery.QueryOutput, lastPage bool) bool { if (lastPage) { var response = parseQueryResult(page) var unloadFiles = getManifestAndMetadataFiles(s3Svc, response) displayColumns(unloadFiles, unloadQuery.Partitioned_by) displayResults(s3Svc, unloadFiles) } return true }) Run UNLOAD query 219 Amazon Timestream Developer Guide if err != nil { fmt.Println("Error:") fmt.Println(err) } // Utility that helps to construct UNLOAD query type UnloadQuery struct { Query string Partitioned_by []string Format string S3Location string ResultPrefix string Compression string } func build_query(unload_query UnloadQuery) *string { var query_results_s3_path = "'s3://" + unload_query.S3Location + "/" + unload_query.ResultPrefix + "/'" var query = "UNLOAD(" + unload_query.Query + ") TO " + query_results_s3_path + " WITH ( " if (len(unload_query.Partitioned_by) > 0) { query = query + "partitioned_by=ARRAY[" for i, column := range unload_query.Partitioned_by { if i == 0 { query = query + "'" + column + "'" } else { query = query + ",'" + column + "'" } } query = query + "]," } query = query + " format='" + unload_query.Format + "', " query = query + " compression='" + unload_query.Compression + "')" fmt.Println(query) return aws.String(query) } Python # When you have a SELECT like below QUERY_1 = "SELECT user_id, ip_address, event, session_id, measure_name, time, query, quantity, product_id, channel FROM " Run UNLOAD query 220 Amazon Timestream Developer Guide + database_name + "." 
+ table_name + " WHERE time BETWEEN ago(2d) AND now()" #
You can construct UNLOAD query as follows UNLOAD_QUERY_1 = UnloadQuery(QUERY_1, "timestream-sample-<region>-<accountId>", "without_partition", "CSV", "GZIP", "") # Run UNLOAD query (Similar to how you run SELECT query) # https://docs.aws.amazon.com/timestream/latest/developerguide/code-samples.run- query.html#code-samples.run-query.pagination def run_query(self, query_string): try: page_iterator = self.paginator.paginate(QueryString=UNLOAD_QUERY_1) except Exception as err: print("Exception while running query:", err) # Utility that helps to construct UNLOAD query class UnloadQuery: def __init__(self, query, s3_bucket_location, results_prefix, format, compression , partition_by): self.query = query self.s3_bucket_location = s3_bucket_location self.results_prefix = results_prefix self.format = format self.compression = compression self.partition_by = partition_by def build_query(self): query_results_s3_path = "'s3://" + self.s3_bucket_location + "/" + self.results_prefix + "/'" unload_query = "UNLOAD(" unload_query = unload_query + self.query unload_query = unload_query + ") " unload_query = unload_query + " TO " + query_results_s3_path unload_query = unload_query + " WITH ( " if(len(self.partition_by) > 0) : unload_query = unload_query + " partitioned_by = ARRAY" + str(self.partition_by) + "," unload_query = unload_query + " format='" + self.format + "', " unload_query = unload_query + " compression='" + self.compression + "')" return unload_query Run UNLOAD query 221 Amazon Timestream Node.js Developer Guide // When you have a SELECT like below QUERY_1 = "SELECT user_id, ip_address, event, session_id, measure_name, time, query, quantity, product_id, channel FROM " + database_name + "." + table_name + " WHERE time BETWEEN ago(2d) AND now()" // You can construct UNLOAD query as follows UNLOAD_QUERY_1 = new UnloadQuery(QUERY_1, "timestream-sample-<region>-<accountId>", "without_partition", "CSV", "GZIP", "") // Run UNLOAD query (Similar to how you run SELECT query) // https://docs.aws.amazon.com/timestream/latest/developerguide/code-samples.run- query.html#code-samples.run-query.pagination async runQuery(query = UNLOAD_QUERY_1, nextToken) { const params = new QueryCommand({ QueryString: query }); if (nextToken) { params.NextToken = nextToken; } await queryClient.send(params).then( (response) => { if (response.NextToken) { runQuery(queryClient, query, response.NextToken); } else { await parseAndDisplayResults(response); } }, (err) => { console.error("Error while querying:", err); }); } class UnloadQuery { constructor(query, s3_bucket_location, results_prefix, format, compression , partition_by) { this.query = query; this.s3_bucket_location = s3_bucket_location this.results_prefix = results_prefix Run UNLOAD query 222 Amazon Timestream Developer Guide this.format = format this.compression = compression this.partition_by = partition_by } buildQuery() { const query_results_s3_path = "'s3://" + this.s3_bucket_location + "/" + this.results_prefix + "/'" let unload_query = "UNLOAD(" unload_query = unload_query + this.query unload_query = unload_query + ") " unload_query = unload_query + " TO " + query_results_s3_path unload_query = unload_query + " WITH ( " if(this.partition_by.length > 0) { let partitionBy = "" this.partition_by.forEach((str, i) => { partitionBy = partitionBy + (i ? 
",'" : "'") + str + "'" }) unload_query = unload_query + " partitioned_by = ARRAY[" + partitionBy + "]," } unload_query = unload_query + " format='" + this.format + "', " unload_query = unload_query + " compression='" + this.compression + "')" return unload_query } } Parse UNLOAD response, and get row count, manifest link, and metadata link Java // Parsing UNLOAD query response is similar to how you parse SELECT query response: // https://docs.aws.amazon.com/timestream/latest/developerguide/code-samples.run- query.html#code-samples.run-query.parsing // But unlike SELECT, UNLOAD only has 1 row * 3 columns outputed // (rows, metadataFile, manifestFile) => (BIGINT, VARCHAR, VARCHAR) public UnloadResponse parseResult(QueryResult queryResult) { Map<String, String> outputMap = new HashMap<>(); Run UNLOAD query 223 Amazon Timestream Developer Guide for (int i = 0; i < queryResult.getColumnInfo().size(); i++) { outputMap.put(queryResult.getColumnInfo().get(i).getName(), queryResult.getRows().get(0).getData().get(i).getScalarValue()); } return new UnloadResponse(outputMap); } @Getter class UnloadResponse { private final String metadataFile; private final String manifestFile; private final int rows; public UnloadResponse(Map<String, String> unloadResponse) { this.metadataFile = unloadResponse.get("metadataFile"); this.manifestFile = unloadResponse.get("manifestFile"); this.rows = Integer.parseInt(unloadResponse.get("rows")); } } Java v2 // Parsing UNLOAD query response is similar to how you parse SELECT query response: // https://docs.aws.amazon.com/timestream/latest/developerguide/code-samples.run- query.html#code-samples.run-query.parsing // But unlike SELECT, UNLOAD only has 1 row * 3 columns outputed // (rows, metadataFile, manifestFile) => (BIGINT, VARCHAR, VARCHAR) public UnloadResponse parseResult(QueryResponse queryResponse) { Map<String, String> outputMap = new HashMap<>(); for (int i = 0; i < queryResponse.columnInfo().size(); i++) { outputMap.put(queryResponse.columnInfo().get(i).name(), queryResponse.rows().get(0).data().get(i).scalarValue()); } return new UnloadResponse(outputMap); } @Getter class UnloadResponse { private final String metadataFile; Run UNLOAD query 224 Amazon Timestream Developer Guide private final String manifestFile; private final int rows; public UnloadResponse(Map<String, String> unloadResponse) { this.metadataFile = unloadResponse.get("metadataFile"); this.manifestFile = unloadResponse.get("manifestFile"); this.rows = Integer.parseInt(unloadResponse.get("rows")); } } Go // Parsing UNLOAD query response is similar to how you parse SELECT query response: // https://docs.aws.amazon.com/timestream/latest/developerguide/code-samples.run- query.html#code-samples.run-query.parsing // But unlike SELECT, UNLOAD only has 1 row * 3 columns outputed // (rows, metadataFile, manifestFile) => (BIGINT, VARCHAR, VARCHAR) func parseQueryResult(queryOutput *timestreamquery.QueryOutput) map[string]string { var columnInfo = queryOutput.ColumnInfo; fmt.Println("ColumnInfo", columnInfo) fmt.Println("QueryId", queryOutput.QueryId) fmt.Println("QueryStatus", queryOutput.QueryStatus) return parseResponse(columnInfo, queryOutput.Rows[0]) } func parseResponse(columnInfo []*timestreamquery.ColumnInfo, row *timestreamquery.Row) map[string]string { var datum = row.Data response := make(map[string]string) for i, column := range columnInfo { response[*column.Name] = *datum[i].ScalarValue } return response } Python # Parsing UNLOAD query response is similar to how you parse SELECT query response: 
=> (BIGINT, VARCHAR, VARCHAR) func parseQueryResult(queryOutput *timestreamquery.QueryOutput) map[string]string { var columnInfo = queryOutput.ColumnInfo; fmt.Println("ColumnInfo", columnInfo) fmt.Println("QueryId", queryOutput.QueryId) fmt.Println("QueryStatus", queryOutput.QueryStatus) return parseResponse(columnInfo, queryOutput.Rows[0]) } func parseResponse(columnInfo []*timestreamquery.ColumnInfo, row *timestreamquery.Row) map[string]string { var datum = row.Data response := make(map[string]string) for i, column := range columnInfo { response[*column.Name] = *datum[i].ScalarValue } return response } Python # Parsing UNLOAD query response is similar to how you parse SELECT query response: # https://docs.aws.amazon.com/timestream/latest/developerguide/code-samples.run- query.html#code-samples.run-query.parsing Run UNLOAD query 225 Amazon Timestream Developer Guide # But unlike SELECT, UNLOAD only has 1 row * 3 columns outputed # (rows, metadataFile, manifestFile) => (BIGINT, VARCHAR, VARCHAR) for page in page_iterator: last_page = page response = self._parse_unload_query_result(last_page) def _parse_unload_query_result(self, query_result): column_info = query_result['ColumnInfo'] print("ColumnInfo: %s" % column_info) print("QueryId: %s" % query_result['QueryId']) print("QueryStatus:%s" % query_result['QueryStatus']) return self.parse_unload_response(column_info, query_result['Rows'][0]) def parse_unload_response(self, column_info, row): response = {} data = row['Data'] for i, column in enumerate(column_info): response[column['Name']] = data[i]['ScalarValue'] print("Rows: %s" % response['rows']) print("Metadata File location: %s" % response['metadataFile']) print("Manifest File location: %s" % response['manifestFile']) return response Node.js # Parsing UNLOAD query response is similar to how you parse SELECT query response: # https://docs.aws.amazon.com/timestream/latest/developerguide/code-samples.run- query.html#code-samples.run-query.parsing # But unlike SELECT, UNLOAD only has 1 row * 3 columns outputed # (rows, metadataFile, manifestFile) => (BIGINT, VARCHAR, VARCHAR) async parseAndDisplayResults(data, query) { const columnInfo = data['ColumnInfo']; console.log("ColumnInfo:", columnInfo) console.log("QueryId: %s", data['QueryId']) console.log("QueryStatus:", data['QueryStatus']) await this.parseResponse(columnInfo, data['Rows'][0], query) } async parseResponse(columnInfo, row, query) { Run UNLOAD query 226 Amazon Timestream Developer Guide let response = {} const data = row['Data'] columnInfo.forEach((column, i) => { response[column['Name']] = data[i]['ScalarValue'] }) console.log("Manifest file", response['manifestFile']); console.log("Metadata file", response['metadataFile']); return response } Read and parse manifest content Java // Read and parse manifest content public UnloadManifest getUnloadManifest(UnloadResponse unloadResponse) throws IOException { AmazonS3URI s3URI = new AmazonS3URI(unloadResponse.getManifestFile()); S3Object s3Object = s3Client.getObject(s3URI.getBucket(), s3URI.getKey()); String manifestFileContent = new String(IOUtils.toByteArray(s3Object.getObjectContent()), StandardCharsets.UTF_8); return new Gson().fromJson(manifestFileContent, UnloadManifest.class); } class UnloadManifest { @Getter public class FileMetadata { long content_length_in_bytes; long row_count; } @Getter public class ResultFile { String url; FileMetadata file_metadata; } @Getter public class QueryMetadata { long total_content_length_in_bytes; Run UNLOAD query 227 Amazon Timestream 
Developer Guide long total_row_count; String result_format; String result_version; } @Getter public class Author { String name; String manifest_file_version; } @Getter private List<ResultFile> result_files; @Getter private QueryMetadata query_metadata; @Getter private Author author; } Java v2 // Read and parse manifest content public UnloadManifest getUnloadManifest(UnloadResponse unloadResponse) throws URISyntaxException { // Space needs to encoded to use S3 parseUri function S3Uri s3Uri = s3Utilities.parseUri(URI.create(unloadResponse.getManifestFile().replace(" ", "%20"))); ResponseBytes<GetObjectResponse> objectBytes = s3Client.getObjectAsBytes(GetObjectRequest.builder() .bucket(s3Uri.bucket().orElseThrow(() -> new URISyntaxException(unloadResponse.getManifestFile(), "Invalid S3 URI"))) .key(s3Uri.key().orElseThrow(() -> new URISyntaxException(unloadResponse.getManifestFile(), "Invalid S3 URI"))) .build()); String manifestFileContent = new String(objectBytes.asByteArray(), StandardCharsets.UTF_8); return new Gson().fromJson(manifestFileContent, UnloadManifest.class); } class UnloadManifest { @Getter public class FileMetadata { Run UNLOAD query 228 Amazon Timestream Developer Guide long content_length_in_bytes; long row_count; } @Getter public class ResultFile { String url; FileMetadata file_metadata; } @Getter public class QueryMetadata { long total_content_length_in_bytes; long total_row_count; String result_format; String result_version; } @Getter public class Author { String name; String manifest_file_version; } @Getter private List<ResultFile> result_files; @Getter private QueryMetadata query_metadata; @Getter private Author author; } Go // Read and parse manifest content func getManifestFile(s3Svc *s3.S3, response map[string]string) Manifest { var manifestBuf = getObject(s3Svc, response["manifestFile"]) var manifest Manifest json.Unmarshal(manifestBuf.Bytes(), &manifest) return manifest } Run UNLOAD query 229 Amazon Timestream Developer Guide func getObject(s3Svc *s3.S3, s3Uri string) *bytes.Buffer { u,_ := url.Parse(s3Uri) getObjectInput := &s3.GetObjectInput{ Key: aws.String(u.Path), Bucket: aws.String(u.Host), } getObjectOutput, err := s3Svc.GetObject(getObjectInput) if err != nil { fmt.Println("Error: %s\n", err.Error()) } buf := new(bytes.Buffer) buf.ReadFrom(getObjectOutput.Body) return buf } // Unload's Manifest structure type Manifest struct { Author interface{} Query_metadata map[string]any Result_files []struct { File_metadata interface{} Url string } }} Python def __get_manifest_file(self, response): manifest = self.get_object(response['manifestFile']).read().decode('utf-8') parsed_manifest = json.loads(manifest) print("Manifest contents: \n%s" % parsed_manifest) def get_object(self, uri): try: bucket, key = uri.replace("s3://", "").split("/", 1) s3_client = boto3.client('s3', region_name=<region>) response = s3_client.get_object(Bucket=bucket, Key=key) return response['Body'] except Exception as err: print("Failed to get the object for URI:", uri) raise err Run UNLOAD query 230 Amazon Timestream Node.js Developer Guide // Read and parse manifest content async getManifestFile(response) { let manifest; await this.getS3Object(response['manifestFile']).then( (data) => { manifest = JSON.parse(data); } ); return manifest; } async getS3Object(uri) { const {bucketName, key} = this.getBucketAndKey(uri); const params = new GetObjectCommand({ Bucket: bucketName, Key: key }) const response = await this.s3Client.send(params); return await response.Body.transformToString(); } 
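// Illustrative only: the parsed manifest returned by getManifestFile above follows the field
// names used by the UnloadManifest (Java) and Manifest (Go) structures in this section; the
// values here are placeholders, not real output.
// {
//   "result_files": [
//     { "url": "s3://<bucket>/<prefix>/...", "file_metadata": { "content_length_in_bytes": 1024, "row_count": 10 } }
//   ],
//   "query_metadata": { "total_content_length_in_bytes": 1024, "total_row_count": 10,
//                       "result_format": "CSV", "result_version": "..." },
//   "author": { "name": "...", "manifest_file_version": "..." }
// }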
} getBucketAndKey(uri) { const [bucketName] = uri.replace("s3://", "").split("/", 1); const key = uri.replace("s3://", "").split('/').slice(1).join('/'); return {bucketName, key}; } Read and parse metadata content Java // Read and parse metadata content public UnloadMetadata getUnloadMetadata(UnloadResponse unloadResponse) throws IOException { AmazonS3URI s3URI = new AmazonS3URI(unloadResponse.getMetadataFile()); S3Object s3Object = s3Client.getObject(s3URI.getBucket(), s3URI.getKey()); String metadataFileContent = new String(IOUtils.toByteArray(s3Object.getObjectContent()), StandardCharsets.UTF_8); final Gson gson = new GsonBuilder() Run UNLOAD query 231 Amazon Timestream Developer Guide .setFieldNamingPolicy(FieldNamingPolicy.UPPER_CAMEL_CASE) .create(); return gson.fromJson(metadataFileContent, UnloadMetadata.class); } class UnloadMetadata { @JsonProperty("ColumnInfo") List<ColumnInfo> columnInfo; @JsonProperty("Author") Author author; @Data public class Author { @JsonProperty("Name") String name; @JsonProperty("MetadataFileVersion") String metadataFileVersion; } } Java v2 // Read and parse metadata content public UnloadMetadata getUnloadMetadata(UnloadResponse unloadResponse) throws URISyntaxException { // Space needs to encoded to use S3 parseUri function S3Uri s3Uri = s3Utilities.parseUri(URI.create(unloadResponse.getMetadataFile().replace(" ", "%20"))); ResponseBytes<GetObjectResponse> objectBytes = s3Client.getObjectAsBytes(GetObjectRequest.builder() .bucket(s3Uri.bucket().orElseThrow(() -> new URISyntaxException(unloadResponse.getMetadataFile(), "Invalid S3 URI"))) .key(s3Uri.key().orElseThrow(() -> new URISyntaxException(unloadResponse.getMetadataFile(), "Invalid S3 URI"))) .build()); String metadataFileContent = new String(objectBytes.asByteArray(), StandardCharsets.UTF_8); final Gson gson = new GsonBuilder() .setFieldNamingPolicy(FieldNamingPolicy.UPPER_CAMEL_CASE) .create(); return gson.fromJson(metadataFileContent, UnloadMetadata.class); Run UNLOAD query 232 Amazon Timestream } Developer Guide class UnloadMetadata { @JsonProperty("ColumnInfo") List<ColumnInfo> columnInfo; @JsonProperty("Author") Author author; @Data public class Author { @JsonProperty("Name") String name; @JsonProperty("MetadataFileVersion") String metadataFileVersion; } } Go // Read and parse metadata content func getMetadataFile(s3Svc *s3.S3, response map[string]string) Metadata { var metadataBuf = getObject(s3Svc, response["metadataFile"]) var metadata Metadata json.Unmarshal(metadataBuf.Bytes(), &metadata) return metadata } func getObject(s3Svc *s3.S3, s3Uri string) *bytes.Buffer { u,_ := url.Parse(s3Uri) getObjectInput := &s3.GetObjectInput{ Key: aws.String(u.Path), Bucket: aws.String(u.Host), } getObjectOutput, err := s3Svc.GetObject(getObjectInput) if err != nil { fmt.Println("Error: %s\n", err.Error()) } buf := new(bytes.Buffer) buf.ReadFrom(getObjectOutput.Body) return buf } Run UNLOAD query 233 Amazon Timestream Developer Guide // Unload's Metadata structure type Metadata struct { Author interface{} ColumnInfo []struct { Name string Type map[string]string } } Python def __get_metadata_file(self, response): metadata = self.get_object(response['metadataFile']).read().decode('utf-8') parsed_metadata = json.loads(metadata) print("Metadata contents: \n%s" % parsed_metadata) def get_object(self, uri): try: bucket, key = uri.replace("s3://", "").split("/", 1) s3_client = boto3.client('s3', region_name=<region>) response = s3_client.get_object(Bucket=bucket, Key=key) return response['Body'] except Exception 
as err: print("Failed to get the object for URI:", uri) raise err Node.js // Read and parse metadata content async getMetadataFile(response) { let metadata; await this.getS3Object(response['metadataFile']).then( (data) => { metadata = JSON.parse(data); } ); return metadata; } async getS3Object(uri) { Run UNLOAD query 234 Amazon Timestream Developer Guide const {bucketName, key} = this.getBucketAndKey(uri); const params = new GetObjectCommand({ Bucket: bucketName, Key: key }) const response = await this.s3Client.send(params); return await response.Body.transformToString(); } getBucketAndKey(uri) { const [bucketName] = uri.replace("s3://", "").split("/", 1); const key = uri.replace("s3://", "").split('/').slice(1).join('/'); return {bucketName, key}; } Cancel query You can use the following code snippets to cancel a query. Note These code snippets are based on full sample applications on GitHub. For more information about how to get started with the sample applications, see Sample application. Java public void cancelQuery() { System.out.println("Starting query: " + SELECT_ALL_QUERY); QueryRequest queryRequest = new QueryRequest(); queryRequest.setQueryString(SELECT_ALL_QUERY); QueryResult queryResult = queryClient.query(queryRequest); System.out.println("Cancelling the query: " + SELECT_ALL_QUERY); final CancelQueryRequest cancelQueryRequest = new CancelQueryRequest(); cancelQueryRequest.setQueryId(queryResult.getQueryId()); try { queryClient.cancelQuery(cancelQueryRequest); System.out.println("Query has been successfully cancelled"); } catch (Exception e) { Cancel query 235 Amazon Timestream Developer Guide System.out.println("Could not cancel the query: " + SELECT_ALL_QUERY + " = " + e); } } Java v2 public void cancelQuery() { System.out.println("Starting query: " + SELECT_ALL_QUERY); QueryRequest queryRequest = QueryRequest.builder().queryString(SELECT_ALL_QUERY).build(); QueryResponse queryResponse = timestreamQueryClient.query(queryRequest); System.out.println("Cancelling the query: " + SELECT_ALL_QUERY); final CancelQueryRequest cancelQueryRequest = CancelQueryRequest.builder() .queryId(queryResponse.queryId()).build(); try { timestreamQueryClient.cancelQuery(cancelQueryRequest); System.out.println("Query has been successfully cancelled"); } catch (Exception e) { System.out.println("Could not cancel the query: " + SELECT_ALL_QUERY + " = " + e); } } Go cancelQueryInput := ×treamquery.CancelQueryInput{ QueryId: aws.String(*queryOutput.QueryId), } fmt.Println("Submitting cancellation for the query") fmt.Println(cancelQueryInput) // submit the query cancelQueryOutput, err := querySvc.CancelQuery(cancelQueryInput) if err != nil { fmt.Println("Error:") fmt.Println(err) } else { fmt.Println("Query has been cancelled successfully") fmt.Println(cancelQueryOutput) Cancel query 236 Amazon Timestream } Python Developer Guide def cancel_query(self): print("Starting query: " + self.SELECT_ALL) result = self.client.query(QueryString=self.SELECT_ALL) print("Cancelling query: " + self.SELECT_ALL) try: self.client.cancel_query(QueryId=result['QueryId']) print("Query has been successfully cancelled") except Exception as err: print("Cancelling query failed:", err) Node.js The following snippet uses the AWS SDK for JavaScript V2 style. It is based on the sample application at Node.js sample Amazon Timestream for LiveAnalytics application on GitHub. 
async function tryCancelQuery() { const params = { QueryString: SELECT_ALL_QUERY }; console.log(`Running query: ${SELECT_ALL_QUERY}`); await
queryClient.query(params).promise() .then( async (response) => { await cancelQuery(response.QueryId); }, (err) => { console.error("Error while executing select all query:", err); }); } async function cancelQuery(queryId) { const cancelParams = { QueryId: queryId }; console.log(`Sending cancellation for query: ${SELECT_ALL_QUERY}`); await queryClient.cancelQuery(cancelParams).promise() .then( (response) => { Cancel query 237 Amazon Timestream Developer Guide console.log("Query has been cancelled successfully"); }, (err) => { console.error("Error while cancelling select all:", err); }); } .NET public async Task CancelQuery() { Console.WriteLine("Starting query: " + SELECT_ALL_QUERY); QueryRequest queryRequest = new QueryRequest(); queryRequest.QueryString = SELECT_ALL_QUERY; QueryResponse queryResponse = await queryClient.QueryAsync(queryRequest); Console.WriteLine("Cancelling query: " + SELECT_ALL_QUERY); CancelQueryRequest cancelQueryRequest = new CancelQueryRequest(); cancelQueryRequest.QueryId = queryResponse.QueryId; try { await queryClient.CancelQueryAsync(cancelQueryRequest); Console.WriteLine("Query has been successfully cancelled."); } catch(Exception e) { Console.WriteLine("Could not cancel the query: " + SELECT_ALL_QUERY + " = " + e); } } Create batch load task You can use the following code snippets to create batch load tasks. Java package com.example.tryit; import java.util.Arrays; Create batch load task 238 Amazon Timestream import Developer Guide software.amazon.awssdk.services.timestreamwrite.model.CreateBatchLoadTaskRequest; import software.amazon.awssdk.services.timestreamwrite.model.CreateBatchLoadTaskResponse; import software.amazon.awssdk.services.timestreamwrite.model.DataModel; import software.amazon.awssdk.services.timestreamwrite.model.DataModelConfiguration; import software.amazon.awssdk.services.timestreamwrite.model.DataSourceConfiguration; import software.amazon.awssdk.services.timestreamwrite.model.DataSourceS3Configuration; import software.amazon.awssdk.services.timestreamwrite.model.DimensionMapping; import software.amazon.awssdk.services.timestreamwrite.model.MultiMeasureAttributeMapping; import software.amazon.awssdk.services.timestreamwrite.model.MultiMeasureMappings; import software.amazon.awssdk.services.timestreamwrite.model.ReportConfiguration; import software.amazon.awssdk.services.timestreamwrite.model.ReportS3Configuration; import software.amazon.awssdk.services.timestreamwrite.model.ScalarMeasureValueType; import software.amazon.awssdk.services.timestreamwrite.model.TimeUnit; import software.amazon.awssdk.services.timestreamwrite.TimestreamWriteClient; public class BatchLoadExample { public static final String DATABASE_NAME = <database name>; public static final String TABLE_NAME = <table name>; public static final String INPUT_BUCKET = <S3 location>; public static final String INPUT_OBJECT_KEY_PREFIX = <CSV filename>; public static final String REPORT_BUCKET = <S3 location>; public static final long HT_TTL_HOURS = 24L; public static final long CT_TTL_DAYS = 7L; TimestreamWriteClient amazonTimestreamWrite; public BatchLoadExample(TimestreamWriteClient client) { this.amazonTimestreamWrite = client; } public String createBatchLoadTask() { System.out.println("Creating batch load task"); CreateBatchLoadTaskRequest request = CreateBatchLoadTaskRequest.builder() .dataModelConfiguration(DataModelConfiguration.builder() .dataModel(DataModel.builder() .timeColumn("timestamp") .timeUnit(TimeUnit.SECONDS) .dimensionMappings(Arrays.asList( Create batch load 
task 239 Amazon Timestream Developer Guide DimensionMapping.builder() .sourceColumn("vehicle") .build(), DimensionMapping.builder() .sourceColumn("registration") .destinationColumn("license") .build())) .multiMeasureMappings(MultiMeasureMappings.builder() .targetMultiMeasureName("mva_measure_name") .multiMeasureAttributeMappings(Arrays.asList( MultiMeasureAttributeMapping.builder() .sourceColumn("wgt") .targetMultiMeasureAttributeName("weight") .measureValueType(ScalarMeasureValueType.DOUBLE) .build(), MultiMeasureAttributeMapping.builder() .sourceColumn("spd") .targetMultiMeasureAttributeName("speed") .measureValueType(ScalarMeasureValueType.DOUBLE) .build(), MultiMeasureAttributeMapping.builder() .sourceColumn("fuel") .measureValueType(ScalarMeasureValueType.DOUBLE) .build(), MultiMeasureAttributeMapping.builder() .sourceColumn("miles") .measureValueType(ScalarMeasureValueType.DOUBLE) .build())) .build()) .build()) .build()) .dataSourceConfiguration(DataSourceConfiguration.builder() .dataSourceS3Configuration( Create batch load task 240 Amazon Timestream Developer Guide DataSourceS3Configuration.builder() .bucketName(INPUT_BUCKET) .objectKeyPrefix(INPUT_OBJECT_KEY_PREFIX) .build()) .dataFormat("CSV") .build()) .reportConfiguration(ReportConfiguration.builder() .reportS3Configuration(ReportS3Configuration.builder() .bucketName(REPORT_BUCKET) .build()) .build()) .targetDatabaseName(DATABASE_NAME) .targetTableName(TABLE_NAME) .build(); try { final CreateBatchLoadTaskResponse createBatchLoadTaskResponse = amazonTimestreamWrite.createBatchLoadTask(request); String taskId = createBatchLoadTaskResponse.taskId(); System.out.println("Successfully created batch load task: " + taskId); return taskId; } catch (Exception e) { System.out.println("Failed to create batch load task: " + e); throw e; } } } Go package main import ( "fmt" "context" "log" "github.com/aws/aws-sdk-go-v2/aws" "github.com/aws/aws-sdk-go-v2/config" "github.com/aws/aws-sdk-go-v2/service/timestreamwrite" "github.com/aws/aws-sdk-go-v2/service/timestreamwrite/types" ) func main() { Create batch load task 241 Amazon Timestream Developer Guide customResolver := aws.EndpointResolverWithOptionsFunc(func(service, region string, options ...interface{})(aws.Endpoint, error) { if service == timestreamwrite.ServiceID && region == "us-west-2" { return aws.Endpoint{ PartitionID: "aws", URL: <URL>, SigningRegion: "us-west-2", }, nil } return aws.Endpoint{}, & aws.EndpointNotFoundError{} }) cfg, err := config.LoadDefaultConfig(context.TODO(), config.WithEndpointResolverWithOptions(customResolver), config.WithRegion("us- west-2")) if err != nil { log.Fatalf("failed to load configuration, %v", err) } client := timestreamwrite.NewFromConfig(cfg) response, err := client.CreateBatchLoadTask(context.TODO(), & timestreamwrite.CreateBatchLoadTaskInput{ TargetDatabaseName: aws.String("BatchLoadExampleDatabase"), TargetTableName: aws.String("BatchLoadExampleTable"), RecordVersion: aws.Int64(1), DataModelConfiguration: & types.DataModelConfiguration{ DataModel: & types.DataModel{ TimeColumn: aws.String("timestamp"), TimeUnit: types.TimeUnitMilliseconds, DimensionMappings: []types.DimensionMapping{ { SourceColumn: aws.String("registration"), DestinationColumn: aws.String("license"), }, }, MultiMeasureMappings: & types.MultiMeasureMappings{ TargetMultiMeasureName: aws.String("mva_measure_name"), MultiMeasureAttributeMappings: []types.MultiMeasureAttributeMapping{ { SourceColumn: aws.String("wgt"), Create batch load task 242 Amazon Timestream Developer Guide 
TargetMultiMeasureAttributeName: aws.String("weight"), MeasureValueType: types.ScalarMeasureValueTypeDouble, }, { SourceColumn: aws.String("spd"), TargetMultiMeasureAttributeName: aws.String("speed"), MeasureValueType: types.ScalarMeasureValueTypeDouble, }, { SourceColumn: aws.String("fuel_consumption"), TargetMultiMeasureAttributeName: aws.String("fuel"), MeasureValueType: types.ScalarMeasureValueTypeDouble, }, }, }, }, }, DataSourceConfiguration: & types.DataSourceConfiguration{ DataSourceS3Configuration: & types.DataSourceS3Configuration{ BucketName: aws.String("test-batch-load-west-2"), ObjectKeyPrefix: aws.String("sample.csv"), }, DataFormat: types.BatchLoadDataFormatCsv, }, ReportConfiguration: & types.ReportConfiguration{ ReportS3Configuration: & types.ReportS3Configuration{ BucketName: aws.String("test-batch-load-report-west-2"), EncryptionOption: types.S3EncryptionOptionSseS3, }, }, }) fmt.Println(aws.ToString(response.TaskId)) } Python import boto3 Create batch load task 243 Amazon Timestream Developer Guide from botocore.config import Config INGEST_ENDPOINT = "<URL>" REGION = "us-west-2" HT_TTL_HOURS = 24 CT_TTL_DAYS = 7 DATABASE_NAME = "<database name>" TABLE_NAME = "<table name>" INPUT_BUCKET_NAME = "<S3 location>" INPUT_OBJECT_KEY_PREFIX = "<CSV file name>" REPORT_BUCKET_NAME = "<S3 location>" def create_batch_load_task(client, database_name, table_name, input_bucket_name, input_object_key_prefix, report_bucket_name): try: result = client.create_batch_load_task(TargetDatabaseName=database_name, TargetTableName=table_name, DataModelConfiguration={"DataModel": { "TimeColumn": "timestamp", "TimeUnit": "SECONDS", "DimensionMappings": [ { "SourceColumn": "vehicle" }, { "SourceColumn": "registration", "DestinationColumn": "license" } ], "MultiMeasureMappings": { "TargetMultiMeasureName": "metrics", "MultiMeasureAttributeMappings": [ { "SourceColumn": "wgt", "MeasureValueType": "DOUBLE" }, Create batch load task 244 Amazon Timestream Developer Guide { "SourceColumn": "spd", "MeasureValueType": "DOUBLE" }, { "SourceColumn": "fuel_consumption", "TargetMultiMeasureAttributeName": "fuel", "MeasureValueType": "DOUBLE" }, { "SourceColumn": "miles", "MeasureValueType": "DOUBLE" } ]} } }, DataSourceConfiguration={ "DataSourceS3Configuration": { "BucketName": input_bucket_name, "ObjectKeyPrefix": input_object_key_prefix }, "DataFormat": "CSV" }, ReportConfiguration={ "ReportS3Configuration": { "BucketName": report_bucket_name, "EncryptionOption": "SSE_S3" } } ) task_id = result["TaskId"] print("Successfully created batch load task: ", task_id) return task_id Create batch load task 245 Amazon Timestream Developer Guide except Exception as err: print("Create batch load task job failed:", err) return None if __name__ == '__main__': session = boto3.Session() write_client = session.client('timestream-write', endpoint_url=INGEST_ENDPOINT, region_name=REGION, config=Config(read_timeout=20, max_pool_connections=5000, retries={'max_attempts': 10})) task_id = create_batch_load_task(write_client, DATABASE_NAME, TABLE_NAME, INPUT_BUCKET_NAME, INPUT_OBJECT_KEY_PREFIX, REPORT_BUCKET_NAME) Node.js The following snippet uses AWS SDK for JavaScript v3. For more information about how to install the client and usage, see Timestream Write Client - AWS SDK for JavaScript v3.
}, ReportConfiguration={ "ReportS3Configuration": { "BucketName": report_bucket_name, "EncryptionOption": "SSE_S3" } } ) task_id = result["TaskId"] print("Successfully created batch load task: ", task_id) return task_id Create batch load task 245 Amazon Timestream Developer Guide except Exception as err: print("Create batch load task job failed:", err) return None if __name__ == '__main__': session = boto3.Session() write_client = session.client('timestream-write', endpoint_url=INGEST_ENDPOINT, region_name=REGION, config=Config(read_timeout=20, max_pool_connections=5000, retries={'max_attempts': 10})) task_id = create_batch_load_task(write_client, DATABASE_NAME, TABLE_NAME, INPUT_BUCKET_NAME, INPUT_OBJECT_KEY_PREFIX, REPORT_BUCKET_NAME) Node.js The following snippet uses AWS SDK for JavaScript v3. For more information about how to install the client and usage, see Timestream Write Client - AWS SDK for JavaScript v3. For API details, see Class CreateBatchLoadCommand and CreateBatchLoadTask. import { TimestreamWriteClient, CreateBatchLoadTaskCommand } from "@aws-sdk/client- timestream-write"; const writeClient = new TimestreamWriteClient({ region: "us-west-2", endpoint: "https://gamma-ingest-cell3.timestream.us-west-2.amazonaws.com" }); const params = { TargetDatabaseName: "BatchLoadExampleDatabase", TargetTableName: "BatchLoadExampleTable", RecordVersion: 1, DataModelConfiguration: { DataModel: { TimeColumn: "timestamp", TimeUnit: "MILLISECONDS", DimensionMappings: [ { SourceColumn: "registration", DestinationColumn: "license" } ], MultiMeasureMappings: { Create batch load task 246 Amazon Timestream Developer Guide TargetMultiMeasureName: "mva_measure_name", MultiMeasureAttributeMappings: [ { SourceColumn: "wgt", TargetMultiMeasureAttributeName: "weight", MeasureValueType: "DOUBLE" }, { SourceColumn: "spd", TargetMultiMeasureAttributeName: "speed", MeasureValueType: "DOUBLE" }, { SourceColumn: "fuel_consumption", TargetMultiMeasureAttributeName: "fuel", MeasureValueType: "DOUBLE" } ] } } }, DataSourceConfiguration: { DataSourceS3Configuration: { BucketName: "test-batch-load-west-2", ObjectKeyPrefix: "sample.csv" }, DataFormat: "CSV" }, ReportConfiguration: { ReportS3Configuration: { BucketName: "test-batch-load-report-west-2", EncryptionOption: "SSE_S3" } } }; const command = new CreateBatchLoadTaskCommand(params); try { const data = await writeClient.send(command); console.log(`Created batch load task ` + data.TaskId); } catch (error) { console.log("Error creating table. 
", error); throw error; Create batch load task 247 Developer Guide Amazon Timestream } .NET using System; using System.IO; using System.Collections.Generic; using Amazon.TimestreamWrite; using Amazon.TimestreamWrite.Model; using System.Threading.Tasks; namespace TimestreamDotNetSample { public class CreateBatchLoadTaskExample { public const string DATABASE_NAME = "<database name>"; public const string TABLE_NAME = "<table name>"; public const string INPUT_BUCKET = "<input bucket name>"; public const string INPUT_OBJECT_KEY_PREFIX = "<CSV file name>"; public const string REPORT_BUCKET = "<report bucket name>"; public const long HT_TTL_HOURS = 24L; public const long CT_TTL_DAYS = 7L; private readonly AmazonTimestreamWriteClient writeClient; public CreateBatchLoadTaskExample(AmazonTimestreamWriteClient writeClient) { this.writeClient = writeClient; } public async Task CreateBatchLoadTask() { try { var createBatchLoadTaskRequest = new CreateBatchLoadTaskRequest { DataModelConfiguration = new DataModelConfiguration { DataModel = new DataModel { TimeColumn = "timestamp", TimeUnit = TimeUnit.SECONDS, DimensionMappings = new List<DimensionMapping>() { Create batch load task 248 Amazon Timestream Developer Guide new() { SourceColumn = "vehicle" }, new() { SourceColumn = "registration", DestinationColumn = "license" } }, MultiMeasureMappings = new MultiMeasureMappings { TargetMultiMeasureName = "mva_measure_name", MultiMeasureAttributeMappings = new List<MultiMeasureAttributeMapping>() { new() { SourceColumn = "wgt", TargetMultiMeasureAttributeName = "weight", MeasureValueType = ScalarMeasureValueType.DOUBLE }, new() { SourceColumn = "spd", TargetMultiMeasureAttributeName = "speed", MeasureValueType = ScalarMeasureValueType.DOUBLE }, new() { SourceColumn = "fuel", TargetMultiMeasureAttributeName = "fuel", MeasureValueType = ScalarMeasureValueType.DOUBLE }, new() { SourceColumn = "miles", Create batch load task 249 Amazon Timestream Developer Guide TargetMultiMeasureAttributeName = "miles", MeasureValueType = ScalarMeasureValueType.DOUBLE } } } } }, DataSourceConfiguration = new DataSourceConfiguration { DataSourceS3Configuration = new DataSourceS3Configuration { BucketName = INPUT_BUCKET, ObjectKeyPrefix = INPUT_OBJECT_KEY_PREFIX }, DataFormat = "CSV" }, ReportConfiguration = new ReportConfiguration { ReportS3Configuration = new ReportS3Configuration { BucketName = REPORT_BUCKET } }, TargetDatabaseName = DATABASE_NAME, TargetTableName = TABLE_NAME }; CreateBatchLoadTaskResponse response = await writeClient.CreateBatchLoadTaskAsync(createBatchLoadTaskRequest); Console.WriteLine($"Task created: " + response.TaskId); } catch (Exception e) { Console.WriteLine("Create batch load task failed:" + e.ToString()); } } } } using Amazon.TimestreamWrite; using Amazon.TimestreamWrite.Model; Create batch load task 250 Amazon Timestream Developer Guide using Amazon; using Amazon.TimestreamQuery; using System.Threading.Tasks; using System; using CommandLine; static class Constants { } namespace TimestreamDotNetSample { class MainClass { public class Options { } public static void Main(string[] args) { Parser.Default.ParseArguments<Options>(args) .WithParsed<Options>(o => { MainAsync().GetAwaiter().GetResult(); }); } static async Task MainAsync() { var writeClientConfig = new AmazonTimestreamWriteConfig { ServiceURL = "<service URL>", Timeout = TimeSpan.FromSeconds(20), MaxErrorRetry = 10 }; var writeClient = new AmazonTimestreamWriteClient(writeClientConfig); var example = new 
CreateBatchLoadTaskExample(writeClient); await example.CreateBatchLoadTask(); } } } Create batch load task 251 Amazon Timestream Developer Guide Describe batch load task You can use the following code snippets to describe batch load tasks. Java public void describeBatchLoadTask(String taskId) { final DescribeBatchLoadTaskResponse batchLoadTaskResponse = amazonTimestreamWrite .describeBatchLoadTask(DescribeBatchLoadTaskRequest.builder() .taskId(taskId) .build()); System.out.println("Task id: " + batchLoadTaskResponse.batchLoadTaskDescription().taskId()); System.out.println("Status: " + batchLoadTaskResponse.batchLoadTaskDescription().taskStatusAsString()); System.out.println("Records processed: " + batchLoadTaskResponse.batchLoadTaskDescription().progressReport().recordsProcessed()); } Go package main import ( "fmt" "context" "log" "github.com/aws/aws-sdk-go-v2/aws" "github.com/aws/aws-sdk-go-v2/config" "github.com/aws/aws-sdk-go-v2/service/timestreamwrite" ) func main() { customResolver := aws.EndpointResolverWithOptionsFunc(func(service, region string, options ...interface{}) (aws.Endpoint, error) { if service == timestreamwrite.ServiceID && region == "us-west-2" { return aws.Endpoint{ PartitionID: "aws", URL: <URL>, Describe batch load task 252 Amazon Timestream Developer Guide SigningRegion: "us-west-2", }, nil } return aws.Endpoint{}, &aws.EndpointNotFoundError{} }) cfg, err := config.LoadDefaultConfig(context.TODO(), config.WithEndpointResolverWithOptions(customResolver), config.WithRegion("us- west-2")) if err != nil { log.Fatalf("failed to load configuration, %v", err) } client := timestreamwrite.NewFromConfig(cfg) response, err := client.DescribeBatchLoadTask(context.TODO(), ×treamwrite.DescribeBatchLoadTaskInput{ TaskId: aws.String("<TaskId>"), }) fmt.Println(aws.ToString(response.BatchLoadTaskDescription.TaskId)) } Python import boto3 from botocore.config import Config INGEST_ENDPOINT="<url>" REGION="us-west-2" HT_TTL_HOURS = 24 CT_TTL_DAYS = 7 TASK_ID = "<task id>" def describe_batch_load_task(client, task_id): try: result
{ customResolver := aws.EndpointResolverWithOptionsFunc(func(service, region string, options ...interface{}) (aws.Endpoint, error) { if service == timestreamwrite.ServiceID && region == "us-west-2" { return aws.Endpoint{ PartitionID: "aws", URL: <URL>, Describe batch load task 252 Amazon Timestream Developer Guide SigningRegion: "us-west-2", }, nil } return aws.Endpoint{}, &aws.EndpointNotFoundError{} }) cfg, err := config.LoadDefaultConfig(context.TODO(), config.WithEndpointResolverWithOptions(customResolver), config.WithRegion("us- west-2")) if err != nil { log.Fatalf("failed to load configuration, %v", err) } client := timestreamwrite.NewFromConfig(cfg) response, err := client.DescribeBatchLoadTask(context.TODO(), ×treamwrite.DescribeBatchLoadTaskInput{ TaskId: aws.String("<TaskId>"), }) fmt.Println(aws.ToString(response.BatchLoadTaskDescription.TaskId)) } Python import boto3 from botocore.config import Config INGEST_ENDPOINT="<url>" REGION="us-west-2" HT_TTL_HOURS = 24 CT_TTL_DAYS = 7 TASK_ID = "<task id>" def describe_batch_load_task(client, task_id): try: result = client.describe_batch_load_task(TaskId=task_id) print("Successfully described batch load task: ", result) except Exception as err: print("Describe batch load task job failed:", err) Describe batch load task 253 Amazon Timestream Developer Guide if __name__ == '__main__': session = boto3.Session() write_client = session.client('timestream-write', \ endpoint_url=INGEST_ENDPOINT, region_name=REGION, \ config=Config(read_timeout=20, max_pool_connections = 5000, retries={'max_attempts': 10})) describe_batch_load_task(write_client, TASK_ID) Node.js The following snippet uses AWS SDK for JavaScript v3. For more information about how to install the client and usage, see Timestream Write Client - AWS SDK for JavaScript v3. For API details, see Class DescribeBatchLoadCommand and DescribeBatchLoadTask. 
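The Node.js snippet referenced above follows next. As a complementary pattern that the per-language snippets do not show, here is a minimal Python (boto3) sketch that polls DescribeBatchLoadTask until the task reaches a terminal status. It is illustrative only and not part of the sample applications; the write_client variable, the polling interval, and the set of terminal status values are assumptions.

import time

# Minimal sketch (not from the sample applications): poll a batch load task
# until it reaches a terminal status. Assumes `write_client` is a boto3
# 'timestream-write' client and that SUCCEEDED, FAILED, and PROGRESS_STOPPED
# are the terminal statuses of interest.
def wait_for_batch_load_task(write_client, task_id, poll_seconds=30):
    terminal_statuses = {"SUCCEEDED", "FAILED", "PROGRESS_STOPPED"}
    while True:
        description = write_client.describe_batch_load_task(TaskId=task_id)["BatchLoadTaskDescription"]
        status = description["TaskStatus"]
        print("Task", task_id, "is", status)
        if status in terminal_statuses:
            return description
        time.sleep(poll_seconds)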
import { TimestreamWriteClient, DescribeBatchLoadTaskCommand } from "@aws-sdk/ client-timestream-write"; const writeClient = new TimestreamWriteClient({ region: "<region>", endpoint: "<endpoint>" }); const params = { TaskId: "<TaskId>" }; const command = new DescribeBatchLoadTaskCommand(params); try { const data = await writeClient.send(command); console.log(`Batch load task has id ` + data.BatchLoadTaskDescription.TaskId); } catch (error) { if (error.code === 'ResourceNotFoundException') { console.log("Batch load task doesn't exist."); } else { console.log("Describe batch load task failed.", error); throw error; } } .NET using System; Describe batch load task 254 Amazon Timestream Developer Guide using System.IO; using System.Collections.Generic; using Amazon.TimestreamWrite; using Amazon.TimestreamWrite.Model; using System.Threading.Tasks; namespace TimestreamDotNetSample { public class DescribeBatchLoadTaskExample { private readonly AmazonTimestreamWriteClient writeClient; public DescribeBatchLoadTaskExample(AmazonTimestreamWriteClient writeClient) { this.writeClient = writeClient; } public async Task DescribeBatchLoadTask(String taskId) { try { var describeBatchLoadTaskRequest = new DescribeBatchLoadTaskRequest { TaskId = taskId }; DescribeBatchLoadTaskResponse response = await writeClient.DescribeBatchLoadTaskAsync(describeBatchLoadTaskRequest); Console.WriteLine($"Task has id: {response.BatchLoadTaskDescription.TaskId}"); } catch (ResourceNotFoundException) { Console.WriteLine("Batch load task does not exist."); } catch (Exception e) { Console.WriteLine("Describe batch load task failed:" + e.ToString()); } } } } Describe batch load task 255 Amazon Timestream Developer Guide using Amazon.TimestreamWrite; using Amazon.TimestreamWrite.Model; using Amazon; using Amazon.TimestreamQuery; using System.Threading.Tasks; using System; using CommandLine; static class Constants { } namespace TimestreamDotNetSample { class MainClass { public class Options { } public static void Main(string[] args) { Parser.Default.ParseArguments<Options>(args) .WithParsed<Options>(o => { MainAsync().GetAwaiter().GetResult(); }); } static async Task MainAsync() { var writeClientConfig = new AmazonTimestreamWriteConfig { ServiceURL = "<service URL>", Timeout = TimeSpan.FromSeconds(20), MaxErrorRetry = 10 }; var writeClient = new AmazonTimestreamWriteClient(writeClientConfig); var example = new DescribeBatchLoadTaskExample(writeClient); await example.DescribeBatchLoadTask("<batch load task id>"); } } } Describe batch load task 256 Amazon Timestream List batch load tasks You can use the following code snippets to list batch load tasks. 
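Before the per-language snippets, here is a minimal Python (boto3) sketch that pages through the results with NextToken and prints each task's ID and reported status. It is illustrative only; the write_client variable is assumed to be a 'timestream-write' client, and the presence of a TaskStatus field in each summary is an assumption.

# Minimal sketch (assumes `write_client` is a boto3 'timestream-write' client).
# Pages through all batch load tasks and prints the task ID and, if present,
# the task status reported in each summary.
def print_all_batch_load_tasks(write_client):
    next_token = None
    while True:
        kwargs = {"MaxResults": 10}
        if next_token:
            kwargs["NextToken"] = next_token
        response = write_client.list_batch_load_tasks(**kwargs)
        for task in response["BatchLoadTasks"]:
            print(task["TaskId"], task.get("TaskStatus", "UNKNOWN"))
        next_token = response.get("NextToken")
        if not next_token:
            break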
Java Developer Guide public void listBatchLoadTasks() { final ListBatchLoadTasksResponse listBatchLoadTasksResponse = amazonTimestreamWrite .listBatchLoadTasks(ListBatchLoadTasksRequest.builder() .maxResults(15) .build()); for (BatchLoadTask batchLoadTask : listBatchLoadTasksResponse.batchLoadTasks()) { System.out.println(batchLoadTask.taskId()); } } Go package main import ( "fmt" "context" "log" "github.com/aws/aws-sdk-go-v2/aws" "github.com/aws/aws-sdk-go-v2/config" "github.com/aws/aws-sdk-go-v2/service/timestreamwrite" ) func main() { customResolver := aws.EndpointResolverWithOptionsFunc(func(service, region string, options ...interface{}) (aws.Endpoint, error) { if service == timestreamwrite.ServiceID && region == "us-west-2" { return aws.Endpoint{ PartitionID: "aws", URL: <URL>, SigningRegion: "us-west-2", }, nil } return aws.Endpoint{}, &aws.EndpointNotFoundError{} List batch load tasks 257 Amazon Timestream }) Developer Guide cfg, err := config.LoadDefaultConfig(context.TODO(), config.WithEndpointResolverWithOptions(customResolver), config.WithRegion("us- west-2")) if err != nil { log.Fatalf("failed to load configuration, %v", err) } client := timestreamwrite.NewFromConfig(cfg) listBatchLoadTasksMaxResult := int32(15) response, err := client.ListBatchLoadTasks(context.TODO(), ×treamwrite.ListBatchLoadTasksInput{ MaxResults: &listBatchLoadTasksMaxResult, }) for i, task := range response.BatchLoadTasks { fmt.Println(i, aws.ToString(task.TaskId)) } } Python import boto3 from botocore.config import Config INGEST_ENDPOINT = "<url>" REGION = "us-west-2" HT_TTL_HOURS = 24 CT_TTL_DAYS = 7 def print_batch_load_tasks(batch_load_tasks): for batch_load_task in batch_load_tasks: print(batch_load_task['TaskId']) def list_batch_load_tasks(client): print("\nListing batch load tasks") try: response = client.list_batch_load_tasks(MaxResults=10) List batch load tasks 258 Amazon Timestream Developer Guide print_batch_load_tasks(response['BatchLoadTasks']) next_token = response.get('NextToken', None) while next_token: response = client.list_batch_load_tasks( NextToken=next_token, MaxResults=10) print_batch_load_tasks(response['BatchLoadTasks']) next_token = response.get('NextToken', None) except Exception as err: print("List batch load tasks failed:", err) raise err if __name__ == '__main__': session = boto3.Session() write_client = session.client('timestream-write', endpoint_url=INGEST_ENDPOINT, region_name=REGION, config=Config(read_timeout=20, max_pool_connections=5000, retries={'max_attempts': 10})) list_batch_load_tasks(write_client) Node.js The following snippet uses AWS SDK for JavaScript v3. For more information about how to install the client and usage, see Timestream Write Client - AWS SDK for JavaScript v3. For API details, see Class DescribeBatchLoadCommand and DescribeBatchLoadTask. import { TimestreamWriteClient, ListBatchLoadTasksCommand } from "@aws-sdk/client- timestream-write"; const writeClient = new TimestreamWriteClient({ region: "<region>", endpoint: "<endpoint>" }); const params = { MaxResults: <15> }; const command = new ListBatchLoadTasksCommand(params); getBatchLoadTasksList(null); async function getBatchLoadTasksList(nextToken) { if (nextToken) { List batch load tasks 259 Amazon Timestream Developer Guide params.NextToken = nextToken; } try { const data = await writeClient.send(command); data.BatchLoadTasks.forEach(function (task) { console.log(task.TaskId); }); if (data.NextToken) { return getBatchLoadTasksList(data.NextToken); } } catch (error) {
information about how to install the client and usage, see Timestream Write Client - AWS SDK for JavaScript v3. For API details, see Class DescribeBatchLoadCommand and DescribeBatchLoadTask. import { TimestreamWriteClient, ListBatchLoadTasksCommand } from "@aws-sdk/client- timestream-write"; const writeClient = new TimestreamWriteClient({ region: "<region>", endpoint: "<endpoint>" }); const params = { MaxResults: <15> }; const command = new ListBatchLoadTasksCommand(params); getBatchLoadTasksList(null); async function getBatchLoadTasksList(nextToken) { if (nextToken) { List batch load tasks 259 Amazon Timestream Developer Guide params.NextToken = nextToken; } try { const data = await writeClient.send(command); data.BatchLoadTasks.forEach(function (task) { console.log(task.TaskId); }); if (data.NextToken) { return getBatchLoadTasksList(data.NextToken); } } catch (error) { console.log("Error while listing batch load tasks", error); } } .NET using System; using System.IO; using System.Collections.Generic; using Amazon.TimestreamWrite; using Amazon.TimestreamWrite.Model; using System.Threading.Tasks; namespace TimestreamDotNetSample { public class ListBatchLoadTasksExample { private readonly AmazonTimestreamWriteClient writeClient; public ListBatchLoadTasksExample(AmazonTimestreamWriteClient writeClient) { this.writeClient = writeClient; } public async Task ListBatchLoadTasks() { Console.WriteLine("Listing batch load tasks"); try List batch load tasks 260 Amazon Timestream { Developer Guide var listBatchLoadTasksRequest = new ListBatchLoadTasksRequest { MaxResults = 15 }; ListBatchLoadTasksResponse response = await writeClient.ListBatchLoadTasksAsync(listBatchLoadTasksRequest); PrintBatchLoadTasks(response.BatchLoadTasks); var nextToken = response.NextToken; while (nextToken != null) { listBatchLoadTasksRequest.NextToken = nextToken; response = await writeClient.ListBatchLoadTasksAsync(listBatchLoadTasksRequest); PrintBatchLoadTasks(response.BatchLoadTasks); nextToken = response.NextToken; } } catch (Exception e) { Console.WriteLine("List batch load tasks failed:" + e.ToString()); } } private void PrintBatchLoadTasks(List<BatchLoadTask> tasks) { foreach (BatchLoadTask task in tasks) Console.WriteLine($"Task:{task.TaskId}"); } } } using Amazon.TimestreamWrite; using Amazon.TimestreamWrite.Model; using Amazon; using Amazon.TimestreamQuery; using System.Threading.Tasks; using System; using CommandLine; static class Constants List batch load tasks 261 Amazon Timestream Developer Guide { } namespace TimestreamDotNetSample { class MainClass { public class Options { } public static void Main(string[] args) { Parser.Default.ParseArguments<Options>(args) .WithParsed<Options>(o => { MainAsync().GetAwaiter().GetResult(); }); } static async Task MainAsync() { var writeClientConfig = new AmazonTimestreamWriteConfig { ServiceURL = "<service URL>", Timeout = TimeSpan.FromSeconds(20), MaxErrorRetry = 10 }; var writeClient = new AmazonTimestreamWriteClient(writeClientConfig); var example = new ListBatchLoadTasksExample(writeClient); await example.ListBatchLoadTasks(); } } } Resume batch load task You can use the following code snippets to resume batch load tasks. 
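Before the per-language snippets, here is a minimal Python (boto3) sketch of a guarded resume: it describes the task first and calls ResumeBatchLoadTask only when the reported status suggests the task was stopped. The write_client variable and the PROGRESS_STOPPED status check are assumptions made for illustration; adjust them to your workflow.

# Minimal sketch (assumes `write_client` is a boto3 'timestream-write' client).
# Only attempts a resume when the task reports a stopped status.
def resume_if_stopped(write_client, task_id):
    description = write_client.describe_batch_load_task(TaskId=task_id)["BatchLoadTaskDescription"]
    status = description["TaskStatus"]
    if status == "PROGRESS_STOPPED":  # assumed resumable status
        write_client.resume_batch_load_task(TaskId=task_id)
        print("Resume requested for task", task_id)
    else:
        print("Task", task_id, "is in status", status, "- not resuming")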
Java public void resumeBatchLoadTask(String taskId) { Resume batch load task 262 Amazon Timestream try { amazonTimestreamWrite Developer Guide .resumeBatchLoadTask(ResumeBatchLoadTaskRequest.builder() .taskId(taskId) .build()); System.out.println("Successfully resumed batch load task."); } catch (ValidationException validationException) { System.out.println(validationException.getMessage()); } } Go package main import ( "fmt" "context" "log" "github.com/aws/aws-sdk-go-v2/aws" "github.com/aws/aws-sdk-go-v2/config" "github.com/aws/aws-sdk-go-v2/service/timestreamwrite" ) func main() { customResolver := aws.EndpointResolverWithOptionsFunc(func(service, region string, options ...interface{}) (aws.Endpoint, error) { if service == timestreamwrite.ServiceID && region == "us-west-2" { return aws.Endpoint{ PartitionID: "aws", URL: <URL>, SigningRegion: "us-west-2", }, nil } return aws.Endpoint{}, &aws.EndpointNotFoundError{} }) cfg, err := config.LoadDefaultConfig(context.TODO(), config.WithEndpointResolverWithOptions(customResolver), config.WithRegion("us- west-2")) Resume batch load task 263 Amazon Timestream if err != nil { Developer Guide log.Fatalf("failed to load configuration, %v", err) } client := timestreamwrite.NewFromConfig(cfg) response, err := client.ResumeBatchLoadTask(context.TODO(), ×treamwrite.ResumeBatchLoadTaskInput{ TaskId: aws.String("TaskId"), }) if err != nil { fmt.Println("Error:") fmt.Println(err) } else { fmt.Println("Resume batch load task is successful") fmt.Println(response) } } Python import boto3 from botocore.config import Config INGEST_ENDPOINT="<url>" REGION="us-west-2" HT_TTL_HOURS = 24 CT_TTL_DAYS = 7 TASK_ID = "<TaskId>" def resume_batch_load_task(client, task_id): try: result = client.resume_batch_load_task(TaskId=task_id) print("Successfully resumed batch load task: ", result) except Exception as err: print("Resume batch load task failed:", err) if __name__ == '__main__': session = boto3.Session() write_client = session.client('timestream-write', \ Resume batch load task 264 Amazon Timestream Developer Guide endpoint_url=INGEST_ENDPOINT, region_name=REGION, \ config=Config(read_timeout=20, max_pool_connections = 5000, retries={'max_attempts': 10})) resume_batch_load_task(write_client, TASK_ID) Node.js The following snippet uses AWS SDK for JavaScript v3. For more information about how to install the client and usage, see Timestream Write Client - AWS SDK for JavaScript v3. For API details, see Class CreateBatchLoadCommand and CreateBatchLoadTask. 
import { TimestreamWriteClient, ResumeBatchLoadTaskCommand } from "@aws-sdk/client- timestream-write"; const writeClient = new TimestreamWriteClient({ region: "<region>", endpoint: "<endpoint>" }); const params = { TaskId: "<TaskId>" }; const command = new ResumeBatchLoadTaskCommand(params); try { const data = await writeClient.send(command); console.log("Resumed batch load task"); } catch (error) { console.log("Resume batch load task failed.", error); throw error; } .NET using System; using System.IO; using System.Collections.Generic; using Amazon.TimestreamWrite; using Amazon.TimestreamWrite.Model; using System.Threading.Tasks; namespace TimestreamDotNetSample { Resume batch load task 265 Amazon Timestream Developer Guide public class ResumeBatchLoadTaskExample { private readonly AmazonTimestreamWriteClient writeClient; public ResumeBatchLoadTaskExample(AmazonTimestreamWriteClient writeClient) { this.writeClient = writeClient; } public async Task ResumeBatchLoadTask(String taskId) { try { var resumeBatchLoadTaskRequest = new ResumeBatchLoadTaskRequest { TaskId = taskId }; ResumeBatchLoadTaskResponse response = await writeClient.ResumeBatchLoadTaskAsync(resumeBatchLoadTaskRequest); Console.WriteLine("Successfully resumed batch load task."); } catch (ResourceNotFoundException) { Console.WriteLine("Batch load task does not exist."); } catch (Exception e) { Console.WriteLine("Resume batch load task failed: " + e.ToString()); } } } } Create scheduled query You can use the following code snippets to create a scheduled query with multi-measure mapping. Java public static String DATABASE_NAME = "devops_sample_application"; public static String TABLE_NAME = "host_metrics_sample_application"; public static String HOSTNAME = "host-24Gju"; Create scheduled query 266 Amazon Timestream Developer Guide public static String SQ_NAME = "daily-sample"; public static String SCHEDULE_EXPRESSION = "cron(0/2 * * * ? *)"; // Find the average,
await writeClient.ResumeBatchLoadTaskAsync(resumeBatchLoadTaskRequest); Console.WriteLine("Successfully resumed batch load task."); } catch (ResourceNotFoundException) { Console.WriteLine("Batch load task does not exist."); } catch (Exception e) { Console.WriteLine("Resume batch load task failed: " + e.ToString()); } } } } Create scheduled query You can use the following code snippets to create a scheduled query with multi-measure mapping. Java public static String DATABASE_NAME = "devops_sample_application"; public static String TABLE_NAME = "host_metrics_sample_application"; public static String HOSTNAME = "host-24Gju"; Create scheduled query 266 Amazon Timestream Developer Guide public static String SQ_NAME = "daily-sample"; public static String SCHEDULE_EXPRESSION = "cron(0/2 * * * ? *)"; // Find the average, p90, p95, and p99 CPU utilization for a specific EC2 host over the past 2 hours. public static String QUERY = "SELECT region, az, hostname, BIN(time, 15s) AS binned_timestamp, " + "ROUND(AVG(cpu_utilization), 2) AS avg_cpu_utilization, " + "ROUND(APPROX_PERCENTILE(cpu_utilization, 0.9), 2) AS p90_cpu_utilization, " + "ROUND(APPROX_PERCENTILE(cpu_utilization, 0.95), 2) AS p95_cpu_utilization, " + "ROUND(APPROX_PERCENTILE(cpu_utilization, 0.99), 2) AS p99_cpu_utilization " + "FROM " + DATABASE_NAME + "." + TABLE_NAME + " " + "WHERE measure_name = 'metrics' " + "AND hostname = '" + HOSTNAME + "' " + "AND time > ago(2h) " + "GROUP BY region, hostname, az, BIN(time, 15s) " + "ORDER BY binned_timestamp ASC " + "LIMIT 5"; public String createScheduledQuery(String topic_arn, String role_arn, String database_name, String table_name) { System.out.println("Creating Scheduled Query"); List<Pair<String, MeasureValueType>> sourceColToMeasureValueTypes = Arrays.asList( Pair.of("avg_cpu_utilization", DOUBLE), Pair.of("p90_cpu_utilization", DOUBLE), Pair.of("p95_cpu_utilization", DOUBLE), Pair.of("p99_cpu_utilization", DOUBLE)); CreateScheduledQueryRequest createScheduledQueryRequest = new CreateScheduledQueryRequest() .withName(SQ_NAME) .withQueryString(QUERY) .withScheduleConfiguration(new ScheduleConfiguration() .withScheduleExpression(SCHEDULE_EXPRESSION)) .withNotificationConfiguration(new NotificationConfiguration() .withSnsConfiguration(new SnsConfiguration() .withTopicArn(topic_arn))) .withTargetConfiguration(new TargetConfiguration().withTimestreamConfiguration(new TimestreamConfiguration() Create scheduled query 267 Amazon Timestream Developer Guide .withDatabaseName(database_name) .withTableName(table_name) .withTimeColumn("binned_timestamp") .withDimensionMappings(Arrays.asList( new DimensionMapping() .withName("region") .withDimensionValueType("VARCHAR"), new DimensionMapping() .withName("az") .withDimensionValueType("VARCHAR"), new DimensionMapping() .withName("hostname") .withDimensionValueType("VARCHAR") )) .withMultiMeasureMappings(new MultiMeasureMappings() .withTargetMultiMeasureName("multi-metrics") .withMultiMeasureAttributeMappings( sourceColToMeasureValueTypes.stream() .map(pair -> new MultiMeasureAttributeMapping() .withMeasureValueType(pair.getValue().name()) .withSourceColumn(pair.getKey())) .collect(Collectors.toList()))))) .withErrorReportConfiguration(new ErrorReportConfiguration() .withS3Configuration(new S3Configuration() .withBucketName(timestreamDependencyHelper.getS3ErrorReportBucketName()))) .withScheduledQueryExecutionRoleArn(role_arn); try { final CreateScheduledQueryResult createScheduledQueryResult = 
queryClient.createScheduledQuery(createScheduledQueryRequest); final String scheduledQueryArn = createScheduledQueryResult.getArn(); System.out.println("Successfully created scheduled query : " + scheduledQueryArn); return scheduledQueryArn; } catch (Exception e) { System.out.println("Scheduled Query creation failed: " + e); throw e; } } Create scheduled query 268 Amazon Timestream Java v2 Developer Guide public static String DATABASE_NAME = "testJavaV2DB"; public static String TABLE_NAME = "testJavaV2Table"; public static String HOSTNAME = "host-24Gju"; public static String SQ_NAME = "daily-sample"; public static String SCHEDULE_EXPRESSION = "cron(0/2 * * * ? *)"; // Find the average, p90, p95, and p99 CPU utilization for a specific EC2 host over the past 2 hours. public static String VALID_QUERY = "SELECT region, az, hostname, BIN(time, 15s) AS binned_timestamp, " + "ROUND(AVG(cpu_utilization), 2) AS avg_cpu_utilization, " + "ROUND(APPROX_PERCENTILE(cpu_utilization, 0.9), 2) AS p90_cpu_utilization, " + "ROUND(APPROX_PERCENTILE(cpu_utilization, 0.95), 2) AS p95_cpu_utilization, " + "ROUND(APPROX_PERCENTILE(cpu_utilization, 0.99), 2) AS p99_cpu_utilization " + "FROM " + DATABASE_NAME + "." + TABLE_NAME + " " + "WHERE measure_name = 'metrics' " + "AND hostname = '" + HOSTNAME + "' " + "AND time > ago(2h) " + "GROUP BY region, hostname, az, BIN(time, 15s) " + "ORDER BY binned_timestamp ASC " + "LIMIT 5"; private String createScheduledQueryHelper(String topicArn, String roleArn, String s3ErrorReportBucketName, String query, TargetConfiguration targetConfiguration) { System.out.println("Creating Scheduled Query"); CreateScheduledQueryRequest createScheduledQueryRequest = CreateScheduledQueryRequest.builder() .name(SQ_NAME) .queryString(query) .scheduleConfiguration(ScheduleConfiguration.builder() .scheduleExpression(SCHEDULE_EXPRESSION) .build()) .notificationConfiguration(NotificationConfiguration.builder() .snsConfiguration(SnsConfiguration.builder() .topicArn(topicArn) .build()) .build()) .targetConfiguration(targetConfiguration) .errorReportConfiguration(ErrorReportConfiguration.builder() Create scheduled query 269 Amazon Timestream Developer Guide .s3Configuration(S3Configuration.builder() .bucketName(s3ErrorReportBucketName) .objectKeyPrefix(SCHEDULED_QUERY_EXAMPLE) .build()) .build()) .scheduledQueryExecutionRoleArn(roleArn) .build(); try { final CreateScheduledQueryResponse response = queryClient.createScheduledQuery(createScheduledQueryRequest); final String scheduledQueryArn = response.arn(); System.out.println("Successfully created scheduled query : " + scheduledQueryArn); return scheduledQueryArn; } catch (Exception e) { System.out.println("Scheduled Query creation failed: " + e); throw e; } } public String createScheduledQuery(String topicArn, String roleArn, String databaseName, String tableName, String s3ErrorReportBucketName) { List<Pair<String, MeasureValueType>> sourceColToMeasureValueTypes = Arrays.asList( Pair.of("avg_cpu_utilization", DOUBLE), Pair.of("p90_cpu_utilization", DOUBLE), Pair.of("p95_cpu_utilization", DOUBLE), Pair.of("p99_cpu_utilization", DOUBLE)); TargetConfiguration targetConfiguration = TargetConfiguration.builder() .timestreamConfiguration(TimestreamConfiguration.builder() .databaseName(databaseName) .tableName(tableName) .timeColumn("binned_timestamp") .dimensionMappings(Arrays.asList( DimensionMapping.builder() .name("region") .dimensionValueType("VARCHAR") .build(), DimensionMapping.builder() .name("az") .dimensionValueType("VARCHAR") Create scheduled 
query 270 Amazon Timestream Developer Guide .build(), DimensionMapping.builder() .name("hostname") .dimensionValueType("VARCHAR") .build() )) .multiMeasureMappings(MultiMeasureMappings.builder() .targetMultiMeasureName("multi-metrics") .multiMeasureAttributeMappings( sourceColToMeasureValueTypes.stream() .map(pair -> MultiMeasureAttributeMapping.builder() .measureValueType(pair.getValue().name()) .sourceColumn(pair.getKey()) .build()) .collect(Collectors.toList())) .build()) .build()) .build(); return createScheduledQueryHelper(topicArn, roleArn, s3ErrorReportBucketName, VALID_QUERY, targetConfiguration); }} Go SQ_ERROR_CONFIGURATION_S3_BUCKET_NAME_PREFIX = "sq-error-configuration-sample-s3- bucket-" HOSTNAME = "host-24Gju" SQ_NAME = "daily-sample" SCHEDULE_EXPRESSION = "cron(0/1 * * * ? *)" QUERY = "SELECT region, az, hostname, BIN(time, 15s) AS binned_timestamp, " + "ROUND(AVG(cpu_utilization), 2) AS avg_cpu_utilization, " + "ROUND(APPROX_PERCENTILE(cpu_utilization, 0.9), 2) AS p90_cpu_utilization, " + "ROUND(APPROX_PERCENTILE(cpu_utilization, 0.95), 2) AS p95_cpu_utilization, " + "ROUND(APPROX_PERCENTILE(cpu_utilization, 0.99), 2) AS p99_cpu_utilization " + "FROM %s.%s " + "WHERE measure_name = 'metrics' " + "AND hostname = '" + HOSTNAME + "' " + "AND time > ago(2h) " + "GROUP BY region, hostname, az, BIN(time, 15s) " + Create scheduled query 271 Amazon Timestream Developer Guide "ORDER BY binned_timestamp ASC " + "LIMIT 5" s3BucketName = utils.SQ_ERROR_CONFIGURATION_S3_BUCKET_NAME_PREFIX + generateRandomStringWithSize(5) func generateRandomStringWithSize(size int) string { rand.Seed(time.Now().UnixNano()) alphaNumericList := []rune("abcdefghijklmnopqrstuvwxyz0123456789") randomPrefix := make([]rune, size) for i := range randomPrefix { randomPrefix[i] = alphaNumericList[rand.Intn(len(alphaNumericList))] } return string(randomPrefix) } func (timestreamBuilder TimestreamBuilder) createScheduledQuery(topicArn string,
0.95), 2) AS p95_cpu_utilization, " + "ROUND(APPROX_PERCENTILE(cpu_utilization, 0.99), 2) AS p99_cpu_utilization " + "FROM %s.%s " + "WHERE measure_name = 'metrics' " + "AND hostname = '" + HOSTNAME + "' " + "AND time > ago(2h) " + "GROUP BY region, hostname, az, BIN(time, 15s) " + Create scheduled query 271 Amazon Timestream Developer Guide "ORDER BY binned_timestamp ASC " + "LIMIT 5" s3BucketName = utils.SQ_ERROR_CONFIGURATION_S3_BUCKET_NAME_PREFIX + generateRandomStringWithSize(5) func generateRandomStringWithSize(size int) string { rand.Seed(time.Now().UnixNano()) alphaNumericList := []rune("abcdefghijklmnopqrstuvwxyz0123456789") randomPrefix := make([]rune, size) for i := range randomPrefix { randomPrefix[i] = alphaNumericList[rand.Intn(len(alphaNumericList))] } return string(randomPrefix) } func (timestreamBuilder TimestreamBuilder) createScheduledQuery(topicArn string, roleArn string, s3ErrorReportBucketName string, query string, targetConfiguration timestreamquery.TargetConfiguration) (string, error) { createScheduledQueryInput := ×treamquery.CreateScheduledQueryInput{ Name: aws.String(SQ_NAME), QueryString: aws.String(query), ScheduleConfiguration: ×treamquery.ScheduleConfiguration{ ScheduleExpression: aws.String(SCHEDULE_EXPRESSION), }, NotificationConfiguration: ×treamquery.NotificationConfiguration{ SnsConfiguration: ×treamquery.SnsConfiguration{ TopicArn: aws.String(topicArn), }, }, TargetConfiguration: &targetConfiguration, ErrorReportConfiguration: ×treamquery.ErrorReportConfiguration{ S3Configuration: ×treamquery.S3Configuration{ BucketName: aws.String(s3ErrorReportBucketName), }, }, ScheduledQueryExecutionRoleArn: aws.String(roleArn), } createScheduledQueryOutput, err := timestreamBuilder.QuerySvc.CreateScheduledQuery(createScheduledQueryInput) if err != nil { Create scheduled query 272 Amazon Timestream Developer Guide fmt.Printf("Error: %s", err.Error()) } else { fmt.Println("createScheduledQueryResult is successful") return *createScheduledQueryOutput.Arn, nil } return "", err } func (timestreamBuilder TimestreamBuilder) CreateValidScheduledQuery(topicArn string, roleArn string, s3ErrorReportBucketName string, sqDatabaseName string, sqTableName string, databaseName string, tableName string) (string, error) { targetConfiguration := timestreamquery.TargetConfiguration{ TimestreamConfiguration: ×treamquery.TimestreamConfiguration{ DatabaseName: aws.String(sqDatabaseName), TableName: aws.String(sqTableName), TimeColumn: aws.String("binned_timestamp"), DimensionMappings: []*timestreamquery.DimensionMapping{ { Name: aws.String("region"), DimensionValueType: aws.String("VARCHAR"), }, { Name: aws.String("az"), DimensionValueType: aws.String("VARCHAR"), }, { Name: aws.String("hostname"), DimensionValueType: aws.String("VARCHAR"), }, }, MultiMeasureMappings: ×treamquery.MultiMeasureMappings{ TargetMultiMeasureName: aws.String("multi-metrics"), MultiMeasureAttributeMappings: []*timestreamquery.MultiMeasureAttributeMapping{ { SourceColumn: aws.String("avg_cpu_utilization"), MeasureValueType: aws.String(timestreamquery.MeasureValueTypeDouble), }, { SourceColumn: aws.String("p90_cpu_utilization"), Create scheduled query 273 Amazon Timestream Developer Guide MeasureValueType: aws.String(timestreamquery.MeasureValueTypeDouble), }, { SourceColumn: aws.String("p95_cpu_utilization"), MeasureValueType: aws.String(timestreamquery.MeasureValueTypeDouble), }, { SourceColumn: aws.String("p99_cpu_utilization"), MeasureValueType: aws.String(timestreamquery.MeasureValueTypeDouble), }, }, }, }, } 
return timestreamBuilder.createScheduledQuery(topicArn, roleArn, s3ErrorReportBucketName, fmt.Sprintf(QUERY, databaseName, tableName), targetConfiguration) } Python HOSTNAME = "host-24Gju" SQ_NAME = "daily-sample" ERROR_BUCKET_NAME = "scheduledquerysamplerrorbucket" + ''.join([choice(ascii_lowercase) for _ in range(5)]) QUERY = \ "SELECT region, az, hostname, BIN(time, 15s) AS binned_timestamp, " \ " ROUND(AVG(cpu_utilization), 2) AS avg_cpu_utilization, " \ " ROUND(APPROX_PERCENTILE(cpu_utilization, 0.9), 2) AS p90_cpu_utilization, " \ " ROUND(APPROX_PERCENTILE(cpu_utilization, 0.95), 2) AS p95_cpu_utilization, " \ " ROUND(APPROX_PERCENTILE(cpu_utilization, 0.99), 2) AS p99_cpu_utilization " \ "FROM " + database_name + "." + table_name + " " \ "WHERE measure_name = 'metrics' " \ "AND hostname = '" + self.HOSTNAME + "' " \ "AND time > ago(2h) " \ "GROUP BY region, hostname, az, BIN(time, 15s) " \ "ORDER BY binned_timestamp ASC " \ Create scheduled query 274 Amazon Timestream "LIMIT 5" Developer Guide def create_scheduled_query_helper(self, topic_arn, role_arn, query, target_configuration): print("\nCreating Scheduled Query") schedule_configuration = { 'ScheduleExpression': 'cron(0/2 * * * ? *)' } notification_configuration = { 'SnsConfiguration': { 'TopicArn': topic_arn } } error_report_configuration = { 'S3Configuration': { 'BucketName': ERROR_BUCKET_NAME } } try: create_scheduled_query_response = \ query_client.create_scheduled_query(Name=self.SQ_NAME, QueryString=query, ScheduleConfiguration=schedule_configuration, NotificationConfiguration=notification_configuration, TargetConfiguration=target_configuration, ScheduledQueryExecutionRoleArn=role_arn, ErrorReportConfiguration=error_report_configuration ) print("Successfully created scheduled query : ", create_scheduled_query_response['Arn']) return create_scheduled_query_response['Arn'] except Exception as err: print("Scheduled Query creation failed:", err) raise err def create_valid_scheduled_query(self, topic_arn, role_arn): target_configuration = { 'TimestreamConfiguration': { 'DatabaseName': self.sq_database_name, 'TableName': self.sq_table_name, 'TimeColumn': 'binned_timestamp', 'DimensionMappings': [ {'Name': 'region', 'DimensionValueType': 'VARCHAR'}, Create scheduled query 275 Amazon Timestream Developer Guide {'Name': 'az', 'DimensionValueType': 'VARCHAR'}, {'Name': 'hostname', 'DimensionValueType': 'VARCHAR'} ], 'MultiMeasureMappings': { 'TargetMultiMeasureName': 'target_name', 'MultiMeasureAttributeMappings': [ {'SourceColumn': 'avg_cpu_utilization', 'MeasureValueType': 'DOUBLE', 'TargetMultiMeasureAttributeName': 'avg_cpu_utilization'}, {'SourceColumn': 'p90_cpu_utilization', 'MeasureValueType': 'DOUBLE', 'TargetMultiMeasureAttributeName': 'p90_cpu_utilization'}, {'SourceColumn': 'p95_cpu_utilization', 'MeasureValueType': 'DOUBLE', 'TargetMultiMeasureAttributeName': 'p95_cpu_utilization'}, {'SourceColumn': 'p99_cpu_utilization', 'MeasureValueType': 'DOUBLE', 'TargetMultiMeasureAttributeName': 'p99_cpu_utilization'}, ] } } } return self.create_scheduled_query_helper(topic_arn, role_arn, QUERY, target_configuration) Node.js The following snippet uses the AWS SDK for JavaScript V2 style. It is based on the sample application at Node.js sample Amazon Timestream for LiveAnalytics application on GitHub. 
const DATABASE_NAME = 'devops_sample_application'; const TABLE_NAME = 'host_metrics_sample_application'; const SQ_DATABASE_NAME = 'sq_result_database'; const SQ_TABLE_NAME = 'sq_result_table'; const HOSTNAME = "host-24Gju"; const SQ_NAME = "daily-sample"; const SCHEDULE_EXPRESSION = "cron(0/1 * * * ? *)"; // Find the average, p90, p95, and p99 CPU utilization for a specific EC2 host over the past 2 hours. const VALID_QUERY = "SELECT region, az, hostname, BIN(time, 15s) AS binned_timestamp, " + Create scheduled query 276 Amazon Timestream Developer Guide " ROUND(AVG(cpu_utilization), 2) AS avg_cpu_utilization, " + " ROUND(APPROX_PERCENTILE(cpu_utilization, 0.9), 2) AS p90_cpu_utilization, " + " ROUND(APPROX_PERCENTILE(cpu_utilization, 0.95), 2) AS p95_cpu_utilization, " + " ROUND(APPROX_PERCENTILE(cpu_utilization, 0.99), 2) AS p99_cpu_utilization " + "FROM " + DATABASE_NAME + "." + TABLE_NAME + " " + "WHERE measure_name = 'metrics' " + " AND hostname = '" + HOSTNAME + "' " + " AND time > ago(2h) " + "GROUP BY region, hostname, az, BIN(time, 15s) " + "ORDER BY binned_timestamp ASC " + "LIMIT 5"; async function createScheduledQuery(topicArn, roleArn, s3ErrorReportBucketName) { console.log("Creating Valid Scheduled Query"); const DimensionMappingList = [{ 'Name': 'region', 'DimensionValueType': 'VARCHAR' }, { 'Name': 'az', 'DimensionValueType': 'VARCHAR' }, { 'Name': 'hostname', 'DimensionValueType': 'VARCHAR' } ]; const MultiMeasureMappings = { TargetMultiMeasureName: "multi-metrics", MultiMeasureAttributeMappings: [{ 'SourceColumn': 'avg_cpu_utilization', 'MeasureValueType': 'DOUBLE' }, { 'SourceColumn': 'p90_cpu_utilization', 'MeasureValueType': 'DOUBLE' }, { 'SourceColumn': 'p95_cpu_utilization', 'MeasureValueType': 'DOUBLE' }, { 'SourceColumn': 'p99_cpu_utilization', Create scheduled query 277 Amazon Timestream
AND hostname = '" + HOSTNAME + "' " + " AND time > ago(2h) " + "GROUP BY region, hostname, az, BIN(time, 15s) " + "ORDER BY binned_timestamp ASC " + "LIMIT 5"; async function createScheduledQuery(topicArn, roleArn, s3ErrorReportBucketName) { console.log("Creating Valid Scheduled Query"); const DimensionMappingList = [{ 'Name': 'region', 'DimensionValueType': 'VARCHAR' }, { 'Name': 'az', 'DimensionValueType': 'VARCHAR' }, { 'Name': 'hostname', 'DimensionValueType': 'VARCHAR' } ]; const MultiMeasureMappings = { TargetMultiMeasureName: "multi-metrics", MultiMeasureAttributeMappings: [{ 'SourceColumn': 'avg_cpu_utilization', 'MeasureValueType': 'DOUBLE' }, { 'SourceColumn': 'p90_cpu_utilization', 'MeasureValueType': 'DOUBLE' }, { 'SourceColumn': 'p95_cpu_utilization', 'MeasureValueType': 'DOUBLE' }, { 'SourceColumn': 'p99_cpu_utilization', Create scheduled query 277 Amazon Timestream Developer Guide 'MeasureValueType': 'DOUBLE' }, ] } const timestreamConfiguration = { DatabaseName: SQ_DATABASE_NAME, TableName: SQ_TABLE_NAME, TimeColumn: "binned_timestamp", DimensionMappings: DimensionMappingList, MultiMeasureMappings: MultiMeasureMappings } const createScheduledQueryRequest = { Name: SQ_NAME, QueryString: VALID_QUERY, ScheduleConfiguration: { ScheduleExpression: SCHEDULE_EXPRESSION }, NotificationConfiguration: { SnsConfiguration: { TopicArn: topicArn } }, TargetConfiguration: { TimestreamConfiguration: timestreamConfiguration }, ScheduledQueryExecutionRoleArn: roleArn, ErrorReportConfiguration: { S3Configuration: { BucketName: s3ErrorReportBucketName } } }; try { const data = await queryClient.createScheduledQuery(createScheduledQueryRequest).promise(); console.log("Successfully created scheduled query: " + data.Arn); return data.Arn; } catch (err) { console.log("Scheduled Query creation failed: ", err); throw err; } Create scheduled query 278 Amazon Timestream } .NET Developer Guide public const string Hostname = "host-24Gju"; public const string SqName = "timestream-sample"; public const string SqDatabaseName = "sq_result_database"; public const string SqTableName = "sq_result_table"; public const string ErrorConfigurationS3BucketNamePrefix = "error-configuration- sample-s3-bucket-"; public const string ScheduleExpression = "cron(0/2 * * * ? *)"; // Find the average, p90, p95, and p99 CPU utilization for a specific EC2 host over the past 2 hours. public const string ValidQuery = "SELECT region, az, hostname, BIN(time, 15s) AS binned_timestamp, " + "ROUND(AVG(cpu_utilization), 2) AS avg_cpu_utilization, " + "ROUND(APPROX_PERCENTILE(cpu_utilization, 0.9), 2) AS p90_cpu_utilization, " + "ROUND(APPROX_PERCENTILE(cpu_utilization, 0.95), 2) AS p95_cpu_utilization, " + "ROUND(APPROX_PERCENTILE(cpu_utilization, 0.99), 2) AS p99_cpu_utilization " + "FROM " + Constants.DATABASE_NAME + "." 
+ Constants.TABLE_NAME + " " + "WHERE measure_name = 'metrics' " + "AND hostname = '" + Hostname + "' " + "AND time > ago(2h) " + "GROUP BY region, hostname, az, BIN(time, 15s) " + "ORDER BY binned_timestamp ASC " + "LIMIT 5"; private async Task<String> CreateValidScheduledQuery(string topicArn, string roleArn, string databaseName, string tableName, string s3ErrorReportBucketName) { List<MultiMeasureAttributeMapping> sourceColToMeasureValueTypes = new List<MultiMeasureAttributeMapping>() { new() { SourceColumn = "avg_cpu_utilization", MeasureValueType = MeasureValueType.DOUBLE.Value }, new() Create scheduled query 279 Amazon Timestream { SourceColumn = "p90_cpu_utilization", MeasureValueType = MeasureValueType.DOUBLE.Value Developer Guide }, new() { SourceColumn = "p95_cpu_utilization", MeasureValueType = MeasureValueType.DOUBLE.Value }, new() { SourceColumn = "p99_cpu_utilization", MeasureValueType = MeasureValueType.DOUBLE.Value } }; TargetConfiguration targetConfiguration = new TargetConfiguration() { TimestreamConfiguration = new TimestreamConfiguration() { DatabaseName = databaseName, TableName = tableName, TimeColumn = "binned_timestamp", DimensionMappings = new List<DimensionMapping>() { new() { Name = "region", DimensionValueType = "VARCHAR" }, new() { Name = "az", DimensionValueType = "VARCHAR" }, new() { Name = "hostname", DimensionValueType = "VARCHAR" } }, MultiMeasureMappings = new MultiMeasureMappings() { TargetMultiMeasureName = "multi-metrics", Create scheduled query 280 Amazon Timestream Developer Guide MultiMeasureAttributeMappings = sourceColToMeasureValueTypes } } }; return await CreateScheduledQuery(topicArn, roleArn, s3ErrorReportBucketName, ScheduledQueryConstants.ValidQuery, targetConfiguration); } private async Task<String> CreateScheduledQuery(string topicArn, string roleArn, string s3ErrorReportBucketName, string query, TargetConfiguration targetConfiguration) { try { Console.WriteLine("Creating Scheduled Query"); CreateScheduledQueryResponse response = await _amazonTimestreamQuery.CreateScheduledQueryAsync( new CreateScheduledQueryRequest() { Name = ScheduledQueryConstants.SqName, QueryString = query, ScheduleConfiguration = new ScheduleConfiguration() { ScheduleExpression = ScheduledQueryConstants.ScheduleExpression }, NotificationConfiguration = new NotificationConfiguration() { SnsConfiguration = new SnsConfiguration() { TopicArn = topicArn } }, TargetConfiguration = targetConfiguration, ErrorReportConfiguration = new ErrorReportConfiguration() { S3Configuration = new S3Configuration() { BucketName = s3ErrorReportBucketName } }, ScheduledQueryExecutionRoleArn = roleArn }); Console.WriteLine($"Successfully created scheduled query : {response.Arn}"); Create scheduled query 281 Amazon Timestream Developer Guide return response.Arn; } catch (Exception e) { Console.WriteLine($"Scheduled Query creation failed: {e}"); throw; } } List scheduled query You can use the following code snippets to list your scheduled queries. 
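Before the per-language snippets, here is a minimal Python (boto3) sketch that pages through ListScheduledQueries with NextToken. It is illustrative only; the query_client variable is assumed to be a 'timestream-query' client, and the presence of a State field in each summary is an assumption.

# Minimal sketch (assumes `query_client` is a boto3 'timestream-query' client).
# Pages through all scheduled queries and prints the ARN and state of each.
def print_all_scheduled_queries(query_client):
    next_token = None
    while True:
        kwargs = {"MaxResults": 10}
        if next_token:
            kwargs["NextToken"] = next_token
        response = query_client.list_scheduled_queries(**kwargs)
        for scheduled_query in response["ScheduledQueries"]:
            print(scheduled_query["Arn"], scheduled_query.get("State", ""))
        next_token = response.get("NextToken")
        if not next_token:
            break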
Java public void listScheduledQueries() { System.out.println("Listing Scheduled Query"); try { String nextToken = null; List<String> scheduledQueries = new ArrayList<>(); do { ListScheduledQueriesResult listScheduledQueriesResult = queryClient.listScheduledQueries(new ListScheduledQueriesRequest() .withNextToken(nextToken).withMaxResults(10)); List<ScheduledQuery> scheduledQueryList = listScheduledQueriesResult.getScheduledQueries(); printScheduledQuery(scheduledQueryList); nextToken = listScheduledQueriesResult.getNextToken(); } while (nextToken != null); } catch (Exception e) { System.out.println("List Scheduled Query failed: " + e); throw e; } } public void printScheduledQuery(List<ScheduledQuery> scheduledQueryList) { for (ScheduledQuery scheduledQuery: scheduledQueryList) { System.out.println(scheduledQuery.getArn()); } } Java
v2 Developer Guide public void listScheduledQueries() { System.out.println("Listing Scheduled Query"); try { String nextToken = null; do { ListScheduledQueriesResponse listScheduledQueriesResult = queryClient.listScheduledQueries(ListScheduledQueriesRequest.builder() .nextToken(nextToken).maxResults(10) .build()); List<ScheduledQuery> scheduledQueryList = listScheduledQueriesResult.scheduledQueries(); printScheduledQuery(scheduledQueryList); nextToken = listScheduledQueriesResult.nextToken(); } while (nextToken != null); } catch (Exception e) { System.out.println("List Scheduled Query failed: " + e); throw e; } } public void printScheduledQuery(List<ScheduledQuery> scheduledQueryList) { for (ScheduledQuery scheduledQuery: scheduledQueryList) { System.out.println(scheduledQuery.arn()); } } Go func (timestreamBuilder TimestreamBuilder) ListScheduledQueries() ([]*timestreamquery.ScheduledQuery, error) { var nextToken *string = nil var scheduledQueries []*timestreamquery.ScheduledQuery for ok := true; ok; ok = nextToken != nil { List scheduled query 283 Amazon Timestream Developer Guide listScheduledQueriesInput := ×treamquery.ListScheduledQueriesInput{ MaxResults: aws.Int64(15), } if nextToken != nil { listScheduledQueriesInput.NextToken = aws.String(*nextToken) } listScheduledQueriesOutput, err := timestreamBuilder.QuerySvc.ListScheduledQueries(listScheduledQueriesInput) if err != nil { fmt.Printf("Error: %s", err.Error()) return nil, err } scheduledQueries = append(scheduledQueries, listScheduledQueriesOutput.ScheduledQueries...) nextToken = listScheduledQueriesOutput.NextToken } return scheduledQueries, nil } Python def list_scheduled_queries(self): print("\nListing Scheduled Queries") try: response = self.query_client.list_scheduled_queries(MaxResults=10) self.print_scheduled_queries(response['ScheduledQueries']) next_token = response.get('NextToken', None) while next_token: response = self.query_client.list_scheduled_queries(NextToken=next_token, MaxResults=10) self.print_scheduled_queries(response['ScheduledQueries']) next_token = response.get('NextToken', None) except Exception as err: print("List scheduled queries failed:", err) raise err @staticmethod def print_scheduled_queries(scheduled_queries): for scheduled_query in scheduled_queries: print(scheduled_query['Arn']) List scheduled query 284 Amazon Timestream Node.js Developer Guide The following snippet uses the AWS SDK for JavaScript V2 style. It is based on the sample application at Node.js sample Amazon Timestream for LiveAnalytics application on GitHub. 
async function listScheduledQueries() { console.log("Listing Scheduled Query"); try { var nextToken = null; do { var params = { MaxResults: 10, NextToken: nextToken } var data = await queryClient.listScheduledQueries(params).promise(); var scheduledQueryList = data.ScheduledQueries; printScheduledQuery(scheduledQueryList); nextToken = data.NextToken; } while (nextToken != null); } catch (err) { console.log("List Scheduled Query failed: ", err); throw err; } } async function printScheduledQuery(scheduledQueryList) { scheduledQueryList.forEach(element => console.log(element.Arn)); } .NET private async Task ListScheduledQueries() { try { Console.WriteLine("Listing Scheduled Query"); string nextToken; do { ListScheduledQueriesResponse response = await _amazonTimestreamQuery.ListScheduledQueriesAsync(new ListScheduledQueriesRequest()); List scheduled query 285 Amazon Timestream Developer Guide foreach (var scheduledQuery in response.ScheduledQueries) { Console.WriteLine($"{scheduledQuery.Arn}"); } nextToken = response.NextToken; } while (nextToken != null); } catch (Exception e) { Console.WriteLine($"List Scheduled Query failed: {e}"); throw; } } Describe scheduled query You can use the following code snippets to describe a scheduled query. Java public void describeScheduledQueries(String scheduledQueryArn) { System.out.println("Describing Scheduled Query"); try { DescribeScheduledQueryResult describeScheduledQueryResult = queryClient.describeScheduledQuery(new DescribeScheduledQueryRequest().withScheduledQueryArn(scheduledQueryArn)); System.out.println(describeScheduledQueryResult); } catch (ResourceNotFoundException e) { System.out.println("Scheduled Query doesn't exist"); throw e; } catch (Exception e) { System.out.println("Describe Scheduled Query failed: " + e); throw e; } } Java v2 public void describeScheduledQueries(String scheduledQueryArn) { Describe scheduled query 286 Amazon Timestream Developer Guide System.out.println("Describing Scheduled Query"); try { DescribeScheduledQueryResponse describeScheduledQueryResult = queryClient.describeScheduledQuery(DescribeScheduledQueryRequest.builder() .scheduledQueryArn(scheduledQueryArn) .build()); System.out.println(describeScheduledQueryResult); } catch (ResourceNotFoundException e) { System.out.println("Scheduled Query doesn't exist"); throw e; } catch (Exception e) { System.out.println("Describe Scheduled Query failed: " + e); throw e; } } Go func (timestreamBuilder TimestreamBuilder) DescribeScheduledQuery(scheduledQueryArn string) error { describeScheduledQueryInput := ×treamquery.DescribeScheduledQueryInput{ ScheduledQueryArn: aws.String(scheduledQueryArn), } describeScheduledQueryOutput, err := timestreamBuilder.QuerySvc.DescribeScheduledQuery(describeScheduledQueryInput) if err != nil { if aerr, ok := err.(awserr.Error); ok { switch aerr.Code() { case timestreamquery.ErrCodeResourceNotFoundException: fmt.Println(timestreamquery.ErrCodeResourceNotFoundException, aerr.Error()) default: fmt.Printf("Error: %s", err.Error()) } } else { fmt.Printf("Error: %s", aerr.Error()) } return err Describe scheduled query 287 Amazon Timestream } else { Developer Guide fmt.Println("DescribeScheduledQuery is successful, below is the output:") fmt.Println(describeScheduledQueryOutput.ScheduledQuery) return nil } } Python def describe_scheduled_query(self, scheduled_query_arn): print("\nDescribing Scheduled Query") try: response = self.query_client.describe_scheduled_query(ScheduledQueryArn=scheduled_query_arn) if 'ScheduledQuery' in response: response = 
response['ScheduledQuery'] for key in response: print("{} :{}".format(key, response[key])) except self.query_client.exceptions.ResourceNotFoundException as err: print("Scheduled Query doesn't exist") raise err except Exception as err: print("Scheduled Query describe failed:", err) raise err Node.js The following snippet uses the AWS SDK for JavaScript V2 style. It is based on the sample application at Node.js sample Amazon Timestream for LiveAnalytics application on GitHub. async function describeScheduledQuery(scheduledQueryArn) { console.log("Describing Scheduled Query"); var params = { ScheduledQueryArn: scheduledQueryArn } try { const data = await queryClient.describeScheduledQuery(params).promise(); console.log(data.ScheduledQuery); } catch (err) { console.log("Describe Scheduled Query failed: ", err); throw err; } } .NET private async Task DescribeScheduledQuery(string scheduledQueryArn) { try { Console.WriteLine("Describing Scheduled Query"); DescribeScheduledQueryResponse response = await _amazonTimestreamQuery.DescribeScheduledQueryAsync( new DescribeScheduledQueryRequest() { ScheduledQueryArn = scheduledQueryArn }); Console.WriteLine($"{JsonConvert.SerializeObject(response.ScheduledQuery)}"); } catch (ResourceNotFoundException e) { Console.WriteLine($"Scheduled Query doesn't exist: {e}"); throw; } catch (Exception e) { Console.WriteLine($"Describe Scheduled Query failed: {e}");
throw; } } Execute scheduled query You can use the following code snippets to run a scheduled query. Java public void executeScheduledQueries(String scheduledQueryArn, Date invocationTime) { System.out.println("Executing Scheduled Query"); try { ExecuteScheduledQueryResult executeScheduledQueryResult = queryClient.executeScheduledQuery(new ExecuteScheduledQueryRequest() .withScheduledQueryArn(scheduledQueryArn) .withInvocationTime(invocationTime) ); Execute scheduled query 289 Amazon Timestream Developer Guide } catch (ResourceNotFoundException e) { System.out.println("Scheduled Query doesn't exist"); throw e; } catch (Exception e) { System.out.println("Execution Scheduled Query failed: " + e); throw e; } } Java v2 public void executeScheduledQuery(String scheduledQueryArn) { System.out.println("Executing Scheduled Query"); try { ExecuteScheduledQueryResponse executeScheduledQueryResult = queryClient.executeScheduledQuery(ExecuteScheduledQueryRequest.builder() .scheduledQueryArn(scheduledQueryArn) .invocationTime(Instant.now()) .build() ); System.out.println("Execute ScheduledQuery response code: " + executeScheduledQueryResult.sdkHttpResponse().statusCode()); } catch (ResourceNotFoundException e) { System.out.println("Scheduled Query doesn't exist"); throw e; } catch (Exception e) { System.out.println("Execution Scheduled Query failed: " + e); throw e; } } Go func (timestreamBuilder TimestreamBuilder) ExecuteScheduledQuery(scheduledQueryArn string, invocationTime time.Time) error { Execute scheduled query 290 Amazon Timestream Developer Guide executeScheduledQueryInput := ×treamquery.ExecuteScheduledQueryInput{ ScheduledQueryArn: aws.String(scheduledQueryArn), InvocationTime: aws.Time(invocationTime), } executeScheduledQueryOutput, err := timestreamBuilder.QuerySvc.ExecuteScheduledQuery(executeScheduledQueryInput) if err != nil { if aerr, ok := err.(awserr.Error); ok { switch aerr.Code() { case timestreamquery.ErrCodeResourceNotFoundException: fmt.Println(timestreamquery.ErrCodeResourceNotFoundException, aerr.Error()) default: fmt.Printf("Error: %s", aerr.Error()) } } else { fmt.Printf("Error: %s", err.Error()) } return err } else { fmt.Println("ExecuteScheduledQuery is successful, below is the output:") fmt.Println(executeScheduledQueryOutput.GoString()) return nil } } Python def execute_scheduled_query(self, scheduled_query_arn, invocation_time): print("\nExecuting Scheduled Query") try: self.query_client.execute_scheduled_query(ScheduledQueryArn=scheduled_query_arn, InvocationTime=invocation_time) print("Successfully started executing scheduled query") except self.query_client.exceptions.ResourceNotFoundException as err: print("Scheduled Query doesn't exist") raise err except Exception as err: print("Scheduled Query execution failed:", err) raise err Execute scheduled query 291 Amazon Timestream Node.js Developer Guide The following snippet uses the AWS SDK for JavaScript V2 style. It is based on the sample application at Node.js sample Amazon Timestream for LiveAnalytics application on GitHub. 
async function executeScheduledQuery(scheduledQueryArn, invocationTime) { console.log("Executing Scheduled Query"); var params = { ScheduledQueryArn: scheduledQueryArn, InvocationTime: invocationTime } try { await queryClient.executeScheduledQuery(params).promise(); } catch (err) { console.log("Execute Scheduled Query failed: ", err); throw err; } } .NET private async Task ExecuteScheduledQuery(string scheduledQueryArn, DateTime invocationTime) { try { Console.WriteLine("Running Scheduled Query"); await _amazonTimestreamQuery.ExecuteScheduledQueryAsync(new ExecuteScheduledQueryRequest() { ScheduledQueryArn = scheduledQueryArn, InvocationTime = invocationTime }); Console.WriteLine("Successfully started manual run of scheduled query"); } catch (ResourceNotFoundException e) { Console.WriteLine($"Scheduled Query doesn't exist: {e}"); throw; } catch (Exception e) { Console.WriteLine($"Execute Scheduled Query failed: {e}"); Execute scheduled query 292 Amazon Timestream throw; } } Developer Guide Update scheduled query You can use the following code snippets to update a scheduled query. Java public void updateScheduledQueries(String scheduledQueryArn) { System.out.println("Updating Scheduled Query"); try { queryClient.updateScheduledQuery(new UpdateScheduledQueryRequest() .withScheduledQueryArn(scheduledQueryArn) .withState(ScheduledQueryState.DISABLED)); System.out.println("Successfully update scheduled query state"); } catch (ResourceNotFoundException e) { System.out.println("Scheduled Query doesn't exist"); throw e; } catch (Exception e) { System.out.println("Execution Scheduled Query failed: " + e); throw e; } } Java v2 public void updateScheduledQuery(String scheduledQueryArn, ScheduledQueryState state) { System.out.println("Updating Scheduled Query"); try { queryClient.updateScheduledQuery(UpdateScheduledQueryRequest.builder() .scheduledQueryArn(scheduledQueryArn) .state(state) .build()); System.out.println("Successfully update scheduled query state"); } catch (ResourceNotFoundException e) { System.out.println("Scheduled Query doesn't exist"); Update scheduled query 293 Amazon Timestream Developer Guide throw e; } catch (Exception e) { System.out.println("Execution Scheduled Query failed: " + e); throw e; } } Go func (timestreamBuilder TimestreamBuilder) UpdateScheduledQuery(scheduledQueryArn string) error { updateScheduledQueryInput := ×treamquery.UpdateScheduledQueryInput{ ScheduledQueryArn: aws.String(scheduledQueryArn), State: aws.String(timestreamquery.ScheduledQueryStateDisabled), } _, err := timestreamBuilder.QuerySvc.UpdateScheduledQuery(updateScheduledQueryInput) if err != nil { if aerr, ok := err.(awserr.Error); ok { switch aerr.Code() { case timestreamquery.ErrCodeResourceNotFoundException: fmt.Println(timestreamquery.ErrCodeResourceNotFoundException, aerr.Error()) default: fmt.Printf("Error: %s", aerr.Error()) } } else { fmt.Printf("Error: %s", err.Error()) } return err } else { fmt.Println("UpdateScheduledQuery is successful") return nil } } Python def update_scheduled_query(self, scheduled_query_arn, state): print("\nUpdating Scheduled Query") Update scheduled query 294 Amazon Timestream try: Developer Guide self.query_client.update_scheduled_query(ScheduledQueryArn=scheduled_query_arn, State=state) print("Successfully update scheduled query state to", state) except self.query_client.exceptions.ResourceNotFoundException as err: print("Scheduled Query doesn't exist") raise err except Exception as err: print("Scheduled Query deletion failed:", err) raise err Node.js The 
following snippet uses the AWS SDK for JavaScript V2 style. It is based on the sample application at Node.js sample Amazon Timestream for LiveAnalytics application on GitHub. async function updateScheduledQueries(scheduledQueryArn) { console.log("Updating Scheduled Query"); var params = { ScheduledQueryArn: scheduledQueryArn, State: "DISABLED" } try { await queryClient.updateScheduledQuery(params).promise(); console.log("Successfully update scheduled query state"); } catch (err) { console.log("Update Scheduled Query failed: ", err); throw err; } } .NET private async Task UpdateScheduledQuery(string scheduledQueryArn, ScheduledQueryState state) { try { Console.WriteLine("Updating Scheduled Query"); await _amazonTimestreamQuery.UpdateScheduledQueryAsync(new UpdateScheduledQueryRequest() { Update scheduled query 295 Amazon
Timestream Developer Guide ScheduledQueryArn = scheduledQueryArn, State = state }); Console.WriteLine("Successfully update scheduled query state"); } catch (ResourceNotFoundException e) { Console.WriteLine($"Scheduled Query doesn't exist: {e}"); throw; } catch (Exception e) { Console.WriteLine($"Update Scheduled Query failed: {e}"); throw; } } Delete scheduled query You can use the following code snippets to delete a scheduled query. Java public void deleteScheduledQuery(String scheduledQueryArn) { System.out.println("Deleting Scheduled Query"); try { queryClient.deleteScheduledQuery(new DeleteScheduledQueryRequest().withScheduledQueryArn(scheduledQueryArn)); System.out.println("Successfully deleted scheduled query"); } catch (Exception e) { System.out.println("Scheduled Query deletion failed: " + e); } } Java v2 public void deleteScheduledQuery(String scheduledQueryArn) { System.out.println("Deleting Scheduled Query"); try { Delete scheduled query 296 Amazon Timestream Developer Guide queryClient.deleteScheduledQuery(DeleteScheduledQueryRequest.builder() .scheduledQueryArn(scheduledQueryArn).build()); System.out.println("Successfully deleted scheduled query"); } catch (Exception e) { System.out.println("Scheduled Query deletion failed: " + e); } } Go func (timestreamBuilder TimestreamBuilder) DeleteScheduledQuery(scheduledQueryArn string) error { deleteScheduledQueryInput := ×treamquery.DeleteScheduledQueryInput{ ScheduledQueryArn: aws.String(scheduledQueryArn), } _, err := timestreamBuilder.QuerySvc.DeleteScheduledQuery(deleteScheduledQueryInput) if err != nil { fmt.Println("Error:") if aerr, ok := err.(awserr.Error); ok { switch aerr.Code() { case timestreamquery.ErrCodeResourceNotFoundException: fmt.Println(timestreamquery.ErrCodeResourceNotFoundException, aerr.Error()) default: fmt.Printf("Error: %s", aerr.Error()) } } else { fmt.Printf("Error: %s", err.Error()) } return err } else { fmt.Println("DeleteScheduledQuery is successful") return nil } } Python def delete_scheduled_query(self, scheduled_query_arn): Delete scheduled query 297 Amazon Timestream Developer Guide print("\nDeleting Scheduled Query") try: self.query_client.delete_scheduled_query(ScheduledQueryArn=scheduled_query_arn) print("Successfully deleted scheduled query :", scheduled_query_arn) except Exception as err: print("Scheduled Query deletion failed:", err) raise err Node.js The following snippet uses the AWS SDK for JavaScript V2 style. It is based on the sample application at Node.js sample Amazon Timestream for LiveAnalytics application on GitHub. 
async function deleteScheduleQuery(scheduledQueryArn) { console.log("Deleting Scheduled Query"); const params = { ScheduledQueryArn: scheduledQueryArn } try { await queryClient.deleteScheduledQuery(params).promise(); console.log("Successfully deleted scheduled query"); } catch (err) { console.log("Scheduled Query deletion failed: ", err); } } .NET private async Task DeleteScheduledQuery(string scheduledQueryArn) { try { Console.WriteLine("Deleting Scheduled Query"); await _amazonTimestreamQuery.DeleteScheduledQueryAsync(new DeleteScheduledQueryRequest() { ScheduledQueryArn = scheduledQueryArn }); Console.WriteLine($"Successfully deleted scheduled query : {scheduledQueryArn}"); } catch (Exception e) Delete scheduled query 298 Amazon Timestream { Developer Guide Console.WriteLine($"Scheduled Query deletion failed: {e}"); throw; } } Using batch load in Timestream for LiveAnalytics With batch load for Amazon Timestream for LiveAnalytics, you can ingest CSV files stored in Amazon S3 into Timestream in batches. With this new functionality, you can have your data in Timestream for LiveAnalytics without having to rely on other tools or write custom code. You can use batch load for backfilling data with flexible wait times, such as data that isn't immediately required for querying or analysis. You can create batch load tasks by using the AWS Management Console, the AWS CLI, and the AWS SDKs. For more information, see Using batch load with the console, Using batch load with the AWS CLI, and Using batch load with the AWS SDKs. In addition to batch load, you can write multiple records at the same time with the WriteRecords API operation. For guidance about which to use, see Choosing between the WriteRecords API operation and batch load. Topics • Batch load concepts in Timestream • Batch load prerequisites • Batch load best practices • Preparing a batch load data file • Data model mappings for batch load • Using batch load with the console • Using batch load with the AWS CLI • Using batch load with the AWS SDKs • Using batch load error reports Batch load concepts in Timestream Review the following concepts to better understand batch load functionality. Using batch load 299 Amazon Timestream Developer Guide Batch load task – The task that defines your source data and destination in Amazon Timestream. You specify additional configuration such as the data model when you create the batch load task. You can create batch load tasks through the AWS Management Console, the AWS CLI, and the AWS SDKs. Import destination – The destination database and table in Timestream. For information about creating databases and tables, see Create a database and Create a table. Data source – The source CSV file that is stored in an S3 bucket. For information about preparing the data file, see Preparing a batch load data file. For information about S3 pricing, see Amazon S3 pricing. Batch load error report – A report that stores information about the errors of a batch load task. You define the S3 location for batch load error reports as part of a batch load task. For information about information in the reports, see Using batch load error reports. Data model mapping – A batch load mapping for time, dimensions, and measures that is from a data source in an S3 location to a target Timestream for LiveAnalytics table. For more information, see Data model mappings for batch load. Batch load prerequisites This is a list of prerequisites for using batch load. For best practices, see Batch load best practices. 
• Batch load source data is stored in Amazon S3 in CSV
format with headers. • For each Amazon S3 source bucket, you must have the following permissions in an attached policy: "s3:GetObject", "s3:GetBucketAcl", "s3:ListBucket" Similarly, for each Amazon S3 output bucket where reports are written, you must have the following permissions in an attached policy: "s3:PutObject", "s3:GetBucketAcl" For example: { "Version": "2012-10-17", "Statement": [ { "Action": [ "s3:GetObject", "s3:GetBucketAcl", "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::amzn-s3-demo-source-bucket1", "arn:aws:s3:::amzn-s3-demo-source-bucket2" ], "Effect": "Allow" }, { "Action": [ "s3:PutObject", "s3:GetBucketAcl" ], "Resource": [ "arn:aws:s3:::amzn-s3-demo-destination-bucket" ], "Effect": "Allow" } ] } • Timestream for LiveAnalytics parses the CSV by mapping information that's provided in the data model to CSV headers. The data must have a column that represents the timestamp, at least one dimension column, and at least one measure column. • The S3 buckets used with batch load must be in the same Region and from the same account as the Timestream for LiveAnalytics table that is used in batch load. • The timestamp column must be a long data type that represents the time since the Unix epoch. For example, the timestamp 2021-03-25T08:45:21Z would be represented as 1616661921. Timestream supports seconds, milliseconds, microseconds, and nanoseconds for the timestamp precision. When using the query language, you can convert between formats with functions such as to_unixtime. For more information, see Date / time functions. • Timestream supports the string data type for dimension values. It supports long, double, string, and boolean data types for measure columns. For batch load limits and quotas, see Batch load. Batch load best practices Batch load works best (high throughput) when adhering to the following conditions and recommendations: 1. CSV files submitted for ingestion are small, specifically with a file size of 100 MB–1 GB, to improve parallelism and speed of ingestion. 2. Avoid simultaneously ingesting data into the same table (e.g. using the WriteRecords API operation, or a scheduled query) when the batch load is in progress. This might lead to throttles, and the batch load task will fail. 3. Do not add, modify, or remove files from the S3 bucket used in batch load while the batch load task is running. 4. Do not delete or revoke permissions from tables, source S3 buckets, or report S3 buckets that have scheduled or in-progress batch load tasks. 5. When ingesting data with a high cardinality set of dimension values, follow guidance at Recommendations for partitioning multi-measure records. 6. Make sure you test the data for correctness by submitting a small file. You will be charged for any data submitted to batch load regardless of correctness. For more information about pricing, see Amazon Timestream pricing. 7.
Do not resume a batch load task unless ActiveMagneticStorePartitions are below 250. The job may be throttled and fail. Submitting multiple jobs at the same time for the same database should reduce the number. The following are console best practices: 1. Use the builder only for simpler data modeling that uses only one measure name for multi-measure records. 2. For more complex data modeling, use JSON. For example, use JSON when you use multiple measure names when using multi-measure records. For additional Timestream for LiveAnalytics best practices, see Best practices. Preparing a batch load data file A source data file has delimiter-separated values. The more specific term, comma-separated values (CSV), is used generically. Valid column separators include commas and pipes. Records are separated by new lines. Files must be stored in Amazon S3. When you create a new batch load task, the location of the source data is specified
by an ARN for the file. A file contains headers. One column represents the timestamp. At least one other column represents a measure. The S3 buckets used with batch load must be in the same Region as the Timestream for LiveAnalytics table that is used in batch load. Don't add or remove files from the S3 bucket used in batch load after the batch load task has been submitted. For information about working with S3 buckets, see Getting started with Amazon S3. Note CSV files that are generated by some applications such as Excel might contain a byte order mark (BOM) that conflicts with the expected encoding. Timestream for LiveAnalytics batch load tasks that reference a CSV file with a BOM throw an error when they're processed programmatically. To avoid this, you can remove the BOM, which is an invisible character. For example, you can save the file from an application such as Notepad++ that lets you specify a new encoding. You can also use a programmatic option that reads the first line, removes the character from the line, and writes the new value over the first line in the file. When saving from Excel, there are multiple CSV options. Saving with a different CSV option might prevent the described issue. But you should check the result because a change in encoding can affect some characters. CSV format parameters You use escape characters when you're representing a value that is otherwise reserved by the format parameters. For example, if the quote character is a double quote, to represent a double quote in the data, place the escape character before the double quote. For information about when to specify these when creating a batch load task, see Create a batch load task. Preparing a batch load data file 303 Amazon Timestream Developer Guide Parameter Column separator Escape character Quote character Null value Trim white space Options (Comma (',') | Pipe ('|') | Semicolon (';') | Tab ('/ t') | Blank space (' ')) none Console: (Double quote (") | Single quote (')) Blank space (' ') Console: (No | Yes) Data model mappings for batch load The following discusses the schema for data model mappings and gives and example. Data model mappings schema The CreateBatchLoadTask request syntax and a BatchLoadTaskDescription object returned by a call to DescribeBatchLoadTask include a DataModelConfiguration object that includes the DataModel for batch loading. The DataModel defines mappings from source data that's stored in CSV format in an S3 location to a target Timestream for LiveAnalytics database and table. The TimeColumn field indicates the source data's location for the value to be mapped to the destination table's time column in Timestream for LiveAnalytics. The TimeUnit specifies the unit for the TimeColumn, and can be one of MILLISECONDS, SECONDS, MICROSECONDS, or NANOSECONDS. There are also mappings for dimensions and measures. Dimension mappings are composed of source columns and target fields. For more information, see DimensionMapping. The mappings for measures have two options, MixedMeasureMappings and MultiMeasureMappings. To summarize, a DataModel contains mappings from a data source in an S3 location to a target Timestream for LiveAnalytics table for the following. • Time • Dimensions Data model mappings 304 Amazon Timestream • Measures Developer Guide If possible, we recommend that you map measure data to multi-measure records in Timestream for LiveAnalytics. For information about the benefits of multi-measure records, see Multi-measure records. 
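Before looking at the measure mapping options in more detail, the following sketch shows how a data model of this shape can be supplied programmatically when you create a batch load task. It is a minimal example that uses the AWS SDK for Python (Boto3) to call CreateBatchLoadTask; the Region, database, table, and bucket names are placeholders, and the source column names are assumed to match the sample CSV described in the example that follows.

import boto3

# Batch load is part of the Timestream for LiveAnalytics write API.
write_client = boto3.client("timestream-write", region_name="us-east-1")

response = write_client.create_batch_load_task(
    TargetDatabaseName="BatchLoadExampleDatabase",        # placeholder database
    TargetTableName="BatchLoadExampleTable",              # placeholder table
    DataSourceConfiguration={
        "DataSourceS3Configuration": {
            "BucketName": "amzn-s3-demo-source-bucket",   # placeholder source bucket
            "ObjectKeyPrefix": "sample.csv",
        },
        "DataFormat": "CSV",
    },
    ReportConfiguration={
        "ReportS3Configuration": {
            "BucketName": "amzn-s3-demo-destination-bucket",  # placeholder report bucket
            "EncryptionOption": "SSE_S3",
        }
    },
    DataModelConfiguration={
        "DataModel": {
            # The CSV "time" column holds the time since the Unix epoch in milliseconds.
            "TimeColumn": "time",
            "TimeUnit": "MILLISECONDS",
            "DimensionMappings": [
                {"SourceColumn": "region", "DestinationColumn": "region"},
                {"SourceColumn": "location", "DestinationColumn": "location"},
                {"SourceColumn": "hostname", "DestinationColumn": "hostname"},
            ],
            # The measure name is taken from the CSV "measure_name" column.
            "MeasureNameColumn": "measure_name",
            "MultiMeasureMappings": {
                "MultiMeasureAttributeMappings": [
                    {
                        "SourceColumn": "memory_utilization",
                        "TargetMultiMeasureAttributeName": "memory_utilization",
                        "MeasureValueType": "DOUBLE",
                    },
                    {
                        "SourceColumn": "cpu_utilization",
                        "TargetMultiMeasureAttributeName": "cpu_utilization",
                        "MeasureValueType": "DOUBLE",
                    },
                ]
            },
        }
    },
)
print("Created batch load task:", response["TaskId"])

The same structure can be expressed as the cli-input-json value for the AWS CLI or pasted into the JSON editor in the console.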
If multiple measures in the source data are stored in one row, you can map those multiple measures to multi-measure records in Timestream for LiveAnalytics using MultiMeasureMappings. If there are values that must map to a single-measure record, you can use MixedMeasureMappings. MixedMeasureMappings and MultiMeasureMappings both include MultiMeasureAttributeMappings. Multi-measure records are supported regardless of whether single-measure records are needed. If only multi-measure target records are needed in Timestream for LiveAnalytics, you can define measure mappings in the following structure. CreateBatchLoadTask MeasureNameColumn MultiMeasureMappings TargetMultiMeasureName MultiMeasureAttributeMappings array Note We recommend using MultiMeasureMappings whenever possible. If single-measure target records are needed in Timestream for LiveAnalytics, you can define measure mappings in the following structure. CreateBatchLoadTask MeasureNameColumn MixedMeasureMappings array MixedMeasureMapping MeasureName MeasureValueType SourceColumn TargetMeasureName Data model mappings 305 Amazon Timestream Developer Guide MultiMeasureAttributeMappings array When you use MultiMeasureMappings, the MultiMeasureAttributeMappings array is always required. When you use the MixedMeasureMappings array, if the MeasureValueType is MULTI for a given MixedMeasureMapping, MultiMeasureAttributeMappings is required for that MixedMeasureMapping. Otherwise, MeasureValueType indicates the measure type for the single-measure record. Either way, there is an array of MultiMeasureAttributeMapping available. You define the mappings to multi-measure records in each MultiMeasureAttributeMapping as follows: SourceColumn The column in the source data that is located in Amazon S3. TargetMultiMeasureAttributeName The name of the target multi-measure name in the destination table. This input is required when MeasureNameColumn is not provided. If MeasureNameColumn is provided, the value from that column is used as
always required. When you use the MixedMeasureMappings array, if the MeasureValueType is MULTI for a given MixedMeasureMapping, MultiMeasureAttributeMappings is required for that MixedMeasureMapping. Otherwise, MeasureValueType indicates the measure type for the single-measure record. Either way, there is an array of MultiMeasureAttributeMapping available. You define the mappings to multi-measure records in each MultiMeasureAttributeMapping as follows: SourceColumn The column in the source data that is located in Amazon S3. TargetMultiMeasureAttributeName The name of the target multi-measure name in the destination table. This input is required when MeasureNameColumn is not provided. If MeasureNameColumn is provided, the value from that column is used as the multi-measure name. MeasureValueType One of DOUBLE, BIGINT BOOLEAN, VARCHAR, or TIMESTAMP. Data model mappings with MultiMeasureMappings example This example demonstrates mapping to multi-measure records, the preferred approach, which store each measure value in a dedicated column. You can download a sample CSV at sample CSV. The sample has the following headings to map to a target column in a Timestream for LiveAnalytics table. • time • measure_name • region • location • hostname • memory_utilization • cpu_utilization Data model mappings 306 Amazon Timestream Developer Guide Identify the time and measure_name columns in the CSV file. In this case these map directly to the Timestream for LiveAnalytics table columns of the same names. • time maps to time • measure_name maps to measure_name (or your chosen value) When using the API, you specify time in the TimeColumn field and a supported time unit value such as MILLISECONDS in the TimeUnit field. These correspond to Source columnn name and Timestamp time input in the console. You can group or partition records using measure_name which is defined with the MeasureNameColumn key. In the sample, region, location, and hostname are dimensions. Dimensions are mapped in an array of DimensionMapping objects. For measures, the value TargetMultiMeasureAttributeName will become a column in the Timestream for LiveAnalytics table. You can keep the same name such as in this example. Or you can specify a new one. MeasureValueType is one of DOUBLE, BIGINT, BOOLEAN, VARCHAR, or TIMESTAMP. { "TimeColumn": "time", "TimeUnit": "MILLISECONDS", "DimensionMappings": [ { "SourceColumn": "region", "DestinationColumn": "region" }, { "SourceColumn": "location", "DestinationColumn": "location" }, { "SourceColumn": "hostname", "DestinationColumn": "hostname" } ], "MeasureNameColumn": "measure_name", "MultiMeasureMappings": { "MultiMeasureAttributeMappings": [ { "SourceColumn": "memory_utilization", Data model mappings 307 Amazon Timestream Developer Guide "TargetMultiMeasureAttributeName": "memory_utilization", "MeasureValueType": "DOUBLE" }, { "SourceColumn": "cpu_utilization", "TargetMultiMeasureAttributeName": "cpu_utilization", "MeasureValueType": "DOUBLE" } ] } } Data model mappings with MixedMeasureMappings example We recommend that you only use this approach when you need to map to single-measure records in Timestream for LiveAnalytics. Using batch load with the console Following are steps for using batch load with the AWS Management Console. You can download a sample CSV at sample CSV. 
Topics • Access batch load • Create a batch load task • Resume a batch load task • Using the visual builder Access batch load Follow these steps to access batch load using the AWS Management Console. 1. Open the Amazon Timestream console. 2. In the navigation pane, choose Management Tools, and then choose Batch load tasks. 3. From here, you can view the list of batch load tasks and drill into a given task for more details. You can also create and resume tasks. Create a batch load task Follow these steps to create a batch load task using the AWS Management Console. 1. Open the Amazon Timestream console. 2. In the navigation pane, choose Management Tools, and then choose Batch load tasks. 3. Choose Create batch load task. 4. In Import destination, choose the following. • Target database – Select the name of the database created in Create a database. • Target table – Select the name of the table created in Create a table. If necessary, you can add a table from this panel with the Create new table button. 5. From Data source S3 location in Data source, select the S3 bucket where the source data is stored. Use the Browse S3 button to view S3 resources the active AWS account has access to, or enter the S3 location URL. The data source must be located in the same Region. 6. In File
format settings (expandable section), you can use the default settings to parse input data. You can also choose Advanced settings. From there you can choose CSV format parameters, and select parameters to parse input data. For information about these parameters, see CSV format parameters. 7. From Configure data model mapping, configure the data model. For additional data model guidance, see Data model mappings for batch load Using batch load with the console 309 Amazon Timestream Developer Guide • From Data model mapping, choose Mapping configuration input, and choose one of the following. • Visual builder – To map data visually, choose TargetMultiMeasureName or MeasureNameColumn. Then from Visual builder, map the columns. Visual builder automatically detects and loads the source column headers from the data source file when a single CSV file is selected as the data source. Choose the attribute and data type to create your mapping. For information about using the visual builder, see Using the visual builder. • JSON editor – A freeform JSON editor for configuring your data model. Choose this option if you're familiar with Timestream for LiveAnalytics and want to build advanced data model mappings. • JSON file from S3 – Select a JSON model file you have stored in S3. Choose this option if you've already configured a data model and want to reuse it for additional batch loads. 8. From Error logs S3 location in Error log report, select the S3 location that will be used to report errors. For information about how to use this report, see Using batch load error reports. 9. For Encryption key type, choose one of the following. • Amazon S3-managed key (SSE-S3) – An encryption key that Amazon S3 creates, manages, and uses for you. • AWS KMS key (SSE-KMS) – An encryption key protected by AWS Key Management Service (AWS KMS). 10. Choose Next. 11. On the Review and create page, review the settings and edit as necessary. Note You can't change batch load task settings after the task has been created. Task completion times will vary based on the amount of data being imported. 12. Choose Create batch load task. Using batch load with the console 310 Amazon Timestream Resume a batch load task Developer Guide When you select a batch load task with a status of "Progress stopped" which is still resumable, you are prompted to resume the task. There is also a banner with a Resume task button when you view the details for those tasks. Resumable tasks have a "resume by" date. After that date expires, tasks cannot be resumed. Using the visual builder You can use the visual builder to map source data columns one or more CSV file(s) stored in an S3 bucket to destination columns in a Timestream for LiveAnalytics table. Note Your role will need the SelectObjectContent permission for the file. Without this, you will need to add and delete columns manually. Auto load source columns mode Timestream for LiveAnalytics can automatically scan the source CSV file for column names if you specify one bucket only. When there are no existing mappings, you can choose Import source columns. 1. With the Visual builder option selected from the Mapping configuration input settings, set the Timestamp time input. Milliseconds is the default setting. 2. Click the Load source columns button to import the column headers found in the source data file. The table will be populated with the source column header names from the data source file. 3. Choose the Target table column name, Timestream attribute type, and Data type for each source column. 
For details about these columns and possible values, see Mapping fields. 4. Use the drag-to-fill feature to set the value for multiple columns at once. Manually add source columns If you're using a bucket or CSV prefix and not a single CSV, you can add and delete column mappings from the visual editor with the Add column mapping and Delete column mapping buttons. There is also a button to reset mappings. Mapping fields • Source column name – The name of a column in the source file that represents a measure to import. Timestream for
LiveAnalytics can populate this value automatically when you use Import source columns. • Target table column name – Optional input that indicates the column name for the measure in the target table. • Timestream attribute type – The attribute type of the data in the specified source column such as DIMENSION. • TIMESTAMP – Specifies when a measure was collected. • MULTI – Multiple measures are represented. • DIMENSION – Time series metadata. • MEASURE_NAME – For single-measure records, this is the measure name. • Data type – The type of Timestream column, such as BOOLEAN. • BIGINT – A 64-bit integer. • BOOLEAN – The two truth values of logic—true and false. • DOUBLE – 64-bit variable-precision number. • TIMESTAMP – An instance in time that uses nanosecond precision time in UTC, and tracks the time since the Unix epoch. Using batch load with the AWS CLI Setup To start using batch load, go through the following steps. 1. Install the AWS CLI using the instructions at Accessing Amazon Timestream for LiveAnalytics using the AWS CLI. 2. Run the following command to verify that the Timestream CLI commands have been updated. Verify that create-batch-load-task is in the list. Using batch load with the CLI 312 Amazon Timestream Developer Guide aws timestream-write help 3. Prepare a data source using the instructions at Preparing a batch load data file. 4. Create a database and table using the instructions at Accessing Amazon Timestream for LiveAnalytics using the AWS CLI. 5. Create an S3 bucket for report output. The bucket must be in the same Region. For more information about buckets, see Creating, configuring, and working with Amazon S3 buckets. 6. Create a batch load task. For steps, see Create a batch load task. 7. Confirm the status of the task. For steps, see Describe batch load task. Create a batch load task You can create a batch load task with the create-batch-load-task command. When you create a batch load task using the CLI, you can use a JSON parameter, cli-input-json, which lets you aggregate the parameters into a single JSON fragment. You can also break those details apart using several other parameters including data-model-configuration, data-source- configuration, report-configuration, target-database-name, and target-table- name. For an example, see Create batch load task example Describe batch load task You can retrieve a batch load task description as follows. aws timestream-write describe-batch-load-task --task-id <value> Following is an example response. 
{ "BatchLoadTaskDescription": { "TaskId": "<TaskId>", "DataSourceConfiguration": { "DataSourceS3Configuration": { "BucketName": "test-batch-load-west-2", "ObjectKeyPrefix": "sample.csv" }, "CsvConfiguration": {}, "DataFormat": "CSV" Using batch load with the CLI 313 Developer Guide Amazon Timestream }, "ProgressReport": { "RecordsProcessed": 2, "RecordsIngested": 0, "FileParseFailures": 0, "RecordIngestionFailures": 2, "FileFailures": 0, "BytesIngested": 119 }, "ReportConfiguration": { "ReportS3Configuration": { "BucketName": "test-batch-load-west-2", "ObjectKeyPrefix": "<ObjectKeyPrefix>", "EncryptionOption": "SSE_S3" } }, "DataModelConfiguration": { "DataModel": { "TimeColumn": "timestamp", "TimeUnit": "SECONDS", "DimensionMappings": [ { "SourceColumn": "vehicle", "DestinationColumn": "vehicle" }, { "SourceColumn": "registration", "DestinationColumn": "license" } ], "MultiMeasureMappings": { "TargetMultiMeasureName": "test", "MultiMeasureAttributeMappings": [ { "SourceColumn": "wgt", "TargetMultiMeasureAttributeName": "weight", "MeasureValueType": "DOUBLE" }, { "SourceColumn": "spd", "TargetMultiMeasureAttributeName": "speed", "MeasureValueType": "DOUBLE" }, { Using batch load with the CLI 314 Amazon Timestream Developer Guide "SourceColumn": "fuel", "TargetMultiMeasureAttributeName": "fuel", "MeasureValueType": "DOUBLE" }, { "SourceColumn": "miles", "TargetMultiMeasureAttributeName": "miles", "MeasureValueType": "DOUBLE" } ] } } }, "TargetDatabaseName": "BatchLoadExampleDatabase", "TargetTableName": "BatchLoadExampleTable", "TaskStatus": "FAILED", "RecordVersion": 1, "CreationTime": 1677167593.266, "LastUpdatedTime": 1677167602.38 } } List batch load tasks You can list batch load tasks as follows. aws timestream-write list-batch-load-tasks An output appears as follows. { "BatchLoadTasks": [ { "TaskId": "<TaskId>", "TaskStatus": "FAILED", "DatabaseName": "BatchLoadExampleDatabase", "TableName": "BatchLoadExampleTable", "CreationTime": 1677167593.266, "LastUpdatedTime": 1677167602.38 } ] } Using batch load with the CLI 315 Developer Guide Amazon Timestream Resume batch load task You can resume a batch load task as follows. aws timestream-write resume-batch-load-task --task-id <value> A response can indicate success or contain error information. Create batch load task example Example 1. Create a Timestream for LiveAnalytics database named BatchLoad and a table named BatchLoadTest. Verify and, if necessary, adjust the values for MemoryStoreRetentionPeriodInHours and MagneticStoreRetentionPeriodInDays. aws timestream-write create-database --database-name BatchLoad \ aws timestream-write create-table --database-name BatchLoad \ --table-name BatchLoadTest \ --retention-properties "{\"MemoryStoreRetentionPeriodInHours\": 12, \"MagneticStoreRetentionPeriodInDays\": 100}" 2. Using the console, create an S3 bucket and copy the sample.csv file to that location. You can download a sample CSV at sample CSV. 3. Using the console create an S3 bucket for Timestream for LiveAnalytics to write a report if the batch load task completes with errors. 4. Create a batch load task. Make sure to replace $INPUT_BUCKET and $REPORT_BUCKET with the buckets that you created in the preceding steps. 
aws timestream-write create-batch-load-task \ --data-model-configuration "{\ \"DataModel\": {\ \"TimeColumn\": \"timestamp\",\ \"TimeUnit\": \"SECONDS\",\ \"DimensionMappings\": [\ {\ \"SourceColumn\": \"vehicle\"\ },\ {\ \"SourceColumn\": \"registration\",\ Using batch load with the CLI 316 Amazon Timestream Developer Guide \"DestinationColumn\": \"license\"\ }\ ], \"MultiMeasureMappings\": {\ \"TargetMultiMeasureName\": \"mva_measure_name\",\ \"MultiMeasureAttributeMappings\": [\ {\ \"SourceColumn\": \"wgt\",\ \"TargetMultiMeasureAttributeName\": \"weight\",\ \"MeasureValueType\": \"DOUBLE\"\ },\ {\
\"SourceColumn\": \"spd\",\ \"TargetMultiMeasureAttributeName\": \"speed\",\ \"MeasureValueType\": \"DOUBLE\"\ },\ {\ \"SourceColumn\": \"fuel_consumption\",\ \"TargetMultiMeasureAttributeName\": \"fuel\",\ \"MeasureValueType\": \"DOUBLE\"\ },\ {\ \"SourceColumn\": \"miles\",\ \"MeasureValueType\": \"BIGINT\"\ }\ ]\ }\ }\ }" \ --data-source-configuration "{ \"DataSourceS3Configuration\": {\ \"BucketName\": \"$INPUT_BUCKET\",\ \"ObjectKeyPrefix\": \"$INPUT_OBJECT_KEY_PREFIX\" },\ \"DataFormat\": \"CSV\"\ }" \ --report-configuration "{\ \"ReportS3Configuration\": {\ \"BucketName\": \"$REPORT_BUCKET\",\ \"EncryptionOption\": \"SSE_S3\"\ }\ }" \ --target-database-name BatchLoad \ --target-table-name BatchLoadTest The preceding command returns the following output. { "TaskId": "TaskId " } 5. Check on the progress of the task. Make sure you replace $TASK_ID with the task id that was returned in the preceding step.
aws timestream-write describe-batch-load-task --task-id $TASK_ID Example output { "BatchLoadTaskDescription": { "ProgressReport": { "BytesIngested": 1024, "RecordsIngested": 2, "FileFailures": 0, "RecordIngestionFailures": 0, "RecordsProcessed": 2, "FileParseFailures": 0 }, "DataModelConfiguration": { "DataModel": { "DimensionMappings": [ { "SourceColumn": "vehicle", "DestinationColumn": "vehicle" }, { "SourceColumn": "registration", "DestinationColumn": "license" } ], "TimeUnit": "SECONDS", "TimeColumn": "timestamp", "MultiMeasureMappings": { "MultiMeasureAttributeMappings": [ Using batch load with the CLI 318 Amazon Timestream { Developer Guide "TargetMultiMeasureAttributeName": "weight", "SourceColumn": "wgt", "MeasureValueType": "DOUBLE" }, { "TargetMultiMeasureAttributeName": "speed", "SourceColumn": "spd", "MeasureValueType": "DOUBLE" }, { "TargetMultiMeasureAttributeName": "fuel", "SourceColumn": "fuel_consumption", "MeasureValueType": "DOUBLE" }, { "TargetMultiMeasureAttributeName": "miles", "SourceColumn": "miles", "MeasureValueType": "DOUBLE" } ], "TargetMultiMeasureName": "mva_measure_name" } } }, "TargetDatabaseName": "BatchLoad", "CreationTime": 1672960381.735, "TaskStatus": "SUCCEEDED", "RecordVersion": 1, "TaskId": "TaskId ", "TargetTableName": "BatchLoadTest", "ReportConfiguration": { "ReportS3Configuration": { "EncryptionOption": "SSE_S3", "ObjectKeyPrefix": "ObjectKeyPrefix ", "BucketName": "amzn-s3-demo-bucket" } }, "DataSourceConfiguration": { "DataSourceS3Configuration": { "ObjectKeyPrefix": "sample.csv", "BucketName": "amzn-s3-demo-source-bucket" }, "DataFormat": "CSV", Using batch load with the CLI 319 Amazon Timestream Developer Guide "CsvConfiguration": {} }, "LastUpdatedTime": 1672960387.334 } } Using batch load with the AWS SDKs For examples of how to create, describe, and list batch load tasks with the AWS SDKs, see Create batch load task, Describe batch load task, List batch load tasks, and Resume batch load task. Using batch load error reports Batch load tasks have one of the following status values: • CREATED (Created) – Task is created. • IN_PROGRESS (In progress) – Task is in progress. • FAILED (Failed) – Task has completed. But one or more errors was detected. • SUCCEEDED (Completed) – Task has completed with no errors. • PROGRESS_STOPPED (Progress stopped) – Task has stopped but not completed. You can attempt to resume the task. • PENDING_RESUME (Pending resume) – The task is pending to resume. When there are errors, an error log report is created in the S3 bucket defined for that. Errors are categorized as taskErrors or fileErrors in separate arrays. Following is an example error report. { "taskId": "9367BE28418C5EF902676482220B631C", "taskErrors": [], "fileErrors": [ { "fileName": "example.csv", "errors": [ { "reason": "The record timestamp is outside the time range of the data ingestion window.", "lineRanges": [ [ 2, Using batch load with the SDKs 320 Amazon Timestream Developer Guide 3 ] ] } ] } ] } Using scheduled queries in Timestream for LiveAnalytics The scheduled query feature in Amazon Timestream for LiveAnalytics is a fully managed, serverless, and scalable solution for calculating and storing aggregates, rollups, and other forms of preprocessed data typically used for operational dashboards, business reports, ad-hoc analytics, and other applications. Scheduled queries make real-time analytics more performant and cost- effective, so you can derive additional insights from your data, and can continue to make better business decisions. 
With scheduled queries, you define the real-time analytics queries that compute aggregates, rollups, and other operations on the data—and Amazon Timestream for LiveAnalytics periodically and automatically runs these queries and reliably writes the query results into a separate table. The data is typically calculated and updated into these tables within a few minutes. You can then point your dashboards
and reports to query the tables that contain aggregated data instead of querying the considerably larger source tables. This leads to performance and cost gains that can exceed orders of magnitude. This is because the tables with aggregated data contain much less data than the source tables, so they offer faster queries and cheaper data storage. Additionally, tables with scheduled queries offer all of the existing functionality of a Timestream for LiveAnalytics table. For example, you can query the tables using SQL. You can visualize the data stored in the tables using Grafana. You can also ingest data into the table using Amazon Kinesis, Amazon MSK, AWS IoT Core, and Telegraf. You can configure data retention policies on these tables for automatic data lifecycle management. Because the data retention of the tables that contain aggregated data is fully decoupled from that of source tables, you can also choose to reduce the data retention of the source tables and keep the aggregate data for a much longer duration, at a fraction of the data storage cost. Scheduled queries make real-time analytics faster, cheaper, and therefore more accessible to many more customers, so they can monitor their applications and drive better data-driven business decisions. Using scheduled queries 321 Amazon Timestream Topics • Scheduled query benefits • Scheduled query use cases Developer Guide • Example: Using real-time analytics to detect fraudulent payments and make better business decisions • Scheduled query concepts • Schedule expressions for scheduled queries • Data model mappings for scheduled queries • Scheduled query notification messages • Scheduled query error reports • Scheduled query patterns and examples Scheduled query benefits The following are the benefits of scheduled queries: • Operational ease – Scheduled queries are serverless and fully managed. • Performance and cost – Because scheduled queries precompute the aggregates, rollups, or other real-time analytics operations for your data and store the results in a table, queries that access tables populated by scheduled queries contain less data than the source tables. Therefore, queries that are run on these tables are faster and cheaper. Tables populated by scheduled computations contain less data than their source tables, and therefore help reduce the storage cost. You can also retain this data for a longer duration in the memory store at a fraction of the cost of retaining the source data in the memory store. • Interoperability – Tables populated by scheduled queries offer all of the existing functionality of Timestream for LiveAnalytics tables and can be used with all of the services and tools that work with Timestream for LiveAnalytics. See Working with Other Services for details. Scheduled query use cases You can use scheduled queries for business reports that summarize the end-user activity from your applications, so you can train machine learning models for personalization. You can also use scheduled queries for alarms that detect anomalies, network intrusions, or fraudulent activity, so you can take immediate remedial actions. Benefits 322 Amazon Timestream Developer Guide Additionally, you can use scheduled queries for more effective data governance. You can do this by granting source table access exclusively to the scheduled queries, and providing your developers access to only the tables populated by scheduled queries. This minimizes the impact of unintentional, long-running queries. 
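To make these use cases concrete, the following is a minimal, hypothetical sketch of defining a scheduled query with the AWS SDK for Python (Boto3). It assumes a source table raw_data.devops and a derived table derived.per_minute_agg that you have already created, along with an SNS topic, an IAM execution role, and an S3 bucket for error reports; replace these names with your own resources. The individual fields are explained in the concepts, schedule expression, and data model mapping sections that follow.

import boto3

# Scheduled queries are managed through the Timestream for LiveAnalytics Query API.
query_client = boto3.client('timestream-query', region_name='us-east-1')

# Hypothetical resources: replace the tables, topic, role, and bucket with your own.
response = query_client.create_scheduled_query(
    Name='PerMinutePerRegionSampleCount',
    QueryString="""
        SELECT region, bin(time, 1m) AS minute, COUNT(*) AS sample_count
        FROM raw_data.devops
        WHERE time BETWEEN @scheduled_runtime - 10m AND @scheduled_runtime + 1m
        GROUP BY region, bin(time, 1m)
    """,
    ScheduleConfiguration={'ScheduleExpression': 'rate(5 minutes)'},
    NotificationConfiguration={
        'SnsConfiguration': {'TopicArn': 'arn:aws:sns:us-east-1:123456789012:sq-notifications'}
    },
    ScheduledQueryExecutionRoleArn='arn:aws:iam::123456789012:role/TimestreamScheduledQueryRole',
    TargetConfiguration={
        'TimestreamConfiguration': {
            'DatabaseName': 'derived',
            'TableName': 'per_minute_agg',
            'TimeColumn': 'minute',
            'DimensionMappings': [{'Name': 'region', 'DimensionValueType': 'VARCHAR'}],
            'MultiMeasureMappings': {
                'TargetMultiMeasureName': 'per_minute_metrics',
                'MultiMeasureAttributeMappings': [
                    {'SourceColumn': 'sample_count', 'MeasureValueType': 'BIGINT'}
                ]
            }
        }
    },
    ErrorReportConfiguration={
        'S3Configuration': {
            'BucketName': 'amzn-s3-demo-bucket',
            'ObjectKeyPrefix': 'errors',
            'EncryptionOption': 'SSE_S3'
        }
    }
)
print(response['Arn'])

Once the scheduled query is created, the service runs it automatically on the specified schedule, and your dashboards can query the derived table instead of the source table.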
Example: Using real-time analytics to detect fraudulent payments and make better business decisions Consider a payment system that processes transactions sent from multiple point-of-sale terminals distributed across major metropolitan cities in the United States. You want to use Amazon Timestream for LiveAnalytics to store and analyze the transaction data, so you can detect fraudulent transactions and run real-time analytics queries. These queries can help you answer business questions such as identifying the busiest and least used point-of-sale terminals per hour, the busiest hour of the day for each city, and the city with the most transactions per hour. The system processes ~100K transactions per minute. Each transaction stored in Amazon Timestream for LiveAnalytics is 100 bytes. You've configured 10 queries that run every minute to detect various kinds of fraudulent payments. You've also created 25 queries that aggregate and slice/dice your data along various dimensions to help answer your business questions. Each of these queries processes the last hour's data. You've created a
dashboard to display the data generated by these queries. The dashboard contains 25 widgets, it is refreshed every hour, and it is typically accessed by 10 users at any given time. Finally, your memory store is configured with a 2-hour data retention period and the magnetic store is configured to have a 6-month data retention period. In this case, you can use real-time analytics queries that recompute the data every time the dashboard is accessed and refreshed, or use derived tables for the dashboard. The query cost for dashboards based on real-time analytics queries will be $120.70 per month. In contrast, the cost of dashboarding queries powered by derived tables will be $12.27 per month (see Amazon Timestream for LiveAnalytics pricing). In this case, using derived tables reduces the query cost by ~10 times. Scheduled query concepts Query string - This is the query whose result you are pre-computing and storing in another Timestream for LiveAnalytics table. You can define a scheduled query using the full SQL surface area of Timestream for LiveAnalytics, which provides you the flexibility of writing queries with Example 323 Amazon Timestream Developer Guide common table expressions, nested queries, window functions, or any kind of aggregate and scalar functions that are supported by Timestream for LiveAnalytics query language. Schedule expression - Allows you to specify when your scheduled query instances are run. You can specify the expressions using a cron expression (such as run at 8 AM UTC every day) or rate expression (such as run every 10 minutes). Target configuration - Allows you to specify how you map the result of a scheduled query into the destination table where the results of this scheduled query will be stored. Notification configuration -Timestream for LiveAnalytics automatically runs instances of a scheduled query based on your schedule expression. You receive a notification for every such query run on an SNS topic that you configure when you create a scheduled query. This notification specifies whether the instance was successfully run or encountered any errors. In addition, it provides information such as the bytes metered, data written to the target table, next invocation time, and so on. The following is an example of this kind of notification message. { "type":"AUTO_TRIGGER_SUCCESS", "arn":"arn:aws:timestream:us-east-1:123456789012:scheduled-query/ PT1mPerMinutePerRegionMeasureCount-9376096f7309", "nextInvocationEpochSecond":1637302500, "scheduledQueryRunSummary": { "invocationEpochSecond":1637302440, "triggerTimeMillis":1637302445697, "runStatus":"AUTO_TRIGGER_SUCCESS", "executionStats": { "executionTimeInMillis":21669, "dataWrites":36864, "bytesMetered":13547036820, "recordsIngested":1200, "queryResultRows":1200 } } } In this notification message, bytesMetered is the bytes that the query scanned on the source table, and dataWrites is the bytes written to the target table. Concepts 324 Amazon Timestream Note Developer Guide If you are consuming these notifications programmatically, be aware that new fields could be added to the notification message in the future. Error report location - Scheduled queries asynchronously run and store data in the target table. If an instance encounters any errors (for example, invalid data which could not be stored), the records that encountered errors are written to an error report in the error report location you specify at creation of a scheduled query. You specify the S3 bucket and prefix for the location. 
Timestream for LiveAnalytics appends the scheduled query name and invocation time to this prefix to help you identify the errors associated with a specific instance of a scheduled query. Tagging - You can optionally specify tags that you can associate with a scheduled query. For more details, see Tagging Timestream for LiveAnalytics Resources. Example In the following example, you compute a simple aggregate using a scheduled query: SELECT region, bin(time, 1m) as minute, SUM(CASE WHEN measure_name = 'metrics' THEN 20 ELSE 5 END) as numDataPoints FROM raw_data.devops WHERE time BETWEEN @scheduled_runtime - 10m AND @scheduled_runtime + 1m GROUP BY bin(time, 1m), region @scheduled_runtime parameter - In this example, you will notice the query accepting a special named parameter @scheduled_runtime. This is a special parameter (of type Timestamp) that the service sets when invoking a specific instance of a scheduled query so that you can deterministically control the time range for which a specific instance of a scheduled query analyzes the data in the source table. You can use @scheduled_runtime in your query in any location where a Timestamp type is expected. Consider an example where you set a schedule expression: cron(0/5 * *
* ? *) where the scheduled query will run at minute 0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55 of every hour. For the instance that is triggered at 2021-12-01 00:05:00, the @scheduled_runtime parameter is initialized to this value, such that the instance at this time operates on data in the range 2021-11-30 23:55:00 to 2021-12-01 00:06:00. Concepts 325 Amazon Timestream Developer Guide Instances with overlapping time ranges - As you will see in this example, two subsequent instances of a scheduled query can overlap in their time ranges. This is something you can control based on your requirements, the time predicates you specify, and the schedule expression. In this case, this overlap allows these computations to update the aggregates based on any data whose arrival was slightly delayed, up to 10 minutes in this example. The query run triggered at 2021-12-01 00:00:00 will cover the time range 2021-11-30 23:50:00 to 2021-12-30 00:01:00 and the query run triggered at 2021-12-01 00:05:00 will cover the range 2021-11-30 23:55:00 to 2021-12-01 00:06:00. To ensure correctness and to make sure that the aggregates stored in the target table match the aggregates computed from the source table, Timestream for LiveAnalytics ensures that the computation at 2021-12-01 00:05:00 will be performed only after the computation at 2021-12-01 00:00:00 has completed. The results of the latter computations can update any previously materialized aggregate if a newer value is generated. Internally, Timestream for LiveAnalytics uses record versions where records generated by latter instances of a scheduled query will be assigned a higher version number. Therefore, the aggregates computed by the invocation at 2021-12-01 00:05:00 can update the aggregates computed by the invocation at 2021-12-01 00:00:00, assuming newer data is available on the source table. Automatic triggers vs. manual triggers - After a scheduled query is created, Timestream for LiveAnalytics will automatically run the instances based on the specified schedule. Such automated triggers are managed entirely by the service. However, there might be scenarios where you might want to manually initiate some instances of a scheduled query. Examples include if a specific instance failed in a query run, if there was late-arriving data or updates in the source table after the automated schedule run, or if you want to update the target table for time ranges that are not covered by automated query runs (for example, for time ranges before creation of a scheduled query). You can use the ExecuteScheduledQuery API to manually initiate a specific instance of a scheduled query by passing the InvocationTime parameter, which is a value used for the @scheduled_runtime parameter. The following are a few important considerations when using the ExecuteScheduledQuery API: • If you are triggering multiple of these invocations, you need to make sure that these invocations do not generate results in overlapping time ranges. If you cannot ensure non-overlapping time ranges, then make sure that these query runs are initiated sequentially one after the other. If you concurrently initiate multiple query runs that overlap in their time ranges, then you can see trigger failures where you might see version conflicts in the error reports for these query runs. Concepts 326 Amazon Timestream Developer Guide • You can initiate the invocations with any timestamp value for @scheduled_runtime. 
So it is your responsibility to set the values appropriately so that the correct time ranges are updated in the target table, corresponding to the ranges where data was updated in the source table. • The ExecuteScheduledQuery API operates asynchronously. Upon a successful call, the service sends a 200 response and proceeds to execute the query. However, if there are multiple scheduled query executions concurrently running, anticipate potential delays in executing manually triggered scheduled executions. Schedule expressions for scheduled queries You can create scheduled queries on an automated schedule by using Amazon Timestream for LiveAnalytics scheduled queries that use cron or rate expressions. All scheduled queries use the UTC time zone, and the minimum possible precision for schedules is 1 minute. There are two ways to specify schedule expressions: cron and rate. Cron expressions offer more fine-grained schedule control, while rate expressions are simpler to express but lack that fine-grained control. For example, with a cron expression, you can define
a scheduled query that gets triggered at a specified time on a certain day of each week or month, or a specified minute every hour only on Monday - Friday, and so on. In contrast, rate expressions initiate a scheduled query at a regular rate, such as once every minute, hour, or day, starting from the exact time when the scheduled query is created. Cron expression • Syntax cron(fields) Cron expressions have six required fields, which are separated by white space. Field Minutes Hours Values 0-59 0-23 Wildcards , - * / , - * / Schedule expressions 327 Amazon Timestream Field Day-of-month Values 1-31 Wildcards , - * ? / L W Developer Guide Month 1-12 or JAN-DEC , - * / Day-of-week 1-7 or SUN-SAT , - * ? L # Year 1970-2199 , - * / Wildcard characters • The *,* (comma) wildcard includes additional values. In the Month field, JAN,FEB,MAR would include January, February, and March. • The *-* (dash) wildcard specifies ranges. In the Day field, 1-15 would include days 1 through 15 of the specified month. • The *** (asterisk) wildcard includes all values in the field. In the Hours field, *** would include every hour. You cannot use *** in both the Day-of-month and Day-of-week fields. If you use it in one, you must use *?* in the other. • The */* (forward slash) wildcard specifies increments. In the Minutes field, you could enter 1/10 to specify every 10th minute, starting from the first minute of the hour (for example, the 11th, 21st, and 31st minute, and so on). • The *?* (question mark) wildcard specifies one or another. In the Day-of-month field you could enter *7* and if you didn't care what day of the week the 7th was, you could enter *?* in the Day-of-week field. • The *L* wildcard in the Day-of-month or Day-of-week fields specifies the last day of the month or week. • The W wildcard in the Day-of-month field specifies a weekday. In the Day-of-month field, 3W specifies the weekday closest to the third day of the month. • The *#* wildcard in the Day-of-week field specifies a certain instance of the specified day of the week within a month. For example, 3#2 would be the second Tuesday of the month: the 3 refers to Tuesday because it is the third day of each week, and the 2 refers to the second day of that type within the month. Schedule expressions 328 Amazon Timestream Note Developer Guide If you use a '#' character, you can define only one expression in the day-of-week field. For example, "3#1,6#3" is not valid because it is interpreted as two expressions. Limitations • You can't specify the Day-of-month and Day-of-week fields in the same cron expression. If you specify a value (or a *) in one of the fields, you must use a *?* (question mark) in the other. • Cron expressions that lead to rates faster than 1 minute are not supported. Examples Minutes Hours Day of month Month Day of week Year Meaning 0 10 15 12 0 18 * * ? * * * ? ? * * MON-FRI * Run at 10:00 am (UTC) every day. Run at 12:15 pm (UTC) every day. Run at 6:00 pm (UTC) every Monday through Friday. Schedule expressions 329 Amazon Timestream Minutes Hours Day of month Month Day of week Year Meaning Developer Guide 0 8 1 0/15 0/10 * * 0/5 8-17 * * ? * * * * ? ? MON-FRI * * * MON-FRI * Run at 8:00 am (UTC) every first day of the month. Run every 15 minutes. Run every 10 minutes Monday through Friday. Run every 5 minutes Monday through Friday between 8:00 am and 5:55 pm (UTC). Rate expressions • A rate expression starts when you create the scheduled event rule, and then runs on its defined schedule. 
Rate expressions have two required fields. Fields are separated by white space. Syntax rate(value unit) • value: A positive number. • unit: The unit of time. Different units are required for values of 1 (for example, minute) and values over
1 (for example, minutes). Valid values: minute | minutes | hour | hours | day | days Data model mappings for scheduled queries Timestream for LiveAnalytics supports flexible modeling of data in its tables and this same flexibility applies to results of scheduled queries that are materialized into another Timestream for LiveAnalytics table. With scheduled queries, you can query any table, whether it has data in multi- measure records or single-measure records and write the query results using either multi-measure or single-measure records. You use the TargetConfiguration in the specification of a scheduled query to map the query results to the appropriate columns in the destination derived table. The following sections describe the different ways of specifying this TargetConfiguration to achieve different data models in the derived table. Specifically, you will see: • How to write to multi-measure records when the query result does not have a measure name and you specify the target measure name in the TargetConfiguration. • How you use measure name in the query result to write multi-measure records. • How you can define a model to write multiple records with different multi-measure attributes. • How you can define a model to write to single-measure records in the derived table. • How you can query single-measure records and/or multi-measure records in a scheduled query and have the results materialized to either a single-measure record or a multi-measure record, which allows you to choose the flexibility of data models. Example: Target measure name for multi-measure records In this example, you will see that the query is reading data from a table with multi- measure data and is writing the results into another table using multi-measure records. The scheduled query result does not have a natural measure name column. Here, you specify the measure name in the derived table using the TargetMultiMeasureName property in the TargetConfiguration.TimestreamConfiguration. Data model mappings 331 Amazon Timestream Developer Guide { "Name" : "CustomMultiMeasureName", "QueryString" : "SELECT region, bin(time, 1h) as hour, AVG(memory_cached) as avg_mem_cached_1h, MIN(memory_free) as min_mem_free_1h, MAX(memory_used) as max_mem_used_1h, SUM(disk_io_writes) as sum_1h, AVG(disk_used) as avg_disk_used_1h, AVG(disk_free) as avg_disk_free_1h, MAX(cpu_user) as max_cpu_user_1h, MIN(cpu_idle) as min_cpu_idle_1h, MAX(cpu_system) as max_cpu_system_1h FROM raw_data.devops_multi WHERE time BETWEEN bin(@scheduled_runtime, 1h) - 14h AND bin(@scheduled_runtime, 1h) - 2h AND measure_name = 'metrics' GROUP BY region, bin(time, 1h)", "ScheduleConfiguration" : { "ScheduleExpression" : "cron(0 0/1 * * ? 
*)" }, "NotificationConfiguration" : { "SnsConfiguration" : { "TopicArn" : "******" } }, "ScheduledQueryExecutionRoleArn": "******", "TargetConfiguration": { "TimestreamConfiguration": { "DatabaseName" : "derived", "TableName" : "dashboard_metrics_1h_agg_1", "TimeColumn" : "hour", "DimensionMappings" : [ { "Name": "region", "DimensionValueType" : "VARCHAR" } ], "MultiMeasureMappings" : { "TargetMultiMeasureName": "dashboard-metrics", "MultiMeasureAttributeMappings" : [ { "SourceColumn" : "avg_mem_cached_1h", "MeasureValueType" : "DOUBLE", "TargetMultiMeasureAttributeName" : "avgMemCached" }, { "SourceColumn" : "min_mem_free_1h", "MeasureValueType" : "DOUBLE" }, { "SourceColumn" : "max_mem_used_1h", Data model mappings 332 Amazon Timestream Developer Guide "MeasureValueType" : "DOUBLE" }, { "SourceColumn" : "sum_1h", "MeasureValueType" : "DOUBLE", "TargetMultiMeasureAttributeName" : "totalDiskWrites" }, { "SourceColumn" : "avg_disk_used_1h", "MeasureValueType" : "DOUBLE" }, { "SourceColumn" : "avg_disk_free_1h", "MeasureValueType" : "DOUBLE" }, { "SourceColumn" : "max_cpu_user_1h", "MeasureValueType" : "DOUBLE", "TargetMultiMeasureAttributeName" : "CpuUserP100" }, { "SourceColumn" : "min_cpu_idle_1h", "MeasureValueType" : "DOUBLE" }, { "SourceColumn" : "max_cpu_system_1h", "MeasureValueType" : "DOUBLE", "TargetMultiMeasureAttributeName" : "CpuSystemP100" } ] } } }, "ErrorReportConfiguration": { "S3Configuration" : { "BucketName" : "******", "ObjectKeyPrefix": "errors", "EncryptionOption": "SSE_S3" } } } Data model mappings 333 Amazon Timestream Developer Guide The mapping in this example creates one multi-measure record with measure name dashboard- metrics and attribute names avgMemCached, min_mem_free_1h, max_mem_used_1h, totalDiskWrites, avg_disk_used_1h, avg_disk_free_1h, CpuUserP100, min_cpu_idle_1h, CpuSystemP100. Notice the optional use of TargetMultiMeasureAttributeName to rename the query output columns to a different attribute name used for result materialization. The following is the schema for the destination table once this scheduled query is materialized. As you can see from the Timestream for LiveAnalytics attribute type in the following result, the results are materialized into a multi-measure record with a single-measure name dashboard-metrics, as shown in the measure schema. Column region measure_name Type varchar varchar Timestream for LiveAnalytics attribute type DIMENSION MEASURE_NAME time timestamp TIMESTAMP CpuSystemP100 avgMemCached min_cpu_idle_1h avg_disk_free_1h avg_disk_used_1h totalDiskWrites max_mem_used_1h min_mem_free_1h CpuUserP100 double double double double double double double double double MULTI MULTI MULTI MULTI MULTI MULTI MULTI MULTI MULTI The following are the corresponding measures obtained with a SHOW MEASURES query. Data model mappings 334 Amazon Timestream Developer Guide measure_name data_type Dimensions dashboard-metrics multi [{'dimension_name': 'region', 'data_type': 'varchar'}] Example: Using measure name from scheduled query in multi-measure records In this example, you will see a query reading from a table with single-measure records and materializing the results into multi-measure records. In this case, the scheduled query result has a column whose values can be used as measure names in the target table where the results of the scheduled query is materialized. Then you can specify the measure name for the multi-measure record in the derived table
with a SHOW MEASURES query. Data model mappings 334 Amazon Timestream Developer Guide measure_name data_type Dimensions dashboard-metrics multi [{'dimension_name': 'region', 'data_type': 'varchar'}] Example: Using measure name from scheduled query in multi-measure records In this example, you will see a query reading from a table with single-measure records and materializing the results into multi-measure records. In this case, the scheduled query result has a column whose values can be used as measure names in the target table where the results of the scheduled query is materialized. Then you can specify the measure name for the multi-measure record in the derived table using the MeasureNameColumn property in TargetConfiguration.TimestreamConfiguration. { "Name" : "UsingMeasureNameFromQueryResult", "QueryString" : "SELECT region, bin(time, 1h) as hour, measure_name, AVG(CASE WHEN measure_name IN ('memory_cached', 'disk_used', 'disk_free') THEN measure_value::double ELSE NULL END) as avg_1h, MIN(CASE WHEN measure_name IN ('memory_free', 'cpu_idle') THEN measure_value::double ELSE NULL END) as min_1h, SUM(CASE WHEN measure_name IN ('disk_io_writes') THEN measure_value::double ELSE NULL END) as sum_1h, MAX(CASE WHEN measure_name IN ('memory_used', 'cpu_user', 'cpu_system') THEN measure_value::double ELSE NULL END) as max_1h FROM raw_data.devops WHERE time BETWEEN bin(@scheduled_runtime, 1h) - 14h AND bin(@scheduled_runtime, 1h) - 2h AND measure_name IN ('memory_free', 'memory_used', 'memory_cached', 'disk_io_writes', 'disk_used', 'disk_free', 'cpu_user', 'cpu_system', 'cpu_idle') GROUP BY region, measure_name, bin(time, 1h)", "ScheduleConfiguration" : { "ScheduleExpression" : "cron(0 0/1 * * ? *)" }, "NotificationConfiguration" : { "SnsConfiguration" : { "TopicArn" : "******" } }, "ScheduledQueryExecutionRoleArn": "******", "TargetConfiguration": { "TimestreamConfiguration": { "DatabaseName" : "derived", "TableName" : "dashboard_metrics_1h_agg_2", Data model mappings 335 Amazon Timestream Developer Guide "TimeColumn" : "hour", "DimensionMappings" : [ { "Name": "region", "DimensionValueType" : "VARCHAR" } ], "MeasureNameColumn" : "measure_name", "MultiMeasureMappings" : { "MultiMeasureAttributeMappings" : [ { "SourceColumn" : "avg_1h", "MeasureValueType" : "DOUBLE" }, { "SourceColumn" : "min_1h", "MeasureValueType" : "DOUBLE", "TargetMultiMeasureAttributeName": "p0_1h" }, { "SourceColumn" : "sum_1h", "MeasureValueType" : "DOUBLE" }, { "SourceColumn" : "max_1h", "MeasureValueType" : "DOUBLE", "TargetMultiMeasureAttributeName": "p100_1h" } ] } } }, "ErrorReportConfiguration": { "S3Configuration" : { "BucketName" : "******", "ObjectKeyPrefix": "errors", "EncryptionOption": "SSE_S3" } } } The mapping in this example will create multi-measure records with attributes avg_1h, p0_1h, sum_1h, p100_1h and will use the values of the measure_name column in the query result as Data model mappings 336 Amazon Timestream Developer Guide the measure name for the multi-measure records in the destination table. Additionally note that the previous examples optionally use the TargetMultiMeasureAttributeName with a subset of the mappings to rename the attributes. For instance, min_1h was renamed to p0_1h and max_1h is renamed to p100_1h. The following is the schema for the destination table once this scheduled query is materialized. As you can see from the Timestream for LiveAnalytics attribute type in the following result, the results are materialized into a multi-measure record. 
If you look at the measure schema, there were nine different measure names that were ingested which correspond to the values seen in the query results. Column region measure_name time sum_1h p100_1h p0_1h avg_1h Type varchar varchar Timestream for LiveAnalytics attribute type DIMENSION MEASURE_NAME timestamp TIMESTAMP double double double double MULTI MULTI MULTI MULTI The following are corresponding measures obtained with a SHOW MEASURES query. measure_name data_type Dimensions cpu_idle cpu_system multi multi [{'dimension_name': 'region', 'data_type': 'varchar'}] [{'dimension_name': 'region', 'data_type': 'varchar'}] Data model mappings 337 Amazon Timestream Developer Guide measure_name data_type Dimensions cpu_user disk_free disk_io_writes disk_used memory_cached memory_free memory_free multi multi multi multi multi multi multi [{'dimension_name': 'region', 'data_type': 'varchar'}] [{'dimension_name': 'region', 'data_type': 'varchar'}] [{'dimension_name': 'region', 'data_type': 'varchar'}] [{'dimension_name': 'region', 'data_type': 'varchar'}] [{'dimension_name': 'region', 'data_type': 'varchar'}] [{'dimension_name': 'region', 'data_type': 'varchar'}] [{'dimension_name': 'region', 'data_type': 'varchar'}] Example: Mapping results to different multi-measure records with different attributes The following example shows how you can map different columns in your query result into different multi-measure records with different measure names. If you see the following scheduled query definition, the result of the query has the following columns: region, hour, avg_mem_cached_1h, min_mem_free_1h, max_mem_used_1h, total_disk_io_writes_1h, avg_disk_used_1h, avg_disk_free_1h, max_cpu_user_1h, max_cpu_system_1h, min_cpu_system_1h. region is mapped to dimension, and hour is mapped to the time column. The MixedMeasureMappings property in TargetConfiguration.TimestreamConfiguration specifies how to map the measures to multi-measure records in the derived table. In this specific example, avg_mem_cached_1h, min_mem_free_1h, max_mem_used_1h are used in one multi-measure record with measure name of mem_aggregates, total_disk_io_writes_1h, avg_disk_used_1h, avg_disk_free_1h are used in another multi-measure record with measure name Data model mappings 338 Amazon Timestream Developer Guide of disk_aggregates, and finally max_cpu_user_1h, max_cpu_system_1h, min_cpu_system_1h are used in another multi-measure record with measure name cpu_aggregates. In these mappings, you can also optionally use TargetMultiMeasureAttributeName to rename the query result column to have a different attribute name in the destination table. For instance, the result column avg_mem_cached_1h gets renamed to avgMemCached, total_disk_io_writes_1h gets renamed to totalIOWrites, etc. When you're defining the mappings for multi-measure records, Timestream for LiveAnalytics inspects every row in the query results and automatically ignores the column values that have NULL values. As a result, in the case of mappings
mappings 338 Amazon Timestream Developer Guide of disk_aggregates, and finally max_cpu_user_1h, max_cpu_system_1h, min_cpu_system_1h are used in another multi-measure record with measure name cpu_aggregates. In these mappings, you can also optionally use TargetMultiMeasureAttributeName to rename the query result column to have a different attribute name in the destination table. For instance, the result column avg_mem_cached_1h gets renamed to avgMemCached, total_disk_io_writes_1h gets renamed to totalIOWrites, etc. When you're defining the mappings for multi-measure records, Timestream for LiveAnalytics inspects every row in the query results and automatically ignores the column values that have NULL values. As a result, in the case of mappings with multiple measures names, if all the column values for that group in the mapping are NULL for a given row, then no value for that measure name is ingested for that row. For example, in the following mapping, avg_mem_cached_1h, min_mem_free_1h, and max_mem_used_1h are mapped to measure name mem_aggregates. If for a given row of the query result, all these of the column values are NULL, Timestream for LiveAnalytics won't ingest the measure mem_aggregates for that row. If all nine columns for a given row are NULL, then you will see an user error reported in your error report. { "Name" : "AggsInDifferentMultiMeasureRecords", "QueryString" : "SELECT region, bin(time, 1h) as hour, AVG(CASE WHEN measure_name = 'memory_cached' THEN measure_value::double ELSE NULL END) as avg_mem_cached_1h, MIN(CASE WHEN measure_name = 'memory_free' THEN measure_value::double ELSE NULL END) as min_mem_free_1h, MAX(CASE WHEN measure_name = 'memory_used' THEN measure_value::double ELSE NULL END) as max_mem_used_1h, SUM(CASE WHEN measure_name = 'disk_io_writes' THEN measure_value::double ELSE NULL END) as total_disk_io_writes_1h, AVG(CASE WHEN measure_name = 'disk_used' THEN measure_value::double ELSE NULL END) as avg_disk_used_1h, AVG(CASE WHEN measure_name = 'disk_free' THEN measure_value::double ELSE NULL END) as avg_disk_free_1h, MAX(CASE WHEN measure_name = 'cpu_user' THEN measure_value::double ELSE NULL END) as max_cpu_user_1h, MAX(CASE WHEN measure_name = 'cpu_system' THEN measure_value::double ELSE NULL END) as max_cpu_system_1h, MIN(CASE WHEN measure_name = 'cpu_idle' THEN measure_value::double ELSE NULL END) as min_cpu_system_1h FROM raw_data.devops WHERE time BETWEEN bin(@scheduled_runtime, 1h) - 14h AND bin(@scheduled_runtime, 1h) - 2h AND measure_name IN ('memory_cached', 'memory_free', 'memory_used', 'disk_io_writes', 'disk_used', 'disk_free', 'cpu_user', 'cpu_system', 'cpu_idle') GROUP BY region, bin(time, 1h)", "ScheduleConfiguration" : { "ScheduleExpression" : "cron(0 0/1 * * ? 
*)" }, "NotificationConfiguration" : { Data model mappings 339 Amazon Timestream Developer Guide "SnsConfiguration" : { "TopicArn" : "******" } }, "ScheduledQueryExecutionRoleArn": "******", "TargetConfiguration": { "TimestreamConfiguration": { "DatabaseName" : "derived", "TableName" : "dashboard_metrics_1h_agg_3", "TimeColumn" : "hour", "DimensionMappings" : [ { "Name": "region", "DimensionValueType" : "VARCHAR" } ], "MixedMeasureMappings" : [ { "MeasureValueType" : "MULTI", "TargetMeasureName" : "mem_aggregates", "MultiMeasureAttributeMappings" : [ { "SourceColumn" : "avg_mem_cached_1h", "MeasureValueType" : "DOUBLE", "TargetMultiMeasureAttributeName": "avgMemCached" }, { "SourceColumn" : "min_mem_free_1h", "MeasureValueType" : "DOUBLE" }, { "SourceColumn" : "max_mem_used_1h", "MeasureValueType" : "DOUBLE", "TargetMultiMeasureAttributeName": "maxMemUsed" } ] }, { "MeasureValueType" : "MULTI", "TargetMeasureName" : "disk_aggregates", "MultiMeasureAttributeMappings" : [ { "SourceColumn" : "total_disk_io_writes_1h", "MeasureValueType" : "DOUBLE", Data model mappings 340 Amazon Timestream Developer Guide "TargetMultiMeasureAttributeName": "totalIOWrites" }, { "SourceColumn" : "avg_disk_used_1h", "MeasureValueType" : "DOUBLE" }, { "SourceColumn" : "avg_disk_free_1h", "MeasureValueType" : "DOUBLE" } ] }, { "MeasureValueType" : "MULTI", "TargetMeasureName" : "cpu_aggregates", "MultiMeasureAttributeMappings" : [ { "SourceColumn" : "max_cpu_user_1h", "MeasureValueType" : "DOUBLE" }, { "SourceColumn" : "max_cpu_system_1h", "MeasureValueType" : "DOUBLE" }, { "SourceColumn" : "min_cpu_idle_1h", "MeasureValueType" : "DOUBLE", "TargetMultiMeasureAttributeName": "minCpuIdle" } ] } ] } }, "ErrorReportConfiguration": { "S3Configuration" : { "BucketName" : "******", "ObjectKeyPrefix": "errors", "EncryptionOption": "SSE_S3" } } } Data model mappings 341 Amazon Timestream Developer Guide The following is the schema for the destination table once this scheduled query is materialized. Column region measure_name time minCpuIdle max_cpu_system_1h max_cpu_user_1h avgMemCached maxMemUsed min_mem_free_1h avg_disk_free_1h avg_disk_used_1h totalIOWrites Type varchar varchar Timestream for LiveAnalytics attribute type DIMENSION MEASURE_NAME timestamp TIMESTAMP double double double double double double double double double MULTI MULTI MULTI MULTI MULTI MULTI MULTI MULTI MULTI The following are the corresponding measures obtained with a SHOW MEASURES query. measure_name data_type Dimensions cpu_aggregates disk_aggregates multi multi [{'dimension_name': 'region', 'data_type': 'varchar'}] [{'dimension_name': 'region', 'data_type': 'varchar'}] Data model mappings 342 Amazon Timestream Developer Guide measure_name data_type Dimensions mem_aggregates multi [{'dimension_name': 'region', 'data_type': 'varchar'}] Example: Mapping results to single-measure records with measure name from query results The following is an example of a scheduled query whose results are materialized into single- measure records. In this example, the query result has the measure_name column whose values will be used as measure names in the target table. You use the MixedMeasureMappings attribute in the TargetConfiguration.TimestreamConfiguration to specify the mapping of the query result column to the scalar measure in the target table. In the following example definition, the query result is expected to nine distinct measure_name values. 
You list out all these measure names in the mapping and specify which column to use for the single-measure value for that measure name. For example, in this mapping, if measure name of memory_cached is seen for a given result row, then the value in the avg_1h
column whose values will be used as measure names in the target table. You use the MixedMeasureMappings attribute in the TargetConfiguration.TimestreamConfiguration to specify the mapping of the query result column to the scalar measure in the target table. In the following example definition, the query result is expected to nine distinct measure_name values. You list out all these measure names in the mapping and specify which column to use for the single-measure value for that measure name. For example, in this mapping, if measure name of memory_cached is seen for a given result row, then the value in the avg_1h column is used as the value for the measure when the data is written to the target table. You can optionally use TargetMeasureName to provide a new measure name for this value. { "Name" : "UsingMeasureNameColumnForSingleMeasureMapping", "QueryString" : "SELECT region, bin(time, 1h) as hour, measure_name, AVG(CASE WHEN measure_name IN ('memory_cached', 'disk_used', 'disk_free') THEN measure_value::double ELSE NULL END) as avg_1h, MIN(CASE WHEN measure_name IN ('memory_free', 'cpu_idle') THEN measure_value::double ELSE NULL END) as min_1h, SUM(CASE WHEN measure_name IN ('disk_io_writes') THEN measure_value::double ELSE NULL END) as sum_1h, MAX(CASE WHEN measure_name IN ('memory_used', 'cpu_user', 'cpu_system') THEN measure_value::double ELSE NULL END) as max_1h FROM raw_data.devops WHERE time BETWEEN bin(@scheduled_runtime, 1h) - 14h AND bin(@scheduled_runtime, 1h) - 2h AND measure_name IN ('memory_free', 'memory_used', 'memory_cached', 'disk_io_writes', 'disk_used', 'disk_free', 'cpu_user', 'cpu_system', 'cpu_idle') GROUP BY region, bin(time, 1h), measure_name", "ScheduleConfiguration" : { "ScheduleExpression" : "cron(0 0/1 * * ? *)" }, "NotificationConfiguration" : { "SnsConfiguration" : { "TopicArn" : "******" Data model mappings 343 Amazon Timestream } Developer Guide }, "ScheduledQueryExecutionRoleArn": "******", "TargetConfiguration": { "TimestreamConfiguration": { "DatabaseName" : "derived", "TableName" : "dashboard_metrics_1h_agg_4", "TimeColumn" : "hour", "DimensionMappings" : [ { "Name": "region", "DimensionValueType" : "VARCHAR" } ], "MeasureNameColumn" : "measure_name", "MixedMeasureMappings" : [ { "MeasureName" : "memory_cached", "MeasureValueType" : "DOUBLE", "SourceColumn" : "avg_1h", "TargetMeasureName" : "AvgMemCached" }, { "MeasureName" : "disk_used", "MeasureValueType" : "DOUBLE", "SourceColumn" : "avg_1h" }, { "MeasureName" : "disk_free", "MeasureValueType" : "DOUBLE", "SourceColumn" : "avg_1h" }, { "MeasureName" : "memory_free", "MeasureValueType" : "DOUBLE", "SourceColumn" : "min_1h", "TargetMeasureName" : "MinMemFree" }, { "MeasureName" : "cpu_idle", "MeasureValueType" : "DOUBLE", "SourceColumn" : "min_1h" }, { Data model mappings 344 Amazon Timestream Developer Guide "MeasureName" : "disk_io_writes", "MeasureValueType" : "DOUBLE", "SourceColumn" : "sum_1h", "TargetMeasureName" : "total-disk-io-writes" }, { "MeasureName" : "memory_used", "MeasureValueType" : "DOUBLE", "SourceColumn" : "max_1h", "TargetMeasureName" : "maxMemUsed" }, { "MeasureName" : "cpu_user", "MeasureValueType" : "DOUBLE", "SourceColumn" : "max_1h" }, { "MeasureName" : "cpu_system", "MeasureValueType" : "DOUBLE", "SourceColumn" : "max_1h" } ] } }, "ErrorReportConfiguration": { "S3Configuration" : { "BucketName" : "******", "ObjectKeyPrefix": "errors", "EncryptionOption": "SSE_S3" } } } The following is the schema for the destination table once this scheduled query is materialized. 
As you can see from the schema, the table is using single-measure records. If you list the measure schema for the table, you will see the nine measures written to based on the mapping provided in the specification. Column region Type varchar Timestream for LiveAnalytics attribute type DIMENSION Data model mappings 345 Amazon Timestream Developer Guide Column Type Timestream for LiveAnalytics attribute type measure_name varchar MEASURE_NAME time timestamp TIMESTAMP measure_value::double double MEASURE_VALUE The following are the corresponding measures obtained with a SHOW MEASURES query. measure_name data_type Dimensions AvgMemCached double MinMemFree double cpu_idle cpu_system cpu_user disk_free disk_used double double double double double maxMemUsed double total-disk-io-writes double [{'dimension_name': 'region', 'data_type': 'varchar'}] [{'dimension_name': 'region', 'data_type': 'varchar'}] [{'dimension_name': 'region', 'data_type': 'varchar'}] [{'dimension_name': 'region', 'data_type': 'varchar'}] [{'dimension_name': 'region', 'data_type': 'varchar'}] [{'dimension_name': 'region', 'data_type': 'varchar'}] [{'dimension_name': 'region', 'data_type': 'varchar'}] [{'dimension_name': 'region', 'data_type': 'varchar'}] [{'dimension_name': 'region', 'data_type': 'varchar'}] Data model mappings 346 Amazon Timestream Developer Guide Example: Mapping results to single-measure records with query result columns as measure names In this example, you have a query whose results do not have a measure name column. Instead, you want the query result column name as the measure name when mapping the output to single- measure records. Earlier there was an example where a similar result was written to a multi- measure record. In this example, you will see how to map it to single-measure records if that fits your application scenario. Again, you specify this mapping using the MixedMeasureMappings property in TargetConfiguration.TimestreamConfiguration. In the following example, you see that the query result has nine columns. You use the result columns as measure names and the values as the single-measure values. For example, for a given row in the query result, the column name avg_mem_cached_1h is used as the column name and value associated with column, and avg_mem_cached_1h is used as the measure value for the single-measure record. You can also use TargetMeasureName to use a different measure name in the target table. For instance, for values in column sum_1h, the mapping specifies to use total_disk_io_writes_1h as the measure name in the target table. If any column's value is NULL, then the corresponding measure
nine columns. You use the result columns as measure names and the values as the single-measure values. For example, for a given row in the query result, the column name avg_mem_cached_1h is used as the column name and value associated with column, and avg_mem_cached_1h is used as the measure value for the single-measure record. You can also use TargetMeasureName to use a different measure name in the target table. For instance, for values in column sum_1h, the mapping specifies to use total_disk_io_writes_1h as the measure name in the target table. If any column's value is NULL, then the corresponding measure is ignored. { "Name" : "SingleMeasureMappingWithoutMeasureNameColumnInQueryResult", "QueryString" : "SELECT region, bin(time, 1h) as hour, AVG(CASE WHEN measure_name = 'memory_cached' THEN measure_value::double ELSE NULL END) as avg_mem_cached_1h, AVG(CASE WHEN measure_name = 'disk_used' THEN measure_value::double ELSE NULL END) as avg_disk_used_1h, AVG(CASE WHEN measure_name = 'disk_free' THEN measure_value::double ELSE NULL END) as avg_disk_free_1h, MIN(CASE WHEN measure_name = 'memory_free' THEN measure_value::double ELSE NULL END) as min_mem_free_1h, MIN(CASE WHEN measure_name = 'cpu_idle' THEN measure_value::double ELSE NULL END) as min_cpu_idle_1h, SUM(CASE WHEN measure_name = 'disk_io_writes' THEN measure_value::double ELSE NULL END) as sum_1h, MAX(CASE WHEN measure_name = 'memory_used' THEN measure_value::double ELSE NULL END) as max_mem_used_1h, MAX(CASE WHEN measure_name = 'cpu_user' THEN measure_value::double ELSE NULL END) as max_cpu_user_1h, MAX(CASE WHEN measure_name = 'cpu_system' THEN measure_value::double ELSE NULL END) as max_cpu_system_1h FROM raw_data.devops WHERE time BETWEEN bin(@scheduled_runtime, 1h) - 14h AND bin(@scheduled_runtime, 1h) - 2h AND measure_name IN ('memory_free', 'memory_used', 'memory_cached', 'disk_io_writes', 'disk_used', 'disk_free', 'cpu_user', 'cpu_system', 'cpu_idle') GROUP BY region, bin(time, 1h)", "ScheduleConfiguration" : { "ScheduleExpression" : "cron(0 0/1 * * ? *)" Data model mappings 347 Developer Guide Amazon Timestream }, "NotificationConfiguration" : { "SnsConfiguration" : { "TopicArn" : "******" } }, "ScheduledQueryExecutionRoleArn": "******", "TargetConfiguration": { "TimestreamConfiguration": { "DatabaseName" : "derived", "TableName" : "dashboard_metrics_1h_agg_5", "TimeColumn" : "hour", "DimensionMappings" : [ { "Name": "region", "DimensionValueType" : "VARCHAR" } ], "MixedMeasureMappings" : [ { "MeasureValueType" : "DOUBLE", "SourceColumn" : "avg_mem_cached_1h" }, { "MeasureValueType" : "DOUBLE", "SourceColumn" : "avg_disk_used_1h" }, { "MeasureValueType" : "DOUBLE", "SourceColumn" : "avg_disk_free_1h" }, { "MeasureValueType" : "DOUBLE", "SourceColumn" : "min_mem_free_1h" }, { "MeasureValueType" : "DOUBLE", "SourceColumn" : "min_cpu_idle_1h" }, { "MeasureValueType" : "DOUBLE", "SourceColumn" : "sum_1h", "TargetMeasureName" : "total_disk_io_writes_1h" }, Data model mappings 348 Developer Guide Amazon Timestream { "MeasureValueType" : "DOUBLE", "SourceColumn" : "max_mem_used_1h" }, { "MeasureValueType" : "DOUBLE", "SourceColumn" : "max_cpu_user_1h" }, { "MeasureValueType" : "DOUBLE", "SourceColumn" : "max_cpu_system_1h" } ] } }, "ErrorReportConfiguration": { "S3Configuration" : { "BucketName" : "******", "ObjectKeyPrefix": "errors", "EncryptionOption": "SSE_S3" } } } The following is the schema for the destination table once this scheduled query is materialized. 
As you can see that the target table is storing records with single-measure values of type double. Similarly, the measure schema for the table shows the nine measure names. Also notice that the measure name total_disk_io_writes_1h is present since the mapping renamed sum_1h to total_disk_io_writes_1h. Column region measure_name Type varchar varchar Timestream for LiveAnalytics attribute type DIMENSION MEASURE_NAME time timestamp TIMESTAMP measure_value::double double MEASURE_VALUE Data model mappings 349 Amazon Timestream Developer Guide The following are the corresponding measures obtained with a SHOW MEASURES query. measure_name data_type Dimensions avg_disk_free_1h double avg_disk_used_1h double avg_mem_cached_1h double max_cpu_system_1h double max_cpu_user_1h double max_mem_used_1h double min_cpu_idle_1h double min_mem_free_1h double total-disk-io-writes double [{'dimension_name': 'region', 'data_type': 'varchar'}] [{'dimension_name': 'region', 'data_type': 'varchar'}] [{'dimension_name': 'region', 'data_type': 'varchar'}] [{'dimension_name': 'region', 'data_type': 'varchar'}] [{'dimension_name': 'region', 'data_type': 'varchar'}] [{'dimension_name': 'region', 'data_type': 'varchar'}] [{'dimension_name': 'region', 'data_type': 'varchar'}] [{'dimension_name': 'region', 'data_type': 'varchar'}] [{'dimension_name': 'region', 'data_type': 'varchar'}] Scheduled query notification messages This section describes the messages sent by Timestream for LiveAnalytics when creating, deleting, running, or updating the state of a scheduled query. Notification messages 350 Amazon Timestream Developer Guide Notification message name Structure Description CreatingNotificationMessage CreatingNotificati onMessage { String arn; NotificationType type; } UpdateNotificationMessage UpdateNotification Message { String arn; NotificationType type; QueryState state; } This notification message is sent before sending the response for CreateSch eduledQuery . The scheduled query is enabled after sending this notification. arn - The ARN of the scheduled query that is being created. type - SCHEDULED _QUERY_CREATING This notification message is sent when a scheduled query is updated. Timestream for LiveAnalytics can disable the scheduled query, automatic ally, in case non-recoverable error is encountered, such as: • AssumeRole failure • Any 4xx errors encounter ed when communicating with KMS when a customer managed KMS key is specified. • Any 4xx errors encounter ed during running of the scheduled query. • Any 4xx errors encountered during ingestion of query results Notification messages 351 Amazon Timestream Developer Guide Notification message name Structure Description DeleteNotificationMessage DeletionNotificati onMessage { String arn; NotificationType type; } arn - The ARN of the scheduled query that is being updated. type - SCHEDULED _QUERY_UPDATE state - ENABLED or DISABLED This notification message is sent when a scheduled query has been deleted. arn - The ARN of the scheduled query
4xx errors encounter ed when communicating with KMS when a customer managed KMS key is specified. • Any 4xx errors encounter ed during running of the scheduled query. • Any 4xx errors encountered during ingestion of query results Notification messages 351 Amazon Timestream Developer Guide Notification message name Structure Description DeleteNotificationMessage DeletionNotificati onMessage { String arn; NotificationType type; } arn - The ARN of the scheduled query that is being updated. type - SCHEDULED _QUERY_UPDATE state - ENABLED or DISABLED This notification message is sent when a scheduled query has been deleted. arn - The ARN of the scheduled query that is being created. type - SCHEDULED _QUERY_DELETED Notification messages 352 Amazon Timestream Developer Guide Notification message name Structure Description SuccessNotificationMessage SuccessNotificatio nMessage { This notification message is sent after the scheduled NotificationType query is run and the results type; String arn; Date nextInvoc ationEpochSecond; ScheduledQueryRunS ummary runSummary; } ScheduledQueryRunSumm ary { Date invocatio nTime; Date triggerTime; String runStatus; ExecutionStats executionstats; ErrorReportLocatio n errorReportLocatio n; are successfully ingested. ARN - The ARN of the scheduled query that is being deleted. NotificationType - AUTO_TRIGGER_SUCCESS or MANUAL_TRIGGER_SUCCESS. nextInvocationEpochSecond - The next time Timestream for LiveAnalytics will run the scheduled query. runSummary - Information about the scheduled query String failureRe run. ason; } ExecutionStats { Long bytesMetered; Long dataWrites; Long queryResu ltRows; Long recordsIn gested; Long execution TimeInMillis; } ErrorReportLocation { Notification messages 353 Amazon Timestream Developer Guide Notification message name Structure Description S3ReportLocation s3ReportLocation; } S3ReportLocation { String bucketName; String objectKey; } Notification messages 354 Amazon Timestream Developer Guide Notification message name Structure Description FailureNotificationMessage FailureNotificatio nMessage { This notification message is sent when a failure is NotificationType encountered during a type; String arn; ScheduledQueryRunS ummary runSummary; } scheduled query run or when ingesting the query results. arn - The ARN of the scheduled query that is being ScheduledQueryRunSumm run. type - AUTO_TRIGGER_FAILU RE or MANUAL_TRIGGER_FAI LURE. runSummary - Information about the scheduled query run. ary { Date invocatio nTime; Date triggerTime; String runStatus; ExecutionStats executionstats; ErrorReportLocatio n errorReportLocatio n; String failureRe ason; } ExecutionStats { Long bytesMetered; Long dataWrites; Long queryResu ltRows; Long recordsIn gested; Long execution TimeInMillis; } ErrorReportLocation { S3ReportLocation s3ReportLocation; } Notification messages 355 Amazon Timestream Developer Guide Notification message name Structure Description S3ReportLocation { String bucketName; String objectKey; } Scheduled query error reports This section describes the location, format, and reasons for error reports generated by Timestream for LiveAnalytics when errors are encountered by running scheduled queries. Topics • Scheduled query error reports reasons • Scheduled query error reports location • Scheduled query error reports format • Scheduled query error types • Scheduled query error reports example Scheduled query error reports reasons Error reports are generated for recoverable errors. 
Error reports are not generated for non-recoverable errors. Timestream for LiveAnalytics can disable a scheduled query automatically when non-recoverable errors are encountered. These include:
• AssumeRole failure
• Any 4xx errors encountered when communicating with KMS when a customer-managed KMS key is specified
• Any 4xx errors encountered when a scheduled query runs
• Any 4xx errors encountered during ingestion of query results

For non-recoverable errors, Timestream for LiveAnalytics sends a failure notification with a non-recoverable error message. An update notification is also sent, indicating that the scheduled query is disabled.

Scheduled query error reports location

A scheduled query error report location has the following naming convention:

s3://customer-bucket/customer-prefix/

Following is an example scheduled query ARN:

arn:aws:timestream:us-east-1:000000000000:scheduled-query/test-query-hd734tegrgfd

s3://customer-bucket/customer-prefix/test-query-hd734tegrgfd/<InvocationTime>/<Auto or Manual>/<Actual Trigger Time>

Auto indicates scheduled queries automatically scheduled by Timestream for LiveAnalytics, and Manual indicates scheduled queries manually triggered by a user through the ExecuteScheduledQuery API action in Amazon Timestream for LiveAnalytics Query. For more information, see ExecuteScheduledQuery.

Scheduled query error reports format

The error reports have the following JSON format:

{
    "reportId": <String>, // A unique string ID for all error reports belonging to a particular scheduled query run
    "errors": [ <Error>, ... ], // One or more errors
}

Scheduled query error types

The Error object can be one of three types:

• Records Ingestion Errors

{
    "reason": <String>, // The error message String
    "records": [ <Record>, ... ], // One or more rejected records
Error reports 357 Amazon Timestream } • Row Parse and Validation Errors { Developer Guide "reason": <String>, // The error message String "rawLine": <String>, // [Optional] The raw line String that is being parsed into record(s) to be ingested. This line has encountered the above-mentioned parse error. } • General Errors { "reason": <String>, // The error message } Scheduled query error reports example The following is an example of an error report that was produced due to ingestion errors. { "reportId": "C9494AABE012D1FBC162A67EA2C18255", "errors": [ { "reason": "The record timestamp is outside the time range [2021-11-12T14:18:13.354Z, 2021-11-12T16:58:13.354Z) of the memory store.", "records": [ { "dimensions": [ { "name": "dim0", "value": "d0_1", "dimensionValueType": null }, { "name": "dim1", "value": "d1_1", "dimensionValueType": null } ], "measureName": "random_measure_value", Error reports 358 Amazon Timestream Developer Guide "measureValue": "3.141592653589793", "measureValues": null, "measureValueType": "DOUBLE", "time": "1637166175635000000", "timeUnit": "NANOSECONDS", "version": null }, { "dimensions": [ { "name": "dim0", "value": "d0_2", "dimensionValueType": null }, { "name": "dim1", "value": "d1_2", "dimensionValueType": null } ], "measureName": "random_measure_value", "measureValue": "6.283185307179586", "measureValues": null, "measureValueType": "DOUBLE", "time": "1637166175636000000", "timeUnit": "NANOSECONDS", "version": null }, { "dimensions": [ { "name": "dim0", "value": "d0_3", "dimensionValueType": null }, { "name": "dim1", "value": "d1_3", "dimensionValueType": null } ], "measureName": "random_measure_value", "measureValue": "9.42477796076938", "measureValues": null, Error reports 359 Amazon Timestream Developer Guide "measureValueType": "DOUBLE", "time": "1637166175637000000", "timeUnit": "NANOSECONDS", "version": null }, { "dimensions": [ { "name": "dim0", "value": "d0_4", "dimensionValueType": null }, { "name": "dim1", "value": "d1_4", "dimensionValueType": null } ], "measureName": "random_measure_value", "measureValue": "12.566370614359172", "measureValues": null, "measureValueType": "DOUBLE", "time": "1637166175638000000", "timeUnit": "NANOSECONDS", "version": null } ] } ] } Scheduled query patterns and examples This section describes the usage patterns for scheduled queries as well as end-to-end examples. Topics • Scheduled queries sample schema • Scheduled query patterns • Scheduled query examples Patterns and examples 360 Amazon Timestream Developer Guide Scheduled queries sample schema In this example we will use a sample application mimicking a DevOps scenario monitoring metrics from a large fleet of servers. Users want to alert on anomalous resource usage, create dashboards on aggregate fleet behavior and utilization, and perform sophisticated analysis on recent and historical data to find correlations. The following diagram provides an illustration of the setup where a set of monitored instances emit metrics to Timestream for LiveAnalytics. Another set of concurrent users issues queries for alerts, dashboards, or ad-hoc analysis, where queries and ingestion run in parallel. The application being monitored is modeled as a highly scaled-out service that is deployed in several regions across the globe. Each region is further subdivided into a number of scaling units called cells that have a level of isolation in terms of infrastructure within the region. Each cell is further subdivided into silos, which represent a level of software isolation. 
Each silo has five Patterns and examples 361 Amazon Timestream Developer Guide microservices that comprise one isolated instance of the service. Each microservice has several servers with different instance types and OS versions, which are deployed across three availability zones. These attributes that identify the servers emitting the metrics are modeled as dimensions in Timestream for LiveAnalytics. In this architecture, we have a hierarchy of dimensions (such as region, cell, silo, and microservice_name) and other dimensions that cut across the hierarchy (such as instance_type and availability_zone). The application emits a variety of metrics (such as cpu_user and memory_free) and events (such as task_completed and gc_reclaimed). Each metric or event is associated with eight dimensions (such as region or cell) that uniquely identify the server emitting it. Data is written with the 20 metrics stored together in a multi-measure record with measure name metrics and all the 5 events are stored together in another multi-measure record with measure name events. The data model, schema, and data generation can be found in the open-sourced data generator. In addition to the schema and data distributions, the data generator provides an example of using multiple writers to ingest data in parallel, using the ingestion scaling of Timestream for LiveAnalytics to ingest millions of measurements per second. Below we show the schema (table and measure schema) and some sample data from the data set. Topics • Multi-measure records • Single-measure records Multi-measure records Table Schema Below is the table schema once the data is ingested using multi-measure records. It is the output of DESCRIBE query. Assuming the data is ingested into a database raw_data and table devops, below is the query. DESCRIBE "raw_data"."devops" Column Type Timestream for LiveAnalytics attribute type availability_zone varchar DIMENSION Patterns and examples 362 Amazon Timestream Column microservice_name instance_name process_name os_version jdk_version cell region silo instance_type measure_name Type varchar varchar varchar varchar varchar varchar varchar varchar varchar varchar Developer Guide Timestream for LiveAnalytics attribute type DIMENSION DIMENSION DIMENSION DIMENSION DIMENSION DIMENSION DIMENSION DIMENSION DIMENSION MEASURE_NAME time timestamp TIMESTAMP memory_free cpu_steal cpu_iowait cpu_user memory_cached disk_io_reads cpu_hi latency_per_read
schema once the data is ingested using multi-measure records. It is the output of DESCRIBE query. Assuming the data is ingested into a database raw_data and table devops, below is the query. DESCRIBE "raw_data"."devops" Column Type Timestream for LiveAnalytics attribute type availability_zone varchar DIMENSION Patterns and examples 362 Amazon Timestream Column microservice_name instance_name process_name os_version jdk_version cell region silo instance_type measure_name Type varchar varchar varchar varchar varchar varchar varchar varchar varchar varchar Developer Guide Timestream for LiveAnalytics attribute type DIMENSION DIMENSION DIMENSION DIMENSION DIMENSION DIMENSION DIMENSION DIMENSION DIMENSION MEASURE_NAME time timestamp TIMESTAMP memory_free cpu_steal cpu_iowait cpu_user memory_cached disk_io_reads cpu_hi latency_per_read network_bytes_out double double double double double double double double double MULTI MULTI MULTI MULTI MULTI MULTI MULTI MULTI MULTI Patterns and examples 363 Amazon Timestream Column cpu_idle disk_free memory_used cpu_system Type double double double double file_descriptors_in_use double disk_used cpu_nice disk_io_writes cpu_si latency_per_write network_bytes_in task_end_state gc_pause task_completed gc_reclaimed Measure Schema double double double double double double varchar double bigint double Developer Guide Timestream for LiveAnalytics attribute type MULTI MULTI MULTI MULTI MULTI MULTI MULTI MULTI MULTI MULTI MULTI MULTI MULTI MULTI MULTI Below is the measure schema returned by the SHOW MEASURES query. SHOW MEASURES FROM "raw_data"."devops" Patterns and examples 364 Amazon Timestream Developer Guide measure_name data_type Dimensions events multi metrics multi [{"data_type":"varchar","di mension_name":"availability _zone"},{"data_type":"varch ar","dimension_name":"micro service_name"},{"data_type" :"varchar","dimension_name" :"instance_name"},{"data_ty pe":"varchar","dimension_na me":"process_name"},{"data_ type":"varchar","dimension_ name":"jdk_version"},{"data _type":"varchar","dimension _name":"cell"},{"data_type" :"varchar","dimension_name" :"region"},{"data_type":"va rchar","dimension_name":"si lo"}] [{"data_type":"varchar","di mension_name":"availability _zone"},{"data_type":"varch ar","dimension_name":"micro service_name"},{"data_type" :"varchar","dimension_name" :"instance_name"},{"data_ty pe":"varchar","dimension_na me":"os_version"},{"data_ty pe":"varchar","dimension_na me":"cell"},{"data_type":"v archar","dimension_name":"r egion"},{"data_type":"varch ar","dimension_name":"silo" },{"data_type":"varchar","d Patterns and examples 365 Amazon Timestream Developer Guide measure_name data_type Dimensions imension_name":"instance_ty pe"}] Example Data regionCellSiloavailabil ity_zone process_n measure_n instance_ jdk_versi os_versio microserv instance_ ame on ame n ice_name type name cpu_idle cpu_syste Timecpu_user m cpu_hi cpu_nice cpu_iowai t disk_io_w latency_p memory_ca latency_p disk_io_r cpu_simemory_us er_write rites er_read eads ched gc_pause gc_reclai task_comp task_end_ file_desc network_b memory_fr network_b disk_free med leted state ee riptors_i ytes_out ytes_in disk_used cpu_steal ed n_use AL2012 m5.8xlarg athena i- us- us- us- us- e zaZswmJ east-2 east-2 east-2 east-2 k- -1 - - 1 12:43 metrics11/12/202 62.80.40834.20.9720.08770.1030.5670.84457.688.952.691.931.72.2563.529.285.349.832.357.6 cell-1 cell-1- athena- s us- ilo-2 east-2 - cell-1- s ilo-2-000 00216.ama zonaws.co m metrics11/12/202 
560.92339.90.7990.5320.6550.8510.31790.531.956.637.12593.352.233.17.1453.765.920.4 AL2012 m5.8xlarg athena i- us- us- us- us- e zaZswmJ east-2 east-2 east-2 east-2 k- -1 - - athena- cell-1- cell-1 us- s east-2 ilo-2 - 1 12:41 cell-1- s ilo-2-000 00216.ama Patterns and examples 366 Amazon Timestream Developer Guide regionCellSiloavailabil ity_zone process_n measure_n instance_ jdk_versi os_versio microserv instance_ ame on ame n ice_name type name cpu_idle cpu_syste Timecpu_user m cpu_hi cpu_nice cpu_iowai t disk_io_w latency_p memory_ca latency_p disk_io_r cpu_simemory_us er_write rites er_read eads ched gc_pause gc_reclai task_comp task_end_ file_desc network_b memory_fr network_b disk_free med leted state ee riptors_i ytes_out ytes_in disk_used cpu_steal ed zonaws.co m n_use AL2012 m5.8xlarg i- athena us- us- us- us- e zaZswmJ east-2 east-2 east-2 east-2 1 12:39 metrics11/12/202 48.50.80148.20.180.9430.03160.8440.5497.441.455.132.786.233.772.761.580.85.1544.38.5 AL2012 m5.8xlarg i- athena us- us- us- us- e zaZswmJ east-2 east-2 east-2 east-2 1 12:38 metrics11/12/202 37.50.72358.80.3170.6080.8590.7910.3934.8478.920.341.446.83.8784.660.621.111.82.7610 - - -1 k- cell-1 cell-1- athena- s us- ilo-2 east-2 - cell-1- s ilo-2-000 00216.ama zonaws.co m - - -1 k- cell-1 cell-1- athena- s us- ilo-2 east-2 - cell-1- s ilo-2-000 00216.ama zonaws.co m Patterns and examples 367 Amazon Timestream Developer Guide regionCellSiloavailabil ity_zone process_n measure_n instance_ jdk_versi os_versio microserv instance_ ame on ame n ice_name type name cpu_idle cpu_syste Timecpu_user m cpu_hi cpu_nice cpu_iowai t disk_io_w latency_p memory_ca latency_p disk_io_r cpu_simemory_us er_write rites er_read eads ched gc_pause gc_reclai task_comp task_end_ file_desc network_b memory_fr network_b disk_free med leted state ee riptors_i ytes_out ytes_in disk_used cpu_steal ed n_use AL2012 m5.8xlarg i- athena us- us- us- us- e zaZswmJ east-2 east-2 east-2 east-2 1 12:36 metrics11/12/202 580.78638.70.2190.4360.8290.3310.7345136.881.850.577.917.882.3647.6966.556.231.3 - - -1 k- cell-1 cell-1- athena- s us- ilo-2 east-2 - cell-1- s ilo-2-000 00216.ama zonaws.co m 85.534864.8 75.8SUCCESS_W ITH_NO_RE SULT i- athena us- us- us- us- zaZswmJ east-2 east-2 east-2 east-2 ger host_mana JDK_8events11/12/202 1 12:43 - - -1 k- cell-1 cell-1- athena- s us- ilo-2 east-2 - cell-1- s ilo-2-000 00216.ama zonaws.co m Patterns and examples 368 Amazon Timestream Developer Guide regionCellSiloavailabil ity_zone process_n measure_n instance_ jdk_versi os_versio microserv instance_ ame on ame n ice_name type name cpu_idle cpu_syste Timecpu_user m cpu_hi cpu_nice cpu_iowai t disk_io_w latency_p memory_ca latency_p disk_io_r cpu_simemory_us er_write rites er_read eads ched gc_pause gc_reclai task_comp task_end_ file_desc network_b memory_fr network_b disk_free med leted state ee riptors_i ytes_out ytes_in disk_used cpu_steal ed n_use 22.8427.45 7.47SUCCESS_W ITH_RESUL T 6.7724972.3 64.1SUCCESS_W ITH_RESUL T i- athena us- us- us- us- zaZswmJ east-2 east-2 east-2 east-2 ger host_mana JDK_8events11/12/202 1 12:41 - - -1 k- cell-1 cell-1- athena- s us- ilo-2 east-2 - cell-1- s ilo-2-000 00216.ama zonaws.co m i- athena us- us- us- us- zaZswmJ east-2 east-2 east-2 east-2 ger host_mana JDK_8events11/12/202 1 12:39 - - -1 k- cell-1 cell-1- athena- s us- ilo-2 east-2 - cell-1- s ilo-2-000 00216.ama zonaws.co m Patterns and examples 369
gc_pause gc_reclai task_comp task_end_ file_desc network_b memory_fr network_b disk_free med leted state ee riptors_i ytes_out ytes_in disk_used cpu_steal ed n_use 22.8427.45 7.47SUCCESS_W ITH_RESUL T 6.7724972.3 64.1SUCCESS_W ITH_RESUL T i- athena us- us- us- us- zaZswmJ east-2 east-2 east-2 east-2 ger host_mana JDK_8events11/12/202 1 12:41 - - -1 k- cell-1 cell-1- athena- s us- ilo-2 east-2 - cell-1- s ilo-2-000 00216.ama zonaws.co m i- athena us- us- us- us- zaZswmJ east-2 east-2 east-2 east-2 ger host_mana JDK_8events11/12/202 1 12:39 - - -1 k- cell-1 cell-1- athena- s us- ilo-2 east-2 - cell-1- s ilo-2-000 00216.ama zonaws.co m Patterns and examples 369 Amazon Timestream Developer Guide regionCellSiloavailabil ity_zone process_n measure_n instance_ jdk_versi os_versio microserv instance_ ame on ame n ice_name type name cpu_idle cpu_syste Timecpu_user m cpu_hi cpu_nice cpu_iowai t disk_io_w latency_p memory_ca latency_p disk_io_r cpu_simemory_us er_write rites er_read eads ched gc_pause gc_reclai task_comp task_end_ file_desc network_b memory_fr network_b disk_free med leted state ee riptors_i ytes_out ytes_in disk_used cpu_steal ed n_use 23SUCCESS_W 53.313899 ITH_RESUL T 79.625482.9 39.4SUCCESS_W ITH_NO_RE SULT i- athena us- us- us- us- zaZswmJ east-2 east-2 east-2 east-2 ger host_mana JDK_8events11/12/202 1 12:38 - - -1 k- cell-1 cell-1- athena- s us- ilo-2 east-2 - cell-1- s ilo-2-000 00216.ama zonaws.co m host_mana JDK_8events11/12/202 1 12:36 i- athena us- us- us- us- zaZswmJ east-2 east-2 east-2 east-2 ger - - -1 k- cell-1 cell-1- athena- s us- ilo-2 east-2 - cell-1- s ilo-2-000 00216.ama zonaws.co m Single-measure records Timestream for LiveAnalytics also allows you to ingest the data with one measure per time series record. Below are the schema details when ingested using single measure records. Patterns and examples 370 Amazon Timestream Table Schema Developer Guide Below is the table schema once the data is ingested using multi-measure records. It is the output of DESCRIBE query. Assuming the data is ingested into a database raw_data and table devops, below is the query. DESCRIBE "raw_data"."devops_single" Column availability_zone microservice_name instance_name process_name os_version jdk_version cell region silo instance_type measure_name Type varchar varchar varchar varchar varchar varchar varchar varchar varchar varchar varchar Timestream for LiveAnalytics attribute type DIMENSION DIMENSION DIMENSION DIMENSION DIMENSION DIMENSION DIMENSION DIMENSION DIMENSION DIMENSION MEASURE_NAME time timestamp TIMESTAMP measure_value::double double MEASURE_VALUE measure_value::bigint bigint MEASURE_VALUE measure_value::varchar varchar MEASURE_VALUE Patterns and examples 371 Amazon Timestream Measure Schema Developer Guide Below is the measure schema returned by the SHOW MEASURES query. 
SHOW MEASURES FROM "raw_data"."devops_single" measure_name data_type Dimensions cpu_hi double cpu_idle double [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar' }, {'dimension_name': 'os_versi on', 'data_type': 'varchar'}, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}, {'dimension_name': 'instance _type', 'data_type': 'varchar'}] [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar' }, {'dimension_name': 'os_versi on', 'data_type': 'varchar'}, {'dimension_name': 'cell', 'data_type': 'varchar'}, Patterns and examples 372 Amazon Timestream Developer Guide measure_name data_type Dimensions cpu_iowait double {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}, {'dimension_name': 'instance _type', 'data_type': 'varchar'}] [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar' }, {'dimension_name': 'os_versi on', 'data_type': 'varchar'}, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}, {'dimension_name': 'instance _type', 'data_type': 'varchar'}] Patterns and examples 373 Amazon Timestream Developer Guide measure_name data_type Dimensions cpu_nice double [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar' }, {'dimension_name': 'os_versi on', 'data_type': 'varchar'}, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}, {'dimension_name': 'instance _type', 'data_type': 'varchar'}] Patterns and examples 374 Amazon Timestream Developer Guide measure_name data_type Dimensions cpu_si double [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar' }, {'dimension_name': 'os_versi on', 'data_type': 'varchar'}, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}, {'dimension_name': 'instance _type', 'data_type': 'varchar'}] Patterns and examples 375 Amazon Timestream Developer Guide measure_name data_type Dimensions cpu_steal double [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar' }, {'dimension_name': 'os_versi on', 'data_type': 'varchar'}, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}, {'dimension_name': 'instance _type', 'data_type': 'varchar'}] Patterns and examples 376 Amazon Timestream Developer Guide measure_name data_type Dimensions cpu_system double [{'dimension_name': 'availabi lity_zone', 
'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar' }, {'dimension_name': 'os_versi on', 'data_type': 'varchar'}, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}, {'dimension_name': 'instance _type', 'data_type': 'varchar'}] Patterns and examples 377 Amazon Timestream Developer Guide measure_name data_type Dimensions cpu_user double [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar' }, {'dimension_name': 'os_versi on', 'data_type': 'varchar'}, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}, {'dimension_name': 'instance _type', 'data_type': 'varchar'}] Patterns and examples 378 Amazon Timestream Developer Guide measure_name data_type Dimensions disk_free double [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar' }, {'dimension_name':
'os_versi on', 'data_type': 'varchar'}, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}, {'dimension_name': 'instance _type', 'data_type': 'varchar'}] Patterns and examples 377 Amazon Timestream Developer Guide measure_name data_type Dimensions cpu_user double [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar' }, {'dimension_name': 'os_versi on', 'data_type': 'varchar'}, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}, {'dimension_name': 'instance _type', 'data_type': 'varchar'}] Patterns and examples 378 Amazon Timestream Developer Guide measure_name data_type Dimensions disk_free double [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar' }, {'dimension_name': 'os_versi on', 'data_type': 'varchar'}, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}, {'dimension_name': 'instance _type', 'data_type': 'varchar'}] Patterns and examples 379 Amazon Timestream Developer Guide measure_name data_type Dimensions disk_io_reads double [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar' }, {'dimension_name': 'os_versi on', 'data_type': 'varchar'}, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}, {'dimension_name': 'instance _type', 'data_type': 'varchar'}] Patterns and examples 380 Amazon Timestream Developer Guide measure_name data_type Dimensions disk_io_writes double [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar' }, {'dimension_name': 'os_versi on', 'data_type': 'varchar'}, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}, {'dimension_name': 'instance _type', 'data_type': 'varchar'}] Patterns and examples 381 Amazon Timestream Developer Guide measure_name data_type Dimensions disk_used double [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar' }, {'dimension_name': 'os_versi on', 'data_type': 'varchar'}, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}, {'dimension_name': 'instance _type', 'data_type': 'varchar'}] Patterns and examples 382 Amazon Timestream Developer Guide measure_name data_type Dimensions file_descriptors_in_use double [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar' }, {'dimension_name': 'os_versi on', 
'data_type': 'varchar'}, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}, {'dimension_name': 'instance _type', 'data_type': 'varchar'}] Patterns and examples 383 Amazon Timestream Developer Guide measure_name data_type Dimensions gc_pause double [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar'}, {'dimension_name': 'process_ name', 'data_type': 'varchar'}, {'dimension_name': 'jdk_vers ion', 'data_type': 'varchar' }, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}] Patterns and examples 384 Amazon Timestream Developer Guide measure_name data_type Dimensions gc_reclaimed double [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar'}, {'dimension_name': 'process_ name', 'data_type': 'varchar'}, {'dimension_name': 'jdk_vers ion', 'data_type': 'varchar' }, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}] Patterns and examples 385 Amazon Timestream Developer Guide measure_name data_type Dimensions latency_per_read double [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar' }, {'dimension_name': 'os_versi on', 'data_type': 'varchar'}, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}, {'dimension_name': 'instance _type', 'data_type': 'varchar'}] Patterns and examples 386 Amazon Timestream Developer Guide measure_name data_type Dimensions latency_per_write double [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar' }, {'dimension_name': 'os_versi on', 'data_type': 'varchar'}, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}, {'dimension_name': 'instance _type', 'data_type': 'varchar'}] Patterns and examples 387 Amazon Timestream Developer Guide measure_name data_type Dimensions memory_cached double [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar' }, {'dimension_name': 'os_versi on', 'data_type': 'varchar'}, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}, {'dimension_name': 'instance _type', 'data_type': 'varchar'}] Patterns and examples 388 Amazon Timestream Developer Guide measure_name data_type Dimensions memory_free double [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar'}, {'dimension_name': 'process_ name', 'data_type': 
'varchar'}, {'dimension_name': 'os_versi on', 'data_type': 'varchar'}, {'dimension_name': 'jdk_vers ion', 'data_type': 'varchar' }, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}, {'dimension_name': 'instance _type', 'data_type': 'varchar'}] Patterns and examples 389 Amazon Timestream Developer Guide measure_name data_type Dimensions memory_used double [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar' }, {'dimension_name': 'os_versi on', 'data_type': 'varchar'}, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}, {'dimension_name': 'instance _type', 'data_type': 'varchar'}] Patterns and examples 390 Amazon Timestream Developer Guide measure_name data_type Dimensions network_bytes_in double [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar' }, {'dimension_name': 'os_versi on', 'data_type': 'varchar'}, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}, {'dimension_name': 'instance _type', 'data_type': 'varchar'}] Patterns and examples 391 Amazon Timestream Developer Guide measure_name data_type Dimensions network_bytes_out double [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar' }, {'dimension_name': 'os_versi on', 'data_type': 'varchar'}, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}, {'dimension_name': 'instance _type', 'data_type': 'varchar'}] Patterns and examples 392 Amazon Timestream Developer Guide measure_name data_type Dimensions task_completed bigint [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name':
'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar' }, {'dimension_name': 'os_versi on', 'data_type': 'varchar'}, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}, {'dimension_name': 'instance _type', 'data_type': 'varchar'}] Patterns and examples 391 Amazon Timestream Developer Guide measure_name data_type Dimensions network_bytes_out double [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar' }, {'dimension_name': 'os_versi on', 'data_type': 'varchar'}, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}, {'dimension_name': 'instance _type', 'data_type': 'varchar'}] Patterns and examples 392 Amazon Timestream Developer Guide measure_name data_type Dimensions task_completed bigint [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar'}, {'dimension_name': 'process_ name', 'data_type': 'varchar'}, {'dimension_name': 'jdk_vers ion', 'data_type': 'varchar' }, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}] Patterns and examples 393 Amazon Timestream Developer Guide measure_name data_type Dimensions task_end_state varchar Example Data [{'dimension_name': 'availabi lity_zone', 'data_type': 'varchar'}, {'dimension_name': 'microservice_name', 'data_type': 'varchar'}, {'dimension_name': 'instance _name', 'data_type': 'varchar'}, {'dimension_name': 'process_ name', 'data_type': 'varchar'}, {'dimension_name': 'jdk_vers ion', 'data_type': 'varchar' }, {'dimension_name': 'cell', 'data_type': 'varchar'}, {'dimension_name': 'region', 'data_type': 'varchar'}, {'dimension_name': 'silo', 'data_type': 'varchar'}] availabil ity_zone microserv ice_name process_n instance_ ame name Cell jdk_versi os_versio on n regionSilo measure_n instance_ ame type Time measure_v measure_v alue::big measure_v alue::var alue::dou ble int char cpu_hi34:57.20.87169 eu- west-1 - cell-9 r5.4xlarg eu- eu- e west-1 west-1 - cell-9- s ilo-2 herculesi- AL2012 eu- west-1 -1 zaZswmJ k- hercule s- eu- west -1- cell-9 - Patterns and examples 394 Amazon Timestream Developer Guide availabil ity_zone microserv ice_name instance_ process_n ame name Cell jdk_versi os_versio on n regionSilo instance_ measure_n ame type Time measure_v measure_v alue::big measure_v alue::var alue::dou silo-2-0 0000027.a mazonaws. com ble int char AL2012 eu- west-1 eu- west-1 r5.4xlarg eu- e west-1 cpu_idle34:57.23.46266 - cell-9 - cell-9- s ilo-2 herculesi- eu- west-1 -1 zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. 
com Patterns and examples 395 Amazon Timestream Developer Guide availabil ity_zone microserv ice_name instance_ process_n ame name Cell jdk_versi os_versio on n regionSilo instance_ measure_n ame type Time measure_v measure_v alue::big measure_v alue::var alue::dou ble int char AL2012 AL2012 eu- west-1 eu- west-1 r5.4xlarg eu- e west-1 cpu_iowai t 34:57.20.10226 - cell-9 - cell-9- s ilo-2 eu- west-1 eu- west-1 r5.4xlarg eu- e west-1 cpu_nice34:57.20.63013 - cell-9 - cell-9- s ilo-2 herculesi- eu- west-1 -1 zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com herculesi- eu- west-1 -1 zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com Patterns and examples 396 Amazon Timestream Developer Guide availabil ity_zone microserv ice_name instance_ process_n ame name Cell jdk_versi os_versio on n regionSilo instance_ measure_n ame type Time measure_v measure_v alue::big measure_v alue::var alue::dou ble int char AL2012 AL2012 eu- west-1 eu- west-1 r5.4xlarg eu- e west-1 cpu_si 34:57.20.16441 - cell-9 - cell-9- s ilo-2 eu- west-1 eu- west-1 r5.4xlarg eu- e west-1 cpu_steal34:57.20.10729 - cell-9 - cell-9- s ilo-2 herculesi- eu- west-1 -1 zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com herculesi- eu- west-1 -1 zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com Patterns and examples 397 Amazon Timestream Developer Guide availabil ity_zone microserv ice_name instance_ process_n ame name Cell jdk_versi os_versio on n regionSilo instance_ measure_n ame type Time measure_v measure_v alue::big measure_v alue::var alue::dou ble int char AL2012 AL2012 eu- west-1 eu- west-1 cpu_syste r5.4xlarg eu- m e west-1 34:57.20.45709 - cell-9 - cell-9- s ilo-2 eu- west-1 eu- west-1 r5.4xlarg eu- e west-1 cpu_user34:57.294.20448 - cell-9 - cell-9- s ilo-2 herculesi- eu- west-1 -1 zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com herculesi- eu- west-1 -1 zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com Patterns and examples 398 Amazon Timestream Developer Guide availabil ity_zone microserv ice_name instance_ process_n ame name Cell jdk_versi os_versio on n regionSilo instance_ measure_n ame type Time measure_v measure_v alue::big measure_v alue::var alue::dou AL2012 AL2012 ble int char eu- west-1 eu- west-1 r5.4xlarg eu- e west-1 disk_free34:57.272.51895 - cell-9 - cell-9- s ilo-2 eu- west-1 eu- west-1 r5.4xlarg eu- e west-1 disk_io_r eads 34:57.281.73383 - cell-9 - cell-9- s ilo-2 herculesi- eu- west-1 -1 zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com herculesi- eu- west-1 -1 zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com Patterns and examples 399 Amazon Timestream Developer Guide availabil ity_zone microserv ice_name instance_ process_n ame name Cell jdk_versi os_versio on n regionSilo instance_ measure_n ame type Time measure_v measure_v alue::big measure_v alue::var alue::dou AL2012 AL2012 ble int char eu- west-1 eu- west-1 r5.4xlarg eu- e west-1 disk_io_w rites 34:57.277.11665 - cell-9 - cell-9- s ilo-2 eu- west-1 eu- west-1 r5.4xlarg eu- e west-1 disk_used34:57.289.42235 - cell-9 - cell-9- s ilo-2 herculesi- eu-
hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com herculesi- eu- west-1 -1 zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com Patterns and examples 399 Amazon Timestream Developer Guide availabil ity_zone microserv ice_name instance_ process_n ame name Cell jdk_versi os_versio on n regionSilo instance_ measure_n ame type Time measure_v measure_v alue::big measure_v alue::var alue::dou AL2012 AL2012 ble int char eu- west-1 eu- west-1 r5.4xlarg eu- e west-1 disk_io_w rites 34:57.277.11665 - cell-9 - cell-9- s ilo-2 eu- west-1 eu- west-1 r5.4xlarg eu- e west-1 disk_used34:57.289.42235 - cell-9 - cell-9- s ilo-2 herculesi- eu- west-1 -1 zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com herculesi- eu- west-1 -1 zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com Patterns and examples 400 Amazon Timestream Developer Guide availabil ity_zone microserv ice_name instance_ process_n ame name Cell jdk_versi os_versio on n regionSilo instance_ measure_n ame type Time measure_v measure_v alue::big measure_v alue::var alue::dou AL2012 eu- west-1 eu- west-1 r5.4xlarg eu- e west-1 file_desc riptors_i 34:57.230.08254 ble int char n_use - cell-9 - cell-9- s ilo-2 herculesi- eu- west-1 -1 zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com herculesi- server JDK_8 eu- gc_pause34:57.260.28679 eu- west-1 eu- west-1 west-1 - cell-9 - cell-9- s ilo-2 eu- west-1 -1 zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com Patterns and examples 401 Amazon Timestream Developer Guide availabil ity_zone microserv ice_name instance_ process_n ame name Cell jdk_versi os_versio on n regionSilo instance_ measure_n ame type Time measure_v measure_v alue::big measure_v alue::var alue::dou herculesi- server JDK_8 eu- eu- west-1 eu- west-1 west-1 eu- west-1 -1 ble int char 34:57.275.28839 gc_reclai med zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com - cell-9 - cell-9- s ilo-2 AL2012 eu- west-1 eu- west-1 r5.4xlarg eu- e west-1 latency_p er_read 34:57.28.07605 - cell-9 - cell-9- s ilo-2 herculesi- eu- west-1 -1 zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com Patterns and examples 402 Amazon Timestream Developer Guide availabil ity_zone microserv ice_name instance_ process_n ame name Cell jdk_versi os_versio on n regionSilo instance_ measure_n ame type Time measure_v measure_v alue::big measure_v alue::var alue::dou ble int char AL2012 AL2012 eu- west-1 eu- west-1 r5.4xlarg eu- e west-1 latency_p er_write 34:57.258.11223 - cell-9 - cell-9- s ilo-2 eu- west-1 eu- west-1 r5.4xlarg eu- e west-1 memory_ca ched 34:57.287.56481 - cell-9 - cell-9- s ilo-2 herculesi- eu- west-1 -1 zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com herculesi- eu- west-1 -1 zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com Patterns and examples 403 Amazon Timestream Developer Guide availabil ity_zone microserv ice_name instance_ process_n ame name Cell jdk_versi os_versio on n regionSilo instance_ measure_n ame type Time measure_v measure_v alue::big measure_v alue::var alue::dou herculesi- server JDK_8 eu- eu- west-1 eu- west-1 west-1 eu- west-1 -1 ble int char 34:57.218.95768 memory_fr ee zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. 
com - cell-9 - cell-9- s ilo-2 AL2012 eu- west-1 eu- west-1 r5.4xlarg memory_fr eu- ee e west-1 34:57.297.20523 - cell-9 - cell-9- s ilo-2 herculesi- eu- west-1 -1 zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com Patterns and examples 404 Amazon Timestream Developer Guide availabil ity_zone microserv ice_name instance_ process_n ame name Cell jdk_versi os_versio on n regionSilo instance_ measure_n ame type Time measure_v measure_v alue::big measure_v alue::var alue::dou AL2012 AL2012 ble int char eu- west-1 eu- west-1 memory_us r5.4xlarg eu- ed e west-1 34:57.212.37723 - cell-9 - cell-9- s ilo-2 eu- west-1 eu- west-1 r5.4xlarg eu- e west-1 network_b ytes_in 34:57.231.02065 - cell-9 - cell-9- s ilo-2 herculesi- eu- west-1 -1 zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com herculesi- eu- west-1 -1 zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com Patterns and examples 405 Amazon Timestream Developer Guide availabil ity_zone microserv ice_name instance_ process_n ame name Cell jdk_versi os_versio on n regionSilo instance_ measure_n ame type Time measure_v measure_v alue::big measure_v alue::var alue::dou AL2012 eu- west-1 eu- west-1 r5.4xlarg eu- e west-1 network_b ytes_out 34:57.20.51424 ble int char - cell-9 - cell-9- s ilo-2 herculesi- eu- west-1 -1 zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com herculesi- server JDK_8 eu- eu- west-1 eu- west-1 west-1 34:57.2 task_comp leted 69 eu- west-1 -1 - cell-9 - cell-9- s ilo-2 zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com Patterns and examples 406 Amazon Timestream Developer Guide availabil ity_zone microserv ice_name instance_ process_n ame name Cell jdk_versi os_versio on n regionSilo instance_ measure_n ame type Time measure_v measure_v alue::big measure_v alue::var alue::dou ble int char
network_b ytes_out 34:57.20.51424 ble int char - cell-9 - cell-9- s ilo-2 herculesi- eu- west-1 -1 zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com herculesi- server JDK_8 eu- eu- west-1 eu- west-1 west-1 34:57.2 task_comp leted 69 eu- west-1 -1 - cell-9 - cell-9- s ilo-2 zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com Patterns and examples 406 Amazon Timestream Developer Guide availabil ity_zone microserv ice_name instance_ process_n ame name Cell jdk_versi os_versio on n regionSilo instance_ measure_n ame type Time measure_v measure_v alue::big measure_v alue::var alue::dou ble int char 34:57.2 task_end_ state SUCCESS_W ITH_RESUL T herculesi- server JDK_8 eu- eu- west-1 -1 eu- west-1 eu- west-1 west-1 - cell-9 - cell-9- s ilo-2 zaZswmJ k- hercule s- eu- west -1- cell-9 - silo-2-0 0000027.a mazonaws. com Scheduled query patterns In this section you will find some common patterns of how you can use Amazon Timestream for LiveAnalytics Scheduled Queries to optimize your dashboards to load faster and at reduced costs. The examples below use a DevOps application scenario to illustrate the key concepts which apply to scheduled queries in general, irrespective of the application scenario. Scheduled Queries in Timestream for LiveAnalytics allow you to express your queries using the full SQL surface area of Timestream for LiveAnalytics. Your query can include one or more source tables, perform aggregations or any other query allowed by Timestream for LiveAnalytics's SQL language, and then materialize the results of the query in another destination table in Timestream for LiveAnalytics. For ease of exposition, this section refers to this target table of a scheduled query as a derived table. The following are the key points that are covered in this section. • Using a simple fleet-level aggregate to explain how you can define a scheduled query and understand some basic concepts. Patterns and examples 407 Amazon Timestream Developer Guide • How you can combine results from the target of a scheduled query (the derived table) with the results from the source table to get the cost and performance benefits of scheduled query. • What are your trade-offs when configuring the refresh period of the scheduled queries. • Using scheduled queries for some common scenarios. • Tracking the last data point from every instance before a specific date. • Distinct values for a dimension to use for populating variables in a dashboard. • How you handle late arriving data in the context of scheduled queries. • How you can use one-off manual executions to handle a variety of scenarios not directly covered by automated triggers for scheduled queries. Topics • Scenario • Simple fleet-level aggregates • Last point from each device • Unique dimension values • Handling late-arriving data • Back-filling historical pre-computations Scenario The following examples use a DevOps monitoring scenario which is outlined in Scheduled queries sample schema. The examples provide the scheduled query definition where you can plug in the appropriate configurations for where to receive execution status notifications for scheduled queries, where to receive reports for errors encountered during execution of a scheduled query, and the IAM role the scheduled query uses to perform its operations. You can create these scheduled queries after filling in the preceding options, creating the target (or derived) table, and executing the through the AWS CLI. 
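The destination (derived) table must already exist before you create the scheduled query that writes to it. The following is a minimal sketch of creating such a table with the AWS SDK for Python (Boto3); the database name, table name, and retention values are illustrative assumptions and should be adjusted to match the TargetConfiguration of your scheduled query.

import boto3
from botocore.exceptions import ClientError

# Illustrative names; they must match the TargetConfiguration of the scheduled query.
DERIVED_DATABASE = "derived"
DERIVED_TABLE = "per_minute_aggs_pt5m"

write_client = boto3.client("timestream-write", region_name="us-east-1")

try:
    write_client.create_table(
        DatabaseName=DERIVED_DATABASE,
        TableName=DERIVED_TABLE,
        # Retention values are assumptions; choose values that fit how far back
        # the scheduled query writes and how long you query the derived data.
        RetentionProperties={
            "MemoryStoreRetentionPeriodInHours": 24,
            "MagneticStoreRetentionPeriodInDays": 365,
        },
    )
    print(f"Created table {DERIVED_DATABASE}.{DERIVED_TABLE}")
except write_client.exceptions.ConflictException:
    print("Table already exists.")
except ClientError as err:
    print("Create table failed:", err)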
For example, assume that a scheduled query definition is stored in a file named scheduled_query_example.json. You can create the query using the following CLI command.

aws timestream-query create-scheduled-query --cli-input-json file://scheduled_query_example.json --profile aws_profile --region us-east-1

In the preceding command, the profile passed using the --profile option must have the appropriate permissions to create scheduled queries. See Identity-based policies for Scheduled Queries for detailed instructions on the policies and permissions.

Simple fleet-level aggregates

This first example walks you through some of the basic concepts of working with scheduled queries, using a simple example that computes fleet-level aggregates. With this example, you will learn the following.

• How to take a dashboard query that is used to obtain aggregate statistics and map it to a scheduled query.
• How Timestream for LiveAnalytics manages the execution of the different instances of your scheduled query.
• How you can
have different instances of scheduled queries overlap in time ranges and how the correctness of data is maintained on the target table to ensure that your dashboard using the results of the scheduled query gives you results that match with the same aggregate computed on the raw data. • How to set the time range and refresh cadence for your scheduled query. • How you can self-serve track the results of the scheduled queries to tune them so that the execution latency for the query instances are within the acceptable delays of refreshing your dashboards. Topics • Aggregate from source tables • Scheduled query to pre-compute aggregates • Aggregate from derived table • Aggregate combining source and derived tables • Aggregate from frequently refreshed scheduled computation Aggregate from source tables In this example, you are tracking the number of metrics emitted by the servers within a given region in every minute. The graph below is an example plotting this time series for the region us- east-1. Patterns and examples 409 Amazon Timestream Developer Guide Below is an example query to compute this aggregate from the raw data. It filters the rows for the region us-east-1 and then computes the per minute sum by accounting for the 20 metrics (if measure_name is metrics) or 5 events (if measure_name is events). In this example, the graph illustration shows that the number of metrics emitted vary between 1.5 Million to 6 Million per minute. When plotting this time series for several hours (past 12 hours in this figure), this query over the raw data analyzes hundreds of millions of rows. WITH grouped_data AS ( SELECT region, bin(time, 1m) as minute, SUM(CASE WHEN measure_name = 'metrics' THEN 20 ELSE 5 END) as numDataPoints FROM "raw_data"."devops" WHERE time BETWEEN from_milliseconds(1636699996445) AND from_milliseconds(1636743196445) AND region = 'us-east-1' GROUP BY region, measure_name, bin(time, 1m) ) SELECT minute, SUM(numDataPoints) AS numDataPoints FROM grouped_data GROUP BY minute ORDER BY 1 desc, 2 desc Scheduled query to pre-compute aggregates If you would like to optimize your dashboards to load faster and lower your costs by scanning less data, you can use a scheduled query to pre-compute these aggregates. Scheduled queries in Timestream for LiveAnalytics allows you to materialize these pre-computations in another Timestream for LiveAnalytics table, which you can subsequently use for your dashboards. Patterns and examples 410 Amazon Timestream Developer Guide The first step in creating a scheduled query is to identify the query you want to pre-compute. Note that the preceding dashboard was drawn for region us-east-1. However, a different user may want the same aggregate for a different region, say us-west-2 or eu-west-1. To avoid creating a scheduled query for each such query, you can pre-compute the aggregate for each region and materialize the per-region aggregates in another Timestream for LiveAnalytics table. The query below provides an example of the corresponding pre-computation. As you can see, it is similar to the common table expression grouped_data used in the query on the raw data, except for two differences: 1) it does not use a region predicate, so that we can use one query to pre- compute for all regions; and 2) it uses a parameterized time predicate with a special parameter @scheduled_runtime which is explained in details below. 
SELECT region, bin(time, 1m) as minute, SUM(CASE WHEN measure_name = 'metrics' THEN 20 ELSE 5 END) as numDataPoints FROM raw_data.devops WHERE time BETWEEN @scheduled_runtime - 10m AND @scheduled_runtime + 1m GROUP BY bin(time, 1m), region The preceding query can be converted into a scheduled query using the following specification. The scheduled query is assigned a Name, which is a user-friendly mnemonic. It then includes the QueryString, a ScheduleConfiguration, which is a cron expression. It specifies the TargetConfiguration which maps the query results to the destination table in Timestream for LiveAnalytics. Finally, it specifies a number of other configurations, such as the NotificationConfiguration, where notifications are sent for individual executions of the query, ErrorReportConfiguration where a report is written in case the query encounters any errors, and the ScheduledQueryExecutionRoleArn, which is the role used to perform operations for the scheduled query. { "Name": "MultiPT5mPerMinutePerRegionMeasureCount", "QueryString": "SELECT region, bin(time, 1m) as minute, SUM(CASE WHEN measure_name = 'metrics' THEN 20 ELSE 5 END) as numDataPoints FROM raw_data.devops WHERE time BETWEEN @scheduled_runtime - 10m AND @scheduled_runtime + 1m GROUP BY bin(time, 1m), region", "ScheduleConfiguration": { "ScheduleExpression": "cron(0/5 * * * ? *)" }, "NotificationConfiguration": { "SnsConfiguration": { "TopicArn": "******" Patterns and examples 411 Developer Guide Amazon Timestream } }, "TargetConfiguration": { "TimestreamConfiguration": { "DatabaseName": "derived", "TableName": "per_minute_aggs_pt5m", "TimeColumn": "minute", "DimensionMappings": [ { "Name": "region", "DimensionValueType": "VARCHAR" } ], "MultiMeasureMappings": { "TargetMultiMeasureName": "numDataPoints", "MultiMeasureAttributeMappings": [ { "SourceColumn": "numDataPoints", "MeasureValueType": "BIGINT" } ] } } }, "ErrorReportConfiguration": { "S3Configuration" : { "BucketName" : "******", "ObjectKeyPrefix": "errors", "EncryptionOption": "SSE_S3" } }, "ScheduledQueryExecutionRoleArn": "******" } In the example,
as numDataPoints FROM raw_data.devops WHERE time BETWEEN @scheduled_runtime - 10m AND @scheduled_runtime + 1m GROUP BY bin(time, 1m), region", "ScheduleConfiguration": { "ScheduleExpression": "cron(0/5 * * * ? *)" }, "NotificationConfiguration": { "SnsConfiguration": { "TopicArn": "******" Patterns and examples 411 Developer Guide Amazon Timestream } }, "TargetConfiguration": { "TimestreamConfiguration": { "DatabaseName": "derived", "TableName": "per_minute_aggs_pt5m", "TimeColumn": "minute", "DimensionMappings": [ { "Name": "region", "DimensionValueType": "VARCHAR" } ], "MultiMeasureMappings": { "TargetMultiMeasureName": "numDataPoints", "MultiMeasureAttributeMappings": [ { "SourceColumn": "numDataPoints", "MeasureValueType": "BIGINT" } ] } } }, "ErrorReportConfiguration": { "S3Configuration" : { "BucketName" : "******", "ObjectKeyPrefix": "errors", "EncryptionOption": "SSE_S3" } }, "ScheduledQueryExecutionRoleArn": "******" } In the example, the ScheduleExpression cron(0/5 * * * ? *) implies that the query is executed once every 5 minutes at the 5th, 10th, 15th, .. minutes of every hour of every day. These timestamps when a specific instance of this query is triggered is what translates to the @scheduled_runtime parameter used in the query. For instance, consider the instance of this scheduled query executing on 2021-12-01 00:00:00. For this instance, the @scheduled_runtime parameter is initialized to the timestamp 2021-12-01 00:00:00 when invoking the query. Therefore, this specific instance will execute at timestamp 2021-12-01 00:00:00 and will compute the per-minute aggregates from time range 2021-11-30 23:50:00 to 2021-12-01 00:01:00. Similarly, the next instance of this Patterns and examples 412 Amazon Timestream Developer Guide query is triggered at timestamp 2021-12-01 00:05:00 and in that case, the query will compute per- minute aggregates from the time range 2021-11-30 23:55:00 to 2021-12-01 00:06:00. Hence, the @scheduled_runtime parameter provides a scheduled query to pre-compute the aggregates for the configured time ranges using the invocation time for the queries. Note that two subsequent instances of the query overlap in their time ranges. This is something you can control based on your requirements. In this case, this overlap allows these queries to update the aggregates based on any data whose arrival was slightly delayed, up to 5 minutes in this example. To ensure correctness of the materialized queries, Timestream for LiveAnalytics ensures that the query at 2021-12-01 00:05:00 will be performed only after the query at 2021-12-01 00:00:00 has completed and the results of the latter queries can update any previously materialized aggregate using if a newer value is generated. For example, if some data at timestamp 2021-11-30 23:59:00 arrived after the query for 2021-12-01 00:00:00 executed but before the query for 2021-12-01 00:05:00, then the execution at 2021-12-01 00:05:00 will recompute the aggregates for the minute 2021-11-30 23:59:00 and this will result in the previous aggregate being updated with the newly-computed value. You can rely on these semantics of the scheduled queries to strike a trade-off between how quickly you update your pre-computations versus how you can gracefully handle some data with delayed arrival. 
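To make the window arithmetic concrete, the following small Python sketch (illustrative only, not part of the service) prints the time range covered by the example predicate BETWEEN @scheduled_runtime - 10m AND @scheduled_runtime + 1m for a few consecutive invocations of the 5-minute schedule, which makes the 5-minute overlap between adjacent instances easy to see.

from datetime import datetime, timedelta

# Values taken from the example scheduled query above; adjust to your own schedule.
first_run = datetime(2021, 12, 1, 0, 0, 0)   # @scheduled_runtime of the first instance
schedule_interval = timedelta(minutes=5)     # cron(0/5 * * * ? *)
lookback = timedelta(minutes=10)             # @scheduled_runtime - 10m
lookahead = timedelta(minutes=1)             # @scheduled_runtime + 1m

for i in range(3):
    scheduled_runtime = first_run + i * schedule_interval
    window_start = scheduled_runtime - lookback
    window_end = scheduled_runtime + lookahead
    print(f"Run at {scheduled_runtime}: aggregates recomputed for [{window_start}, {window_end}]")

# Each instance re-covers the last 5 minutes of the previous instance's range,
# which is how data that arrives up to 5 minutes late gets folded back in.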
Additional considerations are discussed below on how you trade off this refresh cadence against the freshness of the data, and how you address updating the aggregates for data that arrives even more delayed or when the source of the scheduled computation has updated values that require the aggregates to be recomputed. Every scheduled computation has a notification configuration where Timestream for LiveAnalytics sends a notification for every execution of the scheduled computation. You can configure an SNS topic to receive notifications for each invocation. In addition to the success or failure status of a specific instance, the notification also includes several statistics such as the time this computation took to execute, the number of bytes the computation scanned, and the number of bytes the computation wrote to its destination table. You can use these statistics to further tune your query or schedule configuration, or to track the spend for your scheduled queries. One aspect worth noting is the execution time for an instance. In this example, the scheduled computation is configured to execute once every 5 minutes. The execution time will determine the delay with which the pre-computation will be available, which also defines the lag in your dashboard when you're using the pre-computed data in your dashboards. Furthermore, if this delay is consistently higher than the refresh interval, for example, if the execution time is more than 5 minutes for a computation configured to refresh every 5 minutes, it is important to tune your computation
to run faster to avoid further lag in your dashboards. Patterns and examples 413 Amazon Timestream Aggregate from derived table Developer Guide Now that you have set up the scheduled queries and the aggregates are pre-computed and materialized to another Timestream for LiveAnalytics table specified in the target configuration of the scheduled computation, you can use the data in that table to write SQL queries to power your dashboards. Below is an equivalent of the query that uses the materialized pre-aggregates to generate the per minute data point count aggregate for us-east-1. SELECT bin(time, 1m) as minute, SUM(numDataPoints) as numDatapoints FROM "derived"."per_minute_aggs_pt5m" WHERE time BETWEEN from_milliseconds(1636699996445) AND from_milliseconds(1636743196445) AND region = 'us-east-1' GROUP BY bin(time, 1m) ORDER BY 1 desc The previous figure plots the aggregate computed from the aggregate table. Comparing this panel with the panel computed from the raw source data, you will notice that they match up exactly, albeit these aggregates are delayed by a few minute, controlled by the refresh interval you configured for the scheduled computation plus the time to execute it. This query over the pre-computed data scans several orders of magnitude lesser data compared to the aggregates computed over the raw source data. Depending on the granularity of aggregations, this reduction can easily result in 100X lower cost and query latency. There is a cost to executing this scheduled computation. However, depending on how frequently these dashboards are refreshed and how many concurrent users load these dashboards, you end up significantly reducing Patterns and examples 414 Amazon Timestream Developer Guide your overall costs by using these pre-computations. And this is on top of 10-100X faster load times for the dashboards. Aggregate combining source and derived tables Dashboards created using the derived tables can have a lag. If your application scenario requires the dashboards to have the most recent data, then you can use the power and flexibility of Timestream for LiveAnalytics's SQL support to combine the latest data from the source table with the historical aggregates from the derived table to form a merged view. This merged view uses the union semantics of SQL and non-overlapping time ranges from the source and the derived table. In the example below, we are using the "derived"."per_minute_aggs_pt5m" derived table. Since the scheduled computation for that derived table refreshes once every 5 minutes (per the schedule expression specification), this query below uses the most recent 15 minutes of data from the source table, and any data older than 15 minutes from the derived table and then unions the results to create the merged view that has the best of both worlds: the economics and low latency by reading older pre-computed aggregates from the derived table and the freshness of the aggregates from the source table to power your real time analytics use cases. Note that this union approach will have slightly higher query latency compared to only querying the derived table and also have slightly higher data scanned, since it is aggregating the raw data in real time to fill in the most recent time interval. However, this merged view will still be significantly faster and cheaper compared to aggregating on the fly from the source table, especially for dashboards rendering days or weeks of data. 
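Whichever table your dashboard reads, the panel's SQL is issued through the Query API. Below is a small sketch using the AWS SDK for Python (Boto3) that runs the derived-table query shown above and walks the paginated result set; the epoch-millisecond time bounds are the same placeholder values used in the example.

import boto3

query_client = boto3.client("timestream-query", region_name="us-east-1")
paginator = query_client.get_paginator("query")

# Same shape as the dashboard query on the derived table; time bounds are placeholders.
SQL = """
SELECT bin(time, 1m) as minute, SUM(numDataPoints) as numDatapoints
FROM "derived"."per_minute_aggs_pt5m"
WHERE time BETWEEN from_milliseconds(1636699996445) AND from_milliseconds(1636743196445)
    AND region = 'us-east-1'
GROUP BY bin(time, 1m)
ORDER BY 1 desc
"""

for page in paginator.paginate(QueryString=SQL):
    for row in page["Rows"]:
        # One Datum per selected column, in SELECT order.
        minute, num_datapoints = (datum.get("ScalarValue") for datum in row["Data"])
        print(minute, num_datapoints)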
You can tune the time ranges for this example to suite your application's refresh needs and delay tolerance. WITH aggregated_source_data AS ( SELECT bin(time, 1m) as minute, SUM(CASE WHEN measure_name = 'metrics' THEN 20 ELSE 5 END) as numDatapoints FROM "raw_data"."devops" WHERE time BETWEEN bin(from_milliseconds(1636743196439), 1m) - 15m AND from_milliseconds(1636743196439) AND region = 'us-east-1' GROUP BY bin(time, 1m) ), aggregated_derived_data AS ( SELECT bin(time, 1m) as minute, SUM(numDataPoints) as numDatapoints FROM "derived"."per_minute_aggs_pt5m" WHERE time BETWEEN from_milliseconds(1636699996439) AND bin(from_milliseconds(1636743196439), 1m) - 15m AND region = 'us-east-1' GROUP BY bin(time, 1m) Patterns and examples 415 Developer Guide Amazon Timestream ) SELECT minute, numDatapoints FROM ( ( SELECT * FROM aggregated_derived_data ) UNION ( SELECT * FROM aggregated_source_data ) ) ORDER BY 1 desc Below is the dashboard panel with this unified merged view. As you can see, the dashboard looks almost identical to the view computed from the derived table, except for that it will have the most up-to-date aggregate at the rightmost tip. Aggregate from frequently refreshed scheduled computation Depending on how frequently your dashboards are loaded and how much latency you want for your dashboard, there is another approach to obtaining fresher results in your dashboard: having the scheduled computation refresh the aggregates more frequently. For instance, below is configuration of the same scheduled computation, except that it refreshes once every minute (note the schedule express cron(0/1 * * * ? *)). With this setup, the derived table per_minute_aggs_pt1m will have much more recent aggregates compared to the scenario where the computation specified a refresh schedule of once every 5 minutes.
tip. Aggregate from frequently refreshed scheduled computation Depending on how frequently your dashboards are loaded and how much latency you want for your dashboard, there is another approach to obtaining fresher results in your dashboard: having the scheduled computation refresh the aggregates more frequently. For instance, below is configuration of the same scheduled computation, except that it refreshes once every minute (note the schedule express cron(0/1 * * * ? *)). With this setup, the derived table per_minute_aggs_pt1m will have much more recent aggregates compared to the scenario where the computation specified a refresh schedule of once every 5 minutes. Patterns and examples 416 Amazon Timestream Developer Guide { "Name": "MultiPT1mPerMinutePerRegionMeasureCount", "QueryString": "SELECT region, bin(time, 1m) as minute, SUM(CASE WHEN measure_name = 'metrics' THEN 20 ELSE 5 END) as numDataPoints FROM raw_data.devops WHERE time BETWEEN @scheduled_runtime - 10m AND @scheduled_runtime + 1m GROUP BY bin(time, 1m), region", "ScheduleConfiguration": { "ScheduleExpression": "cron(0/1 * * * ? *)" }, "NotificationConfiguration": { "SnsConfiguration": { "TopicArn": "******" } }, "TargetConfiguration": { "TimestreamConfiguration": { "DatabaseName": "derived", "TableName": "per_minute_aggs_pt1m", "TimeColumn": "minute", "DimensionMappings": [ { "Name": "region", "DimensionValueType": "VARCHAR" } ], "MultiMeasureMappings": { "TargetMultiMeasureName": "numDataPoints", "MultiMeasureAttributeMappings": [ { "SourceColumn": "numDataPoints", "MeasureValueType": "BIGINT" } ] } } }, "ErrorReportConfiguration": { "S3Configuration" : { "BucketName" : "******", "ObjectKeyPrefix": "errors", "EncryptionOption": "SSE_S3" } }, Patterns and examples 417 Amazon Timestream Developer Guide "ScheduledQueryExecutionRoleArn": "******" } SELECT bin(time, 1m) as minute, SUM(numDataPoints) as numDatapoints FROM "derived"."per_minute_aggs_pt1m" WHERE time BETWEEN from_milliseconds(1636699996446) AND from_milliseconds(1636743196446) AND region = 'us-east-1' GROUP BY bin(time, 1m), region ORDER BY 1 desc Since the derived table has more recent aggregates, you can now directly query the derived table per_minute_aggs_pt1m to get fresher aggregates, as can be seen from the previous query and the dashboard snapshot below. Note that refreshing the scheduled computation at a faster schedule (say 1 minute compared to 5 minutes) will increase the maintenance costs for the scheduled computation. The notification message for every computation's execution provides statistics for how much data was scanned and how much was written to the derived table. Similarly, if you use the merged view to union the derived table, you query costs on the merged view and the dashboard load latency will be higher compared to only querying the derived table. Therefore, the approach you pick will depend on how frequently your dashboards are refreshed and the maintenance costs for the scheduled queries. If you have tens of users refreshing the dashboards once every minute or so, having a more frequent refresh of your derived table will likely result in overall lower costs. Patterns and examples 418 Amazon Timestream Last point from each device Developer Guide Your application may require you to read the last measurement emitted by a device. There can be more general use cases to obtain the last measurement for a device before a given date/time or the first measurement for a device after a given date/time. 
When you have millions of devices and years of data, this search might require scanning large amounts of data. Below you will see an example of how you can use scheduled queries to optimize searching for the last point emitted by a device. You can use the same pattern to optimize the first point query as well if your application needs it. Topics • Computed from source table • Derived table to precompute at daily granularity • Computed from derived table • Combining from source and derived table Computed from source table Below is an example query to find the last measurement emitted by the services in a specific deployment (for example, servers for a given microservice within a given region, cell, silo, and availability_zone). In the example application, this query will return the last measurement for hundreds of servers. Also note that this query has an unbounded time predicate and looks for any data older than a given timestamp. Note For information about the max and max_by functions, see Aggregate functions. SELECT instance_name, MAX(time) AS time, MAX_BY(gc_pause, time) AS last_measure FROM "raw_data"."devops" WHERE time < from_milliseconds(1636685271872) AND measure_name = 'events' AND region = 'us-east-1' AND cell = 'us-east-1-cell-10' AND silo = 'us-east-1-cell-10-silo-3' AND availability_zone = 'us-east-1-1' AND microservice_name = 'hercules'
GROUP BY region, cell, silo, availability_zone, microservice_name, instance_name, process_name, jdk_version ORDER BY instance_name, time DESC Derived table to precompute at daily granularity You can convert the preceding use case into a scheduled computation. If your application requirements are such that you may need to obtain these values for your entire fleet across multiple regions, cells, silos, availability zones and microservices, you can use one schedule computation to pre-compute the values for your entire fleet. That is the power of Timestream for LiveAnalytics's serverless scheduled queries that allows these queries to scale with your application's scaling requirements. Below is a query to pre-compute the last point across all the servers for a given day. Note that the query only has a time predicate and not a predicate on the dimensions. The time predicate limits the query to the past day from the time when the computation is triggered based on the specified schedule expression. SELECT region, cell, silo, availability_zone, microservice_name, instance_name, process_name, jdk_version, MAX(time) AS time, MAX_BY(gc_pause, time) AS last_measure FROM raw_data.devops WHERE time BETWEEN bin(@scheduled_runtime, 1d) - 1d AND bin(@scheduled_runtime, 1d) AND measure_name = 'events' GROUP BY region, cell, silo, availability_zone, microservice_name, instance_name, process_name, jdk_version Below is a configuration for the scheduled computation using the preceding query which executes that query at 01:00 hrs UTC every day to compute the aggregate for the past day. The schedule expression cron(0 1 * * ? *) controls this behavior and runs an hour after the day has ended to consider any data arriving up to a day late. { "Name": "PT1DPerInstanceLastpoint", "QueryString": "SELECT region, cell, silo, availability_zone, microservice_name, instance_name, process_name, jdk_version, MAX(time) AS time, MAX_BY(gc_pause, time) AS last_measure FROM raw_data.devops WHERE time BETWEEN bin(@scheduled_runtime, 1d) - 1d AND bin(@scheduled_runtime, 1d) AND measure_name = 'events' GROUP BY region, cell, silo, availability_zone, microservice_name, instance_name, process_name, jdk_version", "ScheduleConfiguration": { "ScheduleExpression": "cron(0 1 * * ? 
*)" Patterns and examples 420 Developer Guide Amazon Timestream }, "NotificationConfiguration": { "SnsConfiguration": { "TopicArn": "******" } }, "TargetConfiguration": { "TimestreamConfiguration": { "DatabaseName": "derived", "TableName": "per_timeseries_lastpoint_pt1d", "TimeColumn": "time", "DimensionMappings": [ { "Name": "region", "DimensionValueType": "VARCHAR" }, { "Name": "cell", "DimensionValueType": "VARCHAR" }, { "Name": "silo", "DimensionValueType": "VARCHAR" }, { "Name": "availability_zone", "DimensionValueType": "VARCHAR" }, { "Name": "microservice_name", "DimensionValueType": "VARCHAR" }, { "Name": "instance_name", "DimensionValueType": "VARCHAR" }, { "Name": "process_name", "DimensionValueType": "VARCHAR" }, { "Name": "jdk_version", "DimensionValueType": "VARCHAR" } Patterns and examples 421 Developer Guide Amazon Timestream ], "MultiMeasureMappings": { "TargetMultiMeasureName": "last_measure", "MultiMeasureAttributeMappings": [ { "SourceColumn": "last_measure", "MeasureValueType": "DOUBLE" } ] } } }, "ErrorReportConfiguration": { "S3Configuration" : { "BucketName" : "******", "ObjectKeyPrefix": "errors", "EncryptionOption": "SSE_S3" } }, "ScheduledQueryExecutionRoleArn": "******" } Computed from derived table Once you define the derived table using the preceding configuration and at least one instance of the scheduled query has materialized data into the derived table, you can now query the derived table to get the latest measurement. Below is an example query on the derived table. SELECT instance_name, MAX(time) AS time, MAX_BY(last_measure, time) AS last_measure FROM "derived"."per_timeseries_lastpoint_pt1d" WHERE time < from_milliseconds(1636746715649) AND measure_name = 'last_measure' AND region = 'us-east-1' AND cell = 'us-east-1-cell-10' AND silo = 'us-east-1-cell-10-silo-3' AND availability_zone = 'us-east-1-1' AND microservice_name = 'hercules' GROUP BY region, cell, silo, availability_zone, microservice_name, instance_name, process_name, jdk_version ORDER BY instance_name, time DESC Patterns and examples 422 Amazon Timestream Developer Guide Combining from source and derived table Similar to the previous example, any data from the derived table will not have the most recent writes. Therefore, you can again use a similar pattern as earlier to merge the data from the derived table for the older data and use the source data for the remaining tip. Below is an example of such a query using the similar UNION approach. Since the application requirement is to find the latest measurement before a time period, and this start time can be in past, the way you write this query is to use the provided time, use the source data for up to a day old from the specified time, and then use the derived table on the older data. As you can see from the query example below, the time predicate on the source data is bounded. That ensures efficient processing on the source table which has significantly higher volume of data, and then the unbounded time predicate is on the derived table. 
of data, and then the unbounded time predicate is on the derived table. WITH last_point_derived AS ( SELECT instance_name, MAX(time) AS time, MAX_BY(last_measure, time) AS last_measure FROM "derived"."per_timeseries_lastpoint_pt1d" WHERE time < from_milliseconds(1636746715649) AND measure_name = 'last_measure' AND region = 'us-east-1' AND cell = 'us-east-1-cell-10' AND silo = 'us-east-1-cell-10-silo-3' AND availability_zone = 'us-east-1-1' AND microservice_name = 'hercules' GROUP BY region, cell, silo, availability_zone, microservice_name, instance_name, process_name, jdk_version ), last_point_source AS ( SELECT instance_name, MAX(time) AS time, MAX_BY(gc_pause, time) AS last_measure FROM "raw_data"."devops" WHERE time < from_milliseconds(1636746715649) AND time > from_milliseconds(1636746715649) - 26h AND measure_name = 'events' AND region = 'us-east-1' AND cell = 'us-east-1-cell-10' AND silo = 'us-east-1-cell-10-silo-3' AND availability_zone = 'us-east-1-1' AND microservice_name = 'hercules' GROUP BY region, cell, silo, availability_zone, microservice_name, instance_name, process_name, jdk_version ) SELECT instance_name, MAX(time) AS time, MAX_BY(last_measure, time) AS last_measure FROM ( SELECT * FROM last_point_derived Patterns and examples 423 Amazon Timestream UNION SELECT * FROM last_point_source ) GROUP BY instance_name ORDER BY instance_name, time DESC Developer Guide The previous is just one illustration of how you can structure the derived tables. If you have years of data, you can use more levels of aggregations. For instance, you can have monthly aggregates on top of daily aggregates, and you can have hourly aggregates before the daily. So you can merge together the most recent to fill in the last hour, the hourly to fill in the last day, the daily to fill in the last month, and monthly to fill in the older. The number of levels you set up vs. the refresh schedule will be depending on your requirements of how frequently these queries are issues and how many users are concurrently issuing these queries. Unique dimension values You may have a use case where you have dashboards which you want to use the unique values of dimensions as variables to drill down on the metrics corresponding to a specific slice of data. The snapshot below is an example where the dashboard pre-populates the unique values of several dimensions such as region, cell, silo, microservice, and availability_zone. Here we show an example of how you can use scheduled queries to significantly speed up computing these distinct values of these variables from the metrics you are tracking. Topics • On raw data • Pre-compute unique dimension values • Computing the variables from derived table On raw data You can use SELECT DISTINCT to compute the distinct values seen from your data. For instance, if you want to obtain the distinct values of region, you can use the query of this form. SELECT DISTINCT region FROM "raw_data"."devops" WHERE time > ago(1h) ORDER BY 1 Patterns and examples 424 Amazon Timestream Developer Guide You may be tracking millions of devices and billions of time series. However, in most cases, these interesting variables are for lower cardinality dimensions, where you have a few to tens of values. Computing DISTINCT from raw data can require scanning large volumes of data. Pre-compute unique dimension values You want these variables to load fast so that your dashboards are interactive. 
Moreover, these variables are often computed on every dashboard load, so you want them to be cost-effective as well. You can optimize finding these variables using scheduled queries and materializing them in a derived table. First, you need to identify the dimensions for which you need to compute the DISTINCT values, or the columns that you will use as predicates when computing the DISTINCT values. In this example, the dashboard is populating distinct values for the dimensions region, cell, silo, availability_zone, and microservice. So you can use the query below to pre-compute these unique values. SELECT region, cell, silo, availability_zone, microservice_name, min(@scheduled_runtime) AS time, COUNT(*) as numDataPoints FROM raw_data.devops WHERE time BETWEEN @scheduled_runtime - 15m AND @scheduled_runtime GROUP BY region, cell, silo, availability_zone, microservice_name There are a few important things to note here. • You can use one scheduled computation to pre-compute values for many different queries. For instance, you are using the preceding query to pre-compute values for five different variables, so you don't need one computation for each variable. You can use this same pattern to identify shared computation across multiple panels to optimize the number
of scheduled queries you need to maintain. • The unique values of the dimensions isn't inherently time series data. So you convert this to time series using the @scheduled_runtime. By associating this data with the @scheduled_runtime parameter, you can also track which unique values appeared at a given point in time, thus creating time series data out of it. • In the previous example, you will see a metric value being tracked. This example uses COUNT(*). You can compute other meaningful aggregates if you want to track them for your dashboards. Patterns and examples 425 Amazon Timestream Developer Guide Below is a configuration for a scheduled computation using the previous query. In this example, it is configured to refresh once every 15 mins using the schedule expression cron(0/15 * * * ? *). { "Name": "PT15mHighCardPerUniqueDimensions", "QueryString": "SELECT region, cell, silo, availability_zone, microservice_name, min(@scheduled_runtime) AS time, COUNT(*) as numDataPoints FROM raw_data.devops WHERE time BETWEEN @scheduled_runtime - 15m AND @scheduled_runtime GROUP BY region, cell, silo, availability_zone, microservice_name", "ScheduleConfiguration": { "ScheduleExpression": "cron(0/15 * * * ? *)" }, "NotificationConfiguration": { "SnsConfiguration": { "TopicArn": "******" } }, "TargetConfiguration": { "TimestreamConfiguration": { "DatabaseName": "derived", "TableName": "hc_unique_dimensions_pt15m", "TimeColumn": "time", "DimensionMappings": [ { "Name": "region", "DimensionValueType": "VARCHAR" }, { "Name": "cell", "DimensionValueType": "VARCHAR" }, { "Name": "silo", "DimensionValueType": "VARCHAR" }, { "Name": "availability_zone", "DimensionValueType": "VARCHAR" }, { "Name": "microservice_name", "DimensionValueType": "VARCHAR" } Patterns and examples 426 Developer Guide Amazon Timestream ], "MultiMeasureMappings": { "TargetMultiMeasureName": "count_multi", "MultiMeasureAttributeMappings": [ { "SourceColumn": "numDataPoints", "MeasureValueType": "BIGINT" } ] } } }, "ErrorReportConfiguration": { "S3Configuration" : { "BucketName" : "******", "ObjectKeyPrefix": "errors", "EncryptionOption": "SSE_S3" } }, "ScheduledQueryExecutionRoleArn": "******" } Computing the variables from derived table Once the scheduled computation pre-materializes the unique values in the derived table hc_unique_dimensions_pt15m, you can use the derived table to efficiently compute the unique values of the dimensions. Below are example queries for how to compute the unique values, and how you can use other variables as predicates in these unique value queries. 
Region SELECT DISTINCT region FROM "derived"."hc_unique_dimensions_pt15m" WHERE time > ago(1h) ORDER BY 1 Cell SELECT DISTINCT cell FROM "derived"."hc_unique_dimensions_pt15m" WHERE time > ago(1h) AND region = '${region}' Patterns and examples 427 Developer Guide Amazon Timestream ORDER BY 1 Silo SELECT DISTINCT silo FROM "derived"."hc_unique_dimensions_pt15m" WHERE time > ago(1h) AND region = '${region}' AND cell = '${cell}' ORDER BY 1 Microservice SELECT DISTINCT microservice_name FROM "derived"."hc_unique_dimensions_pt15m" WHERE time > ago(1h) AND region = '${region}' AND cell = '${cell}' ORDER BY 1 Availability Zone SELECT DISTINCT availability_zone FROM "derived"."hc_unique_dimensions_pt15m" WHERE time > ago(1h) AND region = '${region}' AND cell = '${cell}' AND silo = '${silo}' ORDER BY 1 Handling late-arriving data You may have scenarios where you can have data that arrives significantly late, for example, the time when the data was ingested into Timestream for LiveAnalytics is significantly delayed compared to the timestamp associated to the rows that are ingested. In the previous examples, you have seen how you can use the time ranges defined by the @scheduled_runtime parameter to account for some late arriving data. However, if you have use cases where data can be delayed by hours or days, you may need a different pattern to make sure your pre-computations in the derived table are appropriately updated to reflect such late-arriving data. For general information about late-arriving data, see Writing data (inserts and upserts). In the following you will see two different ways to address this late arriving data. • If you have predictable delays in your data arrival, then you can use another "catch-up" scheduled computation to update your aggregates for late arriving data. Patterns and examples 428 Amazon Timestream Developer Guide • If you have un-predictable delays or occasional late-arrival data, you can use manual executions to update the derived tables. This discussion covers scenarios for late data arrival. However, the same principles apply for data corrections, where you have modified the data in your source table and you want to update the aggregates in your derived tables. Topics • Scheduled catch-up queries • Manual executions for unpredictable late arriving data Scheduled catch-up queries Query aggregating data that arrived in time Below is a pattern you will see how you can use an automated way to update your aggregates if you have predictable delays in your data arrival. Consider one of the previous examples of a scheduled computation on real-time data below. This scheduled computation refreshes the derived table once every 30 minutes and already accounts for data up to an hour delayed. { "Name": "MultiPT30mPerHrPerTimeseriesDPCount", "QueryString": "SELECT region, cell, silo, availability_zone, microservice_name, instance_type, os_version, instance_name, process_name, jdk_version, bin(time, 1h) as hour, SUM(CASE WHEN measure_name = 'metrics' THEN 20 ELSE 5 END) as numDataPoints FROM raw_data.devops WHERE time BETWEEN bin(@scheduled_runtime, 1h) - 1h AND @scheduled_runtime + 1h GROUP BY region, cell, silo, availability_zone, microservice_name, instance_type, os_version, instance_name, process_name, jdk_version, bin(time,
you have predictable delays in your data arrival. Consider one of the previous examples of a scheduled computation on real-time data below. This scheduled computation refreshes the derived table once every 30 minutes and already accounts for data up to an hour delayed. { "Name": "MultiPT30mPerHrPerTimeseriesDPCount", "QueryString": "SELECT region, cell, silo, availability_zone, microservice_name, instance_type, os_version, instance_name, process_name, jdk_version, bin(time, 1h) as hour, SUM(CASE WHEN measure_name = 'metrics' THEN 20 ELSE 5 END) as numDataPoints FROM raw_data.devops WHERE time BETWEEN bin(@scheduled_runtime, 1h) - 1h AND @scheduled_runtime + 1h GROUP BY region, cell, silo, availability_zone, microservice_name, instance_type, os_version, instance_name, process_name, jdk_version, bin(time, 1h)", "ScheduleConfiguration": { "ScheduleExpression": "cron(0/30 * * * ? *)" }, "NotificationConfiguration": { "SnsConfiguration": { "TopicArn": "******" } }, "TargetConfiguration": { "TimestreamConfiguration": { "DatabaseName": "derived", Patterns and examples 429 Amazon Timestream Developer Guide "TableName": "dp_per_timeseries_per_hr", "TimeColumn": "hour", "DimensionMappings": [ { "Name": "region", "DimensionValueType": "VARCHAR" }, { "Name": "cell", "DimensionValueType": "VARCHAR" }, { "Name": "silo", "DimensionValueType": "VARCHAR" }, { "Name": "availability_zone", "DimensionValueType": "VARCHAR" }, { "Name": "microservice_name", "DimensionValueType": "VARCHAR" }, { "Name": "instance_type", "DimensionValueType": "VARCHAR" }, { "Name": "os_version", "DimensionValueType": "VARCHAR" }, { "Name": "instance_name", "DimensionValueType": "VARCHAR" }, { "Name": "process_name", "DimensionValueType": "VARCHAR" }, { "Name": "jdk_version", "DimensionValueType": "VARCHAR" } ], Patterns and examples 430 Amazon Timestream Developer Guide "MultiMeasureMappings": { "TargetMultiMeasureName": "numDataPoints", "MultiMeasureAttributeMappings": [ { "SourceColumn": "numDataPoints", "MeasureValueType": "BIGINT" } ] } } }, "ErrorReportConfiguration": { "S3Configuration" : { "BucketName" : "******", "ObjectKeyPrefix": "errors", "EncryptionOption": "SSE_S3" } }, "ScheduledQueryExecutionRoleArn": "******" } Catch-up query updating the aggregates for late arriving data Now if you consider the case that your data can be delayed by about 12 hours. Below is a variant of the same query. However, the difference is that it computes the aggregates on data that is delayed by up to 12 hours compared to when the scheduled computation is being triggered. For instance, you see the query in the example below, the time range this query is targeting is between 2h to 14h before when the query is triggered. Moreover, if you notice the schedule expression cron(0 0,12 * * ? *), it will trigger the computation at 00:00 UTC and 12:00 UTC every day. Therefore, when the query is triggered on 2021-12-01 00:00:00, then the query updates aggregates in the time range 2021-11-30 10:00:00 to 2021-11-30 22:00:00. Scheduled queries use upsert semantics similar to Timestream for LiveAnalytics's writes where this catch-up query will update the aggregate values with newer values if there is late arriving data in the window or if newer aggregates are found (e.g., a new grouping shows up in this aggregate which was not present when the original scheduled computation was triggered), then the new aggregate will be inserted into the derived table. 
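To make the catch-up window arithmetic concrete, here is a small Python sketch (illustration only, not part of the scheduled query itself) that reproduces the bin(@scheduled_runtime, 1h) - 14h to bin(@scheduled_runtime, 1h) - 2h range used by this catch-up query for a given trigger time.

from datetime import datetime, timedelta

def catchup_window(scheduled_runtime):
    # Range targeted by the catch-up query:
    # bin(@scheduled_runtime, 1h) - 14h  to  bin(@scheduled_runtime, 1h) - 2h
    hour_bin = scheduled_runtime.replace(minute=0, second=0, microsecond=0)
    return hour_bin - timedelta(hours=14), hour_bin - timedelta(hours=2)

start, end = catchup_window(datetime(2021, 12, 1, 0, 0, 0))
print(start, "->", end)  # 2021-11-30 10:00:00 -> 2021-11-30 22:00:00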
Similarly, when the next instance is triggered on 2021-12-01 12:00:00, then that instance will update aggregates in the range 2021-11-30 22:00:00 to 2021-12-01 10:00:00. { "Name": "MultiPT12HPerHrPerTimeseriesDPCountCatchUp", Patterns and examples 431 Amazon Timestream Developer Guide "QueryString": "SELECT region, cell, silo, availability_zone, microservice_name, instance_type, os_version, instance_name, process_name, jdk_version, bin(time, 1h) as hour, SUM(CASE WHEN measure_name = 'metrics' THEN 20 ELSE 5 END) as numDataPoints FROM raw_data.devops WHERE time BETWEEN bin(@scheduled_runtime, 1h) - 14h AND bin(@scheduled_runtime, 1h) - 2h GROUP BY region, cell, silo, availability_zone, microservice_name, instance_type, os_version, instance_name, process_name, jdk_version, bin(time, 1h)", "ScheduleConfiguration": { "ScheduleExpression": "cron(0 0,12 * * ? *)" }, "NotificationConfiguration": { "SnsConfiguration": { "TopicArn": "******" } }, "TargetConfiguration": { "TimestreamConfiguration": { "DatabaseName": "derived", "TableName": "dp_per_timeseries_per_hr", "TimeColumn": "hour", "DimensionMappings": [ { "Name": "region", "DimensionValueType": "VARCHAR" }, { "Name": "cell", "DimensionValueType": "VARCHAR" }, { "Name": "silo", "DimensionValueType": "VARCHAR" }, { "Name": "availability_zone", "DimensionValueType": "VARCHAR" }, { "Name": "microservice_name", "DimensionValueType": "VARCHAR" }, { "Name": "instance_type", "DimensionValueType": "VARCHAR" Patterns and examples 432 Developer Guide Amazon Timestream }, { "Name": "os_version", "DimensionValueType": "VARCHAR" }, { "Name": "instance_name", "DimensionValueType": "VARCHAR" }, { "Name": "process_name", "DimensionValueType": "VARCHAR" }, { "Name": "jdk_version", "DimensionValueType": "VARCHAR" } ], "MultiMeasureMappings": { "TargetMultiMeasureName": "numDataPoints", "MultiMeasureAttributeMappings": [ { "SourceColumn": "numDataPoints", "MeasureValueType": "BIGINT" } ] } } }, "ErrorReportConfiguration": { "S3Configuration" : { "BucketName" : "******", "ObjectKeyPrefix": "errors", "EncryptionOption": "SSE_S3" } }, "ScheduledQueryExecutionRoleArn": "******" } This preceding example is an illustration assuming your late arrival is bounded to 12 hours and it is okay to update the derived table once every 12 hours for data arriving later than the real time window. You can adapt this pattern to update your derived table once every hour so your derived Patterns and examples 433 Amazon Timestream Developer Guide table reflects the late arriving data sooner. Similarly, you can adapt the time range to be older than 12 hours, e.g., a day or even a week or more, to handle predictable late-arriving data. Manual executions for unpredictable late arriving data There can be instances where you have unpredictable late arriving data or you made changes to the source data and updated some values after the fact. In
data arriving later than the real time window. You can adapt this pattern to update your derived table once every hour so your derived Patterns and examples 433 Amazon Timestream Developer Guide table reflects the late arriving data sooner. Similarly, you can adapt the time range to be older than 12 hours, e.g., a day or even a week or more, to handle predictable late-arriving data. Manual executions for unpredictable late arriving data There can be instances where you have unpredictable late arriving data or you made changes to the source data and updated some values after the fact. In all such cases, you can manually trigger scheduled queries to update the derived table. Below is an example on how you can achieve this. Assume that you have the use case where you have the computation written to the derived table dp_per_timeseries_per_hr. Your base data in the table devops was updated in the time range 2021-11-30 23:00:00 - 2021-12-01 00:00:00. There are two different scheduled queries that can be used to update this derived table: MultiPT30mPerHrPerTimeseriesDPCount and MultiPT12HPerHrPerTimeseriesDPCountCatchUp. Each scheduled computation you create in Timestream for LiveAnalytics has a unique ARN which you obtain when you create the computation or when you perform a list operation. You can use the ARN for the computation and a value for the parameter @scheduled_runtime taken by the query to perform this operation. Assume that the computation for MultiPT30mPerHrPerTimeseriesDPCount has an ARN arn_1 and you want to use this computation to update the derived table. Since the preceding scheduled computation updates the aggregates 1h before and 1hr after the @scheduled_runtime value, you can cover the time range for the update (2021-11-30 23:00:00 - 2021-12-01 00:00:00) using a value of 2021-12-01 00:00:00 for the @scheduled_runtime parameter. You can use the ExecuteScheduledQuery API to pass the ARN of this computation and the time parameter value in epoch seconds (in UTC) to achieve this. Below is an example using the AWS CLI and you can follow the same pattern using any of the SDKs supported by Timestream for LiveAnalytics. aws timestream-query execute-scheduled-query --scheduled-query-arn arn_1 --invocation- time 1638316800 --profile profile --region us-east-1 In the previous example, profile is the AWS profile which has the appropriate privileges to make this API call and 1638316800 corresponds to the epoch second for 2021-12-01 00:00:00. This manual trigger behaves almost like the automated trigger assuming the system triggered this invocation at the desired time period. If you had an update in a longer time period, say the base data was updated for 2021-11-30 23:00:00 - 2021-12-01 11:00:00, then you can trigger the preceding queries multiple times to cover this entire time range. For instance, you could do six different execution as follows. 
Patterns and examples 434 Amazon Timestream Developer Guide aws timestream-query execute-scheduled-query --scheduled-query-arn arn_1 --invocation- time 1638316800 --profile profile --region us-east-1 aws timestream-query execute-scheduled-query --scheduled-query-arn arn_1 --invocation- time 1638324000 --profile profile --region us-east-1 aws timestream-query execute-scheduled-query --scheduled-query-arn arn_1 --invocation- time 1638331200 --profile profile --region us-east-1 aws timestream-query execute-scheduled-query --scheduled-query-arn arn_1 --invocation- time 1638338400 --profile profile --region us-east-1 aws timestream-query execute-scheduled-query --scheduled-query-arn arn_1 --invocation- time 1638345600 --profile profile --region us-east-1 aws timestream-query execute-scheduled-query --scheduled-query-arn arn_1 --invocation- time 1638352800 --profile profile --region us-east-1 The previous six commands correspond to the scheduled computation invoked at 2021-12-01 00:00:00, 2021-12-01 02:00:00, 2021-12-01 04:0:00, 2021-12-01 06:00:00, 2021-12-01 08:00:00, and 2021-12-01 10:00: Alternatively, you can use the computation MultiPT12HPerHrPerTimeseriesDPCountCatchUp triggered at 2021-12-01 13:00:00 for one execution to update the aggregates for the entire 12 hour time range. For instance, if arn_2 is the ARN for that computation, you can execute the following command from CLI. aws timestream-query execute-scheduled-query --scheduled-query-arn arn_2 --invocation- time 1638363600 --profile profile --region us-east-1 It is worth noting that for a manual trigger, you can use a timestamp for the invocation-time parameter that does not need to be aligned with that automated trigger timestamps. For instance, in the previous example, you triggered the computation at time 2021-12-01 13:00:00 even though the automated schedule only triggers at timestamps 2021-12-01 10:00:00, 2021-12-01 12:00:00, and 2021-12-02 00:00:00. Timestream for LiveAnalytics provides you with the flexibility to trigger it with appropriate values as needed for your manual operations. Following are a few important considerations when using the ExecuteScheduledQuery API. • If you are triggering multiple of these invocations, you need to make sure that these invocations do not generate results in overlapping time ranges. For instance, in the previous examples, there Patterns and examples 435 Amazon Timestream Developer Guide were six invocations. Each invocation covers 2 hours of time range, and hence the invocation timestamps were spread out by two hours each to avoid any overlap in the updates. This ensures that the data in the derived table ends up in a state that matches are aggregates from the source table. If you cannot ensure non-overlapping time ranges, then
• If you are triggering multiple of these invocations, you need to make sure that these invocations do not generate results in overlapping time ranges. For instance, in the previous examples, there Patterns and examples 435 Amazon Timestream Developer Guide were six invocations. Each invocation covers 2 hours of time range, and hence the invocation timestamps were spread out by two hours each to avoid any overlap in the updates. This ensures that the data in the derived table ends up in a state that matches are aggregates from the source table. If you cannot ensure non-overlapping time ranges, then make sure these the executions are triggered sequentially one after the other. If you trigger multiple executions concurrently which overlap in their time ranges, then you can see trigger failures where you might see version conflicts in the error reports for these executions. Results generated by a scheduled query invocation are assigned a version based on when the invocation was triggered. Therefore, rows generated by newer invocations have higher versions. A higher version record can overwrite a lower version record. For automatically-triggered scheduled queries, Timestream for LiveAnalytics automatically manages the schedules so that you don't see these issues even if the subsequent invocations have overlapping time ranges. • noted earlier, you can trigger the invocations with any timestamp value for @scheduled_runtime. So it is your responsibility to appropriately set the values so the appropriate time ranges are updated in the derived table corresponding to the ranges where data was updated in the source table. • You can also use these manual trigger for scheduled queries that are in the DISABLED state. This allows you to define special queries that are not executed in an automated schedule, since they are in the DISABLED state. Rather, you can use the manual triggers on them to manage data corrections or late arrival use cases. Back-filling historical pre-computations When you create a scheduled computation, Timestream for LiveAnalytics manages executions of the queries moving forward where the refresh is governed by the schedule expression you provide. Depending of how much historical data your source table, you may want to update your derived table with aggregates corresponding to the historical data. You can use the preceding logic for manual triggers to back-fill the historical aggregates. For instance, if we consider the derived table per_timeseries_lastpoint_pt1d, then the scheduled computation is updated once a day for the past day. If your source table has a year of data, you can use the ARN for this scheduled computation and trigger it manually for every day up to a year old so that the derived table has all the historical queries populated. Notes that all the caveats for manual triggers apply here. Moreover, if the derived table is set up in a way that the historical ingestion will write to magnetic store on the derived table, be aware of the best practices and limits for writes to the magnetic store. Patterns and examples 436 Amazon Timestream Developer Guide Scheduled query examples This section contains examples of how you can use Timestream for LiveAnalytics's Scheduled Queries to optimize the costs and dashboard load times when visualizing fleet-wide statistics effectively monitor your fleet of devices. Scheduled Queries in Timestream for LiveAnalytics allow you to express your queries using the full SQL surface area of Timestream for LiveAnalytics. 
Your query can include one or more source tables, perform aggregations or any other operation allowed by Timestream for LiveAnalytics's SQL language, and then store the results of the query in another destination table in Timestream for LiveAnalytics. This section refers to the target table of a scheduled query as a derived table. As an example, we will use a DevOps application where you are monitoring a large fleet of servers deployed across multiple deployments (such as regions, cells, and silos) and multiple microservices, and you're tracking fleet-wide statistics using Timestream for LiveAnalytics. The example schema we will use is described in Scheduled Queries Sample Schema. The following scenarios will be described. • How to convert a dashboard that plots aggregated statistics from the raw data you ingest into Timestream for LiveAnalytics into a scheduled query, and then how to use your pre-computed aggregates
to create a new dashboard showing aggregate statistics. • How to combine scheduled queries to get an aggregate view and the raw granular data, to drill down into details. This allows you to store and analyze the raw data while optimizing your common fleet-wide operations using scheduled queries. • How to optimize costs using scheduled queries by finding which aggregates are used in multiple dashboards and have the same scheduled query populate multiple panels in the same or multiple dashboards. Topics • Converting an aggregate dashboard to scheduled query • Using scheduled queries and raw data for drill downs • Optimizing costs by sharing scheduled query across dashboards • Comparing a query on a base table with a query of scheduled query results Patterns and examples 437 Amazon Timestream Developer Guide Converting an aggregate dashboard to scheduled query Assume you are computing the fleet-wide statistics such as host counts in the fleet by the five microservices and by the six regions where your service is deployed. From the snapshot below, you can see there are 500K servers emitting metrics, and some of the bigger regions (e.g., us-east-1) have >200K servers. Computing these aggregates, where you are computing distinct instance names over hundreds of gigabytes of data can result in query latency of tens of seconds, in addition to the cost of scanning the data. Original dashboard query The aggregate shown in the dasboard panel is computed, from raw data, using the query below. The query uses multiple SQL constructs, such as distinct counts and multiple aggregation functions. SELECT CASE WHEN microservice_name = 'apollo' THEN num_instances ELSE NULL END AS apollo, CASE WHEN microservice_name = 'athena' THEN num_instances ELSE NULL END AS athena, CASE WHEN microservice_name = 'demeter' THEN num_instances ELSE NULL END AS demeter, CASE WHEN microservice_name = 'hercules' THEN num_instances ELSE NULL END AS hercules, CASE WHEN microservice_name = 'zeus' THEN num_instances ELSE NULL END AS zeus FROM ( SELECT microservice_name, SUM(num_instances) AS num_instances FROM ( SELECT microservice_name, COUNT(DISTINCT instance_name) as num_instances FROM "raw_data"."devops" WHERE time BETWEEN from_milliseconds(1636526171043) AND from_milliseconds(1636612571043) AND measure_name = 'metrics' GROUP BY region, cell, silo, availability_zone, microservice_name ) GROUP BY microservice_name Patterns and examples 438 Amazon Timestream ) Converting to a scheduled query Developer Guide The previous query can be converted into a scheduled query as follows. You first compute the distinct host names within a given deployment in a region, cell, silo, availability zone and microservice. Then you add up the hosts to compute a per hour per microservice host count. By using the @scheduled_runtime parameter supported by the scheduled queries, you can recompute it for the past hour when the query is invoked. The bin(@scheduled_runtime, 1h) in the WHERE clause of the inner query ensures that even if the query is scheduled at a time in the middle of the hour, you still get the data for the full hour. Even though the query computes hourly aggregates, as you will see in the scheduled computation configuration, it is set up to refresh every half hour so that you get updates in your derived table sooner. You can tune that based on your freshness requirements, e.g., recompute the aggregates every 15 minutes or recompute it at the hour boundaries. 
SELECT microservice_name, hour, SUM(num_instances) AS num_instances FROM ( SELECT microservice_name, bin(time, 1h) AS hour, COUNT(DISTINCT instance_name) as num_instances FROM raw_data.devops WHERE time BETWEEN bin(@scheduled_runtime, 1h) - 1h AND @scheduled_runtime AND measure_name = 'metrics' GROUP BY region, cell, silo, availability_zone, microservice_name, bin(time, 1h) ) GROUP BY microservice_name, hour { "Name": "MultiPT30mHostCountMicroservicePerHr", "QueryString": "SELECT microservice_name, hour, SUM(num_instances) AS num_instances FROM ( SELECT microservice_name, bin(time, 1h) AS hour, COUNT(DISTINCT instance_name) as num_instances FROM raw_data.devops WHERE time BETWEEN bin(@scheduled_runtime, 1h) - 1h AND @scheduled_runtime AND measure_name = 'metrics' GROUP BY region, cell, silo, availability_zone, microservice_name, bin(time, 1h) ) GROUP BY microservice_name, hour", "ScheduleConfiguration": { "ScheduleExpression": "cron(0/30 * * * ? *)" }, Patterns and examples 439 Amazon Timestream Developer Guide "NotificationConfiguration": { "SnsConfiguration": { "TopicArn": "******" } }, "TargetConfiguration": { "TimestreamConfiguration": { "DatabaseName": "derived", "TableName": "host_count_pt1h", "TimeColumn": "hour", "DimensionMappings": [ { "Name": "microservice_name", "DimensionValueType": "VARCHAR" } ], "MultiMeasureMappings": { "TargetMultiMeasureName": "num_instances", "MultiMeasureAttributeMappings": [ { "SourceColumn": "num_instances", "MeasureValueType": "BIGINT" } ] } } }, "ErrorReportConfiguration": { "S3Configuration" : { "BucketName" : "******", "ObjectKeyPrefix": "errors", "EncryptionOption": "SSE_S3" } }, "ScheduledQueryExecutionRoleArn": "******" } Using the pre-computed results in a new dashboard You will now see how to create your aggregate view dashboard using the derived table from the scheduled query you created. From the dashboard snapshot, you will also be able to validate that the aggregates computed from the derived table and the base table also match. Once you create the dashboards using the derived tables, you will notice the significantly faster load time and lower Patterns and examples 440 Amazon Timestream Developer Guide costs of using the derived tables compared to
: "******", "ObjectKeyPrefix": "errors", "EncryptionOption": "SSE_S3" } }, "ScheduledQueryExecutionRoleArn": "******" } Using the pre-computed results in a new dashboard You will now see how to create your aggregate view dashboard using the derived table from the scheduled query you created. From the dashboard snapshot, you will also be able to validate that the aggregates computed from the derived table and the base table also match. Once you create the dashboards using the derived tables, you will notice the significantly faster load time and lower Patterns and examples 440 Amazon Timestream Developer Guide costs of using the derived tables compared to computing these aggregates from the raw data. Below is a snapshot of the dashboard using pre-computed data, and the query used to render this panel using pre-computed data stored in the table "derived"."host_count_pt1h". Note that the structure of the query is very similar to the query that was used in the dashboard on raw data, except that is it using the derived table which already computes the distinct counts which this query is aggregating. SELECT CASE WHEN microservice_name = 'apollo' THEN num_instances ELSE NULL END AS apollo, CASE WHEN microservice_name = 'athena' THEN num_instances ELSE NULL END AS athena, CASE WHEN microservice_name = 'demeter' THEN num_instances ELSE NULL END AS demeter, CASE WHEN microservice_name = 'hercules' THEN num_instances ELSE NULL END AS hercules, CASE WHEN microservice_name = 'zeus' THEN num_instances ELSE NULL END AS zeus FROM ( SELECT microservice_name, AVG(num_instances) AS num_instances FROM ( SELECT microservice_name, bin(time, 1h), SUM(num_instances) as num_instances FROM "derived"."host_count_pt1h" WHERE time BETWEEN from_milliseconds(1636567785421) AND from_milliseconds(1636654185421) AND measure_name = 'num_instances' GROUP BY microservice_name, bin(time, 1h) ) GROUP BY microservice_name ) Using scheduled queries and raw data for drill downs You can use the aggregated statistics across your fleet to identify areas that need drill downs and then use the raw data to drill down into granular data to get deeper insights. In this example, you will see how you can use aggregate dashboard to identify any deployment (a deployment is for a given microservice within a given region, cell, silo, and availability zone) which Patterns and examples 441 Amazon Timestream Developer Guide seems to have higher CPU utilization compared to other deployments. You can then drill down to get a better understanding using the raw data. Since these drill downs might be infrequent and only access data relevant to the deployment, you can use the raw data for this analysis and do not need to use scheduled queries. Per deployment drill down The dashboard below provides drill down into more granular and server-level statistics within a given deployment. To help you drill down into the different parts of your fleet, this dashboard uses variables such as region, cell, silo, microservice, and availability_zone. It then shows some aggregate statistics for that deployment. In the query below, you can see that the values chosen in the drop down of the variables are used as predicates in the WHERE clause of the query, which allows you to only focus on the data for the deployment. And then the panel plots the aggregated CPU metrics for instances in that deployment. You can use the raw data to perform this drill down with interactive query latency to derive deeper insights. 
Instance-level statistics

This dashboard further computes another variable that also lists the servers/instances with high CPU utilization, sorted in descending order of utilization. The query used to compute this variable is displayed below.

WITH microservice_cell_avg AS ( SELECT AVG(cpu_user) AS microservice_avg_metric FROM "raw_data"."devops" WHERE $__timeFilter AND measure_name = 'metrics' AND region = '${region}' AND cell = '${cell}' AND silo = '${silo}' AND availability_zone = '${availability_zone}' AND microservice_name = '${microservice}' ), instance_avg AS ( SELECT instance_name, AVG(cpu_user) AS instance_avg_metric FROM "raw_data"."devops" WHERE $__timeFilter AND measure_name = 'metrics' AND region = '${region}' AND
cell = '${cell}' AND silo = '${silo}' AND microservice_name = '${microservice}' AND availability_zone = '${availability_zone}' GROUP BY availability_zone, instance_name ) SELECT i.instance_name FROM instance_avg i CROSS JOIN microservice_cell_avg m WHERE i.instance_avg_metric > (1 + ${utilization_threshold}) * m.microservice_avg_metric ORDER BY i.instance_avg_metric DESC In the preceding query, the variable is dynamically recalculated depending on the values chosen for the other variables. Once the variable is populated for a deployment, you can pick individual instances from the list to further visualize the metrics from that instance. You can pick the different instances from the drop down of the instance names as seen from the snapshot below. Patterns and examples 443 Amazon Timestream Developer Guide Preceding panels show the statistics for the instance that is selected and below are the queries used to fetch these statistics. SELECT BIN(time, 30m) AS time_bin, AVG(cpu_user) AS avg_cpu, ROUND(APPROX_PERCENTILE(cpu_user, 0.99), 2) as p99_cpu Patterns and examples 444 Amazon Timestream FROM "raw_data"."devops" WHERE time BETWEEN from_milliseconds(1636527099477) AND from_milliseconds(1636613499477) AND measure_name = 'metrics' Developer Guide AND region = 'eu-west-1' AND cell = 'eu-west-1-cell-10' AND silo = 'eu-west-1- cell-10-silo-1' AND availability_zone = 'eu-west-1-3' AND microservice_name = 'demeter' AND instance_name = 'i-zaZswmJk-demeter-eu-west-1-cell-10- silo-1-00000272.amazonaws.com' GROUP BY BIN(time, 30m) ORDER BY time_bin desc SELECT BIN(time, 30m) AS time_bin, AVG(memory_used) AS avg_memory, ROUND(APPROX_PERCENTILE(memory_used, 0.99), 2) as p99_memory FROM "raw_data"."devops" WHERE time BETWEEN from_milliseconds(1636527099477) AND from_milliseconds(1636613499477) AND measure_name = 'metrics' AND region = 'eu-west-1' AND cell = 'eu-west-1-cell-10' AND silo = 'eu-west-1- cell-10-silo-1' AND availability_zone = 'eu-west-1-3' AND microservice_name = 'demeter' AND instance_name = 'i-zaZswmJk-demeter-eu-west-1-cell-10- silo-1-00000272.amazonaws.com' GROUP BY BIN(time, 30m) ORDER BY time_bin desc SELECT COUNT(gc_pause) FROM "raw_data"."devops" WHERE time BETWEEN from_milliseconds(1636527099477) AND from_milliseconds(1636613499478) AND measure_name = 'events' AND region = 'eu-west-1' AND cell = 'eu-west-1-cell-10' AND silo = 'eu-west-1- cell-10-silo-1' AND availability_zone = 'eu-west-1-3' AND microservice_name = 'demeter' AND instance_name = 'i-zaZswmJk-demeter-eu-west-1-cell-10- silo-1-00000272.amazonaws.com' SELECT avg(gc_pause) as avg, round(approx_percentile(gc_pause, 0.99), 2) as p99 FROM "raw_data"."devops" WHERE time BETWEEN from_milliseconds(1636527099478) AND from_milliseconds(1636613499478) Patterns and examples 445 Amazon Timestream Developer Guide AND measure_name = 'events' AND region = 'eu-west-1' AND cell = 'eu-west-1-cell-10' AND silo = 'eu-west-1- cell-10-silo-1' AND availability_zone = 'eu-west-1-3' AND microservice_name = 'demeter' AND instance_name = 'i-zaZswmJk-demeter-eu-west-1-cell-10- silo-1-00000272.amazonaws.com' SELECT BIN(time, 30m) AS time_bin, AVG(disk_io_reads) AS avg, ROUND(APPROX_PERCENTILE(disk_io_reads, 0.99), 2) as p99 FROM "raw_data"."devops" WHERE time BETWEEN from_milliseconds(1636527099478) AND from_milliseconds(1636613499478) AND measure_name = 'metrics' AND region = 'eu-west-1' AND cell = 'eu-west-1-cell-10' AND silo = 'eu-west-1- cell-10-silo-1' AND availability_zone = 'eu-west-1-3' AND microservice_name = 'demeter' AND instance_name = 
'i-zaZswmJk-demeter-eu-west-1-cell-10-silo-1-00000272.amazonaws.com' GROUP BY BIN(time, 30m) ORDER BY time_bin desc

Optimizing costs by sharing scheduled query across dashboards

In this example, we will see a scenario where multiple dashboard panels display variations of similar information (finding high CPU hosts and the fraction of the fleet with high CPU utilization) and how you can use the same scheduled query to pre-compute results which are then used to populate multiple panels. This reuse further optimizes your costs: instead of using a different scheduled query for each panel, you use only one.

Dashboard panels with raw data

CPU utilization per region per microservice

The first panel computes the instances whose average CPU utilization is a threshold above or below the average CPU utilization for a given deployment within a region, cell, silo, availability zone, and microservice. It then sorts the regions and microservices by the percentage of hosts with high utilization. This helps identify how hot the servers of a specific deployment are running, so that you can subsequently drill down to better understand the issues. The query for the panel demonstrates the flexibility of Timestream for LiveAnalytics's SQL support to perform complex analytical tasks with common table expressions, window functions, joins, and so on.

Query:

WITH microservice_cell_avg AS ( SELECT region, cell, silo, availability_zone, microservice_name, AVG(cpu_user) AS microservice_avg_metric FROM "raw_data"."devops" WHERE time BETWEEN from_milliseconds(1636526593876) AND from_milliseconds(1636612993876) AND measure_name = 'metrics' GROUP BY region, cell, silo, availability_zone, microservice_name ), instance_avg AS ( SELECT region, cell, silo, availability_zone, microservice_name, instance_name, AVG(cpu_user) AS instance_avg_metric FROM "raw_data"."devops" WHERE time BETWEEN from_milliseconds(1636526593876) AND from_milliseconds(1636612993876) AND measure_name = 'metrics' GROUP BY region, cell, silo, availability_zone, microservice_name, instance_name ), instances_above_threshold AS ( SELECT i.*, CASE WHEN i.instance_avg_metric > (1 + 0.2) * m.microservice_avg_metric THEN 1 ELSE 0 END AS high_utilization, CASE WHEN i.instance_avg_metric < (1 - 0.2) * m.microservice_avg_metric THEN 1 ELSE 0 END AS low_utilization FROM instance_avg i INNER JOIN microservice_cell_avg m ON i.region = m.region AND i.cell = m.cell AND i.silo = m.silo AND i.availability_zone = m.availability_zone AND m.microservice_name = i.microservice_name ), per_deployment_high AS ( SELECT region, microservice_name, COUNT(*) AS num_hosts, SUM(high_utilization) AS high_utilization_hosts, SUM(low_utilization) AS low_utilization_hosts, ROUND(SUM(high_utilization) * 100.0 / COUNT(*), 0) AS percent_high_utilization_hosts, ROUND(SUM(low_utilization) * 100.0 / COUNT(*), 0) AS percent_low_utilization_hosts FROM instances_above_threshold GROUP BY region, microservice_name ), per_region_ranked AS ( SELECT *, DENSE_RANK() OVER (PARTITION BY region ORDER
BY percent_high_utilization_hosts DESC, high_utilization_hosts DESC) AS rank FROM per_deployment_high ) SELECT * FROM per_region_ranked WHERE rank <= 2 ORDER BY percent_high_utilization_hosts desc, rank asc

Drill down into a microservice to find hot spots

The next dashboard allows you to drill deeper into one of the microservices to find out which specific region, cell, and silo for that microservice is running what fraction of its fleet at higher CPU utilization. For instance, in the fleet wide dashboard you saw the microservice demeter show up in the top few ranked positions, so in this dashboard you want to drill deeper into that microservice. This dashboard uses a variable to pick the microservice to drill down into, and the values of the variable are populated using the unique values of the dimension. Once you pick the microservice, the rest of the dashboard refreshes. As you see below, the first panel plots the percentage of hosts in a deployment (a region, cell, and silo for a microservice) over time, along with the corresponding query which is used to plot the dashboard. This plot itself identifies a specific deployment having a higher percentage of hosts with high CPU.
Query:

WITH microservice_cell_avg AS ( SELECT region, cell, silo, availability_zone, microservice_name, bin(time, 1h) as hour, AVG(cpu_user) AS microservice_avg_metric FROM "raw_data"."devops" WHERE time BETWEEN from_milliseconds(1636526898831) AND from_milliseconds(1636613298831) AND measure_name = 'metrics' AND microservice_name = 'demeter' GROUP BY region, cell, silo, availability_zone, microservice_name, bin(time, 1h) ), instance_avg AS ( SELECT region, cell, silo, availability_zone, microservice_name, instance_name, bin(time, 1h) as hour, AVG(cpu_user) AS instance_avg_metric FROM "raw_data"."devops" WHERE time BETWEEN from_milliseconds(1636526898831) AND from_milliseconds(1636613298831) AND measure_name = 'metrics' AND microservice_name = 'demeter' GROUP BY region, cell, silo, availability_zone, microservice_name, instance_name, bin(time, 1h) ), instances_above_threshold AS ( SELECT i.*, CASE WHEN i.instance_avg_metric > (1 + 0.2) * m.microservice_avg_metric THEN 1 ELSE 0 END AS high_utilization FROM instance_avg i INNER JOIN microservice_cell_avg m ON i.region = m.region AND i.cell = m.cell AND i.silo = m.silo AND i.availability_zone = m.availability_zone AND m.microservice_name = i.microservice_name AND m.hour = i.hour ), high_utilization_percent AS ( SELECT region, cell, silo, microservice_name, hour, COUNT(*) AS num_hosts, SUM(high_utilization) AS high_utilization_hosts, ROUND(SUM(high_utilization) * 100.0 / COUNT(*), 0) AS percent_high_utilization_hosts FROM instances_above_threshold GROUP BY region, cell, silo, microservice_name, hour ), high_utilization_ranked AS ( SELECT region, cell, silo, microservice_name, DENSE_RANK() OVER (PARTITION BY region ORDER BY AVG(percent_high_utilization_hosts) desc, AVG(high_utilization_hosts) desc) AS rank FROM high_utilization_percent GROUP BY region, cell, silo, microservice_name ) SELECT hup.silo, CREATE_TIME_SERIES(hour, hup.percent_high_utilization_hosts) AS percent_high_utilization_hosts FROM high_utilization_percent hup INNER JOIN high_utilization_ranked hur ON hup.region = hur.region AND hup.cell = hur.cell AND hup.silo = hur.silo AND hup.microservice_name = hur.microservice_name WHERE rank <= 2 GROUP BY hup.region, hup.cell, hup.silo ORDER BY hup.silo

Converting into a single scheduled query enabling reuse

It is important to note that a similar computation is done across the different panels across the two dashboards. You can define a separate scheduled query for each panel. Here you will see how you can further optimize your costs by defining one scheduled query whose results can be used to render all three panels. Following is the query that captures the aggregates that are computed and used for all the different panels. You will observe several important aspects in the definition of this scheduled query. • The flexibility and the power of the SQL surface area supported by scheduled queries, where you can use common table expressions, joins, case statements, etc. • You can use one scheduled query to compute the statistics at a finer granularity than a specific dashboard might need, and for all values that a
dashboard might use for different variables. For instance, you will see the aggregates are computed across a region, cell, silo, and microservice. Therefore, you can combine these to create region-level, or region, and microservice-level Patterns and examples 450 Amazon Timestream Developer Guide aggregates. Similarly, the same query computes the aggregates for all regions, cells, silos, and microservices. It allows you to apply filters on these columns to obtain the aggregates for a subset of the values. For instance, you can compute the aggregates for any one region, say us- east-1, or any one microservice say demeter or drill down into a specific deployment within a region, cell, silo, and microservice. This approach further optimizes your costs of maintaining the pre-computed aggregates. WITH microservice_cell_avg AS ( SELECT region, cell, silo, availability_zone, microservice_name, bin(time, 1h) as hour, AVG(cpu_user) AS microservice_avg_metric FROM raw_data.devops WHERE time BETWEEN bin(@scheduled_runtime, 1h) - 1h AND bin(@scheduled_runtime, 1h) + 1h AND measure_name = 'metrics' GROUP BY region, cell, silo, availability_zone, microservice_name, bin(time, 1h) ), instance_avg AS ( SELECT region, cell, silo, availability_zone, microservice_name, instance_name, bin(time, 1h) as hour, AVG(cpu_user) AS instance_avg_metric FROM raw_data.devops WHERE time BETWEEN bin(@scheduled_runtime, 1h) - 1h AND bin(@scheduled_runtime, 1h) + 1h AND measure_name = 'metrics' GROUP BY region, cell, silo, availability_zone, microservice_name, instance_name, bin(time, 1h) ), instances_above_threshold AS ( SELECT i.*, CASE WHEN i.instance_avg_metric > (1 + 0.2) * m.microservice_avg_metric THEN 1 ELSE 0 END AS high_utilization, CASE WHEN i.instance_avg_metric < (1 - 0.2) * m.microservice_avg_metric THEN 1 ELSE 0 END AS low_utilization FROM instance_avg i INNER JOIN microservice_cell_avg m ON i.region = m.region AND i.cell = m.cell AND i.silo = m.silo AND i.availability_zone = m.availability_zone AND m.microservice_name = i.microservice_name AND m.hour = i.hour ) SELECT region, cell, silo, microservice_name, hour, COUNT(*) AS num_hosts, SUM(high_utilization) AS high_utilization_hosts, SUM(low_utilization) AS low_utilization_hosts FROM instances_above_threshold GROUP BY region, cell, silo, microservice_name, hour Patterns and examples 451 Amazon Timestream Developer Guide The following is a scheduled query definition for the previous query. The schedule expression, it is configured to refresh every 30 mins, and refreshes the data for up to an hour back, again using the bin(@scheduled_runtime, 1h) construct to get the full hour's events. Depending on your application's freshness requirements, you can configure it to refresh more or less frequently. By using WHERE time BETWEEN bin(@scheduled_runtime, 1h) - 1h AND bin(@scheduled_runtime, 1h) + 1h, we can ensure that even if you are refreshing once every 15 minutes, you will get the full hour's data for the current hour and the previous hour. Later on, you will see how the three panels use these aggregates written to table deployment_cpu_stats_per_hr to visualize the metrics that are relevant to the panel. 
{ "Name": "MultiPT30mHighCpuDeploymentsPerHr", "QueryString": "WITH microservice_cell_avg AS ( SELECT region, cell, silo, availability_zone, microservice_name, bin(time, 1h) as hour, AVG(cpu_user) AS microservice_avg_metric FROM raw_data.devops WHERE time BETWEEN bin(@scheduled_runtime, 1h) - 1h AND bin(@scheduled_runtime, 1h) + 1h AND measure_name = 'metrics' GROUP BY region, cell, silo, availability_zone, microservice_name, bin(time, 1h) ), instance_avg AS ( SELECT region, cell, silo, availability_zone, microservice_name, instance_name, bin(time, 1h) as hour, AVG(cpu_user) AS instance_avg_metric FROM raw_data.devops WHERE time BETWEEN bin(@scheduled_runtime, 1h) - 1h AND bin(@scheduled_runtime, 1h) + 1h AND measure_name = 'metrics' GROUP BY region, cell, silo, availability_zone, microservice_name, instance_name, bin(time, 1h) ), instances_above_threshold AS ( SELECT i.*, CASE WHEN i.instance_avg_metric > (1 + 0.2) * m.microservice_avg_metric THEN 1 ELSE 0 END AS high_utilization, CASE WHEN i.instance_avg_metric < (1 - 0.2) * m.microservice_avg_metric THEN 1 ELSE 0 END AS low_utilization FROM instance_avg i INNER JOIN microservice_cell_avg m ON i.region = m.region AND i.cell = m.cell AND i.silo = m.silo AND i.availability_zone = m.availability_zone AND m.microservice_name = i.microservice_name AND m.hour = i.hour ) SELECT region, cell, silo, microservice_name, hour, COUNT(*) AS num_hosts, SUM(high_utilization) AS high_utilization_hosts, SUM(low_utilization) AS low_utilization_hosts FROM instances_above_threshold GROUP BY region, cell, silo, microservice_name, hour", "ScheduleConfiguration": { "ScheduleExpression": "cron(0/30 * * * ? *)" }, "NotificationConfiguration": { "SnsConfiguration": { "TopicArn": "******" } }, Patterns and examples 452 Amazon Timestream Developer Guide "TargetConfiguration": { "TimestreamConfiguration": { "DatabaseName": "derived", "TableName": "deployment_cpu_stats_per_hr", "TimeColumn": "hour", "DimensionMappings": [ { "Name": "region", "DimensionValueType": "VARCHAR" }, { "Name": "cell", "DimensionValueType": "VARCHAR" }, { "Name": "silo", "DimensionValueType": "VARCHAR" }, { "Name": "microservice_name", "DimensionValueType": "VARCHAR" } ], "MultiMeasureMappings": { "TargetMultiMeasureName": "cpu_user", "MultiMeasureAttributeMappings": [ { "SourceColumn": "num_hosts", "MeasureValueType": "BIGINT" }, { "SourceColumn": "high_utilization_hosts", "MeasureValueType": "BIGINT" }, { "SourceColumn": "low_utilization_hosts", "MeasureValueType": "BIGINT" } ] } } }, "ErrorReportConfiguration": { "S3Configuration" : { Patterns and examples 453 Amazon Timestream Developer Guide "BucketName" : "******", "ObjectKeyPrefix": "errors", "EncryptionOption": "SSE_S3" } }, "ScheduledQueryExecutionRoleArn": "******" } Dashboard from pre-computed results High CPU utilization hosts For the high utilization hosts, you will see how the different panels use the data from deployment_cpu_stats_per_hr to compute different aggregates necessary for the panels. For instance, this panels provides region-level information, so it reports aggregates grouped by region and microservice, without filtering any region or microservice.
"num_hosts", "MeasureValueType": "BIGINT" }, { "SourceColumn": "high_utilization_hosts", "MeasureValueType": "BIGINT" }, { "SourceColumn": "low_utilization_hosts", "MeasureValueType": "BIGINT" } ] } } }, "ErrorReportConfiguration": { "S3Configuration" : { Patterns and examples 453 Amazon Timestream Developer Guide "BucketName" : "******", "ObjectKeyPrefix": "errors", "EncryptionOption": "SSE_S3" } }, "ScheduledQueryExecutionRoleArn": "******" } Dashboard from pre-computed results High CPU utilization hosts For the high utilization hosts, you will see how the different panels use the data from deployment_cpu_stats_per_hr to compute different aggregates necessary for the panels. For instance, this panels provides region-level information, so it reports aggregates grouped by region and microservice, without filtering any region or microservice. WITH per_deployment_hosts AS ( SELECT region, cell, silo, microservice_name, AVG(num_hosts) AS num_hosts, AVG(high_utilization_hosts) AS high_utilization_hosts, AVG(low_utilization_hosts) AS low_utilization_hosts FROM "derived"."deployment_cpu_stats_per_hr" WHERE time BETWEEN from_milliseconds(1636567785437) AND from_milliseconds(1636654185437) AND measure_name = 'cpu_user' GROUP BY region, cell, silo, microservice_name ), per_deployment_high AS ( SELECT region, microservice_name, SUM(num_hosts) AS num_hosts, ROUND(SUM(high_utilization_hosts), 0) AS high_utilization_hosts, ROUND(SUM(low_utilization_hosts),0) AS low_utilization_hosts, ROUND(SUM(high_utilization_hosts) * 100.0 / SUM(num_hosts)) AS percent_high_utilization_hosts, Patterns and examples 454 Amazon Timestream Developer Guide ROUND(SUM(low_utilization_hosts) * 100.0 / SUM(num_hosts)) AS percent_low_utilization_hosts FROM per_deployment_hosts GROUP BY region, microservice_name ), per_region_ranked AS ( SELECT *, DENSE_RANK() OVER (PARTITION BY region ORDER BY percent_high_utilization_hosts DESC, high_utilization_hosts DESC) AS rank FROM per_deployment_high ) SELECT * FROM per_region_ranked WHERE rank <= 2 ORDER BY percent_high_utilization_hosts desc, rank asc Drill down into a microservice to find high CPU usage deploymentss This next example again uses the deployment_cpu_stats_per_hr derived table, but now applies a filter for a specific microservice (demeter in this example, since it reported high utilization hosts in the aggregate dashboard). This panel tracks the percentage of high CPU utilization hosts over time. 
Comparing a query on a base table with a query of scheduled query results

In this Timestream query example, we use the following schema, example queries, and outputs to compare a query on a base table with a query on a derived table of scheduled query results. With a well-planned scheduled query, you can get a derived table with fewer rows and other characteristics that can lead to faster queries than would be possible on the original base table. For a video that describes this scenario, see Improve query performance and reduce cost using scheduled queries in Amazon Timestream for LiveAnalytics.

For this example, we use the following scenario:
• Region – us-east-1
• Base table – "clickstream"."shopping"
• Derived table – "clickstream"."aggregate"

Base table

The following describes the schema for the base table.

Column         Type      Timestream for LiveAnalytics attribute type
channel        varchar   MULTI
description    varchar   MULTI
event          varchar   DIMENSION
ip_address     varchar   DIMENSION
measure_name   varchar   MEASURE_NAME
product        varchar   MULTI
product_id     varchar   MULTI
quantity       double    MULTI
query          varchar   MULTI
session_id     varchar   DIMENSION
user_group     varchar   DIMENSION
user_id        varchar   DIMENSION

The following describes the measures for the base table. A base table refers to a table in Timestream that the scheduled query is run on.
• measure_name – metrics
• data – multi
• dimensions: [ ( user_group, varchar ),( user_id, varchar ),( session_id, varchar ),( ip_address, varchar ),( event, varchar ) ]

Query on a base table

The following is an ad-hoc query that gathers counts by a 5-minute aggregate in a given time range.

SELECT BIN(time, 5m) as time, channel, product_id, SUM(quantity) as product_quantity FROM "clickstream"."shopping" WHERE BIN(time, 5m) BETWEEN '2023-05-11 10:10:00.000000000' AND '2023-05-11 10:30:00.000000000' AND channel = 'Social media' and product_id = '431412'
GROUP BY BIN(time, 5m),channel,product_id Output: duration:1.745 sec Bytes scanned: 29.89 MB Query Id: AEBQEANMHG7MHHBHCKJ3BSOE3QUGIDBGWCCP5I6J6YUW5CVJZ2M3JCJ27QRMM7A Row count:5 Scheduled query The following is a scheduled query that runs every 5 minutes. SELECT BIN(time, 5m) as time, channel as measure_name, product_id, product, SUM(quantity) as product_quantity FROM "clickstream"."shopping" WHERE time BETWEEN BIN(@scheduled_runtime, 5m) - 10m AND BIN(@scheduled_runtime, 5m) - 5m AND channel = 'Social media' GROUP BY BIN(time, 5m), channel, product_id, product Query on a derived table The following is an ad-hoc query on a derived table. A derived table refers to a Timestream table that contains the results of a scheduled query. SELECT time, measure_name, product_id,product_quantity Patterns and examples 458 Amazon Timestream Developer Guide FROM "clickstream"."aggregate" WHERE time BETWEEN '2023-05-11 10:10:00.000000000' AND '2023-05-11 10:30:00.000000000' AND measure_name = 'Social media' and product_id = '431412' Output: duration: 0.2960 sec Bytes scanned: 235.00 B QueryID: AEBQEANMHHAAQU4FFTT6CFM6UYXTL4SMLZV22MFP4KV2Z7IRVOPLOMLDD6BR33Q Row count: 5 Comparison The following is a comparison of the results of a query on a base table with a query on a derived table. The same query on a derived table that has aggregated results done through a scheduled query completes faster with fewer scanned bytes. These results show the value of using scheduled queries to aggregate data for faster queries. Query on base table Query on derived table Duration Bytes scanned 1.745 sec 29.89 MB Row count 5 0.2960 sec 235 bytes 5 Using UNLOAD to export query results to S3 from Timestream for LiveAnalytics Amazon Timestream for LiveAnalytics now enables you to export your query results to Amazon S3 in a cost-effective and secure way using the UNLOAD statement. Using the UNLOAD statement, you can now export time series data to selected S3 buckets in either Apache Parquet or Comma Separated Values (CSV) format, which provides flexibility to store, combine, and analyze your time series data with other services. The UNLOAD statement allows you to export the data in a compressed manner, which reduces the data transferred and storage space required. UNLOAD Using UNLOAD 459 Amazon Timestream Developer Guide also supports partitioning based on selected attributes when exporting the data, improving performance and reducing the processing time of downstream services accessing the data. In addition, you can use Amazon S3 managed keys (SSE-S3) or AWS Key Management Service (AWS KMS) managed keys (SSE-KMS) to encrypt your exported data. Benefits of UNLOAD from Timestream for LiveAnalytics The key benefits of using the UNLOAD statement are as follows. • Operational ease – With the UNLOAD statement, you can export gigabytes of data in a single query request in either Apache Parquet or CSV format, providing flexibility to select the best suited format for your downstream processing needs and making it easier to build data lakes. • Secure and Cost effective – UNLOAD statement provides the capability to export your data to an S3 bucket in a compressed manner and to encrypt (SSE-KMS or SSE_S3) your data using customer managed keys, reducing the data storage costs and protecting against unauthorized access. • Performance – Using the UNLOAD statement, you can partition the data when exporting to an S3 bucket. Partitioning the data enables downstream services to process the data in parallel, reducing their processing time. 
In addition, downstream services can process only the data they need, reducing the processing resources required and thereby costs associated. Use cases for UNLOAD from Timestream for LiveAnalytics You can use the UNLOAD statement to write data to your S3 bucket to the following. • Build Data Warehouse – You can export gigabytes of query results into S3 bucket and more easily add time series data into your data lake. You can use services such as Amazon Athena and Amazon Redshift to combine your time series data with other relevant data to derive complex business insights. • Build AI and ML data pipelines – The UNLOAD statement enables you to easily build data pipelines for your machine learning models that access time series data, making it easier to use time series data with services such as Amazon SageMaker and Amazon EMR. • Simplify ETL Processing – Exporting data into S3 buckets can simplify the process of performing Extract, Transform, Load (ETL) operations on the data, enabling you to seamlessly use third-party tools or AWS services such as AWS Glue to process and transform the data. Benefits 460 Developer Guide Amazon Timestream UNLOAD Concepts Syntax UNLOAD (SELECT statement) TO 's3://bucket-name/folder' WITH ( option = expression [, ...] ) where option is { partitioned_by = ARRAY[ col_name[,…] ] | format = [ '{ CSV | PARQUET }' ] | compression = [ '{ GZIP | NONE }' ] | encryption = [ '{ SSE_KMS | SSE_S3 }' ] | kms_key = '<string>' | field_delimiter ='<character>' | escaped_by = '<character>' | include_header = ['{true, false}'] | max_file_size = '<value>' | } Parameters SELECT statement The query statement
used to select and retrieve data from one or more Timestream for LiveAnalytics tables. (SELECT column 1, column 2, column 3 from database.table where measure_name = "ABC" and timestamp between ago (1d) and now() )

TO clause

TO 's3://bucket-name/folder' or TO 's3://access-point-alias/folder'

The TO clause in the UNLOAD statement specifies the destination for the output of the query results. You need to provide the full path, including either the Amazon S3 bucket-name or Amazon S3 access-point-alias with the folder location on Amazon S3 where Timestream for LiveAnalytics writes the output file objects. The S3 bucket should be owned by the same account and in the same region. In addition to the query result set, Timestream for LiveAnalytics writes the manifest and metadata files to the specified destination folder.

PARTITIONED_BY clause

partitioned_by = ARRAY [col_name[,…] , (default: none)

The partitioned_by clause is used in queries to group and analyze data at a granular level. When you export your query results to the S3 bucket, you can choose to partition the data based on one or more columns in the select query. When partitioning the data, the exported data is divided into subsets based on the partition column and each subset is stored in a separate folder. Within the results folder that contains your exported data, a sub-folder folder/results/partition column = partition value/ is automatically created. However, note that partitioned columns are not included in the output file. partitioned_by is not a mandatory clause in the syntax. If you choose to export the data without any partitioning, you can exclude the clause in the syntax.

Example: Assume you are monitoring clickstream data of your website and have 5 channels of traffic, namely Direct, Social Media, Organic Search, Other, and Referral. When exporting the data, you can choose to partition the data using the column Channel. Within your data folder, s3://bucketname/results, you will have five folders, each with its respective channel name, for instance, s3://bucketname/results/channel=Social Media/. Within this folder you will find the data of all the customers that landed on your website through the Social Media channel. Similarly, you will have other folders for the remaining channels. Exported data partitioned by Channel column

FORMAT

format = [ '{ CSV | PARQUET }' , default: CSV

The keywords to specify the format of the query results written to your S3 bucket. You can export the data either as a comma separated value (CSV) using a comma (,) as the default delimiter or in the Apache Parquet format, an efficient open columnar storage format for analytics.

COMPRESSION

compression = [ '{ GZIP | NONE }' ], default: GZIP

You can compress the exported data using the compression algorithm GZIP or have it uncompressed by specifying the NONE option.
ENCRYPTION encryption = [ '{ SSE_KMS | SSE_S3 }' ], default: SSE_S3 The output files on Amazon S3 are encrypted using your selected encryption option. In addition to your data, the manifest and metadata files are also encrypted based on your selected encryption option. We currently support SSE_S3 and SSE_KMS encryption. SSE_S3 is a server- side encryption with Amazon S3 encrypting the data using 256-bit advanced encryption standard (AES) encryption. SSE_KMS is a server-side encryption to encrypt data using customer- managed keys. Concepts 463 Amazon Timestream KMS_KEY kms_key = '<string>' Developer Guide KMS Key is a customer-defined key to encrypt exported query results. KMS Key is securely managed by AWS Key Management Service (AWS KMS) and used to encrypt data files on Amazon S3. FIELD_DELIMITER field_delimiter ='<character>' , default: (,) When exporting the data in CSV format, this field specifies a single ASCII character that is used to separate fields in the output file, such as pipe character (|), a comma (,), or tab (/t). The default delimiter for CSV files is a comma character. If a value in your data contains the chosen delimiter, the delimiter will be quoted with a quote character. For instance, if the value in your data contains Time,stream, then this value will be quoted as "Time,stream" in the exported data. The quote character used by Timestream for LiveAnalytics is double quotes ("). Avoid specifying the carriage return character (ASCII 13, hex 0D,
text '\r') or the line break character (ASCII 10, hex 0A, text '\n') as the FIELD_DELIMITER if you want to include headers in the CSV, since that will prevent many parsers from being able to parse the headers correctly in the resulting CSV output.

ESCAPED_BY

escaped_by = '<character>', default: (\)

When exporting the data in CSV format, this field specifies the character that should be treated as an escape character in the data file written to the S3 bucket. Escaping happens in the following scenarios:
1. If the value itself contains the quote character (") then it will be escaped using an escape character. For example, if the value is Time"stream, where (\) is the configured escape character, then it will be escaped as Time\"stream.
2. If the value contains the configured escape character, it will be escaped. For example, if the value is Time\stream, then it will be escaped as Time\\stream.

Note: If the exported output contains complex data types like Arrays, Rows, or Timeseries, it will be serialized as a JSON string. Following are examples of how values are escaped in CSV format (as serialized JSON strings):
• Array – actual value: [ 23,24,25 ]; escaped as: "[23,24,25]"
• Row – actual value: ( x=23.0, y=hello ); escaped as: "{\"x\":23.0,\"y\":\"hello\"}"
• Timeseries – actual value: [ ( time=1970-01-01 00:00:00.000000010, value=100.0 ), ( time=1970-01-01 00:00:00.000000012, value=120.0 ) ]; escaped as: "[{\"time\":\"1970-01-01 00:00:00.000000010Z\",\"value\":100.0},{\"time\":\"1970-01-01 00:00:00.000000012Z\",\"value\":120.0}]"

INCLUDE_HEADER

include_header = 'true' , default: 'false'

When exporting the data in CSV format, this field lets you include column names as the first row of the exported CSV data files. The accepted values are 'true' and 'false' and the default value is 'false'. Text transformation options such as escaped_by and field_delimiter apply to headers as well.

Note: When including headers, it is important that you not select a carriage return character (ASCII 13, hex 0D, text '\r') or a line break character (ASCII 10, hex 0A, text '\n') as the FIELD_DELIMITER, since that will prevent many parsers from being able to parse the headers correctly in the resulting CSV output.

MAX_FILE_SIZE

max_file_size = 'X[MB|GB]' , default: '78GB'

This field specifies the maximum size of the files that the UNLOAD statement creates in Amazon S3. The UNLOAD statement can create multiple files, but the maximum size of each file written to Amazon S3 will be approximately what is specified in this field. The value of the field must be between 16 MB and 78 GB, inclusive. You can specify it as an integer, such as 12GB, or as a decimal, such as 0.5GB or 24.7MB. The default value is 78 GB. The actual file size is approximated when the file is being written, so the actual maximum size may not be exactly equal to the number you specify.
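UNLOAD statements are submitted through the same Query API as SELECT statements. The following Python (boto3) sketch assembles an UNLOAD using a few of the options described above; the bucket name, prefix, and source table are placeholder assumptions, not values from your account, and the response handling follows the UNLOAD response shape described later in this section.

import boto3

query_client = boto3.client("timestream-query", region_name="us-east-1")

# Placeholder destination and source; replace with your own bucket and table.
UNLOAD_QUERY = """
UNLOAD (
    SELECT user_id, channel, product_id, quantity, time
    FROM "sample_clickstream"."sample_shopping"
    WHERE time BETWEEN ago(2d) AND now()
)
TO 's3://example-bucket/unload-results/'
WITH (
    partitioned_by = ARRAY['channel'],
    format = 'CSV',
    compression = 'GZIP',
    include_header = 'true',
    max_file_size = '1GB'
)
"""

# Keep calling with NextToken until the export completes and the final page arrives.
next_token, response = None, None
while True:
    kwargs = {"QueryString": UNLOAD_QUERY}
    if next_token:
        kwargs["NextToken"] = next_token
    response = query_client.query(**kwargs)
    next_token = response.get("NextToken")
    if not next_token:
        break

# The single result row reports the exported row count plus the S3 URIs of the
# metadata and manifest files written alongside the results.
row = response["Rows"][0]["Data"]
print("rows exported:", row[0]["ScalarValue"])
print("metadata file:", row[1]["ScalarValue"])
print("manifest file:", row[2]["ScalarValue"])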
What is written to my S3 bucket?

For every successfully executed UNLOAD query, Timestream for LiveAnalytics writes your query results, metadata file, and manifest file into the S3 bucket. If you have partitioned the data, you have all the partition folders in the results folder. The manifest file contains a list of the files that were written by the UNLOAD command. The metadata file contains information that describes the characteristics, properties, and attributes of the written data.

What is the exported file name?

The exported file name contains two components: the first component is the queryID and the second component is a unique identifier.

CSV files
S3://bucket_name/results/<queryid>_<UUID>.csv
S3://bucket_name/results/<partitioncolumn>=<partitionvalue>/<queryid>_<UUID>.csv
Compressed CSV file
S3://bucket_name/results/<partitioncolumn>=<partitionvalue>/<queryid>_<UUID>.gz
Parquet file
S3://bucket_name/results/<partitioncolumn>=<partitionvalue>/<queryid>_<UUID>.parquet
Metadata and Manifest files
S3://bucket_name/<queryid>_<UUID>_manifest.json
S3://bucket_name/<queryid>_<UUID>_metadata.json

As the data in CSV format is stored
at a file level, when you compress the data when exporting to S3, the file will have a “.gz” extension. However, the data in Parquet is compressed at column level so even when you compress the data while exporting, the file will still have .parquet extension. What information does each file contain? Manifest file The manifest file provides information on the list of files that are exported with the UNLOAD execution. The manifest file is available in the provided S3 bucket with a file name: s3:// <bucket_name>/<queryid>_<UUID>_manifest.json. The manifest file will contain the url of the files in the results folder, the number of records and size of the respective files, and the query metadata (which is total bytes and total rows exported to S3 for the query). { "result_files": [ { "url":"s3://my_timestream_unloads/ec2_metrics/ AEDAGANLHLBH4OLISD3CVOZZRWPX5GV2XCXRBKCVD554N6GWPWWXBP7LSG74V2Q_1448466917_szCL4YgVYzGXj2lS.gz", "file_metadata": { "content_length_in_bytes": 32295, "row_count": 10 } }, { Concepts 467 Amazon Timestream Developer Guide "url":"s3://my_timestream_unloads/ec2_metrics/ AEDAGANLHLBH4OLISD3CVOZZRWPX5GV2XCXRBKCVD554N6GWPWWXBP7LSG74V2Q_1448466917_szCL4YgVYzGXj2lS.gz", "file_metadata": { "content_length_in_bytes": 62295, "row_count": 20 } }, ], "query_metadata": { "content_length_in_bytes": 94590, "total_row_count": 30, "result_format": "CSV", "result_version": "Amazon Timestream version 1.0.0" }, "author": { "name": "Amazon Timestream", "manifest_file_version": "1.0" } } Metadata The metadata file provides additional information about the data set such as column name, column type, and schema. The metadata file is available in the provided S3 bucket with a file name: S3://bucket_name/<queryid>_<UUID>_metadata.json Following is an example of a metadata file. { "ColumnInfo": [ { "Name": "hostname", "Type": { "ScalarType": "VARCHAR" } }, { "Name": "region", "Type": { "ScalarType": "VARCHAR" } Concepts 468 Developer Guide Amazon Timestream }, { "Name": "measure_name", "Type": { "ScalarType": "VARCHAR" } }, { "Name": "cpu_utilization", "Type": { "TimeSeriesMeasureValueColumnInfo": { "Type": { "ScalarType": "DOUBLE" } } } } ], "Author": { "Name": "Amazon Timestream", "MetadataFileVersion": "1.0" } } The column information shared in the metadata file has same structure as ColumnInfo sent in Query API response for SELECT queries. Results Results folder contains your exported data in either Apache Parquet or CSV format. Example When you submit an UNLOAD query like below via Query API, UNLOAD(SELECT user_id, ip_address, event, session_id, measure_name, time, query, quantity, product_id, channel FROM sample_clickstream.sample_shopping WHERE time BETWEEN ago(2d) AND now()) TO 's3://my_timestream_unloads/withoutpartition/' WITH ( format='CSV', compression='GZIP') UNLOAD query response will have 1 row * 3 columns. 
Those 3 columns are: Concepts 469 Amazon Timestream Developer Guide • rows of type BIGINT - indicating the number of rows exported • metadataFile of type VARCHAR - which is the S3 URI of metadata file exported • manifestFile of type VARCHAR - which is the S3 URI of manifest file exported You will get the following response from Query API: { "Rows": [ { "Data": [ { "ScalarValue": "20" # No of rows in output across all files }, { "ScalarValue": "s3://my_timestream_unloads/withoutpartition/ AEDAAANGH3D7FYHOBQGQQMEAISCJ45B42OWWJMOT4N6RRJICZUA7R25VYVOHJIY_<UUID>_metadata.json" #Metadata file }, { "ScalarValue": "s3://my_timestream_unloads/withoutpartition/ AEDAAANGH3D7FYHOBQGQQMEAISCJ45B42OWWJMOT4N6RRJICZUA7R25VYVOHJIY_<UUID>_manifest.json" #Manifest file } ] } ], "ColumnInfo": [ { "Name": "rows", "Type": { "ScalarType": "BIGINT" } }, { "Name": "metadataFile", "Type": { "ScalarType": "VARCHAR" } }, { "Name": "manifestFile", "Type": { Concepts 470 Amazon Timestream Developer Guide "ScalarType": "VARCHAR" } } ], "QueryId": "AEDAAANGH3D7FYHOBQGQQMEAISCJ45B42OWWJMOT4N6RRJICZUA7R25VYVOHJIY", "QueryStatus": { "ProgressPercentage": 100.0, "CumulativeBytesScanned": 1000, "CumulativeBytesMetered": 10000000 } } Data types The UNLOAD statement supports all data types of Timestream for LiveAnalytics’s query language described in Supported data types except time and unknown. Prerequisites for UNLOAD from Timestream for LiveAnalytics Following are prerequisites for writing data to S3 using UNLOAD from Timestream for LiveAnalytics. • You must have permission to read data from the Timestream for LiveAnalytics table(s) to be used in an UNLOAD command. • You must have an Amazon S3 bucket in the same AWS Region as your Timestream for LiveAnalytics resources. • For the selected S3 bucket, ensure that the S3 bucket policy also has permissions to allow Timestream for LiveAnalytics to export the data. • The credentials used to execute UNLOAD query must have necessary AWS Identity and Access Management (IAM) permissions that allows Timestream for LiveAnalytics to write the data to S3. An example policy would be as follows: { "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Action": [ "timestream:Select", "timestream:ListMeasures", "timestream:WriteRecords", "timestream:Unload" Prerequisites 471 Amazon Timestream ], Developer Guide "Resource": "arn:aws:timestream:<region>:<account_id>:database/ <database_name>/table/<table_name>" }, { "Effect": "Allow", "Action": [ "s3:GetBucketAcl", "s3:PutObject", "s3:GetObjectMetadata", "s3:AbortMultipartUpload" ], "Resource": [ "arn:aws:s3:::<S3_Bucket_Created>", "arn:aws:s3:::<S3_Bucket_Created>/*" ] } ] } For additional context on these S3 write permissions, refer to the Amazon Simple Storage Service guide. If you are using a KMS key for encrypting the exported data, see the following for the additional IAM policies required. 
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "kms:DescribeKey", "kms:Decrypt", "kms:GenerateDataKey*" ], "Resource": "<account_id>-arn:aws:kms:<region>:<account_id>:key/*", "Condition": { "ForAnyValue:StringLike": { "kms:ResourceAliases": "alias/<Alias_For_Generated_Key>" } } }, { "Effect": "Allow", "Action": [ Prerequisites 472 Amazon Timestream Developer Guide "kms:CreateGrant" ], "Resource": "<account_id>-arn:aws:kms:<region>:<account_id>:key/*", "Condition": { "ForAnyValue:StringEquals": { "kms:EncryptionContextKeys": "aws:timestream:<database_name>" }, "Bool": { "kms:GrantIsForAWSResource": true }, "StringLike": { "kms:ViaService": "timestream.<region>.amazonaws.com" }, "ForAnyValue:StringLike": { "kms:ResourceAliases": "alias/<Alias_For_Generated_Key>" } }
} ] }

Best practices for UNLOAD from Timestream for LiveAnalytics

Following are best practices related to the UNLOAD command.
• The amount of data that can be exported to an S3 bucket using the UNLOAD command is not bounded. However, the query times out in 60 minutes and we recommend exporting no more than 60GB of data in a single query. If you need to export more than 60GB of data, split the job across multiple queries.
• While you can send thousands of requests to S3 to upload the data, it is recommended to parallelize the write operations to multiple S3 prefixes. Refer to the documentation here. The S3 API call rate could be throttled when multiple readers/writers access the same folder.
• Given the limit on S3 key length for defining a prefix, we recommend having bucket and folder names within 10-15 characters, especially when using the partitioned_by clause.
• When you receive a 4XX or 5XX for queries containing the UNLOAD statement, it is possible that partial results are written into the S3 bucket. Timestream for LiveAnalytics does not delete any data from your bucket. Before executing another UNLOAD query with the same S3 destination, we recommend manually deleting the files created by the failed query. You can identify the files written by a failed query with the corresponding QueryExecutionId. For failed queries, Timestream for LiveAnalytics does not export a manifest file to the S3 bucket.
• Timestream for LiveAnalytics uses multi-part upload to export query results to S3. When you receive a 4XX or 5XX from Timestream for LiveAnalytics for queries containing an UNLOAD statement, Timestream for LiveAnalytics does a best-effort abort of the multi-part upload, but it is possible that some incomplete parts are left behind. Hence, we recommend setting up automatic cleanup of incomplete multi-part uploads in your S3 bucket by following the guidelines here.

Recommendations for accessing the data in CSV format using a CSV parser
• CSV parsers don't allow you to have the same character as the delimiter, escape, and quote character.
• Some CSV parsers cannot interpret complex data types such as Arrays; we recommend interpreting those through a JSON deserializer.

Recommendations for accessing the data in Parquet format
1. If your use case requires UTF-8 character support in the schema (that is, column names), we recommend using the Parquet-mr library.
2. The timestamp in your results is represented as a 12 byte integer (INT96).
3. Timeseries will be represented as array<row<time, value>>; other nested structures will use the corresponding datatypes supported in the Parquet format.
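When post-processing an export, the manifest file is the authoritative list of what was written. The following Python (boto3) sketch reads a manifest and lists the result files it references; the manifest URI is a placeholder, and the field names follow the manifest example shown earlier in this section.

import json
import boto3

s3 = boto3.client("s3")

def read_s3_json(s3_uri):
    """Fetch an S3 object given an s3://bucket/key URI and parse it as JSON."""
    bucket, key = s3_uri.replace("s3://", "", 1).split("/", 1)
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    return json.loads(body)

# The manifest URI is returned in the manifestFile column of the UNLOAD query
# response; the value below is a placeholder.
manifest = read_s3_json("s3://example-bucket/unload-results/queryid_uuid_manifest.json")

for result_file in manifest["result_files"]:
    meta = result_file["file_metadata"]
    print(result_file["url"], meta["row_count"], meta["content_length_in_bytes"])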
Using the partitioned_by clause
• The column used in the partitioned_by field should be the last column in the select query. If more than one column is used in the partitioned_by field, the columns should be the last columns in the select query and in the same order as used in the partitioned_by field.
• The column values used to partition the data (partitioned_by field) can contain only ASCII characters. While Timestream for LiveAnalytics allows UTF-8 characters in the values, S3 supports only ASCII characters as object keys.

Example use case for UNLOAD from Timestream for LiveAnalytics

Assume you are monitoring user session metrics, traffic sources, and product purchases of your e-commerce website. You are using Timestream for LiveAnalytics to derive real-time insights into user behavior and product sales, and to perform marketing analytics on the traffic channels (organic search, social media, direct traffic, paid campaigns, and others) that drive customers to the website. Topics • Exporting the data without any partitions • Partitioning data by channel • Partitioning data by event •
Partitioning data by both channel and event • Manifest and metadata files • Using Glue crawlers to build Glue Data Catalog Exporting the data without any partitions You want to export the last two days of your data in CSV format. UNLOAD(SELECT user_id, ip_address, event, session_id, measure_name, time, query, quantity, product_id, channel FROM sample_clickstream.sample_shopping WHERE time BETWEEN ago(2d) AND now()) TO 's3://<bucket_name>/withoutpartition' WITH ( format='CSV', compression='GZIP') Partitioning data by channel You want to export the last two days of data in CSV format but would like to have the data from each traffic channel in a separate folder. To do this, you need to partition the data using the channel column as shown in the following. UNLOAD(SELECT user_id, ip_address, event, session_id, measure_name, time, query, quantity, product_id, channel FROM sample_clickstream.sample_shopping WHERE time BETWEEN ago(2d) AND now()) TO 's3://<bucket_name>/partitionbychannel/' WITH ( partitioned_by = ARRAY ['channel'], format='CSV', compression='GZIP') Example use case 475 Amazon Timestream Partitioning data by event Developer Guide You want to export the last two days of data in CSV format but would like to have the data for each event in a separate folder. To do this, you need to partition the data using the event column as shown in the following. UNLOAD(SELECT user_id, ip_address, channel, session_id, measure_name, time, query, quantity, product_id, event FROM sample_clickstream.sample_shopping WHERE time BETWEEN ago(2d) AND now()) TO 's3://<bucket_name>/partitionbyevent/' WITH ( partitioned_by = ARRAY ['event'], format='CSV', compression='GZIP') Partitioning data by both channel and event You want to export the last two days of data in CSV format but would like to have the data for each channel and within channel store each event in a separate folder. To do this, you need to partition the data using both channel and event column as shown in the following. UNLOAD(SELECT user_id, ip_address, session_id, measure_name, time, query, quantity, product_id, channel,event FROM sample_clickstream.sample_shopping WHERE time BETWEEN ago(2d) AND now()) TO 's3://<bucket_name>/partitionbychannelevent/' WITH ( partitioned_by = ARRAY ['channel','event'], format='CSV', compression='GZIP') Manifest and metadata files Manifest file The manifest file provides information on the list of files that are exported with the UNLOAD execution. The manifest file is available in the provided S3 bucket with a file name: S3:// bucket_name/<queryid>_<UUID>_manifest.json. The manifest file will contain the url of the files in the results folder, the number of records and size of the respective files, and the query metadata (which is total bytes and total rows exported to S3 for the query). 
Example use case 476 Amazon Timestream Developer Guide { "result_files": [ { "url":"s3://my_timestream_unloads/ec2_metrics/ AEDAGANLHLBH4OLISD3CVOZZRWPX5GV2XCXRBKCVD554N6GWPWWXBP7LSG74V2Q_1448466917_szCL4YgVYzGXj2lS.gz", "file_metadata": { "content_length_in_bytes": 32295, "row_count": 10 } }, { "url":"s3://my_timestream_unloads/ec2_metrics/ AEDAGANLHLBH4OLISD3CVOZZRWPX5GV2XCXRBKCVD554N6GWPWWXBP7LSG74V2Q_1448466917_szCL4YgVYzGXj2lS.gz", "file_metadata": { "content_length_in_bytes": 62295, "row_count": 20 } }, ], "query_metadata": { "content_length_in_bytes": 94590, "total_row_count": 30, "result_format": "CSV", "result_version": "Amazon Timestream version 1.0.0" }, "author": { "name": "Amazon Timestream", "manifest_file_version": "1.0" } } Metadata The metadata file provides additional information about the data set such as column name, column type, and schema. The metadata file is available in the provided S3 bucket with a file name: S3://bucket_name/<queryid>_<UUID>_metadata.json Following is an example of a metadata file. { Example use case 477 Amazon Timestream Developer Guide "ColumnInfo": [ { "Name": "hostname", "Type": { "ScalarType": "VARCHAR" } }, { "Name": "region", "Type": { "ScalarType": "VARCHAR" } }, { "Name": "measure_name", "Type": { "ScalarType": "VARCHAR" } }, { "Name": "cpu_utilization", "Type": { "TimeSeriesMeasureValueColumnInfo": { "Type": { "ScalarType": "DOUBLE" } } } } ], "Author": { "Name": "Amazon Timestream", "MetadataFileVersion": "1.0" } } The column information shared in the metadata file has same structure as ColumnInfo sent in Query API response for SELECT queries. Using Glue crawlers to build Glue Data Catalog 1. Login to your account with Admin credentials for the following validation. Example use case 478 Amazon Timestream Developer Guide 2. Create a Crawler for Glue Database using the guidelines provided here. Please note that the S3 folder to be provided in the datasource should be the UNLOAD result folder such as s3:// my_timestream_unloads/results. 3. Run the crawler following the guidelines here. 4. View the Glue table. • Go to AWS Glue → Tables. • You will see a new table created with table prefix provided while creating the crawler. • You can see the schema and partition information by clicking the table details view. The following are other AWS services and open-source projects that use the AWS Glue Data Catalog. • Amazon Athena – For more information, see Understanding tables, databases, and data catalogs in the Amazon Athena User Guide. • Amazon Redshift Spectrum – For more information, see Querying external data using Amazon Redshift Spectrum in the Amazon Redshift Database Developer Guide. • Amazon EMR – For more information, see Use resource-based policies for Amazon EMR access to AWS Glue Data Catalog in the Amazon EMR Management Guide. • AWS Glue Data Catalog client for Apache Hive metastore – For more information about this GitHub project, see AWS
Glue Data Catalog Client for Apache Hive Metastore.

Limits for UNLOAD from Timestream for LiveAnalytics

Following are limits related to the UNLOAD command.

• Concurrency for queries using the UNLOAD statement is 1 query per second (QPS). Exceeding the query rate might result in throttling.
• Queries containing an UNLOAD statement can export at most 100 partitions per query. We recommend checking the distinct count of the selected column before using it to partition the exported data.
• Queries containing an UNLOAD statement time out after 60 minutes.
• The maximum size of the files that the UNLOAD statement creates in Amazon S3 is 78 GB.

For other limits for Timestream for LiveAnalytics, see Quotas.

Using query insights to optimize queries in Amazon Timestream

Query insights is a performance tuning feature that helps you optimize your queries, improve their performance, and reduce costs. With query insights, you can assess the temporal (time-based) and spatial (partition key-based) pruning efficiency of your queries. Using query insights, you can also identify areas for improvement to enhance query performance. In addition, with query insights, you can evaluate how effectively your queries use time-based and partition key-based indexing to optimize data retrieval. To optimize query performance, it's essential to fine-tune both the temporal and spatial parameters that govern query execution.

Topics
• Benefits of query insights
• Optimizing data access in Amazon Timestream
• Enabling query insights in Amazon Timestream
• Optimizing queries using query insights response

Benefits of query insights

The following are the key benefits of using query insights:

• Identifying inefficient queries – Query insights provides information on the time-based and attribute-based pruning of the tables accessed by the query. This information helps you identify the tables that are sub-optimally accessed.
• Optimizing your data model and partitioning – You can use the query insights information to assess and fine-tune your data model and partitioning strategy.
• Tuning queries – Query insights highlights opportunities to use indexes more effectively.

Optimizing data access in Amazon Timestream

You can optimize the data access patterns in Amazon Timestream using the Timestream partitioning scheme or data organization techniques.

Topics
• Timestream partitioning scheme
• Data organization

Timestream partitioning scheme

Amazon Timestream uses a highly scalable partitioning scheme where each Timestream table can have hundreds, thousands, or even millions of independent partitions. A highly available partition tracking and indexing service manages the partitioning, minimizing the impact of failures and making the system more resilient.
Data organization Timestream stores each data point it ingests in a single partition. As you ingest data into a Timestream table, Timestream automatically creates partitions based on the timestamps, partition key, and other context attributes in the data. In addition to partitioning the data on time (temporal partitioning), Timestream also partitions the data based on the selected partitioning key and other dimensions (spatial partitioning). This approach is designed to distribute write traffic and allow for effective pruning of data for queries. Optimizing data access 481 Amazon Timestream Developer Guide The query insights feature provides valuable insights into the pruning efficiency of the query, which includes query spatial coverage and query temporal coverage. Topics • QuerySpatialCoverage • QueryTemporalCoverage QuerySpatialCoverage The QuerySpatialCoverage metric provides insights into the spatial coverage of the executed query and the table with the most inefficient spatial pruning. This information can help you identify areas of improvement in the partitioning strategy to enhance spatial pruning. The value for the QuerySpatialCoverage metric ranges between 0 and 1. The lower the value of the metric, the more optimal the query pruning on the spatial axis. For example, a value of 0.1 indicates that the query scans 10% of the spatial axis. A value of 1 indicates that the query scans 100% of the spatial axis. Example Using query insights to analyze a query's spatial coverage Say that you've a Timestream database that stores weather data. Assume that the temperature is recorded every hour from weather stations located across different states in United States. Imagine that you choose State as the customer-defined partitioning key (CDPK) to partition the data by state. Suppose that you execute a query to retrieve the average
temperature for all weather stations in California between 2 PM and 4 PM on a specific day. The following example shows the query for this scenario.

SELECT AVG(temperature)
FROM "weather_data"."hourly_weather"
WHERE time >= '2024-10-01 14:00:00'
 AND time < '2024-10-01 16:00:00'
 AND state = 'CA';

Using the query insights feature, you can analyze the query's spatial coverage. Imagine that the QuerySpatialCoverage metric returns a value of 0.02. This means that the query only scanned 2% of the spatial axis, which is efficient. In this case, the query was able to effectively prune the spatial range, only retrieving data from California and ignoring data from other states.

On the contrary, if the QuerySpatialCoverage metric returned a value of 0.8, it would indicate that the query scanned 80% of the spatial axis, which is less efficient. This might suggest that the partitioning strategy needs to be refined to improve spatial pruning. For example, you could select city or region as the partition key instead of state. By analyzing the QuerySpatialCoverage metric, you can identify opportunities to optimize your partitioning strategy and improve the performance of your queries. The following image shows poor spatial pruning.

To improve spatial pruning efficiency, you can do one or both of the following:

• Add measure_name, the default partitioning key, or use the CDPK predicates in your query.
• If you've already added the attributes mentioned in the previous point, remove functions around these attributes or clauses, such as LIKE.

QueryTemporalCoverage

The QueryTemporalCoverage metric provides insights into the temporal range scanned by the executed query, including the table with the largest time range scanned. The value for the QueryTemporalCoverage metric is a time range represented in nanoseconds. The lower the value of this metric, the more optimal the query pruning on the temporal range. For example, a query scanning the last few minutes of data is more performant than a query scanning the entire time range of the table.

Example

Say that you have a Timestream database that stores IoT sensor data, with measurements taken every minute from devices located in a manufacturing plant. Assume that you've partitioned your data by device_ID. Suppose that you execute a query to retrieve the average sensor reading for a specific device over the last 30 minutes. The following example shows the query for this scenario.

SELECT AVG(sensor_reading)
FROM "sensor_data"."factory_1"
WHERE device_id = 'DEV_123'
 AND time >= NOW() - INTERVAL 30 MINUTE and time < NOW();

Using the query insights feature, you can analyze the temporal range scanned by the query. Imagine the QueryTemporalCoverage metric returns a value of 1800000000000 nanoseconds (30 minutes).
This means that the query only scanned the last 30 minutes of data, which is a relatively narrow temporal range. This is a good sign because it indicates that the query was able to effectively prune the temporal partitioning and only retrieved the requested data. On the contrary, if the QueryTemporalCoverage metric returned a value of 1 year in nanoseconds, it indicates that the query scanned one year of time range in the table, which is less efficient. This might suggest that the query is not optimized for temporal pruning, and you could improve it by adding time filters. The following image shows poor temporal pruning. Optimizing data access 484 Amazon Timestream Developer Guide To improve temporal pruning, we recommend that you do one or all of the following: • Add the missing time predicates in the query and make sure that the time predicates are pruning the desired time window. • Remove functions, such as MAX(), around the time predicates. • Add time predicates to all the sub queries. This is important if your sub queries are joining large tables or performing complex operations. Enabling query insights in Amazon Timestream You can enable query insights for your queries with insights delivered directly through the query response. Enabling query insights doesn't require additional infrastructure or incur any additional costs. When you enable query insights, it returns query performance related metadata fields in addition to query results as part of your query response. You can use this information to tune your queries to improve query performance
and reduce query cost. For information about enabling query insights, see Run a query. To view examples of the responses returned by enabling query insights, see Examples for scheduled queries.

Note
• When you enable query insights, it rate limits the query to 1 query per second (QPS). To avoid performance impacts, we strongly recommend that you enable query insights only during the evaluation phase of your queries, before deploying them to production.
• The insights provided in query insights are eventually consistent, which means they might change as new data is continuously ingested into the tables.

Optimizing queries using query insights response

Say that you're using Amazon Timestream for LiveAnalytics to monitor energy consumption across various locations. Imagine that you have two tables in your database named raw-metrics and aggregate-metrics. The raw-metrics table stores detailed energy data at the device level and contains the following columns:

• Timestamp
• State, for example, Washington
• Device ID
• Energy consumption

The data for this table is collected and stored at a minute-by-minute granularity. The table uses State as the CDPK.

The aggregate-metrics table stores the result of a scheduled query that aggregates the energy consumption data across all devices hourly. This table contains the following columns:

• Timestamp
• State, for example, Washington
• Total energy consumption

The aggregate-metrics table stores this data at an hourly granularity. The table uses State as the CDPK.

Topics
• Querying energy consumption for the last 24 hours
• Optimizing the query for temporal range
• Optimizing the query for spatial coverage
• Improved query performance

Querying energy consumption for the last 24 hours

Say that you want to extract the total energy consumed in Washington over the last 24 hours. To find this data, you can leverage the strengths of both tables: raw-metrics and aggregate-metrics. The aggregate-metrics table provides hourly energy consumption data for the last 23 hours, while the raw-metrics table offers minute-granular data for the last one hour. By querying across both tables, you can get a complete and accurate picture of energy consumption in Washington over the last 24 hours.

SELECT am.time, am.state, am.total_energy_consumption, rm.time, rm.state, rm.device_id, rm.energy_consumption
FROM "metrics"."aggregate-metrics" am
LEFT JOIN "metrics"."raw-metrics" rm ON am.state = rm.state
WHERE rm.time >= ago(1h) and rm.time < now()

This example query is provided for illustrative purposes only and might not work as is. It's intended to demonstrate the concept, but you might need to modify it to fit your specific use case or environment.
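To obtain the query insights metrics discussed in the next sections, the query is run with insights enabled through the Query API. The following is a minimal Python (boto3) sketch; the QueryInsights request parameter and the QueryInsightsResponse response field are the ones described in Run a query, so verify the exact names and enum values in the API reference for your SDK version before relying on them.

import boto3

query_client = boto3.client("timestream-query")

QUERY = """
SELECT am.time, am.state, am.total_energy_consumption,
       rm.time, rm.state, rm.device_id, rm.energy_consumption
FROM "metrics"."aggregate-metrics" am
LEFT JOIN "metrics"."raw-metrics" rm ON am.state = rm.state
WHERE rm.time >= ago(1h) and rm.time < now()
"""

# Enable query insights for this run only; result rows may be paginated via NextToken.
response = query_client.query(
    QueryString=QUERY,
    QueryInsights={"Mode": "ENABLED_WITH_RATE_CONTROL"},
)

insights = response.get("QueryInsightsResponse", {})
print("Spatial coverage:", insights.get("QuerySpatialCoverage"))
print("Temporal range:", insights.get("QueryTemporalRange"))
print("Tables accessed:", insights.get("QueryTableCount"))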
After executing this query, you might notice that the query response time is slower than expected. To identify the root cause of this performance issue, you can use the query insights feature to analyze the query's performance and optimize its execution. The following example shows the query insights response. queryInsightsResponse={ QuerySpatialCoverage: { Max: { Value: 1.0, TableArn: arn:aws:timestream:us-east-1:123456789012:database/ metrics/table/raw-metrics, PartitionKey: [State] Optimizing queries 487 Amazon Timestream Developer Guide } }, QueryTemporalRange: { Max: { Value:31540000000000000 //365 days, TableArn: arn:aws:timestream:us-east-1:123456789012:database/ metrics/table/aggregate-metrics } }, QueryTableCount: 2, OutputRows: 83, OutputBytes: 590 The query insights response provides the following information: • Temporal range: The query scanned an excessive 365-day temporal range for the aggregate- metrics table. This indicates an inefficient use of temporal filtering. • Spatial coverage: The query scanned the entire spatial range (100%) of the raw-metrics table. This suggests that the spatial filtering isn't being utilized effectively. If your query accesses more than one table, query insights provides the metrics for the table with most sub-optimal access pattern. Optimizing the query for temporal range Based on the query insights response, you can optimize the query for temporal range as shown in the following example. SELECT am.time, am.state, am.total_energy_consumption, rm.time, rm.state, rm.device_id, rm.energy_consumption FROM "metrics"."aggregate-metrics" am LEFT JOIN "metrics"."raw-metrics" rm ON am.state = rm.state WHERE am.time >= ago(23h) and am.time < now() AND rm.time >= ago(1h) and rm.time < now() AND rm.state = 'Washington' If you run the QueryInsights command again, it returns the following response. queryInsightsResponse={ Optimizing queries 488 Amazon Timestream Developer Guide QuerySpatialCoverage: { Max: { Value: 1.0, TableArn: arn:aws:timestream:us-east-1:123456789012:database/ metrics/table/aggregate-metrics, PartitionKey: [State] } }, QueryTemporalRange: { Max: {
    Value: 82800000000000 //23 hours,
    TableArn: arn:aws:timestream:us-east-1:123456789012:database/metrics/table/aggregate-metrics
   }
  },
  QueryTableCount: 2,
  OutputRows: 83,
  OutputBytes: 590

This response shows that the spatial coverage for the aggregate-metrics table is still 100%, which is inefficient. The following section shows how to optimize the query for spatial coverage.

Optimizing the query for spatial coverage

Based on the query insights response, you can optimize the query for spatial coverage as shown in the following example.

SELECT am.time, am.state, am.total_energy_consumption, rm.time, rm.state, rm.device_id, rm.energy_consumption
FROM "metrics"."aggregate-metrics" am
LEFT JOIN "metrics"."raw-metrics" rm ON am.state = rm.state
WHERE am.time >= ago(23h) and am.time < now()
 AND am.state = 'Washington'
 AND rm.time >= ago(1h) and rm.time < now()
 AND rm.state = 'Washington'

If you run the QueryInsights command again, it returns the following response.

queryInsightsResponse={
  QuerySpatialCoverage: {
   Max: {
Working with AWS Backup 490 Amazon Timestream Developer Guide To use the functionality, you must opt-in to allow AWS Backup to protect your Timestream resources. Opt-in choices apply to the specific account and AWS Region, so you might have to opt in to multiple Regions using the same account. For more information on AWS Backup, see the AWS Backup Developer Guide. Data Protection functionality available through AWS Backup includes the following. Scheduled backups—You can set up regularly scheduled backups of your Timestream for LiveAnalytics tables using backup plans. Cross-account and cross-Region copying—You can automatically copy your backups to another backup vault in a different AWS Region or account, which allows you to support your data protection requirements. Cold storage tiering—You can configure your backups to implement life cycle rules to delete or transition backups to colder storage. This can help you optimize your backup costs. Tags—You can automatically tag your backups for billing and cost allocation purposes. Encryption—Your backup data is stored in the AWS Backup vault. This allows you to encrypt and secure your backups by using an AWS KMS key that is independent from your Timestream for LiveAnalytics table encryption key. Secure backups using the WORM model—You can use AWS Backup Vault Lock to enable a write- once-read-many (WORM) setting for your backups. With AWS Backup Vault Lock, you can add an additional layer of defense that protects backups from inadvertent or malicious delete operations, changes to backup retention periods, and updates to lifecycle settings. To learn more, see AWS Backup Vault Lock. The data protection functionality is available in all regions To learn more about the functionality, see the AWS Backup Developer Guide. Backing up and restoring Timestream tables: How it works You can create backups of your Amazon Timestream tables. This section provides an overview of what happens during the backup and restore process. Topics • Backups • Restores How it works 491 Amazon Timestream Backups Developer Guide You can use the on-demand backup feature to create full backups of your Amazon Timestream for LiveAnalytics tables. This section provides an overview of what happens during
the backup and restore process.

You can create a backup of your Timestream data at a table granularity. You can initiate a backup of the selected table using either the Timestream console, or the AWS Backup console, SDK, or CLI. The backup is created asynchronously and all the data in the table up to the backup initiation time is included in the backup. However, there is a possibility that some of the data ingested into the table while the backup is in progress might also be included in the backup. To protect your data, you can either create a one-time on-demand backup or schedule a recurring backup of your table.

While a backup is in progress, you cannot do the following.

• Pause or cancel the backup operation.
• Delete the source table of the backup.
• Disable backups on a table if a backup for that table is in progress.

Once configured, AWS Backup provides automated backup schedules, retention management, and lifecycle management, removing the need for custom scripts and manual processes. For more information, see the AWS Backup Developer Guide.

All Timestream for LiveAnalytics backups are incremental in nature: the first backup of a table is a full backup and every subsequent backup of the same table is an incremental backup, copying only the changes to the data since the last backup. As the data in Timestream for LiveAnalytics is stored in a collection of partitions, all the partitions that changed either due to ingesting new data or updates to the existing data since the last backup are copied during subsequent backups.

If you are using the Timestream for LiveAnalytics console, the backups created for all the resources in the account are listed in the Backups tab. Additionally, the backups are also listed in the Table details.

Restores

You can restore a table from the Timestream for LiveAnalytics console, or the AWS Backup console, SDK, or AWS CLI. You can either restore the entire data from your backup, or configure the table retention settings to restore select data. When you initiate a restore, you can configure the following table settings.

• Database Name
• Table Name
• Memory store retention
• Magnetic store retention
• Enable Magnetic storage writes
• S3 error logs location (optional)
• IAM role that AWS Backup will assume when restoring the backup

The preceding configurations are independent of the source table. To restore all the data in your backup, we recommend that you configure the new table settings such that the sum of the memory store retention period and the magnetic store retention period is greater than the difference between the oldest timestamp and now.

When you select a backup that is incremental to restore, all data (incremental + underlying full data) is restored. Upon successful restore, the table is in the active state and you can perform ingestion and/or query operations on the restored table.
However, you cannot perform these operations while the restore is in progress. Once restored, the table is similar to any other table in your account.

Example Restore all data from a backup

This example has the following assumptions.

• Oldest timestamp—August 1, 2021 0:00:00
• Now—November 9, 2022 0:00:00

To restore all data from a backup, enter and compare values as follows.

1. Enter Memory store retention and Magnetic store retention. For example, assume these values.
 • Memory store retention—12 hours
 • Magnetic store retention—500 days
2. Find the sum of Memory store retention and Magnetic store retention.
 12 hours + (500 * 24 hours) = 12 hours + 12,000 hours = 12,012 hours
3. Find the difference between Oldest timestamp and now.
 November 9, 2022 0:00:00 - August 1, 2021 0:00:00 = 465 days = 465 * 24 hours = 11,160 hours
4. Ensure that the sum of the retention values in the second step is greater than the difference of the times in the third step. Adjust the retention times if necessary.
 12,012 > 11,160 is true
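The comparison in steps 2 through 4 can also be scripted. The following is a small Python sketch that reproduces the arithmetic above with the example's values; substitute your own table's oldest timestamp and retention settings.

from datetime import datetime, timedelta

# Values from the example above.
oldest_timestamp = datetime(2021, 8, 1)
now = datetime(2022, 11, 9)

memory_store_retention = timedelta(hours=12)
magnetic_store_retention = timedelta(days=500)

total_retention = memory_store_retention + magnetic_store_retention  # 12,012 hours
data_span = now - oldest_timestamp                                    # 465 days = 11,160 hours

# True when the configured retention covers all data in the backup.
print(total_retention > data_span)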
Example Restore select data from a backup

This example has the following assumption.

• Now—November 9, 2022 0:00:00

To restore only select data from a backup, enter and compare values as follows.

1. Determine the earliest timestamp required. For example, assume December 4, 2021 0:00:00.
2. Find the difference between the earliest timestamp required and now.
 November 9, 2022 0:00:00 - December 4, 2021 0:00:00 = 340 days = 340 * 24 hours = 8,160 hours
3. Enter the desired value for Memory store retention. For example, enter 12 hours.
4. Subtract that value from the difference in the second step.
 8,160 hours - 12 hours = 8,148 hours
5. Enter that value for Magnetic store retention.

You can copy a backup of your Timestream for LiveAnalytics table data to a different AWS Region and then restore it in that new Region. You can copy and then restore backups between AWS commercial Regions, and AWS GovCloud (US) Regions. You pay only for the data you copy from the source Region and the data you restore to a new table in the destination Region.

Once the table is restored, you must manually set up the following on the restored table.

• AWS Identity and Access Management (IAM) policies
• Tags
• Scheduled Queries

Restore times are directly related to the configuration of your tables. These include the size of your tables, the number of underlying partitions, the amount of data restored to memory store, and other variables. A best practice when planning for disaster recovery is to regularly document average restore completion times and establish how these times affect your overall Recovery Time Objective (RTO).

All backup and restore console and API actions are captured and recorded in AWS CloudTrail for logging, continuous monitoring, and auditing.

Creating backups of Amazon Timestream tables

This section describes how to enable AWS Backup and create on-demand and scheduled backups for Amazon Timestream.

Topics
• Enabling AWS Backup to protect Timestream for LiveAnalytics data
• Creating on-demand backups
• Scheduled backups

Enabling AWS Backup to protect Timestream for LiveAnalytics data

You must enable AWS Backup to use it with Timestream for LiveAnalytics. To enable AWS Backup in the Timestream for LiveAnalytics console, perform the following steps.

1. Sign in to the AWS Management Console.
2. A pop-up banner appears at the top of your Timestream for LiveAnalytics dashboard page to enable AWS Backup to support Timestream for LiveAnalytics data. Otherwise, from the navigation pane, choose Backups.
3. In the Backup window, you will see the banner to enable AWS Backup. Choose Enable.

Data Protection through AWS Backup is now available for your Timestream for LiveAnalytics tables. To enable it through AWS Backup instead, refer to the AWS Backup documentation for console and programmatic instructions.

If you choose to stop AWS Backup from protecting your Timestream for LiveAnalytics data after it has been enabled, log in through the AWS Backup console and move the toggle to the left.
If you can't enable or disable the AWS Backup features, your AWS admin may need to perform those actions.

Creating on-demand backups

To create an on-demand backup of a Timestream for LiveAnalytics table, follow these steps.

1. Sign in to the AWS Management Console.
2. In the navigation pane on the left side of the console, choose Backups.
3. Choose Create on-demand backup.
4. Continue to select the settings in the backup window.
5. You can either choose to create a backup now, which initiates a backup immediately, or select a backup window to start the backup.
6. Select the lifecycle management policy of your backup. You can transition your backup data into cold storage, where you have to retain the backup for a minimum of 90 days, and you can set the required retention period for your backup. You can either select an existing vault or choose to create a new backup vault, which takes you to the AWS Backup console to create the vault.
7. Select the appropriate IAM role.
8. If you want to assign one or more tags to your on-demand backup, enter a key and optional value, and choose Add tag.
9. Choose to create an on-demand backup. This takes you to the Backup page, where you will see a list of jobs.
10. Choose the Backup job ID for the resource that you chose to back up to see the details of that job.
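The same on-demand backup can be started programmatically through the AWS Backup API. The following is a minimal Python (boto3) sketch; the vault name, table ARN, IAM role, and lifecycle values are placeholders.

import boto3

backup = boto3.client("backup")

response = backup.start_backup_job(
    BackupVaultName="my-backup-vault",  # placeholder vault
    ResourceArn="arn:aws:timestream:us-east-1:123456789012:database/mydb/table/mytable",  # placeholder table ARN
    IamRoleArn="arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",  # placeholder role
    Lifecycle={
        "MoveToColdStorageAfterDays": 90,  # cold storage requires at least 90 days of retention
        "DeleteAfterDays": 365,
    },
)
print("Backup job started:", response["BackupJobId"])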
Scheduled backups

To schedule a backup, refer to Create a scheduled backup.

Restoring a backup of an Amazon Timestream table

This section describes how to restore a backup of an Amazon Timestream table.

Topics
• Restoring a Timestream for LiveAnalytics table from AWS Backup
• Restoring a Timestream for LiveAnalytics table to another Region or account

Restoring a Timestream for LiveAnalytics table from AWS Backup

To restore your Timestream for LiveAnalytics table from AWS Backup using the Timestream for LiveAnalytics console, follow these steps.

1. Sign in to the AWS Management Console.
2. In the navigation pane on the left side of the console, choose Backups.
3. To restore a resource, choose the radio button next to the recovery point ID of the resource. In the upper-right corner of the pane, choose Restore.
4. Enter the table configuration settings, namely Database name and Table name. Note that the restored table name should be different from the original source table name.
5. Configure the memory and magnetic store retention settings.
6. For Restore role, choose the IAM role that AWS Backup will assume for this restore.
7. Choose Restore backup. A message at the top of the page provides information about the restore job.

Note
You are charged for restoring the entire backup irrespective of the configured memory and magnetic store retention periods. However, once the restore is completed, your restored table will only contain the data within the configured retention periods.

Restoring a Timestream for LiveAnalytics table to another Region or account

To restore a Timestream for LiveAnalytics table to another Region or account, you will first need to copy the backup to that new Region or account. In order to copy to another account, that account must first grant you permission. After you have copied your Timestream for LiveAnalytics backup to the new Region or account, it can be restored with the process in the previous section.

Copying a backup of an Amazon Timestream table

You can make a copy of a current backup. You can copy backups to multiple AWS accounts or AWS Regions on demand or automatically as part of a scheduled backup plan. Cross-Region replication is especially valuable if you have business continuity or compliance requirements to store backups a minimum distance away from your production data. Cross-account backups are useful for securely copying your backups to one or more AWS accounts in your organization for operational or security reasons. If your original backup is inadvertently deleted, you can copy the backup from its destination account to its source account, and then start the restore. Before you can do this, you must have two accounts that belong to the same organization in the Organizations service and the required permissions for the accounts.
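Copies can also be started through the AWS Backup API. The following is a minimal Python (boto3) sketch of a cross-Region or cross-account copy; the recovery point ARN, vault names, and IAM role are placeholders.

import boto3

backup = boto3.client("backup")

response = backup.start_copy_job(
    RecoveryPointArn="arn:aws:backup:us-east-1:123456789012:recovery-point:EXAMPLE",            # placeholder
    SourceBackupVaultName="my-backup-vault",                                                     # placeholder
    DestinationBackupVaultArn="arn:aws:backup:us-west-2:123456789012:backup-vault:my-dr-vault",  # placeholder
    IamRoleArn="arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",        # placeholder
)
print("Copy job started:", response["CopyJobId"])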
When you copy an incremental backup into another account or Region, the associated full backup is also copied. Copies inherit the source backup's configuration unless you specify otherwise, with one exception: if you specify that your new copy should "Never" expire, the new copy still inherits its source expiration date. If you want your new backup copy to be permanent, either set your source backups to never expire, or specify your new copy to expire 100 years after its creation.

To copy a backup from the Timestream console, follow these steps.

1. Sign in to the AWS Management Console.
2. In the navigation pane on the left side of the console, choose Backups.
3. Choose the radio button next to the recovery point ID of the resource. In the upper-right corner of the pane, select Actions and choose Copy.
4. Select Continue to AWS Backup and follow the steps for Cross account backup.

Copying on-demand and scheduled backups across accounts and Regions is not currently supported natively in the Timestream for LiveAnalytics console, so you have to navigate to AWS Backup to perform the operation.

Deleting backups

This section describes how to delete a backup of a Timestream for LiveAnalytics table. To delete a backup from the Timestream console, follow these steps.

1. Sign in to the AWS Management Console.
2. In the navigation pane on the left side of the console, choose Backups.
3. Choose the radio button next to the recovery point ID of the resource. In the upper-right corner of the pane, select Actions and choose Delete.
4. Select Continue to AWS Backup and follow the steps for deleting backups at Deleting backups.

Note
When you delete a backup that is incremental, only the incremental backup is deleted and the underlying full backup is not deleted.

Quota and limits

AWS Backup limits the backups to one concurrent backup per resource. Therefore, additional scheduled or on-demand backup requests for the resource are queued and will start only after the existing backup job is completed. If the backup job is not started or completed within the backup window, the request fails. For more information about AWS Backup limits, see AWS Backup Limits in the AWS Backup Developer Guide.

When creating backups, you can run up to four concurrent backups per account. Similarly, you can run one concurrent restore per account. When you initiate more than four backup jobs simultaneously, only four backup jobs are initiated and the remaining jobs are periodically retried. Once initiated, if a backup job is not completed within the configured backup window duration, the backup job fails. If the failed backup job is an on-demand backup, you can retry the backup; for scheduled backups, the job is attempted in the following schedule.

Customer-defined partition keys

Amazon Timestream for LiveAnalytics customer-defined partition keys is a feature in Timestream for LiveAnalytics that enables customers to define their own partition keys for their tables. Partitioning is a technique used to distribute data across multiple physical storage units, allowing for faster and more efficient data retrieval. With customer-defined partition keys, customers can create a partitioning schema that better aligns with their query patterns and use cases.

With Timestream for LiveAnalytics customer-defined partition keys, customers can choose one dimension name as a partition key for their tables. This allows for more flexibility in defining the partitioning schema for their data. By selecting the right partition key, customers can optimize their data model, improve their query performance, and reduce query latency.
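Ingestion itself does not change when a table uses a customer-defined partition key; records simply carry the chosen dimension (the enforcement options that control whether it is mandatory are described later in this section). The following is a minimal Python (boto3) write sketch in which the database, table, and dimension names are hypothetical.

import time
import boto3

write_client = boto3.client("timestream-write")

record = {
    "Dimensions": [
        {"Name": "device_id", "Value": "DEV_123"},  # the dimension chosen as the partition key
        {"Name": "plant", "Value": "plant-1"},
    ],
    "MeasureName": "sensor_reading",
    "MeasureValue": "23.5",
    "MeasureValueType": "DOUBLE",
    "Time": str(int(time.time() * 1000)),  # milliseconds (the default TimeUnit)
}

write_client.write_records(
    DatabaseName="sensor_data",  # hypothetical database
    TableName="factory_1",       # hypothetical table
    Records=[record],
)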
Topics • Using customer-defined partition keys • Getting started with customer-defined partition keys • Checking partitioning schema configuration • Updating partitioning schema configuration • Advantages of customer-defined partition keys • Limitations of customer-defined partition keys • Customer-defined partition keys and low cardinality dimensions • Creating partition keys for existing tables • Timestream for LiveAnalytics schema validation with custom composite partition keys Using customer-defined partition keys If you have a well-defined query pattern with high cardinality dimensions and require low query latency, a Timestream for LiveAnalytics customer-defined partition key can be a useful tool to enhance your data model. For instance, if you are a retail company tracking customer interactions on your website, the main access patterns would likely be by customer ID and timestamp. By Customer-defined partition keys 500 Amazon Timestream Developer Guide defining customer ID as the partition key, your data can be distributed evenly, allowing for reduced latency, ultimately improving the user experience. Another example is in the healthcare industry, where wearable devices collect sensor data to track patients' vital signs. The main access pattern would be by Device ID and timestamp, with high cardinality on both dimensions. By defining Device ID as the partition key, can optimize your query execution and ensure a sustained long term query performance. In summary, Timestream for LiveAnalytics customer-defined partition keys are most useful when you have a clear query pattern, high cardinality dimensions, and need low latency for your queries. By defining a partition key that aligns with your query pattern, you can optimize your query execution and ensure a sustained long term performance query performance. Getting started with customer-defined partition keys From the console, choose Tables and create a new table. You can also use an SDK to access the CreateTable action to create new tables that can include a customer-defined partition key. Create a table with a dimension type partition key You can use the following code snippets to
most useful when you have a clear query pattern, high cardinality dimensions, and need low latency for your queries. By defining a partition key that aligns with your query pattern, you can optimize your query execution and ensure a sustained long term performance query performance. Getting started with customer-defined partition keys From the console, choose Tables and create a new table. You can also use an SDK to access the CreateTable action to create new tables that can include a customer-defined partition key. Create a table with a dimension type partition key You can use the following code snippets to create a table with a dimension type partition key. Java public void createTableWithDimensionTypePartitionKeyExample() { System.out.println("Creating table"); CreateTableRequest createTableRequest = new CreateTableRequest(); createTableRequest.setDatabaseName(DATABASE_NAME); createTableRequest.setTableName(TABLE_NAME); final RetentionProperties retentionProperties = new RetentionProperties() .withMemoryStoreRetentionPeriodInHours(HT_TTL_HOURS) .withMagneticStoreRetentionPeriodInDays(CT_TTL_DAYS); createTableRequest.setRetentionProperties(retentionProperties); // Can specify enforcement level with OPTIONAL or REQUIRED final List<PartitionKey> partitionKeyWithDimensionAndOptionalEnforcement = Collections.singletonList(new PartitionKey() .withName(COMPOSITE_PARTITION_KEY_DIM_NAME) .withType(PartitionKeyType.DIMENSION) .withEnforcementInRecord(PartitionKeyEnforcementLevel.OPTIONAL)); Schema schema = new Schema(); Getting started with customer-defined partition keys 501 Amazon Timestream Developer Guide schema.setCompositePartitionKey(partitionKeyWithDimensionAndOptionalEnforcement); createTableRequest.setSchema(schema); try { writeClient.createTable(createTableRequest); System.out.println("Table [" + TABLE_NAME + "] successfully created."); } catch (ConflictException e) { System.out.println("Table [" + TABLE_NAME + "] exists on database [" + DATABASE_NAME + "] . Skipping database creation"); } } Java v2 public void createTableWithDimensionTypePartitionKeyExample() { System.out.println("Creating table"); final RetentionProperties retentionProperties = RetentionProperties.builder() .memoryStoreRetentionPeriodInHours(HT_TTL_HOURS) .magneticStoreRetentionPeriodInDays(CT_TTL_DAYS) .build(); // Can specify enforcement level with OPTIONAL or REQUIRED final List<PartitionKey> partitionKeyWithDimensionAndOptionalEnforcement = Collections.singletonList(PartitionKey .builder() .name(COMPOSITE_PARTITION_KEY_DIM_NAME) .type(PartitionKeyType.DIMENSION) .enforcementInRecord(PartitionKeyEnforcementLevel.OPTIONAL) .build()); final Schema schema = Schema.builder() .compositePartitionKey(partitionKeyWithDimensionAndOptionalEnforcement).build(); final CreateTableRequest createTableRequest = CreateTableRequest.builder() .databaseName(DATABASE_NAME) .tableName(TABLE_NAME) .retentionProperties(retentionProperties) .schema(schema) .build(); try { writeClient.createTable(createTableRequest); System.out.println("Table [" + TABLE_NAME + "] successfully created."); Getting started with customer-defined partition keys 502 Amazon Timestream Developer Guide } catch (ConflictException e) { System.out.println("Table [" + TABLE_NAME + "] exists on database [" + DATABASE_NAME + "] . 
Skipping database creation"); } } Go v1 func createTableWithDimensionTypePartitionKeyExample(){ // Can specify enforcement level with OPTIONAL or REQUIRED partitionKeyWithDimensionAndOptionalEnforcement := []*timestreamwrite.PartitionKey{ { Name: aws.String(CompositePartitionKeyDimName), EnforcementInRecord: aws.String("OPTIONAL"), Type: aws.String("DIMENSION"), }, } createTableInput := ×treamwrite.CreateTableInput{ DatabaseName: aws.String(*databaseName), TableName: aws.String(*tableName), // Enable MagneticStoreWrite for Table MagneticStoreWriteProperties: ×treamwrite.MagneticStoreWriteProperties{ EnableMagneticStoreWrites: aws.Bool(true), // Persist MagneticStoreWrite rejected records in S3 MagneticStoreRejectedDataLocation: ×treamwrite.MagneticStoreRejectedDataLocation{ S3Configuration: ×treamwrite.S3Configuration{ BucketName: aws.String("timestream-sample-bucket"), ObjectKeyPrefix: aws.String("TimeStreamCustomerSampleGo"), EncryptionOption: aws.String("SSE_S3"), }, }, }, Schema: ×treamwrite.Schema{ CompositePartitionKey: partitionKeyWithDimensionAndOptionalEnforcement, } } _, err := writeSvc.CreateTable(createTableInput) } Getting started with customer-defined partition keys 503 Amazon Timestream Go v2 Developer Guide func (timestreamBuilder TimestreamBuilder) CreateTableWithDimensionTypePartitionKeyExample() error { partitionKeyWithDimensionAndOptionalEnforcement := []types.PartitionKey{ { Name: aws.String(CompositePartitionKeyDimName), EnforcementInRecord: types.PartitionKeyEnforcementLevelOptional, Type: types.PartitionKeyTypeDimension, }, } _, err := timestreamBuilder.WriteSvc.CreateTable(context.TODO(), ×treamwrite.CreateTableInput{ DatabaseName: aws.String(databaseName), TableName: aws.String(tableName), MagneticStoreWriteProperties: &types.MagneticStoreWriteProperties{ EnableMagneticStoreWrites: aws.Bool(true), // Persist MagneticStoreWrite rejected records in S3 MagneticStoreRejectedDataLocation: &types.MagneticStoreRejectedDataLocation{ S3Configuration: &types.S3Configuration{ BucketName: aws.String(s3BucketName), EncryptionOption: "SSE_S3", }, }, }, Schema: &types.Schema{ CompositePartitionKey: partitionKeyWithDimensionAndOptionalEnforcement, }, }) if err != nil { fmt.Println("Error:") fmt.Println(err) } else { fmt.Println("Create table is successful") } return err } Getting started with customer-defined partition keys 504 Amazon Timestream Python Developer Guide def create_table_with_measure_name_type_partition_key(self): print("Creating table") retention_properties = { 'MemoryStoreRetentionPeriodInHours': HT_TTL_HOURS, 'MagneticStoreRetentionPeriodInDays': CT_TTL_DAYS } partitionKey_with_measure_name = [ {'Type': 'MEASURE'} ] schema = { 'CompositePartitionKey': partitionKey_with_measure_name } try: self.client.create_table(DatabaseName=DATABASE_NAME, TableName=TABLE_NAME, RetentionProperties=retention_properties, Schema=schema) print("Table [%s] successfully created." % TABLE_NAME) except self.client.exceptions.ConflictException: print("Table [%s] exists on database [%s]. Skipping table creation" % ( TABLE_NAME, DATABASE_NAME)) except Exception as err: print("Create table failed:", err) Checking partitioning schema configuration You can check how a table configuration for partitioning schema in a couple of ways. From the console, choose Databases and choose the table to check. You can also use an SDK to access the DescribeTable action. Describe a table with a partition key You can use the following code snippets to describe a table with a partition key. 
Java public void describeTable() { System.out.println("Describing table"); final DescribeTableRequest describeTableRequest = new DescribeTableRequest(); Checking partitioning schema configuration 505 Amazon Timestream Developer Guide describeTableRequest.setDatabaseName(DATABASE_NAME); describeTableRequest.setTableName(TABLE_NAME); try { DescribeTableResult result = amazonTimestreamWrite.describeTable(describeTableRequest); String tableId = result.getTable().getArn(); System.out.println("Table " + TABLE_NAME + " has id " + tableId); // If table is created with composite partition key, it can be described with // System.out.println(result.getTable().getSchema().getCompositePartitionKey()); } catch (final Exception e) { System.out.println("Table " + TABLE_NAME + " doesn't exist = " + e); throw e; } } The following is an example output. 1. Table has dimension type partition key [{Type: DIMENSION,Name: hostId,EnforcementInRecord: OPTIONAL}] 2. Table has measure name type partition key [{Type: MEASURE,}] 3. Getting composite partition key from a table created without specifying composite partition key [{Type: MEASURE,}] Java v2 public void describeTable() { System.out.println("Describing table"); final DescribeTableRequest describeTableRequest = DescribeTableRequest.builder() .databaseName(DATABASE_NAME).tableName(TABLE_NAME).build(); try { Checking partitioning schema configuration 506 Amazon Timestream Developer Guide DescribeTableResponse response = writeClient.describeTable(describeTableRequest); String tableId = response.table().arn(); System.out.println("Table " + TABLE_NAME + " has id " + tableId); // If table is created with composite partition key, it can be described with // System.out.println(response.table().schema().compositePartitionKey()); } catch (final Exception e) { System.out.println("Table " + TABLE_NAME + " doesn't exist = " + e); throw e; }
key [{Type: MEASURE,}] 3. Getting composite partition key from a table created without specifying composite partition key [{Type: MEASURE,}] Java v2 public void describeTable() { System.out.println("Describing table"); final DescribeTableRequest describeTableRequest = DescribeTableRequest.builder() .databaseName(DATABASE_NAME).tableName(TABLE_NAME).build(); try { Checking partitioning schema configuration 506 Amazon Timestream Developer Guide DescribeTableResponse response = writeClient.describeTable(describeTableRequest); String tableId = response.table().arn(); System.out.println("Table " + TABLE_NAME + " has id " + tableId); // If table is created with composite partition key, it can be described with // System.out.println(response.table().schema().compositePartitionKey()); } catch (final Exception e) { System.out.println("Table " + TABLE_NAME + " doesn't exist = " + e); throw e; } } The following is an example output. 1. Table has dimension type partition key [PartitionKey(Type=DIMENSION, Name=hostId, EnforcementInRecord=OPTIONAL)] 2. Table has measure name type partition key [PartitionKey(Type=MEASURE)] 3. Getting composite partition key from a table created without specifying composite partition key will return [PartitionKey(Type=MEASURE)] Go v1 <tablistentry> <tabname> Go </tabname> <tabcontent> <programlisting language="go"></programlisting> </tabcontent> </tablistentry> The following is an example output. { Checking partitioning schema configuration 507 Amazon Timestream Table: { Developer Guide Arn: "arn:aws:timestream:us-west-2:533139590831:database/devops/table/ host_metrics_dim_pk_1", CreationTime: 2023-05-31 01:52:00.511 +0000 UTC, DatabaseName: "devops", LastUpdatedTime: 2023-05-31 01:52:00.511 +0000 UTC, MagneticStoreWriteProperties: { EnableMagneticStoreWrites: true, MagneticStoreRejectedDataLocation: { S3Configuration: { BucketName: "timestream-sample-bucket-west", EncryptionOption: "SSE_S3", ObjectKeyPrefix: "TimeStreamCustomerSampleGo" } } }, RetentionProperties: { MagneticStoreRetentionPeriodInDays: 73000, MemoryStoreRetentionPeriodInHours: 6 }, Schema: { CompositePartitionKey: [{ EnforcementInRecord: "OPTIONAL", Name: "hostId", Type: "DIMENSION" }] }, TableName: "host_metrics_dim_pk_1", TableStatus: "ACTIVE" } } Go v2 func (timestreamBuilder TimestreamBuilder) DescribeTable() (*timestreamwrite.DescribeTableOutput, error) { describeTableInput := ×treamwrite.DescribeTableInput{ DatabaseName: aws.String(databaseName), TableName: aws.String(tableName), } describeTableOutput, err := timestreamBuilder.WriteSvc.DescribeTable(context.TODO(), describeTableInput) Checking partitioning schema configuration 508 Amazon Timestream if err != nil { Developer Guide fmt.Printf("Failed to describe table with Error: %s", err.Error()) } else { fmt.Printf("Describe table is successful : %s\n", JsonMarshalIgnoreError(*describeTableOutput)) // If table is created with composite partition key, it will be included in the output } return describeTableOutput, err } The following is an example output. 
{ "Table": { "Arn":"arn:aws:timestream:us-east-1:351861611069:database/cdpk-wr-db/table/ host_metrics_dim_pk", "CreationTime":"2023-05-31T22:36:10.66Z", "DatabaseName":"cdpk-wr-db", "LastUpdatedTime":"2023-05-31T22:36:10.66Z", "MagneticStoreWriteProperties":{ "EnableMagneticStoreWrites":true, "MagneticStoreRejectedDataLocation":{ "S3Configuration":{ "BucketName":"error-configuration-sample-s3-bucket-cq8my", "EncryptionOption":"SSE_S3", "KmsKeyId":null,"ObjectKeyPrefix":null } } }, "RetentionProperties":{ "MagneticStoreRetentionPeriodInDays":73000, "MemoryStoreRetentionPeriodInHours":6 }, "Schema":{ "CompositePartitionKey":[{ "Type":"DIMENSION", "EnforcementInRecord":"OPTIONAL", "Name":"hostId" }] }, "TableName":"host_metrics_dim_pk", Checking partitioning schema configuration 509 Amazon Timestream Developer Guide "TableStatus":"ACTIVE" }, "ResultMetadata":{} } Python def describe_table(self): print('Describing table') try: result = self.client.describe_table(DatabaseName=DATABASE_NAME, TableName=TABLE_NAME) print("Table [%s] has id [%s]" % (TABLE_NAME, result['Table']['Arn'])) # If table is created with composite partition key, it can be described with # print(result['Table']['Schema']) except self.client.exceptions.ResourceNotFoundException: print("Table doesn't exist") except Exception as err: print("Describe table failed:", err) The following is an example output. 1. Table has dimension type partition key [{'CompositePartitionKey': [{'Type': 'DIMENSION', 'Name': 'hostId', 'EnforcementInRecord': 'OPTIONAL'}]}] 2. Table has measure name type partition key [{'CompositePartitionKey': [{'Type': 'MEASURE'}]}] 3. Getting composite partition key from a table created without specifying composite partition key [{'CompositePartitionKey': [{'Type': 'MEASURE'}]}] Checking partitioning schema configuration 510 Amazon Timestream Developer Guide Updating partitioning schema configuration You can update table configuration for partitioning schema with an SDK with access the UpdateTable action. Update a table with a partition key You can use the following code snippets to update a table with a partition key. 
Java public void updateTableCompositePartitionKeyEnforcement() { System.out.println("Updating table"); UpdateTableRequest updateTableRequest = new UpdateTableRequest(); updateTableRequest.setDatabaseName(DATABASE_NAME); updateTableRequest.setTableName(TABLE_NAME); // Can update enforcement level for dimension type partition key with OPTIONAL or REQUIRED enforcement final List<PartitionKey> partitionKeyWithDimensionAndRequiredEnforcement = Collections.singletonList(new PartitionKey() .withName(COMPOSITE_PARTITION_KEY_DIM_NAME) .withType(PartitionKeyType.DIMENSION) .withEnforcementInRecord(PartitionKeyEnforcementLevel.REQUIRED)); Schema schema = new Schema(); schema.setCompositePartitionKey(partitionKeyWithDimensionAndRequiredEnforcement); updateTableRequest.withSchema(schema); writeClient.updateTable(updateTableRequest); System.out.println("Table updated"); Java v2 public void updateTableCompositePartitionKeyEnforcement() { System.out.println("Updating table"); // Can update enforcement level for dimension type partition key with OPTIONAL or REQUIRED enforcement final List<PartitionKey> partitionKeyWithDimensionAndRequiredEnforcement = Collections.singletonList(PartitionKey .builder() .name(COMPOSITE_PARTITION_KEY_DIM_NAME) Updating partitioning schema configuration 511 Amazon Timestream Developer Guide .type(PartitionKeyType.DIMENSION) .enforcementInRecord(PartitionKeyEnforcementLevel.REQUIRED) .build()); final Schema schema = Schema.builder() .compositePartitionKey(partitionKeyWithDimensionAndRequiredEnforcement).build(); final UpdateTableRequest updateTableRequest = UpdateTableRequest.builder() .databaseName(DATABASE_NAME).tableName(TABLE_NAME).schema(schema).build(); writeClient.updateTable(updateTableRequest); System.out.println("Table updated"); Go v1 // Update table partition key enforcement attribute updateTableInput := ×treamwrite.UpdateTableInput{ DatabaseName: aws.String(*databaseName), TableName: aws.String(*tableName), // Can update enforcement level for dimension type partition key with OPTIONAL or REQUIRED enforcement Schema: ×treamwrite.Schema{ CompositePartitionKey: []*timestreamwrite.PartitionKey{ { Name: aws.String(CompositePartitionKeyDimName), EnforcementInRecord: aws.String("REQUIRED"), Type: aws.String("DIMENSION"), }, }}, } updateTableOutput, err := writeSvc.UpdateTable(updateTableInput) if err != nil { fmt.Println("Error:") fmt.Println(err) } else { fmt.Println("Update table is successful, below is the output:") fmt.Println(updateTableOutput) } Go v2 // Update table partition key enforcement attribute Updating partitioning schema configuration 512 Amazon Timestream Developer Guide updateTableInput := ×treamwrite.UpdateTableInput{ DatabaseName: aws.String(*databaseName), TableName: aws.String(*tableName), // Can update enforcement level for dimension type partition key with OPTIONAL or REQUIRED enforcement Schema: &types.Schema{ CompositePartitionKey: []types.PartitionKey{ { Name: aws.String(CompositePartitionKeyDimName), EnforcementInRecord: types.PartitionKeyEnforcementLevelRequired, Type: types.PartitionKeyTypeDimension, }, }}, } updateTableOutput, err := timestreamBuilder.WriteSvc.UpdateTable(context.TODO(), updateTableInput) if err != nil { fmt.Println("Error:") fmt.Println(err) } else { fmt.Println("Update table is successful, below is the output:") fmt.Println(updateTableOutput) } Python def update_table(self): print('Updating table') try: # Can update enforcement level for dimension type partition key with OPTIONAL or REQUIRED enforcement 
            partition_key_with_dimension_and_required_enforcement = [
                {
                    'Type': 'DIMENSION',
                    'Name': COMPOSITE_PARTITION_KEY_DIM_NAME,
                    'EnforcementInRecord': 'REQUIRED'
                }
            ]
            schema = {
                'CompositePartitionKey': partition_key_with_dimension_and_required_enforcement
            }
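            # Note: COMPOSITE_PARTITION_KEY_DIM_NAME is the sample application's constant for
            # the dimension chosen as the partition key. As described later in this section,
            # switching EnforcementInRecord from 'OPTIONAL' to 'REQUIRED' means that subsequent
            # writes which omit this dimension are rejected with a 4xx error.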
            self.client.update_table(DatabaseName=DATABASE_NAME, TableName=TABLE_NAME,
                                     Schema=schema)
            print('Table updated.')
        except Exception as err:
            print('Update table failed:', err)

Advantages of customer-defined partition keys

Enhanced query performance: Customer-defined partition keys enable you to optimize your query execution and improve overall query performance. By defining partition keys that align with your query patterns, you can minimize data scanning and optimize data pruning, resulting in lower query latency.

Better long-term performance predictability: Customer-defined partition keys allow customers to distribute data evenly across partitions, improving the efficiency of data management. This ensures that your query performance remains stable as your stored data scales over time.

Limitations of customer-defined partition keys

As a Timestream for LiveAnalytics user, it's important to keep in mind the limitations of a customer-defined partition key. First, it requires a good understanding of your workload and query patterns. You should have a clear idea of which dimensions are most frequently used as the main filtering conditions in queries and have high cardinality, so that you can make the most effective use of partition keys. Second, partition keys must be defined at the time of table creation and cannot be added to existing tables. This means that you should carefully consider your partitioning strategy before creating a table to ensure that it aligns with your business needs. Lastly, once a table has been created, you cannot change its partition key. You should therefore thoroughly test and evaluate your partitioning strategy before committing to it. With these limitations in mind, a customer-defined partition key can greatly improve query performance and long-term performance predictability.

Customer-defined partition keys and low cardinality dimensions

If you decide to use a partition key with very low cardinality, such as a specific region or state, the data for other entities such as customerID, ProductCategory, and others could end up spread across too many partitions, sometimes with little or no data present. This can lead to inefficient query execution and decreased performance.
To avoid this, we recommend you choose dimensions that are not only part of your key filtering condition but have higher cardinality. This will help ensure that the data is evenly distributed across the partitions and improve query performance. Creating partition keys for existing tables If you already have tables in Timestream for LiveAnalytics and want to use customer-defined partition keys, you will need to migrate your data into a new table with the desired partitioning schema definition. This can be done using export to S3 and batch load together, which involves exporting the data from the existing table to S3, modifying the data to include the partition key (if necessary) and adding headers to your CSV files, and then importing the data into a new table with the desired partitioning schema defined. Keep in mind that this method can be time consuming and costly, especially for large tables. Alternatively, you can use scheduled queries to migrate your data to a new table with the desired partitioning schema. This method involves creating a scheduled query that reads from the existing table and writes to the new table. The scheduled query can be set up to run on a regular basis until all the data has been migrated. Keep in mind that you will be charged for reading and writing the data during the migration process. Timestream for LiveAnalytics schema validation with custom composite partition keys Schema validation in Timestream for LiveAnalytics helps ensure that data ingested into the database complies with the specified schema, minimizing ingestion errors and improving data quality. In particular, schema validation is especially useful when adopting customer-defined partition key with the goal of optimizing your query performance. What is Timestream for LiveAnalytics schema validation with customer-defined partition keys? Timestream for LiveAnalytics schema validation is a feature that validates data being ingested into a Timestream for LiveAnalytics table based on a predefined schema. This schema defines the data model, including partition key, data types, and constraints for the records being inserted. Creating partition keys for existing tables 515 Amazon Timestream Developer Guide When using a customer-defined partition key, schema validation becomes even more
crucial. A customer-defined partition key determines how your data is stored in Timestream for LiveAnalytics. By validating the incoming data against the schema with a custom partition key, you can enforce data consistency, detect errors early, and improve the overall quality of the data stored in Timestream for LiveAnalytics.

How to Use Timestream for LiveAnalytics schema validation with custom composite partition keys

To use Timestream for LiveAnalytics schema validation with custom composite partition keys, follow these steps:

Think about what your query patterns will look like: To properly choose and define the schema for your Timestream for LiveAnalytics table, you should start with your query requirements.

Specify custom composite partition keys: When creating the table, specify a custom partition key. This key determines the attribute that will be used to partition the table data. You can choose between dimension keys and measure keys for partitioning. A dimension key partitions data based on a dimension name, while a measure key partitions data based on the measure name.

Set enforcement levels: To ensure proper data partitioning and the benefits that come with it, Amazon Timestream for LiveAnalytics allows you to set enforcement levels for each partition key in your schema. The enforcement level determines whether the partition key dimension is required or optional when ingesting records. You can choose between two options: REQUIRED, which means the partition key must be present in the ingested record, and OPTIONAL, which means the partition key doesn't have to be present. It is recommended that you use the REQUIRED enforcement level when using a customer-defined partition key to ensure that your data is properly partitioned and you get the full benefits of this feature. Additionally, you can change the enforcement level configuration at any time after the schema creation to adjust to your data ingestion requirements.

Ingest data: When ingesting data into the Timestream for LiveAnalytics table, the schema validation process checks the records against the defined schema with custom composite partition keys. If the records do not adhere to the schema, Timestream for LiveAnalytics returns a validation error.

Handle validation errors: In case of validation errors, Timestream for LiveAnalytics will return a ValidationException or a RejectedRecordsException, depending on the type of error. Make sure to handle these exceptions in your application and take appropriate action, such as fixing the incorrect records and retrying the ingestion, as shown in the sketch below.
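As an illustration of that error handling, the following minimal Python (boto3) sketch writes a record that carries the partition key dimension and surfaces rejected records. The database, table, and dimension values are placeholders, and hostId is assumed to be the customer-defined partition key.

import time
import boto3

write_client = boto3.client("timestream-write")

record = {
    "Dimensions": [
        {"Name": "hostId", "Value": "host-1"},      # assumed customer-defined partition key
        {"Name": "region", "Value": "us-east-1"},
    ],
    "MeasureName": "cpu_utilization",
    "MeasureValue": "13.5",
    "MeasureValueType": "DOUBLE",
    "Time": str(int(time.time() * 1000)),           # milliseconds since the epoch
}

try:
    write_client.write_records(
        DatabaseName="mydatabase", TableName="mytable", Records=[record]
    )
except write_client.exceptions.RejectedRecordsException as err:
    # Individual records that fail schema validation (for example, a missing
    # REQUIRED partition key dimension) are reported here; fix and retry them.
    for rejected in err.response.get("RejectedRecords", []):
        print("Record", rejected["RecordIndex"], "rejected:", rejected["Reason"])
except write_client.exceptions.ValidationException as err:
    print("Request-level validation error:", err)

After correcting the rejected records, retry the ingestion with another write_records call.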
Timestream for LiveAnalytics schema validation with custom composite partition keys 516 Amazon Timestream Developer Guide Update enforcement levels: If necessary, you can update the enforcement level of partition keys after table creation using the UpdateTable action. However, it's important to note that some aspects of the partition key configuration, such as the name, and type, cannot be changed after table creation. If you change the enforcement level from REQUIRED to OPTIONAL, all records will be accepted regardless of the presence of the attribute selected as the customer-defined partition key. Conversely, if you change the enforcement level from OPTIONAL to REQUIRED, you may start seeing 4xx write errors for records that don't meet this condition. Therefore, it's essential to choose the appropriate enforcement level for your use case when creating your table, based on your data's partitioning requirements. When to use Timestream for LiveAnalytics schema validation with custom composite partition keys Timestream for LiveAnalytics schema validation with custom composite partition keys should be used in scenarios where data consistency, quality, and optimized partitioning are crucial. By enforcing a schema during data ingestion, you can prevent errors and inconsistencies that might lead to incorrect analysis or loss of valuable insights. Interaction with batch load jobs When setting up a batch load job to import data into a table with a customer-defined partition key, there are a few scenarios that could affect the process: 1. If the enforcement level is set to OPTIONAL, an alert will be displayed on the console during the creation flow if the partition key is not mapped during job configuration. This alert will not appear when using the API or CLI. 2. If the enforcement level is set to REQUIRED, the job creation will be rejected unless the partition key is mapped to a source data column. 3. If the enforcement
level is changed to REQUIRED after the job is created, the job will continue to execute, but any records that do not have the proper mapping for the partition key will be rejected with a 4xx error.

Interaction with scheduled query

When setting up a scheduled query job for calculating and storing aggregates, rollups, and other forms of preprocessed data into a table with a customer-defined partition key, there are a few scenarios that could affect the process:

1. If the enforcement level is set to OPTIONAL, an alert will be displayed if the partition key is not mapped during job configuration. This alert will not appear when using the API or CLI.
2. If the enforcement level is set to REQUIRED, the job creation will be rejected unless the partition key is mapped to a source data column.
3. If the enforcement level is changed to REQUIRED after the job is created and the scheduled query results do not contain the partition key dimension, all subsequent runs of the job will fail.

Adding tags and labels to resources

You can label Amazon Timestream for LiveAnalytics resources using tags. Tags let you categorize your resources in different ways—for example, by purpose, owner, environment, or other criteria. Tags can help you do the following:

• Quickly identify a resource based on the tags that you assigned to it.
• See AWS bills broken down by tags.

Tagging is supported by AWS services like Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), Timestream for LiveAnalytics, and more. Efficient tagging can provide cost insights by enabling you to create reports across services that carry a specific tag.

To get started with tagging, do the following:

1. Understand Tagging restrictions.
2. Create tags by using Tagging operations.

Finally, it is good practice to follow optimal tagging strategies. For information, see AWS Tagging Strategies.

Tagging restrictions

Each tag consists of a key and a value, both of which you define. The following restrictions apply:

• Each Timestream for LiveAnalytics table can have only one tag with the same key. If you try to add an existing tag, the existing tag value is updated to the new value.
• A value acts as a descriptor within a tag category. In Timestream for LiveAnalytics, the value cannot be empty or null.
• Tag keys and values are case sensitive.
• The maximum key length is 128 Unicode characters.
• The maximum value length is 256 Unicode characters.
• The allowed characters are letters, white space, and numbers, plus the following special characters: + - = . _ : /
• The maximum number of tags per resource is 50.
• AWS-assigned tag names and values are automatically assigned the aws: prefix, which you can't assign. AWS-assigned tag names don't count toward the tag limit of 50.
User-assigned tag names have the prefix user: in the cost allocation report. • You can't backdate the application of a tag. Tagging operations You can add, list, edit, or delete tags for databases and tables using the Amazon Timestream for LiveAnalytics console, query language, or the AWS Command Line Interface (AWS CLI). Topics • Adding tags to new or existing databases and tables using the console Adding tags to new or existing databases and tables using the console You can use the Timestream for LiveAnalytics console to add tags to new databases, tables and scheduled queries when you create them. You can also add, edit, or delete tags for existing tables. To tag databases when creating them (console) 1. Open the Timestream console at https://console.aws.amazon.com/timestream. 2. In the navigation pane, choose Databases, and then choose Create database. 3. On the Create database page, provide a name for the database. Enter a key and value for the tag, and then choose Add new tag. 4. Choose Create database. Tagging operations 519 Amazon Timestream Developer Guide To tag tables when creating them (console) 1. Open the Timestream console at https://console.aws.amazon.com/timestream. 2. In the navigation pane, choose Tables, and then choose Create table. 3. On the Create
Timestream for LiveAnalytics table page, provide a name for the table. Enter a key and value for the tag, and choose Add new tag.
4. Choose Create table.

To tag scheduled queries when creating them (console)

1. Open the Timestream console at https://console.aws.amazon.com/timestream.
2. In the navigation pane, choose Scheduled queries, and then choose Create scheduled query.
3. On the Step 3. Configure query settings page, choose Add new tag. Enter a key and value for the tag. Choose Add new tag to add additional tags.
4. Choose Next.

To tag existing resources (console)

1. Open the Timestream console at https://console.aws.amazon.com/timestream.
2. In the navigation pane, choose Databases, Tables or Scheduled queries.
3. Choose a database or table in the list. Then choose Manage tags to add, edit, or delete your tags.

For information about tag structure, see Tagging restrictions.

Security in Timestream for LiveAnalytics

Cloud security at AWS is the highest priority. As an AWS customer, you benefit from a data center and network architecture that is built to meet the requirements of the most security-sensitive organizations.

Security is a shared responsibility between AWS and you. The shared responsibility model describes this as security of the cloud and security in the cloud:

• Security of the cloud – AWS is responsible for protecting the infrastructure that runs AWS services in the AWS Cloud. AWS also provides you with services that you can use securely. The effectiveness of our security is regularly tested and verified by third-party auditors as part of the AWS compliance programs. To learn about the compliance programs that apply to Timestream for LiveAnalytics, see AWS Services in Scope by Compliance Program.
• Security in the cloud – Your responsibility is determined by the AWS service that you use. You are also responsible for other factors including the sensitivity of your data, your organization's requirements, and applicable laws and regulations.

This documentation will help you understand how to apply the shared responsibility model when using Timestream for LiveAnalytics. The following topics show you how to configure Timestream for LiveAnalytics to meet your security and compliance objectives. You'll also learn how to use other AWS services that can help you to monitor and secure your Timestream for LiveAnalytics resources.
Topics • Data protection in Timestream for LiveAnalytics • Identity and access management for Amazon Timestream for LiveAnalytics • Logging and monitoring in Timestream for LiveAnalytics • Resilience in Amazon Timestream Live Analytics • Infrastructure security in Amazon Timestream Live Analytics • Configuration and vulnerability analysis in Timestream • Incident response in Timestream for LiveAnalytics • VPC endpoints (AWS PrivateLink) • Security best practices for Amazon Timestream for LiveAnalytics Data protection in Timestream for LiveAnalytics The AWS shared responsibility model applies to data protection in Amazon Timestream Live Analytics. As described in this model, AWS is responsible for protecting the global infrastructure that runs all of the AWS Cloud. You are responsible for maintaining control over your content that is hosted on this infrastructure. You are also responsible for the security configuration and management tasks for the AWS services that you use. For more information about data privacy, see the Data Privacy FAQ. For information about data protection in Europe, see the AWS Shared Responsibility Model and GDPR blog post on the AWS Security Blog. Data protection 521 Amazon Timestream Developer Guide For data protection purposes, we recommend that you protect AWS account credentials and set up individual users with AWS IAM Identity Center or AWS Identity and Access Management (IAM). That way, each user is given only the permissions necessary to fulfill their job duties. We also recommend that you secure your data in the following ways: • Use multi-factor authentication (MFA) with each account. • Use SSL/TLS to communicate with AWS resources. We require TLS 1.2 and recommend TLS 1.3. • Set up API and user activity logging with AWS CloudTrail. For information about using CloudTrail trails to capture AWS activities, see Working with CloudTrail trails in the AWS CloudTrail User Guide. • Use AWS encryption solutions, along with all default security controls within AWS services. • Use advanced managed security services such as Amazon Macie, which assists in discovering and
For data protection purposes, we recommend that you protect AWS account credentials and set up individual users with AWS IAM Identity Center or AWS Identity and Access Management (IAM). That way, each user is given only the permissions necessary to fulfill their job duties. We also recommend that you secure your data in the following ways:

• Use multi-factor authentication (MFA) with each account.
• Use SSL/TLS to communicate with AWS resources. We require TLS 1.2 and recommend TLS 1.3.
• Set up API and user activity logging with AWS CloudTrail. For information about using CloudTrail trails to capture AWS activities, see Working with CloudTrail trails in the AWS CloudTrail User Guide.
• Use AWS encryption solutions, along with all default security controls within AWS services.
• Use advanced managed security services such as Amazon Macie, which assists in discovering and securing sensitive data that is stored in Amazon S3.
• If you require FIPS 140-3 validated cryptographic modules when accessing AWS through a command line interface or an API, use a FIPS endpoint. For more information about the available FIPS endpoints, see Federal Information Processing Standard (FIPS) 140-3.

We strongly recommend that you never put confidential or sensitive information, such as your customers' email addresses, into tags or free-form text fields such as a Name field. This includes when you work with Timestream Live Analytics or other AWS services using the console, API, AWS CLI, or AWS SDKs. Any data that you enter into tags or free-form text fields used for names may be used for billing or diagnostic logs. If you provide a URL to an external server, we strongly recommend that you do not include credentials information in the URL to validate your request to that server.

For more detailed information on Timestream for LiveAnalytics data protection topics like Encryption at Rest and Key Management, select any of the available topics below.

Topics
• Encryption at rest
• Encryption in transit
• Key management

Encryption at rest

Timestream for LiveAnalytics encryption at rest provides enhanced security by encrypting all your data at rest using encryption keys stored in AWS Key Management Service (AWS KMS). This functionality helps reduce the operational burden and complexity involved in protecting sensitive data. With encryption at rest, you can build security-sensitive applications that meet strict encryption compliance and regulatory requirements.

• Encryption is turned on by default on your Timestream for LiveAnalytics database, and cannot be turned off. The industry standard AES-256 encryption algorithm is the default encryption algorithm used.
• AWS KMS is required for encryption at rest in Timestream for LiveAnalytics.
• You cannot encrypt only a subset of items in a table.
• You don't need to modify your database client applications to use encryption.

If you do not provide a key, Timestream for LiveAnalytics creates and uses an AWS KMS key named alias/aws/timestream in your account. You may use your own customer managed key in KMS to encrypt your Timestream for LiveAnalytics data. For more information on keys in Timestream for LiveAnalytics, see Key management.

Timestream for LiveAnalytics stores your data in two storage tiers, memory store and magnetic store. Memory store data is encrypted using a Timestream for LiveAnalytics service key. Magnetic store data is encrypted using your AWS KMS key. The Timestream Query service requires credentials to access your data. These credentials are encrypted using your KMS key.

Note
Timestream for LiveAnalytics doesn't call AWS KMS for every Decrypt operation. Instead, it maintains a local cache of keys for 5 minutes with active traffic.
Any permission changes are propagated through the Timestream for LiveAnalytics system with eventual consistency within at most 5 minutes.

Encryption in transit

All your Timestream Live Analytics data is encrypted in transit. By default, all communications to and from Timestream for LiveAnalytics are protected by using Transport Layer Security (TLS) encryption.

Key management

You can manage keys for Amazon Timestream Live Analytics using the AWS Key Management Service (AWS KMS). Timestream Live Analytics requires the use of KMS to encrypt your data. You have the following options for key management, depending on how much control you require over your keys:

Database and table resources
• Timestream Live Analytics-managed key: If you do not provide a key, Timestream Live Analytics will create an AWS KMS key named alias/aws/timestream in your account.
• Customer managed key: KMS customer managed keys are supported. Choose this option if you require more control over the permissions and lifecycle of your keys, including the ability to have them automatically rotated on an annual basis.

Scheduled query resource
• Timestream Live Analytics-owned key: If you do not provide a key, Timestream Live Analytics uses its own KMS key to encrypt the Query resource. This key is owned by the service; see AWS owned keys in the AWS KMS developer guide for more details.
• Customer managed key: KMS customer managed keys are supported. Choose this option if you require more control over the permissions and lifecycle of your keys,
including the ability to have them automatically rotated on an annual basis.

KMS keys in an external key store (XKS) are not supported.

Identity and access management for Amazon Timestream for LiveAnalytics

AWS Identity and Access Management (IAM) is an AWS service that helps an administrator securely control access to AWS resources. IAM administrators control who can be authenticated (signed in) and authorized (have permissions) to use Timestream for LiveAnalytics resources. IAM is an AWS service that you can use with no additional charge.

Topics
• Audience
• Authenticating with identities
• Managing access using policies
• How Amazon Timestream for LiveAnalytics works with IAM
• AWS managed policies for Amazon Timestream Live Analytics
• Amazon Timestream for LiveAnalytics identity-based policy examples
• Troubleshooting Amazon Timestream for LiveAnalytics identity and access

Audience

How you use AWS Identity and Access Management (IAM) differs, depending on the work that you do in Timestream for LiveAnalytics.

Service user – If you use the Timestream for LiveAnalytics service to do your job, then your administrator provides you with the credentials and permissions that you need. As you use more Timestream for LiveAnalytics features to do your work, you might need additional permissions. Understanding how access is managed can help you request the right permissions from your administrator. If you cannot access a feature in Timestream for LiveAnalytics, see Troubleshooting Amazon Timestream for LiveAnalytics identity and access.

Service administrator – If you're in charge of Timestream for LiveAnalytics resources at your company, you probably have full access to Timestream for LiveAnalytics. It's your job to determine which Timestream for LiveAnalytics features and resources your service users should access. You must then submit requests to your IAM administrator to change the permissions of your service users. Review the information on this page to understand the basic concepts of IAM. To learn more about how your company can use IAM with Timestream for LiveAnalytics, see How Amazon Timestream for LiveAnalytics works with IAM.

IAM administrator – If you're an IAM administrator, you might want to learn details about how you can write policies to manage access to Timestream for LiveAnalytics. To view example Timestream for LiveAnalytics identity-based policies that you can use in IAM, see Amazon Timestream for LiveAnalytics identity-based policy examples.

Authenticating with identities

Authentication is how you sign in to AWS using your identity credentials. You must be authenticated (signed in to AWS) as the AWS account root user, as an IAM user, or by assuming an IAM role.
You can sign in to AWS as a federated identity by using credentials provided through an identity source. AWS IAM Identity Center (IAM Identity Center) users, your company's single sign-on authentication, and your Google or Facebook credentials are examples of federated identities. When you sign in as a federated identity, your administrator previously set up identity federation using IAM roles. When you access AWS by using federation, you are indirectly assuming a role. Depending on the type of user you are, you can sign in to the AWS Management Console or the AWS access portal. For more information about signing in to AWS, see How to sign in to your AWS account in the AWS Sign-In User Guide. If you access AWS programmatically, AWS provides a software development kit (SDK) and a command line interface (CLI) to cryptographically sign your requests by using your credentials. If you don't use AWS tools, you must sign requests yourself. For more information about using the recommended method to sign requests yourself, see AWS Signature Version 4 for API requests in the IAM User Guide. Regardless of the authentication method that you use, you might be required to provide additional security information. For example, AWS recommends that you use multi-factor authentication (MFA) to increase the security of your account. To learn more, see Multi-factor authentication in the AWS IAM Identity Center User Guide and AWS Multi-factor authentication in IAM in the IAM User Guide. IAM users and groups An IAM user is an identity within your AWS account that has specific permissions for a single person or application. Where possible, we recommend
relying on temporary credentials instead of creating IAM users who have long-term credentials such as passwords and access keys. However, if you have specific use cases that require long-term credentials with IAM users, we recommend that you rotate access keys. For more information, see Rotate access keys regularly for use cases that require long-term credentials in the IAM User Guide.

An IAM group is an identity that specifies a collection of IAM users. You can't sign in as a group. You can use groups to specify permissions for multiple users at a time. Groups make permissions easier to manage for large sets of users. For example, you could have a group named IAMAdmins and give that group permissions to administer IAM resources.

Users are different from roles. A user is uniquely associated with one person or application, but a role is intended to be assumable by anyone who needs it. Users have permanent long-term credentials, but roles provide temporary credentials. To learn more, see Use cases for IAM users in the IAM User Guide.

IAM roles

An IAM role is an identity within your AWS account that has specific permissions. It is similar to an IAM user, but is not associated with a specific person. To temporarily assume an IAM role in the AWS Management Console, you can switch from a user to an IAM role (console). You can assume a role by calling an AWS CLI or AWS API operation or by using a custom URL. For more information about methods for using roles, see Methods to assume a role in the IAM User Guide.

IAM roles with temporary credentials are useful in the following situations:

• Federated user access – To assign permissions to a federated identity, you create a role and define permissions for the role. When a federated identity authenticates, the identity is associated with the role and is granted the permissions that are defined by the role. For information about roles for federation, see Create a role for a third-party identity provider (federation) in the IAM User Guide. If you use IAM Identity Center, you configure a permission set. To control what your identities can access after they authenticate, IAM Identity Center correlates the permission set to a role in IAM. For information about permissions sets, see Permission sets in the AWS IAM Identity Center User Guide.
• Temporary IAM user permissions – An IAM user or role can assume an IAM role to temporarily take on different permissions for a specific task.
• Cross-account access – You can use an IAM role to allow someone (a trusted principal) in a different account to access resources in your account. Roles are the primary way to grant cross-account access. However, with some AWS services, you can attach a policy directly to a resource (instead of using a role as a proxy).
To learn the difference between roles and resource-based policies for cross-account access, see Cross account resource access in IAM in the IAM User Guide. • Cross-service access – Some AWS services use features in other AWS services. For example, when you make a call in a service, it's common for that service to run applications in Amazon EC2 or store objects in Amazon S3. A service might do this using the calling principal's permissions, using a service role, or using a service-linked role. Identity and access management 527 Amazon Timestream Developer Guide • Forward access sessions (FAS) – When you use an IAM user or role to perform actions in AWS, you are considered a principal. When you use some services, you might perform an action that then initiates another action in a different service. FAS uses the permissions of the principal calling an AWS service, combined with the requesting AWS service to make requests to downstream services. FAS requests are only made when a service receives a request that requires interactions with other AWS services or resources to complete. In this case, you must have permissions to perform both actions. For policy details when making FAS requests, see Forward access sessions. • Service role – A service role is an IAM role that
a service assumes to perform actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see Create a role to delegate permissions to an AWS service in the IAM User Guide.
• Service-linked role – A service-linked role is a type of service role that is linked to an AWS service. The service can assume the role to perform an action on your behalf. Service-linked roles appear in your AWS account and are owned by the service. An IAM administrator can view, but not edit the permissions for service-linked roles.
• Applications running on Amazon EC2 – You can use an IAM role to manage temporary credentials for applications that are running on an EC2 instance and making AWS CLI or AWS API requests. This is preferable to storing access keys within the EC2 instance. To assign an AWS role to an EC2 instance and make it available to all of its applications, you create an instance profile that is attached to the instance. An instance profile contains the role and enables programs that are running on the EC2 instance to get temporary credentials. For more information, see Use an IAM role to grant permissions to applications running on Amazon EC2 instances in the IAM User Guide.

Managing access using policies

You control access in AWS by creating policies and attaching them to AWS identities or resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when a principal (user, root user, or role session) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. For more information about the structure and contents of JSON policy documents, see Overview of JSON policies in the IAM User Guide.

Administrators can use AWS JSON policies to specify who has access to what. That is, which principal can perform actions on what resources, and under what conditions.

By default, users and roles have no permissions. To grant users permission to perform actions on the resources that they need, an IAM administrator can create IAM policies. The administrator can then add the IAM policies to roles, and users can assume the roles.

IAM policies define permissions for an action regardless of the method that you use to perform the operation. For example, suppose that you have a policy that allows the iam:GetRole action. A user with that policy can get role information from the AWS Management Console, the AWS CLI, or the AWS API.

Identity-based policies

Identity-based policies are JSON permissions policy documents that you can attach to an identity, such as an IAM user, group of users, or role. These policies control what actions users and roles can perform, on which resources, and under what conditions.
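For illustration only, here is a minimal sketch of such an identity-based policy for Timestream for LiveAnalytics, created as a customer managed policy with Python (boto3). The policy name, Region, account ID, and database and table names are placeholders; the actions and ARN format follow the tables and examples later in this section.

import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # DescribeEndpoints does not support resource-level permissions.
            "Effect": "Allow",
            "Action": "timestream:DescribeEndpoints",
            "Resource": "*"
        },
        {
            # Read-only access scoped to a single table.
            "Effect": "Allow",
            "Action": [
                "timestream:Select",
                "timestream:ListMeasures",
                "timestream:DescribeTable"
            ],
            "Resource": "arn:aws:timestream:us-east-1:123456789012:database/mydatabase/table/mytable"
        }
    ]
}

response = iam.create_policy(
    PolicyName="TimestreamTableReadOnlyExample",
    PolicyDocument=json.dumps(policy_document)
)
print(response["Policy"]["Arn"])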
To learn how to create an identity-based policy, see Define custom IAM permissions with customer managed policies in the IAM User Guide. Identity-based policies can be further categorized as inline policies or managed policies. Inline policies are embedded directly into a single user, group, or role. Managed policies are standalone policies that you can attach to multiple users, groups, and roles in your AWS account. Managed policies include AWS managed policies and customer managed policies. To learn how to choose between a managed policy or an inline policy, see Choose between managed policies and inline policies in the IAM User Guide. Resource-based policies Resource-based policies are JSON policy documents that you attach to a resource. Examples of resource-based policies are IAM role trust policies and Amazon S3 bucket policies. In services that support resource-based policies, service administrators can use them to control access to a specific resource. For the resource where the policy is attached, the policy defines what actions a specified principal can perform on that resource and under what conditions. You must specify a principal in a resource-based policy. Principals can include accounts, users, roles, federated users, or AWS services. Resource-based policies are inline policies that are located in that service. You can't use AWS managed policies from
IAM in a resource-based policy.

Access control lists (ACLs)

Access control lists (ACLs) control which principals (account members, users, or roles) have permissions to access a resource. ACLs are similar to resource-based policies, although they do not use the JSON policy document format. Amazon S3, AWS WAF, and Amazon VPC are examples of services that support ACLs. To learn more about ACLs, see Access control list (ACL) overview in the Amazon Simple Storage Service Developer Guide.

Other policy types

AWS supports additional, less-common policy types. These policy types can set the maximum permissions granted to you by the more common policy types.

• Permissions boundaries – A permissions boundary is an advanced feature in which you set the maximum permissions that an identity-based policy can grant to an IAM entity (IAM user or role). You can set a permissions boundary for an entity. The resulting permissions are the intersection of an entity's identity-based policies and its permissions boundaries. Resource-based policies that specify the user or role in the Principal field are not limited by the permissions boundary. An explicit deny in any of these policies overrides the allow. For more information about permissions boundaries, see Permissions boundaries for IAM entities in the IAM User Guide.
• Service control policies (SCPs) – SCPs are JSON policies that specify the maximum permissions for an organization or organizational unit (OU) in AWS Organizations. AWS Organizations is a service for grouping and centrally managing multiple AWS accounts that your business owns. If you enable all features in an organization, then you can apply service control policies (SCPs) to any or all of your accounts. The SCP limits permissions for entities in member accounts, including each AWS account root user. For more information about Organizations and SCPs, see Service control policies in the AWS Organizations User Guide.
• Resource control policies (RCPs) – RCPs are JSON policies that you can use to set the maximum available permissions for resources in your accounts without updating the IAM policies attached to each resource that you own. The RCP limits permissions for resources in member accounts and can impact the effective permissions for identities, including the AWS account root user, regardless of whether they belong to your organization. For more information about Organizations and RCPs, including a list of AWS services that support RCPs, see Resource control policies (RCPs) in the AWS Organizations User Guide.
• Session policies – Session policies are advanced policies that you pass as a parameter when you programmatically create a temporary session for a role or federated user.
The resulting session's permissions are the intersection of the user or role's identity-based policies and the session policies. Permissions can also come from a resource-based policy. An explicit deny in any of these policies overrides the allow. For more information, see Session policies in the IAM User Guide. Identity and access management 530 Amazon Timestream Multiple policy types Developer Guide When multiple types of policies apply to a request, the resulting permissions are more complicated to understand. To learn how AWS determines whether to allow a request when multiple policy types are involved, see Policy evaluation logic in the IAM User Guide. How Amazon Timestream for LiveAnalytics works with IAM Before you use IAM to manage access to Timestream for LiveAnalytics, you should understand what IAM features are available to use with Timestream for LiveAnalytics. To get a high-level view of how Timestream for LiveAnalytics and other AWS services work with IAM, see AWS Services That Work with IAM in the IAM User Guide. Topics • Timestream for LiveAnalytics identity-based policies • Timestream for LiveAnalytics resource-based policies • Authorization based on Timestream for LiveAnalytics tags • Timestream for LiveAnalytics IAM roles Timestream for LiveAnalytics identity-based policies With IAM identity-based policies, you can specify allowed or denied actions and resources as well as the conditions under which actions are allowed or denied. Timestream for LiveAnalytics supports specific actions and resources, and condition keys. To learn about all of the elements that you use in a JSON policy, see IAM JSON Policy Elements Reference in the IAM User Guide. Actions Administrators can use AWS JSON policies to specify
who has access to what. That is, which principal can perform actions on what resources, and under what conditions.

The Action element of a JSON policy describes the actions that you can use to allow or deny access in a policy. Policy actions usually have the same name as the associated AWS API operation. There are some exceptions, such as permission-only actions that don't have a matching API operation. There are also some operations that require multiple actions in a policy. These additional actions are called dependent actions.

Include actions in a policy to grant permissions to perform the associated operation.

You can specify the following actions in the Action element of an IAM policy statement. Use policies to grant permissions to perform an operation in AWS. When you use an action in a policy, you usually allow or deny access to the API operation, CLI command, or SQL command with the same name. In some cases, a single action controls access to an API operation as well as a SQL command. Alternatively, some operations require several different actions. For a list of supported Timestream for LiveAnalytics actions, see the table below.

Note
For all database-specific actions, you can specify a database ARN to limit the action to a particular database.

Actions | Description | Access level | Resource types (*required)
DescribeEndpoints | Returns the Timestream endpoint that subsequent requests must be made to. | All | *
Select | Run queries on Timestream that select data from one or more tables. See this note for a detailed explanation. | Read | table*
CancelQuery | Cancel a query. | Read | *
ListTables | Get the list of tables. | List | database*
ListDatabases | Get the list of databases. | List | *
ListMeasures | Get the list of measures. | Read | table*
DescribeTable | Get the table description. | Read | table*
DescribeDatabase | Get the database description. | Read | database*
SelectValues | Run queries that do not require a particular resource to be specified. See this note for a detailed explanation. | Read | *
WriteRecords | Insert data into Timestream. | Write | table*
CreateTable | Create a table. | Write | database*
CreateDatabase | Create a database. | Write | *
DeleteDatabase | Delete a database. | Write | *
UpdateDatabase | Update a database. | Write | *
DeleteTable | Delete a table. | Write | database*
UpdateTable | Update a table. | Write | database*

SelectValues vs. Select: SelectValues is an action that is used for queries that do not require a resource. An example of a query that does not require a resource is as follows:

SELECT 1

Notice that this query does not refer to a particular Timestream for LiveAnalytics resource.
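As a minimal Python (boto3) sketch, this is how such a resource-less query might be issued through the Query SDK; authorizing the call relies on the timestream:SelectValues action from the table above, plus timestream:DescribeEndpoints, which the SDK uses to discover the service endpoint. The default client configuration is an assumption.

import boto3

query_client = boto3.client("timestream-query")

# "SELECT 1" references no database or table, so no Timestream resource ARN is involved.
response = query_client.query(QueryString="SELECT 1")
print(response["Rows"])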
Consider another example:

SELECT now()

This query returns the current timestamp using the now() function, but does not require a resource to be specified. SelectValues is often used for testing, so that Timestream for LiveAnalytics can run queries without resources.

Now, consider a Select query:

SELECT * FROM database.table

This type of query requires a resource, specifically a Timestream for LiveAnalytics table, so that the specified data can be fetched from the table.

Resources

Administrators can use AWS JSON policies to specify who has access to what. That is, which principal can perform actions on what resources, and under what conditions.

The Resource JSON policy element specifies the object or objects to which the action applies. Statements must include either a Resource or a NotResource element. As a best practice, specify a resource using its Amazon Resource Name (ARN). You can do this for actions that support a specific resource type, known as resource-level permissions.

For actions that don't support resource-level permissions, such as listing operations, use a wildcard (*) to indicate that the statement applies to all resources.

"Resource": "*"

In Timestream for LiveAnalytics, databases and tables can be used in the Resource element of IAM permissions.

The Timestream for LiveAnalytics database resource has the following ARN:

arn:${Partition}:timestream:${Region}:${Account}:database/${DatabaseName}

The Timestream for LiveAnalytics table resource has the following ARN:

arn:${Partition}:timestream:${Region}:${Account}:database/${DatabaseName}/table/${TableName}

For more information about the format of ARNs, see Amazon Resource Names (ARNs) and AWS Service Namespaces.

For example, to specify the mydatabase database in
your statement, use the following ARN:

"Resource": "arn:aws:timestream:us-east-1:123456789012:database/mydatabase"

To specify all databases that belong to a specific account, use the wildcard (*):

"Resource": "arn:aws:timestream:us-east-1:123456789012:database/*"

Some Timestream for LiveAnalytics actions, such as those for creating resources, cannot be performed on a specific resource. In those cases, you must use the wildcard (*).

"Resource": "*"

Condition keys

Timestream for LiveAnalytics does not provide any service-specific condition keys, but it does support using some global condition keys. To see all AWS global condition keys, see AWS Global Condition Context Keys in the IAM User Guide.

Examples

To view examples of Timestream for LiveAnalytics identity-based policies, see Amazon Timestream for LiveAnalytics identity-based policy examples.

Timestream for LiveAnalytics resource-based policies

Timestream for LiveAnalytics does not support resource-based policies. To view an example of a detailed resource-based policy page, see https://docs.aws.amazon.com/lambda/latest/dg/access-control-resource-based.html.

Authorization based on Timestream for LiveAnalytics tags

You can manage access to your Timestream for LiveAnalytics resources by using tags. To manage resource access based on tags, you provide tag information in the condition element of a policy using the timestream:ResourceTag/key-name, aws:RequestTag/key-name, or aws:TagKeys condition keys. For more information about tagging Timestream for LiveAnalytics resources, see the section called “Tagging resources”.

To view example identity-based policies for limiting access to a resource based on the tags on that resource, see Timestream for LiveAnalytics resource access based on tags.

Timestream for LiveAnalytics IAM roles

An IAM role is an entity within your AWS account that has specific permissions.

Using temporary credentials with Timestream for LiveAnalytics

You can use temporary credentials to sign in with federation, assume an IAM role, or to assume a cross-account role. You obtain temporary security credentials by calling AWS STS API operations such as AssumeRole or GetFederationToken.

Service-linked roles

Timestream for LiveAnalytics does not support service-linked roles.

Service roles

Timestream for LiveAnalytics does not support service roles.

AWS managed policies for Amazon Timestream Live Analytics

An AWS managed policy is a standalone policy that is created and administered by AWS. AWS managed policies are designed to provide permissions for many common use cases so that you can start assigning permissions to users, groups, and roles.
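For example, the following Python (boto3) sketch attaches one of the managed policies described below to an existing role. The role name is a placeholder, and the policy ARN is assumed to follow the standard arn:aws:iam::aws:policy/<PolicyName> pattern used for AWS managed policies.

import boto3

iam = boto3.client("iam")

iam.attach_role_policy(
    RoleName="MyTimestreamAnalystRole",  # placeholder role name
    PolicyArn="arn:aws:iam::aws:policy/AmazonTimestreamReadOnlyAccess",
)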
Condition keys

Timestream for LiveAnalytics does not provide any service-specific condition keys, but it does support using some global condition keys. To see all AWS global condition keys, see AWS Global Condition Context Keys in the IAM User Guide.

Examples

To view examples of Timestream for LiveAnalytics identity-based policies, see Amazon Timestream for LiveAnalytics identity-based policy examples.

Timestream for LiveAnalytics resource-based policies

Timestream for LiveAnalytics does not support resource-based policies. To view an example of a detailed resource-based policy page, see https://docs.aws.amazon.com/lambda/latest/dg/access-control-resource-based.html.

Authorization based on Timestream for LiveAnalytics tags

You can manage access to your Timestream for LiveAnalytics resources by using tags. To manage resource access based on tags, you provide tag information in the condition element of a policy using the timestream:ResourceTag/key-name, aws:RequestTag/key-name, or aws:TagKeys condition keys. For more information about tagging Timestream for LiveAnalytics resources, see the section called "Tagging resources".

To view example identity-based policies for limiting access to a resource based on the tags on that resource, see Timestream for LiveAnalytics resource access based on tags.

Timestream for LiveAnalytics IAM roles

An IAM role is an entity within your AWS account that has specific permissions.

Using temporary credentials with Timestream for LiveAnalytics

You can use temporary credentials to sign in with federation, assume an IAM role, or to assume a cross-account role. You obtain temporary security credentials by calling AWS STS API operations such as AssumeRole or GetFederationToken.

Service-linked roles

Timestream for LiveAnalytics does not support service-linked roles.

Service roles

Timestream for LiveAnalytics does not support service roles.

AWS managed policies for Amazon Timestream Live Analytics

An AWS managed policy is a standalone policy that is created and administered by AWS. AWS managed policies are designed to provide permissions for many common use cases so that you can start assigning permissions to users, groups, and roles.

Keep in mind that AWS managed policies might not grant least-privilege permissions for your specific use cases because they're available for all AWS customers to use. We recommend that you reduce permissions further by defining customer managed policies that are specific to your use cases.

You cannot change the permissions defined in AWS managed policies. If AWS updates the permissions defined in an AWS managed policy, the update affects all principal identities (users, groups, and roles) that the policy is attached to. AWS is most likely to update an AWS managed policy when a new AWS service is launched or new API operations become available for existing services.

For more information, see AWS managed policies in the IAM User Guide.

Topics
• AWS managed policy: AmazonTimestreamReadOnlyAccess
• AWS managed policy: AmazonTimestreamConsoleFullAccess
• AWS managed policy: AmazonTimestreamFullAccess
• Timestream Live Analytics updates to AWS managed policies

AWS managed policy: AmazonTimestreamReadOnlyAccess

You can attach AmazonTimestreamReadOnlyAccess to your users, groups, and roles. The policy provides read-only access to Amazon Timestream.

Permission details

This policy includes the following permission:

• Amazon Timestream – Provides read-only access to Amazon Timestream. This policy also grants permission to cancel any running query.

To review this policy in JSON format, see AmazonTimestreamReadOnlyAccess.
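If you want to grant this read-only access from the command line, one possible approach, sketched here with a hypothetical role name, is to attach the managed policy to a role with the AWS CLI. AWS managed policies are published under the arn:aws:iam::aws:policy/ namespace.

aws iam attach-role-policy \
    --role-name TimestreamAnalystRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonTimestreamReadOnlyAccess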
AWS managed policy: AmazonTimestreamConsoleFullAccess

You can attach AmazonTimestreamConsoleFullAccess to your users, groups, and roles. The policy provides full access to manage Amazon Timestream using the AWS Management Console. This policy also grants permissions for certain AWS KMS operations and operations to manage your saved queries.

Permission details

This policy includes the following permissions:

• Amazon Timestream – Grants principals full access to Amazon Timestream.
• AWS KMS – Allows principals to list aliases and describe keys.
• Amazon S3 – Allows principals to list all Amazon S3 buckets.
• Amazon SNS – Allows principals to list Amazon SNS topics.
• IAM – Allows principals to list IAM roles.
• DBQMS – Allows principals to access, create, delete, describe, and update queries. The Database Query Metadata Service (dbqms) is an internal-only service. It provides your recent and saved queries for the query editor on the AWS Management Console for multiple AWS services, including Amazon Timestream.

To review this policy in JSON format, see AmazonTimestreamConsoleFullAccess.

AWS managed policy: AmazonTimestreamFullAccess

You can attach AmazonTimestreamFullAccess to your users, groups, and roles. The policy provides full access to Amazon Timestream. This policy also grants permissions for certain AWS KMS operations.

Permission details

This policy includes the following permissions:

• Amazon Timestream – Grants principals full access to Amazon Timestream.
• AWS KMS – Allows principals to list aliases and describe keys.
• Amazon S3 – Allows principals to list all Amazon S3 buckets.

To review this policy in JSON format, see AmazonTimestreamFullAccess.

Timestream Live Analytics updates to AWS managed policies

View details about updates to AWS managed policies for Timestream Live Analytics since this service began tracking these changes. For automatic alerts about changes to this page, subscribe to the RSS feed on the Timestream Live Analytics Document history page.

Change: AmazonTimestreamReadOnlyAccess – Update to an existing policy
Description: Added the timestream:DescribeAccountSettings action to the existing AmazonTimestreamReadOnlyAccess managed policy. This action is used for describing AWS account settings. Timestream Live Analytics has also updated this managed policy by adding an Sid field. The policy update doesn't impact the usage of the AmazonTimestreamReadOnlyAccess managed policy.
Date: June 03, 2024

Change: AmazonTimestreamReadOnlyAccess – Update to an existing policy
Description: Added the timestream:DescribeBatchLoadTask and timestream:ListBatchLoadTasks actions to the existing AmazonTimestreamReadOnlyAccess managed policy. These actions are used when listing and describing batch load tasks. The policy update doesn't impact the usage of the AmazonTimestreamReadOnlyAccess managed policy.
Date: February 24, 2023

Change: AmazonTimestreamReadOnlyAccess – Update to an existing policy
Description: Added the timestream:DescribeScheduledQuery and timestream:ListScheduledQueries actions to the existing AmazonTimestreamReadOnlyAccess managed policy. These actions are used when listing and describing existing scheduled queries. The policy update doesn't impact the usage of the AmazonTimestreamReadOnlyAccess managed policy.
Date: November 29, 2021

Change: AmazonTimestreamConsoleFullAccess – Update to an existing policy
Description: Added the s3:ListAllMyBuckets action to the existing AmazonTimestreamConsoleFullAccess managed policy. This action is used when you specify an Amazon S3 bucket for Timestream to log magnetic store write errors. The policy update doesn't impact the usage of the AmazonTimestreamConsoleFullAccess managed policy.
Date: November 29, 2021

Change: AmazonTimestreamFullAccess – Update to an existing policy
Description: Added the s3:ListAllMyBuckets action to the existing AmazonTimestreamFullAccess managed policy. This action is used when you specify an Amazon S3 bucket for Timestream to log magnetic store write errors. The policy update doesn't impact the usage of the AmazonTimestreamFullAccess managed policy.
Date: November 29, 2021

Change: AmazonTimestreamConsoleFullAccess – Update to an existing policy
Description: Removed redundant actions from the existing AmazonTimestreamConsoleFullAccess managed policy. Previously, this policy included a redundant action, dbqms:DescribeQueryHistory. The updated policy removes the redundant action. The policy update doesn't impact the usage of the AmazonTimestreamConsoleFullAccess managed policy.
Date: April 23, 2021

Change: Timestream Live Analytics started tracking changes
Description: Timestream Live Analytics started tracking changes for its AWS managed policies.
Date: April 21, 2021

Amazon Timestream for LiveAnalytics identity-based policy examples

By default, IAM users and roles don't have permission to create or modify Timestream for LiveAnalytics resources. They also can't perform tasks using the AWS Management Console, AWS CLI, or AWS API. An IAM administrator must create IAM policies that grant users and roles permission to perform specific API operations on the specified resources they need. The administrator must then attach those policies to the IAM users or groups that require those permissions.

To learn how to create an IAM identity-based policy using these example JSON policy documents, see Creating Policies on the JSON Tab in the IAM User Guide.
Topics
• Policy best practices
• Using the Timestream for LiveAnalytics console
• Allow users to view their own permissions
• Common operations in Timestream for LiveAnalytics
• Timestream for LiveAnalytics resource access based on tags
• Scheduled queries

Policy best practices

Identity-based policies determine whether someone can create, access, or delete Timestream for LiveAnalytics resources in your account. These actions can incur costs for your AWS account. When you create or edit identity-based policies, follow these guidelines and recommendations:

• Get started with AWS managed policies and move toward least-privilege permissions – To get started granting permissions to your users and workloads, use the AWS managed policies that grant permissions for many common use cases. They are available in your AWS account. We recommend that you reduce permissions further by defining AWS customer managed policies that are specific to your use cases. For more information, see AWS managed policies or AWS managed policies for job functions in the IAM User Guide.
• Apply least-privilege permissions – When you set permissions with IAM policies, grant only the permissions required to perform a task. You do this by defining the actions that can be taken on specific resources under specific conditions, also known as least-privilege permissions. For more information about using IAM to apply permissions, see Policies and permissions in IAM in the IAM User Guide.
• Use conditions in IAM policies to further restrict access – You can add a condition to your policies to limit access to actions and resources. For example, you can write a policy condition to specify that all requests must be sent using SSL (see the sketch after this list). You can also use conditions to grant access to service actions if they are used through a specific AWS service, such as AWS CloudFormation. For more information, see IAM JSON policy elements: Condition in the IAM User Guide.
• Use IAM Access Analyzer to validate your IAM policies to ensure secure and functional permissions – IAM Access Analyzer validates new and existing policies so that the policies adhere to the IAM policy language (JSON) and IAM best practices. IAM Access Analyzer provides more than 100 policy checks and actionable recommendations to help you author secure and functional policies. For more information, see Validate policies with IAM Access Analyzer in the IAM User Guide.
• Require multi-factor authentication (MFA) – If you have a scenario that requires IAM users or a root user in your AWS account, turn on MFA for additional security. To require MFA when API operations are called, add MFA conditions to your policies. For more information, see Secure API access with MFA in the IAM User Guide.

For more information about best practices in IAM, see Security best practices in IAM in the IAM User Guide.
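As one hedged illustration of the SSL condition mentioned above (this sketch is not taken from the Timestream documentation itself), the following policy uses the global aws:SecureTransport condition key to deny any Timestream request that is not sent over SSL. You would typically combine a Deny statement like this with the Allow statements that grant your intended permissions.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNonSslRequests",
            "Effect": "Deny",
            "Action": "timestream:*",
            "Resource": "*",
            "Condition": {
                "Bool": {
                    "aws:SecureTransport": "false"
                }
            }
        }
    ]
}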
Using the Timestream for LiveAnalytics console

Timestream for LiveAnalytics does not require specific permissions to access the Amazon Timestream for LiveAnalytics console. You need at least read-only permissions to list and view details about the Timestream for LiveAnalytics resources in your AWS account. If you create an identity-based policy that is more restrictive than the minimum required permissions, the console won't function as intended for entities (IAM users or roles) with that policy.

Allow users to view their own permissions

This example shows how you might create a policy that allows IAM users to view the inline and managed policies that are attached to their user identity. This policy includes permissions to complete this action on the console or programmatically using the AWS CLI or AWS API.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ViewOwnUserInfo",
            "Effect": "Allow",
            "Action": [
                "iam:GetUserPolicy",
                "iam:ListGroupsForUser",
                "iam:ListAttachedUserPolicies",
                "iam:ListUserPolicies",
                "iam:GetUser"
            ],
            "Resource": ["arn:aws:iam::*:user/${aws:username}"]
        },
        {
            "Sid": "NavigateInConsole",
            "Effect": "Allow",
            "Action": [
                "iam:GetGroupPolicy",
                "iam:GetPolicyVersion",
                "iam:GetPolicy",
                "iam:ListAttachedGroupPolicies",
                "iam:ListGroupPolicies",
                "iam:ListPolicyVersions",
                "iam:ListPolicies",
                "iam:ListUsers"
            ],
            "Resource": "*"
        }
    ]
}

Common operations in Timestream for LiveAnalytics

Below are sample IAM policies that allow for common operations in the Timestream for LiveAnalytics service.

Topics
• Allowing all operations
• Allowing SELECT operations
• Allowing SELECT operations on multiple resources
• Allowing metadata operations
• Allowing INSERT operations
• Allowing CRUD operations
• Cancel queries and select data without specifying resources
• Create, describe, delete and describe a