exist.
• Sub-resource delete: AWS IoT TwinMaker also supports sub-resource deletion. A sub-resource can be a component, property, or relationship. If you want to delete a component, you must do it from the entity level. If you want to delete a property or relationship, you must do it from the Entity or EntityComponent level. To delete a sub-resource, you update the higher-level resource and omit the definition of the sub-resource.
• No top-level resource deletion: AWS IoT TwinMaker will never delete top-level resources. A top-level resource refers to an entity or ComponentType.
• No sub-resource definitions for the same top-level resource in one template: You can't provide the full entity definition and a sub-resource (such as property) definition of the same entity in the same template. If an entityId is used in Entity, you cannot use the same ID in Entity, EntityComponent, property, or relationship. If an entityId and componentName combination is used in EntityComponent, you cannot use the same combination in EntityComponent, property, or relationship. If an entityId, componentName, and propertyName combination is used in property or relationship, you cannot use the same combination in property or relationship.
• ExternalId is optional for AWS IoT TwinMaker: The ExternalId can be used to help you identify your resources.

Performing bulk import and export operations

This topic covers how to perform bulk import and export operations and how to handle errors in your transfer jobs. It provides examples of transfer jobs using CLI commands. The AWS IoT TwinMaker API Reference contains information on CreateMetadataTransferJob and other API actions.

Topics
• metadataTransferJob prerequisites
• IAM permissions
• Run a bulk operation
• Error handling
• Import metadata templates
• AWS IoT TwinMaker metadataTransferJob examples

metadataTransferJob prerequisites

Complete the following prerequisites before you run a metadataTransferJob:

• Create an AWS IoT TwinMaker workspace. The workspace can be the import destination or export source for a metadataTransferJob. For information on creating a workspace, see Create a workspace.
• Create an Amazon S3 bucket to store resources. For more information on using Amazon S3, see What is Amazon S3?
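Both prerequisites can also be met from the command line. The following is a minimal sketch, assuming the AWS CLI is configured; the bucket name, workspace ID, account ID, and role name are placeholders you must replace with your own values:

# Create the Amazon S3 bucket that will hold transfer job files (name is a placeholder).
aws s3 mb s3://amzn-s3-demo-bucket --region us-east-1

# Create the AWS IoT TwinMaker workspace; the role must trust AWS IoT TwinMaker
# and grant the permissions described in the next section.
aws iottwinmaker create-workspace --region us-east-1 \
    --workspace-id your-workspace-name \
    --s3-location arn:aws:s3:::amzn-s3-demo-bucket \
    --role arn:aws:iam::111122223333:role/your-twinmaker-workspace-role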
IAM permissions

When you perform bulk operations, you need an IAM policy with permissions that allow the exchange of AWS resources between Amazon S3, AWS IoT TwinMaker, AWS IoT SiteWise, and your local machine. For more information on creating IAM policies, see Creating IAM policies.

The policy statements for AWS IoT TwinMaker, AWS IoT SiteWise, and Amazon S3 are listed here:

• AWS IoT TwinMaker policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetBucketLocation",
        "s3:ListBucket",
        "s3:AbortMultipartUpload",
        "s3:ListBucketMultipartUploads",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "iottwinmaker:GetWorkspace",
        "iottwinmaker:CreateEntity",
        "iottwinmaker:GetEntity",
        "iottwinmaker:UpdateEntity",
        "iottwinmaker:GetComponentType",
        "iottwinmaker:CreateComponentType",
        "iottwinmaker:UpdateComponentType",
        "iottwinmaker:ListEntities",
        "iottwinmaker:ListComponentTypes",
        "iottwinmaker:ListTagsForResource",
        "iottwinmaker:TagResource",
        "iottwinmaker:UntagResource"
      ],
      "Resource": "*"
    }
  ]
}

• AWS IoT SiteWise policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetBucketLocation",
        "s3:ListBucket",
        "s3:AbortMultipartUpload",
        "s3:ListBucketMultipartUploads",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "iotsitewise:CreateAsset",
        "iotsitewise:CreateAssetModel",
        "iotsitewise:UpdateAsset",
        "iotsitewise:UpdateAssetModel",
        "iotsitewise:UpdateAssetProperty",
        "iotsitewise:ListAssets",
        "iotsitewise:ListAssetModels",
        "iotsitewise:ListAssetProperties",
        "iotsitewise:ListAssetModelProperties",
        "iotsitewise:ListAssociatedAssets",
        "iotsitewise:DescribeAsset",
        "iotsitewise:DescribeAssetModel",
        "iotsitewise:DescribeAssetProperty",
        "iotsitewise:AssociateAssets",
        "iotsitewise:DisassociateAssets",
        "iotsitewise:AssociateTimeSeriesToAssetProperty",
        "iotsitewise:DisassociateTimeSeriesFromAssetProperty",
        "iotsitewise:BatchPutAssetPropertyValue",
        "iotsitewise:BatchGetAssetPropertyValue",
        "iotsitewise:TagResource",
        "iotsitewise:UntagResource",
        "iotsitewise:ListTagsForResource"
      ],
      "Resource": "*"
    }
  ]
}

• Amazon S3 policy:

{
  "Effect": "Allow",
  "Action": [
    "s3:PutObject",
    "s3:GetObject",
    "s3:GetBucketLocation",
    "s3:ListBucket",
    "s3:AbortMultipartUpload",
    "s3:ListBucketMultipartUploads",
    "s3:ListMultipartUploadParts"
  ],
  "Resource": "*"
}

Alternatively, you can scope your Amazon S3 policy to access only a single Amazon S3 bucket; see the following policy.

Amazon S3 single-bucket scoped policy

{
  "Effect": "Allow",
  "Action": [
    "s3:PutObject",
    "s3:GetObject",
    "s3:GetBucketLocation",
    "s3:ListBucket",
    "s3:AbortMultipartUpload",
    "s3:ListBucketMultipartUploads",
    "s3:ListMultipartUploadParts"
  ],
  "Resource": [
    "arn:aws:s3:::bucket name",
    "arn:aws:s3:::bucket name/*"
  ]
}

Set access control for a metadataTransferJob

To control what kind of jobs a user can access, add the following IAM policy to the role used to call AWS IoT TwinMaker.

Note
This policy only allows access to AWS IoT TwinMaker import and export jobs that transfer resources to and from Amazon S3.

{
  "Effect": "Allow",
  "Action": [
    "iottwinmaker:*DataTransferJob*"
  ],
  "Resource": "*",
  "Condition": {
    "StringLikeIfExists": {
      "iottwinmaker:sourceType": [
        "s3",
        "iottwinmaker"
      ],
      "iottwinmaker:destinationType": [
        "iottwinmaker",
        "s3"
      ]
    }
  }
}
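Once you have saved a policy document locally, you can attach it to a role as an inline policy. A minimal sketch, assuming the role name, policy name, and file name are placeholders:

# Attach one of the policies shown above as an inline policy
# (role name, policy name, and file name are placeholders).
aws iam put-role-policy \
    --role-name your-twinmaker-workspace-role \
    --policy-name TwinMakerBulkOperations \
    --policy-document file://twinmaker-bulk-policy.json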
Run a bulk operation

This section covers how to perform bulk import and export operations.

Import data from Amazon S3 to AWS IoT TwinMaker

1. Specify the resources you want to transfer using the AWS IoT TwinMaker metadataTransferJob schema. Create and store your schema file in your Amazon S3 bucket. For example schemas, see Import metadata templates.
2. Create a request body and save it as a JSON file. The request body specifies the source and destination for the transfer job. Make sure to specify your Amazon S3 bucket as the source and your AWS IoT TwinMaker workspace as the destination. The following is an example of a request body:

{
  "metadataTransferJobId": "your-transfer-job-Id",
  "sources": [{
    "type": "s3",
    "s3Configuration": {
      "location": "arn:aws:s3:::amzn-s3-demo-bucket/your_import_data.json"
    }
  }],
  "destination": {
    "type": "iottwinmaker",
    "iotTwinMakerConfiguration": {
      "workspace": "arn:aws:iottwinmaker:us-east-1:111122223333:workspace/your-workspace-name"
    }
  }
}

Record the file name you gave your request body; you will need it in the next step. In this example the request body is named createMetadataTransferJobImport.json.
3. Run the following CLI command to invoke CreateMetadataTransferJob (replace the input-json file name with the name you gave your request body):

aws iottwinmaker create-metadata-transfer-job --region us-east-1 \
--cli-input-json file://createMetadataTransferJobImport.json

This creates a metadataTransferJob and begins the process of transferring your selected resources.

Export data from AWS IoT TwinMaker to Amazon S3

1. Create a JSON request body with the appropriate filters to choose the resources you want to export. For this example we use:

{
  "metadataTransferJobId": "your-transfer-job-Id",
  "sources": [{
    "type": "iottwinmaker",
    "iotTwinMakerConfiguration": {
      "workspace": "arn:aws:iottwinmaker:us-east-1:111122223333:workspace/your-workspace-name",
      "filters": [{
        "filterByEntity": {
          "entityId": "parent"
        }
      }, {
        "filterByEntity": {
          "entityId": "child"
        }
      }, {
        "filterByComponentType": {
          "componentTypeId": "component.type.minimal"
        }
      }]
    }
  }],
  "destination": {
    "type": "s3",
    "s3Configuration": {
      "location": "arn:aws:s3:::amzn-s3-demo-bucket"
    }
  }
}

The filters array lets you specify which resources will be exported. In this example we filter by entity and componentType. Make sure to specify your AWS IoT TwinMaker workspace as the source and your Amazon S3 bucket as the destination of the metadata transfer job. Save your request body and record the file name; you will need it in the next step. For this example, we named our request body createMetadataTransferJobExport.json.
2. Run the following CLI command to invoke CreateMetadataTransferJob (replace the input-json file name with the name you gave your request body):

aws iottwinmaker create-metadata-transfer-job --region us-east-1 \
--cli-input-json file://createMetadataTransferJobExport.json

This creates a metadataTransferJob and begins the process of transferring your selected resources.
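For an import, the schema file referenced in the request body must already exist in the bucket; for an export, the results land in the bucket you specified. A short sketch of both checks, with placeholder names:

# Upload the schema file referenced in the import request body (names are placeholders).
aws s3 cp your_import_data.json s3://amzn-s3-demo-bucket/your_import_data.json

# After an export job completes, list the bucket to find the exported metadata file.
aws s3 ls s3://amzn-s3-demo-bucket/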
To check or update the status of a transfer job, use the following commands:

• To cancel a job, use the CancelMetadataTransferJob API action. When you call CancelMetadataTransferJob, the API only cancels a running metadataTransferJob; any resources already exported or imported are not affected by this API call.
• To retrieve information on a specific job, use the GetMetadataTransferJob API action. Or, you can call GetMetadataTransferJob on an existing transfer job with the following CLI command:

aws iottwinmaker get-metadata-transfer-job --metadata-transfer-job-id ExistingJobId

If you call GetMetadataTransferJob on a non-existing AWS IoT TwinMaker import or export job, you get a ResourceNotFoundException error in response.
• To list current jobs, use the ListMetadataTransferJobs API action. Here is a CLI example that calls ListMetadataTransferJobs with iottwinmaker as the destinationType and s3 as the sourceType:

aws iottwinmaker list-metadata-transfer-jobs --destination-type iottwinmaker --source-type s3

Note
You can change the values of the sourceType and destinationType parameters to match your import or export job's source and destination.

For more examples of CLI commands that invoke these API actions, see AWS IoT TwinMaker metadataTransferJob examples.

If you encounter any errors during the transfer job, see Error handling.

Error handling

After you create and run a transfer job, you can call GetMetadataTransferJob to diagnose any errors that occurred:

aws iottwinmaker get-metadata-transfer-job \
--metadata-transfer-job-id your_metadata_transfer_job_id \
--region us-east-1

Once you see the state of the job turn to COMPLETED, you can verify the results of the job. GetMetadataTransferJob returns an object called MetadataTransferJobProgress, which contains the following fields:

• failedCount: Indicates the number of resources that failed during the transfer process.
• skippedCount: Indicates the number of resources that were skipped during the transfer process.
• succeededCount: Indicates the number of resources that succeeded during the transfer process.
• totalCount: Indicates the total count of resources involved in the transfer process.

Additionally, a reportUrl element is returned which contains a pre-signed URL. If your transfer job has errors you wish to investigate further, you can download a full error report using this URL.
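Because the report URL is pre-signed, you can fetch it with any HTTP client. A minimal sketch, assuming the job ID is a placeholder and curl is available:

# Extract the pre-signed report URL from the job description (job ID is a placeholder).
REPORT_URL=$(aws iottwinmaker get-metadata-transfer-job \
    --metadata-transfer-job-id your_metadata_transfer_job_id \
    --region us-east-1 \
    --query reportUrl --output text)

# Download the full error report for offline inspection.
curl -s -o transfer-job-report.json "$REPORT_URL"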
Import metadata templates

You can import many components, componentTypes, or entities with a single bulk import operation. The examples in this section show how to do this.

template: Importing entities

Use the following template format for a job that imports entities:

{
  "entities": [
    {
      "description": "string",
      "entityId": "string",
      "entityName": "string",
      "parentEntityId": "string",
      "tags": {
        "string": "string"
      },
      "components": {
        "string": {
          "componentTypeId": "string",
          "description": "string",
          "properties": {
            "string": {
              "definition": {
                "configuration": { "string": "string" },
                "dataType": "DataType",
                "defaultValue": "DataValue",
                "displayName": "string",
                "isExternalId": "boolean",
                "isRequiredInEntity": "boolean",
                "isStoredExternally": "boolean",
                "isTimeSeries": "boolean"
              },
              "value": "DataValue"
            }
          },
          "propertyGroups": {
            "string": {
              "groupType": "string",
              "propertyNames": [ "string" ]
            }
          }
        }
      }
    }
  ]
}

template: Importing componentTypes

Use the following template format for a job that imports componentTypes:

{
  "componentTypes": [
    {
      "componentTypeId": "string",
      "componentTypeName": "string",
      "description": "string",
      "extendsFrom": [ "string" ],
      "functions": {
        "string": {
          "implementedBy": {
            "isNative": "boolean",
            "lambda": {
              "functionName": "Telemetry-tsDataReader",
              "arn": "Telemetry-tsDataReaderARN"
            }
          },
          "requiredProperties": [ "string" ],
          "scope": "string"
        }
      },
      "isSingleton": "boolean",
      "propertyDefinitions": {
        "string": {
          "configuration": { "string": "string" },
          "dataType": "DataType",
          "defaultValue": "DataValue",
          "displayName": "string",
          "isExternalId": "boolean",
          "isRequiredInEntity": "boolean",
          "isStoredExternally": "boolean",
          "isTimeSeries": "boolean"
        }
      },
      "propertyGroups": {
        "string": {
          "groupType": "string",
          "propertyNames": [ "string" ]
        }
      },
      "tags": {
        "string": "string"
      }
    }
  ]
}

template: Importing components

Use the following template format for a job that imports components:

{
  "entityComponents": [
    {
      "entityId": "string",
      "componentName": "string",
      "componentTypeId": "string",
      "description": "string",
      "properties": {
        "string": {
          "definition": {
            "configuration": { "string": "string" },
            "dataType": "DataType",
            "defaultValue": "DataValue",
            "displayName": "string",
            "isExternalId": "boolean",
            "isRequiredInEntity": "boolean",
            "isStoredExternally": "boolean",
            "isTimeSeries": "boolean"
          },
          "value": "DataValue"
        }
      },
      "propertyGroups": {
        "string": {
          "groupType": "string",
          "propertyNames": [ "string" ]
        }
      }
    }
  ]
}

AWS IoT TwinMaker metadataTransferJob examples

Use the following commands to manage your metadata transfers:

• CreateMetadataTransferJob API action. CLI command example:

aws iottwinmaker create-metadata-transfer-job --region us-east-1 \
--cli-input-json file://yourTransferFileName.json

• To cancel a job, use the CancelMetadataTransferJob API action. CLI command example:

aws iottwinmaker cancel-metadata-transfer-job --region us-east-1 \
--metadata-transfer-job-id job-to-cancel-id

When you call CancelMetadataTransferJob, it only cancels the specified metadata transfer job; any resources already exported or imported are not affected.
• To retrieve information on a specific job, use the GetMetadataTransferJob API action. CLI command example:

aws iottwinmaker get-metadata-transfer-job \
--metadata-transfer-job-id your_metadata_transfer_job_id \
--region us-east-1

• To list current jobs, use the ListMetadataTransferJobs API action. You can filter the results returned by ListMetadataTransferJobs using a JSON file. See the following procedure using the CLI:

1. Create a CLI input JSON file to specify the filters you want to use:

{
  "sourceType": "s3",
  "destinationType": "iottwinmaker",
  "filters": [{
    "workspaceId": "workspaceforbulkimport"
  }, {
    "state": "COMPLETED"
  }]
}

Save it and record the file name; you will need it when entering the CLI command.
2. Use the JSON file as an argument to the following CLI command:

aws iottwinmaker list-metadata-transfer-jobs --region us-east-1 \
--cli-input-json file://ListMetadataTransferJobsExample.json
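To skim job state at the command line without a filter file, you can project summary fields with a JMESPath query. A sketch; the metadataTransferJobSummaries field name is an assumption based on the ListMetadataTransferJobs response shape in the API reference:

# List jobs and show only the ID and state of each summary entry
# (the summary field names are assumptions; check the API reference).
aws iottwinmaker list-metadata-transfer-jobs --region us-east-1 \
    --source-type s3 --destination-type iottwinmaker \
    --query 'metadataTransferJobSummaries[].{id: metadataTransferJobId, state: status.state}'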
AWS IoT TwinMaker metadata transfer job schema

metadataTransferJob import schema:

Use this AWS IoT TwinMaker metadata schema to validate your data when you upload it to an Amazon S3 bucket:

{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "IoTTwinMaker",
  "description": "Metadata transfer job resource schema for IoTTwinMaker",
  "definitions": {
    "ExternalId": {
      "type": "string",
      "minLength": 1,
      "maxLength": 128,
      "pattern": "[a-zA-Z0-9][a-zA-Z_\\-0-9.:]*[a-zA-Z0-9]+"
    },
    "Description": {
      "type": "string",
      "minLength": 0,
      "maxLength": 512
    },
    "DescriptionWithDefault": {
      "type": "string",
      "minLength": 0,
      "maxLength": 512,
      "default": ""
    },
    "ComponentTypeName": {
      "description": "A friendly name for the component type.",
      "type": "string",
      "pattern": ".*[^\\u0000-\\u001F\\u007F]*.*",
      "minLength": 1,
      "maxLength": 256
    },
    "ComponentTypeId": {
      "description": "The ID of the component type.",
      "type": "string",
      "pattern": "[a-zA-Z_.\\-0-9:]+",
      "minLength": 1,
      "maxLength": 256
    },
    "ComponentName": {
      "description": "The name of the component.",
      "type": "string",
      "pattern": "[a-zA-Z_\\-0-9]+",
      "minLength": 1,
      "maxLength": 256
    },
    "EntityId": {
      "description": "The ID of the entity.",
      "type": "string",
      "minLength": 1,
      "maxLength": 128,
      "pattern": "[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}|^[a-zA-Z0-9][a-zA-Z_\\-0-9.:]*[a-zA-Z0-9]+"
    },
    "EntityName": {
      "description": "The name of the entity.",
      "type": "string",
      "minLength": 1,
      "maxLength": 256,
      "pattern": "[a-zA-Z_0-9-.][a-zA-Z_0-9-. ]*[a-zA-Z0-9]+"
    },
    "ParentEntityId": {
      "description": "The ID of the parent entity.",
      "type": "string",
      "minLength": 1,
      "maxLength": 128,
      "pattern": "\\$ROOT|^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}|^[a-zA-Z0-9][a-zA-Z_\\-0-9.:]*[a-zA-Z0-9]+",
      "default": "$ROOT"
    },
    "DisplayName": {
      "description": "A friendly name for the property.",
      "type": "string",
      "pattern": ".*[^\\u0000-\\u001F\\u007F]*.*",
      "minLength": 0,
      "maxLength": 256
    },
    "Tags": {
      "description": "Metadata that you can use to manage the entity / componentType",
      "patternProperties": {
        "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-@]*)$": {
          "type": "string",
          "minLength": 1,
          "maxLength": 256
        }
      },
      "existingJavaType": "java.util.Map<String,String>",
      "minProperties": 0,
      "maxProperties": 50
    },
    "Relationship": {
      "description": "The type of the relationship.",
      "type": "object",
      "properties": {
        "relationshipType": {
          "description": "The type of the relationship.",
          "type": "string",
          "pattern": ".*",
          "minLength": 1,
          "maxLength": 256
        },
        "targetComponentTypeId": {
          "description": "The ID of the target component type associated with this relationship.",
          "$ref": "#/definitions/ComponentTypeId"
        }
      },
      "additionalProperties": false
    },
    "DataValue": {
      "description": "An object that specifies a value for a property.",
      "type": "object",
      "properties": {
        "booleanValue": {
          "description": "A Boolean value.",
          "type": "boolean"
        },
        "doubleValue": {
          "description": "A double value.",
          "type": "number"
        },
        "expression": {
          "description": "An expression that produces the value.",
          "type": "string",
          "pattern": "(^\\$\\{Parameters\\.[a-zA-z]+([a-zA-z_0-9]*)}$)",
          "minLength": 1,
          "maxLength": 316
        },
        "integerValue": {
          "description": "An integer value.",
          "type": "integer"
        },
        "listValue": {
          "description": "A list of multiple values.",
          "type": "array",
          "minItems": 0,
          "maxItems": 50,
          "uniqueItems": false,
          "insertionOrder": false,
          "items": { "$ref": "#/definitions/DataValue" },
          "default": null
        },
        "longValue": {
          "description": "A long value.",
          "type": "integer",
          "existingJavaType": "java.lang.Long"
        },
        "stringValue": {
          "description": "A string value.",
          "type": "string",
          "pattern": ".*",
          "minLength": 1,
          "maxLength": 256
        },
        "mapValue": {
          "description": "An object that maps strings to multiple DataValue objects.",
          "type": "object",
          "patternProperties": {
            "[a-zA-Z_\\-0-9]+": { "$ref": "#/definitions/DataValue" }
          },
          "additionalProperties": { "$ref": "#/definitions/DataValue" }
        },
        "relationshipValue": {
          "description": "A value that relates a component to another component.",
          "type": "object",
          "properties": {
            "TargetComponentName": {
              "type": "string",
              "pattern": "[a-zA-Z_\\-0-9]+",
              "minLength": 1,
              "maxLength": 256
            },
            "TargetEntityId": {
              "type": "string",
              "pattern": "[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}|^[a-zA-Z0-9][a-zA-Z_\\-0-9.:]*[a-zA-Z0-9]+",
              "minLength": 1,
              "maxLength": 128
            }
          },
          "additionalProperties": false
        }
      },
      "additionalProperties": false
    },
    "DataType": {
      "description": "An object that specifies the data type of a property.",
      "type": "object",
      "properties": {
        "allowedValues": {
          "description": "The allowed values for this data type.",
          "type": "array",
          "minItems": 0,
          "maxItems": 50,
          "uniqueItems": false,
          "insertionOrder": false,
          "items": { "$ref": "#/definitions/DataValue" },
          "default": null
        },
        "nestedType": {
          "description": "The nested type in the data type.",
          "$ref": "#/definitions/DataType"
        },
        "relationship": {
          "description": "A relationship that associates a component with another component.",
          "$ref": "#/definitions/Relationship"
        },
        "type": {
          "description": "The underlying type of the data type.",
          "type": "string",
          "enum": [
            "RELATIONSHIP",
            "STRING",
            "LONG",
            "BOOLEAN",
            "INTEGER",
            "DOUBLE",
            "LIST",
            "MAP"
          ]
        },
        "unitOfMeasure": {
          "description": "The unit of measure used in this data type.",
          "type": "string",
          "pattern": ".*",
          "minLength": 1,
          "maxLength": 256
        }
      },
      "required": [ "type" ],
      "additionalProperties": false
    },
    "PropertyDefinition": {
      "description": "An object that specifies information about a property.",
      "type": "object",
      "properties": {
        "configuration": {
          "description": "An object that specifies information about a property.",
          "patternProperties": {
            "[a-zA-Z_\\-0-9]+": {
              "type": "string",
              "pattern": "[a-zA-Z_\\-0-9]+",
              "minLength": 1,
              "maxLength": 256
            }
          },
          "existingJavaType": "java.util.Map<String,String>"
        },
        "dataType": {
          "description": "An object that contains information about the data type.",
          "$ref": "#/definitions/DataType"
        },
        "defaultValue": {
          "description": "An object that contains the default value.",
          "$ref": "#/definitions/DataValue"
        },
        "displayName": {
          "description": "A friendly name for the property.",
          "$ref": "#/definitions/DisplayName"
        },
        "isExternalId": {
          "description": "A Boolean value that specifies whether the property ID comes from an external data store.",
          "type": "boolean",
          "default": null
        },
        "isRequiredInEntity": {
          "description": "A Boolean value that specifies whether the property is required.",
          "type": "boolean",
          "default": null
        },
        "isStoredExternally": {
          "description": "A Boolean value that specifies whether the property is stored externally.",
          "type": "boolean",
          "default": null
        },
        "isTimeSeries": {
          "description": "A Boolean value that specifies whether the property consists of time series data.",
          "type": "boolean",
          "default": null
        }
      },
      "additionalProperties": false
    },
    "PropertyDefinitions": {
      "type": "object",
      "patternProperties": {
        "[a-zA-Z_\\-0-9]+": { "$ref": "#/definitions/PropertyDefinition" }
      },
      "additionalProperties": { "$ref": "#/definitions/PropertyDefinition" }
    },
    "Property": {
      "type": "object",
      "properties": {
        "definition": {
          "description": "The definition of the property.",
          "$ref": "#/definitions/PropertyDefinition"
        },
        "value": {
          "description": "The value of the property.",
          "$ref": "#/definitions/DataValue"
        }
      },
      "additionalProperties": false
    },
    "Properties": {
      "type": "object",
      "patternProperties": {
        "[a-zA-Z_\\-0-9]+": { "$ref": "#/definitions/Property" }
      },
      "additionalProperties": { "$ref": "#/definitions/Property" }
    },
    "PropertyName": {
      "type": "string",
      "pattern": "[a-zA-Z_\\-0-9]+"
    },
    "PropertyGroup": {
      "description": "An object that specifies information about a property group.",
      "type": "object",
      "properties": {
        "groupType": {
          "description": "The type of property group.",
          "type": "string",
          "enum": [ "TABULAR" ]
        },
        "propertyNames": {
          "description": "The list of property names in the property group.",
          "type": "array",
          "minItems": 1,
          "maxItems": 256,
          "uniqueItems": true,
          "insertionOrder": false,
          "items": { "$ref": "#/definitions/PropertyName" },
          "default": null
        }
      },
      "additionalProperties": false
    },
    "PropertyGroups": {
      "type": "object",
      "patternProperties": {
        "[a-zA-Z_\\-0-9]+": { "$ref": "#/definitions/PropertyGroup" }
      },
      "additionalProperties": { "$ref": "#/definitions/PropertyGroup" }
    },
    "Component": {
      "type": "object",
      "properties": {
        "componentTypeId": { "$ref": "#/definitions/ComponentTypeId" },
        "description": { "$ref": "#/definitions/Description" },
        "properties": {
          "description": "An object that maps strings to the properties to set in the component type. Each string in the mapping must be unique to this object.",
          "$ref": "#/definitions/Properties"
        },
        "propertyGroups": {
          "description": "An object that maps strings to the property groups to set in the entity component. Each string in the mapping must be unique to this object.",
          "$ref": "#/definitions/PropertyGroups"
        }
      },
      "required": [ "componentTypeId" ],
      "additionalProperties": false
    },
    "RequiredProperty": {
      "type": "string",
      "pattern": "[a-zA-Z_\\-0-9]+"
    },
    "LambdaFunction": {
      "type": "object",
      "properties": {
        "arn": {
          "type": "string",
          "pattern": "arn:((aws)|(aws-cn)|(aws-us-gov)|(\\${partition})):lambda:(([a-z0-9-]+)|(\\${region})):([0-9]{12}|(\\${accountId})):function:[/a-zA-Z0-9_-]+",
          "minLength": 1,
          "maxLength": 128
        }
      },
      "additionalProperties": false,
      "required": [ "arn" ]
    },
    "DataConnector": {
      "description": "The data connector.",
      "type": "object",
      "properties": {
        "isNative": {
          "description": "A Boolean value that specifies whether the data connector is native to IoT TwinMaker.",
          "type": "boolean"
        },
        "lambda": {
          "description": "The Lambda function associated with this data connector.",
          "$ref": "#/definitions/LambdaFunction"
        }
      },
      "additionalProperties": false
    },
    "Function": {
      "description": "The function of component type.",
      "type": "object",
      "properties": {
        "implementedBy": {
          "description": "The data connector.",
          "$ref": "#/definitions/DataConnector"
        },
        "requiredProperties": {
          "description": "The required properties of the function.",
          "type": "array",
          "minItems": 1,
          "maxItems": 256,
          "uniqueItems": true,
          "insertionOrder": false,
          "items": { "$ref": "#/definitions/RequiredProperty" },
          "default": null
        },
        "scope": {
          "description": "The scope of the function.",
          "type": "string",
          "enum": [ "ENTITY", "WORKSPACE" ]
        }
      },
      "additionalProperties": false
    },
    "Entity": {
      "type": "object",
      "properties": {
        "description": {
          "description": "The description of the entity.",
          "$ref": "#/definitions/DescriptionWithDefault"
        },
        "entityId": { "$ref": "#/definitions/EntityId" },
        "entityExternalId": {
          "description": "The external ID of the entity.",
          "$ref": "#/definitions/ExternalId"
        },
        "entityName": { "$ref": "#/definitions/EntityName" },
        "parentEntityId": { "$ref": "#/definitions/ParentEntityId" },
        "tags": { "$ref": "#/definitions/Tags" },
        "components": {
          "description": "A map that sets information about a component.",
          "type": "object",
          "patternProperties": {
            "[a-zA-Z_\\-0-9]+": { "$ref": "#/definitions/Component" }
          },
          "additionalProperties": { "$ref": "#/definitions/Component" }
        }
      },
      "required": [ "entityId", "entityName" ],
      "additionalProperties": false
    },
    "ComponentType": {
      "type": "object",
      "properties": {
        "description": {
          "description": "The description of the component type.",
          "$ref": "#/definitions/DescriptionWithDefault"
        },
        "componentTypeId": { "$ref": "#/definitions/ComponentTypeId" },
        "componentTypeExternalId": {
          "description": "The external ID of the component type.",
          "$ref": "#/definitions/ExternalId"
        },
        "componentTypeName": { "$ref": "#/definitions/ComponentTypeName" },
        "extendsFrom": {
          "description": "Specifies the parent component type to extend.",
          "type": "array",
          "minItems": 1,
          "maxItems": 256,
          "uniqueItems": true,
          "insertionOrder": false,
          "items": { "$ref": "#/definitions/ComponentTypeId" },
          "default": null
        },
        "functions": {
          "description": "A map of functions in the component type. Each function's key must be unique to this map.",
          "type": "object",
          "patternProperties": {
            "[a-zA-Z_\\-0-9]+": { "$ref": "#/definitions/Function" }
          },
          "additionalProperties": { "$ref": "#/definitions/Function" }
        },
        "isSingleton": {
          "description": "A Boolean value that specifies whether an entity can have more than one component of this type.",
          "type": "boolean",
          "default": false
        },
        "propertyDefinitions": {
          "description": "A map of the property definitions in the component type. Each property definition's key must be unique to this map.",
          "$ref": "#/definitions/PropertyDefinitions"
        },
        "propertyGroups": {
          "description": "An object that maps strings to the property groups to set in the component type. Each string in the mapping must be unique to this object.",
          "$ref": "#/definitions/PropertyGroups"
        },
        "tags": { "$ref": "#/definitions/Tags" }
      },
      "required": [ "componentTypeId" ],
      "additionalProperties": false
    },
    "EntityComponent": {
      "type": "object",
      "properties": {
        "entityId": { "$ref": "#/definitions/EntityId" },
        "componentName": { "$ref": "#/definitions/ComponentName" },
        "componentExternalId": {
          "description": "The external ID of the component.",
          "$ref": "#/definitions/ExternalId"
        },
        "componentTypeId": { "$ref": "#/definitions/ComponentTypeId" },
        "description": {
          "description": "The description of the component.",
          "$ref": "#/definitions/Description"
        },
        "properties": {
          "description": "An object that maps strings to the properties to set in the component. Each string in the mapping must be unique to this object.",
          "$ref": "#/definitions/Properties"
        },
        "propertyGroups": {
          "description": "An object that maps strings to the property groups to set in the component. Each string in the mapping must be unique to this object.",
          "$ref": "#/definitions/PropertyGroups"
        }
      },
      "required": [ "entityId", "componentTypeId", "componentName" ],
      "additionalProperties": false
    }
  },
  "additionalProperties": false,
  "properties": {
    "entities": {
      "type": "array",
      "uniqueItems": false,
      "items": { "$ref": "#/definitions/Entity" }
    },
    "componentTypes": {
      "type": "array",
      "uniqueItems": false,
      "items": { "$ref": "#/definitions/ComponentType" }
    },
    "entityComponents": {
      "type": "array",
      "uniqueItems": false,
      "items": { "$ref": "#/definitions/EntityComponent" },
      "default": null
    }
  }
}

Here is an example that creates a new componentType called component.type.initial and creates an entity called initial:

{
  "componentTypes": [
    {
      "componentTypeId": "component.type.initial",
      "tags": {
        "key": "value"
      }
    }
  ],
  "entities": [
    {
      "entityName": "initial",
      "entityId": "initial"
    }
  ]
}

Here is an example that updates existing entities:

{
  "componentTypes": [
    {
      "componentTypeId": "component.type.initial",
      "description": "updated"
    }
  ],
  "entities": [
    {
      "entityName": "parent",
      "entityId": "parent"
    },
    {
      "entityName": "child",
      "entityId": "child",
      "components": {
        "testComponent": {
          "componentTypeId": "component.type.initial",
          "properties": {
            "testProperty": {
              "definition": {
                "configuration": {
                  "alias": "property"
                },
                "dataType": {
                  "relationship": {
                    "relationshipType": "parent",
                    "targetComponentTypeId": "test"
                  },
                  "type": "STRING",
                  "unitOfMeasure": "t"
                },
                "displayName": "displayName"
              }
            }
          }
        }
      },
      "parentEntityId": "parent"
    }
  ],
  "entityComponents": [
    {
      "entityId": "initial",
      "componentTypeId": "component.type.initial",
      "componentName": "entityComponent",
      "description": "additionalDescription",
      "properties": {
        "additionalProperty": {
          "definition": {
            "configuration": {
              "alias": "additionalProperty"
            },
            "dataType": {
              "type": "STRING"
            },
            "displayName": "additionalDisplayName"
          },
          "value": {
            "stringValue": "test"
          }
        }
      }
    }
  ]
}
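Because the import file is validated against this schema when the job runs, it can save a round trip to validate locally first. A sketch using the open-source check-jsonschema tool (an assumption; any JSON Schema draft 2020-12 validator works), with the schema saved as metadata-transfer-job-schema.json:

# Install a generic JSON Schema validator (assumption: Python and pip are available).
pip install check-jsonschema

# Validate the import template against the metadata transfer job schema
# before uploading it to Amazon S3.
check-jsonschema --schemafile metadata-transfer-job-schema.json your_import_data.json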
AWS IoT TwinMaker data connectors

AWS IoT TwinMaker uses a connector-based architecture so that you can connect data from your own data store to AWS IoT TwinMaker. This means you don't need to migrate data prior to using AWS IoT TwinMaker. Currently, AWS IoT TwinMaker supports first-party connectors for AWS IoT SiteWise. If you store modeling and property data in AWS IoT SiteWise, then you don't need to implement your own connectors. If you store your modeling or property data in other data stores, such as Timestream, DynamoDB, or Snowflake, then you must implement AWS Lambda connectors with the AWS IoT TwinMaker data connector interface so that AWS IoT TwinMaker can invoke your connector when necessary.

Topics
• AWS IoT TwinMaker data connectors
• AWS IoT TwinMaker Athena tabular data connector
• Developing AWS IoT TwinMaker time-series data connectors

AWS IoT TwinMaker data connectors

Connectors need access to your underlying data store to resolve sent queries and to return either results or an error. To learn about the available connectors, their request interfaces, and their response interfaces, see the following topics. For more information about the properties used in the connector interfaces, see the GetPropertyValueHistory API action.

Note
Some connectors have two timestamp fields, in both the request and response interfaces, for the start time and end time properties. Both startDateTime and endDateTime use a long number to represent epoch seconds, a format that is no longer supported. To maintain backwards compatibility, we still send a timestamp value to that field, but we recommend using the startTime and endTime fields, which are consistent with our API timestamp format.

Topics
• Schema initializer connector
• DataReaderByEntity
• DataReaderByComponentType
• DataReader
• AttributePropertyValueReaderByEntity
• DataWriter
• Examples

Schema initializer connector

You can use the schema initializer in the component type or entity lifecycle to fetch the component type or component properties from the underlying data source. The schema initializer automatically imports component type or component properties without explicitly calling an API action to set up properties.

SchemaInitializer request interface

{
  "workspaceId": "string",
  "entityId": "string",
  "componentName": "string",
  "properties": {
    // property name as key,
    // value is of type PropertyRequest
    "string": "PropertyRequest"
  }
}

Note
The map of properties in this request interface is a PropertyRequest. For more information, see PropertyRequest.

SchemaInitializer response interface

{
  "properties": {
    // property name as key,
    // value is of type PropertyResponse
    "string": "PropertyResponse"
  }
}

Note
The map of properties in this response interface is a PropertyResponse. For more information, see PropertyResponse.

DataReaderByEntity

DataReaderByEntity is a data plane connector that's used to get the time-series values of properties in a single component. For information about the property types, syntax, and format of this connector, see the GetPropertyValueHistory API action.

DataReaderByEntity request interface

{
  "startDateTime": long,  // In epoch sec, deprecated
  "startTime": "string",  // ISO-8601 timestamp format
  "endDateTime": long,    // In epoch sec, deprecated
  "endTime": "string",    // ISO-8601 timestamp format
  "properties": {
    // A map of properties as in the get-entity API response;
    // property name as key, value is of type PropertyResponse
    "string": "PropertyResponse"
  },
  "workspaceId": "string",
  "selectedProperties": List:"string",
  "propertyFilters": List:PropertyFilter,
  "entityId": "string",
  "componentName": "string",
  "componentTypeId": "string",
  "interpolation": InterpolationParameters,
  "nextToken": "string",
  "maxResults": int,
  "orderByTime": "string"
}

DataReaderByEntity response interface

{
  "propertyValues": [
    {
      "entityPropertyReference": EntityPropertyReference,  // The same as EntityPropertyReference
      "values": [
        {
          "timestamp": long,  // Epoch sec, deprecated
          "time": "string",   // ISO-8601 timestamp format
          "value": DataValue  // The same as DataValue
        }
      ]
    }
  ],
  "nextToken": "string"
}

DataReaderByComponentType

To get the time-series values of common properties that come from the same component type, use the data plane connector DataReaderByComponentType. For example, if you define time-series properties in the component type and you have multiple components using that component type, then you can query those properties across all components in a given time range. A common use case for this is when you want to query the alarm status of multiple components for a global view of your entities. For information about the property types, syntax, and format of this connector, see the GetPropertyValueHistory API action.
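As an illustration of the alarm use case, a filled-in request might look like the following sketch; the workspace ID, component type ID, and the alarm_status property name are hypothetical:

{
  "workspaceId": "myWorkspace",
  "componentTypeId": "com.example.alarm",   // hypothetical component type
  "selectedProperties": ["alarm_status"],   // hypothetical time-series property
  "startTime": "2022-04-07T00:00:00Z",
  "endTime": "2022-04-07T12:00:00Z",
  "maxResults": 100,
  "orderByTime": "ASCENDING",
  "properties": {
    "alarm_status": {
      "definition": {
        "dataType": { "type": "STRING" },
        "isTimeSeries": true
      }
    }
  }
}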
DataReaderByComponentType request interface

{
  "startDateTime": long,  // In epoch sec, deprecated
  "startTime": "string",  // ISO-8601 timestamp format
  "endDateTime": long,    // In epoch sec, deprecated
  "endTime": "string",    // ISO-8601 timestamp format
  "properties": {
    // A map of properties as in the get-entity API response;
    // property name as key, value is of type PropertyResponse
    "string": "PropertyResponse"
  },
  "workspaceId": "string",
  "selectedProperties": List:"string",
  "propertyFilters": List:PropertyFilter,
  "componentTypeId": "string",
  "interpolation": InterpolationParameters,
  "nextToken": "string",
  "maxResults": int,
  "orderByTime": "string"
}

DataReaderByComponentType response interface

{
  "propertyValues": [
    {
      "entityPropertyReference": EntityPropertyReference,  // The same as EntityPropertyReference
      "entityId": "string",
      "componentName": "string",
      "values": [
        {
          "timestamp": long,  // Epoch sec, deprecated
          "time": "string",   // ISO-8601 timestamp format
          "value": DataValue  // The same as DataValue
        }
      ]
    }
  ],
  "nextToken": "string"
}

DataReader

DataReader is a data plane connector that can handle both the DataReaderByEntity and the DataReaderByComponentType case. For information about the property types, syntax, and format of this connector, see the GetPropertyValueHistory API action.

DataReader request interface

The entityId and componentName are optional.

{
  "startDateTime": long,  // In epoch sec, deprecated
  "startTime": "string",  // ISO-8601 timestamp format
  "endDateTime": long,    // In epoch sec, deprecated
  "endTime": "string",    // ISO-8601 timestamp format
  "properties": {
    // A map of properties as in the get-entity API response;
    // property name as key, value is of type PropertyRequest
    "string": "PropertyRequest"
  },
  "workspaceId": "string",
  "selectedProperties": List:"string",
  "propertyFilters": List:PropertyFilter,
  "entityId": "string",
  "componentName": "string",
  "componentTypeId": "string",
  "interpolation": InterpolationParameters,
  "nextToken": "string",
  "maxResults": int,
  "orderByTime": "string"
}

DataReader response interface

{
  "propertyValues": [
    {
      "entityPropertyReference": EntityPropertyReference,  // The same as EntityPropertyReference
      "values": [
        {
          "timestamp": long,  // Epoch sec, deprecated
          "time": "string",   // ISO-8601 timestamp format
          "value": DataValue  // The same as DataValue
        }
      ]
    }
  ],
  "nextToken": "string"
}

AttributePropertyValueReaderByEntity

AttributePropertyValueReaderByEntity is a data plane connector that you can use to fetch the value of static properties in a single entity. For information about the property types, syntax, and format of this connector, see the GetPropertyValue API action.

AttributePropertyValueReaderByEntity request interface

{
  "properties": {
    // property name as key,
    // value is of type PropertyResponse
    "string": "PropertyResponse"
  },
  "workspaceId": "string",
  "entityId": "string",
  "componentName": "string",
  "selectedProperties": List:"string"
}

AttributePropertyValueReaderByEntity response interface

{
  "propertyValues": {
    "string": {  // property name as key
      "propertyReference": EntityPropertyReference,  // The same as EntityPropertyReference
      "propertyValue": DataValue                     // The same as DataValue
    }
  }
}

DataWriter

DataWriter is a data plane connector that you can use to write time-series data points back to the underlying data store for properties in a single component. For information about the property types, syntax, and format of this connector, see the BatchPutPropertyValues API action.

DataWriter request interface

{
  "workspaceId": "string",
  "properties": {
    // entity id as key
    "String": {
      // property name as key,
      // value is of type PropertyResponse
      "string": PropertyResponse
    }
  },
  "entries": [
    {
      "entryId": "string",
      "entityPropertyReference": EntityPropertyReference,  // The same as EntityPropertyReference
      "propertyValues": [
        {
          "timestamp": long,  // Epoch sec, deprecated
          "time": "string",   // ISO-8601 timestamp format
          "value": DataValue  // The same as DataValue
        }
      ]
    }
  ]
}

DataWriter response interface

{
  "errorEntries": [
    {
      "errors": List:BatchPutPropertyError  // The value is a list of type BatchPutPropertyError
    }
  ]
}
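A Lambda connector can also be exercised outside of AWS IoT TwinMaker by invoking the function directly with a payload shaped like the request interfaces above. A sketch, assuming a hypothetical function name and a request saved locally as data-reader-request.json:

# Invoke a hypothetical connector Lambda with a DataReaderByEntity-shaped payload
# and inspect the response it would return to AWS IoT TwinMaker.
aws lambda invoke \
    --function-name my-data-reader-connector \
    --cli-binary-format raw-in-base64-out \
    --payload file://data-reader-request.json \
    connector-response.json

cat connector-response.json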
Examples

The following JSON samples are examples of request and response syntax for multiple connectors.

• SchemaInitializer: The following examples show the schema initializer in a component type lifecycle.

Request:

{
  "workspaceId": "myWorkspace",
  "properties": {
    "modelId": {
      "definition": {
        "dataType": { "type": "STRING" },
        "isExternalId": true,
        "isFinal": true,
        "isImported": false,
        "isInherited": false,
        "isRequiredInEntity": true,
        "isStoredExternally": false,
        "isTimeSeries": false,
        "defaultValue": { "stringValue": "myModelId" }
      },
      "value": { "stringValue": "myModelId" }
    },
    "tableName": {
      "definition": {
        "dataType": { "type": "STRING" },
        "isExternalId": false,
        "isFinal": false,
        "isImported": false,
        "isInherited": false,
        "isRequiredInEntity": false,
        "isStoredExternally": false,
        "isTimeSeries": false,
        "defaultValue": { "stringValue": "myTableName" }
      },
      "value": { "stringValue": "myTableName" }
    }
  }
}

Response:

{
  "properties": {
    "myProperty1": {
      "definition": {
        "dataType": {
          "type": "DOUBLE",
          "unitOfMeasure": "%"
        },
        "configuration": { "myProperty1Id": "idValue" },
        "isTimeSeries": true
      }
    },
    "myProperty2": {
      "definition": {
        "dataType": { "type": "STRING" },
        "isTimeSeries": false,
        "defaultValue": { "stringValue": "property2Value" }
      }
    }
  }
}

• Schema initializer in entity lifecycle:

Request:

{
  "workspaceId": "myWorkspace",
  "entityId": "myEntity",
  "componentName": "myComponent",
  "properties": {
    "assetId": {
      "definition": {
        "dataType": { "type": "STRING" },
        "isExternalId": true,
        "isFinal": true,
        "isImported": false,
        "isInherited": false,
        "isRequiredInEntity": true,
        "isStoredExternally": false,
        "isTimeSeries": false
      },
      "value": { "stringValue": "myAssetId" }
    },
    "tableName": {
      "definition": {
        "dataType": { "type": "STRING" },
        "isExternalId": false,
        "isFinal": false,
        "isImported": false,
        "isInherited": false,
        "isRequiredInEntity": false,
        "isStoredExternally": false,
        "isTimeSeries": false
      },
      "value": { "stringValue": "myTableName" }
    }
  }
}

Response:

{
  "properties": {
    "myProperty1": {
      "definition": {
        "dataType": {
          "type": "DOUBLE",
          "unitOfMeasure": "%"
        },
        "configuration": { "myProperty1Id": "idValue" },
        "isTimeSeries": true
      }
    },
    "myProperty2": {
      "definition": {
        "dataType": { "type": "STRING" },
        "isTimeSeries": false
      },
      "value": { "stringValue": "property2Value" }
    }
  }
}
• DataReaderByEntity and DataReader:

Request:

{
  "workspaceId": "myWorkspace",
  "entityId": "myEntity",
  "componentName": "myComponent",
  "selectedProperties": [
    "Temperature",
    "Pressure"
  ],
  "startTime": "2022-04-07T04:04:42Z",
  "endTime": "2022-04-07T04:04:45Z",
  "maxResults": 4,
  "orderByTime": "ASCENDING",
  "properties": {
    "assetId": {
      "definition": {
        "dataType": { "type": "STRING" },
        "isExternalId": true,
        "isFinal": true,
        "isImported": false,
        "isInherited": false,
        "isRequiredInEntity": true,
        "isStoredExternally": false,
        "isTimeSeries": false
      },
      "value": { "stringValue": "myAssetId" }
    },
    "Temperature": {
      "definition": {
        "configuration": { "temperatureId": "xyz123" },
        "dataType": {
          "type": "DOUBLE",
          "unitOfMeasure": "DEGC"
        },
        "isExternalId": false,
        "isFinal": false,
        "isImported": true,
        "isInherited": false,
        "isRequiredInEntity": false,
        "isStoredExternally": false,
        "isTimeSeries": true
      }
    },
    "Pressure": {
      "definition": {
        "configuration": { "pressureId": "xyz456" },
        "dataType": {
          "type": "DOUBLE",
          "unitOfMeasure": "MPA"
        },
        "isExternalId": false,
        "isFinal": false,
        "isImported": true,
        "isInherited": false,
        "isRequiredInEntity": false,
        "isStoredExternally": false,
        "isTimeSeries": true
      }
    }
  }
}

Response:

{
  "propertyValues": [
    {
      "entityPropertyReference": {
        "entityId": "myEntity",
        "componentName": "myComponent",
        "propertyName": "Temperature"
      },
      "values": [
        {
          "time": "2022-04-07T04:04:42Z",
          "value": { "doubleValue": 588.168 }
        },
        {
          "time": "2022-04-07T04:04:43Z",
          "value": { "doubleValue": 592.4224 }
        }
      ]
    }
  ],
  "nextToken": "qwertyuiop"
}

• AttributePropertyValueReaderByEntity:

Request:

{
  "workspaceId": "myWorkspace",
  "entityId": "myEntity",
  "componentName": "myComponent",
  "selectedProperties": [
    "manufacturer"
  ],
  "properties": {
    "assetId": {
      "definition": {
        "dataType": { "type": "STRING" },
        "isExternalId": true,
        "isFinal": true,
        "isImported": false,
        "isInherited": false,
        "isRequiredInEntity": true,
        "isStoredExternally": false,
        "isTimeSeries": false
      },
      "value": { "stringValue": "myAssetId" }
    },
    "manufacturer": {
      "definition": {
        "dataType": { "type": "STRING" },
        "configuration": { "manufacturerPropId": "M001" },
        "isExternalId": false,
        "isFinal": false,
        "isImported": false,
        "isInherited": false,
        "isRequiredInEntity": false,
        "isStoredExternally": true,
        "isTimeSeries": false
      }
    }
  }
}

Response:

{
  "propertyValues": {
    "manufacturer": {
      "propertyReference": {
        "propertyName": "manufacturer",
        "entityId": "myEntity",
        "componentName": "myComponent"
      },
      "propertyValue": { "stringValue": "Amazon" }
    }
  }
}

• DataWriter:

Request:

{
  "workspaceId": "myWorkspaceId",
  "properties": {
    "myEntity": {
      "Temperature": {
        "definition": {
          "configuration": { "temperatureId": "xyz123" },
          "dataType": {
            "type": "DOUBLE",
            "unitOfMeasure": "DEGC"
          },
          "isExternalId": false,
          "isFinal": false,
          "isImported": true,
          "isInherited": false,
          "isRequiredInEntity": false,
          "isStoredExternally": false,
          "isTimeSeries": true
        }
      }
    }
  },
  "entries": [
    {
      "entryId": "myEntity",
      "entityPropertyReference": {
        "entityId": "myEntity",
        "componentName": "myComponent",
        "propertyName": "Temperature"
      },
      "propertyValues": [
        {
          "timestamp": 1626201120,
          "value": { "doubleValue": 95.6958 }
        },
        {
          "timestamp": 1626201132,
          "value": { "doubleValue": 80.6959 }
        }
      ]
    }
  ]
}

Response:

{
  "errorEntries": [
    {
      "errors": [
        {
          "errorCode": "409",
          "errorMessage": "Conflict value at same timestamp",
          "entry": {
            "entryId": "myEntity",
            "entityPropertyReference": {
              "entityId": "myEntity",
              "componentName": "myComponent",
              "propertyName": "Temperature"
            },
            "propertyValues": [
              {
                "time": "2022-04-07T04:04:42Z",
                "value": { "doubleValue": 95.6958 }
              }
            ]
          }
        }
      ]
    }
  ]
}

AWS IoT TwinMaker Athena tabular data connector

With the Athena tabular data connector, you can access and use your Athena data stores in AWS IoT TwinMaker. You can use your Athena data to build digital twins without an intensive data migration effort. You can either use the prebuilt connector or create a custom Athena connector to access data from your Athena data sources.

AWS IoT TwinMaker Athena data connector prerequisites

Before you use the Athena tabular data connector, complete the following prerequisites:

• Create managed Athena tables and their associated Amazon S3 resources. For information on using Athena, see the Athena documentation.
• Create an AWS IoT TwinMaker workspace. You can create a workspace in the AWS IoT TwinMaker console.
• Update your workspace IAM role with Athena permissions. For more information, see Modify your workspace IAM role to use the Athena data connector.
• Become familiar with AWS IoT TwinMaker's entity-component system and how to create entities. For more information, see Create your first entity.
• Become familiar with AWS IoT TwinMaker's data connectors. For more information, see AWS IoT TwinMaker data connectors.

Using the Athena data connector

To use the Athena data connector, you must create a component, using the Athena connector as the component type. Then you attach the component to an entity within your scene for use in AWS IoT TwinMaker.

Create a component type with the Athena data connector

Use this procedure to create an AWS IoT TwinMaker component type with the Athena tabular data connector:

1. Navigate to the AWS IoT TwinMaker console.
2. Open an existing workspace or create a new one.
3. From the left side navigation menu, choose Component types, and select Create component type to open the component type creation page.
4. On the Create component type page, fill in the ID field with an ID that matches your use case.
5. Choose the Base type. From the dropdown list, select the Athena tabular data connector, which is labeled com.amazon.athena.connector.
6. Configure the component type's Data source by choosing Athena resources for the following fields:
• Choose an Athena data source.
• Choose an Athena database.
• Choose a Table name.
• Choose an Athena workgroup.
7. Once you have selected the Athena resources you want to use as the data source, choose which columns from the table you want to include.
8. Select an External ID column name. Select a column from the previous step to serve as the external ID column. The external ID is the ID that's used to represent an Athena asset and map it to an AWS IoT TwinMaker entity.
9. (Optional) Add AWS tags to these resources, so you can group and organize them.
10. Choose Create component type to finish creating the component type.
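The same component type can also be created from the CLI by extending com.amazon.athena.connector and overriding the connector's required properties with default values. A sketch, not the only approach; the workspace ID, component type ID, and all Athena resource values are placeholders, and the property names are taken from the JSON reference later in this section:

# Create a child component type that extends the prebuilt Athena connector
# (all IDs and Athena resource values are placeholders).
aws iottwinmaker create-component-type \
    --workspace-id your-workspace-name \
    --component-type-id com.example.athena.table \
    --extends-from "com.amazon.athena.connector" \
    --property-definitions '{
        "athenaDataSource": {"defaultValue": {"stringValue": "AwsDataCatalog"}},
        "athenaDatabase": {"defaultValue": {"stringValue": "your_database"}},
        "athenaTable": {"defaultValue": {"stringValue": "your_table"}},
        "athenaWorkgroup": {"defaultValue": {"stringValue": "primary"}},
        "athenaExternalIdColumnName": {"defaultValue": {"stringValue": "asset_id"}}
    }'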
From the dropdown list, select the Athena tabular data connector, which is labeled com.amazon.athena.connector.
6. Configure the component type's Data source by choosing Athena resources for the following fields:
• Choose an Athena data source.
• Choose an Athena database.
• Choose a table name.
• Choose an Athena workgroup.
7. Once you have selected the Athena resources you want to use as the data source, choose which columns from the table you want to include.
8. Select an External ID column name. Select a column from the previous step to serve as the external ID column. The external ID is the ID that's used to represent an Athena asset and map it to an AWS IoT TwinMaker entity.
9. (Optional) Add AWS tags to these resources, so you can group and organize them.
10. Choose Create component type to finish creating the component type.

Create a component with the Athena data connector type and attach it to an entity

Use this procedure to create an AWS IoT TwinMaker component with the Athena tabular data connector and attach it to an entity:

Note
You must have an existing component type that uses the Athena tabular data connector as a data source in order to complete this procedure. See the previous procedure Create a component type with the Athena data connector before starting this walkthrough.

1. Navigate to the AWS IoT TwinMaker console.
2. Open an existing workspace or create a new one.
3. From the left side navigation menu, choose Entities, and select the entity you want to add the component to, or create a new entity.
4. Create a new entity.
5. Next, select Add component, and fill in the Component name field with a name that matches your use case.
6. From the Component type dropdown menu, select the component type ID that you created in the previous procedure.
7. Enter the Component information and a Component Name, and select the child component type created previously. This is the component type you created with the Athena data connector.
8. In the Properties section, enter the athenaComponentExternalId for the component.
9. Choose Add component to finish creating the component.
You have now successfully created a component with the Athena data connector as the component type and attached it to an entity.
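If you prefer to script this step, the following is a minimal sketch using the AWS SDK for Python (Boto3). The workspace, entity, component, and component type IDs are illustrative assumptions, not values from this guide; replace them with the IDs you created in the preceding procedures.

import boto3

iottwinmaker = boto3.client("iottwinmaker")

# Attach a component that uses the Athena-based component type created in the
# previous procedure. Only the external ID is supplied per component; the
# Athena data source, database, table, and workgroup come from the component type.
iottwinmaker.create_entity(
    workspaceId="MyWorkspace",                                   # assumed workspace
    entityName="Mixer01",                                        # assumed entity name
    components={
        "AthenaTelemetry": {
            "componentTypeId": "com.example.athena.mixertable",  # assumed type ID
            "properties": {
                "athenaComponentExternalId": {
                    "value": {"stringValue": "mixer-001"}        # assumed row key
                }
            },
        }
    },
)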
{ "componentTypeId": "com.amazon.athena.connector", "description": "Athena connector for syncing tabular data", "workspaceId":"AmazonOwnedTypesWorkspace", "propertyGroups": { "tabularPropertyGroup": { "groupType": "TABULAR", "propertyNames": [] } }, "propertyDefinitions": { "athenaDataSource": { "dataType": { "type": "STRING" }, "isRequiredInEntity": true }, "athenaDatabase": { "dataType": { "type": "STRING" }, "isRequiredInEntity": true }, "athenaTable": { "dataType": { "type": "STRING" }, "isRequiredInEntity": true }, "athenaWorkgroup": { "dataType": { "type": "STRING" }, "isRequiredInEntity": true }, "athenaExternalIdColumnName": { "dataType": { "type": "STRING" }, "isRequiredInEntity": true, "isExternalId": false }, "athenaComponentExternalId": { "dataType": { "type": "STRING" }, "isStoredExternally": false, Using the Athena tabular data connector JSON reference 92 AWS IoT TwinMaker User Guide "isRequiredInEntity": true, "isExternalId": true } }, "functions": { "tabularDataReaderByEntity": { "implementedBy": { "isNative": true } } } } Using the Athena data connector You can surface your entities that are using Athena tables in Grafana. For more information, see AWS IoT TwinMaker Grafana dashboard integration. Read the Athena documentation for information on creating and using Athena tables to store data. Troubleshooting the Athena data connector This topic covers common issues you may encounter when configuring the Athena data connector. Athena workgroup location: When creating Athena connector componentType, an Athena workgroup has to have output location setup. See How workgroups work. Missing IAM role permissions: The AWS IoT TwinMaker; workspace role may be missing Athena API access permission when creating a componentType, adding a Ca component to an entity, or running the GetPropertyValue API. To update IAM permissions see Create and manage a service role for AWS IoT TwinMaker. Visualize Athena tabular data in Grafana A Grafana plugin is also available to visualize your tabular data on Grafana a dashboard panel with additional features such as sorting and filtering based on selected properties without making API Using the Athena data connector 93 AWS IoT TwinMaker User Guide calls to AWS IoT TwinMaker, or interactions with Athena. This topic shows you how to configure Grafana to visualize Athena tabular data. Prerequisites Before configuring a Grafana panel for visualizing Athena tabular data, review the following prerequisites: • You have set up a Grafana environment. For more information see, AWS IoT TwinMaker |
Using the Athena data connector

You can surface your entities that are using Athena tables in Grafana. For more information, see AWS IoT TwinMaker Grafana dashboard integration. Read the Athena documentation for information on creating and using Athena tables to store data.

Troubleshooting the Athena data connector

This topic covers common issues you may encounter when configuring the Athena data connector.

Athena workgroup location: When you create an Athena connector component type, the Athena workgroup must have an output location set up. See How workgroups work.

Missing IAM role permissions: The AWS IoT TwinMaker workspace role may be missing Athena API access permissions when you create a component type, add a component to an entity, or run the GetPropertyValue API. To update IAM permissions, see Create and manage a service role for AWS IoT TwinMaker.

Visualize Athena tabular data in Grafana

A Grafana plugin is also available to visualize your tabular data in a Grafana dashboard panel, with additional features such as sorting and filtering based on selected properties, without making API calls to AWS IoT TwinMaker or interacting with Athena. This topic shows you how to configure Grafana to visualize Athena tabular data.

Prerequisites

Before configuring a Grafana panel for visualizing Athena tabular data, review the following prerequisites:
• You have set up a Grafana environment. For more information, see AWS IoT TwinMaker Grafana integration.
• You can configure a Grafana datasource. For more information, see Grafana AWS IoT TwinMaker.
• You are familiar with creating a new dashboard and adding a new panel.

Visualize Athena tabular data in Grafana

This procedure shows you how to set up a Grafana panel to visualize Athena tabular data.
1. Open your AWS IoT TwinMaker Grafana dashboard.
2. Select the Table panel in the panel settings.
3. Select your datasource in the query configuration.
4. Select the Get Property Value query.
5. Select an entity.
6. Select a component that has a componentType that extends the Athena base component type.
7. Select the property group of your Athena table.
8. Select any number of properties from the property group.
9. Configure the tabular conditions through a list of filters and property orders, with the following options:
• Filter: define an expression for a property value to filter your data.
• OrderBy: specify whether data should be returned in ascending or descending order for a property.
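If you also want to retrieve the same tabular data programmatically, outside Grafana, a hedged sketch of a GetPropertyValue call with tabular conditions might look like the following. The entity, component, and column names are assumptions based on the component type shown earlier; the exact request and response shapes are documented in the AWS IoT TwinMaker API Reference.

import boto3

iottwinmaker = boto3.client("iottwinmaker")

# Query the tabular property group with a filter and a sort order, mirroring
# the Filter and OrderBy options configured in the Grafana panel.
response = iottwinmaker.get_property_value(
    workspaceId="MyWorkspace",
    entityId="Mixer01",
    componentName="AthenaTelemetry",
    propertyGroupName="tabularPropertyGroup",
    selectedProperties=["temperature", "rpm"],   # assumed column names
    tabularConditions={
        "propertyFilters": [
            {"propertyName": "temperature", "operator": ">=", "value": {"doubleValue": 40.0}}
        ],
        "orderBy": [{"propertyName": "temperature", "order": "DESCENDING"}],
    },
    maxResults=50,
)

# tabularPropertyValues holds row sets of column-name -> DataValue maps.
for row_set in response.get("tabularPropertyValues", []):
    for row in row_set:
        print(row)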
Developing AWS IoT TwinMaker time-series data connectors

This section explains how to develop a time-series data connector in a step-by-step process. Additionally, we present an example time-series data connector based on the complete cookie factory sample, which includes 3D models, entities, components, alarms, and connectors. The cookie factory sample source is available in the AWS IoT TwinMaker samples GitHub repository.

Topics
• AWS IoT TwinMaker time-series data connector prerequisites
• Time-series data connector background
• Developing a time-series data connector
• Improving your data connector
• Testing your connector
• Security
• Creating AWS IoT TwinMaker resources
• What's next
• AWS IoT TwinMaker cookie factory example time-series connector

AWS IoT TwinMaker time-series data connector prerequisites

Before developing your time-series data connector, we recommend that you complete the following tasks:
• Create an AWS IoT TwinMaker workspace.
• Create AWS IoT TwinMaker component types.
• Create AWS IoT TwinMaker entities.
• (Optional) Read Using and creating component types.
• (Optional) Read AWS IoT TwinMaker data connector interface to get a general understanding of AWS IoT TwinMaker data connectors.

Note
For an example of a fully implemented connector, see our cookie factory example implementation.

Time-series data connector background

Imagine you are working with a factory that has a set of cookie mixers and a water tank. You would like to build AWS IoT TwinMaker digital twins of these physical entities so that you can monitor their operational states by checking various time-series metrics.

You have on-site sensors set up, and you are already streaming measurement data into a Timestream database. You want to be able to view and organize the measurement data in AWS IoT TwinMaker with minimal overhead. You can accomplish this task by using a time-series data connector. The following image shows an example telemetry table, which is populated through the use of a time-series connector.

The datasets and the Timestream table used in this screenshot are available in the AWS IoT TwinMaker samples GitHub repository. Also see the cookie factory example connector for the implementation, which produces the result shown in the preceding screenshot.

Time-series data connector data flow

For data plane queries, AWS IoT TwinMaker fetches the corresponding properties of both components and component types from the component and component type definitions. AWS IoT TwinMaker forwards properties to AWS Lambda functions along with any API query parameters in the query. AWS IoT TwinMaker uses Lambda functions to access and resolve queries from data sources and return the results of those queries. The Lambda functions use the component and component type properties from the data plane to resolve the initial request. The results of the Lambda query are mapped to an API response and returned to you.
AWS IoT TwinMaker defines the data connector interface and uses that to interact with Lambda functions. Using data connectors, you can query your data source from the AWS IoT TwinMaker API without any data migration effort. The following image outlines the basic data flow described in the previous paragraphs.

Developing a time-series data connector

The following procedure outlines a development model that incrementally builds up to a functional time-series data connector. The basic steps are as follows:

1. Create a valid basic component type

In a component type, you define common properties that are shared across your components. To learn more about defining component types, see Using and creating component types.

AWS IoT TwinMaker uses an entity-component modeling pattern, so each component is attached to an entity. We recommend that you model each physical item as an entity and model different data sources with their own component types. The following example shows a Timestream template component type with one property:

{
    "componentTypeId": "com.example.timestream-telemetry",
    "workspaceId": "MyWorkspace",
    "functions": {
        "dataReader": {
            "implementedBy": {
                "lambda": { "arn": "lambdaArn" }
            }
        }
    },
    "propertyDefinitions": {
        "telemetryType": {
            "dataType": { "type": "STRING" },
            "isExternalId": false,
            "isStoredExternally": false,
            "isTimeSeries": false,
            "isRequiredInEntity": true
        },
        "telemetryId": {
            "dataType": { "type": "STRING" },
            "isExternalId": true,
            "isStoredExternally": false,
            "isTimeSeries": false,
            "isRequiredInEntity": true
        },
        "Temperature": {
            "dataType": { "type": "DOUBLE" },
            "isExternalId": false,
            "isTimeSeries": true,
            "isStoredExternally": true,
            "isRequiredInEntity": false
        }
    }
}

The key elements of the component type are the following:
• The telemetryId property identifies the unique key of the physical item in the corresponding data source. The data connector uses this property as a filter condition to only query values associated with the given item. Additionally, if you include the telemetryId property value in the data plane API response, then the client side takes the ID and can perform a reverse lookup if necessary.
• The lambdaArn field identifies the Lambda function with which the component type engages.
• The isRequiredInEntity flag enforces the ID creation. This flag is required so that when the component is created, the item's ID is also instantiated.
• The telemetryId is added to the component type as an external ID so that the item can be identified in the Timestream table.
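As a sketch of how you might register this component type with the API instead of the console, the following uses the AWS SDK for Python (Boto3); the workspace ID and Lambda ARN are placeholders, not real resources.

import boto3

iottwinmaker = boto3.client("iottwinmaker")

# Mirrors the JSON template above; replace the placeholder ARN with the ARN
# of your connector function.
iottwinmaker.create_component_type(
    workspaceId="MyWorkspace",
    componentTypeId="com.example.timestream-telemetry",
    functions={
        "dataReader": {
            "implementedBy": {
                "lambda": {"arn": "arn:aws:lambda:us-east-1:111122223333:function:telemetry-reader"}
            }
        }
    },
    propertyDefinitions={
        "telemetryType": {"dataType": {"type": "STRING"}, "isRequiredInEntity": True},
        "telemetryId": {"dataType": {"type": "STRING"}, "isRequiredInEntity": True, "isExternalId": True},
        "Temperature": {"dataType": {"type": "DOUBLE"}, "isTimeSeries": True, "isStoredExternally": True},
    },
)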
2. Create a component with the component type

To use the component type you created, you must create a component and attach it to the entity from which you wish to retrieve data. The following steps detail the process of creating that component:
a. Navigate to the AWS IoT TwinMaker console.
b. Select and open the same workspace in which you created the component types.
c. Navigate to the entity page.
d. Create a new entity or select an existing entity from the table.
e. Once you have selected the entity you wish to use, choose Add component to open the Add component page.
f. Give the component a name, and for the Type, choose the component type you created with the template in 1. Create a valid basic component type.
3. Make your component type call a Lambda connector

The Lambda connector needs to access the data source, generate the query statement based on the input, and forward it to the data source. The following example shows a JSON request template that does this.

{
    "workspaceId": "MyWorkspace",
    "entityId": "MyEntity",
    "componentName": "TelemetryData",
    "selectedProperties": ["Temperature"],
    "startTime": "2022-08-25T00:00:00Z",
    "endTime": "2022-08-25T00:00:05Z",
    "maxResults": 3,
    "orderByTime": "ASCENDING",
    "properties": {
        "telemetryType": {
            "definition": {
                "dataType": { "type": "STRING" },
                "isExternalId": false,
                "isFinal": false,
                "isImported": false,
                "isInherited": false,
                "isRequiredInEntity": false,
                "isStoredExternally": false,
                "isTimeSeries": false
            },
            "value": { "stringValue": "Mixer" }
        },
        "telemetryId": {
            "definition": {
                "dataType": { "type": "STRING" },
                "isExternalId": true,
                "isFinal": true,
                "isImported": false,
                "isInherited": false,
                "isRequiredInEntity": true,
                "isStoredExternally": false,
                "isTimeSeries": false
            },
            "value": { "stringValue": "item_A001" }
        },
        "Temperature": {
            "definition": {
                "dataType": { "type": "DOUBLE" },
                "isExternalId": false,
                "isFinal": false,
                "isImported": true,
                "isInherited": false,
                "isRequiredInEntity": false,
                "isStoredExternally": false,
                "isTimeSeries": true
            }
        }
    }
}

The key elements of the request:
• The selectedProperties is a list you populate with the properties for which you want Timestream measurements.
• The startDateTime, startTime, endDateTime, and endTime fields specify a time range for the request. This determines the sample range for the measurements returned.
• The entityId is the name of the entity from which you are querying data.
• The componentName is the name of the component from which you are querying data.
• Use the orderByTime field to organize the order in which the results are displayed.

In the preceding example request, we would expect to get a series of samples for the selected properties during the given time window for the given item, with the selected time order. The response statement can be summarized as the following:

{
    "propertyValues": [
        {
            "entityPropertyReference": {
                "entityId": "MyEntity",
                "componentName": "TelemetryData",
                "propertyName": "Temperature"
            },
            "values": [
                { "time": "2022-08-25T00:00:00Z", "value": { "doubleValue": 588.168 } },
                { "time": "2022-08-25T00:00:01Z", "value": { "doubleValue": 592.4224 } },
                { "time": "2022-08-25T00:00:02Z", "value": { "doubleValue": 594.9383 } }
            ]
        }
    ],
    "nextToken": "..."
}
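To make the contract concrete, here is a minimal, runnable sketch of a connector Lambda that consumes the request shape above and produces the response shape. The query_timestream helper is hypothetical; replace the stub with a real query against your data source.

def query_timestream(telemetry_id, measure_name, start_time, end_time, max_rows, order):
    # Hypothetical helper: replace with a real Timestream query (a sample SQL
    # statement appears later in this section). Returns [] so the sketch runs.
    return []

def lambda_handler(event, context):
    # The event carries the fields shown in the request template: workspaceId,
    # entityId, componentName, selectedProperties, startTime, endTime,
    # maxResults, orderByTime, and the resolved component properties.
    telemetry_id = event["properties"]["telemetryId"]["value"]["stringValue"]
    property_values = []
    for prop in event["selectedProperties"]:
        values = query_timestream(
            telemetry_id=telemetry_id,
            measure_name=prop,
            start_time=event["startTime"],
            end_time=event["endTime"],
            max_rows=event.get("maxResults", 100),
            order=event.get("orderByTime", "ASCENDING"),
        )
        property_values.append({
            "entityPropertyReference": {
                "entityId": event["entityId"],
                "componentName": event["componentName"],
                "propertyName": prop,
            },
            # Each entry is {"time": <ISO-8601 string>, "value": {"doubleValue": ...}}.
            "values": values,
        })
    return {"propertyValues": property_values, "nextToken": None}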
{ "workspaceId": "MyWorkspace", "entityId": "MyEntity", "componentName": "TelemetryData", "selectedProperties": ["Temperature", "RPM"], Developing a time-series data connector 103 AWS IoT TwinMaker User Guide "startTime": "2022-08-25T00:00:00Z", "endTime": "2022-08-25T00:00:05Z", "maxResults": 3, "orderByTime": "ASCENDING", "properties": { "telemetryType": { "definition": { "dataType": { "type": "STRING" }, "isExternalId": false, "isFinal": false, "isImported": false, "isInherited": false, "isRequiredInEntity": false, "isStoredExternally": false, "isTimeSeries": false }, "value": { "stringValue": "Mixer" } }, "telemetryId": { "definition": { "dataType": { "type": "STRING" }, "isExternalId": true, "isFinal": true, "isImported": false, "isInherited": false, "isRequiredInEntity": true, "isStoredExternally": false, "isTimeSeries": false }, "value": { "stringValue": "item_A001" } }, "Temperature": { "definition": { "dataType": { "type": "DOUBLE" }, "isExternalId": false, "isFinal": false, "isImported": true, "isInherited": false, "isRequiredInEntity": false, "isStoredExternally": false, Developing a time-series data connector 104 AWS IoT TwinMaker User Guide "isTimeSeries": true } }, "RPM": { "definition": { "dataType": { "type": "DOUBLE" }, "isExternalId": false, "isFinal": false, "isImported": true, "isInherited": false, "isRequiredInEntity": false, "isStoredExternally": false, "isTimeSeries": true } } } } Similarly, the corresponding response is also updated, as shown in the following example: { "propertyValues": [ { "entityPropertyReference": { "entityId": "MyEntity", "componentName": "TelemetryData", "propertyName": "Temperature" }, "values": [ { "time": "2022-08-25T00:00:00Z", "value": { "doubleValue": 588.168 } }, { "time": "2022-08-25T00:00:01Z", "value": { "doubleValue": 592.4224 } }, { "time": "2022-08-25T00:00:02Z", Developing a time-series data connector 105 AWS IoT TwinMaker User Guide "value": { "doubleValue": 594.9383 } } ] }, { "entityPropertyReference": { "entityId": "MyEntity", "componentName": "TelemetryData", "propertyName": "RPM" }, "values": [ { "time": "2022-08-25T00:00:00Z", "value": { "doubleValue": 59 } }, { "time": "2022-08-25T00:00:01Z", "value": { "doubleValue": 60 } }, { "time": "2022-08-25T00:00:02Z", "value": { "doubleValue": 60 } } ] } ], "nextToken": "..." } Note In terms of the pagination for this case, the page size in the request applies to all properties. This means that with five properties in the query and a page size of 100, if Developing a time-series data connector 106 AWS IoT TwinMaker User Guide there are enough data points in the source, you should expect to see 100 data points per property, with 500 data points in total. For an example implementation, see Snowflake connector sample on GitHub. Improving your data connector Handling exceptions It is safe for the Lambda connector to throw exceptions. In the data plane API call, the AWS IoT TwinMaker service waits for the Lambda function to return a response. If the connector |
Improving your data connector

Handling exceptions

It is safe for the Lambda connector to throw exceptions. In the data plane API call, the AWS IoT TwinMaker service waits for the Lambda function to return a response. If the connector implementation throws an exception, AWS IoT TwinMaker translates the exception type to ConnectorFailure, making the API client aware that an issue happened inside the connector.

Handling pagination

In the example, Timestream provides a utility function that helps support pagination natively. However, for some other query interfaces, such as SQL, it might take extra effort to implement an efficient pagination algorithm. There is a Snowflake connector example that handles pagination in a SQL interface.

When the new token is returned to AWS IoT TwinMaker through the connector response interface, the token is encrypted before being returned to the API client. When the token is included in another request, AWS IoT TwinMaker decrypts it before forwarding it to the Lambda connector. We recommend that you avoid adding sensitive information to the token.

Testing your connector

Though you can still update the implementation after you link the connector to the component type, we strongly recommend that you verify the Lambda connector before integrating it with AWS IoT TwinMaker. There are multiple ways to test your Lambda connector: you can test it in the Lambda console or locally with the AWS CDK. For more information on testing your Lambda functions, see Testing Lambda functions and Locally testing AWS CDK applications.

Security

For documentation on security best practices with Timestream, see Security in Timestream. For an example of SQL injection prevention, see the following Python script in the AWS IoT TwinMaker samples GitHub repository.

Creating AWS IoT TwinMaker resources

Once you have implemented the Lambda function, you can create AWS IoT TwinMaker resources such as component types, entities, and components through the AWS IoT TwinMaker console or API.

Note
If you follow the setup instructions in the GitHub sample, all AWS IoT TwinMaker resources are available automatically. You can check the component type definitions in the AWS IoT TwinMaker GitHub sample. Once a component type is used by any components, the property definitions and functions of the component type cannot be updated.

Integration testing

We recommend an integrated test with AWS IoT TwinMaker to verify that the data plane query works end to end. You can perform it through the GetPropertyValueHistory API or easily in the AWS IoT TwinMaker console.

In the AWS IoT TwinMaker console, go to the component details, and then under Test, you'll see all the properties in the component listed. The Test area of the console allows you to test time-series properties as well as non-time-series properties. For time-series properties, you can also use the GetPropertyValueHistory API; for non-time-series properties, use the GetPropertyValue API. If your Lambda connector supports multiple property queries, you can choose more than one property.
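For a scripted integration test, you can page through the data plane API directly. The following Boto3 sketch assumes the entity and component from the earlier steps; the names and time window are illustrative.

import boto3

iottwinmaker = boto3.client("iottwinmaker")

def fetch_history(workspace_id, entity_id, component_name, prop):
    # Page through GetPropertyValueHistory for one time-series property and
    # return all of the values in the window.
    values, token = [], None
    while True:
        kwargs = {
            "workspaceId": workspace_id,
            "entityId": entity_id,
            "componentName": component_name,
            "selectedProperties": [prop],
            "startTime": "2022-08-25T00:00:00Z",
            "endTime": "2022-08-25T01:00:00Z",
            "orderByTime": "ASCENDING",
            "maxResults": 100,
        }
        if token:
            kwargs["nextToken"] = token
        response = iottwinmaker.get_property_value_history(**kwargs)
        for property_value in response["propertyValues"]:
            values.extend(property_value["values"])
        token = response.get("nextToken")
        if not token:
            return values

print(len(fetch_history("MyWorkspace", "MyEntity", "TelemetryData", "Temperature")))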
What's next

You can now set up an AWS IoT TwinMaker Grafana dashboard to visualize metrics. You can also explore other data connector samples in the AWS IoT TwinMaker samples GitHub repository to see if they fit your use case.

AWS IoT TwinMaker cookie factory example time-series connector

The complete code of the cookie factory Lambda function is available on GitHub. Though you can still update the implementation after you link the connector to the component type, we strongly recommend that you verify the Lambda connector before integrating it with AWS IoT TwinMaker. You can test your Lambda function in the Lambda console or locally in the AWS CDK. For more information on testing your Lambda functions, see Testing Lambda functions and Locally testing AWS CDK applications.

Example cookie factory component types

In a component type, we define common properties that are shared across components. For the cookie factory example, physical components of the same type share the same measurements, so we can define the measurements schema in the component type. As an example, the mixer type is defined in the following example.

{
    "componentTypeId": "com.example.cookiefactory.mixer",
    "propertyDefinitions": {
        "RPM": {
            "dataType": { "type": "DOUBLE" },
            "isTimeSeries": true,
            "isRequiredInEntity": false,
            "isExternalId": false,
            "isStoredExternally": true
        },
        "Temperature": {
            "dataType": { "type": "DOUBLE" },
            "isTimeSeries": true,
            "isRequiredInEntity": false,
            "isExternalId": false,
            "isStoredExternally": true
        }
    }
}
For example, a physical component might have measurements in a Timestream database, maintenance records in an SQL database, or alarm data in alarm systems. Creating multiple components and associating them with an entity links different data sources to the entity and populates the entity-component graph. In this context, each component needs a telemetryId property to identify the unique key of the component in the corresponding data source. Specifying the telemetryId property has two benefits: the property can be used in the data connector as a filter condition to only query values of the given component, and, if you include the telemetryId property value in the data plane API response, the client side takes the ID and can perform a reverse lookup if necessary. If you add the telemetryId to the component type as an external ID, it identifies the component in the Timestream table.

{
    "componentTypeId": "com.example.cookiefactory.mixer",
    "propertyDefinitions": {
        "telemetryId": {
            "dataType": { "type": "STRING" },
            "isTimeSeries": false,
            "isRequiredInEntity": true,
            "isExternalId": true,
            "isStoredExternally": false
        },
        "RPM": {
            "dataType": { "type": "DOUBLE" },
            "isTimeSeries": true,
            "isRequiredInEntity": false,
            "isExternalId": false,
            "isStoredExternally": true
        },
        "Temperature": {
            "dataType": { "type": "DOUBLE" },
            "isTimeSeries": true,
            "isRequiredInEntity": false,
            "isExternalId": false,
            "isStoredExternally": true
        }
    }
}

Similarly, we have the component type for the water tank, as shown in the following JSON example.

{
    "componentTypeId": "com.example.cookiefactory.watertank",
    "propertyDefinitions": {
        "flowRate1": {
            "dataType": { "type": "DOUBLE" },
            "isTimeSeries": true,
            "isRequiredInEntity": false,
            "isExternalId": false,
            "isStoredExternally": true
        },
        "flowrate2": {
            "dataType": { "type": "DOUBLE" },
            "isTimeSeries": true,
            "isRequiredInEntity": false,
            "isExternalId": false,
            "isStoredExternally": true
        },
        "tankVolume1": {
            "dataType": { "type": "DOUBLE" },
            "isTimeSeries": true,
            "isRequiredInEntity": false,
            "isExternalId": false,
            "isStoredExternally": true
        },
        "tankVolume2": {
            "dataType": { "type": "DOUBLE" },
            "isTimeSeries": true,
            "isRequiredInEntity": false,
            "isExternalId": false,
            "isStoredExternally": true
        },
        "telemetryId": {
            "dataType": { "type": "STRING" },
            "isTimeSeries": false,
            "isRequiredInEntity": true,
            "isExternalId": true,
            "isStoredExternally": false
        }
    }
}

The TelemetryType is an optional property in the component type if it's aimed at querying property values in the entity scope. For an example, see the defined component types in the AWS IoT TwinMaker samples GitHub repository. There are alarm types also embedded into the same table, so the TelemetryType is defined, and you extract common properties like the TelemetryId and TelemetryType to a parent component type for other child types to share.
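A sketch of that extraction with Boto3 might look like the following: the parent type holds the shared identity properties, and the mixer type inherits them through extendsFrom. The type IDs here are illustrative assumptions.

import boto3

iottwinmaker = boto3.client("iottwinmaker")

# Parent type with the shared identity properties.
iottwinmaker.create_component_type(
    workspaceId="CookieFactory",
    componentTypeId="com.example.cookiefactory.telemetry",
    propertyDefinitions={
        "telemetryId": {"dataType": {"type": "STRING"}, "isRequiredInEntity": True, "isExternalId": True},
        "telemetryType": {"dataType": {"type": "STRING"}, "isRequiredInEntity": True},
    },
)

# Child type inherits telemetryId and telemetryType from the parent.
iottwinmaker.create_component_type(
    workspaceId="CookieFactory",
    componentTypeId="com.example.cookiefactory.mixer",
    extendsFrom=["com.example.cookiefactory.telemetry"],
    propertyDefinitions={
        "RPM": {"dataType": {"type": "DOUBLE"}, "isTimeSeries": True, "isStoredExternally": True},
        "Temperature": {"dataType": {"type": "DOUBLE"}, "isTimeSeries": True, "isStoredExternally": True},
    },
)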
Example Lambda

The Lambda connector needs to access the data source, generate the query statement based on the input, and forward it to the data source. An example request sent to the Lambda is shown in the following JSON example.

{
    'workspaceId': 'CookieFactory',
    'selectedProperties': ['Temperature'],
    'startDateTime': 1648796400,
    'startTime': '2022-04-01T07:00:00.000Z',
    'endDateTime': 1650610799,
    'endTime': '2022-04-22T06:59:59.000Z',
    'properties': {
        'telemetryId': {
            'definition': {
                'dataType': { 'type': 'STRING' },
                'isTimeSeries': False,
                'isRequiredInEntity': True,
                'isExternalId': True,
                'isStoredExternally': False,
                'isImported': False,
                'isFinal': False,
                'isInherited': True
            },
            'value': { 'stringValue': 'Mixer_22_680b5b8e-1afe-4a77-87ab-834fbe5ba01e' }
        },
        'Temperature': {
            'definition': {
                'dataType': { 'type': 'DOUBLE' },
                'isTimeSeries': True,
                'isRequiredInEntity': False,
                'isExternalId': False,
                'isStoredExternally': True,
                'isImported': False,
                'isFinal': False,
                'isInherited': False
            }
        },
        'RPM': {
            'definition': {
                'dataType': { 'type': 'DOUBLE' },
                'isTimeSeries': True,
                'isRequiredInEntity': False,
                'isExternalId': False,
                'isStoredExternally': True,
                'isImported': False,
                'isFinal': False,
                'isInherited': False
            }
        }
    },
    'entityId': 'Mixer_22_d133c9d0-472c-48bb-8f14-54f3890bc0fe',
    'componentName': 'MixerComponent',
    'maxResults': 100,
    'orderByTime': 'ASCENDING'
}

The goal of the Lambda function is to query historical measurement data for a given entity. AWS IoT TwinMaker provides a component-properties map, and you should specify an instantiated value for the component ID.
For example, to handle the component type-level query (which is common for alarm use cases) and return the alarm status of all components in the workspace, the properties map has component type property definitions. For the most straightforward case, as in the preceding request, we want a series of temperature samples during the given time window for the given component, in ascending time order. The query statement can be summarized as the following:

...
SELECT measure_name, time, measure_value::double
FROM {database_name}.{table_name}
WHERE time >= from_iso8601_timestamp('{request.start_time}')
AND time < from_iso8601_timestamp('{request.end_time}')
AND TelemetryId = '{telemetry_id}'
AND measure_name = '{selected_property}'
ORDER BY time {request.orderByTime}
...
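In Python, the statement can be assembled and run with the Timestream query client. This is a hedged sketch: the database and table names are assumed to come from the Lambda environment, identifiers are whitelisted before interpolation (the SQL injection prevention script in the samples repository shows the sample's actual approach), and orderByTime is mapped to SQL ASC/DESC.

import os
import re
import boto3

timestream = boto3.client("timestream-query")

DATABASE_NAME = os.environ.get("TIMESTREAM_DATABASE_NAME", "CookieFactoryTelemetry")  # assumed
TABLE_NAME = os.environ.get("TIMESTREAM_TABLE_NAME", "Telemetry")                     # assumed

def build_query(request):
    # Allow only simple identifiers so request values cannot inject SQL.
    measure = request["selectedProperties"][0]
    if not re.fullmatch(r"[A-Za-z0-9_]+", measure):
        raise ValueError(f"invalid property name: {measure}")
    order = "ASC" if request.get("orderByTime", "ASCENDING") == "ASCENDING" else "DESC"
    telemetry_id = request["properties"]["telemetryId"]["value"]["stringValue"].replace("'", "''")
    return (
        f"SELECT measure_name, time, measure_value::double "
        f"FROM {DATABASE_NAME}.{TABLE_NAME} "
        f"WHERE time >= from_iso8601_timestamp('{request['startTime']}') "
        f"AND time < from_iso8601_timestamp('{request['endTime']}') "
        f"AND TelemetryId = '{telemetry_id}' "
        f"AND measure_name = '{measure}' "
        f"ORDER BY time {order}"
    )

def run_query(request):
    # Follow Timestream's native pagination until the result set is complete.
    rows, kwargs = [], {"QueryString": build_query(request)}
    while True:
        page = timestream.query(**kwargs)
        rows.extend(page["Rows"])
        if "NextToken" not in page:
            return rows
        kwargs["NextToken"] = page["NextToken"]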
Creating and editing AWS IoT TwinMaker scenes

Scenes are three-dimensional visualizations of your digital twin. They're the primary way for you to edit your digital twin. Learn how to add alarms, time series data, color overlays, tags, and visual rules to your scene to align your digital twin visualizations with your real-world use case.

This section covers the following topics:
• Before you create your first scene
• Upload resources to the AWS IoT TwinMaker Resource Library
• Create your scenes
• Add fixed cameras to entities
• Scene enhanced editing
• Edit your scenes
• 3D Tiles model format
• Dynamic scenes

Before you create your first scene

Scenes rely on resources to represent your digital twin. These resources are made up of 3D models, data, or texture files. The size and complexity of your resources, elements in the scene such as lighting, and your computer hardware all impact the performance of AWS IoT TwinMaker scenes. Use the information in this topic to reduce lag and loading times and to improve the frame rate of your scenes.

Optimize your resources before importing them into AWS IoT TwinMaker

You can use AWS IoT TwinMaker to interact with your digital twin in real time. For the best experience with your scenes, we recommend optimizing your resources for use in a real-time environment.

Your 3D models can have a significant impact on performance. Complex model geometry and meshes can reduce performance. For example, industrial CAD models have a high level of detail. We recommend compressing these models' meshes and reducing their polygon count before using them in AWS IoT TwinMaker scenes. If you're creating new 3D models for AWS IoT TwinMaker, you should establish a level of detail and maintain it across all your models. Remove details from models that don't affect the visualization or interpretation of your use case. To compress models and reduce the file size, use open-source mesh compression tools, such as DRACO 3D data compression.

Unoptimized textures can also impact performance. If you don't require any transparency in your textures, consider choosing the JPEG image format over the PNG format. You can compress your texture files by using open-source texture compression tools, such as Basis Universal texture compression.

Best practices for performance in AWS IoT TwinMaker

For the best performance with AWS IoT TwinMaker, note the following limitations and best practices.
• AWS IoT TwinMaker scene rendering performance is hardware dependent. Performance varies across different computer hardware configurations.
• We recommend a total polygon count of under 1 million across all the objects in your AWS IoT TwinMaker scene.
• We recommend a total of 200 objects per scene. Increasing the number of objects in a scene beyond 200 can decrease your scene frame rate.
• We recommend that the total size of all unique 3D assets in your scene not exceed 100 megabytes. Otherwise, you may encounter slow loading times or degraded performance, depending on your browser and hardware.
• Scenes have ambient lighting by default. You can add extra lights into a scene to bring certain objects into focus or cast shadows on objects. We recommend using one light per scene. Use lights where needed, and avoid replicating real-world lights within a scene.

Learn more

Use these resources to learn more about optimization techniques that you can use to improve performance in your scenes.
• How to convert and compress OBJ models to GLTF for use with AWS IoT TwinMaker
• Optimize your 3D models for web content
• Optimizing scenes for better WebGL performance
Upload resources to the AWS IoT TwinMaker Resource Library

You can use the Resource Library to control and manage any resource you want to place into scenes for your digital twin application. To make AWS IoT TwinMaker aware of the resources, upload them using the Resource Library console page.

Upload files to the Resource Library using the console

Follow these steps to add files to the Resource Library using the AWS IoT TwinMaker console.
1. In the left navigation menu, under Workspaces, select Resource Library.
2. Select Add resources and choose the files you want to upload.
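Because Resource Library files live in your workspace's Amazon S3 bucket, you can also upload them with a short script. This sketch assumes a local CookieFactoryMixer.glb file and a placeholder workspace ID, and that GetWorkspace returns the bucket as an S3 ARN.

import boto3

iottwinmaker = boto3.client("iottwinmaker")
s3 = boto3.client("s3")

# GetWorkspace returns the bucket as an ARN such as arn:aws:s3:::my-bucket.
workspace = iottwinmaker.get_workspace(workspaceId="MyWorkspace")
bucket = workspace["s3Location"].split(":::")[-1]

s3.upload_file("CookieFactoryMixer.glb", bucket, "CookieFactoryMixer.glb")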
Create your scenes

In this section, you'll set up a scene so that you can edit your digital twin. You can import a 3D model that was uploaded to the resource library, then add widgets and bind property data to objects to complete your digital twin. Scene objects can include an entire building or space, or individual pieces of equipment positioned in their physical location.

Note
Before you create a scene, you must create a workspace.

Use the following procedure to create your scene in AWS IoT TwinMaker.
1. To open the scene pane, in the left navigation of your workspace, choose Scenes.
2. Choose Create scene. The new scene creation pane opens.
3. In the scene creation pane, enter a name and description for your new scene. If you have a standard or tiered bundle pricing plan, you can select your scene type. We recommend using a dynamic scene.
4. When you're ready to create the scene, choose Create scene. The new scene opens and is ready for you to work with it.
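Scenes can also be created through the API. A hedged Boto3 sketch follows; the scene JSON document referenced by contentLocation must already exist in your workspace bucket, and the names are illustrative.

import boto3

iottwinmaker = boto3.client("iottwinmaker")

iottwinmaker.create_scene(
    workspaceId="MyWorkspace",
    sceneId="FactoryFloor",
    contentLocation="s3://my-workspace-bucket/FactoryFloor.json",  # assumed key
    description="Main factory floor digital twin scene",
)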
Use 3D navigation in your AWS IoT TwinMaker scenes

The AWS IoT TwinMaker scene has a set of navigation controls that you can use to navigate efficiently through your scene's 3D space. To interact with the 3D space and objects represented by your scene, you use the following widgets and menu options.
• Inspector: Use the Inspector window to view and edit properties and settings of a selected entity or component in your hierarchy.
• Scene Canvas: The Scene Canvas is the 3D space where you can position and orient any 3D resources you want to use.
• Scene Graph Hierarchy: You can use this panel to see all of the entities present in your scene. It appears on the left side of the window.
• Object gizmo: Use this gizmo to move objects around the canvas. It appears at the center of a selected 3D object in the Scene Canvas.
• Edit Camera gizmo: Use the Edit Camera gizmo to quickly view the scene view camera's current orientation and modify the viewing angle. You can find this gizmo in the lower-right corner of the scene view.
• Zoom controls: To navigate on the Scene Canvas, right-click and drag in the direction you want to move. To rotate, left-click and drag. To zoom, use the scroll wheel on your mouse, or pinch and move your fingers apart on the trackpad of your laptop.

The scene buttons on the hierarchy pane have the following functions, listed in order of the buttons' layout:
• Undo: Undo your last change in the scene.
• Redo: Redo your last change in the scene.
• Plus (+): Use this button to gain access to the following actions: Add empty node, Add 3D model, Add tag, Add light, and Add model shader.
• Change navigation method: Gain access to the scene camera navigation options, Orbit and Pan.
• Trashcan (delete): Use this button to delete a selected object in your scene.
• Object manipulation tools: Use this button to translate, rotate, and scale the selected object.

Add fixed cameras to entities

You can attach fixed camera views to your entities within your AWS IoT TwinMaker scenes. These cameras provide a fixed perspective on a 3D model, allowing you to quickly and easily shift your perspective in a scene to a targeted entity.
1. Navigate to your scene in the AWS IoT TwinMaker console.
2. In the scene hierarchy menu, select the entity you want to attach the camera to.
3. Press the + button, and from the dropdown options, select Add camera from current view to apply a camera with the current perspective to the entity.
4. In the inspector, you can configure your camera and adjust the following settings:
• A camera Name
• The camera position and rotation
• The camera focal length
• The zoom level
• Near and Far clipping planes
5. To access your camera after you have placed it, select the entity you added the camera to in the hierarchy. Look for the camera name listed under the entity.
6. Once you select the placed camera from your entity, the scene's camera view snaps to the set perspective of the placed camera.

Scene enhanced editing

AWS IoT TwinMaker scenes feature a set of tools for enhanced editing and manipulation of the resources present in your scene. The following topics teach you how to use the enhanced editing features in your AWS IoT TwinMaker scenes.
• Targeted placement of scene objects
• Submodel selection
• Edit entities in the scene hierarchy

Targeted placement of scene objects

AWS IoT TwinMaker allows you to precisely place and add objects into your scene. This enhanced editing feature gives you greater control of where you're placing tags, entities, lights, and models in your scene.
1. Navigate to your scene in the AWS IoT TwinMaker console.
2. Press the + button, and from the dropdown options, select one of the options. This could be a model, a light, a tag, or anything from the + menu. When you move your cursor in the 3D space of your scene, you should see a target around your cursor.
3. Use the target to precisely place elements in your scene.

Submodel selection

AWS IoT TwinMaker lets you select submodels of 3D models in scenes and apply standard properties to them, such as tags, lights, or rules. 3D model file formats contain metadata that can specify sub-areas of the model as submodels within the larger model. For example, a model could be a filtration system; individual parts of the system, like tanks, pipes, or a motor, are marked as submodels of the filtration system's 3D model. Supported 3D file formats in scenes: GLB and GLTF.
1. Navigate to your scene in the AWS IoT TwinMaker console.
2. If you have no models in your scene, make sure to add one by selecting the option from the + menu.
3. Select a model listed in your scene hierarchy; once it is selected, the hierarchy should display any submodels beneath the model.
Note
If you do not see any submodels listed, then it is likely the model was not configured to have any submodels.
4. To toggle the visibility of a submodel, press the eye icon, located to the right of the submodel's name in the hierarchy.
5. To edit submodel data, such as its name or position, use the scene inspector, which opens when a submodel is selected. Use the inspector menu to update or change submodel data.
6. To add tags, lights, rules, or other properties to submodels, press the + button while the submodel is selected in the hierarchy.
Edit entities in the scene hierarchy

AWS IoT TwinMaker scenes let you directly edit properties of entities within the hierarchy table. The following procedure shows you which actions you can perform on an entity through the hierarchy menu.
1. Navigate to your scene in the AWS IoT TwinMaker console.
2. Open the scene hierarchy, and select a sub-element of an entity you wish to manipulate.
3. Once the element is selected, press the + button, and from the dropdown, select one of the options:
• Add empty node
• Add 3D model
• Add light
• Add camera from current view
• Add tag
• Add model shader
• Add motion indicator
4. After you select one of the options from the dropdown, the selection is applied to the scene as a child of the selected element from step 2.
5. You can reorder child elements and reparent elements by selecting a child element and dragging it in the hierarchy to a new parent.
Add annotations to entities

The AWS IoT TwinMaker scene composer lets you annotate any element in your scene hierarchy. The annotation is authored in Markdown. For more information on writing in Markdown, see the official documentation on Markdown syntax, Basic Syntax.

Note
AWS IoT TwinMaker annotations and overlays support Markdown syntax only, not HTML.

Add an annotation to an entity
1. Navigate to your scene in the AWS IoT TwinMaker console.
2. Select an element from the scene hierarchy that you want to annotate. If no element in the hierarchy is selected, then you can add the annotation to the root.
3. Press the plus + button and choose the Add annotation option.
4. In the Inspector window on the left, scroll down to the annotation section. Using Markdown syntax, write the text you want your annotation to display.
5. To bind your AWS IoT TwinMaker scene data to an annotation, choose Add data binding, add the Entity Id, then select the Component Name and Property Name of the entity you wish to surface data from. You can update the binding name to use it as a Markdown variable and surface the data in the annotation.
6. The Binding Name is used to represent the annotation's variable. Enter a Binding Name to surface the latest historical value of an entity's time series in the annotation through AWS IoT TwinMaker's variable syntax: ${variable-name}. As an example, this annotation displays the value of the mixer0alarm with the syntax ${mixer0alarm}.

Add overlays to Tags

You can create overlays for your AWS IoT TwinMaker scenes. Scene overlays are associated with tags and can be used to surface critical data associated with your scene entities. The overlay is authored and rendered in Markdown. For more information on writing in Markdown, see the official documentation on Markdown syntax, Basic Syntax.

Note
By default, an overlay is visible in a scene only when the tag associated with it is selected. You can toggle this in the scene Settings so that all overlays are visible at once.

1. Navigate to your scene in the AWS IoT TwinMaker console.
2. The AWS IoT TwinMaker overlay is associated with a tag in the scene; you can update an existing tag or add a new one. Press the plus + button and choose the Add tag option.
3. In the Inspector panel on the right, select the + (plus symbol) button, then select Add overlay.
4. In Markdown syntax, write the text you want your overlay to display.
5. To bind your AWS IoT TwinMaker scene data to an overlay, select Add data binding. Add the Binding name and Entity Id, then select the Component Name and Property Name of the entity you wish to surface data from.
6. You can surface the latest historical value of an entity's time-series data in the overlay through AWS IoT TwinMaker's variable syntax: ${variable-name}. As an example, this overlay displays the value of the mixer0alarm with the syntax ${mixer0alarm}.
7. To enable overlay visibility, open the Settings tab in the top left, and make sure the toggle for Overlay is switched on so that all overlays are visible at once.

Note
By default, an overlay is visible in a scene only when the tag associated with it is selected.

Edit your scenes

After you've created a scene, you can add entities and components, and configure augmented widgets into your scene. Use entity components and widgets to model your digital twin and provide functionality that matches your use case.

Add models to your scenes

To add models to your scene, use the following procedure.

Note
To add models to your scene, you must first upload the models to the AWS IoT TwinMaker Resource Library. For more information, see Upload resources to the AWS IoT TwinMaker Resource Library.

1. On the scene composer page, choose the plus (+) sign, and then choose Add 3D model.
2. On the Add resource from resource library window, choose the CookieFactorMixer.glb file, and then choose Add. Scene composer opens.
3. Optional: Choose the plus (+) sign, and then choose Add light.
4. Choose each light option to see how they affect the scene.

Note
Scenes have default ambient lighting. To avoid frame rate loss, consider limiting the number of additional lights placed in your scene.

Add model shader augmented UI widgets to your scene

Model shader widgets can change the color of an object under conditions that you define. For example, you can create a color widget that changes the color of a cookie mixer in your scene based on the mixer's temperature data. Use the following procedure to add model shader widgets to a selected object.
1. Select an object in the hierarchy that you want to add a widget to.
2. Press the + button and then choose Model Shader. To add a new visual rule group, first follow the instructions below to create the ColorRule; then, in the Inspector panel for the object, for the Rule ID, choose ColorRule.
3. Select the entityID, ComponentName, and PropertyName you want to bind the model shader to.

Create visual rules for your scenes

You can use visual rule maps to specify the data-driven conditions that change the visual appearance of an augmented UI widget, such as a tag or a model shader. There are sample rules provided, but you can also create your own. The following example shows a visual rule.

The image above shows a rule for when a previously defined data property with the ID 'temperature' is checked against a certain value. For example, if 'temperature' is greater than or equal to 40, the state changes the appearance of the tag to a red circle. The target, when chosen in the Grafana dashboard, populates a detail panel that is configured to use the same data source.

The following procedure shows you how to add a new visual rule group for the mesh colorization augmented UI layer.
1. Under the rules tab in the console, enter a name such as ColorRule in the text field and choose Add New Rule Group.
2. Define a new rule for your use case. For example, you can create one based on the data property 'temperature', where the reported value is less than 20.
Use the following syntax for rule expressions: less than is <, greater than is >, less than or equal is <=, greater than or equal is >=, and equal is ==. (For more information, see the Apache Commons JEXL syntax.)
3. Set the target to a color. To define a color, such as #fcba03, use hex values. (For more information about hex values, see Hexadecimal.)

Creating tags for your scenes

A tag is an annotation added to a
Creating tags for your scenes

A tag is an annotation added to a specific x,y,z coordinate position of a scene. The tag uses an entity property to connect a scene part to the knowledge graph. You can use a tag to configure the behavior or visual appearance of an item in the scene, such as an alarm.

Note
To add functionality to tags, you apply visual rules to them.

Use the following procedure to add tags to your scene.

1. Select an object in the hierarchy, choose the + button, and then choose Add Tag.
2. Name the tag. Then, to apply a visual rule, select a visual group Id.
3. In the dropdown lists, choose the EntityID, ComponentName, and PropertyName.
4. To populate the Data Path field, choose Create DataFrameLabel.

3D Tiles model format

Using 3D Tiles in your scene

If you experience long wait times when you load 3D scenes in AWS IoT TwinMaker, or have poor rendering performance when you navigate a complex 3D model, then you may want to convert your models to 3D Tiles. This section describes the 3D Tiles format and available third-party tools. Read on to decide if 3D Tiles are right for your use case and for help getting started.

Complex model use case

A 3D model in your AWS IoT TwinMaker scene may cause performance issues like slow loading times and lagging navigation if the model is:

• Large: its file size is larger than 100MB.
• Dense: it is made up of hundreds or thousands of distinct meshes.
• Complex: mesh geometry has millions of triangles to form complex shapes.

3D Tiles format

The 3D Tiles format is a solution for streaming model geometry and improving 3D rendering performance. It enables instantaneous loading of 3D models in an AWS IoT TwinMaker scene, and optimizes 3D interactions by loading chunks of a model based on what is visible in the camera view.

The 3D Tiles format was created by Cesium. Cesium has a managed service to convert 3D models to 3D Tiles called Cesium Ion. This is currently the best solution for creating 3D Tiles, and we recommend it for your complex models in the supported formats. You can register with Cesium and choose the appropriate subscription plan based on your business requirements on Cesium's pricing page.

To prepare a 3D Tiles model that you can add to an AWS IoT TwinMaker scene, follow the instructions documented by Cesium Ion:

• Import a model to Cesium Ion

Upload Cesium 3D tiles to AWS

Once your model has been converted to 3D Tiles, download the model files, then upload them to your AWS IoT TwinMaker workspace Amazon S3 bucket:

1. Create and download your 3D Tiles model archive.
2. Unzip the archive into a folder.
3. Upload the entire 3D Tiles folder into the Amazon S3 bucket associated with your AWS IoT TwinMaker workspace. (See Uploading objects in the Amazon S3 User Guide.)
4. If your 3D Tiles model was uploaded successfully, you will see an Amazon S3 folder path in your AWS IoT TwinMaker Resource Library with type Tiles3D.

Note
The AWS IoT TwinMaker Resource Library doesn't support directly uploading 3D Tiles models.

Using 3D Tiles in AWS IoT TwinMaker

AWS IoT TwinMaker is aware of any 3D Tiles model uploaded to your workspace S3 bucket. The model must have a tileset.json and all dependent files (.gltf, .b3dm, .i3dm, .cmpt, .pnts) available in the same Amazon S3 directory. The Amazon S3 directory path will appear in the Resource Library with the type Tiles3D.
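For step 3 of the upload procedure, you can also copy the unzipped folder from the command line instead of the console. The following is a minimal AWS CLI sketch; the local folder name, bucket name, and prefix are placeholders for your own values:

# copy the whole unzipped tileset folder, preserving its structure
aws s3 cp ./my-tileset-folder \
    s3://your-workspace-bucket/my-tileset-folder \
    --recursive

Because the whole folder is copied, tileset.json and its dependent files end up in the same Amazon S3 directory, as required above.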
To add the 3D Tiles model to your scene, follow these steps:

1. On the scene composer page, choose the plus (+) sign, and then choose Add 3D model.
2. On the Add resource from resource library window, choose the path to your 3D Tiles model with the type Tiles3D, and then choose Add.
3. Click on the canvas to place the model in your scene.

3D Tiles differences

3D Tiles does not currently support geometric and semantic metadata, which means that the mesh hierarchy of the original model is not available for the sub-model selection feature. You can still add widgets to your 3D Tiles model, but you cannot use features fine-tuned to sub-models: model shader, separated 3D transformations, or entity binding for a sub-model mesh.
It is recommended to use the 3D Tiles conversion for large assets that serve as context for the background of a scene. If you want a sub-model to be further broken down and annotated, then it should be extracted as a separate glTF/glb asset and added directly to the scene. This can be done with free and common 3D tools like Blender.

Example use case:

• You have a 1GB model of a factory with detailed machine rooms and floors, electrical boxes, and plumbing pipes. The electrical boxes and pipes need to glow red when associated property data cross a threshold.
• You isolate the box and pipe meshes in the model and export them into a separate glTF using Blender.
• You convert the factory without the electrical and plumbing elements into a 3D Tiles model and upload it to S3.
• You add both the 3D Tiles model and the glTF model to an AWS IoT TwinMaker scene at the origin (0,0,0).
• You add model shader components to the electrical box and pipe sub-models of the glTF to make the meshes red based on property rules.

Dynamic scenes

AWS IoT TwinMaker scenes unlock the power of the knowledge graph by storing scene nodes and settings in an entity component. Use the AWS IoT TwinMaker console to create dynamic scenes to more easily manage, build, and render 3D scenes.

Key features:

• All 3D scene node objects, settings, and data bindings are rendered "dynamically" based on knowledge graph queries.
• If you use the read-only Scene Viewer in a Grafana or custom application, you can get updates to your scenes on a 30 second interval.

Static versus dynamic scenes

Static scenes are composed of a scene JSON file stored in S3 that has details of all scene nodes and settings. Any change to the scene must be made to the JSON document and saved to S3. A static scene is the only option if you have a basic pricing plan.

Dynamic scenes are composed of a scene JSON file that has global settings for the scene, while all other scene nodes and node settings are stored as entity components in the knowledge graph. Dynamic scenes are only supported in the standard and tiered bundle pricing plans. (See Switch AWS IoT TwinMaker pricing modes for information on how to upgrade your pricing plan.)

You can convert an existing static scene to a dynamic scene by following these steps:

• Navigate to your scene in the AWS IoT TwinMaker console.
• On the left hand panel, click the Settings tab.
• Expand the Convert scene section at the bottom of the panel.
• Click the Convert scene button, then click Confirm.

Warning
The conversion from a static to a dynamic scene is irreversible.

Scene component types and entities

In order to create scene-specific entity components, the following 1P component types are supported:

• com.amazon.iottwinmaker.3d.component.camera
A component type that stores the settings of a camera widget.
• com.amazon.iottwinmaker.3d.component.dataoverlay
A component type that stores the settings for an overlay of an annotation or tag widget.
• com.amazon.iottwinmaker.3d.component.light
A component type that stores the settings of a light widget.
• com.amazon.iottwinmaker.3d.component.modelref
A component type that stores the settings and S3 location of a 3D model used in a scene.
• com.amazon.iottwinmaker.3d.component.modelshader
A component type that stores the settings of a model shader on a 3D model.
• com.amazon.iottwinmaker.3d.component.motionindicator
A component type that stores the settings of a motion indicator widget.
• com.amazon.iottwinmaker.3d.component.submodelref
A component type that stores the settings of a submodel of a 3D model.
• com.amazon.iottwinmaker.3d.component.tag
A component type that stores the settings of a tag widget.
• com.amazon.iottwinmaker.3d.node
A component type that stores the basic settings of a scene node, like its 3D transform, name, and generic properties.

Dynamic scene concepts

Dynamic scene entities are stored under a global entity labelled $SCENES. Each scene is made up of a root entity and a hierarchy of child entities that match the scene node hierarchy. Each scene node under the root has a com.amazon.iottwinmaker.3d.node component and a component for the type of node (3D model, widget, and so on).

Warning
Do not manually delete any scene entities, or your scene may be left in a broken state. If you want to partially or fully delete a scene, use the scene composer page to add and delete scene nodes, and use the scenes page to select and delete a scene.
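Because dynamic scene nodes are ordinary entities carrying these component types, you can inspect them with the regular AWS IoT TwinMaker APIs. The following is a minimal AWS CLI sketch, assuming a hypothetical workspace ID of exampleWorkspace; it lists every entity that carries the scene node component type listed above. Treat the results as read-only; as the warning above says, do not delete these entities directly.

# list all dynamic scene node entities in the workspace
aws iottwinmaker list-entities \
    --workspace-id exampleWorkspace \
    --filters '[{"componentTypeId": "com.amazon.iottwinmaker.3d.node"}]'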
Create a customized web application using AWS IoT TwinMaker UI Components

AWS IoT TwinMaker provides open-source UI components for AWS IoT application developers. Using those UI components, developers can build customized web applications with AWS IoT TwinMaker features enabled for their digital twins.

AWS IoT TwinMaker UI components are part of the AWS IoT Application Kit, an open-source, client-side library that enables IoT application developers to simplify the development of complex IoT applications.

AWS IoT TwinMaker UI components include:

• AWS IoT TwinMaker source: A data connector component that enables you to retrieve data and interact with your AWS IoT TwinMaker data and digital twins. For more information, see the AWS IoT TwinMaker source documentation.
• Scene viewer: A 3D rendering component built over @react-three/fiber that renders your digital twin and enables you to interact with it. For more information, see the Scene Viewer documentation.
• Video player: A video player component that allows you to stream a video from Kinesis Video Streams through AWS IoT TwinMaker. For more information, see the Video Player documentation.

To learn more about using AWS IoT Application Kit, please visit the AWS IoT Application Kit Github page. For instructions on how to start a new web application using AWS IoT Application Kit, please visit the official IoT App Kit documentation page.

Switch AWS IoT TwinMaker pricing modes

AWS IoT TwinMaker currently has three pricing modes: basic, standard, or tiered bundle. Standard pricing mode is set as the default pricing mode.

You can switch from usage-based to tiered-based pricing mode at any time, but the change takes effect at the beginning of your next billing cycle. Once you have switched from usage-based to tiered-based pricing mode, you cannot switch back to usage-based pricing mode for the next three usage cycles. If you switch from basic to standard, the change is effective immediately. For details and cost information, see AWS IoT TwinMaker Pricing.

This procedure shows you how to switch your pricing mode in the AWS IoT TwinMaker console:

1. Open the AWS IoT TwinMaker console.
2. In the left navigation pane, select Settings. The Pricing page opens.
3. Choose Change price mode.
4. Select either the Standard or Tiered bundle mode.
5. Choose Save to confirm your new pricing mode. You have now changed your pricing mode.

Note
You can switch from usage-based to tiered-based pricing mode at any time, but the change takes effect at the beginning of your next billing cycle. Once you have switched from usage-based to tiered-based pricing mode, you cannot switch back to usage-based pricing mode for the next three usage cycles. If you switch from basic to standard, the change is effective immediately.
AWS IoT TwinMaker knowledge graph

The AWS IoT TwinMaker knowledge graph organizes all the information contained within your AWS IoT TwinMaker workspaces and presents it in a visual graph format. You can run queries against your entities, components, and component types to generate visual graphs that show you the relationships between your AWS IoT TwinMaker resources.

The following topics show you how to use and integrate the knowledge graph.

Topics
• AWS IoT TwinMaker knowledge graph core concepts
• How to Run AWS IoT TwinMaker knowledge graph queries
• Knowledge graph scene integration
• How to use AWS IoT TwinMaker knowledge graph with Grafana
• AWS IoT TwinMaker knowledge graph additional resources

AWS IoT TwinMaker knowledge graph core concepts

This topic covers the key concepts and vocabulary of the knowledge graph feature.

How knowledge graph works: Knowledge graph creates relationships between entities and their components with the existing CreateEntity or UpdateEntity APIs. A relationship is just a property of the special data type RELATIONSHIP that is defined on a component of an entity. AWS IoT TwinMaker knowledge graph calls the ExecuteQuery API to make a query based on any data in the entities or the relationships between them. Knowledge graph uses the flexible PartiQL query language (used by many AWS services), which has newly added graph match syntax support to help you write your queries.
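As an illustration of how a relationship is just a component property, the following is a minimal AWS CLI sketch of creating an entity with a RELATIONSHIP property. All names here are hypothetical placeholders: it assumes a workspace named exampleWorkspace, an existing target entity vav_0, and a component type example.connection that already defines a property named feed with the RELATIONSHIP data type:

# create an entity whose "feed" property points at another entity
aws iottwinmaker create-entity \
    --workspace-id exampleWorkspace \
    --entity-name ahu_1 \
    --components '{
        "connections": {
            "componentTypeId": "example.connection",
            "properties": {
                "feed": {
                    "value": {
                        "relationshipValue": {"targetEntityId": "vav_0"}
                    }
                }
            }
        }
    }'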
After the query calls are made, you can view the results as a table or visualize them as a graph of connected nodes and edges.

Knowledge graph key terms:

• Entity graph: A collection of nodes and edges within a workspace.
• Node: Every entity in your workspace becomes a node in the entity graph.
• Edge: Every relationship property defined on a component of an entity becomes an edge in the entity graph. In addition, a hierarchical parent-child relationship defined using the parentEntityId field of an entity also becomes an edge in the entity graph with an "isChildOf" relationship name. All edges are directional edges.
• Relationship: An AWS IoT TwinMaker relationship is a special type of property of an entity's component. You can use the AWS IoT TwinMaker CreateEntity or UpdateEntity API to define and edit a relationship. In AWS IoT TwinMaker, a relationship must be defined in a component of an entity. A relationship cannot be defined as an isolated resource, and it must be directional from one entity to another.

How to Run AWS IoT TwinMaker knowledge graph queries

Before you use the AWS IoT TwinMaker knowledge graph, make sure you have completed the following prerequisites:

• Create an AWS IoT TwinMaker workspace. You can create a workspace in the AWS IoT TwinMaker console.
• Become familiar with AWS IoT TwinMaker's entity-component system and how to create entities. For more information, see Create your first entity.
• Become familiar with AWS IoT TwinMaker's data connectors. For more information, see AWS IoT TwinMaker data connectors.

Note
In order to use the AWS IoT TwinMaker knowledge graph, you need to be in either the standard or tiered bundle pricing mode. For more information, see Switch AWS IoT TwinMaker pricing modes.

The following procedures show you how to write, run, save, and edit queries.

Open the query editor

To navigate to the knowledge graph query editor:

1. Open the AWS IoT TwinMaker console.
2. Open the workspace in which you wish to use knowledge graph.
3. In the left navigation menu, choose Query editor.
4. The query editor opens. You are now ready to run queries on your workspace's resources.

Run a query

To run a query and generate a graph:

1. In the query editor, choose the Editor tab to open the syntax editor.
2. In the editor space, write the query you wish to run against your workspace's resources. The following example searches for entities whose names match vav_%, then organizes these entities by the feed relationship between them:

SELECT ahu, vav, r FROM EntityGraph MATCH (vav)<-[r:feed]-(ahu) WHERE vav.entityName LIKE 'vav_%'
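You can also run the same statement outside the console with the ExecuteQuery API. The following is a minimal AWS CLI sketch, assuming a hypothetical workspace named exampleWorkspace:

aws iottwinmaker execute-query \
    --workspace-id exampleWorkspace \
    --query-statement "SELECT ahu, vav, r FROM EntityGraph MATCH (vav)<-[r:feed]-(ahu) WHERE vav.entityName LIKE 'vav_%'"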
Note
The knowledge graph syntax uses PartiQL. For information on this syntax, see AWS IoT TwinMaker knowledge graph additional resources.

3. Choose Run query to run the request you created. A graph is generated based on your request.
4. To view the query results as a list, choose Results.
5. Optionally, choose Export as to export the query results in JSON or CSV format.

This covers the basic use of knowledge graph in the console. For more information and examples demonstrating the knowledge graph syntax, see AWS IoT TwinMaker knowledge graph additional resources.
Knowledge graph scene integration

You can use AWS IoT app kit components to build a web application that integrates knowledge graph into your AWS IoT TwinMaker scenes. This allows you to generate graphs based on the 3D nodes (the 3D models which represent your equipment or systems) that are present within your scene.

To create an application that graphs 3D nodes from your scene, first bind the 3D nodes to entities in your workspace. With this mapping, AWS IoT TwinMaker graphs the relationships between the 3D models present in your scene and the entities in your workspace. Then you can create a web application, select 3D models within your scene, and explore their relationships to other entities in a graph format.

For an example of a working web application that utilizes the AWS IoT app kit components to generate graphs in an AWS IoT TwinMaker scene, see the AWS IoT TwinMaker sample react app on GitHub.

AWS IoT TwinMaker scene graph prerequisites

Before you create a web app that uses AWS IoT TwinMaker knowledge graph in your scenes, complete the following prerequisites:

• Create an AWS IoT TwinMaker workspace. You can create a workspace in the AWS IoT TwinMaker console.
• Become familiar with AWS IoT TwinMaker's entity-component system and how to create entities. For more information, see Create your first entity.
• Create an AWS IoT TwinMaker scene populated with 3D models.
• Become familiar with AWS IoT TwinMaker's AWS IoT app kit components. For more information on the AWS IoT TwinMaker components, see Create a customized web application using AWS IoT TwinMaker UI Components.
• Become familiar with knowledge graph concepts and key terminology. See AWS IoT TwinMaker knowledge graph core concepts.

Note
To use the AWS IoT TwinMaker knowledge graph and any related features, you need to be in either the standard or tiered bundle pricing mode. For more information on AWS IoT TwinMaker pricing, see Switch AWS IoT TwinMaker pricing modes.

Bind 3D nodes in your scene

Before you create a web app that integrates knowledge graph with your scene, bind the 3D models, referred to as 3D nodes, that are present in your scene to the associated workspace entities. For example, if you have a model of mixer equipment in a scene and a corresponding entity called mixer_0, create a data binding between the model of the mixer and the entity representing the mixer, so that the model and entity can be graphed.

To perform a data binding action:

1. Log in to the AWS IoT TwinMaker console.
2. Open your workspace and select a scene with the 3D nodes you wish to bind.
3. Select a node (3D model) in the scene composer. When you select a node, an inspector panel opens on the right side of the screen.
4. In the inspector panel, navigate to the top of the panel and select the + button. Then choose the Add entity binding option. This opens a dropdown where you can select an entity to bind to your currently selected node.
5. From the data binding dropdown menu, select the entity id you want to map to the 3D model. For the Component name and Property name fields, select the components and properties you want to bind. Once you have made selections for the Entity Id, Component name, and Property name fields, the binding is complete.
6. Repeat this process for all models and entities you want to graph.

Note
The same data binding operation can be performed on your scene tags; simply select a tag instead of an entity and follow the same process to bind the tag to a node.

Create a web application

After you bind your entities, use the AWS IoT app kit library to build a web app with a knowledge graph widget that lets you view your scene and explore the relationships between your scene nodes and entities.

Use the following resources to create your own app:

• The AWS IoT TwinMaker sample react app GitHub Readme documentation.
• The AWS IoT TwinMaker sample react app source on GitHub.
• The AWS IoT app kit Getting started documentation.
• The AWS IoT app kit Video Player component documentation.
• The AWS IoT app kit Scene Viewer component documentation.

The following procedure demonstrates the functionality of the scene viewer component in a web app.

Note
This procedure is based on the implementation of the AWS IoT app kit scene viewer component in the AWS IoT TwinMaker sample react app.
1. Open the scene viewer component of the AWS IoT TwinMaker sample react app. In the search field, type an entity name or partial entity name (the search is case sensitive), then select the Search button. If a model is bound to the entity id, the model in the scene is highlighted and a node for the entity is shown in the scene viewer panel.
2. To generate a graph of all relationships, select a node in the scene viewer widget and select the Explore button.
3. Press the Clear button to clear your current graph selection and start over.

How to use AWS IoT TwinMaker knowledge graph with Grafana

This section shows you how to add a query editor panel to your AWS IoT TwinMaker Grafana dashboard to run and display queries.

AWS IoT TwinMaker query editor prerequisites

Before you use the AWS IoT TwinMaker knowledge graph in Grafana, complete the following prerequisites:

• Create an AWS IoT TwinMaker workspace. You can create a workspace in the AWS IoT TwinMaker console.
• Configure AWS IoT TwinMaker for use with Grafana. For instructions, see AWS IoT TwinMaker Grafana dashboard integration.

Note
To use the AWS IoT TwinMaker knowledge graph, you need to be in either the standard or tiered bundle pricing mode. For more information, see Switch AWS IoT TwinMaker pricing modes.

AWS IoT TwinMaker query editor permissions

To use the AWS IoT TwinMaker query editor in Grafana, you must have an IAM role with permission for the action iottwinmaker:ExecuteQuery. Add that permission to your workspace dashboard role, as shown in this example:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "{s3Arn}",
                "{s3Arn}/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "iottwinmaker:Get*",
                "iottwinmaker:List*",
                "iottwinmaker:ExecuteQuery"
            ],
            "Resource": [
                "{workspaceArn}",
                "{workspaceArn}/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "iottwinmaker:ListWorkspaces",
            "Resource": "*"
        }
    ]
}

Note
When you configure your AWS IoT TwinMaker Grafana data source, make sure to use the role with this permission for the Assume role ARN field. After you add it, you can select your workspace from the dropdown next to Workspace. For more information, see Creating a dashboard IAM role.

Set up the AWS IoT TwinMaker query editor panel

To set up a new Grafana dashboard panel for knowledge graph:

1. Open your AWS IoT TwinMaker Grafana dashboard.
2. Create a new dashboard panel. For detailed steps on how to create a panel, see Create a dashboard in the Grafana documentation.
3. From the list of visualizations, select AWS IoT TwinMaker Query Editor.
4. Select the data source to run queries against.
5. (Optional) Add a name for the new panel in the provided field.
6. Select Apply to save and confirm your new panel.

The knowledge graph panel works in a similar way to the query editor provided in the AWS IoT TwinMaker console. You can write, run, and clear queries in the panel. For more information on how to write queries, see AWS IoT TwinMaker knowledge graph additional resources.

How to use the AWS IoT TwinMaker query editor

The results of your queries are displayed in three ways: visualized in a graph, listed in a table, or presented as a run summary.

• Graph visualization: The visual graph only displays data for queries that have at least one relation in the result. The graph displays entities as nodes and relationships as directed edges in the graph.
• Tabular data: The tabular data format displays the data for all queries. You can search the table for specific results or subsets of the results. The data can be exported in JSON or CSV format.
• Run summary: The run summary displays the query and metadata about the status of the query.
AWS IoT TwinMaker knowledge graph additional resources

This section provides basic examples of the PartiQL syntax used to write queries in the knowledge graph, as well as links to PartiQL documentation that provides information on the knowledge graph data model.

• PartiQL graph data model documentation
• PartiQL graph query documentation

This set of examples shows basic queries with their responses. Use this as a reference to write your own queries.

Basic queries

• Get all entities with a filter

SELECT entity FROM EntityGraph MATCH (entity) WHERE entity.entityName = 'room_0'

This query returns all the entities in a workspace with the name room_0.

FROM clause: EntityGraph is the graph collection that contains all the entities and their relationships in a workspace. This collection is automatically created and managed by AWS IoT TwinMaker based on the entities in your workspace.

MATCH clause: specifies a pattern that matches a portion of the graph. In this case, the pattern (entity) matches every node in the graph and is bound to the entity variable. The FROM clause must be followed by the MATCH clause.

WHERE clause: specifies a filter on the entityName field of the node, where the value must match room_0.

SELECT clause: specifies the entity variable so the whole entity node is returned.

Response:

{
    "columnDescriptions": [
        {
            "name": "entity",
            "type": "NODE"
        }
    ],
    "rows": [
        {
            "rowData": [
                {
                    "arn": "arn:aws:iottwinmaker:us-east-1:577476956029:workspace/SmartBuilding8292022/entity/room_18f3ef90-7197-53d1-abab-db9c9ad02781",
                    "creationDate": 1661811123914,
                    "entityId": "room_18f3ef90-7197-53d1-abab-db9c9ad02781",
                    "entityName": "room_0",
                    "lastUpdateDate": 1661811125072,
                    "workspaceId": "SmartBuilding8292022",
                    "description": "",
                    "components": [
                        {
                            "componentName": "RoomComponent",
                            "componentTypeId": "com.example.query.construction.room",
                            "properties": [
                                {
                                    "propertyName": "roomFunction",
                                    "propertyValue": "meeting"
                                },
                                {
                                    "propertyName": "roomNumber",
                                    "propertyValue": 0
                                }
                            ]
                        }
                    ]
                }
            ]
        }
    ]
}

The columnDescriptions field returns metadata about the column, such as the name and type. The type returned here is NODE, which indicates that a whole node has been returned. Other values for the type can be EDGE, which would indicate a relationship, or VALUE, which would indicate a scalar value such as an integer or string.

The rows field returns a list of rows. As only one entity was matched, one rowData is returned, which contains all the fields in an entity.

Note
Unlike SQL, where you can only return scalar values, you can return an object (as JSON) using PartiQL.
Each node contains all the entity-level fields such as entityId, arn, and components; component-level fields such as componentName, componentTypeId, and properties; and property-level fields such as propertyName and propertyValue, all as nested JSON.

• Get all relationships with a filter:

SELECT relationship FROM EntityGraph MATCH (e1)-[relationship]->(e2) WHERE relationship.relationshipName = 'isLocationOf'

This query returns all the relationships in a workspace with the relationship name isLocationOf.

MATCH clause: specifies a pattern that matches two nodes (indicated by ()) that are connected by a directed edge (indicated by -[]->) and bound to a variable called relationship.

WHERE clause: specifies a filter on the relationshipName field of the edge, where the value is isLocationOf.

SELECT clause: specifies the relationship variable so the whole edge is returned.

Response:

{
    "columnDescriptions": [{
        "name": "relationship",
        "type": "EDGE"
    }],
    "rows": [{
        "rowData": [{
            "relationshipName": "isLocationOf",
            "sourceEntityId": "floor_83faea7a-ea3b-56b7-8e22-562f0cf90c5a",
            "targetEntityId": "building_4ec7f9e9-e67e-543f-9d1b-235df7e3f6a8",
            "sourceComponentName": "FloorComponent",
            "sourceComponentTypeId": "com.example.query.construction.floor"
        }]
    },
    ... //rest of the rows are omitted
    ]
}

The type of the column in columnDescriptions is EDGE. Each rowData represents an edge with fields like relationshipName, which is the same as the relationship property name defined on the entity. The sourceEntityId, sourceComponentName, and sourceComponentTypeId fields give information about which entity and component the relationship property was defined on. The targetEntityId specifies which entity this relationship is pointing towards.

• Get all entities with a specific relationship to a specific entity

SELECT e2.entityName FROM EntityGraph MATCH (e1)-[r]->(e2) WHERE r.relationshipName = 'isLocationOf' AND e1.entityName = 'room_0'

This query returns the entity names of all entities that have an isLocationOf relationship with the room_0 entity.

MATCH clause: specifies a pattern that matches any two nodes (e1, e2) that have a directed edge (r).
WHERE clause: specifies a filter on the relationship name and the source entity's name.

SELECT clause: returns the entityName field of the e2 node.

Response:

{
    "columnDescriptions": [
        {
            "name": "entityName",
            "type": "VALUE"
        }
    ],
    "rows": [
        {
            "rowData": [
                "floor_0"
            ]
        }
    ]
}

In the columnDescriptions, the type of the column is VALUE since entityName is a string. One entity, floor_0, is returned.

MATCH

The following patterns are supported in a MATCH clause:

• Match node 'b' pointing to node 'a':

FROM EntityGraph MATCH (a)<-[rel]-(b)

• Match node 'a' pointing to node 'b':

FROM EntityGraph MATCH (a)-[]->(b)

There is no variable bound to the relationship here, assuming a filter doesn't need to be specified on the relationship.

• Match node 'a' pointing to node 'b' and node 'b' pointing to node 'a':

FROM EntityGraph MATCH (a)-[rel]-(b)

This will return two matches: one from 'a' to 'b' and another from 'b' to 'a', so the recommendation is to use directed edges wherever possible.

• The relationship name is also a label of the property graph EntityGraph, so you can simply specify the relationship name following a colon (:) instead of specifying a filter on rel.relationshipName in the WHERE clause:

FROM EntityGraph MATCH (a)-[:isLocationOf]-(b)

• Chaining: patterns can be chained to match on multiple relationships.

FROM EntityGraph MATCH (a)-[rel1]->(b)-[rel2]-(c)

• Variable hop patterns can span multiple nodes and edges as well:

FROM EntityGraph MATCH (a)-[]->{1,5}(b)

This query matches any pattern with outgoing edges from node 'a' within 1 to 5 hops. The allowed quantifiers are:

{m,n} - between m and n repetitions
{m,} - m or more repetitions

FROM:

An entity node can contain nested data, such as components, which themselves contain further nested data, such as properties. These can be accessed by unnesting the result of the MATCH pattern.

SELECT e FROM EntityGraph MATCH (e), e.components AS c, c.properties AS p WHERE c.componentTypeId = 'com.example.query.construction.room' AND p.propertyName = 'roomFunction' AND p.propertyValue = 'meeting'

Access nested fields by dotting . into a variable. A comma (,) is used to unnest (or join) entities with the components inside, and then the properties inside those components. AS is used to bind a variable to the unnested variables so that they can be used in the WHERE or SELECT clauses.

This query returns all entities that contain a property named roomFunction with value meeting in a component with component type id com.example.query.construction.room.

To access multiple nested fields of a field, such as multiple components in an entity, use the comma notation to do a join.
SELECT e FROM EntityGraph MATCH (e), e.components AS c1, e.components AS c2

SELECT:

• Return a node:

SELECT e FROM EntityGraph MATCH (e)

• Return an edge:

SELECT r FROM EntityGraph MATCH (e1)-[r]->(e2)

• Return a scalar value:

SELECT floor.entityName, room.description, p.propertyValue AS roomfunction FROM EntityGraph MATCH (floor)-[:isLocationOf]-(room), room.components AS c, c.properties AS p

Format the name of an output field by aliasing it using AS. Here, instead of propertyValue as the column name in the response, roomfunction is returned.

• Return aliases:

SELECT floor.entityName AS floorName, luminaire.entityName AS luminaireName FROM EntityGraph MATCH (floor)-[:isLocationOf]-(room)-[:hasPart]-(lightingZone)-[:feed]-(luminaire) WHERE floor.entityName = 'floor_0' AND luminaire.entityName LIKE 'lumin%'

Using aliases is highly recommended to be explicit, increase readability, and avoid any ambiguities in your queries.

WHERE:

• The supported logical operators are AND, NOT, and OR.
• The supported comparison operators are <, <=, >, >=, =, and !=.
• Use the IN keyword if you want to specify multiple OR conditions on the same field.
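For example, the following clause is a minimal sketch of the IN keyword; room_0 and room_1 are hypothetical entity names:

WHERE e.entityName IN ('room_0', 'room_1')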
• Filter on an entity, component, or property field:

FROM EntityGraph MATCH (e), e.components AS c, c.properties AS p WHERE e.entityName = 'room_0' AND c.componentTypeId = 'com.example.query.construction.room' AND p.propertyName = 'roomFunction' AND NOT p.propertyValue = 'meeting' OR p.propertyValue = 'office'

• Filter on the configuration property. Here, unit is the key in the configuration map and Celsius is the value:

WHERE p.definition.configuration.unit = 'Celsius'

• Check if a map property contains a given key and value:

WHERE p.propertyValue.length = 20.0

• Check if a map property contains a given key:

WHERE NOT p.propertyValue.length IS MISSING

• Check if a list property contains a given value:

WHERE 10.0 IN p.propertyValue

• Use the lower() function for case insensitive comparisons. By default, all comparisons are case sensitive.

WHERE lower(p.propertyValue) = 'meeting'

LIKE:

Useful if you do not know the exact value for a field; you can perform full text search on the specified field. % represents zero or more characters.

WHERE e.entityName LIKE '%room%'

• Infix search: %room%
• Prefix search: room%
• Suffix search: %room
• If you have '%' in your values, then put an escape character in the LIKE expression and specify the escape character with ESCAPE:

WHERE e.entityName LIKE 'room\%' ESCAPE '\'

DISTINCT:

SELECT DISTINCT c.componentTypeId FROM EntityGraph MATCH (e), e.components AS c

• The DISTINCT keyword eliminates duplicates from the final result.
• DISTINCT is not supported on complex data types.

COUNT:

SELECT COUNT(e), COUNT(c.componentTypeId) FROM EntityGraph MATCH (e), e.components AS c

• The COUNT keyword computes the number of items in a query result.
• COUNT is not supported on nested complex fields and graph pattern fields.
• COUNT aggregation is not supported with DISTINCT and nested queries. For example, COUNT(DISTINCT e.entityId) is not supported.

PATH:

The following pattern projections are supported when querying using path projection:

• Variable hop queries

SELECT p FROM EntityGraph MATCH p = (a)-[]->{1, 3}(b)

This query matches and projects node metadata of any patterns with outgoing edges from node a within 1 to 3 hops.

• Fixed hop queries

SELECT p FROM EntityGraph MATCH p = (a)-[]->(b)<-[]-(c)

This query matches and projects metadata of the entities and incoming edges to b.

• Undirected queries

SELECT p FROM EntityGraph MATCH p = (a)-[]-(b)-[]-(c)

This query matches and projects the metadata of nodes in 1-hop patterns connecting a and c via b.

{
    "columnDescriptions": [
        {
            "name": "path",
            "type": "PATH"
        }
    ],
    "rows": [
        {
            "rowData": [
                {
                    "path": [
                        {
                            "entityId": "a",
                            "entityName": "a"
                        },
                        {
                            "relationshipName": "a-to-b-relation",
                            "sourceEntityId": "a",
                            "targetEntityId": "b"
                        },
                        {
                            "entityId": "b",
                            "entityName": "b"
                        }
                    ]
                }
            ]
        },
        {
            "rowData": [
                {
                    "path": [
                        {
                            "entityId": "b",
                            "entityName": "b"
                        },
                        {
                            "relationshipName": "b-to-c-relation",
                            "sourceEntityId": "b",
                            "targetEntityId": "c"
                        },
                        {
                            "entityId": "c",
                            "entityName": "c"
                        }
                    ]
                }
            ]
        }
    ]
}

This PATH query response comprises only the metadata that identifies all the nodes and edges of each path/pattern between a and c via b.
LIMIT and OFFSET:

SELECT e.entityName FROM EntityGraph MATCH (e) WHERE e.entityName LIKE 'room_%' LIMIT 10 OFFSET 5

LIMIT specifies the number of results to be returned in the query, and OFFSET specifies the number of results to skip.

LIMIT and maxResults:

The following example shows a query that returns 500 results in total, but only displays 50 at a time per API call. This pattern can be used where you need to limit the number of displayed results, for example if you are only able to display 50 results in a UI.

aws iottwinmaker execute-query \
    --workspace-id exampleWorkspace \
    --query-statement "SELECT e FROM EntityGraph MATCH (e) LIMIT 500" \
    --max-results 50

• The LIMIT keyword affects the query and limits the resulting rows. If you need to control the number of results returned per API call without limiting the total number of returned results, use max-results.
• max-results is an optional parameter for the ExecuteQuery API action. max-results only applies to the API and how results are read within the bounds of the above query. Using max-results in a query allows you to reduce the number of displayed results without limiting the actual number of returned results.

The query below iterates through the next page of results. This query uses the ExecuteQuery API call to return rows 51-100, where the next page of results is specified by next-token; in this case the token is "H7kyGmvK376L".

aws iottwinmaker execute-query \
    --workspace-id exampleWorkspace \
    --query-statement "SELECT e FROM EntityGraph MATCH (e) LIMIT 500" \
    --max-results 50 \
    --next-token "H7kyGmvK376L"
• The next-token string specifies the next page of results. For more information, see the ExecuteQuery API action.

AWS IoT TwinMaker knowledge graph query has the following limits:

• Query execution timeout: 10 seconds (not adjustable)
• Maximum number of hops: 10 (adjustable)
• Maximum number of self JOINs: 20 (adjustable)
• Maximum number of projected fields: 20 (adjustable)
• Maximum number of conditional expressions (AND, OR, NOT): 10 (adjustable)
• Maximum length of a LIKE expression pattern (including wildcards and escapes): 20 (adjustable)
• Maximum number of items that can be specified in an IN clause: 10 (adjustable)
• Maximum value for OFFSET: 3000 (adjustable)
• Maximum value for LIMIT: 3000 (adjustable)
• Maximum value for traversals (OFFSET + LIMIT): 3000 (adjustable)

Asset synchronization with AWS IoT SiteWise

AWS IoT TwinMaker supports asset synchronization (asset sync) for your AWS IoT SiteWise assets and asset models. Using the AWS IoT SiteWise component type, asset sync takes existing AWS IoT SiteWise assets and asset models and converts these resources into AWS IoT TwinMaker entities, components, and component types.

The following sections walk you through how to configure asset sync and which AWS IoT SiteWise assets and asset models can be synced to your AWS IoT TwinMaker workspace.

Topics
• Using asset sync with AWS IoT SiteWise
• Differences between custom and default workspaces
• Resources synced from AWS IoT SiteWise
• Analyze sync status and errors
• Delete a sync job
• Asset sync limits

Using asset sync with AWS IoT SiteWise

This topic shows you how to turn on and configure AWS IoT SiteWise asset sync. Follow the appropriate procedures based on which type of workspace you're using.

Important
See the section called "Differences between custom and default workspaces" for information about the differences between the custom and default workspaces.

Topics
• Using a custom workspace
• Using the IoTSiteWiseDefaultWorkspace

Using a custom workspace

Review these prerequisites before turning on asset sync.

Prerequisites

Before using AWS IoT SiteWise asset sync, make sure the following are completed:

• You have an AWS IoT TwinMaker workspace.
• You have assets and asset models in AWS IoT SiteWise. For more information, see Creating asset models.
• An existing IAM role with read permissions for the following AWS IoT SiteWise actions:
  • ListAssets
  • ListAssetModels
  • DescribeAsset
  • DescribeAssetModel
• The IAM role must have the following write permissions for AWS IoT TwinMaker:
  • CreateEntity
  • UpdateEntity
  • DeleteEntity
  • CreateComponentType
  • UpdateComponentType
  • DeleteComponentType
  • ListEntities
  • GetEntity
  • ListComponentTypes

Use the following IAM role as a template for the required role:

// trust relationships
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "iottwinmaker.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

// permissions - replace ACCOUNT_ID, REGION, WORKSPACE_ID with actual values
{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "SiteWiseAssetReadAccess",
        "Effect": "Allow",
        "Action": [
            "iotsitewise:DescribeAsset"
        ],
        "Resource": [
            "arn:aws:iotsitewise:REGION:ACCOUNT_ID:asset/*"
        ]
    },
    {
        "Sid": "SiteWiseAssetModelReadAccess",
        "Effect": "Allow",
        "Action": [
            "iotsitewise:DescribeAssetModel"
        ],
        "Resource": [
            "arn:aws:iotsitewise:REGION:ACCOUNT_ID:asset-model/*"
        ]
    },
    {
        "Sid": "SiteWiseAssetModelAndAssetListAccess",
        "Effect": "Allow",
        "Action": [
            "iotsitewise:ListAssets",
            "iotsitewise:ListAssetModels"
        ],
        "Resource": [
            "*"
        ]
    },
    {
        "Sid": "TwinMakerAccess",
        "Effect": "Allow",
        "Action": [
            "iottwinmaker:GetEntity",
            "iottwinmaker:CreateEntity",
            "iottwinmaker:UpdateEntity",
            "iottwinmaker:DeleteEntity",
            "iottwinmaker:ListEntities",
            "iottwinmaker:GetComponentType",
            "iottwinmaker:CreateComponentType",
            "iottwinmaker:UpdateComponentType",
            "iottwinmaker:DeleteComponentType",
            "iottwinmaker:ListComponentTypes"
        ],
        "Resource": [
            "arn:aws:iottwinmaker:REGION:ACCOUNT_ID:workspace/WORKSPACE_ID",
            "arn:aws:iottwinmaker:REGION:ACCOUNT_ID:workspace/WORKSPACE_ID/*"
        ]
    }]
}
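If you prefer the CLI, the following is a minimal sketch of creating such a role. It assumes the two JSON documents above are saved locally as trust.json and permissions.json (remove the // comment lines first; IAM requires plain JSON), and SyncRole and SyncPermissions are placeholder names:

# create the role with the trust policy, then attach the inline permissions
aws iam create-role \
    --role-name SyncRole \
    --assume-role-policy-document file://trust.json

aws iam put-role-policy \
    --role-name SyncRole \
    --policy-name SyncPermissions \
    --policy-document file://permissions.json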
Use the following procedure to turn on and configure AWS IoT SiteWise asset sync.

1. In the AWS IoT TwinMaker console, navigate to the Settings page.
2. Open the Model sources tab.
3. Choose Connect workspace to link your AWS IoT TwinMaker workspace to your AWS IoT SiteWise assets.

Note
You can only use asset sync with a single AWS IoT TwinMaker workspace. If you wish to sync in a different workspace, you must disconnect the sync from one workspace and connect it to the other.

4. Next, navigate to the workspace in which you want to use asset sync.
5. Choose Add sources. This opens the Add entity model source page.
6. On the Add entity model source page, confirm that the source field displays AWS IoT SiteWise. For the IAM role, select the IAM role you created as a prerequisite.
7. You have now turned on AWS IoT SiteWise asset sync. A confirmation banner appears at the top of the selected Workspace page, confirming that asset sync is active. You should also now see a sync source listed in the Entity model sources section.

Using the IoTSiteWiseDefaultWorkspace

When you opt in to the AWS IoT SiteWise and AWS IoT TwinMaker integration, a default workspace named IoTSiteWiseDefaultWorkspace is created and automatically synced with AWS IoT SiteWise.

You can also use the AWS IoT TwinMaker CreateWorkspace API to create a workspace named IoTSiteWiseDefaultWorkspace.

Prerequisites

Before creating IoTSiteWiseDefaultWorkspace, make sure you have done the following:

• Create an AWS IoT TwinMaker service-linked role. See Using service-linked roles for AWS IoT TwinMaker for more information.
• Open the IAM console at https://console.aws.amazon.com/iam/. Review the role or user and verify that it has permission for iotsitewise:EnableSiteWiseIntegration. If needed, add the permission to the role or user:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iotsitewise:EnableSiteWiseIntegration",
            "Resource": "*"
        }
    ]
}

Differences between custom and default workspaces

Important
New AWS IoT SiteWise features, such as CompositionModel, are only available in IoTSiteWiseDefaultWorkspace. We encourage you to use a default workspace instead of a custom workspace.

When using the IoTSiteWiseDefaultWorkspace, there are a few notable differences from using a custom workspace with asset sync:

• When you create a default workspace, the Amazon S3 location and IAM role are optional.

Note
You can use UpdateWorkspace to provide the Amazon S3 location and IAM role.

• The IoTSiteWiseDefaultWorkspace doesn't have a resource count limit to sync AWS IoT SiteWise resources to AWS IoT TwinMaker.
• When you sync resources from AWS IoT SiteWise, their SyncSource will be SITEWISE_MANAGED. This includes Entities and ComponentTypes.
• New AWS IoT SiteWise features, such as CompositionModel, are only available in the IoTSiteWiseDefaultWorkspace.

There are a few limitations specific to IoTSiteWiseDefaultWorkspace:

• The default workspace can't be deleted.
• To delete resources, you must delete the AWS IoT SiteWise resources first; the corresponding resources in AWS IoT TwinMaker are then deleted.
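For reference, the following is a minimal sketch of creating the default workspace from the CLI. It assumes the service-linked role and the EnableSiteWiseIntegration permission above are in place; because the Amazon S3 location and IAM role are optional for this special workspace name, both arguments are omitted here:

aws iottwinmaker create-workspace \
    --workspace-id IoTSiteWiseDefaultWorkspace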
Resources synced from AWS IoT SiteWise

This topic lists which assets and asset models you can sync from AWS IoT SiteWise to your AWS IoT TwinMaker workspace.

Important
See Differences between custom and default workspaces for information about the differences between the custom and default workspaces.

Custom and default workspaces

The following resources are synced and available in both custom and default workspaces:

Asset Models

AWS IoT TwinMaker creates a new component type for each asset model in AWS IoT SiteWise.

• The component TypeId for the asset model uses one of the following patterns:
  • Custom workspace - iotsitewise.assetmodel:assetModelId
  • Default workspace - assetModelId
• Each property in the asset model is a new property in the component type, with one of the following naming patterns:
  • Custom workspace - Property_propertyId
  • Default workspace - propertyId
The property name in AWS IoT SiteWise is stored as the displayName in the property definition.
• Each hierarchy in the asset model is a new property of type LIST with nestedType RELATIONSHIP in the component type. The hierarchy is mapped to a property with a name prefixed by one of the following:
  • Custom workspace - Hierarchy_hierarchyId
  • Default workspace - hierarchyId
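For example, after a sync you can retrieve the component type generated for an asset model with the CLI. The following is a minimal sketch; the workspace ID and the asset model ID inside the component type ID are hypothetical placeholders, and the iotsitewise.assetmodel: prefix applies to custom workspaces as described above:

aws iottwinmaker get-component-type \
    --workspace-id exampleWorkspace \
    --component-type-id "iotsitewise.assetmodel:a1b2c3d4-5678-90ab-cdef-11111EXAMPLE"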
Asset

AWS IoT TwinMaker creates a new entity for each asset in AWS IoT SiteWise.

• The entityId is the same as the assetId in AWS IoT SiteWise.
• These entities have a single component called sitewiseBase, which has the component type corresponding to the asset model for this asset.
• Any asset-level overrides, such as setting a property alias or unit of measure, are reflected in the entity in AWS IoT TwinMaker.

Default workspace only

The following resources are synced and available in the default workspace only, IoTSiteWiseDefaultWorkspace.

AssetModelComponents

AWS IoT TwinMaker creates a new component type for each AssetModelComponent in AWS IoT SiteWise.

• The component TypeId uses the following pattern: assetModelId.
• Each property in the asset model is a new property in the component type, with the property name as propertyId. The property name in AWS IoT SiteWise is stored as the displayName in the property definition.
• Each hierarchy in the asset model is a new property of type LIST with nestedType RELATIONSHIP in the component type. The hierarchy is mapped to a property with a name prefixed by hierarchyId.

AssetModelCompositeModel

AWS IoT TwinMaker creates a new component type for each AssetModelCompositeModel in AWS IoT SiteWise.

• The component TypeId uses the following pattern: assetModelId_assetModelCompositeModelId.
• Each property in the asset model is a new property in the component type, with the property name as propertyId. The property name in AWS IoT SiteWise is stored as the displayName in the property definition.

AssetCompositeModels

AWS IoT TwinMaker creates a new composite component for each AssetCompositeModel in AWS IoT SiteWise.

• The componentName is the same as the assetModelCompositeModelId in AWS IoT SiteWise.

Resources not synced

The following resources are not synced:
Additionally, you can't create non-synced component types that extend from a synced component type.

Note
Additional components are deleted along with the entity if the asset is deleted in AWS IoT SiteWise or if you delete the sync job.

You can use these synced entities in Grafana dashboards and add them as tags in the scene composer like regular entities. You can also issue knowledge graph queries for these synced entities.

Note
Synced entities without modification are not charged, but you are charged for those entities if changes have been made in AWS IoT TwinMaker. For example, if you add a non-synced component to a synced entity, that entity is now charged in AWS IoT TwinMaker. For more information, see AWS IoT TwinMaker Pricing.

Analyze sync status and errors
This topic provides guidance on how to analyze sync errors and statuses.

Important
See the section called "Differences between custom and default workspaces" for information about the differences between the custom and default workspaces.

Sync job statuses
A sync job has one of the following statuses depending on its state.
• The sync job CREATING state means the job is checking for permissions and loading data from AWS IoT SiteWise to prepare the sync.
• The sync job INITIALIZING state means all the existing resources in AWS IoT SiteWise are synced to AWS IoT TwinMaker. This step can take longer to complete if the user has a large number of
assets and asset models in AWS IoT SiteWise. You can monitor the number of resources that have been synced by checking on the sync job in the AWS IoT TwinMaker console, or by calling the ListSyncResources API.
• The sync job ACTIVE state means the initialization step is done. The job is now ready to sync any new updates from AWS IoT SiteWise.
• The sync job ERROR state indicates an error with any of the preceding states. Review the error message. There may be an issue with the IAM role setup. If you want to use a new IAM role, delete the sync job that had the error and create a new one with the new role.

Sync errors appear in the model source page, which is accessed from the Entity model sources table in your workspace. The model source page displays a list of resources that failed to sync. Most errors are automatically retried by the sync job, but if the resource requires an action, then it remains in the ERROR state. You can also obtain a list of errors by using the ListSyncResources API (see the CLI sketch at the end of this section).

To see all the listed errors for the current source, use the following procedure.
1. Navigate to your workspace in the AWS IoT TwinMaker console.
2. Select the AWS IoT SiteWise source listed in the Entity model sources modal to open the asset sync details page.
3. Any resources with persisting errors are listed in the Errors table. You can use this table to track down and fix errors related to specific resources.

Possible errors include the following:
• While AWS IoT SiteWise supports duplicate asset names, AWS IoT TwinMaker only supports them at the ROOT level, not under the same parent entity. If you have two assets with the same name under a parent entity in AWS IoT SiteWise, one of them fails to sync. To fix this error, either delete one of the assets or move one under a different parent asset in AWS IoT SiteWise before you sync.
• If you already have an entity with the same ID as the AWS IoT SiteWise asset ID, that asset fails to sync until you delete the existing entity.
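The following is a minimal AWS CLI sketch for retrieving the same error list programmatically. The workspace ID is a placeholder, SITEWISE is the sync source used for AWS IoT SiteWise asset sync, and the state filter shown here is an assumption based on the shape of the SyncResourceFilter in the API:

aws iottwinmaker list-sync-resources \
    --workspace-id MyWorkspace \
    --sync-source SITEWISE \
    --filters '[{"state": "ERROR"}]'

Each returned resource includes its type, ID, and an error message you can use to decide on a fix.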
Delete a sync job
Use the following procedure to delete a sync job.

Important
See the section called "Differences between custom and default workspaces" for information about the differences between the custom and default workspaces.

1. Navigate to the AWS IoT TwinMaker console.
2. Open the workspace from which you wish to delete the sync job.
3. Under Entity model sources, select the AWS IoT SiteWise source to open the source details page.
4. To stop the sync job, choose Disconnect. Confirm your choice to fully delete the sync job.

Once a sync job is deleted, you can create the sync job again in the same or a different workspace.
You can't delete a workspace if there are any sync jobs in that workspace. Delete the sync jobs first before deleting a workspace.
If there are any errors during the deletion of the sync job, the sync job remains in the DELETING state and is automatically retried. You can manually delete any of the synced entities or component types if there is an error related to deleting a resource.

Note
Any resources that were synced from AWS IoT SiteWise are deleted first; then the sync job itself is deleted.

Asset sync limits

Important
See the section called "Differences between custom and default workspaces" for information about the differences between the custom and default workspaces.

Because the AWS IoT SiteWise quotas are higher than the default AWS IoT TwinMaker quotas, we are increasing the following limits for entities and component types synced from AWS IoT SiteWise.
• 1000 synced component types in a workspace, since it can only sync 1000 asset models from AWS IoT SiteWise.
• 100,000 synced entities in a workspace, since it can only sync 100,000 assets from AWS IoT SiteWise.
• 2000 maximum child entities per parent entity. It syncs 2000 child assets per single parent asset.

Note
The GetEntity API only returns the first 50 child entities for a hierarchy property, but you can use the GetPropertyValue API to paginate and retrieve the list of all child entities (see the sketch at the end of this section).

• 600 properties per synced component from AWS IoT SiteWise, which can sync asset models with 600 total properties and hierarchies.

Note
These limits are only applicable for the synced entities. Request a quota increase if you need these limits increased for non-synced resources.
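The following is a minimal sketch of paginating child entities through the GetPropertyValue API from the AWS CLI. The workspace ID, entity ID, and hierarchy property name (Hierarchy_hierarchyId here, following the custom workspace naming pattern) are placeholders for your own values:

aws iottwinmaker get-property-value \
    --workspace-id MyWorkspace \
    --entity-id "parent-entity-id" \
    --component-name sitewiseBase \
    --selected-properties Hierarchy_hierarchyId \
    --max-results 200

Each response returns a page of the LIST property along with a nextToken; pass that token back through --next-token to retrieve the remaining child entities.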
AWS IoT TwinMaker Grafana dashboard integration
AWS IoT TwinMaker supports Grafana integration through an application plugin. Use Grafana version 10.4.0 or later to interact with your digital twin application. The AWS IoT TwinMaker plugin provides custom panels, dashboard templates, and a datasource to connect to your digital twin data.
For more information about how to onboard with Grafana and set up permissions for your dashboard, see the following topics:

Topics
• CORS configuration for Grafana scene viewer
• Setting up your Grafana environment
• Creating a dashboard IAM role
• Creating an AWS IoT TwinMaker video player policy

Note
You need to modify the CORS (cross-origin resource sharing) configuration of the Amazon S3 bucket to allow the Grafana user interface to load resources from the bucket. For the instructions, see CORS configuration for Grafana scene viewer.

For more information about the AWS IoT TwinMaker Grafana plugin, see the AWS IoT TwinMaker App documentation.
For more information about the key components of the Grafana plugin, see the following:
• AWS IoT TwinMaker datasource
• Dashboard templates
• Scene Viewer panel
• Video Player panel

CORS configuration for Grafana scene viewer
The AWS IoT TwinMaker Grafana plugin requires a CORS (cross-origin resource sharing) configuration, which allows the Grafana user interface to load resources from the Amazon S3 bucket. Without the CORS configuration, you will receive the error message "Load 3D Scene failed with Network Failure" in the Scene viewer, because the Grafana domain can't access the resources in the Amazon S3 bucket.
To configure your Amazon S3 bucket with CORS, use the following steps:
1. Sign in to the AWS Management Console and open the Amazon S3 console.
2. In the Buckets list, choose the name of the bucket that you use as your AWS IoT TwinMaker workspace's resource bucket.
3. Choose Permissions.
4. In the Cross-origin resource sharing section, select Edit to open the CORS editor.
5. In the CORS configuration editor text box, type or copy and paste the following JSON CORS configuration, replacing the Grafana workspace domain GRAFANA-WORKSPACE-DOMAIN with your domain.

Note
You need to keep the asterisk * character at the beginning of the "AllowedOrigins": JSON element.

[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET",
            "PUT",
            "POST",
            "DELETE",
            "HEAD"
        ],
        "AllowedOrigins": [
            "*GRAFANA-WORKSPACE-DOMAIN"
        ],
        "ExposeHeaders": [
            "ETag"
        ]
    }
]

6. Select Save changes to finish the CORS configuration.

For more information on CORS with Amazon S3 buckets, see Using cross-origin resource sharing (CORS).
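If you prefer to manage the bucket from the AWS CLI, a sketch like the following applies the same configuration; the bucket name is a placeholder. Note that, unlike the console editor, the CLI expects the rules wrapped in a CORSRules element:

aws s3api put-bucket-cors \
    --bucket bucket-name \
    --cors-configuration '{"CORSRules": [{"AllowedHeaders": ["*"], "AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"], "AllowedOrigins": ["*GRAFANA-WORKSPACE-DOMAIN"], "ExposeHeaders": ["ETag"]}]}'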
Setting up your Grafana environment
You can use Amazon Managed Grafana for a fully managed service, or set up a Grafana environment that you manage yourself. With Amazon Managed Grafana, you can quickly deploy, operate, and scale open source Grafana for your needs. Alternatively, you can set up your own infrastructure to manage Grafana servers.
For more information about both Grafana environment options, see the following topics:
• Amazon Managed Grafana
• Self-managed Grafana

Amazon Managed Grafana
Amazon Managed Grafana provides an AWS IoT TwinMaker plugin so you can quickly integrate AWS IoT TwinMaker with Grafana. Because Amazon Managed Grafana manages Grafana servers for you, you can visualize your data without having to build, package, or deploy any hardware or any other Grafana infrastructure. For more information about Amazon Managed Grafana, see What is Amazon Managed Grafana?

Note
Amazon Managed Grafana currently supports version 1.3.1 of the AWS IoT TwinMaker Grafana plugin.

Amazon Managed Grafana prerequisites
To use AWS IoT TwinMaker in an Amazon Managed Grafana dashboard, first complete the following prerequisite:
• Create an AWS IoT TwinMaker workspace. For more information about creating workspaces, see Getting started with AWS IoT TwinMaker.

Note
When you first create an Amazon Managed Grafana workspace in the AWS Management Console, AWS IoT TwinMaker isn't listed. However, the plugin is already installed on all workspaces. You can find the AWS IoT TwinMaker plugin on the open source Grafana plugins list. You can find the AWS IoT TwinMaker datasource by choosing Add a datasource on the Datasources page.

When you create an Amazon Managed Grafana workspace, an IAM role is created automatically to manage the permissions for the Grafana instance. This is called the Workspace IAM Role. It's the authentication provider option you'll use to configure all AWS IoT TwinMaker datasources for Grafana. Amazon Managed Grafana doesn't support automatically adding permissions for AWS IoT TwinMaker, so you must set up these permissions manually. For more information about setting up manual permissions, see Creating a dashboard IAM role.

Self-managed Grafana
You can choose to host your own infrastructure to run Grafana. For information about running Grafana locally on your machine, see Install Grafana. The AWS IoT TwinMaker plugin is available on the public Grafana catalog. For information about installing this plugin in your Grafana environment, see AWS IoT TwinMaker App.
When you run Grafana locally, you can't easily share dashboards or provide access to multiple users. For a scripted quick start guide about sharing dashboards using local Grafana, see the AWS IoT TwinMaker samples repository. This resource walks you through hosting a Grafana environment on Cloud9 and Amazon EC2 on a public endpoint.
You must determine which authentication provider you'll use for configuring TwinMaker datasources. You configure the credentials for the environment based on the default credentials chain (see Using the Default Credential Provider Chain). The default credentials can be the permanent credentials of any user or role. For example, if you're running Grafana on Amazon EC2, the default credentials chain has access to the Amazon EC2 execution role, which would then be your authentication provider. The IAM Amazon Resource Name (ARN) of the authentication provider is required in the steps in Creating a dashboard IAM role.

Creating a dashboard IAM role
With AWS IoT TwinMaker, you can control data access on your Grafana dashboards. Grafana dashboard users should have different permission scopes to view data, and in some cases, write data. For example, an alarm operator might not have permission to view videos, while an admin has permission for all resources.
Grafana defines the permissions through datasources, where credentials and an IAM role are provided. The AWS IoT TwinMaker datasource fetches AWS credentials with permissions for that role. If an IAM role isn't provided, Grafana uses the scope of the credentials, which can't be reduced by AWS IoT TwinMaker.
To use your AWS IoT TwinMaker dashboards in Grafana, you create an IAM role and attach policies. You can use the following templates to help you create these policies.

Create an IAM policy
Create an IAM policy called YourWorkspaceIdDashboardPolicy in the IAM console. This policy gives your workspaces access to the Amazon S3 bucket and AWS IoT TwinMaker resources. You can also decide to use AWS IoT Greengrass Edge Connector for Amazon Kinesis Video Streams, which requires permissions for the Kinesis Video Streams and AWS IoT SiteWise assets configured for the component. To fit your use case, choose one of the following policy templates (a CLI sketch for creating the policy follows these options).

1. No video permissions policy
If you don't want to use the Grafana Video Player panel, create the policy using the following template.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::bucketName/*",
                "arn:aws:s3:::bucketName"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "iottwinmaker:Get*",
                "iottwinmaker:List*"
            ],
            "Resource": [
                "arn:aws:iottwinmaker:region:accountId:workspace/workspaceId",
                "arn:aws:iottwinmaker:region:accountId:workspace/workspaceId/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "iottwinmaker:ListWorkspaces",
            "Resource": "*"
        }
    ]
}

An Amazon S3 bucket is created for each workspace. It contains the 3D models and scenes to view on a dashboard. The SceneViewer panel loads items from this bucket.
2. Scoped down video permissions policy
To limit access on the Video Player panel in Grafana, group your AWS IoT Greengrass Edge Connector for Amazon Kinesis Video Streams resources by tags. For more information about scoping down permissions for your video resources, see Creating an AWS IoT TwinMaker video player policy.

3. All video permissions
If you don't want to group your videos, you can make them all accessible from the Grafana Video Player. Anyone with access to a Grafana workspace is able to play video for any stream in your account, and has read-only access to any AWS IoT SiteWise asset. This includes any resources that are created in the future. Create the policy with the following template:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::bucketName/*",
                "arn:aws:s3:::bucketName"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "iottwinmaker:Get*",
                "iottwinmaker:List*"
            ],
            "Resource": [
                "arn:aws:iottwinmaker:region:accountId:workspace/workspaceId",
                "arn:aws:iottwinmaker:region:accountId:workspace/workspaceId/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "iottwinmaker:ListWorkspaces",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "kinesisvideo:GetDataEndpoint",
                "kinesisvideo:GetHLSStreamingSessionURL"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iotsitewise:GetAssetPropertyValue",
                "iotsitewise:GetInterpolatedAssetPropertyValues"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iotsitewise:BatchPutAssetPropertyValue"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "aws:ResourceTag/EdgeConnectorForKVS": "*workspaceId*"
                }
            }
        }
    ]
}

This policy template provides the following permissions:
• Read-only access to an S3 bucket to load a scene.
• Read-only access to AWS IoT TwinMaker for all entities and components in a workspace.
• Read-only access to stream all Kinesis Video Streams videos in your account.
• Read-only access to the property value history of all AWS IoT SiteWise assets in your account.
• Data ingestion into any property of an AWS IoT SiteWise asset tagged with the key EdgeConnectorForKVS and the value workspaceId.
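Once you've chosen a template, you can create the policy from the AWS CLI as well as the console. The following is a minimal sketch; it assumes you saved your chosen template as dashboard-policy.json and substituted your own bucket, Region, account ID, and workspace ID:

aws iam create-policy \
    --policy-name YourWorkspaceIdDashboardPolicy \
    --policy-document file://dashboard-policy.json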
Tagging your camera AWS IoT SiteWise asset to request video upload from the edge
Using the Video Player in Grafana, users can manually request that video is uploaded from the edge cache to Kinesis Video Streams. You can turn on this feature for any AWS IoT SiteWise asset that's associated with your AWS IoT Greengrass Edge Connector for Amazon Kinesis Video Streams and that is tagged with the key EdgeConnectorForKVS.
The tag value can be a list of workspaceIds delimited by any of the following characters: . : + = @ _ / -. For example, if you want to use an AWS IoT SiteWise asset associated with an AWS IoT Greengrass Edge Connector for Amazon Kinesis Video Streams across AWS IoT TwinMaker workspaces, you can use a tag that follows this pattern: WorkspaceA/WorkspaceB/WorkspaceC. The Grafana plugin enforces that the AWS IoT TwinMaker workspaceId is used to group AWS IoT SiteWise asset data ingestion.

Add more permissions to your dashboard policy
The AWS IoT TwinMaker Grafana plugin uses your authentication provider to call AssumeRole on the dashboard role you create. Internally, the plugin restricts the highest scope of permissions you have access to by using a session policy in the AssumeRole call. For more information about session policies, see Session policies.
This is the maximum permissive policy you can have on your dashboard role for an AWS IoT TwinMaker workspace:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::bucketName/*",
                "arn:aws:s3:::bucketName"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "iottwinmaker:Get*",
                "iottwinmaker:List*"
            ],
            "Resource": [
                "arn:aws:iottwinmaker:region:accountId:workspace/workspaceId",
                "arn:aws:iottwinmaker:region:accountId:workspace/workspaceId/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "iottwinmaker:ListWorkspaces",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "kinesisvideo:GetDataEndpoint",
                "kinesisvideo:GetHLSStreamingSessionURL"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iotsitewise:GetAssetPropertyValue",
                "iotsitewise:GetInterpolatedAssetPropertyValues"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iotsitewise:BatchPutAssetPropertyValue"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "aws:ResourceTag/EdgeConnectorForKVS": "*workspaceId*"
                }
            }
        }
    ]
}
If you add statements that Allow more permissions, they won't work on the AWS IoT TwinMaker plugin. This is by design, to ensure that the minimum necessary permissions are used by the plugin. However, you can scope down permissions further. For information, see Creating an AWS IoT TwinMaker video player policy.

Creating the Grafana Dashboard IAM role
In the IAM console, create an IAM role called YourWorkspaceIdDashboardRole. Attach the YourWorkspaceIdDashboardPolicy to the role.
To edit the trust policy of the dashboard role, you must give permission for the Grafana authentication provider to call AssumeRole on the dashboard role. Update the trust policy with the following template:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "ARN of Grafana authentication provider"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

For more information about creating a Grafana environment and finding your authentication provider, see Setting up your Grafana environment.

Creating an AWS IoT TwinMaker video player policy
The following is a policy template with all of the video permissions you need for the AWS IoT TwinMaker plugin in Grafana:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::bucketName/*",
                "arn:aws:s3:::bucketName"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "iottwinmaker:Get*",
                "iottwinmaker:List*"
            ],
            "Resource": [
                "arn:aws:iottwinmaker:region:accountId:workspace/workspaceId",
                "arn:aws:iottwinmaker:region:accountId:workspace/workspaceId/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "iottwinmaker:ListWorkspaces",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "kinesisvideo:GetDataEndpoint",
                "kinesisvideo:GetHLSStreamingSessionURL"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iotsitewise:GetAssetPropertyValue",
                "iotsitewise:GetInterpolatedAssetPropertyValues"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iotsitewise:BatchPutAssetPropertyValue"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "aws:ResourceTag/EdgeConnectorForKVS": "*workspaceId*"
                }
            }
        }
    ]
}

For more information about the full policy, see the All video permissions policy template in the Create an IAM policy topic.

Scope down access to your resources
The Video Player panel in Grafana directly calls Kinesis Video Streams and AWS IoT SiteWise to provide a complete video playback experience. To avoid unauthorized access to resources that aren't associated with your AWS IoT TwinMaker workspace, add conditions to the IAM policy for your workspace dashboard role.

Scope down GET permissions
You can scope down the access of your Amazon Kinesis Video Streams and AWS IoT SiteWise assets by tagging resources. You might have already tagged your AWS IoT SiteWise camera asset based on the AWS IoT TwinMaker workspaceId to enable the video upload request feature; see the Upload video from the edge topic.
You can use the same tag key-value pair to limit GET access to AWS IoT SiteWise assets, and also tag your Kinesis Video Streams the same way. You can then add this condition to the kinesisvideo and iotsitewise statements in the YourWorkspaceIdDashboardPolicy:

"Condition": {
    "StringLike": {
        "aws:ResourceTag/EdgeConnectorForKVS": "*workspaceId*"
    }
}

Real-life use case: Grouping cameras
In this scenario, you have a large array of cameras monitoring the process of baking cookies in a factory. Batches of cookie batter are made in the Batter Room, batter is frozen in the Freezer Room, and cookies are baked in the Baking Room. There are cameras in each of these rooms, with different teams of operators separately monitoring each process. You want each group of operators to be authorized for their respective room. When building a digital twin for the cookie factory, a single workspace is used, but the camera permissions need to be scoped by room.
You can achieve this permission separation by tagging groups of cameras based on their groupingId. In this scenario, the groupingIds are BatterRoom, FreezerRoom, and BakingRoom. The camera in each room is connected to Kinesis Video Streams and should have a tag with: Key = EdgeConnectorForKVS, Value = BatterRoom. The value can be a list of groupings delimited by any of the following characters: . : + = @ _ / -
To amend the YourWorkspaceIdDashboardPolicy, use the following policy statements:

...,
{
    "Effect": "Allow",
    "Action": [
        "kinesisvideo:GetDataEndpoint",
        "kinesisvideo:GetHLSStreamingSessionURL"
    ],
    "Resource": "*",
    "Condition": {
        "StringLike": {
            "aws:ResourceTag/EdgeConnectorForKVS": "*groupingId*"
        }
    }
},
{
    "Effect": "Allow",
    "Action": [
        "iotsitewise:GetAssetPropertyValue",
        "iotsitewise:GetInterpolatedAssetPropertyValues"
    ],
    "Resource": "*",
    "Condition": {
        "StringLike": {
            "aws:ResourceTag/EdgeConnectorForKVS": "*groupingId*"
        }
    }
},
...

These statements restrict streaming video playback and AWS IoT SiteWise property history access to specific resources in a grouping. The groupingId is defined by your use case. In our scenario, it would be the roomId.
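To apply such a grouping tag from the AWS CLI, you could run commands like the following — a sketch only, with hypothetical ARNs standing in for your camera asset and stream:

# Tag the AWS IoT SiteWise camera asset
aws iotsitewise tag-resource \
    --resource-arn arn:aws:iotsitewise:region:accountId:asset/assetId \
    --tags EdgeConnectorForKVS=BatterRoom

# Tag the corresponding Kinesis video stream
aws kinesisvideo tag-stream \
    --stream-arn arn:aws:kinesisvideo:region:accountId:stream/streamName/creationTime \
    --tags EdgeConnectorForKVS=BatterRoom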
Scope down AWS IoT SiteWise BatchPutAssetPropertyValue permission
Providing this permission turns on the video upload request feature in the Video Player. When you upload video, you can specify a time range and submit the request by choosing Submit on the panel on the Grafana dashboard.
To give iotsitewise:BatchPutAssetPropertyValue permissions, use the default policy:

...,
{
    "Effect": "Allow",
    "Action": [
        "iotsitewise:BatchPutAssetPropertyValue"
    ],
    "Resource": "*",
    "Condition": {
        "StringLike": {
            "aws:ResourceTag/EdgeConnectorForKVS": "*workspaceId*"
        }
    }
},
...

By using this policy, users can call BatchPutAssetPropertyValue for any property on the AWS IoT SiteWise camera asset. You can restrict authorization to a specific propertyId by specifying it in the statement's condition.

{
    ...
    "Condition": {
        "StringEquals": {
            "iotsitewise:propertyId": "propertyId"
        }
    }
    ...
}

The Video Player panel in Grafana ingests data into the measurement property named VideoUploadRequest to initiate the uploading of video from the edge cache to Kinesis Video Streams. Find the propertyId of this property in the AWS IoT SiteWise console. To amend the YourWorkspaceIdDashboardPolicy, use the following policy statement:

...,
{
    "Effect": "Allow",
    "Action": [
        "iotsitewise:BatchPutAssetPropertyValue"
    ],
    "Resource": "*",
    "Condition": {
        "StringLike": {
            "aws:ResourceTag/EdgeConnectorForKVS": "*workspaceId*"
        },
        "StringEquals": {
            "iotsitewise:propertyId": "VideoUploadRequestPropertyId"
        }
    }
},
...

This statement restricts ingesting data to a specific property of your tagged AWS IoT SiteWise camera asset. For more information, see How AWS IoT SiteWise works with IAM.

Connect AWS IoT SiteWise Alarms to AWS IoT TwinMaker Grafana dashboards

Note
This feature is in public preview and is subject to change.

AWS IoT TwinMaker can import AWS IoT SiteWise and AWS IoT Events alarms into AWS IoT TwinMaker components. This allows you to query alarm status and configure alarm thresholds without implementing a custom data connector for AWS IoT SiteWise data migration. You can use the AWS IoT TwinMaker Grafana plugin to visualize the alarm status and configure the alarm threshold in Grafana, without making API calls to AWS IoT TwinMaker or interacting directly with AWS IoT SiteWise alarms.
AWS IoT SiteWise alarm configuration prerequisites
Before creating alarms and integrating them into your Grafana dashboard, make sure you have reviewed the following prerequisites:
• Become familiar with AWS IoT SiteWise's model and asset system. For more information, see Creating asset models and Creating assets in the AWS IoT SiteWise User Guide.
• Become familiar with the AWS IoT Events alarm models and how to attach them to an AWS IoT SiteWise model. For more information, see Defining AWS IoT Events alarms in the AWS IoT SiteWise User Guide.
• Integrate AWS IoT TwinMaker with Grafana so you can access your AWS IoT TwinMaker resources in Grafana. For more information, see AWS IoT TwinMaker Grafana dashboard integration.

Define the AWS IoT SiteWise alarm component IAM role
AWS IoT TwinMaker uses the workspace IAM role to query and configure the alarm threshold in Grafana. The following permissions are required in the AWS IoT TwinMaker workspace role in order to interact with AWS IoT SiteWise alarms in Grafana:

{
    "Effect": "Allow",
    "Action": [
        "iotevents:DescribeAlarmModel"
    ],
    "Resource": ["{IoTEventsAlarmModelArn}"]
},
{
    "Effect": "Allow",
    "Action": [
        "iotsitewise:BatchPutAssetPropertyValue"
    ],
    "Resource": ["{IoTSitewiseAssetArn}"]
}

In the AWS IoT TwinMaker console, create an entity that represents your AWS IoT SiteWise asset. Make sure you add a component for that entity using com.amazon.iotsitewise.alarm as the component type, and pick the corresponding asset and alarm models.
When you create this component, AWS IoT TwinMaker automatically imports the related alarm properties from AWS IoT SiteWise and AWS IoT Events. You can then repeat this alarm component type pattern to create alarm components for all the assets needed in your workspace.

Query and update through the AWS IoT TwinMaker API
After creating alarm components, you can query the alarm status and threshold, and update alarm thresholds, through the AWS IoT TwinMaker API.
Below is a sample request to query alarm status:

aws iottwinmaker get-property-value-history --cli-input-json \
'{
    "workspaceId": "{workspaceId}",
    "entityId": "{entityId}",
    "componentName": "{componentName}",
    "selectedProperties": ["alarm_status"],
    "startTime": "{startTimeIsoString}",
    "endTime": "{endTimeIsoString}"
}'
Below is a sample request to query the alarm threshold:

aws iottwinmaker get-property-value-history --cli-input-json \
'{
    "workspaceId": "{workspaceId}",
    "entityId": "{entityId}",
    "componentName": "{componentName}",
    "selectedProperties": ["alarm_threshold"],
    "startTime": "{startTimeIsoString}",
    "endTime": "{endTimeIsoString}"
}'

Below is a sample request to update the alarm threshold:

aws iottwinmaker batch-put-property-values --cli-input-json \
'{
    "workspaceId": "{workspaceId}",
    "entries": [
        {
            "entityPropertyReference": {
                "entityId": "{entityId}",
                "componentName": "{componentName}",
                "propertyName": "alarm_threshold"
            },
            "propertyValues": [
                {
                    "value": {
                        "doubleValue": "{newThreshold}"
                    },
                    "time": "{effectiveTimeIsoString}"
                }
            ]
        }
    ]
}'

Configure your Grafana dashboard for alarms
You need to create a second, write-enabled dashboard IAM role. This is a normal dashboard role, but with permission for the iottwinmaker:BatchPutPropertyValues action on the AWS IoT TwinMaker workspace ARN, as in the example below.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iottwinmaker:Get*",
                "iottwinmaker:List*",
                "iottwinmaker:BatchPutPropertyValues"
            ],
            "Resource": [
                "{workspaceArn}",
                "{workspaceArn}/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "iottwinmaker:ListWorkspaces",
            "Resource": "*"
        }
    ]
}

Alternatively, you can add this statement at the end of your IAM role's policy instead:

{
    "Effect": "Allow",
    "Action": [
        "iottwinmaker:BatchPutPropertyValues"
    ],
    "Resource": [
        "{workspaceArn}",
        "{workspaceArn}/*"
    ]
}

The datasource needs to have its write ARN set to the dashboard write role you created. After you modify your IAM role, log in to your Grafana dashboard to assume the updated role ARN. Select the checkbox for Define write permissions for Alarm Configuration Panel and copy in the ARN of the write role.

Use Grafana dashboard for alarm visualization
Use the following procedure to add an alarm configuration panel to your dashboard and configure it:
1. Select the workspace in the panel options.
2. Set your datasource in the query configuration.
3. Use the following query type: Get Property Value History by Entity.
4. Select an entity or entity variable that you wish to add an alarm to.
5. Once you have selected the entity, select a component or component variable to apply a property to.
6. For the property, choose alarm_status and alarm_threshold. Once connected, you should see the alarm ID and its current threshold.

Note
For the public preview, no notifications are shown. You should review your alarm status and threshold to make sure the properties were applied correctly.

7. Use the default Query Order of Ascending so that the latest value shows.
8. The filter section of the query can be left empty.
9. Use the Edit Alarm button to bring up a dialog to change the current alarm threshold.
10. Select Save to set the new threshold value.

Note
This panel should only be used with a live time range that includes the present. Using it with time ranges that start and end in the past may show unexpected values when you edit alarm thresholds, because the current threshold is always shown.

AWS IoT TwinMaker Matterport integration
Matterport provides a variety of capture options to scan real-world environments and create immersive 3D models, also known as Matterport digital twins. These models are called Matterport spaces. AWS IoT TwinMaker supports Matterport integration, allowing you to import your Matterport digital twins into your AWS IoT TwinMaker scenes. By pairing Matterport digital twins with AWS IoT TwinMaker, you can visualize and monitor your digital twin system in a virtual environment.
For more information about using Matterport, read Matterport's documentation on the AWS IoT TwinMaker and Matterport page.

Integration topics
• Integration overview
• Matterport integration prerequisites
• Generate and record your Matterport credentials
• Store your Matterport credentials in AWS Secrets Manager
• Import Matterport spaces into AWS IoT TwinMaker scenes
• Use Matterport spaces in your AWS IoT TwinMaker Grafana dashboard
• Use Matterport spaces in your AWS IoT TwinMaker web application

Integration overview
This integration enables
you to do the following:
• Use your Matterport tags and spaces in the AWS IoT TwinMaker app kit.
• View your imported Matterport data in your AWS IoT TwinMaker Grafana dashboard. For more information on using AWS IoT TwinMaker and Grafana, read the Grafana dashboard integration documentation.
• Import your Matterport spaces into your AWS IoT TwinMaker scenes.
• Select and import the Matterport tags that you'd like to bind to data in your AWS IoT TwinMaker scene.
• Automatically surface your Matterport space and tag changes in your AWS IoT TwinMaker scene and approve which to synchronize.

The integration process comprises three critical steps.
1. Generate and record your Matterport credentials
2. Store your Matterport credentials in AWS Secrets Manager
3. Import Matterport spaces into AWS IoT TwinMaker scenes

You start your integration in the AWS IoT TwinMaker console. In the console's Settings page, under 3rd party resources, open Matterport integration to navigate between the different resources required for the integration.

Matterport integration prerequisites
Before integrating Matterport with AWS IoT TwinMaker, please make sure you meet the following prerequisites:
• You have purchased an Enterprise-level Matterport account and the Matterport products necessary for the AWS IoT TwinMaker integration.
• You have an AWS IoT TwinMaker workspace. For more information, see Getting started with AWS IoT TwinMaker.
• You have updated your AWS IoT TwinMaker workspace role. For more information on creating a workspace role, see Create and manage a service role for AWS IoT TwinMaker. Add the following to your workspace role (a CLI sketch for attaching this statement follows these prerequisites):

{
    "Effect": "Allow",
    "Action": "secretsmanager:GetSecretValue",
    "Resource": [
        "AWS Secrets Manager secret ARN"
    ]
}

• You must contact Matterport to configure the necessary licensing for enabling the integration. Matterport will also enable a Private Model Embed (PME) for the integration. If you already have a Matterport account manager, contact them directly. Use the following procedure to contact Matterport and request an integration if you don't have a Matterport point of contact:
  1. Open the Matterport and AWS IoT TwinMaker page.
  2. Press the Contact us button to open the contact form.
  3. Fill out the required information on the form.
  4. When you're ready, choose SAY HELLO to send your request to Matterport.
Once you have requested integration, you can generate the required Matterport SDK and Private Model Embed (PME) credentials needed to continue the integration process.

Note
This may involve incurring a fee for purchasing new products or services.
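If you manage the workspace role with the AWS CLI, the following is a minimal sketch for attaching the statement above as an inline policy; the role name, policy name, and file name are placeholders:

aws iam put-role-policy \
    --role-name MyTwinMakerWorkspaceRole \
    --policy-name MatterportSecretAccess \
    --policy-document file://matterport-secret-access.json

Here, matterport-secret-access.json is a full policy document ({"Version": "2012-10-17", "Statement": [ ... ]}) wrapping the statement shown in the prerequisites list.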
Generate and record your Matterport credentials
To integrate Matterport with AWS IoT TwinMaker, you must provide AWS Secrets Manager with Matterport credentials. Use the following procedure to generate the Matterport SDK credentials.
1. Log in to your Matterport account.
2. Navigate to your account settings page.
3. Once in the settings page, select the Developer tools option.
4. On the Developer tools page, go to the SDK Key Management section.
5. Once in the SDK Key Management section, select the option to add a new SDK key.
6. Once you have the Matterport SDK key, add domains to the key for AWS IoT TwinMaker and your Grafana server. If you are using the AWS IoT TwinMaker app kit, make sure to add your custom domain as well.
7. Next, find the Application Integration Management section, where you should see your PME application listed. Record the following information:
• The Client ID
• The Client Secret

Note
Since the Client Secret is only presented to you once, we strongly recommend that you record it. You must present your Client Secret in the AWS Secrets Manager console to continue with the Matterport integration.

These credentials are automatically created when you have purchased the necessary components and the PME for your account has been enabled by Matterport. If these credentials do not appear, contact Matterport. To request contact, see the Matterport and AWS IoT TwinMaker contact form.
For more information on Matterport SDK credentials, see Matterport's official SDK documentation, SDK Docs Overview.

Store your Matterport credentials in AWS Secrets Manager
Use the following procedure to store your Matterport
credentials in AWS Secrets Manager.

Note
You need the Client Secret created in the procedure in the Generate and record your Matterport credentials topic to continue with the Matterport integration.

1. Log in to the AWS Secrets Manager console.
2. Navigate to the Secrets page and select Store a new secret.
3. For the Secret type, select Other type of secret.
4. In the Key/value pairs section, add the following key-value pairs, with your Matterport credentials as the values:
• Create a key-value pair with Key: application_key and Value: <your Matterport credentials>.
• Create a key-value pair with Key: client_id and Value: <your Matterport credentials>.
• Create a key-value pair with Key: client_secret and Value: <your Matterport credentials>.
5. For the Encryption key, you can leave the default encryption key aws/secretsmanager selected.
6. Choose Next to move on to the Configure secret page.
7. Fill out the fields for the Secret name and the Description.
8. Add a tag to this secret in the Tags section. When creating the tag, assign the key as AWSIoTTwinMaker_Matterport.

Note
You must add a tag. Tags are required when adding 3rd party secrets into AWS Secrets Manager, despite tags being listed as optional. The Value field is optional. Once you have provided a Key, you can select Add to move on to the next step.

9. Choose Next to move on to the Configure rotation page. Setting up a secret rotation is optional. If you wish to finish adding your secret and don't need a rotation, choose Next again. For more information on secret rotation, see Rotate AWS Secrets Manager secrets.
10. Confirm your secret configuration on the Review page. Once you're ready to add your secret, choose Store.

For more information about using AWS Secrets Manager, see the following AWS Secrets Manager documentation:
• Create and manage secrets with AWS Secrets Manager
• What is AWS Secrets Manager?
• Rotate AWS Secrets Manager secrets
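You can also store the secret from the AWS CLI. The following is a minimal sketch; the secret name and the three credential values are placeholders, and the AWSIoTTwinMaker_Matterport tag key matches the console procedure above:

aws secretsmanager create-secret \
    --name MyMatterportCredentials \
    --secret-string '{"application_key": "yourApplicationKey", "client_id": "yourClientId", "client_secret": "yourClientSecret"}' \
    --tags Key=AWSIoTTwinMaker_Matterport,Value=""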
Now you are ready to import your Matterport assets into AWS IoT TwinMaker scenes. See the procedure in the following section, Import Matterport spaces into AWS IoT TwinMaker scenes.

Import Matterport spaces into AWS IoT TwinMaker scenes
Add Matterport scans to your scene by selecting the connected Matterport account from within the scene settings page. Use the following procedure to import your Matterport scans and tags:
1. Log in to the AWS IoT TwinMaker console.
2. Create or open an existing AWS IoT TwinMaker scene in which you want to use a Matterport space.
3. Once the scene has opened, navigate to the Settings tab.
4. In Settings, under 3rd party resources, find the Connection name and enter the secret you created in the procedure from Store your Matterport credentials in AWS Secrets Manager.

Note
If you see a message that states No connections, navigate to the AWS IoT TwinMaker console settings page to begin the process for Matterport integration.

5. Next, choose the Matterport space you'd like to use in your scene by selecting it in the Matterport space drop-down.
6. After selecting a space, you can import your Matterport tags and convert them to AWS IoT TwinMaker scene tags by pressing the Import tags button. After you have imported Matterport tags, the button is replaced by an Update tags button. You can continually update your Matterport tags in AWS IoT TwinMaker so that they always reflect the most recent changes in your Matterport account.
7. You have successfully integrated AWS IoT TwinMaker with Matterport, and
now your AWS IoT TwinMaker scene has both your imported Matterport space and tags. You can work within this scene as you would with any other AWS IoT TwinMaker scene.
For more information on working with AWS IoT TwinMaker scenes, see Creating and editing AWS IoT TwinMaker scenes.

Use Matterport spaces in your AWS IoT TwinMaker Grafana dashboard
Once you have imported your Matterport space into an AWS IoT TwinMaker scene, you can view that scene with the Matterport space in your Grafana dashboard. If you have already configured Grafana with AWS IoT TwinMaker, then you can simply open the Grafana dashboard to view your scene with the imported Matterport space.
If you have not configured AWS IoT TwinMaker with Grafana yet, complete the Grafana integration process first. You have two choices when integrating AWS IoT TwinMaker with Grafana: you can use a self-managed Grafana instance, or you can use Amazon Managed Grafana.
See the following documentation to learn more about the Grafana options and integration process:
• AWS IoT TwinMaker Grafana dashboard integration
• Amazon Managed Grafana
• Self-managed Grafana

Use Matterport spaces in your AWS IoT TwinMaker web application
Once you have imported your Matterport space into an AWS IoT TwinMaker scene, you can view that scene with the Matterport space in your AWS IoT app kit web application.
See the following documentation to learn more about using the AWS IoT application kit:
• To learn more about using AWS IoT TwinMaker with the AWS IoT app kit, see Create a customized web application using AWS IoT TwinMaker UI Components.
• To learn more about using the AWS IoT application kit, please visit the AWS IoT Application kit GitHub page.
• For instructions on how to start a new web application using the AWS IoT application kit, please visit the official IoT App Kit documentation page.

AWS IoT TwinMaker video integration
Video cameras present a good opportunity for digital twin simulation. You can use AWS IoT TwinMaker to simulate your camera's location and physical conditions. Create entities in AWS IoT TwinMaker for your on-site cameras, and use video components to stream live video and metadata from your site to your AWS IoT TwinMaker scene or to a Grafana dashboard.
AWS IoT TwinMaker can capture video from edge devices in two ways. You can stream video from edge devices with the edge connector for Kinesis video stream, or you can save video on the edge device and initiate video uploading with MQTT messages. Use this component to stream video data from your devices for use with AWS IoT services.
To generate the required resources and deploy the edge connector for Kinesis Video Streams, see Getting started with the edge connector for Kinesis video stream on GitHub. For more information about the AWS IoT Greengrass component, see the AWS IoT Greengrass documentation on the edge connector for Kinesis Video Streams.
After you've created the required AWS IoT SiteWise models and configured the Kinesis Video Streams Greengrass component, you can stream or record video on the edge to your digital twin application in the AWS IoT TwinMaker console. You can also view livestreams and metadata from your devices in a Grafana dashboard. For more information about integrating Grafana and AWS IoT TwinMaker, see AWS IoT TwinMaker Grafana dashboard integration.

Use the edge connector for Kinesis video stream to stream video in AWS IoT TwinMaker
With the edge connector for Kinesis video stream, you can stream video and data to an entity in your AWS IoT TwinMaker scene. You use a video component to do this. To create the video component for use in your scenes, complete the following procedure.

Prerequisites
Before you create the video component in your AWS IoT TwinMaker scene, make sure you've completed the following prerequisites:
• Created the required AWS IoT SiteWise models and assets for the edge connector for Kinesis video stream. For more information about creating the AWS IoT SiteWise assets for the connector, see Getting started with the edge connector for Kinesis video stream.
• Deployed the Kinesis video stream edge connector on your AWS IoT Greengrass device. For more information about deploying the Kinesis video stream edge connector component, see the deployment README.

Create video components for AWS IoT TwinMaker scenes
Complete the following steps to create the edge connector for the Kinesis video stream component for your scene.
1. In the AWS IoT TwinMaker console, open the scene you want to add the video component to.
2. After the scene opens, choose an existing entity or create the entity you want to add the component to, and then choose Add component.
3. In the Add component pane, enter a name for the component, and for the Type, choose com.amazon.iotsitewise.connector.edgevideo.
4. Choose an Asset Model by selecting the name of the AWS IoT SiteWise camera model you created. This name should have the following format: EdgeConnectorForKVSCameraModel-0abc, where the string of letters and numbers at the end matches your own asset name.
5. For Asset, choose the AWS IoT SiteWise camera assets you want to stream video from. A small window appears showing you a preview of the current video stream.

Note
To test your video streaming, choose test. This test sends out an MQTT event to initiate video live streaming. Wait for a few moments to see the video show up in the player.

6. To add the video component to your entity, choose Add component.

Add video and metadata from Kinesis video stream to a Grafana dashboard
After you've created a video component for your entity in your AWS IoT TwinMaker scene, you can configure the video panel in Grafana to see live streams. Make sure you have properly integrated AWS IoT TwinMaker with Grafana. For more information, see AWS IoT TwinMaker Grafana dashboard integration.

Important
To view video in your Grafana dashboard, you must make sure the Grafana datasources have the proper IAM permissions. To create the required role and policy, see Creating a dashboard IAM role.

Complete the following steps to see Kinesis Video Streams and metadata in your Grafana dashboard.
1. Open the AWS IoT TwinMaker dashboard.
2. Choose Add panel, and then choose Add an empty panel.

Note
For Grafana v10.4, the AWS IoT TwinMaker video player is found under Widget. Select Add >> Widget.

3. From the panels list, choose the AWS IoT TwinMaker video player panel.
4. In the AWS IoT TwinMaker video player panel, for KinesisVideoStreamName, enter the name of the Kinesis video stream you want to stream video from.

Note
To stream metadata to the Grafana video panel, you must first have created an entity with a video streaming component.
Add video and metadata from Kinesis video stream to a Grafana dashboard

After you've created a video component for your entity in your AWS IoT TwinMaker scene, you can configure the video panel in Grafana to see live streams. Make sure you have properly integrated AWS IoT TwinMaker with Grafana. For more information, see AWS IoT TwinMaker Grafana dashboard integration.

Important
To view video in your Grafana dashboard, you must make sure the Grafana datasources have the proper IAM permissions. To create the required role and policy, see Creating a dashboard IAM role.

Complete the following steps to see Kinesis Video Streams and metadata in your Grafana dashboard.

1. Open the AWS IoT TwinMaker dashboard.
2. Choose Add panel, and then choose Add an empty panel.

Note
For Grafana v10.4, the AWS IoT TwinMaker video player is found under Widget. Select Add >> Widget.

3. From the panels list, choose the AWS IoT TwinMaker video player panel.
4. In the AWS IoT TwinMaker video player panel, for KinesisVideoStreamName, enter the name of the Kinesis video stream you want to stream video from.

Note
To stream metadata to the Grafana video panel, you must first have created an entity with a video streaming component.

5. Optional: To stream metadata from AWS IoT SiteWise assets to the video player, for Entity, choose the AWS IoT TwinMaker entity that you created in your AWS IoT TwinMaker scene. For the Component name, choose the video component you created for the entity in your AWS IoT TwinMaker scene.

Using the AWS IoT TwinMaker Flink library

AWS IoT TwinMaker provides a Flink library that you can use to read and write data to external data stores used in your digital twins. You use the AWS IoT TwinMaker Flink library by installing it as a custom connector in Managed Service for Apache Flink and performing Flink SQL queries in a Zeppelin notebook in Managed Service for Apache Flink. The notebook can be promoted to a continuously running stream processing application. The library uses AWS IoT TwinMaker components to retrieve data from your workspace.

The AWS IoT TwinMaker Flink library requires the following.

Prerequisites

1. A fully populated workspace with scenes and components. Use the built-in component types for data from AWS services (AWS IoT SiteWise and Kinesis Video Streams). Create custom component types for data from third-party sources. For more information, see Using and creating component types.
2. An understanding of Studio notebooks with Managed Service for Apache Flink. These notebooks are powered by Apache Zeppelin and use the Apache Flink framework. For more information, see Using a Studio notebook with Managed Service for Apache Flink.

For instructions on using the library, see the AWS IoT TwinMaker Flink library user guide.

For instructions on setting up AWS IoT TwinMaker with the quick start in AWS IoT TwinMaker samples, see the README file for the sample insights application.
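As a rough illustration of that workflow, the following Zeppelin note sketches a Flink SQL table backed by the library's connector. The connector identifier and option keys here are placeholders, not the library's documented names; take the real ones from the AWS IoT TwinMaker Flink library user guide before running anything in your notebook.

%flink.ssql
-- Sketch only: 'iottwinmaker' and the option keys are assumed placeholders.
-- Replace them with the identifiers documented in the AWS IoT TwinMaker
-- Flink library user guide.
CREATE TABLE mixer_temperature (
    event_time TIMESTAMP(3),
    temperature DOUBLE
) WITH (
    'connector'      = 'iottwinmaker',
    'workspace.id'   = 'MyWorkspace',
    'entity.id'      = 'MixerEntity',
    'component.name' = 'TemperatureComponent'
);

-- Once registered, the table can be queried like any other Flink source.
SELECT event_time, temperature FROM mixer_temperature;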
Logging and monitoring in AWS IoT TwinMaker

Monitoring is an important part of maintaining the reliability, availability, and performance of AWS IoT TwinMaker and your other AWS solutions. AWS IoT TwinMaker supports the following monitoring tools to watch the service, report when something is wrong, and take automatic actions when appropriate:

• Amazon CloudWatch monitors your AWS resources and the applications that you run on AWS in real time. You can collect and track metrics, create customized dashboards, and set alarms that notify you or take actions when a specified metric reaches a threshold that you specify. For example, you can have CloudWatch track CPU usage or other metrics for your Amazon EC2 instances and automatically launch new instances when needed. For more information, see the Amazon CloudWatch User Guide.
• Amazon CloudWatch Logs monitors, stores, and provides access to your log files from AWS IoT TwinMaker gateways, CloudTrail, and other sources. CloudWatch Logs can monitor information in the log files and notify you when certain thresholds are met. You can also archive your log data in highly durable storage. For more information, see the Amazon CloudWatch Logs User Guide.
• AWS CloudTrail captures API calls and related events made by or on behalf of your AWS account and delivers the log files to an Amazon S3 bucket that you specify. You can identify which users and accounts called AWS, the source IP address from which the calls were made, and when the calls occurred. For more information, see the AWS CloudTrail User Guide.

Topics
• Monitoring AWS IoT TwinMaker with Amazon CloudWatch metrics
• Logging AWS IoT TwinMaker API calls with AWS CloudTrail

Monitoring AWS IoT TwinMaker with Amazon CloudWatch metrics

You can monitor AWS IoT TwinMaker by using CloudWatch, which collects raw data and processes it into readable, near real-time metrics. These statistics are kept for 15 months, so that you can access historical information and gain a better perspective on how your web application or service is performing. You can also set alarms that watch for certain thresholds, and send notifications or take actions when those thresholds are met. For more information, see the Amazon CloudWatch User Guide.

AWS IoT TwinMaker publishes the metrics and dimensions listed in the following sections to the AWS/IoTTwinMaker namespace.

Tip
AWS IoT TwinMaker publishes metrics at one-minute intervals. When you view these metrics in graphs in the CloudWatch console, we recommend that you choose a Period of 1 minute to see the highest available resolution of your metric data.

Contents
• Metrics

Metrics

AWS IoT TwinMaker publishes the following metrics.

ComponentTypeCreationFailure
This metric reports whether component type creation is successful. The metric is published when a component type is in the CREATING state. This happens when a component type is created with the required properties in the schema initializer and these properties are instantiated with default values. The metric value can be either 0 for success or 1 for failure.
Dimensions: ComponentTypeId, WorkspaceId. Units: Count

ComponentTypeUpdateFailure
This metric reports whether the component type update is successful. The metric is published when a component type is in the UPDATING state. This happens when a component type is updated with the required properties in the schema initializer and these properties are instantiated with default values. The metric value can be either 0 for success or 1 for failure.
Dimensions: ComponentTypeId, WorkspaceId. Units: Count

EntityCreationFailure
This metric reports whether entity creation is successful. The metric is published when an entity is in the CREATING state. This happens when an entity is created with a component. The metric value can be either 0 for success or 1 for failure.
Dimensions: EntityName, EntityId, WorkspaceId. Units: Count

EntityUpdateFailure
This metric reports whether the entity update is successful. The metric is published when an entity is in the UPDATING state. This happens when an entity is updated. The metric value can be either 0 for success or 1 for failure.
Dimensions: EntityName, EntityId, WorkspaceId. Units: Count

EntityDeletionFailure
This metric reports whether entity deletion is successful. The metric is published when an entity is in the DELETING state. This happens when an entity is deleted. The metric value can be either 0 for success or 1 for failure.
Dimensions: EntityName, EntityId, WorkspaceId. Units: Count

Tip
All metrics are published to the AWS/IoTTwinMaker namespace.
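Because each failure metric emits 1 on failure and 0 on success, a simple alarm on a nonzero Sum catches problems quickly. The following AWS CLI sketch creates such an alarm for EntityCreationFailure; note that a CloudWatch alarm must specify exactly the dimensions the metric is published with, so all three are given here. The workspace, entity, and SNS topic values are placeholders for your own.

# Sketch only: replace the dimension values and SNS topic ARN with your own.
aws cloudwatch put-metric-alarm \
    --alarm-name twinmaker-entity-creation-failure \
    --namespace AWS/IoTTwinMaker \
    --metric-name EntityCreationFailure \
    --dimensions Name=WorkspaceId,Value=MyWorkspace Name=EntityId,Value=example-entity-id Name=EntityName,Value=ExampleEntity \
    --statistic Sum \
    --period 60 \
    --evaluation-periods 1 \
    --threshold 0 \
    --comparison-operator GreaterThanThreshold \
    --treat-missing-data notBreaching \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:example-alerts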
Logging AWS IoT TwinMaker API calls with AWS CloudTrail

AWS IoT TwinMaker is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service in AWS IoT TwinMaker. CloudTrail captures API calls for AWS IoT TwinMaker as events. The calls captured include calls from the AWS IoT TwinMaker console and code calls to the AWS IoT TwinMaker API operations. If you create a trail, you can enable continuous delivery of CloudTrail events to an Amazon S3 bucket, including events for AWS IoT TwinMaker. If you don't configure a trail, you can still view the most recent events in the CloudTrail console in Event history. Using the information collected by CloudTrail, you can determine the request that was made to AWS IoT TwinMaker, the IP address from which the request was made, who made the request, when it was made, and additional details. For more information about CloudTrail, see the AWS CloudTrail User Guide.

AWS IoT TwinMaker information in CloudTrail

When you create your AWS account, CloudTrail is automatically enabled. CloudTrail records activity that occurs in AWS IoT TwinMaker, along with other AWS service events, in Event history. You can view, search, and download recent events in your AWS account. For more information, see Viewing events with CloudTrail event history.

For an ongoing record of events in your AWS account, including events for AWS IoT TwinMaker, create a trail. A trail enables CloudTrail to deliver log files to an Amazon S3 bucket. By default, when you create a trail in the console, the trail applies to all AWS Regions. CloudTrail logs events from all Regions in the AWS partition and delivers the log files to the Amazon S3 bucket that you specify. Additionally, you can configure other AWS services to further analyze and act upon the event data collected in CloudTrail logs. For more information, see the following:

• Overview for creating a trail
• CloudTrail supported services and integrations
• Configuring Amazon SNS notifications for CloudTrail
• Receiving CloudTrail log files from multiple Regions and Receiving CloudTrail log files from multiple accounts

Most AWS IoT TwinMaker operations are logged by CloudTrail and are documented in the AWS IoT TwinMaker API Reference. The following data plane operations aren't logged by CloudTrail:

• GetPropertyValue
• GetPropertyValueHistory
• BatchPutPropertyValues

Every event or log entry contains information about who generated the request. The identity information helps you determine the following:

• Whether the request was made with root or user credentials.
• Whether the request was made with temporary security credentials for a role or federated user.
• Whether the request was made by another AWS service.

For more information, see the CloudTrail userIdentity element.
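You can spot-check Event history from the AWS CLI without creating a trail. The following sketch lists recent AWS IoT TwinMaker management events; the event source value follows the standard service-endpoint naming convention, but verify it against an event recorded in your own account.

# Sketch only: list the ten most recent AWS IoT TwinMaker events.
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventSource,AttributeValue=iottwinmaker.amazonaws.com \
    --max-results 10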
Security in AWS IoT TwinMaker

Cloud security at AWS is the highest priority. As an AWS customer, you benefit from data centers and network architectures that are built to meet the requirements of the most security-sensitive organizations.

Security is a shared responsibility between AWS and you. The shared responsibility model describes this as security of the cloud and security in the cloud:

• Security of the cloud – AWS is responsible for protecting the infrastructure that runs AWS services in the AWS Cloud. AWS also provides you with services that you can use securely. Third-party auditors regularly test and verify the effectiveness of our security as part of the AWS Compliance Programs. To learn about the compliance programs that apply to AWS IoT TwinMaker, see AWS Services in Scope by Compliance Program.
• Security in the cloud – Your responsibility is determined by the AWS service that you use. You are also responsible for other factors including the sensitivity of your data, your company's requirements, and applicable laws and regulations.

This documentation helps you understand how to apply the shared responsibility model when using AWS IoT TwinMaker. The following topics show you how to configure AWS IoT TwinMaker to meet your security and compliance objectives. You also learn how to use other AWS services that help you to monitor and secure your AWS IoT TwinMaker resources.

Topics
• Data protection in AWS IoT TwinMaker
• Identity and Access Management for AWS IoT TwinMaker
• AWS IoT TwinMaker and interface VPC endpoints (AWS PrivateLink)
• Compliance Validation for AWS IoT TwinMaker
• Resilience in AWS IoT TwinMaker
• Infrastructure Security in AWS IoT TwinMaker

Data protection in AWS IoT TwinMaker

The AWS shared responsibility model applies to data protection in AWS IoT TwinMaker. As described in this model, AWS is responsible for protecting the global infrastructure that runs all of the AWS Cloud. You are responsible for maintaining control over your content that is hosted on this infrastructure. You are also responsible for the security configuration and management tasks for the AWS services that you use. For more information about data privacy, see the Data Privacy FAQ. For information about data protection in Europe, see the AWS Shared Responsibility Model and GDPR blog post on the AWS Security Blog.

For data protection purposes, we recommend that you protect AWS account credentials and set up individual users with AWS IAM Identity Center or AWS Identity and Access Management (IAM). That way, each user is given only the permissions necessary to fulfill their job duties. We also recommend that you secure your data in the following ways:

• Use multi-factor authentication (MFA) with each account.
• Use SSL/TLS to communicate with AWS resources. We require TLS 1.2 and recommend TLS 1.3.
• Set up API and user activity logging with AWS CloudTrail. For information about using CloudTrail trails to capture AWS activities, see Working with CloudTrail trails in the AWS CloudTrail User Guide.
• Use AWS encryption solutions, along with all default security controls within AWS services.
• Use advanced managed security services such as Amazon Macie, which assists in discovering and securing sensitive data that is stored in Amazon S3.
• If you require FIPS 140-3 validated cryptographic modules when accessing AWS through a command line interface or an API, use a FIPS endpoint. For more information about the available FIPS endpoints, see Federal Information Processing Standard (FIPS) 140-3.

We strongly recommend that you never put confidential or sensitive information, such as your customers' email addresses, into tags or free-form text fields such as a Name field. This includes when you work with AWS IoT TwinMaker or other AWS services using the console, API, AWS CLI, or AWS SDKs. Any data that you enter into tags or free-form text fields used for names may be used for billing or diagnostic logs. If you provide a URL to an external server, we strongly recommend that you do not include credentials information in the URL to validate your request to that server.

Encryption at rest

AWS IoT TwinMaker stores your workspace information in an Amazon S3 bucket that the service creates for you, if you choose. The bucket that the service creates for you has default server-side encryption enabled. If you choose to use your own Amazon S3 bucket when you create a new workspace, we recommend that you enable default server-side encryption. For more information about default encryption in Amazon S3, see Setting default server-side encryption behavior for Amazon S3 buckets.
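One way to apply that recommendation to your own bucket is sketched below; the bucket name is a placeholder, and SSE-S3 (AES256) is used here, though you might choose SSE-KMS instead.

# Sketch only: enable default server-side encryption (SSE-S3) on your bucket.
aws s3api put-bucket-encryption \
    --bucket amzn-s3-demo-bucket \
    --server-side-encryption-configuration \
    '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'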
Encryption in transit

All data sent to AWS IoT TwinMaker is sent over a TLS connection using the HTTPS protocol, so it's secure by default while in transit.

Note
We recommend that you use HTTPS on Amazon S3 bucket addresses as a control to enforce encryption in transit when AWS IoT TwinMaker interacts with an Amazon S3 bucket. For more information on Amazon S3 buckets, see Creating, configuring, and working with Amazon S3 buckets.

Identity and Access Management for AWS IoT TwinMaker

AWS Identity and Access Management (IAM) is an AWS service that helps an administrator securely control access to AWS resources. IAM administrators control who can be authenticated (signed in) and authorized (have permissions) to use AWS IoT TwinMaker resources. IAM is an AWS service that you can use with no additional charge.

Topics
• Audience
• Authenticating with identities
• Managing access using policies
• How AWS IoT TwinMaker works with IAM
• Identity-based policy examples for AWS IoT TwinMaker
• Troubleshooting AWS IoT TwinMaker identity and access
• Using service-linked roles for AWS IoT TwinMaker
• AWS managed policies for AWS IoT TwinMaker

Audience

How you use AWS Identity and Access Management (IAM) differs, depending on the work that you do in AWS IoT TwinMaker.

Service user – If you use the AWS IoT TwinMaker service to do your job, then your administrator provides you with the credentials and permissions that you need. As you use more AWS IoT TwinMaker features to do your work, you might need additional permissions. Understanding how access is managed can help you request the right permissions from your administrator. If you cannot access a feature in AWS IoT TwinMaker, see Troubleshooting AWS IoT TwinMaker identity and access.

Service administrator – If you're in charge of AWS IoT TwinMaker resources at your company, you probably have full access to AWS IoT TwinMaker. It's your job to determine which AWS IoT TwinMaker features and resources your service users should access. You must then submit requests to your IAM administrator to change the permissions of your service users. Review the information on this page to understand the basic concepts of IAM. To learn more about how your company can use IAM with AWS IoT TwinMaker, see How AWS IoT TwinMaker works with IAM.

IAM administrator – If you're an IAM administrator, you might want to learn details about how you can write policies to manage access to AWS IoT TwinMaker. To view example AWS IoT TwinMaker identity-based policies that you can use in IAM, see Identity-based policy examples for AWS IoT TwinMaker.

Authenticating with identities

Authentication is how you sign in to AWS using your identity credentials. You must be authenticated (signed in to AWS) as the AWS account root user, as an IAM user, or by assuming an IAM role. You can sign in to AWS as a federated identity by using credentials provided through an identity source. AWS IAM Identity Center (IAM Identity Center) users, your company's single sign-on authentication, and your Google or Facebook credentials are examples of federated identities. When you sign in as a federated identity, your administrator previously set up identity federation using IAM roles. When you access AWS by using federation, you are indirectly assuming a role.

Depending on the type of user you are, you can sign in to the AWS Management Console or the AWS access portal. For more information about signing in to AWS, see How to sign in to your AWS account in the AWS Sign-In User Guide.

If you access AWS programmatically, AWS provides a software development kit (SDK) and a command line interface (CLI) to cryptographically sign your requests by using your credentials. If you don't use AWS tools, you must sign requests yourself. For more information about using the recommended method to sign requests yourself, see AWS Signature Version 4 for API requests in the IAM User Guide.

Regardless of the authentication method that you use, you might be required to provide additional security information. For example, AWS recommends that you use multi-factor authentication (MFA) to increase the security of your account. To learn more, see Multi-factor authentication in the AWS IAM Identity Center User Guide and AWS Multi-factor authentication in IAM in the IAM User Guide.

AWS account root user

When you create an AWS account, you begin with one sign-in identity that has complete access to all AWS services and resources in the account. This identity is called the AWS account root user and is accessed by signing in with the email address and password that you used to create the account. We strongly recommend that you don't use the root user for your everyday tasks. Safeguard your root user credentials and use them to perform the tasks that only the root user can perform. For the complete list of tasks that require you to sign in as the root user, see Tasks that require root user credentials in the IAM User Guide.

Federated identity

As a best practice, require human users, including users that require administrator access, to use federation with an identity provider to access AWS services by using temporary credentials. A federated identity is a user from your enterprise user directory, a web identity provider, the AWS Directory Service, the Identity Center directory, or any user that accesses AWS services by using credentials provided through an identity source. When federated identities access AWS accounts, they assume roles, and the roles provide temporary credentials.

For centralized access management, we recommend that you use AWS IAM Identity Center. You can create users and groups in IAM Identity Center, or you can connect and synchronize to a set of users and groups in your own identity source for use across all your AWS accounts and applications. For information about IAM Identity Center, see What is IAM Identity Center? in the AWS IAM Identity Center User Guide.

IAM users and groups

An IAM user is an identity within your AWS account that has specific permissions for a single person or application. Where possible, we recommend relying on temporary credentials instead of creating IAM users who have long-term credentials such as passwords and access keys. However, if you have specific use cases that require long-term credentials with IAM users, we recommend that you rotate access keys. For more information, see Rotate access keys regularly for use cases that require long-term credentials in the IAM User Guide.

An IAM group is an identity that specifies a collection of IAM users. You can't sign in as a group. You can use groups to specify permissions for multiple users at a time. Groups make permissions easier to manage for large sets of users. For example, you could have a group named IAMAdmins and give that group permissions to administer IAM resources.

Users are different from roles. A user is uniquely associated with one person or application, but a role is intended to be assumable by anyone who needs it. Users have permanent long-term credentials, but roles provide temporary credentials. To learn more, see Use cases for IAM users in the IAM User Guide.
IAM roles

An IAM role is an identity within your AWS account that has specific permissions. It is similar to an IAM user, but is not associated with a specific person. To temporarily assume an IAM role in the AWS Management Console, you can switch from a user to an IAM role (console). You can assume a role by calling an AWS CLI or AWS API operation or by using a custom URL. For more information about methods for using roles, see Methods to assume a role in the IAM User Guide.

IAM roles with temporary credentials are useful in the following situations:

• Federated user access – To assign permissions to a federated identity, you create a role and define permissions for the role. When a federated identity authenticates, the identity is associated with the role and is granted the permissions that are defined by the role. For information about roles for federation, see Create a role for a third-party identity provider (federation) in the IAM User Guide. If you use IAM Identity Center, you configure a permission set. To control what your identities can access after they authenticate, IAM Identity Center correlates the permission set to a role in IAM. For information about permissions sets, see Permission sets in the AWS IAM Identity Center User Guide.
• Temporary IAM user permissions – An IAM user or role can assume an IAM role to temporarily take on different permissions for a specific task.
• Cross-account access – You can use an IAM role to allow someone (a trusted principal) in a different account to access resources in your account. Roles are the primary way to grant cross-account access. However, with some AWS services, you can attach a policy directly to a resource (instead of using a role as a proxy). To learn the difference between roles and resource-based policies for cross-account access, see Cross account resource access in IAM in the IAM User Guide.
• Cross-service access – Some AWS services use features in other AWS services. For example, when you make a call in a service, it's common for that service to run applications in Amazon EC2 or store objects in Amazon S3. A service might do this using the calling principal's permissions, using a service role, or using a service-linked role.
• Forward access sessions (FAS) – When you use an IAM user or role to perform actions in AWS, you are considered a principal. When you use some services, you might perform an action that then initiates another action in a different service. FAS uses the permissions of the principal calling an AWS service, combined with the requesting AWS service to make requests to downstream services. FAS requests are only made when a service receives a request that requires interactions with other AWS services or resources to complete. In this case, you must have permissions to perform both actions. For policy details when making FAS requests, see Forward access sessions.
• Service role – A service role is an IAM role that a service assumes to perform actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see Create a role to delegate permissions to an AWS service in the IAM User Guide.
• Service-linked role – A service-linked role is a type of service role that is linked to an AWS service. The service can assume the role to perform an action on your behalf. Service-linked roles appear in your AWS account and are owned by the service. An IAM administrator can view, but not edit the permissions for service-linked roles.
• Applications running on Amazon EC2 – You can use an IAM role to manage temporary credentials for applications that are running on an EC2 instance and making AWS CLI or AWS API requests. This is preferable to storing access keys within the EC2 instance. To assign an AWS role to an EC2 instance and make it available to all of its applications, you create an instance profile that is attached to the instance. An instance profile contains the role and enables programs that are running on the EC2 instance to get temporary credentials. For more information, see Use an IAM role to grant permissions to applications running on Amazon EC2 instances in the IAM User Guide.
Managing access using policies

You control access in AWS by creating policies and attaching them to AWS identities or resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when a principal (user, root user, or role session) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. For more information about the structure and contents of JSON policy documents, see Overview of JSON policies in the IAM User Guide.

Administrators can use AWS JSON policies to specify who has access to what. That is, which principal can perform actions on what resources, and under what conditions.

By default, users and roles have no permissions. To grant users permission to perform actions on the resources that they need, an IAM administrator can create IAM policies. The administrator can then add the IAM policies to roles, and users can assume the roles.

IAM policies define permissions for an action regardless of the method that you use to perform the operation. For example, suppose that you have a policy that allows the iam:GetRole action. A user with that policy can get role information from the AWS Management Console, the AWS CLI, or the AWS API.

Identity-based policies

Identity-based policies are JSON permissions policy documents that you can attach to an identity, such as an IAM user, group of users, or role. These policies control what actions users and roles can perform, on which resources, and under what conditions. To learn how to create an identity-based policy, see Define custom IAM permissions with customer managed policies in the IAM User Guide.

Identity-based policies can be further categorized as inline policies or managed policies. Inline policies are embedded directly into a single user, group, or role. Managed policies are standalone policies that you can attach to multiple users, groups, and roles in your AWS account. Managed policies include AWS managed policies and customer managed policies. To learn how to choose between a managed policy or an inline policy, see Choose between managed policies and inline policies in the IAM User Guide.

Resource-based policies

Resource-based policies are JSON policy documents that you attach to a resource. Examples of resource-based policies are IAM role trust policies and Amazon S3 bucket policies. In services that support resource-based policies, service administrators can use them to control access to a specific resource. For the resource where the policy is attached, the policy defines what actions a specified principal can perform on that resource and under what conditions. You must specify a principal in a resource-based policy. Principals can include accounts, users, roles, federated users, or AWS services.

Resource-based policies are inline policies that are located in that service. You can't use AWS managed policies from IAM in a resource-based policy.

Access control lists (ACLs)

Access control lists (ACLs) control which principals (account members, users, or roles) have permissions to access a resource. ACLs are similar to resource-based policies, although they do not use the JSON policy document format. Amazon S3, AWS WAF, and Amazon VPC are examples of services that support ACLs. To learn more about ACLs, see Access control list (ACL) overview in the Amazon Simple Storage Service Developer Guide.
Other policy types

AWS supports additional, less-common policy types. These policy types can set the maximum permissions granted to you by the more common policy types.

• Permissions boundaries – A permissions boundary is an advanced feature in which you set the maximum permissions that an identity-based policy can grant to an IAM entity (IAM user or role). You can set a permissions boundary for an entity. The resulting permissions are the intersection of an entity's identity-based policies and its permissions boundaries. Resource-based policies that specify the user or role in the Principal field are not limited by the permissions boundary. An explicit deny in any of these policies overrides the allow. For more information about permissions boundaries, see Permissions boundaries for IAM entities in the IAM User Guide.
• Service control policies (SCPs) – SCPs are JSON policies that specify the maximum permissions for an organization or organizational unit (OU) in AWS Organizations. AWS Organizations is a service for grouping and centrally managing multiple AWS accounts that your business owns. If you enable all features in an organization, then you can apply service control policies (SCPs) to any or all of your accounts. The SCP limits permissions for entities in member accounts, including each AWS account root user. For more information about Organizations and SCPs, see Service control policies in the AWS Organizations User Guide.
• Resource control policies (RCPs) – RCPs are JSON policies that you can use to set the maximum available permissions for resources in your accounts without updating the IAM policies attached to each resource that you own. The RCP limits permissions for resources in member accounts and can impact the effective permissions for identities, including the AWS account root user, regardless of whether they belong to your organization. For more information about Organizations and RCPs, including a list of AWS services that support RCPs, see Resource control policies (RCPs) in the AWS Organizations User Guide.
• Session policies – Session policies are advanced policies that you pass as a parameter when you programmatically create a temporary session for a role or federated user. The resulting session's permissions are the intersection of the user or role's identity-based policies and the session policies. Permissions can also come from a resource-based policy. An explicit deny in any of these policies overrides the allow. For more information, see Session policies in the IAM User Guide.

Multiple policy types

When multiple types of policies apply to a request, the resulting permissions are more complicated to understand. To learn how AWS determines whether to allow a request when multiple policy types are involved, see Policy evaluation logic in the IAM User Guide.

How AWS IoT TwinMaker works with IAM

Before you use IAM to manage access to AWS IoT TwinMaker, learn what IAM features are available to use with AWS IoT TwinMaker.

IAM features you can use with AWS IoT TwinMaker

• Identity-based policies: Yes
• Resource-based policies: No
• Policy actions: Yes
• Policy resources: Yes
• Policy condition keys: Yes
• ACLs: No
• ABAC (tags in policies): Partial
• Temporary credentials: Yes
• Principal permissions: Yes
• Service roles: Yes
• Service-linked roles: No

To get a high-level view of how AWS IoT TwinMaker and other AWS services work with most IAM features, see AWS services that work with IAM in the AWS IAM Identity Center User Guide.

Identity-based policies for AWS IoT TwinMaker

Supports identity-based policies: Yes

Identity-based policies are JSON permissions policy documents that you can attach to an identity, such as an IAM user, group of users, or role. These policies control what actions users and roles can perform, on which resources, and under what conditions. To learn how to create an identity-based policy, see Define custom IAM permissions with customer managed policies in the IAM User Guide.

With IAM identity-based policies, you can specify allowed or denied actions and resources as well as the conditions under which actions are allowed or denied. You can't specify the principal in an identity-based policy because it applies to the user or role to which it is attached. To learn about all of the elements that you can use in a JSON policy, see IAM JSON policy elements reference in the IAM User Guide.

Identity-based policy examples for AWS IoT TwinMaker

To view examples of AWS IoT TwinMaker identity-based policies, see Identity-based policy examples for AWS IoT TwinMaker.
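For orientation before the feature-by-feature details, here is a minimal identity-based policy of the kind described above, granting read-only access to a single workspace and its entities. The Region, account ID, and workspace name are placeholders, and the ARN patterns should be checked against Resources defined by AWS IoT TwinMaker in the Service Authorization Reference.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadSingleWorkspace",
            "Effect": "Allow",
            "Action": [
                "iottwinmaker:GetWorkspace",
                "iottwinmaker:GetEntity",
                "iottwinmaker:ListEntities"
            ],
            "Resource": [
                "arn:aws:iottwinmaker:us-east-1:123456789012:workspace/MyWorkspace",
                "arn:aws:iottwinmaker:us-east-1:123456789012:workspace/MyWorkspace/*"
            ]
        }
    ]
}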
Resource-based policies within AWS IoT TwinMaker

Supports resource-based policies: No

Resource-based policies are JSON policy documents that you attach to a resource. Examples of resource-based policies are IAM role trust policies and Amazon S3 bucket policies. In services that support resource-based policies, service administrators can use them to control access to a specific resource. For the resource where the policy is attached, the policy defines what actions a specified principal can perform on that resource and under what conditions. You must specify a principal in a resource-based policy. Principals can include accounts, users, roles, federated users, or AWS services.

To enable cross-account access, you can specify an entire account or IAM entities in another account as the principal in a resource-based policy. Adding a cross-account principal to a resource-based policy is only half of establishing the trust relationship. When the principal and the resource are in different AWS accounts, an IAM administrator in the trusted account must also grant the principal entity (user or role) permission to access the resource. They grant permission by attaching an identity-based policy to the entity. However, if a resource-based policy grants access to a principal in the same account, no additional identity-based policy is required. For more information, see Cross account resource access in IAM in the IAM User Guide.

Policy actions for AWS IoT TwinMaker

Supports policy actions: Yes

Administrators can use AWS JSON policies to specify who has access to what. That is, which principal can perform actions on what resources, and under what conditions.

The Action element of a JSON policy describes the actions that you can use to allow or deny access in a policy. Policy actions usually have the same name as the associated AWS API operation. There are some exceptions, such as permission-only actions that don't have a matching API operation. There are also some operations that require multiple actions in a policy. These additional actions are called dependent actions. Include actions in a policy to grant permissions to perform the associated operation.

To see a list of AWS IoT TwinMaker actions, see Actions defined by AWS IoT TwinMaker in the Service Authorization Reference.

Policy actions in AWS IoT TwinMaker use the following prefix before the action: iottwinmaker

To specify multiple actions in a single statement, separate them with commas.

"Action": [
    "iottwinmaker:action1",
    "iottwinmaker:action2"
]

To view examples of AWS IoT TwinMaker identity-based policies, see Identity-based policy examples for AWS IoT TwinMaker.

Policy resources for AWS IoT TwinMaker

Supports policy resources: Yes

Administrators can use AWS JSON policies to specify who has access to what. That is, which principal can perform actions on what resources, and under what conditions.

The Resource JSON policy element specifies the object or objects to which the action applies. Statements must include either a Resource or a NotResource element. As a best practice, specify a resource using its Amazon Resource Name (ARN). You can do this for actions that support a specific resource type, known as resource-level permissions.

For actions that don't support resource-level permissions, such as listing operations, use a wildcard (*) to indicate that the statement applies to all resources.

"Resource": "*"

To see a list of AWS IoT TwinMaker resource types and their ARNs, see Resources defined by AWS IoT TwinMaker in the Service Authorization Reference. To learn with which actions you can specify the ARN of each resource, see Actions defined by AWS IoT TwinMaker. To view examples of AWS IoT TwinMaker identity-based policies, see Identity-based policy examples for AWS IoT TwinMaker.
Policy condition keys for AWS IoT TwinMaker

Supports service-specific policy condition keys: Yes

Administrators can use AWS JSON policies to specify who has access to what. That is, which principal can perform actions on what resources, and under what conditions.

The Condition element (or Condition block) lets you specify conditions in which a statement is in effect. The Condition element is optional. You can create conditional expressions that use condition operators, such as equals or less than, to match the condition in the policy with values in the request.

If you specify multiple Condition elements in a statement, or multiple keys in a single Condition element, AWS evaluates them using a logical AND operation. If you specify multiple values for a single condition key, AWS evaluates the condition using a logical OR operation. All of the conditions must be met before the statement's permissions are granted.

You can also use placeholder variables when you specify conditions. For example, you can grant an IAM user permission to access a resource only if it is tagged with their IAM user name. For more information, see IAM policy elements: variables and tags in the IAM User Guide.

AWS supports global condition keys and service-specific condition keys. To see all AWS global condition keys, see AWS global condition context keys in the IAM User Guide.

To see a list of AWS IoT TwinMaker condition keys, see Condition keys for AWS IoT TwinMaker in the Service Authorization Reference. To learn with which actions and resources you can use a condition key, see Actions defined by AWS IoT TwinMaker. To view examples of AWS IoT TwinMaker identity-based policies, see Identity-based policy examples for AWS IoT TwinMaker.

Access control lists (ACLs) in AWS IoT TwinMaker

Supports ACLs: No

Access control lists (ACLs) control which principals (account members, users, or roles) have permissions to access a resource. ACLs are similar to resource-based policies, although they do not use the JSON policy document format.

Attribute-based access control (ABAC) with AWS IoT TwinMaker

Supports ABAC (tags in policies): Partial

Attribute-based access control (ABAC) is an authorization strategy that defines permissions based on attributes. In AWS, these attributes are called tags. You can attach tags to IAM entities (users or roles) and to many AWS resources. Tagging entities and resources is the first step of ABAC. Then you design ABAC policies to allow operations when the principal's tag matches the tag on the resource that they are trying to access.

ABAC is helpful in environments that are growing rapidly and helps with situations where policy management becomes cumbersome.

To control access based on tags, you provide tag information in the condition element of a policy using the aws:ResourceTag/key-name, aws:RequestTag/key-name, or aws:TagKeys condition keys. If a service supports all three condition keys for every resource type, then the value is Yes for the service. If a service supports all three condition keys for only some resource types, then the value is Partial.

For more information about ABAC, see Define permissions with ABAC authorization in the IAM User Guide. To view a tutorial with steps for setting up ABAC, see Use attribute-based access control (ABAC) in the IAM User Guide.
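A typical ABAC statement of the kind described above compares a principal tag with a resource tag. In this hedged sketch, the team tag key is arbitrary, and because tag support is Partial for this service, confirm in the Service Authorization Reference which AWS IoT TwinMaker actions honor aws:ResourceTag before relying on it.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowMatchingTeamTag",
            "Effect": "Allow",
            "Action": [
                "iottwinmaker:GetWorkspace",
                "iottwinmaker:ListEntities"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/team": "${aws:PrincipalTag/team}"
                }
            }
        }
    ]
}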
Using temporary credentials with AWS IoT TwinMaker

Supports temporary credentials: Yes

Some AWS services don't work when you sign in using temporary credentials. For additional information, including which AWS services work with temporary credentials, see AWS services that work with IAM in the IAM User Guide.

You are using temporary credentials if you sign in to the AWS Management Console using any method except a user name and password. For example, when you access AWS using your company's single sign-on (SSO) link, that process automatically creates temporary credentials. You also automatically create temporary credentials when you sign in to the console as a user and then switch roles. For more information about switching roles, see Switch from a user to an IAM role (console) in the IAM User Guide.

You can manually create temporary credentials using the AWS CLI or AWS API. You can then use those temporary credentials to access AWS. AWS recommends that you dynamically generate temporary credentials instead of using long-term access keys. For more information, see Temporary security credentials in IAM.

Cross-service principal permissions for AWS IoT TwinMaker

Supports forward access sessions (FAS): Yes

When you use an IAM user or role to perform actions in AWS, you are considered a principal. When you use some services, you might perform an action that then initiates another action in a different service. FAS uses the permissions of the principal calling an AWS service, combined with the requesting AWS service to make requests to downstream services. FAS requests are only made when a service receives a request that requires interactions with other AWS services or resources to complete. In this case, you must have permissions to perform both actions. For policy details when making FAS requests, see Forward access sessions.

Service roles for AWS IoT TwinMaker

Supports service roles: Yes

A service role is an IAM role that a service assumes to perform actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see Create a role to delegate permissions to an AWS service in the IAM User Guide.

Warning
Changing the permissions for a service role might break AWS IoT TwinMaker functionality. Edit service roles only when AWS IoT TwinMaker provides guidance to do so.

Service-linked roles for AWS IoT TwinMaker

Supports service-linked roles: No

A service-linked role is a type of service role that is linked to an AWS service. The service can assume the role to perform an action on your behalf. Service-linked roles appear in your AWS account and are owned by the service. An IAM administrator can view, but not edit the permissions for service-linked roles.

For details about creating or managing service-linked roles, see AWS services that work with IAM. Find a service in the table that includes a Yes in the Service-linked role column. Choose the Yes link to view the service-linked role documentation for that service.

Identity-based policy examples for AWS IoT TwinMaker

By default, users and roles don't have permission to create or modify AWS IoT TwinMaker resources. They also can't perform tasks by using the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS API. To grant users permission to perform actions on the resources that they need, an IAM administrator can create IAM policies. The administrator can then add the IAM policies to roles, and users can assume the roles.

To learn how to create an IAM identity-based policy by using these example JSON policy documents, see Create IAM policies (console) in the IAM User Guide.

For details about actions and resource types defined by AWS IoT TwinMaker, including the format of the ARNs for each of the resource types, see Actions, resources, and condition keys for AWS IoT TwinMaker in the Service Authorization Reference.

Topics
• Policy best practices
• Using the AWS IoT TwinMaker console
• Allow users to view their own permissions

Policy best practices

Identity-based policies determine whether someone can create, access, or delete AWS IoT TwinMaker resources in your account. These actions can incur costs for your AWS account. When you create or edit identity-based policies, follow these guidelines and recommendations:

• Get started with AWS managed policies and move toward least-privilege permissions – To get started granting permissions to your users and workloads, use the AWS managed policies that grant permissions for many common use cases. They are available in your AWS account. We recommend that you reduce permissions further by defining AWS customer managed policies that are specific to your use cases. For more information, see AWS managed policies or AWS managed policies for job functions in the IAM User Guide.
• Apply least-privilege permissions – When you set permissions with IAM policies, grant only the permissions required to perform a task. You do this by defining the actions that can be taken on specific resources under specific conditions, also known as least-privilege permissions. For more information about using IAM to apply permissions, see Policies and permissions in IAM in the IAM User Guide.
• Use conditions in IAM policies to further restrict access – You can add a condition to your policies to limit access to actions and resources. For example, you can write a policy condition to specify that all requests must be sent using SSL. You can also use conditions to grant access to service actions if they are used through a specific AWS service, such as AWS CloudFormation. For more information, see IAM JSON policy elements: Condition in the IAM User Guide.
• Use IAM Access Analyzer to validate your IAM policies to ensure secure and functional permissions – IAM Access Analyzer validates new and existing policies so that the policies adhere to the IAM policy language (JSON) and IAM best practices. IAM Access Analyzer provides more than 100 policy checks and actionable recommendations to help you author secure and functional policies. For more information, see Validate policies with IAM Access Analyzer in the IAM User Guide.
• Require multi-factor authentication (MFA) – If you have a scenario that requires IAM users or a root user in your AWS account, turn on MFA for additional security. To require MFA when API operations are called, add MFA conditions to your policies. For more information, see Secure API access with MFA in the IAM User Guide.

For more information about best practices in IAM, see Security best practices in IAM in the IAM User Guide.

Using the AWS IoT TwinMaker console

To access the AWS IoT TwinMaker console, you must have a minimum set of permissions. These permissions must allow you to list and view details about the AWS IoT TwinMaker resources in your AWS account. If you create an identity-based policy that is more restrictive than the minimum required permissions, the console won't function as intended for entities (users or roles) with that policy.

You don't need to allow minimum console permissions for users that are making calls only to the AWS CLI or the AWS API. Instead, allow access to only the actions that match the API operation that they're trying to perform.

To ensure that users and roles can still use the AWS IoT TwinMaker console, also attach the AWS IoT TwinMaker ConsoleAccess or ReadOnly AWS managed policy to the entities. For more information, see Adding permissions to a user in the AWS IAM Identity Center User Guide.

Allow users to view their own permissions

This example shows how you might create a policy that allows IAM users to view the inline and managed policies that are attached to their user identity. This policy includes permissions to complete this action on the console or programmatically using the AWS CLI or AWS API.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ViewOwnUserInfo",
            "Effect": "Allow",
            "Action": [
                "iam:GetUserPolicy",
                "iam:ListGroupsForUser",
                "iam:ListAttachedUserPolicies",
                "iam:ListUserPolicies",
                "iam:GetUser"
            ],
            "Resource": ["arn:aws:iam::*:user/${aws:username}"]
        },
        {
            "Sid": "NavigateInConsole",
            "Effect": "Allow",
            "Action": [
                "iam:GetGroupPolicy",
                "iam:GetPolicyVersion",
                "iam:GetPolicy",
                "iam:ListAttachedGroupPolicies",
                "iam:ListGroupPolicies",
                "iam:ListPolicyVersions",
                "iam:ListPolicies",
                "iam:ListUsers"
            ],
            "Resource": "*"
        }
    ]
}

Troubleshooting AWS IoT TwinMaker identity and access

Use the following information to help you diagnose and fix common issues that you might encounter when working with AWS IoT TwinMaker and IAM.

Topics
• I am not authorized to perform an action in AWS IoT TwinMaker
• I am not authorized to perform iam:PassRole
• I want to allow people outside of my AWS account to access my AWS IoT TwinMaker resources

I am not authorized to perform an action in AWS IoT TwinMaker

If you receive an error that you're not authorized to perform an action, your policies must be updated to allow you to perform the action.

The following example error occurs when the mateojackson IAM user tries to use the console to view details about a fictional my-example-widget resource but doesn't have the fictional iottwinmaker:GetWidget permissions.

User: arn:aws:iam::123456789012:user/mateojackson is not authorized to perform: iottwinmaker:GetWidget on resource: my-example-widget

In this case, the policy for the mateojackson user must be updated to allow access to the my-example-widget resource by using the iottwinmaker:GetWidget action.

If you need help, contact your AWS administrator. Your administrator is the person who provided you with your sign-in credentials.

I am not authorized to perform iam:PassRole

If you receive an error that you're not authorized to perform the iam:PassRole action, your policies must be updated to allow you to pass a role to AWS IoT TwinMaker.
Some AWS services allow you to pass an existing role to that service instead of creating a new service role or service-linked role. To do this, you must have permissions to pass the role to the service.

The following example error occurs when an IAM user named marymajor tries to use the console to perform an action in AWS IoT TwinMaker. However, the action requires the service to have permissions that are granted by a service role. Mary does not have permissions to pass the role to the service.

User: arn:aws:iam::123456789012:user/marymajor is not authorized to perform: iam:PassRole

In this case, Mary's policies must be updated to allow her to perform the iam:PassRole action.

If you need help, contact your AWS administrator. Your administrator is the person who provided you with your sign-in credentials.
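To resolve this class of error, an administrator can grant iam:PassRole for the specific role that AWS IoT TwinMaker needs to assume. The following statement is a minimal sketch: the account ID and the role name twinmaker-workspace-role are placeholders for the role your workspace actually uses, and the iam:PassedToService condition limits the grant to AWS IoT TwinMaker.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPassRoleToTwinMaker",
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::123456789012:role/twinmaker-workspace-role",
      "Condition": {
        "StringEquals": { "iam:PassedToService": "iottwinmaker.amazonaws.com" }
      }
    }
  ]
}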
I want to allow people outside of my AWS account to access my AWS IoT TwinMaker resources

You can create a role that users in other accounts or people outside of your organization can use to access your resources. You can specify who is trusted to assume the role. For services that support resource-based policies or access control lists (ACLs), you can use those policies to grant people access to your resources.

To learn more, consult the following:
• To learn whether AWS IoT TwinMaker supports these features, see How AWS IoT TwinMaker works with IAM.
• To learn how to provide access to your resources across AWS accounts that you own, see Providing access to an IAM user in another AWS account that you own in the IAM User Guide.
• To learn how to provide access to your resources to third-party AWS accounts, see Providing access to AWS accounts owned by third parties in the IAM User Guide.
• To learn how to provide access through identity federation, see Providing access to externally authenticated users (identity federation) in the IAM User Guide.
• To learn the difference between using roles and resource-based policies for cross-account access, see Cross account resource access in IAM in the IAM User Guide.

Using service-linked roles for AWS IoT TwinMaker

AWS IoT TwinMaker uses AWS Identity and Access Management (IAM) service-linked roles. A service-linked role is a unique type of IAM role that is linked directly to AWS IoT TwinMaker. Service-linked roles are predefined by AWS IoT TwinMaker and include all the permissions that the service requires to call other AWS services on your behalf.

A service-linked role makes setting up AWS IoT TwinMaker easier because you don't have to manually add the necessary permissions. AWS IoT TwinMaker defines the permissions of its service-linked roles, and unless defined otherwise, only AWS IoT TwinMaker can assume its roles. The defined permissions include the trust policy and the permissions policy, and that permissions policy cannot be attached to any other IAM entity.

You can delete a service-linked role only after first deleting its related resources. This protects your AWS IoT TwinMaker resources because you can't inadvertently remove permission to access the resources.

For information about other services that support service-linked roles, see AWS services that work with IAM and look for the services that have Yes in the Service-linked roles column. Choose a Yes with a link to view the service-linked role documentation for that service.

Service-linked role permissions for AWS IoT TwinMaker

AWS IoT TwinMaker uses the service-linked role named AWSServiceRoleForIoTTwinMaker, which allows AWS IoT TwinMaker to call other AWS services and to sync their resources on your behalf.

The AWSServiceRoleForIoTTwinMaker service-linked role trusts the following services to assume the role:
• iottwinmaker.amazonaws.com

The role permissions policy named AWSIoTTwinMakerServiceRolePolicy allows AWS IoT TwinMaker to complete the following actions on all your iotsitewise asset and asset-model resources:
• iotsitewise:DescribeAsset, iotsitewise:ListAssets, iotsitewise:DescribeAssetModel, and iotsitewise:ListAssetModels
• iottwinmaker:GetEntity, iottwinmaker:CreateEntity, iottwinmaker:UpdateEntity, iottwinmaker:DeleteEntity, and iottwinmaker:ListEntities
• iottwinmaker:GetComponentType, iottwinmaker:CreateComponentType, iottwinmaker:UpdateComponentType, iottwinmaker:DeleteComponentType, and iottwinmaker:ListComponentTypes

You must configure permissions to allow your users, groups, or roles to create, edit, or delete a service-linked role. For more information, see Service-linked role permissions in the IAM User Guide.
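For reference, the trust relationship on AWSServiceRoleForIoTTwinMaker follows the standard service-linked role shape shown below. This is an illustrative sketch built from the trusted service named above, not a verbatim copy of the role document in your account.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "iottwinmaker.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}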
Creating a service-linked role for AWS IoT TwinMaker

You don't need to manually create a service-linked role. When you synchronize your AWS IoT SiteWise assets and asset models (asset sync) in the AWS Management Console, the AWS CLI, or the AWS API, AWS IoT TwinMaker creates the service-linked role for you. If you delete this service-linked role and then need to create it again, the same process recreates the role in your account: when you synchronize your AWS IoT SiteWise assets and asset models, AWS IoT TwinMaker creates the service-linked role for you again.

You can also use the IAM console to create a service-linked role with the "IoT TwinMaker - Managed Role" use case. In the AWS CLI or the AWS API, create a service-linked role with the iottwinmaker.amazonaws.com service name. For more information, see Creating a service-linked role in the IAM User Guide.
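As a sketch of the CLI path, the standard IAM commands below create the role explicitly using the service name documented above, then confirm the result. No AWS IoT TwinMaker-specific flags are involved; these are the generic service-linked role commands.

# Create the service-linked role for AWS IoT TwinMaker
aws iam create-service-linked-role \
    --aws-service-name iottwinmaker.amazonaws.com

# Confirm that the role now exists in your account
aws iam get-role --role-name AWSServiceRoleForIoTTwinMaker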
Editing a service-linked role for AWS IoT TwinMaker

AWS IoT TwinMaker does not allow you to edit the AWSServiceRoleForIoTTwinMaker service-linked role. After you create a service-linked role, you cannot change the name of the role because various entities might reference the role. However, you can edit the description of the role using IAM. For more information, see Editing a service-linked role in the IAM User Guide.

Deleting a service-linked role for AWS IoT TwinMaker

If you no longer need to use a feature or service that requires a service-linked role, we recommend that you delete that role. That way you don't have an unused entity that is not actively monitored or maintained. However, you must clean up any serviceLinked-workspaces that are still using your service-linked role before you can manually delete the role.

Note: If the AWS IoT TwinMaker service is using the role when you try to delete the resources, then the deletion might fail. If that happens, wait for a few minutes and try the operation again.

To manually delete the service-linked role using IAM

Use the IAM console, the AWS CLI, or the AWS API to delete the AWSServiceRoleForIoTTwinMaker service-linked role. For more information, see Deleting a service-linked role in the IAM User Guide.

Supported Regions for AWS IoT TwinMaker service-linked roles

AWS IoT TwinMaker supports using service-linked roles in all of the Regions where the service is available. For more information, see AWS Regions and endpoints.

AWS managed policies for AWS IoT TwinMaker

To add permissions to users, groups, and roles, it is easier to use AWS managed policies than to write policies yourself. It takes time and expertise to create IAM customer managed policies that provide your team with only the permissions they need. To get started quickly, you can use our AWS managed policies. These policies cover common use cases and are available in your AWS account. For more information about AWS managed policies, see AWS managed policies in the IAM User Guide.

AWS services maintain and update AWS managed policies. You can't change the permissions in AWS managed policies. Services occasionally add additional permissions to an AWS managed policy to support new features. This type of update affects all identities (users, groups, and roles) where the policy is attached. Services are most likely to update an AWS managed policy when a new feature is launched or when new operations become available. Services do not remove permissions from an AWS managed policy, so policy updates won't break your existing permissions.

Additionally, AWS supports managed policies for job functions that span multiple services. For example, the ReadOnlyAccess AWS managed policy provides read-only access to all AWS services and resources. When a service launches a new feature, AWS adds read-only permissions for new operations and resources. For a list and descriptions of job function policies, see AWS managed policies for job functions in the IAM User Guide.

AWS managed policy: AWSIoTTwinMakerServiceRolePolicy

You can't attach AWSIoTTwinMakerServiceRolePolicy to your IAM entities. This policy is attached to a service-linked role that allows AWS IoT TwinMaker to perform actions on your behalf. For more information, see Service-linked role permissions for AWS IoT TwinMaker.
The role permissions policy named AWSIoTTwinMakerServiceRolePolicy allows AWS IoT TwinMaker to complete the following actions on all your iotsitewise asset and asset-model resources:
• iotsitewise:DescribeAsset, iotsitewise:ListAssets, iotsitewise:DescribeAssetModel, and iotsitewise:ListAssetModels
• iottwinmaker:GetEntity, iottwinmaker:CreateEntity, iottwinmaker:UpdateEntity, iottwinmaker:DeleteEntity, and iottwinmaker:ListEntities
• iottwinmaker:GetComponentType, iottwinmaker:CreateComponentType, iottwinmaker:UpdateComponentType, iottwinmaker:DeleteComponentType, and iottwinmaker:ListComponentTypes

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SiteWiseAssetReadAccess",
      "Effect": "Allow",
      "Action": [
        "iotsitewise:DescribeAsset"
      ],
      "Resource": [
        "arn:aws:iotsitewise:*:*:asset/*"
      ]
    },
    {
      "Sid": "SiteWiseAssetModelReadAccess",
      "Effect": "Allow",
      "Action": [
        "iotsitewise:DescribeAssetModel"
      ],
      "Resource": [
        "arn:aws:iotsitewise:*:*:asset-model/*"
      ]
    },
    {
      "Sid": "SiteWiseAssetModelAndAssetListAccess",
      "Effect": "Allow",
      "Action": [
        "iotsitewise:ListAssets",
        "iotsitewise:ListAssetModels"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Sid": "TwinMakerAccess",
      "Effect": "Allow",
      "Action": [
        "iottwinmaker:GetEntity",
        "iottwinmaker:CreateEntity",
        "iottwinmaker:UpdateEntity",
        "iottwinmaker:DeleteEntity",
        "iottwinmaker:ListEntities",
        "iottwinmaker:GetComponentType",
        "iottwinmaker:CreateComponentType",
        "iottwinmaker:UpdateComponentType",
        "iottwinmaker:DeleteComponentType",
        "iottwinmaker:ListComponentTypes"
      ],
      "Resource": [
        "arn:aws:iottwinmaker:*:*:workspace/*"
      ],
      "Condition": {
        "ForAnyValue:StringEquals": {
          "iottwinmaker:linkedServices": [
            "IOTSITEWISE"
          ]
        }
      }
    }
  ]
}

AWS IoT TwinMaker updates to AWS managed policies

View details about updates to AWS managed policies for AWS IoT TwinMaker since this service began tracking these changes. For automatic alerts about changes to this page, subscribe to the RSS feed on the Document history page.

Change: AWSIoTTwinMakerServiceRolePolicy – Added a policy
Description: AWS IoT TwinMaker added the role permissions policy named AWSIoTTwinMakerServiceRolePolicy, which allows AWS IoT TwinMaker to complete the actions listed above on all your iotsitewise asset and asset-model resources. For more information, see Service-linked role permissions for AWS IoT TwinMaker.
Date: November 17, 2023

Change: Started tracking changes
Description: AWS IoT TwinMaker started tracking changes for its AWS managed policies.
Date: May 11, 2022
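If you want to inspect the live policy document rather than rely on the copy above, you can retrieve it with the generic IAM CLI commands below. The ARN shown follows the usual aws-service-role path for policies attached to service-linked roles; treat it as an assumption and confirm it with aws iam list-policies if the call fails.

# Look up the managed policy and note its DefaultVersionId
aws iam get-policy \
    --policy-arn arn:aws:iam::aws:policy/aws-service-role/AWSIoTTwinMakerServiceRolePolicy

# Fetch the policy document; replace v1 with the DefaultVersionId
# returned by the previous call
aws iam get-policy-version \
    --policy-arn arn:aws:iam::aws:policy/aws-service-role/AWSIoTTwinMakerServiceRolePolicy \
    --version-id v1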
AWS IoT TwinMaker and interface VPC endpoints (AWS PrivateLink)

You can establish a private connection between your virtual private cloud (VPC) and AWS IoT TwinMaker by creating an interface VPC endpoint. Interface endpoints are powered by AWS PrivateLink, which you can use to privately access AWS IoT TwinMaker APIs without an internet gateway, network address translation (NAT) device, VPN connection, or AWS Direct Connect connection. Instances in your VPC don't need public IP addresses to communicate with AWS IoT TwinMaker APIs. Traffic between your VPC and AWS IoT TwinMaker doesn't leave the Amazon network.

Each interface endpoint is represented by one or more Elastic Network Interfaces in your subnets. For more information, see Interface VPC endpoints (AWS PrivateLink) in the Amazon VPC User Guide.

Considerations for AWS IoT TwinMaker VPC endpoints

Before you set up an interface VPC endpoint for AWS IoT TwinMaker, review Interface endpoint properties and limitations in the Amazon VPC User Guide. AWS IoT TwinMaker supports making calls to all of its API actions from your VPC.

• For data plane API operations, use the following endpoint:
data.iottwinmaker.region.amazonaws.com
The data plane API operations include the following:
  • GetPropertyValue
  • GetPropertyValueHistory
  • BatchPutPropertyValues
• For the control plane API operations, use the following endpoint:
api.iottwinmaker.region.amazonaws.com
The supported control plane API operations include the following:
  • CreateComponentType
  • CreateEntity
  • CreateScene
  • CreateWorkspace
  • DeleteComponentType
  • DeleteEntity
  • DeleteScene
  • DeleteWorkspace
  • GetComponentType
  • GetEntity
  • GetScene
  • GetWorkspace
  • ListComponentTypes
  • ListEntities
  • ListScenes
  • ListTagsForResource
  • ListWorkspaces
  • TagResource
  • UntagResource
  • UpdateComponentType
  • UpdateEntity
  • UpdateScene
  • UpdateWorkspace

Creating an interface VPC endpoint for AWS IoT TwinMaker

You can create a VPC endpoint for the AWS IoT TwinMaker service by using either the Amazon VPC console or the AWS Command Line Interface (AWS CLI). For more information, see Creating an interface endpoint in the Amazon VPC User Guide.

Create a VPC endpoint for AWS IoT TwinMaker that uses one of the following service names.
• For data plane API operations, use the following service name:
com.amazonaws.region.iottwinmaker.data
• For control plane API operations, use the following service name:
com.amazonaws.region.iottwinmaker.api

If you enable private DNS for the endpoint, you can make API requests to AWS IoT TwinMaker by using its default DNS name for the Region, for example, iottwinmaker.us-east-1.amazonaws.com. For more information, see Accessing a service through an interface endpoint in the Amazon VPC User Guide.

AWS IoT TwinMaker PrivateLink is supported in the following Regions:
• us-east-1 – The ControlPlane service is supported in the following Availability Zones: use1-az1, use1-az2, and use1-az6. The DataPlane service is supported in the following Availability Zones: use1-az1, use1-az2, and use1-az4.
• us-west-2 – The ControlPlane and DataPlane services are supported in the following Availability Zones: usw2-az1, usw2-az2, and usw2-az3.
• eu-west-1
• eu-central-1
• ap-southeast-1
• ap-southeast-2

For more information on Availability Zones, see Availability Zone IDs for your AWS resources - AWS Resource Access Manager.
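As a concrete sketch, the AWS CLI call below creates an interface endpoint for the control plane service in us-east-1. The VPC, subnet, and security group IDs are placeholders that you would replace with your own values.

aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type Interface \
    --vpc-id vpc-0123456789abcdef0 \
    --service-name com.amazonaws.us-east-1.iottwinmaker.api \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0 \
    --private-dns-enabled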
Accessing AWS IoT TwinMaker through an interface VPC endpoint

When you create an interface endpoint, AWS IoT TwinMaker generates endpoint-specific DNS hostnames that you can use to communicate with AWS IoT TwinMaker. The private DNS option is enabled by default. For more information, see Using private hosted zones in the Amazon VPC User Guide.

If you enable private DNS for the endpoint, you can make API requests to AWS IoT TwinMaker through one of the following VPC endpoints.
• For the data plane API operations, use the following endpoint. Replace region with your AWS Region.
data.iottwinmaker.region.amazonaws.com
• For the control plane API operations, use the following endpoint. Replace region with your AWS Region.
api.iottwinmaker.region.amazonaws.com

If you disable private DNS for the endpoint, you must do the following to access AWS IoT TwinMaker through the endpoint:
• Specify the VPC endpoint URL in API requests.
  • For the data plane API operations, use the following endpoint URL. Replace vpc-endpoint-id and region with your VPC endpoint ID and Region.
  vpc-endpoint-id.data.iottwinmaker.region.vpce.amazonaws.com
  • For the control plane API operations, use the following endpoint URL. Replace vpc-endpoint-id and region with your VPC endpoint ID and Region.
  vpc-endpoint-id.api.iottwinmaker.region.vpce.amazonaws.com
• Disable host prefix injection. The AWS CLI and AWS SDKs prepend the service endpoint with various host prefixes when you call each API operation. This causes the AWS CLI and AWS SDKs to produce invalid URLs for AWS IoT TwinMaker when you specify a VPC endpoint.

Important: You can't disable host prefix injection in the AWS CLI or AWS Tools for PowerShell. This means that if you've disabled private DNS, you won't be able to use the AWS CLI or AWS Tools for PowerShell to access AWS IoT TwinMaker through the VPC endpoint. If you want to use these tools to access AWS IoT TwinMaker through the endpoint, enable private DNS.

For more information about how to disable host prefix injection in the AWS SDKs, see the following documentation sections for each SDK:
• AWS SDK for C++
• AWS SDK for Go
• AWS SDK for Go v2
• AWS SDK for Java
• AWS SDK for Java 2.x
• AWS SDK for JavaScript
• AWS SDK for .NET
• AWS SDK for PHP
• AWS SDK for Python (Boto3)
• AWS SDK for Ruby

For more information, see Accessing a service through an interface endpoint in the Amazon VPC User Guide.

Creating a VPC endpoint policy for AWS IoT TwinMaker

You can attach an endpoint policy to your VPC endpoint that controls access to AWS IoT TwinMaker. The policy specifies the following information:
• The principal that can perform actions.
• The actions that can be performed.
• The resources on which actions can be performed.

For more information, see Controlling access to services with VPC endpoints in the Amazon VPC User Guide.

Example: VPC endpoint policy for AWS IoT TwinMaker actions

The following is an example of an endpoint policy for AWS IoT TwinMaker. When attached to an endpoint, this policy grants access to the listed AWS IoT TwinMaker actions for the IAM user iottwinmakeradmin in the AWS account 123456789012 on all resources.

{
  "Statement": [
    {
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/iottwinmakeradmin"
      },
      "Resource": "*",
      "Effect": "Allow",
      "Action": [
        "iottwinmaker:CreateEntity",
        "iottwinmaker:GetScene",
        "iottwinmaker:ListEntities"
      ]
    }
  ]
}

Compliance Validation for AWS IoT TwinMaker

To learn whether an AWS service is within the scope of specific compliance programs, see AWS services in Scope by Compliance Program and choose the compliance program that you are interested in. For general information, see AWS Compliance Programs.

You can download third-party audit reports using AWS Artifact.
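The SDK-specific setting has the same shape in each language. Because the CLI cannot express it, here is one sketch using the AWS SDK for Python (Boto3) to turn off host prefix injection when pointing a client at a VPC endpoint with private DNS disabled; the endpoint URL is a placeholder built from the vpc-endpoint-id pattern above.

import boto3
from botocore.config import Config

# Host prefix injection is on by default; disabling it keeps the SDK from
# prepending "api." or "data." to the VPC endpoint hostname.
config = Config(inject_host_prefix=False)

client = boto3.client(
    "iottwinmaker",
    region_name="us-east-1",
    endpoint_url="https://vpc-endpoint-id.api.iottwinmaker.us-east-1.vpce.amazonaws.com",
    config=config,
)

# A control plane call routed through the interface endpoint
print(client.list_workspaces())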
For more information, see Downloading Reports in AWS Artifact.

Your compliance responsibility when using AWS services is determined by the sensitivity of your data, your company's compliance objectives, and applicable laws and regulations. AWS provides the following resources to help with compliance:

• Security Compliance & Governance – These solution implementation guides discuss architectural considerations and provide steps for deploying security and compliance features.
• HIPAA Eligible Services Reference – Lists HIPAA eligible services. Not all AWS services are HIPAA eligible.
• AWS Compliance Resources – This collection of workbooks and guides might apply to your industry and location.
• AWS Customer Compliance Guides – Understand the shared responsibility model through the lens of compliance. The guides summarize the best practices for securing AWS services and map the guidance to security controls across multiple frameworks (including National Institute of Standards and Technology (NIST), Payment Card Industry Security Standards Council (PCI), and International Organization for Standardization (ISO)).
• Evaluating Resources with Rules in the AWS Config Developer Guide – The AWS Config service assesses how well your resource configurations comply with internal practices, industry guidelines, and regulations.
• AWS Security Hub – This AWS service provides a comprehensive view of your security state within AWS. Security Hub uses security controls to evaluate your AWS resources and to check your compliance against security industry standards and best practices. For a list of supported services and controls, see Security Hub controls reference.
• Amazon GuardDuty – This AWS service detects potential threats to your AWS accounts, workloads, containers, and data by monitoring your environment for suspicious and malicious activities. GuardDuty can help you address various compliance requirements, like PCI DSS, by meeting intrusion detection requirements mandated by certain compliance frameworks.
• AWS Audit Manager – This AWS service helps you continuously audit your AWS usage to simplify how you manage risk and compliance with regulations and industry standards.
Resilience in AWS IoT TwinMaker

The AWS global infrastructure is built around AWS Regions and Availability Zones. AWS Regions provide multiple physically separated and isolated Availability Zones, which are connected with low-latency, high-throughput, and highly redundant networking. With Availability Zones, you can design and operate applications and databases that automatically fail over between zones without interruption. Availability Zones are more highly available, fault tolerant, and scalable than traditional single or multiple data center infrastructures. For more information about AWS Regions and Availability Zones, see AWS Global Infrastructure.

In addition to the AWS global infrastructure, AWS IoT TwinMaker offers several features to help support your data resiliency and backup needs.

Infrastructure Security in AWS IoT TwinMaker

As a managed service, AWS IoT TwinMaker is protected by the AWS global network security procedures that are described in the Amazon Web Services: Overview of Security Processes whitepaper.

You use AWS published API calls to access AWS IoT TwinMaker through the network. Clients must support Transport Layer Security (TLS) 1.2 or later. We recommend TLS 1.3 or later. Clients must also support cipher suites with perfect forward secrecy (PFS) such as Ephemeral Diffie-Hellman (DHE) or Elliptic Curve Ephemeral Diffie-Hellman (ECDHE). Most modern systems such as Java 7 and later support these modes.

Additionally, requests must be signed by using an access key ID and a secret access key that is associated with an IAM principal. Or you can use the AWS Security Token Service (AWS STS) to generate temporary security credentials to sign requests.

AWS IoT TwinMaker endpoints and quotas

You can find information about AWS IoT TwinMaker endpoints and quotas in the AWS General Reference.
• For information about service endpoints, see AWS IoT TwinMaker service endpoints.
• For information about quotas, see AWS IoT TwinMaker service quotas.
• For information about API throttling limits, see AWS IoT TwinMaker API throttling limits.

Additional information about AWS IoT TwinMaker endpoints

To connect programmatically to AWS IoT TwinMaker, use an endpoint. If you use an HTTP client, you need to prefix control plane and data plane APIs as follows. However, it is unnecessary to add a prefix to AWS SDK and AWS Command Line Interface commands because they automatically add the necessary prefix.
• Use the api prefix for control plane APIs. For example, api.iottwinmaker.us-west-1.amazonaws.com.
• Use the data prefix for data plane APIs. For example, data.iottwinmaker.us-west-1.amazonaws.com.
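As a sketch of what the prefixes look like in practice, the calls below pin the AWS CLI to the fully prefixed regional endpoints explicitly; this is optional, since the CLI derives the same hostnames on its own. The workspace ID and query.json input file are placeholders.

# Control plane call against the api-prefixed endpoint
aws iottwinmaker list-workspaces \
    --endpoint-url https://api.iottwinmaker.us-west-1.amazonaws.com

# Data plane call against the data-prefixed endpoint; the remaining
# request parameters (entity, properties, time range) come from query.json
aws iottwinmaker get-property-value-history \
    --endpoint-url https://data.iottwinmaker.us-west-1.amazonaws.com \
    --workspace-id my-workspace \
    --cli-input-json file://query.json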
Document history for the AWS IoT TwinMaker User Guide

The following table describes the documentation releases for AWS IoT TwinMaker.

Change: New service-linked role and new IAM policy
Description: AWS IoT TwinMaker added a new service-linked role, called AWSServiceRoleForIoTTwinMaker, to allow AWS IoT TwinMaker to call other AWS services and to sync their resources on your behalf. The new AWSIoTTwinMakerServiceRolePolicy IAM policy is attached to this role, and the policy grants permission to AWS IoT TwinMaker to call other AWS services and to sync their resources on your behalf.
Date: November 17, 2023

Change: Initial release
Description: Initial release of the AWS IoT TwinMaker User Guide.
Date: November 30, 2021
AWS Microservice Extractor for .NET
User Guide

Copyright © 2025 Amazon Web Services, Inc. and/or its affiliates. All rights reserved.

Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to, or sponsored by Amazon.

Table of Contents

What Is AWS Microservice Extractor for .NET? .................... 1
    Primary features .................... 1
    Supported use cases .................... 2
    Concepts .................... 4
    Access .................... 4
    Pricing .................... 5
How AWS Microservice Extractor for .NET works .................... 6
    Overview .................... 6
    Application analysis and extraction .................... 7
    Visualization .................... 7
    Runtime profiling .................... 10
    Service limits .................... 10
    Information collected .................... 11
Get started .................... 12
    Prerequisites .................... 12
        Prerequisites for analysis and extraction .................... 12
        Required IAM policies .................... 13
Install .................... 15
    Installation .................... 15
    Runtime profiling prerequisites .................... 16
Use Microservice Extractor .................... 18
    Set up .................... 19
    Onboard .................... 21
    View details .................... 22
    APIs .................... 23
    Launch visualization .................... 23
    Work with visualization .................... 24
    Extract as independent services .................... 29
    Manually deploy .................... 31
    Failure modes .................... 31
    Remove application .................... 32
    Edit details .................... 32
    Edit settings .................... 32
Security .................... 34
    Data protection .................... 34
        Data collected by AWS Microservice Extractor for .NET .................... 37
    Identity and Access Management .................... 37
    Configuration and vulnerability analysis .................... 38
    Security best practices .................... 38
Troubleshooting .................... 39
    AWS profile errors .................... 39
    Build failures .................... 40
    Extraction errors .................... 41
    Application artifact location .................... 41
    Onboarding and visualization errors .................... 42
    Creating groups .................... 42
    Uninstalling application .................... 42
    Metrics and logs collected by AWS Microservice Extractor for .NET .................... 43
    Questions and feedback .................... 43
Version history .................... 44
Document History .................... 54

What Is AWS Microservice Extractor for .NET?

AWS Microservice Extractor for .NET is an assistive modernization tool that helps to reduce the time and effort required to break down large, monolithic applications running on the AWS Cloud or on premises into smaller, independent services. These services can be operated and managed independently.

Microservice Extractor analyzes the code of your target application and creates a visualization of the source code of the application. The visualization includes classes, namespaces, and method calls between them. The visualization of your application helps you to logically group functionalities based on criteria such as class dependencies, namespaces, and call counts. When you isolate the functionalities of the application into groups, Microservice Extractor provides assistive guidance to refactor your code base to prepare it for extraction into smaller services. When the code base is ready for extraction, Microservice Extractor extracts the functionalities into separate code solutions. You can then manually edit and deploy these code solutions as independent services.

Microservice Extractor overview topics
• Primary features
• Supported use cases
• Concepts
• Access AWS Microservice Extractor for .NET
• Pricing for AWS Microservice Extractor for .NET

Primary features

The primary features of AWS Microservice Extractor for .NET are:

Application analysis and graphical representation of application classes

Microservice Extractor analyzes your monolithic applications and, based on the analysis, produces a graphical representation that displays the application classes, optionally configured metrics for applicable classes, and dependencies between them.
The interactive graph groups classes by functionality to help you make decisions about which parts of the application to extract as independent services.

Automated packaging of grouped functionalities into smaller services

You can designate the parts of an application to extract as separate services by grouping parts of the application code based on the functionality they implement. Microservice Extractor attempts to convert the grouped classes into code solutions.
Internal application method calls can be converted to API operations so that the new, smaller services can function independently from the monolithic application.

Porting Assistant for .NET integration

You can determine whether your application dependencies are compatible with .NET Core. Dependencies that are compatible with .NET Core can be grouped together using the Porting Assistant for .NET integration with Microservice Extractor. Microservice Extractor detects whether Porting Assistant for .NET is installed on your machine and gives you the option to include .NET Core compatibility data. When this integration is enabled, you can view .NET Core compatible dependencies in the visualization panel for your monolithic application. You can also perform single-step extract-and-port operations on the extracted microservice or monolithic application as part of the extraction workflow.

Automated refactoring recommendations

AWS Microservice Extractor for .NET's automated recommendations and prescriptive guidance allow you to start refactoring older monolithic applications even when you are not familiar with their original architecture or retrofitted features. This guidance reduces the time it takes to identify and refactor microservices from legacy applications.

Supported use cases

AWS Microservice Extractor for .NET supports the following use cases.

.NET Versions

AWS Microservice Extractor for .NET supports .NET Framework and .NET Core ASP.NET web service applications. Specifically, Microservice Extractor supports the following versions:

• Application visualization:
  • .NET Framework version 4.0 and later
  • .NET Core version 3.1
  • .NET version 5.0
  • .NET version 6.0
  • .NET version 7.0
• Application extraction:
  • .NET Framework version 4.5 and later
  • .NET Core version 3.1
  • .NET version 5.0
  • .NET version 6.0
  • .NET version 7.0

Microservice Extractor supports analysis of C# source code. Extraction is supported for only ASP.NET MVC applications.

Extraction

Microservice Extractor supports extraction for the following use cases:
• Classes are extracted in their entirety. Partial class extraction is not supported.
• Classes do not change during compilation. Classes that change class structure during compilation are not supported.

Controllers

Microservice Extractor supports the following actions in relation to controllers:
• For applications with controllers, Microservice Extractor converts local method calls at the controller level to network calls to the extracted service.
• For other applications, Microservice Extractor adds code comments by default. If you choose the advanced option for Method invocations from the application to the extracted service during extraction, Microservice Extractor replaces local method calls with network calls, where possible.
• For MVC applications, Microservice Extractor copies the views (.cshtml files) to the extracted service to be able to render the relevant HTML when returning the response.
Concepts

The following concepts and definitions can help you to understand the AWS Microservice Extractor for .NET tool.

Nodes

Nodes represent the classes in the source code of the monolithic application.

Groups

Closely related functions are organized as groups of nodes in the graphical representation of a monolithic application. Application nodes are displayed with their dependencies to help you understand the functional architecture of your application. This visualization of the application nodes and dependencies can help you to group them together by functionality.

Visualization

The Microservice Extractor visualization uses source code analysis and runtime metrics to produce a graphical representation of a monolithic application. The graph shows dependencies between application nodes, call counts, and static references between code artifacts. You can use the graph and call counts to understand the dependencies between nodes, and to identify heavily called ones. You can run the assessment tool from the standalone Microservice Extractor application.

Canvas

Independent views for arranging nodes and creating groups.

Extraction

Extraction is the process of separating out logically grouped parts of a monolithic application into smaller, independent services. These parts are referred to as islands in the visualization of an application. You can perform an extraction using Microservice Extractor after an application has been assessed.

Access AWS Microservice Extractor for .NET

AWS Microservice Extractor for .NET is a standalone tool that you download and install on your developer workstation. Specify the source files for your applications to start an analysis. You can view the analysis using the UI console.

To install Microservice Extractor, see Install AWS Microservice Extractor for .NET.

Pricing for AWS Microservice Extractor for .NET

AWS Microservice Extractor for .NET is available for use at no cost.
How AWS Microservice Extractor for .NET works

This section describes how AWS Microservice Extractor for .NET analyzes an application and extracts an application into smaller services.

How Microservice Extractor works topics
• Overview
• Application analysis and extraction
• Visualization
• Runtime profiling
• Service limits
• Information collected

Overview

The following are the high-level steps for using AWS Microservice Extractor for .NET to modernize your monolithic application by extracting it into smaller services.

1. Onboard and analyze the application — Onboard the application to Microservice Extractor by providing access to the application source code and binaries. The backend service logic of the application is analyzed by Microservice Extractor to understand the application and node structure, and the dependencies between nodes. Nodes represent classes in the source code of the application. The results of this analysis can help you to understand how to better group functionalities into separate services. If you have runtime profiling data that represents production data, you can optionally use it with the analysis to collect actionable runtime metrics. If Porting Assistant for .NET is installed on your machine, you can optionally include .NET Core 5 and 6 compatibility data in the visualization side panel.

2. Assist with identifying grouped classes to extract as independent services — Microservice Extractor creates a graphical representation of the application that shows the nodes, node types, dependencies, and groupings based on dependency coupling. If you have uploaded runtime profiling data during application onboarding, for example, transactional call volume, then it will be displayed. This graphical representation assists you with extracting groupings of nodes as isolated services.

3. Automated grouping recommendations — You can get grouping recommendations from Microservice Extractor instead of manually creating groupings. Microservice Extractor uses machine learning-driven analysis of your source code to generate grouping recommendations.

4. Refactor source code and extract grouped nodes — After the parts of the application that you want to extract are grouped and selected, refactor source code by isolating business domains and removing dependencies between them. Then, extract the groups as separate code solutions. After extracting the groups as separate solutions, you can manually edit and build the code solutions, and deploy them as independent services in containers.

Application analysis and extraction

AWS Microservice Extractor for .NET analyzes the source code of a monolithic application and creates a visualization of the application, which includes nodes, dependencies, call flows, and relevant metrics.
You can use the visualization of the application to make informed decisions about the structure of the application, and to identify parts of the application to group together and extract as independent services. After Microservice Extractor extracts a specified functionality group within the application, you can manually package and deploy the functionalities as independent services in containers. You can then integrate the smaller services with your custom workflows.

Extracting monolithic applications into smaller, independent services is an iterative process. Based on your requirements, you can repeat the process by onboarding the newly extracted monolithic application into Microservice Extractor. This further assists with identifying and extracting components as independent services.

Visualization

AWS Microservice Extractor for .NET creates a visualization of the monolithic application nodes, the metrics for each node, and the dependencies between them. Depending on the visualization level, a node could mean an aggregation of related objects such as projects, namespaces, logical groups of classes, or individual classes. The visualization provides the data on the application structure that you need to decide which parts of the application to extract as smaller, independent services.

You can use the visualization to perform the following:
• Isolate dependencies — Use the visualization to help you isolate dependencies and automatically capture interdependencies to create groups of closely related nodes. Nodes within each group rely on other nodes within the group.
• Narrow focus — View all of the dependencies for a selected node, and the shared dependencies between nodes.
• Assess call count (class level nodes only) — View the method call count number between nodes. Call count data is provided by the runtime metrics that you upload to the tool.
• Visualize at a high level — Get a high-level understanding of your monolithic application, and investigate dependencies and call count metrics to make decisions about parts of the application to extract into smaller, independent services.
• Automated grouping recommendations — Microservice Extractor can generate recommendations for groupings to extract as independent services using machine learning.

The following image shows the Visualization canvas displaying root level groupings of application nodes. Application nodes at this level are classified as either project or group nodes.

The following image shows the Visualization canvas displaying namespace level aggregations of application nodes. Nodes at this level are aggregated as namespaces.

The following image shows the Visualization canvas displaying a class level view within the namespace. Nodes at this level are individual classes.

Runtime profiling

The AWS Microservice Extractor for .NET tool includes an application runtime profiler to provide call count data with dependency details in the visualization of the application. The output of the profiler is processed by the assessment tool to create the graph. The visualization shows class level call counts to help you understand the traffic patterns of your application. This visual representation helps you to focus resources during the extraction process and to isolate areas of high value.

The runtime profiler is a .dll file that must be included when you run your application in a test or integration environment with data that is representative of the production environment. CLR profiling is supported. For steps to run the profiler, see the Runtime profiling prerequisites, and the environment-variable sketch that follows.
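To make the CLR load a profiler .dll, .NET Framework uses the standard COR_* environment variables. The PowerShell sketch below illustrates the mechanism only; the CLSID and .dll path are placeholders, not the actual values shipped with Microservice Extractor — take those from the Runtime profiling prerequisites.

# Enable CLR profiling for processes started from this shell.
# The GUID and path below are placeholders for the profiler's
# actual CLSID and install location.
$env:COR_ENABLE_PROFILING = "1"
$env:COR_PROFILER = "{12345678-90AB-CDEF-1234-567890ABCDEF}"
$env:COR_PROFILER_PATH = "C:\path\to\MicroserviceExtractorProfiler.dll"

# Start the application under test so the CLR attaches the profiler.
Start-Process "C:\path\to\YourWebApp.exe"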
When usage data sharing is enabled, Microservice Extractor collects the following information when you onboard your source code: • Success and failure operations performed during onboarding, static code analysis, application build, graph creation, and AI recommendations. • Resources consumed during operations, such as CPU and memory usage. • Number of nodes and dependencies. • Number of detected islands. • Types of nodes. • Number of canvases. Microservice Extractor doesn’t collect proprietary information, such as source code. In case of failure, the tool may collect stack traces to improve product experience. Microservice Extractor uses the information collected to continuously improve its API replacement suggestions. Microservice Extractor periodically analyzes the collected information and updates its replacement engine so that the Microservice Extractor experience is continuously improved. Information collected 11 AWS Microservice Extractor for .NET User Guide Get started with AWS Microservice Extractor for .NET This section describes the prerequisites, installation procedure, and steps to get started using AWS Microservice Extractor for .NET. Getting started topics • Prerequisites to use AWS Microservice Extractor for .NET • Install AWS Microservice Extractor for .NET • Use AWS Microservice Extractor for .NET Prerequisites to use AWS Microservice Extractor for .NET This section describes the prerequisites for installing and using Microservice Extractor. • Prerequisites for analysis and extraction of monolithic application • Required AWS Identity and Access Management policies Prerequisites for analysis and extraction of monolithic application To use Microservice Extractor to analyze and extract a monolithic application to deploy into smaller services, you must have the following: • A valid AWS CLI profile to publish metrics. For information about how to configure an AWS CLI profile, see Configuring the AWS CLI. • A monolithic application that must be one of the following: • A .NET Framework ASP.NET web service application hosted on IIS with the .NET Framework developer pack |
  • A .NET Core ASP.NET web service application with the developer pack installed.
• The ability to build the application solution with MSBuild.
• One of the following operating systems for analyzing the application and creating the visualization:
  • Windows 10 or later
  • Windows Server 2016 or later
• For the application analysis, you must have:
  • .NET Framework version 4 or later, or .NET Core version 3.1 or later, compatible with the source code solution.
  • 10 GB minimum of free disk space, in addition to the size of your application.
  • 8 GB minimum of available memory.
  • Compute power equivalent to or greater than that of an Intel Core i3 3-GHz processor.
• For the extraction, you must have:
  • .NET Framework version 4.5 or later, or .NET Core version 3.1 or later, compatible with the source code solution.
  • 20 GB minimum of free disk space, in addition to twice the size of your application.

Required AWS Identity and Access Management policies

To perform certain operations using AWS Microservice Extractor for .NET, your user must have the necessary permissions. This section includes the policies that your user must have, as well as instructions for granting permissions to the user.

You must use a valid AWS CLI profile to use the assessment tool and run the commands to complete an extraction. For information about how to configure your AWS CLI profile, see Configuring the AWS CLI.

How to provide access to your user

To provide access, add permissions to your users, groups, or roles:

• Users and groups in AWS IAM Identity Center: Create a permission set. Follow the instructions in Create a permission set in the AWS IAM Identity Center User Guide.
• Users managed in IAM through an identity provider: Create a role for identity federation. Follow the instructions in Create a role for a third-party identity provider (federation) in the IAM User Guide.
• IAM users:
  • Create a role that your user can assume. Follow the instructions in Create a role for an IAM user in the IAM User Guide.
  • (Not recommended) Attach a policy directly to a user or add a user to a user group. Follow the instructions in Adding permissions to a user (console) in the IAM User Guide.

Permissions to use the AWS Microservice Extractor for .NET assessment tool

To use the AWS Microservice Extractor for .NET assessment tool, you must create an IAM policy that includes the following permissions. To view the type of application data collected by Microservice Extractor, see Information collected.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "ApplicationTransformationAccess", "Effect": "Allow", "Action": [ "application-transformation:*" ], "Resource": "*" }, { "Sid": "KMSPermissions", "Effect": "Allow", "Action": [ "kms:DescribeKey", "kms:Encrypt", "kms:Decrypt", "kms:CreateGrant", "kms:GenerateDataKey" ], "Resource": "arn:aws:kms:::*", "Condition": { "ForAnyValue:StringLike": { "kms:ResourceAliases": "alias/application-transformation*" } } }, { "Sid": "S3Access", "Effect": "Allow", "Action": [ "s3:GetObject", Required IAM policies 14 AWS Microservice Extractor for .NET User Guide "s3:PutObject", "s3:CreateBucket", "s3:ListBucket", "s3:PutBucketOwnershipControls", "s3:ListAllMyBuckets", "s3:GetBucketLocation" ], "Resource": [ "arn:aws:s3:::*" ], "Condition": { "StringEquals": { "aws:ResourceAccount": "${aws:PrincipalAccount}" } } } ] } Install AWS Microservice Extractor for .NET This topic describes how to install AWS Microservice Extractor for .NET. It includes steps to configure prerequisites to use the runtime profiling agent on your application. You must configure the runtime profiling prerequisites after you download the Microservice Extractor installer. Installation topics • Installation • Runtime profiling prerequisites Installation AWS Microservice Extractor for .NET is available for download as an executable file (ServiceExtract.exe): Download Microservice Extractor For optional integrity detection, you can download the SHA256 checksum of the installer. Install 15 AWS Microservice Extractor for .NET User Guide To use the checksum file, calculate the SHA256 on your downloaded .exe file to compare against the output of the following PowerShell command: Get-FileHash -Algorithm SHA256 Service-Extract.exe You can verify the authenticity of the signatures of the Microservice Extractor .exe file by running the following command, which verifies that a valid certificate is contained in the file. Get-AuthenticodeSignature <file-name> After you have downloaded the .exe file, and performed optional integrity checks, you can run the installation executable for the |
Install AWS Microservice Extractor for .NET

This topic describes how to install AWS Microservice Extractor for .NET. It includes steps to configure prerequisites to use the runtime profiling agent on your application. You must configure the runtime profiling prerequisites after you download the Microservice Extractor installer.

Installation topics
• Installation
• Runtime profiling prerequisites

Installation

AWS Microservice Extractor for .NET is available for download as an executable file (ServiceExtract.exe): Download Microservice Extractor

For optional integrity verification, you can download the SHA256 checksum of the installer. To use the checksum file, calculate the SHA256 of your downloaded .exe file and compare it against the output of the following PowerShell command:

Get-FileHash -Algorithm SHA256 ServiceExtract.exe

You can verify the authenticity of the signature of the Microservice Extractor .exe file by running the following command, which verifies that a valid certificate is contained in the file:

Get-AuthenticodeSignature <file-name>

After you have downloaded the .exe file and performed the optional integrity checks, you can run the installation executable for the Microservice Extractor assessment tool on your local computer.
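Before you run the installer, the two verification steps above can be scripted in one small PowerShell check. This is a sketch, not an official procedure; it assumes the published checksum was saved next to the installer as ServiceExtract.exe.sha256 (adjust the file name to your download).

# Compute the SHA256 hash of the downloaded installer.
$computed = (Get-FileHash -Algorithm SHA256 .\ServiceExtract.exe).Hash

# Read the published checksum (file name is an assumption).
$published = (Get-Content .\ServiceExtract.exe.sha256 -Raw).Trim().Split(' ')[0]

if ($computed -ieq $published) { 'Checksum OK' } else { 'Checksum mismatch - do not install' }

# Confirm the installer carries a valid Authenticode signature ("Valid" expected).
(Get-AuthenticodeSignature .\ServiceExtract.exe).Status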
When the installation completes, you can find ServiceExtractProfiler.dll at C:\Users\<username>\AppData\Local\Programs\AWS Microservice Extractor for .NET\resources\AWS Tools\serviceextract-profiler\ServiceExtractProfiler.dll.

Runtime profiling prerequisites

To use the runtime profiling agent on your application, configure the following prerequisites after you have downloaded the ServiceExtract.exe file.

1. To ensure that IIS Manager can access the .dll or output folder, copy the .dll from the default path into the inetpub folder (for example, C:\inetpub\wwwroot\...), which is the default folder for IIS. In addition, verify that your IIS user has read/write access to the output directory.
2. Create a folder for the Microservice Extractor runtime profiler to write its output to.
3. Register the Microservice Extractor runtime profiler on the server on which the application is running, using one of the following commands. This step is not necessary for .NET Framework version 4 or later.

Windows 64-bit version:

%systemroot%\System32\regsvr32.exe "C:\Users\<username>\AppData\Local\Programs\AWS Microservice Extractor for .NET\resources\AWS Tools\serviceextract-profiler\ServiceExtractProfiler.dll"

Windows 32-bit version:

%systemroot%\SysWoW64\regsvr32.exe "C:\Users\<username>\AppData\Local\Programs\AWS Microservice Extractor for .NET\resources\AWS Tools\serviceextract-profiler\ServiceExtractProfiler.dll"

4. Set the relevant environment variables before starting the target binary. You must manually update the following system variables.

.NET Framework

COR_ENABLE_PROFILING — enables profiling.
COR_ENABLE_PROFILING=1

COR_PROFILER — specifies the profiler to use, by CLSID or ProgID.
COR_PROFILER={DCF75470-C2FC-4198-88EE-D07740A3FB9B}

COR_PROFILER_PATH — specifies the path of the serviceextract profiler.
COR_PROFILER_PATH=C:\Users\<username>\AppData\Local\Programs\AWS Microservice Extractor for .NET\resources\AWS Tools\serviceextract-profiler\ServiceExtractProfiler.dll

.NET Core, .NET 7.0, .NET 6.0, and .NET 5.0

CORECLR_ENABLE_PROFILING
CORECLR_PROFILER
CORECLR_PROFILER_PATH_64

In addition to the previous version-specific updates, you must update the following variables for all .NET versions:

SERVICEEXTRACT_PROFILER_OUTPUT_DIR — specifies the output directory used by the profiler.
SERVICEEXTRACT_PROFILER_OUTPUT_DIR=C:\ProfilerOutput

SERVICEEXTRACT_PROFILER_TARGET_DLL — specifies the target application .dll that the profiler instruments.
SERVICEEXTRACT_PROFILER_TARGET_DLL=<your-target-app.dll>

Here is an example of how to configure IIS with the necessary environment variables in the applicationHost.config file:

<applicationPools>
    <add name="DefaultAppPool" />
    <add name="<YourSiteName>" autoStart="false" />
    <add name=".NET v4.5 Classic" managedRuntimeVersion="v4.0" managedPipelineMode="Classic" />
    <add name=".NET v4.5" managedRuntimeVersion="v4.0" />
    <add name="<YourSiteName2>" autoStart="false" />
    <applicationPoolDefaults managedRuntimeVersion="v4.0">
        <processModel identityType="ApplicationPoolIdentity" />
        <environmentVariables>
            <add name="COR_ENABLE_PROFILING" value="1" />
            <add name="COR_PROFILER" value="{DCF75470-C2FC-4198-88EE-D07740A3FB9B}" />
            <add name="COR_PROFILER_PATH" value="C:\Users\<username>\AppData\Local\Programs\AWS Microservice Extractor for .NET\resources\AWS Tools\serviceextract-profiler\ServiceExtractProfiler.dll" />
            <add name="SERVICEEXTRACT_PROFILER_OUTPUT_DIR" value="C:\ProfilerOutput" />
            <add name="SERVICEEXTRACT_PROFILER_TARGET_DLL" value="<your-target-app.dll>" />
        </environmentVariables>
    </applicationPoolDefaults>
</applicationPools>
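For a quick test run outside IIS (for example, a console-hosted service), you can set the same variables for a single PowerShell session before starting the target binary. This is a hedged sketch using the .NET Framework variable names; substitute the CORECLR_* names for .NET Core, and replace the placeholder paths with your own.

# Enable CLR profiling for processes started from this session only.
$env:COR_ENABLE_PROFILING = '1'
$env:COR_PROFILER = '{DCF75470-C2FC-4198-88EE-D07740A3FB9B}'
$env:COR_PROFILER_PATH = 'C:\Users\<username>\AppData\Local\Programs\AWS Microservice Extractor for .NET\resources\AWS Tools\serviceextract-profiler\ServiceExtractProfiler.dll'

# Variables required for all .NET versions.
$env:SERVICEEXTRACT_PROFILER_OUTPUT_DIR = 'C:\ProfilerOutput'
$env:SERVICEEXTRACT_PROFILER_TARGET_DLL = '<your-target-app.dll>'

# Start the application from the same session so it inherits the variables.
& '.\<your-target-app>.exe'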
When the manual configuration completes, the runtime profiler automatically captures metrics when you run your application. Perform typical workloads and test cases while running your application to capture metrics that are relevant for an application assessment. After you shut down the application pool in IIS, the profiler creates a .csv file for you to upload to the Microservice Extractor tool for the assessment. You can find the .csv file in the output directory configured in the previous step for SERVICEEXTRACT_PROFILER_OUTPUT_DIR.

Use AWS Microservice Extractor for .NET

This section contains information to help you get started with AWS Microservice Extractor for .NET after verifying that the required prerequisites are met.

When you start Microservice Extractor for the first time, you are prompted to enter your AWS CLI profile information so that Microservice Extractor can collect metrics to improve your experience. These collected metrics also help to flag issues with the software so that AWS can quickly address them. If you have not set up your AWS profile, see Configuring the AWS CLI.

Using Microservice Extractor topics
• Set up AWS Microservice Extractor for .NET
• Onboard an application
• View application details
• APIs tab
• Launch application visualization
• Work with the application visualization
• Extract parts of an application as independent services
• Manually deploy as independent service
• Failure modes
• Remove an application from AWS Microservice Extractor for .NET
• Edit application details
• Edit user settings

Set up AWS Microservice Extractor for .NET

Perform the following steps to set up AWS Microservice Extractor for .NET.

1. Verify that you have completed the prerequisite steps to use Microservice Extractor.
2. From the Microservice Extractor landing page, choose Get started.
3. From the Setup Microservice Extractor page, select an AWS Region in which to store and analyze source code metadata.

Note: Your source code never leaves your local system. Microservice Extractor will upload source code metadata
to an Amazon S3 bucket you have designated from your AWS account. Microservice Extractor's scalable backend will process source code metadata ephemerally and write the results to the same S3 bucket. See the Data Privacy FAQ for more information.

4. Select either an AWS named profile or existing AWS CLI/SDK credentials. You can select an AWS named profile from the dropdown list, update an existing named profile, or Add a named profile. Microservice Extractor uses the credentials from your AWS profile to share your Microservice Extractor usage data with AWS to make the Microservice Extractor tool better. For more information about named profiles, see Named profiles for the AWS CLI in the AWS CLI User Guide.

Note: When using single sign-on capabilities such as AWS IAM Identity Center, be sure to choose the option for AWS CLI/SDK credentials.

5. Select the Amazon S3 bucket in which to store your source code metadata by typing the bucket name and selecting it. If the bucket does not exist, a Region selection will appear for you to create a bucket. Select or create a prefix for your Amazon S3 bucket. You can also pre-create the bucket from the AWS CLI, as shown in the sketch after these steps.
6. (Optional) You may enter the Amazon Resource Name (ARN) of the AWS KMS key (SSE-KMS) to use for server-side encryption of the objects Microservice Extractor will store in the S3 bucket on your behalf. If you leave this empty, Microservice Extractor will use default server-side encryption with Amazon S3 managed encryption keys (SSE-S3) to store source code metadata.
7. To use AI-based recommendations, select the check box to Enable automated groupings. When the check box is selected, your code metadata is stored in an Amazon S3 bucket.

Note: Your code metadata is never moved from the designated Amazon S3 bucket. You can select the Amazon S3 bucket in which to store your code by typing the bucket name and selecting it. If the bucket does not exist, a Region selection will appear for you to create a bucket. Select or create a prefix for your Amazon S3 bucket.

8. Add or update the Working directory used to store the output from the application analysis and extraction of your application. You cannot change this directory after the application is set up.
9. Microservice Extractor usage data sharing is enabled by default. To view the types of data collected, see Information collected. Clear the check box selection to disable usage data sharing.
10. Choose Next to onboard your application.
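If you prefer to create the metadata bucket before starting setup, you can do so from the AWS CLI. A minimal sketch follows; the bucket name, Region, and key alias are invented placeholders.

# Create the bucket in the Region you plan to select in the tool.
aws s3 mb s3://my-extractor-metadata --region us-east-1

# (Optional) Look up the ARN of the KMS key you plan to enter in step 6.
aws kms describe-key --key-id alias/my-extractor-key --query KeyMetadata.Arn --output text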
Onboard an application

To onboard your application, perform the following steps.

1. Navigate to the Applications page of the Microservice Extractor tool, either by choosing it from the left navigation pane of the application or by choosing Next from the Setup AWS Microservice Extractor for .NET page. In the Applications component, choose Onboard application. The Onboard application option is disabled if you have not selected an AWS named profile from the Set up Microservice Extractor page.
2. On the Onboard application page, enter the following information for the application you want to onboard.

• Application details. Enter a Name and optional Description for your application.
• Source code. Provide the ASP.NET solution file (for example, the .sln file). Choose the application project file to upload to Microservice Extractor. The file must be buildable and use ASP.NET. The source code and its dependencies must be local to the on-premises machine or Amazon EC2 instance on which Microservice Extractor is run.

Note: Microservice Extractor uses MSBuild to build your application. If your application uses custom build scripts, you will encounter an error during the application analysis. You can confirm ahead of time that the solution builds with plain MSBuild, as shown in the sketch after this list.

• MSBuild path. The path of the version of MSBuild that you want to use, and optional arguments. AWS Microservice Extractor for .NET uses the latest version of MSBuild on the system as the default version.
• Runtime profiling data — optional. Upload the .csv file output by the runtime profiling performed for your application. For steps to run the profiler, see the Runtime profiling prerequisites. You can find the output .csv file in the output directory configured for the SERVICEEXTRACT_PROFILER_OUTPUT_DIR environment variable.

Note: You can't change the source code or runtime profiling data after onboarding. If you want to make changes to these input files, you must onboard the application as a new application.

• Analyze .NET Core portability — optional. You can choose to include .NET Core compatibility data in the visualization of your application.
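Because onboarding depends on a default MSBuild build succeeding, it can save time to verify the build manually first. This is a sketch with an invented solution name; run it from a shell where msbuild is on PATH (for example, a Developer Command Prompt).

# Restore NuGet packages and build the solution you plan to onboard.
msbuild .\MyApplication.sln /restore /p:Configuration=Release

# An exit code of 0 means the solution builds with default MSBuild behavior.
echo $LASTEXITCODE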
3. Choose Onboard Application. Microservice Extractor analyzes your source code and uses the analysis and runtime metrics to produce a graphical representation of your application.
4. When the analysis completes, the application shows a build status of Success on the Applications page. From here, you can select the application and choose Launch visualization, or choose View dependency graph in the build success banner. The graph shows node dependencies, runtime metrics such as runtime call counts, and static references between code artifacts, such as classes. If you choose the application name, you are taken to the Application details page, where you can view and edit the details you entered when you onboarded the application. If the application shows a build failed status, review the error message and choose Update source code to remediate.

Note: We recommend that you wait until the build status shows Success before you navigate to the View details page.

View application details

When an application is successfully onboarded, you can view details about the application by choosing it from the Applications page. You will be taken to the Application details tab. From there, you can view and Edit the application details. If you previously launched the visualization for the application, you can choose the Visualization (nodes and dependencies) tab to view the graphical visualization of your application. For more information about how to work with the visualization, see Work with the application visualization.

The application details page displays a summary of the onboarding details that you entered for the application. It also includes an overview of the porting compatibility assessment (if you enabled that option while onboarding the application), the location of the application logs, the log status, and descriptions of any log errors. You can edit the name and description, source code files, and runtime profiling data by choosing Edit. If you make changes to the application details, the application analysis is performed again, and the application visualization is refreshed. If you cancel the edit, no changes are applied.

APIs tab

You can view which APIs your application uses, as well as the classes where they are referenced, by navigating to the APIs tab after onboarding and analyzing an application.
From the search bar in the table, you can either include or exclude APIs based on several fields, including their compatibility with .NET Core and the corresponding Class. From this page you can also select any number of APIs and add them to a group by choosing the Add to Group button, just as you would in the Visualization tab.

Note: Groups are created using the classes that the corresponding APIs belong to; therefore, a class containing multiple APIs cannot have its APIs in separate groups.

Launch application visualization

After you onboard an application, you can launch the visualization of the application from the Applications page to better conceptualize and group your application nodes. Select the application for which you want to view the visualization and choose Launch visualization. You will be taken directly to the Visualization (nodes and dependencies) tab of the application page.

From the Visualization (nodes and dependencies) page, you can view, in graph form, the logically grouped functionalities identified by Microservice Extractor to be extracted as isolated services. By default, the visualization displays nodes at the Project level; you can then use filters to display specific project-level or namespace-level nodes. For selected nodes, you can view .NET Core portability. You can apply the default groups, modify them, or create new groups to associate with a functionality that guides refactoring. For more details on graph functionality, see Work with the application visualization.
Work with the application visualization

The Visualization tab displays the application nodes and dependencies in graphical format. The initial view on the Visualization tab is the main view. The circles represent nodes, and the arrows show dependencies and the direction between nodes, incoming or outgoing. By default, no groups are created. The main view reflects any updates you make to your groupings.

Features of the AWS Microservice Extractor for .NET visualization tool

You can perform the following tasks from the Visualization (nodes and dependencies) page to help you group your application nodes to extract as a smaller service.

Create custom groups to visualize a segmentation of the service
You can create groups in the following ways:
• Drag and drop (main view only) — Select one or more nodes by clicking on them, then drag the node or nodes together.
• Choose or right-click (main view) — Choose or right-click a node to open the Actions menu. From the Actions menu, you can choose Add node to group. The Add node(s) to group pane appears on the right, where you can choose to add nodes to an existing group or create a new group, and select the Group name and, optionally, the Group color.
Groups are indicated by dotted rectangles. You can collapse and expand groups by choosing the minimize and maximize icons in the left corner of each rectangle. Collapsing a rectangle helps to reduce visual noise as you focus on other areas of the service.

View node details
Select one or more nodes. Selected nodes are indicated by a dotted circle. Incoming and outgoing dependencies for the selected nodes are highlighted as red (outgoing) or blue (incoming). If you select more than one node, each selected node appears as dotted, and the dependencies are highlighted for all of the selected nodes. When you choose or right-click a node, you can select View node details from the Actions menu. The Node details panel appears on the right. Node details include the following tabs and information for one or more selected nodes:
• General — Shows the selected nodes, their dependencies, and runtime profiling information. The arrows, or Edges, show the direction of the dependency, incoming or outgoing. The call count for each node dependency is also displayed.
• .NET Core portability — Shows the selected nodes and their .NET Core portability status. If a node is not compatible for .NET Core portability, hover over the status message to view the details and potential remediation.

Reset view
Choose Reset view to reset the visualization to its original state, as it was arranged when you first launched it. All new groups are removed, and all changes are discarded.
Show dependencies
Choose Show dependencies to show or hide the incoming and outgoing dependencies between nodes. By default, Show dependencies is selected, and incoming and outgoing dependencies are displayed on the visualization.

Node filter
From the Visualization (nodes and dependencies) view, you can use the Node filter search box to change the displayed nodes in the following ways:
• Project — When a project name is entered in the text box and selected, the visualization displays nodes that are part of that project. The name of a project node filter begins with P.
• Namespace — When a namespace name is entered and selected, the visualization displays nodes that are part of that specific namespace. The name of a namespace node filter begins with N.
• Clear all filters — This option removes all selected node filters from the visualization, and the view returns to the default view.

View options
When node filters are applied, you can switch between a visualization with the filters applied and a visualization with no filters. To do this, under View options, select All nodes to view the visualization with no filters, or select Filtered nodes to apply your selected node filters.

View Legend
The Legend displays the meanings of the symbols in the visualization.
• A gray shaded circle indicates a node.
• A dotted circle indicates a selected node.
• A gray rectangle indicates a group.
• A gray rectangle that contains an expand icon indicates a collapsed group.
• A gray cube indicates a class with minimal logic.
• A gray connected circle and half circle indicates the entry point into the application.
• A gray cylinder indicates that the code in that node accesses data.
• A gray wrench and screwdriver indicates that the code invokes an external service.
• A gray circle with three smaller circles indicates that a node can fit into two or more classifications.
• A gray folder icon indicates a project node.
• A gray set of curly brackets indicates a namespace node.
• A blue arrow indicates a dependency incoming to a node.
• A red arrow indicates a dependency outgoing from a node.

View Group classification
Choose Group classification from the bottom of the visualization to view the name, ID, and color assigned to each group in the visualization.

View runtime profiling information
You can view the number of call counts from the main view by hovering over the arrows in the visualization.

Edit group name and color
After you have created a group, you can edit the name and color of the group by choosing or right-clicking it to open the Actions menu, then choosing Edit group name and color. You can update the group name and color in the Edit group name and color pane that appears on the right.

Add or delete canvas (main view only)
A canvas displays a specific layout of nodes, edges, and groups. After you onboard your application, the default canvas is automatically created. You can perform graph tasks on each canvas, such as viewing node details, rearranging nodes, or creating groups. You can also add a new canvas or delete an existing one (excluding the default canvas) by selecting the option from the Actions dropdown menu.

Get automated groupings (main view only)
You can get automated, artificial intelligence-generated grouping recommendations by selecting one of the following options in the Get automated groupings pop-up:
• To preserve the existing groups in the visualization and get grouping recommendations for only the non-grouped nodes, select Maintain existing group nodes.
• To delete all of the existing groups and get grouping recommendations for all of the nodes in the application, select Reset existing groups.

Main visualization

After you onboard an application, Microservice Extractor displays its nodes and dependencies as a graph. No groups are created by default. You can create groups, modify them, or create new groups to associate with a functionality that guides refactoring.
Use the main visualization to view your groups and prepare for extraction, after creating groups in the main view or after exploring recommended groupings. Manually remove node dependencies to prepare parts of your application for extraction as smaller services. The parts are displayed as groups in the graph.

Microservice Extractor can also extract API endpoints as separate services by isolating the code that underlies the API endpoints and replacing local calls with network calls. This creates a new implementation of the calling class in a new solution, while preserving the interface and the original solution. You can then develop, build, and deploy the new repositories independently as services.

For more information about actions you can take from the main visualization, see Features of the AWS Microservice Extractor for .NET visualization tool.
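To make the local-call-to-network-call idea concrete, the following C# sketch shows the general shape of such a refactoring. It is purely illustrative: the class, route, and endpoint are invented, and the code that Microservice Extractor actually generates will differ.

using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public record Order(string Id, int Quantity);

public class OrderProcessor
{
    // Hypothetical endpoint of the extracted pricing service.
    private static readonly HttpClient Client = new HttpClient
    {
        BaseAddress = new Uri("https://pricing.example.com/")
    };

    // Before extraction, this was an in-process call:
    //   decimal total = new PricingService().CalculateTotal(order);
    // After extraction, the same operation becomes an HTTP round trip.
    public async Task<decimal> GetTotalAsync(Order order)
    {
        using var response = await Client.PostAsJsonAsync("api/pricing/calculate", order);
        response.EnsureSuccessStatusCode(); // network and server failures surface here
        return await response.Content.ReadFromJsonAsync<decimal>();
    }
}

Note that the network call introduces failure modes a local call never had (timeouts, connectivity, authentication), which is why the generated code may need to be adapted to your error-handling scheme, as described under Failure modes.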
Extract parts of an application as independent services

Review the arrangement of the group or groups you selected, and their individual nodes and dependencies, on the main view of the Visualization (nodes and dependencies) page. When you are satisfied with your groups, choose Extract group and perform the following steps:

1. On the Review details and initiate extraction page, review and verify the Service details and the Extraction details. Address all of the issues listed for the Nodes and Dependencies. To view the description of an issue, select the Shared state access detected or Requires attention alert under Comments. Select the corresponding Class ID to view and address the issue in the source code.

If a class accesses state that is shared by classes that belong to multiple groups in the application, modification of the shared state may result in errors when you extract the nodes as a smaller service. If the Shared state access detected message appears next to a class, check whether the class accesses state that is shared by classes that belong to other groups. If so, update your application source code to remove access to the shared state. Analyze the application again before proceeding with the extraction. The following shared state accesses are detected:

• TempData property in ControllerBase class.
• Session property in Controller class.
• Session property in ASP.NET Core HttpContext class.
• Items property in ASP.NET Core HttpContext class.
• TempData property in ASP.NET Core Controller class.
• TempData property in ASP.NET Core WebApi controller.

2. Select the options under Method invocations from the original application to the extracted service. Consider the following limitations for each method.

How Microservice Extractor extracts the service code repository:
• Extract as a microservice with remote endpoints — network calls can add additional overhead to user requests. Manual verification and refactoring may be required to ensure accuracy.
• Extract as a library — code duplication from manual refactoring may introduce conflicting states in the application.

How Microservice Extractor processes the original monolithic application repository:
• Refactor the methods in the original monolithic application repository into methods that call the extracted microservice — this option is not supported for WCF applications.
• Do not refactor the methods — code duplication from manual refactoring may introduce conflicting states in the application.

3. When you are satisfied with the extraction details, choose Extract. The progress of the extraction is displayed at the top of the page. To cancel the extraction, select Cancel extraction in the extraction progress banner. If you cancel the extraction, the extraction configuration is deleted, and you must restart the extraction.

A successful extraction displays the output location of the extraction in the green status banner. To view the extraction details, choose View details on the status banner. If the extraction fails, the red status banner displays an error message. Navigate to the Visualization page to verify and address issues with the unsupported classes, and try again.
View and edit extraction details

You can view the details of the extraction from the Application details page by selecting the radio button next to the Service name under Extractions, and choosing View details from the Actions dropdown. On the service details page, you can view the Extraction details and Nodes and dependencies. To edit the extraction details, choose Re-extract service from the Actions dropdown. You must re-extract a service in order to edit its configuration.

Manually deploy as independent service

To deploy parts of your application as smaller services, we recommend that you set up the following environment:

• Docker version 17.05 installed locally with administrator access
• An AWS profile with an attached policy that grants permissions to write to Amazon Elastic Container Registry and Amazon S3
• Windows Server 2019 or later
• A minimum of 50 GB of free disk space

To deploy the extracted service as an independent service, perform the following high-level steps:

1. Refactor the source code, if necessary, to ensure that the extracted service builds successfully.
2. Navigate to the output location of the extracted service.
3. From the Dockerfile, manually create a Docker container image.
4. Push the Docker container image to Amazon Elastic Container Registry (Amazon ECR).
5. Use AWS CloudFormation to deploy the container image hosted in Amazon ECR to Amazon Elastic Container Service (Amazon ECS).

For more information, see Using Amazon ECR with Amazon ECS and Creating Amazon ECS resources with AWS CloudFormation.
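Steps 3 and 4 typically reduce to a handful of commands. The repository name, Region, and account ID below are placeholders; adjust them before use.

# Build the container image from the generated Dockerfile.
docker build -t my-extracted-service .

# Create an ECR repository and authenticate Docker to your registry.
aws ecr create-repository --repository-name my-extracted-service
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 111122223333.dkr.ecr.us-east-1.amazonaws.com

# Tag the image with the repository URI and push it.
docker tag my-extracted-service:latest 111122223333.dkr.ecr.us-east-1.amazonaws.com/my-extracted-service:latest
docker push 111122223333.dkr.ecr.us-east-1.amazonaws.com/my-extracted-service:latest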
Failure modes

What used to be a function call is now a network call. A network call can fail for various reasons; for example, network connectivity, service outages, authentication errors, or unknown server errors. While Microservice Extractor provides some handling for these new types of errors, you may want to update them to fit your error-handling scheme.

We recommend copying artifacts or package dependencies that lie outside the scope of your solution directory (with the exception of standard "reference assemblies" installed in known locations) into your solution directory, and adjusting project files to point to the updated location, before starting automatic refactoring.

Remove an application from AWS Microservice Extractor for .NET

To remove an application from Microservice Extractor, perform the following steps:

1. From the left navigation pane, choose Applications.
2. Select the radio button next to the application that you want to remove, and from the Actions menu, select Remove application from list.

When you remove an application, all of the contents in the working directory are removed, and you can no longer manage the application in Microservice Extractor.

Edit application details

To edit the details of an application, perform the following steps.

1. From the left navigation pane, choose Applications.
2. Select the radio button next to the application for which you want to edit the details. From the Action menu, select Edit application details.
3. On the Edit details page, you can update the application Name and Description, the MSBuild path, and the option to Analyze .NET Core Portability.

Note: When you submit the updates, Microservice Extractor performs a new application analysis and refreshes the visualization of the application.

Edit user settings

To change your user settings, perform the following steps.

1. From the left navigation pane of the Microservice Extractor tool, choose Settings.
2. On the Settings page, select Edit.
3. On the Edit settings page, you can update your AWS named profile and your Microservice Extractor usage data sharing option. You cannot change your working directory after your application is set up.

AWS Microservice Extractor for .NET security

Cloud security at AWS is the highest priority. As an AWS customer, you benefit from data centers and network architectures that are built to meet the requirements of the most security-sensitive organizations.

Security is a shared responsibility between AWS and you. The shared responsibility model describes this as security of the cloud and security in the cloud:

• Security of the cloud – AWS is responsible for protecting the infrastructure that runs AWS services in the AWS Cloud. AWS also provides you with services that you can use securely.
Third-party auditors regularly test and verify the effectiveness of our security as part of the AWS Compliance Programs. For more information about the compliance programs that apply to Microservice Extractor, see AWS Services in Scope by Compliance Program.
• Security in the cloud – Your responsibility is determined by the AWS service that you use. You are also responsible for other factors, including the sensitivity of your data, your company's requirements, and applicable laws and regulations.

This documentation helps you understand how to apply the shared responsibility model when using AWS Microservice Extractor for .NET. The following topics show you how to configure Microservice Extractor to meet your security and compliance objectives. You also learn how to use other AWS services that help you to monitor and secure your Microservice Extractor resources.

Security topics
• Data protection in AWS Microservice Extractor for .NET
• Identity and Access Management in AWS Microservice Extractor for .NET
• Configuration and vulnerability analysis in AWS Microservice Extractor for .NET
• Security best practices

Data protection in AWS Microservice Extractor for .NET

The AWS shared responsibility model applies to data protection in AWS Microservice Extractor for .NET. As described in this model, AWS is responsible for protecting the global infrastructure that runs all of the AWS Cloud. You are responsible for maintaining control over your content that is hosted on this infrastructure. You are also responsible for the security configuration and management tasks for the AWS services that you use. For more information about data privacy,
see the Data Privacy FAQ. For information about data protection in Europe, see the AWS Shared Responsibility Model and GDPR blog post on the AWS Security Blog.

For data protection purposes, we recommend that you protect AWS account credentials and set up individual users with AWS IAM Identity Center or AWS Identity and Access Management (IAM). That way, each user is given only the permissions necessary to fulfill their job duties. We also recommend that you secure your data in the following ways:

• Use multi-factor authentication (MFA) with each account.
• Use SSL/TLS to communicate with AWS resources. We require TLS 1.2 and recommend TLS 1.3.
• Set up API and user activity logging with AWS CloudTrail. For information about using CloudTrail trails to capture AWS activities, see Working with CloudTrail trails in the AWS CloudTrail User Guide.
• Use AWS encryption solutions, along with all default security controls within AWS services.
• Use advanced managed security services such as Amazon Macie, which assists in discovering and securing sensitive data that is stored in Amazon S3.
• If you require FIPS 140-3 validated cryptographic modules when accessing AWS through a command line interface or an API, use a FIPS endpoint. For more information about the available FIPS endpoints, see Federal Information Processing Standard (FIPS) 140-3.

We strongly recommend that you never put confidential or sensitive information, such as your customers' email addresses, into tags or free-form text fields such as a Name field. This includes when you work with Microservice Extractor or other AWS services using the console, API, AWS CLI, or AWS SDKs. Any data that you enter into tags or free-form text fields used for names may be used for billing or diagnostic logs. If you provide a URL to an external server, we strongly recommend that you do not include credentials information in the URL to validate your request to that server.

Encryption at rest

Microservice Extractor stores data in two locations. The first is the working directory for Microservice Extractor on your local system. The data stored here is not encrypted at rest. The second is the Amazon S3 bucket you have configured in Microservice Extractor, where source code metadata is stored for analysis. This location is encrypted at rest as follows.

Microservice Extractor automatically enables server-side encryption with Amazon S3 managed keys for new object uploads. Unless you specify otherwise, objects use SSE-S3 by default. However, you can choose to configure the service to use server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS) instead. For more information, see Specifying server-side encryption with AWS KMS (SSE-KMS) in the Amazon S3 User Guide.
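For reference, this is what an SSE-KMS upload looks like with the AWS CLI; the bucket, prefix, and key ARN are placeholders. Microservice Extractor performs the equivalent on your behalf when you supply a key ARN during setup.

# Upload an object with SSE-KMS, referencing the same key configured in the tool.
aws s3 cp .\artifact.zip s3://my-extractor-metadata/prefix/artifact.zip --sse aws:kms --sse-kms-key-id arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab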
AWS KMS is a service that combines secure, highly available hardware and software to provide a key management system scaled for the cloud. Microservice Extractor uses server-side encryption with AWS KMS (SSE-KMS) to encrypt your data. Also, when SSE-KMS is requested for an object, the object checksum, which is part of the object's metadata, is stored in encrypted form. For more information about checksums, see Checking object integrity in the Amazon S3 User Guide.

If you use KMS keys, you can use AWS KMS through the AWS Management Console or the AWS KMS API to centrally create, view, edit, monitor, enable or disable, rotate, and schedule deletion of KMS keys, and to define the policies that control how and by whom KMS keys can be used. You can audit their usage to prove that they are being used correctly. Auditing is supported by the AWS KMS API, but not by the AWS KMS management console. The security controls in AWS KMS can help you meet encryption-related compliance requirements. You can use these KMS keys to protect your data in Microservice Extractor. When you use SSE-KMS encryption, the AWS KMS keys must be in the same Region as selected in the tool. There are additional charges for using AWS KMS keys. For more information, see AWS KMS key concepts in the AWS Key Management Service Developer Guide and AWS KMS pricing.

Encryption in transit

Microservice Extractor makes requests to the server over the Transport Layer Security (TLS) protocol.

AWS CloudTrail and Microservice Extractor APIs

If you use CloudTrail, you may find that your CloudTrail logs contain API calls related to Microservice Extractor. These calls are:

• StartGroupingAssessment
• CancelGroupingAssessment
• GetGroupingAssessment
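You can search for these calls in CloudTrail from the AWS CLI. A sketch, assuming the default event history in your current Region:

aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=StartGroupingAssessment --max-results 5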
Data collected by AWS Microservice Extractor for .NET

You can choose to share data when you first set up the Microservice Extractor application. You have the option to turn off usage data sharing by clearing the check box for usage data sharing on the AWS Microservice Extractor for .NET Settings page.

When usage data sharing is turned on, Microservice Extractor collects the following information when you onboard your source code:

• Success or failure of operations performed during onboarding, static code analysis, application build, and graph creation.
• Resources consumed during operations, such as CPU and memory usage.
• Number of nodes and dependencies.
• Number of detected islands.
• Project type, as indicated by the project GUID and the presence of files having known extensions.
• Programming language used.
• Presence of code that likely implements user-interface, data access, and/or service components.

Microservice Extractor doesn't collect proprietary information, such as source code. In the case of failure, the tool may collect stack traces to improve the product experience.

Identity and Access Management in AWS Microservice Extractor for .NET

AWS Identity and Access Management (IAM) is an AWS service that helps an administrator securely control access to AWS resources. AWS Microservice Extractor for .NET is a standalone application that does not require IAM access control to use resources. To use Microservice Extractor, your user must have the correct permissions. These permissions are provided in the Prerequisites section of this guide.

Configuration and vulnerability analysis in AWS Microservice Extractor for .NET

When AWS Microservice Extractor for .NET requires updates, you are notified and must install the latest version of the application upon restart. You maintain the system patching responsibility, per the shared responsibility model.

Security best practices

AWS Microservice Extractor for .NET provides security features to consider as you develop and implement your own security policies. The following best practice is a general guideline and doesn't represent a complete security solution. Because this best practice might not be appropriate or sufficient for your environment, treat it as a helpful consideration rather than a prescription.

Implement least privilege access

When you attach the IAM policies as inline policies to your IAM user, grant only the permissions that are required to perform the specified task.
Implementing least privilege access is fundamental to reducing security risk and the impact that could result from errors or malicious intent.

Troubleshooting AWS Microservice Extractor for .NET

The following remediation strategies can help you troubleshoot problems with AWS Microservice Extractor for .NET.

Troubleshooting topics:
• AWS profile errors
• Build failures
• Extraction errors
• Application artifact location
• Onboarding and visualization errors
• Creating groups
• Uninstalling application
• Metrics and logs collected by AWS Microservice Extractor for .NET
• Questions and feedback

AWS profile errors

Description

An error regarding the specified AWS profile is returned. For example:

The specified AWS profile is invalid or does not have permission to send metrics to AWS. Please refer to the Microservice Extractor User Guide for instructions on how to setup a valid AWS profile for use with Microservice Extractor.

Solution

• Verify that your user has the required permissions. To view the required policies, see the Microservice Extractor prerequisites. To provide access, add permissions to your users, groups, or roles:
  • Users and groups in AWS IAM Identity Center: Create a permission set. Follow the instructions in Create a permission set in the AWS IAM Identity Center User Guide.
  • Users managed in IAM through an identity provider: Create a role for identity federation. Follow the instructions in Create a role for a third-party identity provider (federation) in the IAM User Guide.
  • IAM users:
    • Create a role that your user can assume. Follow the instructions in Create a role for an IAM user in the IAM User Guide.
    • (Not recommended) Attach a policy directly to a user or add a user to a user group. Follow the instructions in Adding permissions to a user (console) in the IAM User Guide.
• Add a named AWS profile with proper credentials, if required. For more information, see Named profiles in the AWS CLI User Guide.

Build failures

Description

The build fails when you attempt to build your repositories.

Solution

Verify the following:

• The application is supported by Microservice Extractor. See Prerequisites for analysis and extraction of monolithic application to view the application prerequisites.
• You have installed MSBuild, and Microservice Extractor is pointing to it on the Settings page.
• MSBuild is working properly. Check the MSBuild log on the Application details page. If there is no preview, or choosing the log returns an error, then MSBuild is likely failing. Verify that you can build the application manually to see if MSBuild is working properly.
• Microservice Extractor is using the same MSBuild version as your installed version of Visual Studio. If the versions don't match, you can update the MSBuild version used by Microservice Extractor from the Settings page.
• Relevant switches are added to the .csproj file so that the default MSBuild operation can use them. By default, Microservice Extractor appends the RestorePackagesConfig and restore switches to restore NuGet packages during the build.

Extraction errors

Description

When you attempt to extract segments of your code as independent services, an error is returned.

Solution

For the following error:

Command failed: failed to run build command on ...\MyApp.sln: failed to execute command: exec: "msbuild": executable file not found in %PATH%

• Verify that you have installed MSBuild and that Microservice Extractor is pointing to it on the Settings page.
• Add MSBuild to the system PATH environment variable.

Check logs

• You can determine the cause of most extraction failures by viewing %USERPROFILE%\AppData\Roaming\ServiceExtract\logs\Extraction\extract*log. Generally, the last line in this file contains a message with the cause of the error.
• If the error message in the log refers to a build failure, check %USERPROFILE%\AppData\Roaming\ServiceExtract\logs\msbuild.log for details. Note that the extraction builds both the new service and the modified original application. The output of each of these builds is sent to msbuild.log.
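For the PATH error above, a quick way to confirm and fix the problem in a PowerShell session is sketched below; the MSBuild directory shown is illustrative and depends on your Visual Studio edition and version.

# Check whether msbuild resolves in the current session.
msbuild -version

# If it does not, append your MSBuild directory to PATH for this session and retry.
$env:PATH += ';C:\Program Files\Microsoft Visual Studio\2022\Community\MSBuild\Current\Bin'
msbuild -version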
Application artifact location

To determine the application artifact file location, do the following:

1. Find the application ID in the Summary section of the Application details page.
2. The directory of the corresponding application is: C:\Users\<username>\AppData\Roaming\ServiceExtract\version-1\cache\version-<versionNumber>\<applicationId>

Onboarding and visualization errors

If you encounter problems onboarding an application and viewing the visualization, try the following solutions:

1. The metrics policy may not be properly configured. Check the Electron main application log for the last succeeded state and high-level errors. If the metrics policy is not properly configured, the log file should display many error messages about metrics.
2. Check the directory of the corresponding application ID for the following intermediate artifact files: edges.csv and vertices.csv inside the data-extraction-output folder.

Creating groups

If you experience problems when creating groups, perform the following steps:

1. Check the directory for the corresponding application ID for the following intermediate artifact file: tags.csv inside the tags-data folder.
2. Verify that the groupings are correctly reflected in the file.
3. Choose Reset view to remove all of the groups, and try again.
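To spot-check the intermediate artifacts mentioned above, you can inspect them directly from PowerShell. This is a sketch; the path segments are placeholders, and it assumes the .csv files contain a header row.

# Locate the artifacts for a specific application ID.
$app = 'C:\Users\<username>\AppData\Roaming\ServiceExtract\version-1\cache\version-<versionNumber>\<applicationId>'
Get-ChildItem "$app\data-extraction-output", "$app\tags-data"

# Preview the first few grouping rows.
Import-Csv "$app\tags-data\tags.csv" | Select-Object -First 5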
Questions and feedback
If you have questions that are not addressed in the AWS Microservice Extractor for .NET technical documentation, contact <[email protected]>. You can also provide feedback by choosing Feedback in the upper right-hand corner of this page.

AWS Microservice Extractor for .NET version history
The following entries describe the released versions of AWS Microservice Extractor for .NET.

Version 2.1.6 (March 6, 2025)
• Fixed bug to properly enforce that only supported versions of .NET are targeted in input source code.
• Cleaned up code for simplicity and robustness.

Version 2.1.4 (September 9, 2024)
• Fixed bug where log data was not uploaded when data sharing was enabled by customer.

Version 2.1.3 (May 20, 2024)
• Fixed the error when retrying API calls: retryCondition is not a function.

Version 2.1.2 (January 25, 2024)
• Improved the responsiveness of the visualization canvas where there are 100+ nodes on a single layer.

Version 2.1.1 (December 13, 2023)
• Added APIs tab to view solution APIs.

Version 2.1.0 (November 18, 2023)
• Added support for .NET 8.

Version 2.0.0 (November 9, 2023)
• Integrated porting assessment and porting actions.
• Added APIs tab to view solution APIs.
• Added porting assessment overview and actions to the Application Details tab.

Version 1.9.2 (September 14, 2023)
• Added support for HTTPS inspecting proxies.
• Fixed an issue where projects were not getting detected due to a flat folder structure.

Version 1.9.1 (August 30, 2023)
• Improved the responsiveness of visualization when the user double-clicks on a node.
• Fixed bugs in sending metrics.

Version 1.9.0 (August 25, 2023)
• Support for visualizing very large applications with up to 50,000 classes through a scalable backend.
• Updated IAM policy.
• Support for server-side encryption with choices of Amazon S3 managed encryption keys (SSE-S3) and AWS Key Management Service keys (SSE-KMS).

Version 1.8.6 (July 26, 2023)
• The embedded strangler fig porting capabilities now use Porting Assistant for .NET version 2.13.
• Performance improvements for the application analysis workflow.

Version 1.8.5 (July 13, 2023)
• Updated look for select UI elements.
• Bug fixes and improvements for visualization.
• Fixed an issue where Microservice Extractor may not detect that credentials were invalid/expired.
• Fixed an issue with the notification flash bar when .NET Core portability assessment fails.

Version 1.8.4 (June 19, 2023)
• New look for incoming and outgoing dependencies in visualization.
• Unified the background color/style when dialogs appear.
• Added ability to filter by Groups in addition to Projects and Namespaces.
• Updates to the visualization legend to reflect recent changes in the UI.
• The user can now visualize an application even if .NET Core portability assessment is still in progress.

Version 1.8.3 (June 6, 2023)
• Update maximum node/edge count limits and disable automated grouping if they are exceeded.

Version 1.8.2 (May 22, 2023)
• Remove the 'Application Type' column on the applications page.
• Send logs when Microservice Extractor is closed.
• Bug fixes and improvements in automated grouping.

Version 1.8.1 (May 2, 2023)
• Bug fixes and improvements on the application details page.
• Bug fixes with canvas.

Version 1.8.0 (April 24, 2023)
• Added node filtering update to include filtering by project or namespace.
• Added grouping options to include projects and groups, namespaces, and class.

Version 1.7.2 (March 8, 2023)
• Disable extraction of empty groups.
• Bug fixes and improvements with canvas.
• Make automated grouping an async process.
• Migrate React Router from v5 to v6.

Version 1.7.1 (January 25, 2023)
• Re-implement packaging .net in Telemetry.exe.
• Update the interval of checking automated grouping job status to 60 seconds. Update the total time of checking automated grouping job status to 2 hours.
• Set the release message when needed.
• Fixed extraction failure due to an accented character in the path.
Version 1.7.0 (November 18, 2022)
• Added AI-powered automated refactoring recommendations.

Version 1.6.0 (November 15, 2022)
• Listed detected application features in the onboarding report.
• Bundled Porting Assistant for .NET, and added the extract and port operation.
• Added the option to extract code without creating HTTP endpoints.
• Visualization updates for ASP.NET Web Forms and WCF applications.
Bug fixes
• Fixed onboarding errors for solutions containing invalid .NET projects (for example, C++ and empty projects).
• Fixed intermittent extraction status bar update issue.
• Addressed some bugs in generated code that implements HTTP communications between monolith and microservice.

Version 1.5.0 (August 2, 2022)
• Added heuristics-based refactoring recommendations.
• Added support for multiple canvases.
• Removed building process from application onboarding.
• Visualization identifies node types using icons.
• Visualization supports any C# application (not only MVC applications).
Bug fixes
• Improved error messaging.
• Fixed various issues related to the visualization UI.

Version 1.3.1 (April 7, 2022)
• Expanded support for .NET versions.
• Added support for custom MSBuild versions and arguments for each onboarded application.
• Added support for configuring an AWS profile using short-term credentials and existing AWS CLI/SDK credentials.
Bug fixes
• Fixed issues with island detection for some large applications.
• Fixed issues with changing the MSBuild path across software relaunches.
• Improved messaging for onboarding and extraction failures.
• Fixed issue with extraction.

Version 1.2.1 (January 13, 2022)
• Added support for short-term credentials.

Version 1.1.0 (December 27, 2021)
Bug fixes
• Provided a way to avoid long path errors when onboarding applications.
• Fixed intermittent, uncaught Java exceptions.

Version 1.0.1 (December 13, 2021)
Bug fix
• Mitigated Apache Log4j security vulnerability (CVE-2021-44228).

Version 1.0.0 (November 30, 2021)
• Initial release.

AWS Microservice Extractor for .NET User Guide document history
The following entries describe the documentation for this release of AWS Microservice Extractor for .NET.
• API version: latest
• Latest documentation update: March 6, 2025

Filtering nodes by project and namespace (April 17, 2023)
You can apply filters to the visualization to see only namespace or project nodes.

IAM best practices updates (February 15, 2023)
Updated guide to align with the IAM best practices. For more information, see Security best practices in IAM.
Automated refactoring recommendations (November 18, 2022)
You can start refactoring an older monolithic application with no familiarity with its original architecture or retrofitted features.

.NET 7 support (November 15, 2022)
You can visualize and extract .NET version 7.0 applications using AWS Microservice Extractor for .NET.

AWS Microservice Extractor for .NET integration with Porting Assistant for .NET (November 15, 2022)
You can leverage the functionality of Porting Assistant for .NET during the porting and extraction of your monolithic applications.

AWS Microservice Extractor for .NET general availability (November 30, 2021)
You can reduce the time and effort required to break down large, monolithic applications running on the AWS Cloud or on premises into smaller, independent services.
AWS Deadline Cloud: User Guide

Copyright © 2025 Amazon Web Services, Inc. and/or its affiliates. All rights reserved.

Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to, or sponsored by Amazon.

Table of Contents
What is Deadline Cloud?
  Features of Deadline Cloud
  Concepts and terminology
  Getting started with Deadline Cloud
  Accessing Deadline Cloud
  Related services
  How Deadline Cloud works
    Permissions in Deadline Cloud
    Software support with Deadline Cloud
Getting started
  Set up your AWS account
  Set up your monitor
    Create your monitor
    Define farm details
    Define queue details
    Define fleet details
    Review and create
  Set up the submitter
    Step 1: Install the Deadline Cloud submitter
    Step 2: Install and set up Deadline Cloud monitor
    Step 3: Launch the Deadline Cloud submitter
    Supported submitters
Using the monitor
  Share the Deadline Cloud monitor URL
  Open the Deadline Cloud monitor
  View queue and fleet details
  Manage jobs, steps, and tasks
  View job details
    Archive a job
    Requeue a job
    Resubmit a job
  View a step
  View a task
  View logs
  Download finished output
Farms
  Create a farm
Queues
  Create a queue
  Create a queue environment
    Default Conda queue environment
  Associate a queue and fleet
Fleets
  Service-managed fleets
    Create an SMF
    Use a GPU accelerator
    Software licenses
    VFX platform
  Customer-managed fleets
Managing users
  Manage users for your monitor
  Manage users for farms
Jobs
  Using a submitter
    Shared job settings tab
    Job-specific settings tab
    Job attachments tab
    Host requirements tab
  Processing jobs
  Monitoring jobs
Storage
  Job attachments
    Encryption for job attachment S3 buckets
    Managing job attachments in S3 buckets
    Virtual file system
Track spending and usage
  Cost assumptions
  Control costs with a budget
    Prerequisite
    Open the Deadline Cloud budget manager
    Create a budget
    View a budget
    Edit a budget
    Deactivate a budget
    Monitor a budget with EventBridge events
  Track usage and costs
    Prerequisite
    Open the usage explorer
    Use the usage explorer
  Cost management
    Cost management best practices
Security
  Data protection
    Encryption at rest
    Encryption in transit
    Key management
    Inter-network traffic privacy
    Opt out
  Identity and Access Management
    Audience
    Authenticating with identities
    Managing access using policies
    How Deadline Cloud works with IAM
    Identity-based policy examples
    AWS managed policies
    Troubleshooting
  Compliance validation
  Resilience
  Infrastructure security
  Configuration and vulnerability analysis
  Cross-service confused deputy prevention
  AWS PrivateLink
    Considerations
    Deadline Cloud endpoints
    Create endpoints
  Security best practices
    Data protection
    IAM permissions
    Run jobs as users and groups
    Networking
    Job data
    Farm structure
    Job attachment queues
    Custom software buckets
    Worker hosts
    Host configuration script
    Workstations
    Verify downloaded software
Monitoring
Quotas
AWS CloudFormation resources
  Deadline Cloud and AWS CloudFormation templates
  Learn more about AWS CloudFormation
Troubleshooting
  Why can a user not see my farm, fleet, or queue?
    User access
  Why are workers not picking up my jobs?
    Fleet role configuration
  Why is my worker stuck running?
    Worker stuck exiting OpenJD environment
  Troubleshooting jobs
    Why did creating my job fail?
    Why is my job not compatible?
    Why is my job stuck in ready?
    Why did my job fail?
    Why is my step pending?
  Additional resources
Document history
AWS Glossary

What is AWS Deadline Cloud?
Deadline Cloud is an AWS service you can use to create and manage rendering projects and jobs on Amazon Elastic Compute Cloud (Amazon EC2) instances directly from digital content creation pipelines and workstations. Deadline Cloud provides console interfaces, local applications, command line tools, and an API. With Deadline Cloud, you can create, manage, and monitor farms, fleets, jobs, user groups, and storage. You can also specify hardware capabilities, create environments for specific workloads, and integrate the content creation tools that your production requires into your Deadline Cloud pipeline.

Deadline Cloud provides a unified interface to manage all of your rendering projects in one place. You can manage users, assign projects to them, and grant permissions for job roles.

Topics
• Features of Deadline Cloud
• Concepts and terminology for Deadline Cloud
• Getting started with Deadline Cloud
• Accessing Deadline Cloud
• Related services
• How Deadline Cloud works

Features of Deadline Cloud
Here are some of the key ways Deadline Cloud can help you run and manage visual compute workloads:
• Quickly create your farms, queues, and fleets. Monitor their status, and gain insights into the operation of your farm and jobs.
• Centrally manage Deadline Cloud users and groups, and assign permissions.
• Manage sign-in security for project users and external identity providers with AWS IAM Identity Center.
• Securely manage access to project resources with AWS Identity and Access Management (IAM) policies and roles.
• Use tags to organize and quickly find project resources.
• Manage project resource usage and estimated costs for your project.
• Provide a wide range of compute management options to support rendering in the cloud or on premises.

Concepts and terminology for Deadline Cloud
To help you get started with AWS Deadline Cloud, this topic explains some of its key concepts and terminology.

Budget manager
Budget manager is part of the Deadline Cloud monitor. Use the budget manager to create and manage budgets.
You can also use it to limit activities to stay within budget.

Deadline Cloud Client Library
The Client Library includes a command line interface and library for managing Deadline Cloud. Functionality includes submitting job bundles based on the Open Job Description specification to Deadline Cloud, downloading job attachment outputs, and monitoring your farm using the command line interface.

Digital content creation application (DCC)
Digital content creation applications (DCCs) are third-party products where you create digital content. Examples of DCCs are Maya, Nuke, and Houdini. Deadline Cloud provides integrated job submitter plugins for specific DCCs.

Farm
A farm is where your project resources are located. It consists of queues and fleets.

Fleet
A fleet is a group of worker nodes that do the rendering. Worker nodes process jobs. A fleet can be associated with multiple queues, and a queue can be associated with multiple fleets.

Job
A job is a rendering request. Users submit jobs. Jobs contain specific job properties that are outlined as steps and tasks.

Job attachments
A job attachment is a Deadline Cloud feature that you can use to manage inputs and outputs for jobs. Job files are uploaded as job attachments during the rendering process. These files can be textures, 3D models, lighting rigs, and other similar items.

Job priority
Job priority is the approximate order that Deadline Cloud processes a job in a queue. You can set the job priority between 1 and 100; jobs with a higher priority number are generally processed first. Jobs with the same priority are processed in the order received.
Job properties
Job properties are settings that you define when submitting a render job. Some examples include frame range, output path, job attachments, renderable camera, and more. The properties vary based on the DCC that the render is submitted from.

Job template
A job template defines the runtime environment and all processes that run as part of a Deadline Cloud job.

Queue
A queue is where submitted jobs are located and scheduled to be rendered. A queue must be associated with a fleet to create a successful render. A queue can be associated with multiple fleets.

Queue-fleet association
When a queue is associated with a fleet, there is a queue-fleet association. Use an association to schedule workers from a fleet to jobs in that queue. You can start and stop associations to control scheduling of work.

Session
A session is an ephemeral runtime environment on a worker host created to run a set of tasks from the same job. The session ends when the worker host finishes running tasks for that job. The session provides a way to configure the environment with resources shared across multiple task runs, such as defining environment variables or starting a background process or container.

Session action
A session action is a discrete unit of work executed by a worker within a session. It can encompass the core run operations of a task, or it might include preparatory steps such as environment setup and post-execution processes like tear-down and cleanup.

Step
A step is one particular process to run in the job.

Deadline Cloud submitter
A Deadline Cloud submitter is a digital content creation (DCC) plugin. Artists use it to submit jobs from a third-party DCC interface that they are familiar with.

Tags
A tag is a label that you can assign to an AWS resource. Each tag consists of a key and an optional value that you define. With tags, you can categorize your AWS resources in different ways. For example, you could define a set of tags for your account's Amazon EC2 instances that help you track each instance's owner and stack level. You can also categorize your AWS resources by purpose, owner, or environment. This approach is useful when you have many resources of the same type. You can quickly identify a specific resource based on the tags that you've assigned to it.

Task
A task is a single component of a render step.

Usage-based licensing (UBL)
Usage-based licensing (UBL) is an on-demand licensing model that is available for select third-party products. This model is pay as you go, and you are charged for the number of hours and minutes that you use.

Usage explorer
Usage explorer is a feature of the Deadline Cloud monitor. It provides an approximate estimate of your costs and usage.

Worker
Workers belong to fleets and run Deadline Cloud assigned tasks to complete steps and jobs.
Workers store the logs from task operations in Amazon CloudWatch Logs. Workers can also use the job attachments feature to sync inputs and outputs to an Amazon Simple Storage Service (Amazon S3) bucket.

Getting started with Deadline Cloud
Use Deadline Cloud to quickly create a render farm with default settings and resources, such as Amazon EC2 instance configuration and Amazon Simple Storage Service (Amazon S3) buckets. You can also define the settings and resources when you create a render farm. This method takes more time than using the default settings and resources but gives you more control.

After you're familiar with Deadline Cloud Concepts and terminology, see Getting started for step-by-step instructions for creating your farm, adding users, and links to helpful information.

Accessing Deadline Cloud
You can access Deadline Cloud in any of the following ways:
• Deadline Cloud console – Access the console in a browser to create a farm and its resources, and manage user access. For more information, see Getting started.
• Deadline Cloud monitor – Manage your render jobs, including updating priorities and job statuses. Monitor your farm and view logs and job status.
For users with Owner permissions, the Deadline Cloud monitor also provides access to explore usage and create budgets. The Deadline Cloud monitor is available as both a web browser and a desktop application.
• AWS SDK and AWS CLI – Use the AWS Command Line Interface (AWS CLI) to call the Deadline Cloud API operations from the command line on your local system. For more information, see Set up a developer workstation.

Related services
Deadline Cloud works with the following AWS services:
• Amazon CloudWatch – With CloudWatch, you can monitor your projects and associated AWS resources. For more information, see Monitoring with CloudWatch in the Deadline Cloud Developer Guide.
• Amazon EC2 – This AWS service provides virtual servers that run your applications in the cloud. You can configure your projects to use Amazon EC2 instances for your workloads. For more information, see Amazon EC2 instances.
• Amazon EC2 Auto Scaling – With Auto Scaling, you can automatically increase or decrease the number of instances as the demand on your instances changes. Auto Scaling helps to make sure that you're running your desired number of instances, even if an instance fails. If you enable Auto Scaling with Deadline Cloud, instances that are launched by Auto Scaling are automatically registered with the workload. Likewise, instances that are terminated by Auto Scaling are automatically de-registered from the workload. For more information, see the Amazon EC2 Auto Scaling User Guide.
• AWS PrivateLink – AWS PrivateLink provides private connectivity between virtual private clouds (VPCs), AWS services, and your on-premises networks, without exposing your traffic to the public internet. AWS PrivateLink makes it easy to connect services across different accounts and VPCs. For more information, see AWS PrivateLink.
• Amazon S3 – Amazon S3 is an object storage service. Deadline Cloud uses Amazon S3 buckets to store job attachments. For more information, see the Amazon S3 User Guide.
• IAM Identity Center – IAM Identity Center is an AWS service where you can provide users with single sign-on access to all their assigned accounts and applications from one place. You can also centrally manage multi-account access and user permissions to all of your accounts in AWS Organizations. For more information, see AWS IAM Identity Center FAQs.

How Deadline Cloud works
With Deadline Cloud, you can create and manage rendering projects and jobs directly from digital content creation (DCC) pipelines and workstations. You submit jobs to Deadline Cloud using the AWS SDK, AWS Command Line Interface (AWS CLI), or Deadline Cloud job submitters. Deadline Cloud supports the Open Job Description (OpenJD) specification for job templates. For more information, see Open Job Description on the GitHub website.
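As an illustration of the SDK/CLI path, the commands below are a hedged sketch: the IDs and the bundle directory name are placeholders, the aws deadline namespace requires a recent AWS CLI, and the deadline command comes from the separately installed Deadline Cloud client library:

    # List the farms your credentials can access (AWS CLI).
    aws deadline list-farms

    # Submit an OpenJD job bundle to a farm and queue (Deadline Cloud client CLI).
    deadline bundle submit my_job_bundle --farm-id farm-EXAMPLE11111 --queue-id queue-EXAMPLE11111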
Deadline Cloud provides job submitters. A job submitter is a DCC plugin for submitting render jobs from a third-party DCC interface, such as Maya or Nuke. With a submitter, artists can submit rendering jobs from a third-party interface to Deadline Cloud, where project resources are managed and jobs are monitored, all in one location.

With a Deadline Cloud farm, you can create queues and fleets, manage users, and manage project resource usage and costs. A farm consists of queues and fleets. A queue is where submitted jobs are located and scheduled to be rendered. A fleet is a group of worker nodes that run tasks to complete jobs. A queue must be associated with a fleet so that the jobs can render. A single fleet can support multiple queues, and a queue can be supported by multiple fleets. Jobs consist of steps, and each step consists of specific tasks. With the Deadline Cloud monitor, you can access statuses, logs, and other troubleshooting metrics for jobs, steps, and tasks.
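Because a queue renders only after it is associated with a fleet, the association itself is a resource you can create directly. A hedged AWS CLI sketch, with all IDs as placeholders:

    aws deadline create-queue-fleet-association --farm-id farm-EXAMPLE11111 --queue-id queue-EXAMPLE11111 --fleet-id fleet-EXAMPLE11111

You can later stop scheduling from that queue to that fleet without deleting either resource, which matches the start/stop association behavior described above.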
Permissions in Deadline Cloud
Deadline Cloud supports the following:
• Managing access to its API operations using AWS Identity and Access Management (IAM)
• Managing access of workforce users using an integration with AWS IAM Identity Center

Before anyone can work on a project, they must have access to that project and the associated farm. Deadline Cloud is integrated with IAM Identity Center to manage workforce authentication and authorization. Users can be added directly to IAM Identity Center, or permissions can be connected to your existing identity provider (IdP) such as Okta or Active Directory.

IT administrators can grant access permissions to users and groups at different levels. Each subsequent level includes the permissions for the previous levels. The following list describes the four access levels from the lowest level to the highest level:
• Viewer – Permission to see resources in the farms, queues, fleets, and jobs they have access to. A viewer can't submit or make changes to jobs.
• Contributor – Same as a viewer, but with permission to submit jobs to a queue or farm.
• Manager – Same as a contributor, but with permission to edit jobs in queues they have access to, and grant permissions on resources that they have access to.
• Owner – Same as a manager, but can view and create budgets and see usage.

Note
These permissions don't give users access to the AWS Management Console or permission to modify Deadline Cloud infrastructure.

Users must have access to a farm before they can access the associated queues and fleets. User access is assigned to queues and fleets separately within a farm. You can add users as individuals or as part of a group. Adding groups to a farm, fleet, or queue can make it easier to manage access permissions for large groups of people. For example, if you have a team that is working on a specific project, you can add each of the team members to a group. Then, you can grant access permissions to the entire group for the corresponding farm, fleet, or queue.

Software support with Deadline Cloud
Deadline Cloud works with any software application that can be run from a command line interface and controlled by using parameter values. Deadline Cloud supports the OpenJD specification for describing work as jobs with software script steps that are parameterized (such as across a frame range) into tasks. Assemble OpenJD job instructions into job bundles with Deadline Cloud tools and features to create, run, and license the steps from a third-party software application.

Jobs need licensing to render. Deadline Cloud offers usage-based licensing (UBL) for a selection of software application licenses that is billed by the hour in minute increments based on usage. With Deadline Cloud, you can also use your own software licenses if you like. If a job can't access a license, it doesn't render and produces an error that displays in the task log in the Deadline Cloud monitor.
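To make the parameterization described above concrete: an OpenJD step parameterized across a frame range expands into one task per frame, and each task runs the application's command line with the frame value substituted. The renderer name and flags below are purely hypothetical placeholders, not a real Deadline Cloud command:

    # Hypothetical per-task commands for a step parameterized over frames 1-3:
    myrenderer --scene shot010.blend --frame 1 --output out/frame_0001.png
    myrenderer --scene shot010.blend --frame 2 --output out/frame_0002.png
    myrenderer --scene shot010.blend --frame 3 --output out/frame_0003.png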
Getting started with Deadline Cloud
To create a farm in AWS Deadline Cloud, you can use either the Deadline Cloud console or the AWS Command Line Interface (AWS CLI). Use the console for a guided experience creating the farm, including queues and fleets. Use the AWS CLI to work directly with the service, or for developing your own tools that work with Deadline Cloud (a minimal CLI sketch follows the topic list below).

To create a farm and use the Deadline Cloud monitor, set up your account for Deadline Cloud. You only need to set up the Deadline Cloud monitor infrastructure once per account. From your farm, you can manage your project, including user access to your farm and its resources. To create a farm without setting up the Deadline Cloud monitor infrastructure, set up a developer workstation for Deadline Cloud.

To create a farm with minimal resources to accept jobs, select Quickstart on the console home page. Set up the Deadline Cloud monitor walks you through those steps. These farms start with a queue and a fleet that are automatically associated. This approach is a convenient way to create sandbox-style farms to experiment in.

Topics
• Set up your AWS account
• Set up the Deadline Cloud monitor
• Set up Deadline Cloud submitters
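If you take the AWS CLI route, creating a minimal farm is a single call. This is a hedged sketch: the display name is an arbitrary example, and the command returns the farm ID that later queue and fleet calls require:

    aws deadline create-farm --display-name "MyRenderFarm"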
Set up your AWS account
Set up your AWS account to use AWS Deadline Cloud. If you do not have an AWS account, complete the following steps to create one.

To sign up for an AWS account
1. Open https://portal.aws.amazon.com/billing/signup.
2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call and entering a verification code on the phone keypad.

When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. When you first create an AWS account, you begin with one sign-in identity that has complete access to all AWS services and resources in the account. This identity is called the AWS account root user and is accessed by signing in with the email address and password that you used to create the account.

Important
We strongly recommend that you don't use the root user for your everyday tasks. Safeguard your root user credentials and use them to perform the tasks that only the root user can perform. For the complete list of tasks that require you to sign in as the root user, see Tasks that require root user credentials in the IAM User Guide.

Set up the Deadline Cloud monitor
To get started, you'll need to create your Deadline Cloud monitor infrastructure and define your farm. You can also perform additional, optional steps, including adding groups and users, choosing a service role, and adding tags to your resources.

Step 1: Create your monitor
The Deadline Cloud monitor uses AWS IAM Identity Center to authorize users. The IAM Identity Center instance that you use for Deadline Cloud must be in the same AWS Region as the monitor. If your console is using a different Region when you create the monitor, you'll get a reminder to change to the IAM Identity Center Region.

Your monitor's infrastructure consists of the following components:
• Monitor name: The Monitor name is how you can identify your monitor (for example, AnyCompany monitor). Your monitor's name also determines your monitor URL.
Important
You can't change the monitor name after you finish setting up.

• Monitor URL: You can access your monitor by using the Monitor URL. The URL is based on the Monitor name (for example, https://anycompanymonitor.awsapps.com).

Important
You can't change the Monitor URL after you finish setting up.

• AWS Region: The AWS Region is the physical location for a collection of AWS data centers. When you set up your monitor, the Region defaults to the closest location to you. We recommend changing the Region so it is located closest to your users. This reduces lag and improves data transfer speeds. AWS IAM Identity Center must be enabled in the same AWS Region as Deadline Cloud.

Important
You can't change your Region after you finish setting up Deadline Cloud.

Complete the tasks in this section to configure your monitor's infrastructure.

To configure your monitor's infrastructure
1. Sign in to the AWS Management Console to start the Welcome to Deadline Cloud setup, then choose Next.
2. Enter the Monitor name (for example, AnyCompany Monitor).
3. (Optional) To change the Monitor URL, choose Edit URL.
4. (Optional) To change the AWS Region so it's closest to your users, choose Change Region.
   a. Select the Region closest to your users.
   b. Choose Apply Region.
5. (Optional) To further customize your monitor setup, select Additional settings.
6. If you are ready for Step 2: Define farm details, choose Next.

Additional settings
Deadline Cloud setup includes additional settings. With these settings, you can view all the changes Deadline Cloud setup makes to your AWS account, configure your monitor user role, and change your encryption key type.

AWS IAM Identity Center
AWS IAM Identity Center is a cloud-based single sign-on service for managing users and groups. IAM Identity Center can also be integrated with your enterprise single sign-on (SSO) provider so that users can sign in with their company account. Deadline Cloud enables IAM Identity Center by default, and it is required to set up and use Deadline Cloud. The IAM Identity Center instance that you use for Deadline Cloud must be in the same AWS Region as the monitor. For more information, see What is AWS IAM Identity Center.
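For reference, the console flow in the numbered steps above corresponds to the CreateMonitor API, which also takes the IAM Identity Center instance and the monitor user role discussed in these settings. A hedged AWS CLI sketch in which the subdomain and every ARN are placeholders:

    aws deadline create-monitor --display-name "AnyCompany Monitor" --subdomain anycompanymonitor --identity-center-instance-arn arn:aws:sso:::instance/ssoins-EXAMPLE11111 --role-arn arn:aws:iam::111122223333:role/DeadlineMonitorUserRole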
Configure service access role

An AWS service can assume a service role to perform actions on your behalf. Deadline Cloud requires a monitor user role to give users access to resources in your monitor.

You can attach AWS Identity and Access Management (IAM) managed policies to the monitor user role. The policies give users permissions to perform certain actions, such as creating jobs in a specific Deadline Cloud application. Because applications depend on specific conditions in the managed policy, if you don't use the managed policies, the application might not perform as expected.

You can change the monitor user role at any time after you complete setup. For more information about user roles, see IAM Roles.

The following tabs contain instructions for two different use cases. To create and use a new service role, choose the New service role tab. To use an existing service role, choose the Existing service role tab.

New service role

To create and use a new service role

1. Select Create and use a new service role.
2. (Optional) Enter a Service user role name.
3. Choose View permission details for more information about the role.

Existing service role

To use an existing service role

1. Select Use an existing service role.
2. Open the dropdown list to choose an existing service role.
3. (Optional) Choose View in IAM console for more information about the role.

Step 2: Define farm details

Back on the Deadline Cloud console, complete the following steps to define the farm details.

1. In Farm details, add a Name for the farm.
2. For Description, enter the farm description. A description can help you identify your farm's purpose.
3. Create a group and add users for your farm. After you set up your farm, you can use the Deadline Cloud management console to add or change groups and users.
4. (Optional) Choose Additional farm settings.
   a. (Optional) By default, your data is encrypted with a key that AWS owns and manages for your security. You can choose Customize encryption settings (advanced) to use an existing key or to create a new one that you manage. If you choose to customize encryption settings using the checkbox, enter an AWS KMS key ARN, or create a new AWS KMS key by choosing Create new KMS key.
   b. (Optional) Choose Add new tag to add one or more tags to your farm.
5. Choose one of the following options:
   • Select Skip to Review and Create to review and create your farm.
   • Select Next to proceed to additional, optional steps.

(Optional) Step 3: Define queue details

The queue is responsible for tracking progress and scheduling work for your jobs.

1. Starting in Queue details, provide a Name for the queue.
2. For Description, enter the queue description. A clear description can help you quickly identify your queue's purpose.
3. For Job attachments, you can either create a new Amazon S3 bucket or choose an existing Amazon S3 bucket. If you don't have an existing Amazon S3 bucket, you'll need to create one.
   a. To create a new Amazon S3 bucket, select Create new job bucket. You can define the name of the job bucket in the Root prefix field. We recommend calling the bucket deadlinecloud-job-attachments-[MONITORNAME]. You can only use lowercase letters and dashes. No spaces or special characters.
   b. To search for and select an existing Amazon S3 bucket, select Choose from existing Amazon S3 bucket. Then, search for an existing bucket by choosing Browse S3. When the list of your available Amazon S3 buckets displays, select the Amazon S3 bucket you want to use for your queue.
4. (Optional) Choose Additional farm settings.
   a. If you are using customer-managed fleets, select Enable association with customer-managed fleets.
      i. For customer-managed fleets, add a Queue-configured user, and then set the POSIX and/or Windows credentials. Alternatively, you can bypass the run-as functionality by selecting the checkbox.
      ii. If you want to set a budget for a queue, choose Require a budget for this queue. If you require a budget, you must create the budget using the Deadline Cloud console to schedule jobs in the queue.
   b. Your queue requires permission to access Amazon S3 on your behalf. We recommend you create a new service role for every queue.
      i. For a new role, complete the following steps.
         A. Select Create and use a new service role.
         B. Enter a Role name for your queue role or use the provided role name.
         C. (Optional) Add a queue role Description.
         D. You can view the IAM permissions for the queue role by choosing View permission details.
      ii. Alternatively, you can select an existing service role.
   c. (Optional) Add environment variables for the queue environment using name and value pairs.
   d. (Optional) Add tags for the queue using key and value pairs.

Choose one of the following options:
• Select Skip to Review and Create to review and create your farm.
• Select Next to proceed to additional, optional steps.

(Optional) Step 4: Define fleet details

A fleet allocates workers to run your rendering tasks. If you need a fleet for your rendering tasks, check the box for Create fleet.

1. Fleet details
   a. Provide both a Name and optional Description for your fleet.
   b. Review the fleet type and operating system for awareness.
2. In the Instance market type section, choose either Spot Instance or On-demand Instance. Amazon EC2 On-Demand Instances provide faster availability, and Amazon EC2 Spot Instances are better for cost-saving efforts.
3. For Auto scaling the number of instances in your fleet, choose both a Minimum number of instances and a Maximum number of instances. We strongly recommend always setting the minimum number of instances to 0 to avoid incurring extra costs.
4. Review the worker capabilities for awareness.
5. (Optional) Choose Additional fleet settings.
   a. Your fleet requires permission to write to CloudWatch on your behalf. We recommend you create a new service role for every fleet.
      i. For a new role, complete the following steps.
         A. Select Create and use a new service role.
         B. Enter a Role name for your fleet role or use the provided role name.
         C. (Optional) Add a fleet role Description.
         D. To view the IAM permissions for the fleet role, choose View permission details.
      ii. Alternatively, you can use an existing service role.
   b. (Optional) Add tags for the fleet using key and value pairs.

After you enter all the fleet details, choose Next.

Step 5: Review and create

Review the information entered to create your farm. When you're ready, choose Create farm. The progress of your farm's creation is displayed on the Farms page. A success message displays when your farm is ready for use.
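At this point the farm, queue, and fleet exist in your account. If you want to confirm the setup from a terminal, the AWS CLI can list the new resources. This is a hedged sketch; the resource IDs in your account will differ:

# List the farms visible to your credentials, then inspect the new farm's queues and fleets.
aws deadline list-farms
aws deadline list-queues --farm-id farm-EXAMPLE11111111111111111111111111
aws deadline list-fleets --farm-id farm-EXAMPLE11111111111111111111111111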
Set up Deadline Cloud submitters

This process is for administrators and artists who want to install, set up, and launch the AWS Deadline Cloud submitter. A Deadline Cloud submitter is a digital content creation (DCC) plugin. Artists use it to submit jobs from a third-party DCC interface that they're familiar with.

Note
This process must be completed on all workstations that artists will use for submitting renders. Each workstation must have the DCC installed before installing the corresponding submitter. For example, if you want to download the Deadline Cloud submitter for Blender, you need to have Blender already installed on your workstation.

We provide reasonable defaults for keeping workstations secure. For more information about securing your workstation, see Security best practices - workstations.

Topics
• Step 1: Install the Deadline Cloud submitter
• Step 2: Install and set up Deadline Cloud monitor
• Step 3: Launch the Deadline Cloud submitter
• Supported submitters

Step 1: Install the Deadline Cloud submitter

The following sections guide you through the steps to install the Deadline Cloud submitter.

Download the submitter installer

Before you can install the Deadline Cloud submitter, you must download the submitter installer.

1. Sign in to the AWS Management Console and open the Deadline Cloud console.
2. From the side navigation pane, choose Downloads.
3. From the Deadline Cloud submitter installer section, select the installer for your computer's operating system, and then choose Download.
4. (Optional) Verify the authenticity of downloaded software.

Install the Deadline Cloud submitter

With the installer, you can install the following submitters:

| Software | Supported versions | Windows installer | Linux installer | macOS installer |
|---|---|---|---|---|
| Adobe After Effects | 2024 - 2025 | Included | Not included | Included |
| Autodesk Arnold for Maya | 7.1 - 7.2 | Included | Included | Included |
| Autodesk Maya | 2023 - 2025 | Included | Included | Included |
| Blender | 3.6 - 4.2 | Included | Included | Included |
| Foundry Nuke | 15 | Included | Included | Not included |
| KeyShot Studio | 2023 - 2024 | Included | Not included | Included |
| Maxon Cinema 4D | 2024 - 2025 | Included | Not included | Included |
| SideFX Houdini | 19.5 - 20.5 | Included | Included | Included |

You can install other submitters not listed here. We use Deadline Cloud libraries to build submitters. Some of the submitters include Unreal Engine, 3ds Max, and Rhino. You can find the source code for these libraries and submitters in the aws-deadline GitHub organization.

Windows

1. In a file browser, navigate to the folder where the installer downloaded, and then select DeadlineCloudSubmitter-windows-x64-installer.exe.
   a. If a Windows protected your PC pop-up displays, choose More info.
   b. Choose Run anyway.
2. After the AWS Deadline Cloud Submitter Setup Wizard opens, choose Next.
3. Choose the installation scope by completing one of the following steps:
   • To install for only the current user, choose User.
   • To install for all users, choose System. If you choose System, you must exit the installer and re-run it as an administrator by completing the following steps:
     a. Right-click DeadlineCloudSubmitter-windows-x64-installer.exe, and then choose Run as administrator.
     b. Enter your administrator credentials, and then choose Yes.
     c. Choose System for the installation scope.
4. After selecting the installation scope, choose Next.
5. Choose Next again to accept the installation directory.
6. Select Integrated submitter for Nuke, or whichever submitter you want to install.
7. Choose Next.
8. Review the installation, and choose Next.
9. Choose Next again, and then choose Finish.

Linux

Note
The Deadline Cloud integrated Nuke installer for Linux and Deadline Cloud monitor can only be installed on Linux distributions with at least GLIBC 2.31.

1. Open a terminal window.
2. To do a system install, enter the command sudo -i and press Enter to become root.
3. Navigate to the location where you downloaded the installer. For example, cd /home/USER/Downloads.
4. To make the installer executable, enter chmod +x DeadlineCloudSubmitter-linux-x64-installer.run.
5. To run the Deadline Cloud submitter installer, enter ./DeadlineCloudSubmitter-linux-x64-installer.run.
6. When the installer opens, follow the prompts on your screen to complete the Setup Wizard.

MacOS

1. In a file browser, navigate to the folder where the installer downloaded, and then select the file.
2. After the AWS Deadline Cloud Submitter Setup Wizard opens, choose Next.
3. Choose Next again to accept the installation directory.
4. Select Integrated submitter for Maya, or whichever submitter you want to install.
5. Choose Next.
6. Review the installation, and choose Next.
7. Choose Next again, and then choose Finish.

Step 2: Install and set up Deadline Cloud monitor

You can install the Deadline Cloud monitor desktop application on Windows, Linux, or macOS.
Windows

1. If you haven't already, sign in to the AWS Management Console and open the Deadline Cloud console.
2. From the left navigation pane, choose Downloads.
3. In the Deadline Cloud monitor section, select the latest Windows file, and choose Download.

To perform a silent install, use the following command:

DeadlineCloudMonitor_VERSION_x64-setup.exe /S

By default, the monitor is installed in C:\Users\{username}\AppData\Local\DeadlineCloudMonitor. To change the installation directory, use this command instead:

DeadlineCloudMonitor_VERSION_x64-setup.exe /S /D={InstallDirectory}

Linux (AppImage)

To install Deadline Cloud monitor AppImage on Debian distros

1. Download the latest Deadline Cloud monitor AppImage.
2. To install libfuse2, enter:

   Note
   This step is for Ubuntu 22 and up. For other versions of Ubuntu, skip this step.

   sudo apt update
   sudo apt install libfuse2

3. To make the AppImage executable, enter:

   chmod a+x deadline-cloud-monitor_<APP_VERSION>_amd64.AppImage

Linux (Debian)

To install Deadline Cloud monitor Debian package on Debian distros

1. Download the latest Deadline Cloud monitor Debian package.
2. To install libssl1.1, enter:

   Note
   This step is for Ubuntu 22 and up. For other versions of Ubuntu, skip this step.

   wget http://archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.1f-1ubuntu2_amd64.deb
   sudo apt install ./libssl1.1_1.1.1f-1ubuntu2_amd64.deb

3. To install the Deadline Cloud monitor Debian package, enter:

   sudo apt update
   sudo apt install ./deadline-cloud-monitor_<APP_VERSION>_amd64.deb

4. If the install fails on packages that have unmet dependencies, fix the broken packages, and then run the following commands.

   sudo apt --fix-missing update
   sudo apt update
   sudo apt install -f

Linux (RPM)

To install Deadline Cloud monitor RPM on Rocky Linux 9 or Alma Linux 9

1. Download the latest Deadline Cloud monitor RPM.
2. Add the extra packages for the Enterprise Linux 9 repository:

   sudo dnf install epel-release

3. Install compat-openssl11 for the libssl.so.1.1 dependency:

   sudo dnf install compat-openssl11 deadline-cloud-monitor-<VERSION>-1.x86_64.rpm

To install Deadline Cloud monitor RPM on Red Hat Linux 9

1. Download the latest Deadline Cloud monitor RPM.
2. Enable the CodeReady Linux Builder repository:

   subscription-manager repos --enable codeready-builder-for-rhel-9-x86_64-rpms

3. Install the extra packages for Enterprise RPM:

   sudo dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm

4. Install compat-openssl11 for the libssl.so.1.1 dependency:

   sudo dnf install compat-openssl11 deadline-cloud-monitor-<VERSION>-1.x86_64.rpm

To install Deadline Cloud monitor RPM on Rocky Linux 8, Alma Linux 8, or Red Hat Linux 8

1. Download the latest Deadline Cloud monitor RPM.
2. Install the Deadline Cloud monitor:

   sudo dnf install deadline-cloud-monitor-<VERSION>-1.x86_64.rpm

macOS

1. If you haven't already, sign in to the AWS Management Console and open the Deadline Cloud console.
2. From the left navigation pane, choose Downloads.
3. In the Deadline Cloud monitor section, select the latest macOS file, and choose Download.
4. Open the downloaded file. When the window displays, select and drag the Deadline Cloud monitor icon into the Applications folder.
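If you provision many Linux workstations, the AppImage steps above can be scripted. The following is a sketch only, assuming Ubuntu 22 or later and that the AppImage has already been downloaded to the current directory; the version string is a placeholder for the file you actually downloaded:

# Placeholder version; substitute the version of the AppImage you downloaded.
APP_VERSION=1.0.0
sudo apt update
sudo apt install -y libfuse2
chmod a+x "deadline-cloud-monitor_${APP_VERSION}_amd64.AppImage"
"./deadline-cloud-monitor_${APP_VERSION}_amd64.AppImage" &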
After you complete the download, you can verify the authenticity of the downloaded software. You might want to do this to ensure that no one has tampered with the files during or after the download process. See Verify authenticity of downloaded software in Step 1.

After downloading Deadline Cloud monitor and verifying its authenticity, use the following procedure to set up the Deadline Cloud monitor.

To set up Deadline Cloud monitor

1. Open Deadline Cloud monitor.
2. When prompted to create a new profile, complete the following steps.
   a. Enter your monitor URL into the URL input, which looks like https://MY-MONITOR.deadlinecloud.amazonaws.com/.
   b. Enter a Profile name.
   c. Choose Create Profile. Your profile is created, and your credentials are now shared with any software that uses the profile name that you created.
3. After you create the Deadline Cloud monitor profile, you can't change the profile name or the studio URL. If you need to make changes, do the following instead:
   a. Delete the profile. In the left navigation pane, choose Deadline Cloud monitor > Settings > Delete.
   b. Create a new profile with the changes that you want.
4. From the left navigation pane, use the Deadline Cloud monitor option to do the following:
   • Change the Deadline Cloud monitor profile to log in to a different monitor.
   • Enable Autologin so you don't have to enter your monitor URL on subsequent opens of Deadline Cloud monitor.
5. Close the Deadline Cloud monitor window. It continues to run in the background and syncs your credentials every 15 minutes.
6. For each digital content creation (DCC) application that you plan to use for your rendering projects, complete the following steps:
   a. From your Deadline Cloud submitter, open the Deadline Cloud workstation configuration.
   b. In the workstation configuration, select the profile that you created in the Deadline Cloud monitor. Your Deadline Cloud credentials are now shared with this DCC, and your tools should work as expected.

Step 3: Launch the Deadline Cloud submitter

The following example shows how to install the Blender submitter. You can install other submitters using the instructions in Supported submitters.

To launch the Deadline Cloud submitter in Blender

Note
Support for Blender is provided using the Conda environment for service-managed fleets. For more information, see Default Conda queue environment.

1. Open Blender.
2. Choose Edit, then Preferences. Under File Paths, choose Script Directories, then choose Add. Add a script directory for the python folder where the Blender submitter was installed:

   Windows: %USERPROFILE%\DeadlineCloudSubmitter\Submitters\Blender\python\
   Linux: ~/DeadlineCloudSubmitter/Submitters/Blender/python/
   MacOS: ~/DeadlineCloudSubmitter/Submitters/Blender/python/

3. Restart Blender.
4. Choose Edit, then Preferences. Next, choose Add-ons, then search for Deadline Cloud for Blender. Select the checkbox to enable the add-on.
5. Open a Blender scene with dependencies that exist within the asset root directory.
6. In the Render menu, select the Deadline Cloud dialog.
   a. If you are not already authenticated in the Deadline Cloud submitter, the Credentials Status shows as NEEDS_LOGIN.
   b. Choose Login.
   c. A login browser window displays. Log in with your user credentials.
   d. Choose Allow. You are now logged in, and the Credentials Status shows as AUTHENTICATED.
7. Choose Submit.

Supported submitters

The following sections guide you through the steps to launch the available Deadline Cloud submitter plugins.

You can install other submitters not listed here. We use Deadline Cloud libraries to build submitters. Some of the submitters include Unreal Engine, 3ds Max, and Rhino. You can find the source code for these libraries and submitters in the aws-deadline GitHub organization.

| Software | Supported versions | Windows installer | Linux installer | macOS installer |
|---|---|---|---|---|
| Adobe After Effects | 2024 - 2025 | Included | Not included | Included |
| Autodesk Arnold for Maya | 7.1 - 7.2 | Included | Included | Included |
| Autodesk Maya | 2023 - 2025 | Included | Included | Included |
| Blender | 3.6 - 4.2 | Included | Included | Included |
| Foundry Nuke | 15 | Included | Included | Not included |
| KeyShot Studio | 2023 - 2024 | Included | Not included | Included |
| Maxon Cinema 4D | 2024 - 2025 | Included | Not included | Included |
| SideFX Houdini | 19.5 - 20.5 | Included | Included | Included |

After Effects

To launch the Deadline Cloud submitter in After Effects

1. Open After Effects.
2. Choose Edit, then Preferences, then Scripting & Expressions.
3. Choose Allow scripts to write files and access networks.
4. Restart After Effects.
5. Select Window, then choose DeadlineCloudSubmitter.jsx.

To use the After Effects submitter

1. Choose Open render queue on the submitter panel.
2. Add a composition to your render queue and set up the render settings, output module, and output path.
3. Choose Refresh on the submitter panel.
4. Choose your composition from the list, and then choose Submit.

You can choose Refresh again when you add or remove compositions from your render queue. You can dock the submitter into the side panels by choosing the top right corner of the submitter and dropping it in any highlighted section in After Effects.

Blender

To launch the Deadline Cloud submitter in Blender

Note
Support for Blender is provided using the Conda environment for service-managed fleets. For more information, see Default Conda queue environment.

1. Open Blender.
2. Choose Edit, then Preferences. Under File Paths, choose Script Directories, then choose Add. Add a script directory for the python folder where the Blender submitter was installed:

   Windows: %USERPROFILE%\DeadlineCloudSubmitter\Submitters\Blender\python\
   Linux: ~/DeadlineCloudSubmitter/Submitters/Blender/python/

3. Restart Blender.
4. Choose Edit, then Preferences. Next, choose Add-ons, then search for Deadline Cloud for Blender. Select the checkbox to enable the add-on.
5. Open a Blender scene with dependencies that exist within the asset root directory.
6. In the Render menu, select the Deadline Cloud dialog.
   a. If you are not already authenticated in the Deadline Cloud submitter, the Credentials Status shows as NEEDS_LOGIN.
   b. Choose Login.
   c. A login browser window displays. Log in with your user credentials.
   d. Choose Allow. You are now logged in, and the Credentials Status shows as AUTHENTICATED.
7. Choose Submit.
Cinema 4D

To launch the Deadline Cloud submitter in Cinema 4D

Note
Support for Cinema 4D is provided using the Conda environment for service-managed fleets. For more information, see Default Conda queue environment.

1. Open Cinema 4D.
2. If prompted to install GUI components for AWS Deadline Cloud, complete the following steps:
   a. When the prompt displays, choose Yes, and wait for dependencies to install.
   b. Restart Cinema 4D to ensure the changes are applied.
3. Choose Extensions > AWS Deadline Cloud Submitter.

Houdini

To launch the Deadline Cloud submitter in Houdini

Note
Support for Houdini is provided using the Conda environment for service-managed fleets. For more information, see Default Conda queue environment.

1. Open Houdini.
2. In the Network Editor, select the /out network.
3. Press Tab, and enter deadline.
4. Select the Deadline Cloud option, and connect it to your existing network.
5. Double-click the Deadline Cloud node.

KeyShot

To launch the Deadline Cloud submitter in KeyShot

1. Open KeyShot.
2. Choose Windows > Scripting console > Submit to AWS Deadline Cloud and Run.

There are two submission modes for the KeyShot submitter. Select the submission mode to open the submitter.

• Attach the scene BIP file and all external file references – The open scene file and all external files referenced in the BIP are included as job attachments.
• Attach only the scene BIP file – Only the open scene file is attached to the submission. Any external files referenced in the scene must be available to workers through network storage or another method.

Maya and Arnold for Maya

To launch the Deadline Cloud submitter in Maya

Note
Support for Maya and Arnold for Maya (MtoA) is provided using the Conda environment for service-managed fleets. For more information, see Default Conda queue environment.

1. Open Maya.
2. Set your project, and open a file that exists within the asset root directory.
3. Choose Windows → Settings/Preferences → Plugin Manager.
4. Search for DeadlineCloudSubmitter.
5. To load the Deadline Cloud submitter plugin, select Loaded.
   a. If you are not already authenticated in the Deadline Cloud submitter, the Credentials Status shows as NEEDS_LOGIN.
   b. Choose Login.
   c. A login browser window displays. Log in with your user credentials.
   d. Choose Allow. You are now logged in, and the Credentials Status shows as AUTHENTICATED.
6. (Optional) To load the Deadline Cloud submitter plugin every time you open Maya, choose Auto-load.
7. Select the Deadline Cloud shelf, then select the green button to launch the submitter.

Nuke

To launch the Deadline Cloud submitter in Nuke

Note
Support for Nuke is provided using the Conda environment for service-managed fleets. For more information, see Default Conda queue environment.
1. Open Nuke.
2. Open a Nuke script with dependencies that exist within the asset root directory.
3. Choose AWS Deadline, and then choose Submit to Deadline Cloud to launch the submitter.
   a. If you are not already authenticated in the Deadline Cloud submitter, the Credentials Status shows as NEEDS_LOGIN.
   b. Choose Login.
   c. In the login browser window, log in with your user credentials.
   d. Choose Allow. You are now logged in, and the Credentials Status shows as AUTHENTICATED.
4. Choose Submit.

Using the Deadline Cloud monitor

The AWS Deadline Cloud monitor provides you with an overall view of your visual compute jobs. You can use it to monitor and manage jobs, view worker activity on fleets, track budgets and usage, and download a job's results.
Each queue has a job monitor that shows you the status of jobs, steps, and tasks. The monitor provides ways to manage jobs directly from the monitor. You can make prioritization changes, cancel jobs, requeue jobs, and resubmit jobs. The Deadline Cloud monitor has a table that shows summary status for a job, or you can select a job to see detailed task logs that help troubleshoot issues with a job. You can use the Deadline Cloud monitor to download the results to the location on your workstation that was specified when the job was created.

The Deadline Cloud monitor also helps you monitor usage and manage costs. For more information, see Track spending and usage for Deadline Cloud farms.

Topics
• Share the Deadline Cloud monitor URL
• Open the Deadline Cloud monitor
• View queue and fleet details in Deadline Cloud
• Manage jobs, steps, and tasks in Deadline Cloud
• View and manage job details in Deadline Cloud
• View a step in Deadline Cloud
• View a task in Deadline Cloud
• View logs in Deadline Cloud
• Download finished output in Deadline Cloud

Share the Deadline Cloud monitor URL

When you set up the Deadline Cloud service, by default you create a URL that opens the Deadline Cloud monitor for your account. Use this URL to open the monitor in your browser or on your desktop. Share the URL with other users so that they can access the Deadline Cloud monitor.

Before a user can open the Deadline Cloud monitor, you must grant the user access. To grant access, either add the user to the list of authorized users for the monitor or add them to a group with access to the monitor. For more information, see Managing users in Deadline Cloud.

To share the monitor URL

1. Open the Deadline Cloud console.
2. From Get started, choose Go to Deadline Cloud dashboard.
3. On the navigation pane, choose Dashboard.
4. In the Account overview section, choose Account details.
5. Copy and then securely send the URL to anyone who needs to access the Deadline Cloud monitor.

Open the Deadline Cloud monitor

You can open the Deadline Cloud monitor in any of the following ways:

• Console – Sign in to the AWS Management Console and open the Deadline Cloud console.
• Web – Go to the monitor URL that you created when you set up Deadline Cloud.
• Monitor – Use the desktop Deadline Cloud monitor.

When you use the console, you must be able to sign in to AWS using an AWS Identity and Access Management identity, and then sign in to the monitor with AWS IAM Identity Center credentials. If you only have IAM Identity Center credentials, you must sign in using the monitor URL or the desktop application.

To open the Deadline Cloud monitor (web)

1. Using a browser, open the monitor URL that you created when you set up Deadline Cloud.
2. Sign in with your user credentials.

To open the Deadline Cloud monitor (console)

1. Open the Deadline Cloud console.
2. In the navigation pane, select Farms.
3. Select a farm, then choose Manage jobs to open the Deadline Cloud monitor page.
4. Sign in with your user credentials.

To open the Deadline Cloud monitor (desktop)

1. Open the Deadline Cloud console.
   -or-
   Open the Deadline Cloud monitor - web from the monitor URL.
2. • On the Deadline Cloud console, do the following:
      1. In the monitor, choose Go to Deadline Cloud dashboard, and then choose Downloads from the left menu.
      2. From Deadline Cloud monitor, choose the monitor version for your desktop.
      3. Choose Download.
   • On the Deadline Cloud monitor - web, do the following:
      • From the left menu, choose Workstation setup. If the Workstation setup item isn't visible, use the arrow to open the left menu.
      • Choose Download.
      • From Select an OS, choose your operating system.
3. Download the Deadline Cloud monitor - desktop.
4. After you download and install the monitor, open it on your computer.
   • If this is your first time opening the Deadline Cloud monitor, you must provide the monitor URL and create a profile name. Next, you sign in to the monitor with your Deadline Cloud credentials.
   • After you create a profile, you open the monitor by selecting a profile. You might need to enter your Deadline Cloud credentials.

View queue and fleet details in Deadline Cloud

You can use the Deadline Cloud monitor to view the configuration of the queues and fleets in your farm. You can also use the monitor to see a list of the jobs in a queue or the workers in a fleet.

You must have VIEWING permission to view queue and fleet details. If the details don't display, contact your administrator to get the correct permissions.

To view queue details

1. Open the Deadline Cloud monitor.
2. From the list of farms, choose the farm that contains the queue that you're interested in.
3. In the list of queues, choose a queue to display its details. To compare the configuration of two or more queues, select more than one check box.
4. To see a list of jobs in the queue, choose the queue name from the list of queues or from the details panel.

If the monitor is already open, you can select the queue from the Queues list in the left navigation pane.

To view fleet details

1. Open the Deadline Cloud monitor.
2. From the list of farms, choose the farm that contains the fleet that you're interested in.
3. In Farm resources, choose Fleets.
4. In the list of fleets, choose a fleet to display its details. To compare the configuration of two or more fleets, select more than one check box.
5. To see a list of workers in the fleet, choose the fleet name from the list of fleets or from the details panel.

If the monitor is already open, you can select the fleet from the Fleets list in the left navigation pane.

Manage jobs, steps, and tasks in Deadline Cloud

When you select a queue, the job monitor section of the Deadline Cloud monitor shows you the jobs in that queue, the steps in the job, and the tasks in each step. When you select a job, step, or task, you can use the Actions menu to manage each.

To open the job monitor, follow the steps to view a queue in View queue and fleet details in Deadline Cloud, then select the job, step, or task to work with.

For jobs, steps, and tasks, you can do the following:

• Change the status to Requeued, Succeeded, Failed, or Canceled.
• Download the processed output from the job, step, or task.
• Copy the ID of the job, step, or task.

For the selected job, you can:

• Archive the job.
• Modify the job properties, such as changing prioritization or viewing step-to-step dependencies.
• View additional details using the job's parameters.
• Resubmit the job.

For more information, see View and manage job details in Deadline Cloud.

For each step, you can:

• View the dependencies for the step. The dependencies for a step must be completed before the step runs.

For details, see View a step in Deadline Cloud.

For each task, you can:

• View logs for the task.
• View task parameters.

For more information, see View a task in Deadline Cloud.
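The same job-management operations are exposed through the Deadline Cloud API if you want to script them. The following AWS CLI sketch lists jobs in a queue, inspects one, and raises its priority; the IDs are placeholders, and you should confirm the parameters against the current AWS CLI reference:

aws deadline list-jobs --farm-id farm-EXAMPLE --queue-id queue-EXAMPLE
aws deadline get-job --farm-id farm-EXAMPLE --queue-id queue-EXAMPLE --job-id job-EXAMPLE
# Priorities run from 1 to 100; jobs with higher priority numbers are scheduled first.
aws deadline update-job --farm-id farm-EXAMPLE --queue-id queue-EXAMPLE --job-id job-EXAMPLE --priority 75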
View and manage job details in Deadline Cloud

The Job monitor page in the Deadline Cloud monitor provides you with the following:

• An overall view of the progress of a job.
• A view of the steps and tasks that make up the job.

Choose a job from the list to view a list of steps for the job, and then choose a step from the list of steps to view the tasks for the job. After you choose an item, you can use the Actions menu for that item to view details.

To view job details

1. Follow the steps to view a queue in View queue and fleet details in Deadline Cloud.
2. In the navigation pane, select the queue where you submitted your job.
3. Select a job using one of the following methods:
   a. From the Jobs list, select a job to view its details.
   b. From the search field, enter any text associated with the job, such as the job name or user that created the job. From the results that display, select the job you want to view.
The details of a job include the steps in the job and the tasks in each step. You can use the Actions menu to do the following:

• Change the status of the job.
• View and modify the properties of a job.
• View the dependencies between steps in the job.
• Change the priority of the job in a queue. Jobs with a higher priority number are processed before jobs with a lower priority number. Jobs can have a priority between 1 and 100. When two jobs have the same priority, the oldest job is scheduled first.
• View the parameters for the job that were set when the job was submitted.
• Download the output of a job. When you download the output of a job, it contains all of the output generated by the steps and tasks in the job.

Archive a job

To archive a job, it must be in a terminal state: FAILED, SUCCEEDED, SUSPENDED, or CANCELED. The ARCHIVED state is final. After a job is archived, it can't be requeued or modified.

The job's data is not affected by archiving the job. The data is deleted when the inactivity timeout is reached, or when the queue containing the job is deleted.

Other things that happen to archived jobs:

• Archived jobs are hidden in the Deadline Cloud monitor.
• Archived jobs are visible in a read-only state from the Deadline Cloud CLI for 120 days before deletion.

Requeue a job

When you requeue a job, all of the tasks without step dependencies switch to READY. The status of steps with dependencies switches to READY or PENDING as they are restored.

• All jobs, steps, and tasks switch to PENDING.
• If a step doesn't have a dependency, it switches to READY.

Resubmit a job

There might be times when you want to run a job again, but with different properties and settings. For example, you might submit a job to render a subset of testing frames, verify the output, then run the job again with the full frame range. To do this, resubmit the job.

When you resubmit a job, new tasks without dependencies become READY. New tasks with dependencies become PENDING.

• All new jobs, steps, and tasks become PENDING.
• If a new step doesn't have a dependency, it becomes READY.

When you resubmit a job, you can only change properties that were defined as configurable when the job was first created. For example, if the name of a job is not defined as a configurable property of the job when first submitted, then the name cannot be edited on resubmission.
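The archive operation described above can also be performed with the AWS CLI. A hedged sketch with placeholder IDs; as far as we understand, the UpdateJob API accepts a lifecycle status of ARCHIVED for jobs already in a terminal state, but verify the flag names against your installed CLI version:

# Assumes the job has already reached FAILED, SUCCEEDED, SUSPENDED, or CANCELED.
aws deadline update-job \
    --farm-id farm-EXAMPLE --queue-id queue-EXAMPLE --job-id job-EXAMPLE \
    --lifecycle-status ARCHIVED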
View a step in Deadline Cloud

Use the AWS Deadline Cloud monitor to view the steps in your processing jobs. In the Job monitor, the Steps list shows the list of steps that make up the selected job. When you select a step, the Tasks list shows the tasks in the step.

To view a step

1. Follow the steps in View and manage job details in Deadline Cloud to view a list of jobs.
2. Select a job from the Jobs list.
3. Select a step from the Steps list.

You can use the Actions menu to do the following:

• Change the status of the step.
• Download the output of the step. When you download the output of a step, it contains all of the output generated by the tasks in the step.
• View the dependencies of a step. The dependencies table shows a list of steps that must be complete before the selected step starts, and a list of steps that are waiting for this step to complete.

View a task in Deadline Cloud

Use the AWS Deadline Cloud monitor to view the tasks in your processing jobs. In the Job monitor, the Tasks list shows the tasks that make up the step selected in the Steps list.

To view a task

1. Follow the steps in View and manage job details in Deadline Cloud to view a list of jobs.
2. Select a job from the Jobs list.
3. Select a step from the Steps list.
4. Select a task from the Tasks list.

You can use the Actions menu to do the following:

• Change the status of the task.
• View task logs. For more information, see View logs in Deadline Cloud.
• View the parameters that were set when the task was created.
• Download the output of the task. When you download the output of a task, it only contains the output generated by the selected task.

View logs in Deadline Cloud

Logs provide you with detailed information about the status and processing of tasks. In the AWS Deadline Cloud monitor, you can see the following two types of logs:

• Session logs detail the timeline of actions, including:
  • Setup actions, such as attachment syncing and loading the software environment
  • Running a task or set of tasks
  • Closure actions, such as shutting down the environment on a worker

  A session includes processing of at least one task, and can include multiple tasks. Session logs also show information about Amazon Elastic Compute Cloud (Amazon EC2) instance type, vCPU, and memory. Session logs also include a link to the log for the worker used in the session.

• Worker logs provide details for the timeline of actions that a worker processes during its lifecycle. Worker logs can contain information about multiple sessions.

You can download session and worker logs so that you can examine them offline.

To view session logs

1. Follow the steps in View and manage job details in Deadline Cloud to view a list of jobs.
2. Select a job from the Jobs list.
3. Select a step from the Steps list.
4. Select a task from the Tasks list.
5. From the Actions menu, choose View logs.

The Timelines section shows a summary of the actions for the task. To see more tasks run in the session and to see the shutdown actions for the session, choose View logs for all tasks.

To view worker logs from a task

1. Follow the steps in View and manage job details in Deadline Cloud to view a list of jobs.
2. Select a job from the Jobs list.
3. Select a step from the Steps list.
4. Select a task from the Tasks list.
5. From the Actions menu, choose View logs.
6. Choose Session info.
7. Choose View worker log.

To view worker logs from fleet details

1. Follow the steps in View queue and fleet details in Deadline Cloud to view a fleet.
2. Select a Worker ID from the Workers list.
3. From the Actions menu, choose View worker logs.

Download finished output in Deadline Cloud

After a job is finished, you can use the AWS Deadline Cloud monitor to download the results to your workstation. The output file is stored with the name and location that you specified when you created the job.

Output files are stored indefinitely. To reduce storage costs, consider creating an S3 Lifecycle configuration for your queue's Amazon S3 bucket.
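For example, a lifecycle rule can expire job-attachment objects after a fixed retention period. The following is a sketch only; the bucket name, the "DeadlineCloud/" prefix, and the 90-day retention are assumptions to adapt to your own bucket layout and policy:

aws s3api put-bucket-lifecycle-configuration \
    --bucket deadlinecloud-job-attachments-EXAMPLE \
    --lifecycle-configuration '{
        "Rules": [{
            "ID": "ExpireJobAttachments",
            "Filter": {"Prefix": "DeadlineCloud/"},
            "Status": "Enabled",
            "Expiration": {"Days": 90}
        }]
    }'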
For more information, see Managing your storage lifecycle in the Amazon Simple Storage Service User Guide.

To download the finished output of a job, step, or task

1. Follow the steps in View and manage job details in Deadline Cloud to view a list of jobs.
2. Select the job, step, or task that you want to download the output for.
   • If you select a job, you can download all of the output for all of the tasks in all of the steps for that job.
   • If you select a step, you can download all of the output for all of the tasks in that step.
   • If you select a task, you can download the output for that individual task.
3. From the Actions menu, choose Download output.
4. The output will be downloaded to the location set when the job was submitted.
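If you prefer a terminal, the Deadline Cloud CLI can download the same output; this is also what the monitor surfaces on macOS, as the note below explains. A sketch with placeholder IDs (verify the command against your installed deadline CLI version):

deadline job download-output \
    --farm-id farm-EXAMPLE --queue-id queue-EXAMPLE --job-id job-EXAMPLE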
Note
Downloading output using the menu is currently only supported for Windows and Linux. If you have a Mac and you choose the Download output menu item, a window shows the AWS CLI command that you can use to download the rendered output.

Deadline Cloud farms

With a Deadline Cloud farm, you can manage users and project resources. A farm is where your project resources are located. Your farm consists of queues and fleets. A queue is where submitted jobs are located and scheduled to be rendered. A fleet is a group of worker nodes that run tasks to complete jobs. After you create a farm, you can create queues and fleets to meet your project's needs.

Create a farm

1. From the Deadline Cloud console, choose Go to Dashboard.
2. In the Farms section of the Deadline Cloud dashboard, choose Actions → Create farm. Alternatively, in the left side panel choose Farms and other resources, then choose Create Farm.
3. Add a Name for your farm.
4. For Description, enter the farm description. A clear description can help you quickly identify your farm's purpose.
5. (Optional) By default, your data is encrypted with a key that AWS owns and manages for your security. You can choose Customize encryption settings (advanced) to use an existing key or to create a new one that you manage. If you choose to customize encryption settings using the checkbox, enter an AWS KMS key ARN, or create a new AWS KMS key by choosing Create new KMS key.
6. (Optional) Choose Add new tag to add one or more tags to your farm.
7. Choose Create farm.

After creation, your farm displays.
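You can create the same resource from the command line. A hedged sketch (the key ARN is a placeholder, and --kms-key-arn can be omitted to use the default AWS owned key):

aws deadline create-farm \
    --display-name "MyRenderFarm" \
    --description "Production renders for Project X" \
    --kms-key-arn arn:aws:kms:us-west-2:111122223333:key/EXAMPLE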
Deadline Cloud queues

A queue is a farm resource that manages and processes jobs. To work with queues, you should already have a monitor and farm set up.

Topics
• Create a queue
• Create a queue environment
• Associate a queue and fleet

Create a queue

1. From the Deadline Cloud console dashboard, select the farm that you want to create a queue for. Alternatively, in the left side panel choose Farms and other resources, then select the farm you want to create a queue for.
2. In the Queues tab, choose Create queue.
3. Enter a name for your queue.
4. For Description, enter the queue description. A description helps you identify your queue's purpose.
5. For Job attachments, you can either create a new Amazon S3 bucket or choose an existing Amazon S3 bucket.
   a. To create a new Amazon S3 bucket:
      i. Select Create new job bucket.
      ii. Enter a name for the bucket. We recommend naming the bucket deadlinecloud-job-attachments-[MONITORNAME].
      iii. Enter a Root prefix to define or change your queue's root location.
   b. To choose an existing Amazon S3 bucket:
      i. Select Choose an existing S3 bucket > Browse S3.
      ii. Select the S3 bucket for your queue from the list of available buckets.
6. (Optional) To associate your queue with a customer-managed fleet, select Enable association with customer-managed fleets.
7. If you enable association with customer-managed fleets, you must complete the following steps.

   Important
   We strongly recommend specifying users and groups for run-as functionality. If you don't, it will degrade your farm's security posture, because the jobs can then do everything the worker's agent can do. For more information about the potential security risks, see Run jobs as users and groups.

   a. For Run as user: To provide credentials for the queue's jobs, select Queue-configured user. Or, to opt out of setting your own credentials and run jobs as the worker agent user, select Worker agent user.
   b. (Optional) For Run as user credentials, enter a user name and group name to provide credentials for the queue's jobs. If you are using a Windows fleet, you must create an AWS Secrets Manager secret that contains the password for the Run as user. If you don't have an existing secret with the password, choose Create secret to open the Secrets Manager console to create a secret. For more information, see Manage access to Windows job user secrets in the Deadline Cloud Developer Guide.
8. Requiring a budget helps manage costs for your queue. Select either Don't require a budget or Require a budget.
9. Your queue requires permission to access Amazon S3 on your behalf. You can create a new service role or use an existing service role. If you don't have an existing service role, create and use a new service role.
   a. To use an existing service role, select Choose a service role, and then select a role from the dropdown.
   b. To create a new service role, select Create and use a new service role, and then enter a role name and description.
10. (Optional) To add environment variables for the queue environment, choose Add new environment variable, and then enter a name and value for each variable you add.
11. (Optional) Choose Add new tag to add one or more tags to your queue.
12. To create a default Conda queue environment, keep the checkbox selected. To learn more about queue environments, see Create a queue environment. If you are creating a queue for a customer-managed fleet, clear the checkbox.
13. Choose Create queue.

Create a queue environment

A queue environment is a set of environment variables and commands that set up fleet workers. You can use queue environments to provide software applications, environment variables, and other resources to jobs in the queue.

When you create a queue, you have the option of creating a default Conda queue environment. This environment provides service-managed fleets access to packages for partner DCC applications and renderers. For more information, see Default Conda queue environment.

You can add queue environments using the console, or by editing the JSON or YAML template directly. This procedure describes how to create an environment with the console.

1. To add a queue environment to a queue, navigate to the queue and select the Queue environments tab.
2. Choose Actions, then Create new with form.
3. Enter a name and description for the queue environment.
4. Choose Add new environment variable, and then enter a name and value for each variable you add.
5. (Optional) Enter a priority for the queue environment. The priority indicates the order in which this queue environment will run on the worker. Higher priority queue environments run first.
6. Choose Create queue environment.
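If you manage queue environments as code instead, the CreateQueueEnvironment API accepts the template directly. A hedged sketch: the template below is a minimal illustration (the environment name and variable are invented), and you should validate the template schema against the queue environment template samples on GitHub and the CLI flags against the current AWS CLI reference:

# Write a minimal Open Job Description environment template, then register it.
cat > env.yaml <<'EOF'
specificationVersion: 'environment-2023-09'
environment:
  name: ProjectDefaults
  variables:
    RENDER_PROJECT: projectx
EOF

aws deadline create-queue-environment \
    --farm-id farm-EXAMPLE --queue-id queue-EXAMPLE \
    --priority 50 --template-type YAML --template file://env.yaml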
Default Conda queue environment

When you create a queue associated with a service-managed fleet, you have the option of adding a default queue environment that supports Conda to download and install packages in a virtual environment for your jobs. If you add a default queue environment with the Deadline Cloud console, the environment is created for you. If you add a queue another way, such as with the AWS CLI or AWS CloudFormation, you'll need to create the queue environment yourself. To ensure you have the correct contents for the environment, you can refer to the queue environment template YAML files on GitHub. For the contents of the default queue environment, see the default queue environment YAML file on GitHub. There are other queue environment templates available on GitHub that you can use as a starting point for your own needs.

Conda provides packages from channels. A channel is a location where packages are stored. Deadline Cloud provides a channel, deadline-cloud, that hosts Conda packages that support partner DCC applications and renderers. Select each tab below to view the available packages for Linux or Windows.

Linux

• Blender
  • blender=3.6
  • blender=4.2
When you submit a job to a queue with the default Conda environment, the environment adds two parameters to the job. These parameters specify the Conda packages and channels to use to configure the job's environment before tasks are processed. The parameters are:

• CondaPackages – A space-separated list of package match specifications, such as blender=3.6 or numpy>1.22. The default is empty to skip creating a virtual environment.
• CondaChannels – A space-separated list of Conda channels, such as deadline-cloud, conda-forge, or s3://amzn-s3-demo-bucket/conda/channel. The default is deadline-cloud, a channel available to service-managed fleets that provides partner DCC applications and renderers.

When you use an integrated submitter to send a job to Deadline Cloud from your DCC, the submitter populates the value of the CondaPackages parameter based on the DCC application and submitter. For example, if you are using Blender, the CondaPackages parameter is set to blender=3.6.* blender-openjd=0.4.*.

We recommend you pin any submissions to only the versions listed above, for example blender=3.6. This is because patch releases affect the available packages. For example, when we release Blender 3.6.17, we will no longer distribute Blender 3.6.16. Any submissions pinned to blender=3.6.16 will fail. If you pin to blender=3.6, you will get the latest distributed patch version and jobs will not be impacted. By default, the DCC submitters pin to the current versions listed above, excluding the patch number, such as blender=3.6.
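If you submit jobs from the command line rather than through an integrated submitter, you can set these parameters yourself. The following is a minimal sketch, assuming a job template saved as job.yaml and placeholder farm and queue IDs; the CondaPackages and CondaChannels names come from the default queue environment described above:

# Submit a job and override the Conda parameters added by the
# default queue environment.
aws deadline create-job \
    --farm-id farm-1234 \
    --queue-id queue-1234 \
    --priority 50 \
    --template-type YAML \
    --template file://job.yaml \
    --parameters '{"CondaPackages": {"string": "blender=3.6"}, "CondaChannels": {"string": "deadline-cloud"}}'

Because packages are resolved when the job runs, pinning to blender=3.6 here picks up the latest distributed patch release, as recommended above.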
Associate a queue and fleet

To process jobs, you must associate a queue with a fleet. You can associate a single fleet with multiple queues and a single queue with multiple fleets. When you associate a fleet with multiple queues, it divides its workers evenly among them. Similarly, when you associate a queue with multiple fleets, it distributes jobs evenly across those fleets.

Follow these steps to associate an existing queue with an existing fleet:

1. From your Deadline Cloud farm, select the Queue you want to associate with a fleet. The queue displays.
2. To select a fleet to associate with your queue, choose Associate fleets.
3. Choose the Select fleets dropdown. A list of available fleets displays.
4. From the list of available fleets, select the checkbox next to the fleet or fleets you want to associate with your queue.
5. Choose Associate. The fleet association status should now be Associated.
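You can also create the association from the command line. A minimal sketch, assuming placeholder farm, queue, and fleet IDs:

# Associate an existing queue with an existing fleet.
aws deadline create-queue-fleet-association \
    --farm-id farm-1234 \
    --queue-id queue-1234 \
    --fleet-id fleet-1234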
Deadline Cloud fleets

This section explains how to manage service-managed fleets and customer-managed fleets (CMF) for Deadline Cloud. You can set up two types of Deadline Cloud fleets:

• Service-managed fleets are fleets of workers that have default settings provided by Deadline Cloud. These default settings are designed to be efficient and cost-effective.
• Customer-managed fleets (CMFs) provide you with full control over your processing pipeline, including provisioning, operations, management, and decommissioning workers in the fleet. A CMF can reside within AWS infrastructure, on premises, or in a co-located data center.

When you associate a fleet with multiple queues, it divides its workers evenly among those queues.

Topics
• Service-managed fleets
• Customer-managed fleets

Service-managed fleets

A service-managed fleet (SMF) is a fleet of workers that have default settings provided by Deadline Cloud. These default settings are designed to be efficient and cost-effective.

Some of the default settings limit the amount of time that workers and tasks can run. A worker can only run for seven days and a task can only run for five days. When the limit is reached, the task or worker stops. If this happens, you might lose work that the worker or task was running. To avoid this, monitor your workers and tasks to ensure they don't exceed the maximum duration limits. To learn more about monitoring your workers, see Using the Deadline Cloud monitor.

Create a service-managed fleet

1. From the Deadline Cloud console, navigate to the farm you want to create the fleet in.
2. Select the Fleets tab, and then choose Create fleet.
3. Enter a Name for your fleet.
4. (Optional) Enter a Description. A clear description can help you quickly identify your fleet's purpose.
5. Select the Service-managed fleet type.
6. Choose either the Spot or On-demand instance market option for your fleet. Spot instances are unreserved capacity that you can use at a discounted price, but they may be interrupted by On-demand requests. On-demand instances are priced by the second, have no long-term commitment, and will not be interrupted. By default, fleets use Spot instances.
7. For service access for your fleet, select an existing role or create a new role. A service role provides credentials to instances in the fleet, granting them permission to process jobs, and to users in the monitor so that they can read log information.
8. Choose Next.
9. Choose between CPU only instances or GPU accelerated instances. GPU accelerated instances may be able to process your jobs faster, but can be more expensive.
10. Select the operating system for your workers. You can keep the default, Linux, or choose Windows.
11. (Optional) If you selected GPU accelerated instances, set the maximum and minimum number of GPUs in each instance. For testing purposes you are limited to one GPU. To request more for your production workloads, see Requesting a quota increase in the Service Quotas User Guide.
12. Enter the minimum and maximum vCPUs that you require for your fleet.
13. Enter the minimum and maximum memory that you require for your fleet.
14. (Optional) You can choose to allow or exclude specific instance types from your fleet to ensure only those instance types are used for this fleet.
15. (Optional) Set the maximum number of instances to scale the fleet so that capacity is available for the jobs in the queue. We recommend that you leave the minimum number of instances at 0 to ensure the fleet releases all instances when no jobs are queued.
16. (Optional) You can specify the size of the Amazon Elastic Block Store (Amazon EBS) gp3 volume that will be attached to the workers in this fleet. For more information, see the Amazon EBS User Guide.
17. Choose Next.
18. (Optional) Define custom worker capabilities that describe features of this fleet that can be combined with custom host capabilities specified on job submissions. One example is a particular license type if you plan to connect your fleet to your own license server.
19. Choose Next.
20. (Optional) To associate your fleet with a queue, select a queue from the dropdown. If the queue is set up with the default Conda queue environment, your fleet is automatically provided with packages that support partner DCC applications and renderers. For a list of provided packages, see Default Conda queue environment.
21. Choose Next.
22. (Optional) To add a tag to your fleet, choose Add new tag, and then enter the key and value for that tag.
23. Choose Next.
24. Review your fleet settings, and then choose Create fleet.
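You can create a similar fleet with the AWS CLI by passing a service-managed Amazon EC2 configuration. The following is a minimal sketch rather than a definitive reference for the configuration schema; the IDs, role ARN, and capability values are placeholder assumptions:

# Create a service-managed fleet with Spot instances, Linux workers,
# and 4-16 vCPUs per instance (values are illustrative).
aws deadline create-fleet \
    --farm-id farm-1234 \
    --display-name "my-smf" \
    --role-arn arn:aws:iam::111122223333:role/DeadlineFleetRole \
    --min-worker-count 0 \
    --max-worker-count 10 \
    --configuration '{
        "serviceManagedEc2": {
            "instanceCapabilities": {
                "vCpuCount": {"min": 4, "max": 16},
                "memoryMiB": {"min": 16384},
                "osFamily": "LINUX",
                "cpuArchitectureType": "x86_64"
            },
            "instanceMarketOptions": {"type": "spot"}
        }
    }'

Keeping the minimum worker count at 0 matches the console recommendation above, so the fleet releases all instances when no jobs are queued.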
Use a GPU accelerator

You can configure worker hosts in your service-managed fleets to use one or more GPUs to accelerate processing your jobs. Using an accelerator can reduce the time that it takes to process a job, but can increase the cost of each worker instance. You should test your workloads to understand the trade-offs between fleets that use GPU accelerators and fleets that don't.

Note
For testing purposes you are limited to one GPU. To request more for your production workloads, see Requesting a quota increase in the Service Quotas User Guide.

You decide whether your fleet will use GPU accelerators when you specify the worker instance capabilities. If you decide to use GPUs, you can specify the minimum and maximum number of GPUs for each instance, the types of GPU chips to use, and the runtime driver for the GPUs.

The available GPU accelerators are:

• T4 – NVIDIA T4 Tensor Core GPU
• A10G – NVIDIA A10G Tensor Core GPU
• L4 – NVIDIA L4 Tensor Core GPU
• L40S – NVIDIA L40S Tensor Core GPU

You can choose from the following runtime drivers:

• Latest – Use the latest runtime available for the chip. If you specify latest and a new version of the runtime is released, the new version of the runtime is used.
• GRID:R550 – NVIDIA vGPU software 17
• GRID:R535 – NVIDIA vGPU software 16

If you don't specify a runtime, Deadline Cloud uses latest as the default. However, if you have multiple accelerators and specify latest for some and leave others blank, Deadline Cloud raises an exception.
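In an AWS CLI fleet configuration like the sketch above, GPU choices are expressed as accelerator capabilities inside the instance capabilities. The field names and accelerator identifiers below are illustrative assumptions based on the options described in this section, selecting L4 chips with the latest runtime:

# Fragment of the --configuration value from the create-fleet sketch
# above, extended with GPU accelerator capabilities (names are
# illustrative assumptions, not a schema reference).
--configuration '{
    "serviceManagedEc2": {
        "instanceCapabilities": {
            "vCpuCount": {"min": 8, "max": 32},
            "memoryMiB": {"min": 32768},
            "osFamily": "LINUX",
            "cpuArchitectureType": "x86_64",
            "acceleratorCapabilities": {
                "selections": [{"name": "l4", "runtime": "latest"}],
                "count": {"min": 1, "max": 1}
            }
        },
        "instanceMarketOptions": {"type": "spot"}
    }
}'

The count of one GPU per instance matches the testing quota noted above.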
Software licensing for service-managed fleets

Deadline Cloud provides usage-based licensing (UBL) for commonly used software packages. Supported software packages are automatically licensed when they run on a service-managed fleet. You don't need to configure or maintain a software license server, and licenses scale so you won't run out for larger jobs.

You can install software packages that support UBL using the built-in deadline-cloud Conda channel, or you can use your own packages. For more information about the Conda channel, see Create a queue environment. For a list of supported software packages and information about pricing for UBL, see AWS Deadline Cloud pricing.

Bring your own license with service-managed fleets

With Deadline Cloud usage-based licensing (UBL) you don't need to manage separate license agreements with software vendors. However, if you have existing licenses or need to use software that isn't available through UBL, you can use your own software licenses with your Deadline Cloud service-managed fleets. You connect your SMF to the software license server over the internet to check out a license for each worker in the fleet. For an example of connecting to a license server using a proxy, see Connect service-managed fleets to a custom license server in the Deadline Cloud Developer Guide.

VFX Reference Platform compatibility

The VFX Reference Platform is a common target platform for the VFX industry. Service-managed fleet workers are standard Amazon EC2 instances running Amazon Linux 2023 (AL2023). To use software that supports the VFX Reference Platform on these instances, keep in mind the following considerations.

The VFX Reference Platform is updated annually. These considerations for using AL2023 with Deadline Cloud service-managed fleets are based on the calendar year (CY) 2022 through CY2024 Reference Platforms. For more information, see VFX Reference Platform.

Note
If you are creating a custom Amazon Machine Image (AMI) for a customer-managed fleet, you can add these requirements when you prepare the Amazon EC2 instance.

To use VFX Reference Platform supported software on an AL2023 Amazon EC2 instance, consider the following:

• The glibc version installed with AL2023 is compatible for runtime use, but not for building software compatible with the VFX Reference Platform CY2024 or earlier.
• Python 3.9 and 3.11 are provided with the service-managed fleet, making it compatible with VFX Reference Platform CY2022 and CY2024. Python 3.7 and 3.10 are not provided in the service-managed fleet. Software requiring them must provide the Python installation in the queue or job environment.
• Some Boost library components provided in the service-managed fleet are version 1.75, which is not compatible with the VFX Reference Platform. If your application uses Boost, you must provide your own version of the library for compatibility.
• Intel TBB update 3 is provided in the service-managed fleet. This is compatible with VFX Reference Platform CY2022, CY2023, and CY2024.
• Other libraries with versions specified by the VFX Reference Platform are not provided by the service-managed fleet. You must provide the library with any application used on a service-managed fleet. For a list of libraries, see the VFX Reference Platform.

Customer-managed fleets

When you want to use a fleet of workers that you manage, you can create a customer-managed fleet (CMF) that Deadline Cloud uses to process your jobs. Use a CMF when:

• You have existing on-premises workers to integrate with Deadline Cloud.
• You have workers in a co-located data center.
• You want direct control of Amazon Elastic Compute Cloud (Amazon EC2) workers.

When you use a CMF, you have full control over and responsibility for the fleet. This includes provisioning, operations, management, and decommissioning workers in the fleet. For more information, see Create and use Deadline Cloud customer-managed fleets in the Deadline Cloud Developer Guide.
Managing users in Deadline Cloud

AWS Deadline Cloud uses AWS IAM Identity Center to manage users and groups. IAM Identity Center is a cloud-based single sign-on service that can be integrated with your enterprise single sign-on (SSO) provider. With integration, users can sign in with their company account. Deadline Cloud enables IAM Identity Center by default, and it is required to set up and use Deadline Cloud. For more information, see Manage your identity source.

An organization owner in AWS Organizations is responsible for managing the users and groups that have access to your Deadline Cloud monitor. You can create and manage these users and groups using IAM Identity Center or the Deadline Cloud console. For more information, see What is AWS Organizations.

You create and remove users and groups that can manage farms, queues, and fleets using the Deadline Cloud console. When you add a user to Deadline Cloud, they must reset their password using IAM Identity Center before they get access.

Topics
• Manage users and groups for the monitor
• Manage users and groups for farms, queues, and fleets

Manage users and groups for the monitor

An Organizations owner can use the Deadline Cloud console to manage the users and groups that have access to the Deadline Cloud monitor. You can choose from existing IAM Identity Center users and groups, or you can add new users and groups from the console.

1. Sign in to the AWS Management Console and open the Deadline Cloud console. From the main page, in the Get started section, choose Set up Deadline Cloud or Go to dashboard.
2. In the left navigation pane, choose User management. By default, the Groups tab is selected. Depending on the action to take, choose either the Groups tab or Users tab.

Groups

To create a group

1. Choose Create group.
2. Enter a group name. The name must be unique among groups in your IAM Identity Center organization.

To remove a group

1. Select the group to remove.
2. Choose Remove.
3. In the confirmation dialog, choose Remove group.

Note
You are removing the group from IAM Identity Center. Group members can no longer sign in to the Deadline Cloud monitor or access farm resources.

Users

To add users

1. Choose the Users tab.
2. Choose Add users.
3. Enter the name, email address, and username for the new user.
4. (Optional) Choose one or more IAM Identity Center groups to add the new user to.
5. Choose Send invite to send the new user an email with instructions for joining your IAM Identity Center organization.

To remove a user

1. Select the user to remove.
2. Choose Remove.
3. In the confirmation dialog, choose Remove user.

Note
You are removing the user from IAM Identity Center. The user can no longer sign in to the Deadline Cloud monitor or access farm resources.
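If you prefer to script user creation, the IAM Identity Center identity store exposes the same operations the console uses. The following is a minimal sketch, assuming a placeholder identity store ID (you can find yours in the IAM Identity Center settings) and an example user:

# Create an IAM Identity Center user from the command line.
# All values are placeholder assumptions.
aws identitystore create-user \
    --identity-store-id d-1234567890 \
    --user-name mjackson \
    --display-name "Mateo Jackson" \
    --name '{"GivenName": "Mateo", "FamilyName": "Jackson"}' \
    --emails '[{"Value": "mjackson@example.com", "Primary": true}]'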
Manage users and groups for farms, queues, and fleets

As part of managing users and groups, you can grant access permissions at different levels. Each subsequent level includes the permissions of the previous levels. The following list describes the four access levels from the lowest to the highest:

• Viewer – Permission to see resources in the farms, queues, fleets, and jobs they have access to. A viewer can't submit or make changes to jobs.
• Contributor – Same as a viewer, but with permission to submit jobs to a queue or farm.
• Manager – Same as a contributor, but with permission to edit jobs in queues they have access to, and grant permissions on resources that they have access to.
• Owner – Same as a manager, but can view and create budgets and see usage.

Note
Changes to access permissions can take up to 10 minutes to reflect in the system.

1. If you haven't already, sign in to the AWS Management Console and open the Deadline Cloud console.
2. In the left navigation pane, choose Farms and other resources.
3. Select the farm to manage. Choose the farm name to open the details page. You can search for the farm using the search bar.
4. To manage a queue or fleet, choose the Queues or Fleets tab, and then choose the queue or fleet to manage.
5. Choose the Access management tab. By default, the Groups tab is selected. To manage users, choose Users. Depending on the action to take, choose either the Groups tab or Users tab.

Groups

To add groups

1. Select the Groups toggle.
2. Choose Add group.
3. From the dropdown, select the groups to add.
4. For the group access level, choose one of the following options:
• Viewer
• Contributor
• Manager
• Owner
5. Choose Add.

To remove groups

1. Select the groups to remove.
2. Choose Remove.
3. In the confirmation dialog, choose Remove group.

Users

To add users

1. To add a user, choose Add user.
2. From the dropdown, select the users to add.
3. For the user access level, choose one of the following options:
• Viewer
• Contributor
• Manager
• Owner
4. Choose Add.

To remove users

1. Select the user to remove.
2. Choose Remove.
3. In the confirmation dialog, choose Remove user.
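These access levels can also be granted programmatically. The following is a minimal sketch of granting a user the viewer level on a farm, assuming placeholder farm, identity store, and principal IDs; equivalent commands exist for queues and fleets:

# Grant an IAM Identity Center user the VIEWER level on a farm.
# IDs are placeholder assumptions.
aws deadline associate-member-to-farm \
    --farm-id farm-1234 \
    --principal-id 12345678-1234-1234-1234-123456789012 \
    --principal-type USER \
    --identity-store-id d-1234567890 \
    --membership-level VIEWER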
Deadline Cloud jobs

A job is a set of instructions that AWS Deadline Cloud uses to schedule and run work on available workers. When you create a job, you choose the farm and queue to send the job to.

A submitter is a plugin for your digital content creation (DCC) application that manages creating a job in the interface of your DCC application. After you create the job, you use the submitter to send it to Deadline Cloud for processing. The submitter creates an Open Job Description (OpenJD) template that describes the job. At the same time, it uploads your asset files to an Amazon Simple Storage Service (Amazon S3) bucket. To reduce upload time, the submitter only sends files that have changed since the last upload to Amazon S3.

You can also create a job in the following ways:

• From a terminal – for users submitting a job who are comfortable using the command line.
• From a script – for customizing and automating workloads.
• From an application – for when the user's work is in an application, or when an application's context is important.

For more information, see How to submit a job to Deadline Cloud in the Deadline Cloud Developer Guide.

A job consists of:

• Priority – The approximate order that Deadline Cloud processes a job in a queue. You can set the job priority between 0 and 100; jobs with a higher priority number are generally processed first. Jobs with the same priority are processed in the order received.
• Steps – Defines the script to run on workers. Steps can have requirements, such as minimum worker memory, or other steps that need to complete first. Each step has one or more tasks.
• Tasks – A unit of work sent to a worker to perform. A task is a combination of a step's script and parameters, such as a frame number, that are used in the script. The job is complete when all tasks are complete for all steps.
• Environment – Set up and tear down instructions shared by multiple steps or tasks.
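To make these pieces concrete, here is a minimal sketch of an OpenJD job template with one step that expands into one task per frame. The structure follows the Open Job Description format described above; the specific names and the echo command are illustrative assumptions, not a template from this guide:

# Write a minimal OpenJD job template (names are illustrative).
cat > job.yaml << 'EOF'
specificationVersion: 'jobtemplate-2023-09'
name: SampleRenderJob
parameterDefinitions:
# A job parameter supplied at submission time.
- name: Frames
  type: STRING
  default: "1-10"
steps:
- name: Render
  # One task is created for each Frame value in the range.
  parameterSpace:
    taskParameterDefinitions:
    - name: Frame
      type: INT
      range: "{{Param.Frames}}"
  script:
    actions:
      onRun:
        # Placeholder work; a real job would invoke a renderer here.
        command: echo
        args: ["Rendering frame {{Task.Param.Frame}}"]
EOF

You could submit this template with the create-job command shown earlier, or package it as a job bundle for the Deadline Cloud CLI.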
Using a Deadline Cloud submitter

A submitter is a tool that integrates with your digital content creation application so that you can send render jobs directly to Deadline Cloud. This integration streamlines your workflow by eliminating the need to switch between applications or manually transfer files, which saves time and reduces the potential for errors.

Submitters are available for many popular DCC applications. Installing a submitter adds Deadline Cloud specific options to your application's interface, typically in the render settings or export menu.

With a Deadline Cloud submitter you can:

• Configure render job parameters in your familiar DCC environment
• Submit jobs to Deadline Cloud without leaving your application
• Reduce the potential for errors associated with manual file transfers