Response | Instruction | Prompt
---|---|---|
Region is a required parameter for the SSM client to know which region it should be interacting with. It does not try to assume one even if you're in the AWS cloud. If you want it to be assumed in your container, the simplest way to implement this is to use the AWS environment variables. In your container definition, use the environment attribute to specify a variable with the name AWS_DEFAULT_REGION and the value of your current region (a sketch follows after this entry). By doing this you will not have to specify a region in the SDK within the container. This example uses the environment attribute; see it for more information. | I have a Python app running in a Docker container on an EC2 instance managed by ECS (well, that's what I would like...). However, to use services like SSM with boto3, I need to know the region where the instance is running. I don't need any credentials as I use a role for the instance which grants access to the service, so a default Session is OK. I know that it is possible to fetch the region with a curl to the dynamic metadata, but is there any more elegant way to instantiate a client with a region name (or credentials) inside an EC2 instance? I ran through the boto3 documentation and found: "Note that if you've launched an EC2 instance with an IAM role configured, there's no explicit configuration you need to set in boto3 to use these credentials. Boto3 will automatically use IAM role credentials if it does not find credentials in any of the other places listed above." So why do I need to pass the region name for the SSM client, for example? Is there a workaround? | How to use boto3 inside an EC2 instance |
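A minimal, hedged illustration of the container-definition approach described in the answer above; the region value is only an example and the rest of the ECS task definition is omitted:
"environment": [
    { "name": "AWS_DEFAULT_REGION", "value": "eu-west-1" }
]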
aws iam list-policies --query 'Policies[?starts_with(PolicyName,`AWSCodeCommit`)]' | I'm trying to list AWS (managed) policies related to CodeCommit using the AWS CLI. I found aws list-policies but it doesn't seem to have a way to filter; it just returns ALL the policies. I would like to return the same as if I was using the console: AWSCodeCommitFullAccess, AWSCodeCommitPowerUser, AWSCodeCommitReadOnly. Anyone know the proper way to do this? Thanks! | Use AWS CLI to list certain managed policies |
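A hedged variant of the command above that additionally restricts the listing to AWS managed policies via the --scope flag and returns only the names:
aws iam list-policies --scope AWS --query 'Policies[?starts_with(PolicyName,`AWSCodeCommit`)].PolicyName'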
Unfortunately, you can't use your policy in the IP policy for an ES domain. Let me elaborate a bit on this, as I think there is a confusion between resource-based policies, such as IP policies for the ES domain, and identity-based policies for IAM users, roles or groups. The differences are explained in the AWS docs. In short, your policy arn:aws:iam::0000000:policy/Whitelister is a so-called managed policy. Managed policies can only be attached to an IAM identity, which can be an IAM user, group or role. They can't be attached to resource-based policies. | I have made a whitelist policy containing the list of IP addresses I want to whitelist. Sample below; consider the policy ARN is arn:aws:iam::0000000:policy/Whitelister:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Resource": "*",
"Condition": {
"NotIpAddress": {
"aws:SourceIp": [
"W.X.Y.Z",
"A.B.C.D"
]
}
}
}
]
}
I have an AWS Elasticsearch (ES) account, which allows a JSON-based access policy. How can I use the above policy in the AWS ES policy to restrict access to these IPs only? I have hard-written the IPs for now, but that will cause redundancy and updating the IPs will be difficult.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Principal": "*",
"Action": "*",
"Resource": [
"arn:aws:es:****************/domain-name/*",
"arn:aws:es:****************/domain-name/"
],
"Condition": {
"NotIpAddress": {
"aws:SourceIp": [
"W.X.Y.Z",
"A.B.C.D"
]
}
}
}
]
} | Attach Policy to AWS Elasticsearch |
No, you can't use multiple API keys on a single API. However you can configure multiple user pools if you use Amazon Cognito. | I am using AppSync as a GraphQL server. I have read through this article https://aws.amazon.com/blogs/mobile/using-multiple-authorization-types-with-aws-appsync-graphql-apis/ which allows me to use multiple authorisation methods in GraphQL. I have two fields on the GraphQL schema which require API key authorization. Can I assign one key for one field and a different key for a different field? Looking at the @aws_api_key directive, it doesn't have an extra parameter like name or similar. It seems that if a field is authorized by API_KEY, it can be accessed by any API key generated by AppSync. Can I create one API key for a particular field? I am looking for something like the schema below:
getAllPosts(): [Post]
@aws_api_key(key1)
updatePost(post: Post): Post
@aws_api_key(key2)
I'd like to use key1 for getAllPosts and key2 for updatePost. How can I achieve this in AppSync? | How can I use two API Key to authorize two graphql fields separately in Appsync? |
You need to connect to the Bastion host, and use that connection to open a tunnel from your machine to the target machine in the private subnet. That allows you to open a second connection to the target machine, using the tunnel. Here is a guide on how to do this using PuTTY: "AWS Setup Bastion Host SSH Tunnel" (they are also opening a second tunnel to a Windows server, you can ignore that part). A command-line sketch follows after this entry. | I have a scenario as follows:
I have one EC2 instance in private subnet and one EC2 instance in public subnet.
How can I connect to the private-subnet EC2 instance through the public-subnet EC2 instance, which is also called a Bastion host (jump box), from my Windows OS client machine? | How to connect to EC2 instance which is in Private subnet from my Windows OS client machine through Bastion host.? |
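As a hedged OpenSSH equivalent of the PuTTY guide referenced above (key file, user names and IP addresses are placeholders):
# Open a local tunnel through the bastion to port 22 of the private instance
ssh -i mykey.pem -L 2222:10.0.1.25:22 ec2-user@<bastion-public-ip>
# In a second terminal, connect to the private instance through the tunnel
ssh -i mykey.pem -p 2222 ec2-user@localhost
# Or, in one step, jump through the bastion with ProxyJump
ssh -i mykey.pem -J ec2-user@<bastion-public-ip> ec2-user@10.0.1.25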
In AWS your SSD disk is known as an EBS Volume. To update the volume size, find your current volume in the EC2 console and right-click on it, then click "Modify Volume" and select your new size (a CLI equivalent follows after this entry). If you haven't created your instance yet, you can specify the size during the launch wizard. | I am new to AWS EC2 and I have just purchased a t3.medium reserved instance. I would like to add 100 GB of SSD storage to my instance and use it as the instance's primary hard disk. How can I do this? I did not see any option for adding and configuring the SSD disk when I purchased the instance. I have purchased a Linux/Unix instance without any AMI. I intend to install Ubuntu 18.04 as the OS. Please advise. | Add SSD storage to AWS EC2 reserved instance |
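A hedged CLI equivalent of the console steps above (the volume ID is a placeholder); note that after growing the volume you still need to extend the partition and filesystem inside the OS, e.g. with growpart and resize2fs on Ubuntu:
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 100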
I already found it. You HAVE TO use the boto3-SDK to post an external question | I am trying to create an ExternalQuestion, but I always receive an error. There is also no template for creating ExternalQuestions, so I just use the "other" template and paste the code below.
I used the example code from the "External Question" doc and I was expecting it to show me my website in an iframe. Code:
<?xml version="1.0" encoding="UTF-8"?>
<ExternalQuestion xmlns="https://example.com">
<ExternalURL>https://example.com/task01</ExternalURL>
<FrameHeight>0</FrameHeight>
</ExternalQuestion>
I thought that this might show my website in an iframe and append the parameters for the worker ID and so on. But when I'm trying to post this I get the error "Layout does not contain any fields for Workers to provide responses. Please include at least one field (input, select, or textarea)." What am I doing wrong? | Where to create External Question on Mechanical Turk |
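A hedged Python sketch of posting the ExternalQuestion through the boto3 SDK, as the answer above suggests; the endpoint shown is the MTurk sandbox, the reward/duration values are placeholders, and note the xmlns here is the documented MTurk ExternalQuestion schema namespace rather than a placeholder:
import boto3

mturk = boto3.client("mturk", endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com")

external_question_xml = """<?xml version="1.0" encoding="UTF-8"?>
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/task01</ExternalURL>
  <FrameHeight>0</FrameHeight>
</ExternalQuestion>"""

hit = mturk.create_hit(
    Title="Example task",
    Description="Complete the task on the external page",
    Reward="0.10",
    MaxAssignments=1,
    AssignmentDurationInSeconds=600,
    LifetimeInSeconds=86400,
    Question=external_question_xml,
)
print(hit["HIT"]["HITId"])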
The problem is simple: a stable internet connection is required to maintain the SSH connection, as suggested by @John Rotenstien. | I was able to properly access my EC2 instance till yesterday. I don't know what happened, but suddenly today I am able to log in to the instance but the server closes the connection after 10 seconds of login. What could be the reason? The message I am getting:
[ec2-user@ip-172-31-32-248 ~]$ Connection to ec2-18-221-152-137.us-east-2.compute.amazonaws.com closed by remote host.
Connection to ec2-18-221-152-137.us-east-2.compute.amazonaws.com closed. | AWS EC2 ssh connection closes after 10 seconds |
My understanding is that you want ONE buildspec that you can reuse for multiple projects with a similar build. If this is the case, I think you can do this but you need to reverse your primary and secondary sources. When you create a build project you have to define the buildspec that is going to be used. Your build project is going to use the buildspec from your PRIMARY source. So the source that has your primary buildspec will need to be your PRIMARY and the project you are going to build will be your SECONDARY. Then, in your buildspec, you can reference commands pointing to your SECONDARY source using environment variables: in your buildspec, you would reference CODEBUILD_SRC_DIR_sourceIdentifier. I have done this with CodePipeline having multiple sources. If you define your output for your secondary source to be called SECONDARY_SOURCE_OUTPUT, then you would refer to it in your buildspec as $CODEBUILD_SRC_DIR_SECONDARY_SOURCE_OUTPUT/ (a buildspec sketch follows after this entry). Your buildspec exists in your primary source, but your commands would execute from the directory above in your buildspec. Now you can have several similar projects that have the same build pattern use the same buildspec. In the case below, I use the same buildspec from the project on the left in the source stage, with different projects that can be pulled in as secondary sources. This link has some information on multiple sources. | My plan is to create a build project with two sources: the primary source is the repository of the application I'm building; a secondary source is a repository containing a generic buildspec.yml and other files like eslint configs. The reason for that is I want to separate app code from build definition, so I can reuse the same build definition for several apps. The same buildspec.yml and accompanying eslint files are capable of building any of my backend applications. According to the documentation, the buildspec path for a build project is a path relative to the root path of the primary source. But what is the correct way of pointing to a buildspec.yml residing in the secondary source? (As far as I know the application code must be the primary source so CodeBuild can detect code changes like PRs opened and code push operations.) (I know that CodeBuild allows an S3 path as the buildspec path but I don't see how it can help me since my secondary source is a repository.) Thanks! | How to specify a buildspec file contained in a secondary source in Codebuild? |
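A hedged buildspec sketch for the setup described above; the source identifier SECONDARY_SOURCE_OUTPUT and the npm commands are assumptions, not taken from the original answer:
version: 0.2
phases:
  build:
    commands:
      # The buildspec lives in the primary source; the app code arrives as the secondary source
      - cd $CODEBUILD_SRC_DIR_SECONDARY_SOURCE_OUTPUT
      - npm install
      - npm run build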
I think your link should have the following form: <a href="{{link}}">{{link}}</a> | I have to send an email using a template through AWS SES.
{
"Template": {
"TemplateName": "resetPasswordEmailTemplate",
"SubjectPart": "FBO Hangars- Recover Password",
"HtmlPart": "Hi {{name}},\r\n Please click on the below link to recover your password.<b> <a href="">{{link}}</a> <b> The link will be valid for 24 hours. <b> Thank you for using ,Team",
}
}
This is my template. The issue is that the link is not clickable. Also, when I inspect the DOM, the href attribute is not there in the tag. Is this something related to configuration, or is the issue with my template? | AWS -SES: Adding link in my email template |
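For illustration, a hedged version of the HtmlPart from the question above with the href filled in as the answer suggests, and the inner quotes escaped so the JSON stays valid (the surrounding template fields are unchanged and omitted here):
"HtmlPart": "Hi {{name}},<br/>Please click on the below link to recover your password.<br/><a href=\"{{link}}\">{{link}}</a><br/>The link will be valid for 24 hours.<br/>Thank you for using, Team"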
No. After you upload the deployment package, it's saved in the function and layer storage of your AWS Lambda account, which has a default limit of 75 GB. On each invocation of the Lambda function, the deployment package will be pulled from there. Since the deployment package is not pulled from S3, it will not incur any data transfer cost. | I'm well aware that the Lambda function deployment package size limit is 50 MB (in the case of a compressed .zip/.jar) with a direct upload and 250 MB (uncompressed) via upload from S3. What I'm not clear on is how Lambda deploys the package from S3. Is it on each invocation of the Lambda function? Will there be any cost associated with data transfer between S3 and the Lambda function? | Does AWS Lambda use S3 during invocation or only during deployment? |
Both Amazon SNS and Amazon Pinpoint support sending push notifications to Amazon devices (e.g. an Amazon Fire tablet) through ADM (Amazon Device Messaging). The major difference between Amazon SNS and Amazon Pinpoint is that with Amazon SNS you have to set up your application to manage each message's audience, content, and delivery schedule. On the other hand, with Amazon Pinpoint you do not have to code these features; most of them are already built in. With Amazon Pinpoint you can collect data about your app usage, create highly-targeted segments and send full campaigns (either immediate or scheduled), plus many more features. | I have a use case where I want to send a notification to a user on an Amazon Fire tablet app, and upon tapping on the notification I show them the features of the app. I want to schedule this notification from the cloud. I saw that we have two services, Amazon Pinpoint and SNS, for doing so, but a lot of their features seem overlapping.
And I also know about Amazon Device Messaging, which is a service to push notifications. Which service is more suitable here and why? They all sound confusing to me. Anyone who can explain them in simple words would help me. | Amazon Pinpoint vs SNS vs ADM |
You could have an S3 "object created" event that triggers a Lambda function. This could perform the validation checks you desire (a sketch follows after this entry). See: https://docs.aws.amazon.com/lambda/latest/dg/with-s3.html | I understand that a pre-signed URL is a way to send a file to S3. When doing it that way, how can the object be validated? For example, I want to submit a JSON file to S3 and I want to make sure the file is in the correct format as input. I'd like to know if there is any way to get a response that the file is correctly saved and is valid according to my own validator function. | How to validate a file on S3 send by pre-signed URL? |
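A minimal, hedged sketch of such a Lambda handler, assuming the bucket's ObjectCreated event is wired to the function; the JSON check stands in for whatever validator you actually need:
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        try:
            json.loads(body)  # replace with your own validation logic
        except ValueError:
            # invalid file: tag it, delete it, or notify the uploader here
            print(f"Validation failed for s3://{bucket}/{key}")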
When you use ec2:Region in the Condition key, that's EC2-specific. You'll want to try aws:RequestedRegion for the condition key. Beware though: "Some global services, such as IAM, have a single endpoint. Because this endpoint is physically located in the US East (N. Virginia) Region, IAM calls are always made to the us-east-1 Region." Give it a try with:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Resource": "*",
"NotAction": [
"iam:*",
"organizations:*",
"account:*"
],
"Condition": {
"StringEquals": {
"aws:RequestedRegion": "us-east-2"
}
}
},
{
"Effect": "Allow",
"Action": [
"iam:CreateServiceLinkedRole",
"iam:DeleteServiceLinkedRole",
"iam:ListRoles",
"organizations:DescribeOrganization",
"account:ListRegions"
],
"Resource": "*"
}
]
} | I'm trying to create an AWS IAM Policy that gives access to everything that a Power User has (arn:aws:iam::aws:policy/PowerUserAccess) but only in a specific region. I started with the existing Power User policy and found this article: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_ec2_region.html So I added the "condition" to the Power User policy and the result is:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Resource": "*",
"NotAction": [
"iam:*",
"organizations:*",
"account:*"
],
"Condition": {
"StringEquals": {
"ec2:Region": "us-east-2"
}
}
},
{
"Effect": "Allow",
"Action": [
"iam:CreateServiceLinkedRole",
"iam:DeleteServiceLinkedRole",
"iam:ListRoles",
"organizations:DescribeOrganization",
"account:ListRegions"
],
"Resource": "*"
}
]
}
This does not seem to be working, as I can create EC2 instances only in the specified region... but other services are not available: | AWS IAM PowerUser Scoped to Specific Region |
You can now select the inference response for classification models :) More info at https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-automate-model-development-container-output.html | I am testing SageMaker AutoPilot in order to verify how good it is for regular use. Up until now it seems relatively easy to use: it trained a model with good results and it was easy to create the endpoint. I would like to get the predicted label and its probability, in order to check if the prediction is good. However, I could only get the label and I did not find anything about retrieving the probability (predict_proba). Is there any way to get the probability? Thank you! | Predict probability on AWS SageMaker AutoPilot endpoint |
The IN clause requires a list or a column, not a comma-separated string.
One way to do what you want is to split and explode a string, something like this:
WITH values AS (
select explode(split('${hivevar:idListToFilter}',',')) val
)
SELECT * FROM table_1 t
WHERE t.id NOT IN (
SELECT trim(x.val) from values x
);... whereidListFilteris passed into HQL as a simple comma-separated string, via$ beeline --hivevar idListToFilter="id1,id2,id3" ... | I am working on HQL where I need to pass an array of strings as an argument:select * from table_1 where id not in ('${idListToFilter}')I want to passidListToFilteras an argument in Hive query. Tried using values likeidListToFilter="'1','2','3'"but getting an exception:NoViableAltException(340@[319:1: constant : ( ( intervalLiteral )=> intervalLiteral | Number | dateLiteral | timestampLiteral | StringLiteral | stringLiteralSequence | IntegralLiteral | NumberLiteral | charSetStringLiteral | booleanValue | KW_NULL -> TOK_NULL );])Can someone help? | Passing array of string as an argument in Hive Query (HQL) |
This message does NOT guarantee the stack will be created successfully: "StackId": "arn:aws:cloudformation:eu-west-2:350027292717:stack/emajarstack/a3b07fd0-8d1e-11ea-9ac5-060e4e394d84"
Two possibilities here:
1 - 99% sure your stack failed creation because of some issue with the template or other causes (e.g. limits, dependencies), and that's why it's not showing up; check in the console for more details. It might be in ROLLBACK_COMPLETE state, so if you can't look at the console, use this flag: aws cloudformation list-stacks --stack-status-filter ROLLBACK_COMPLETE
2 - Specify the correct region with your list command, as your CLI will be looking in the default region, which could be different from the one your stack is in.
Edit: also, run aws sts get-caller-identity to ensure you are using the right user with enough permissions. | I'm facing a pretty weird problem using the AWS CLI. I created a new IAM user from my main profile and I gave this user AdministratorAccess in order to allow this user to create AWS resources using a CloudFormation script. I just created a new stack containing a VPC resource. The stack should have been created in eu-west-2 since I got the following message:
{
"StackId": "arn:aws:cloudformation:eu-west-2:350027292717:stack/emajarstack/a3b07fd0-8d1e-11ea-9ac5-060e4e394d84"
If I log in with my main AWS profile I cannot see any stack created in the eu-west-2 region.
I even tried to run a couple of CLI commands to list or describe my stacks, but apparently no stack has been created:
$ aws cloudformation list-stacks
{
"StackSummaries": []
}
$ aws cloudformation describe-stacks --stack-name emajarstack
An error occurred (ValidationError) when calling the DescribeStacks operation: Stack with id emajarstack does not exist
Fun fact: I cannot create the same stack because I get the following message: "An error occurred (AlreadyExistsException) when calling the CreateStack operation: Stack [emajarstack] already exists". Questions are: do you know how I can find my stack, or if I'm doing something wrong? Is there a way to search for a stack by ID? | AWS - Cannot find a Cloudfront stack |
Based on the comments, the problem was due to missing permissions to codestar-connections. The solution was to create an inline policy in the role in question with permissions to codestar-connections:* (an example policy follows after this entry). | I am trying to build a pipeline for our CI/CD process using AWS CodePipeline. I click on "Create Pipeline", provide a name and use the defaults for the first panel. On the next panel, on selecting "BitBucket (beta)" as the source provider, I get the following access exception: "AccessDeniedException: User: arn:aws:iam::280945876345:user/Roger is not authorized to perform: codestar-connections:ListConnections on resource: arn:aws:codestar-connections:us-west-2:280945876345:*" I went through the documentation and provided full access to CodePipeline, CodeDeploy, CodeStar, CodeBuildAdmin, CloudFormation, AmazonS3, AmazonECS, and AWSCodeCommit to the IAM user. I don't find any policies related to codestar-connections:* that I could add. I understand the CodePipeline-BitBucket integration is in beta, but just wanted to check if anyone else had encountered this issue and resolved it. | AWS IAM access exception with AWS CodePipeline when integrating with BitBucket |
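A hedged example of the inline policy the answer above describes; narrowing the Action list or Resource further may be preferable in practice:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "codestar-connections:*",
      "Resource": "*"
    }
  ]
}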
As mentioned in syumaK's answer, this is not supported by the Textract API. Consider maybe using alternative services like the Google Vision API, which often gives you whole paragraphs rather than just lines. Alternatively, consider how text is normally laid out on a page. Lines that are part of the same paragraph tend to have similar-ish widths as well as similar heights; they will share similar left, center or right x-locations depending on the alignment used, and generally the separation between lines in the y-direction will be less than 2 times the height of the line. You can limit your search to single pages at a time. It might benefit from building a spatial search index like an R-tree to improve the page search speed. No code, sorry, but that should form a pretty good skeleton for building out the line block aggregation function. | I've started experimenting with aws-textract, specifically with detect-document-text (Docs: https://docs.aws.amazon.com/textract/latest/dg/detecting-document-text.html).
For one example, where the image content is:This is the first line
should continue here.
This is the second line.detect-document-textoutput, is returning aJSON, where eachBlockTypenode is eitherWORD,LINEorPAGE(Some other elements are attached like,Relationshipswhere is defined thetypeand a list ofId's,Geometryinformation (coordinates),Confidence, etc). In this case, output will contain aBlockType(LINE) for each row (as expected), something like this:{
...
{
...
"BlockType": "LINE",
"Confidence": 97.8960189819336,
"Text": "This is the first line",
...
},
{
...
"BlockType": "LINE",
"Confidence": 97.8960189819336,
"Text": "should continue here.",
...
},
{
...
"BlockType": "LINE",
"Confidence": 97.8960189819336,
"Text": "This is the second line.",
...
},
...
}
My question is: is there a parameter that can be overwritten (like a span value for rows or cells to keep a single node per "sentence"), or some option to group lines by paragraph (based on calculated coordinates), with the intention of getting full sentences? Or is this mandatory post-processing on the client side? It seems to be a common scenario, so I'm trying to find out if it's already offered by Textract or some other AWS service using the Textract output JSON. | aws textract - Group output lines by parragraph |
As all projects, initial implementation of CSFLE had a scope. This scope did not include the ability to use instance roles for credential identification.I suggest you submit your request tohttps://feedback.mongodb.com/for consideration. | I am using EC2Instance profile credentialsfor allowing the AWS EC2 instance to access other AWS services.Recently, I implementedMongoDB Client-Side Field-Level Encryptionfor which the AWS KMS has been used as KMS Providers. TheMongoDB Documentation for CSFLEmentions that the KMS Provider should have secret key and access key that maps to an IAM User.This way I will have to create another IAM User and then maintain those credentials separately. A simpler way (and more secure) would have been to use theDefaultCredentialsProviderfromsoftware.amazon.awssdk:authand that could have used the credentials from the instance profile that could have given access to the KMS. But this does not work for me and MongoClient fails as KMS rejects the security token used.Is there any reason behind not allowing this way of accessing KMS? | Can't use AWS IAM Roles with KMS Providers for MongoDB Client Side Field Level Encryption? |
My first thought would be to store the pfx file in an S3 bucket in your account, specifying KMS encryption when you store the file. Then give the Lambda function's IAM role permission to read the file from S3. In some initialization code outside of your Lambda function's handler, you would simply call an S3 copy function, using the AWS SDK, to copy the pfx file to the Lambda function's /tmp folder. | I'm developing a Lambda function to consume a SOAP API. The SOAP API requires authentication with an SSL certificate. I managed to get it working locally by importing the pfx file using a binary loader (webpack), and then writing it back to the '/tmp/' path in the Lambda container like so:
const cert = require('/etc/ssl/certs/cert.pfx')
const certPath = '/tmp/cert.pfx'
fs.writeFileSync(certPath, Buffer.from(cert, 'binary'))
client.setSecurity('/tmp/cert.pfx', 'secretPassphrase', {...options});
This is not really a viable strategy, as it would either require adding the pfx file to version control or other complicated measures. What I would love is to be able to just require the pfx binary from somewhere in AWS (Secrets Manager / Parameter Store / some other service). But I can't seem to figure out a way to get that to work with the binary pfx format. What is the smart way to solve this problem? Thanks a million! | How to safely add .pfx certificate to aws lambda |
You are able to use Fn::ImportValue in conjunction with !Sub in CloudFormation templates. However, the intrinsic function reference types and order are important here. As per the AWS documentation: "You can't use the short form of !ImportValue when it contains a !Sub. Instead, you must use the full function name." Therefore, structure your template like:
Properties:
Bucket:
Fn::Sub:
- 'arn:aws:s3:::${BucketName}/prefix/*'
- BucketName: !ImportValue VPCCommonBucketAlso, as your probably aware, to use the import function you must havedeclared the resource an outputin a separate cloudformation template. Here's an AWS providedwalk-throughif you get stuck. | We have a huge VPC CF Template that we use to define our development, staging, and production environments. One of these resources is a Common S3 bucket for use with tasks not directly related to a specific customer. This bucket has an Export namedVPCCommonBucketwhich contains just the bucket name.I am trying to use this Export value in another stack, referencing that bucket, creating an IAM user that has access to ONLY that bucket, further restricting it to a single directory IN that bucket.When using a Parameter, I can do something like this:!Sub "arn:aws:s3:::${BucketName}/prefix/*"But I cannot find something similar with regard to usingFn::ImportValue/!ImportValue. Is there a way to insert an exported variable into a string as I'm trying to do here? Or is this a matter of needing to go back and alter our main Template to include ANOTHER Export for the Bucket's arn? | Insert a CloudFormation ImportValue similar to how you can insert a Parameter? |
For custom runtimes, you are billed for the init time as well as mentioned in thedocs-"Initialization counts towards billed execution time and timeout. When an execution triggers the initialization of a new instance of your function, you can see the initialization time in the logs and AWS X-Ray trace."548.98 ms (function duration) + 411.83 ms (init) = 960.81ms rounded off to next 100ms resulting in Billed Duration: 1000 msFor the runtimes which Lambda supports; init time isn't counted towards the billed duration. | I am testing applications with two different runtimes: node.js and java native executable (ahead of time compiled with GraalVM).Here are the startup logs.Node.js:Duration: 556.31 ms Billed Duration: 600 ms Memory Size: 128 MB Max Memory Used: 81 MB Init Duration: 365.44 msNative executable:Duration: 548.98 ms Billed Duration: 1000 ms Memory Size: 256 MB Max Memory Used: 106 MB Init Duration: 411.83 msAs you can see,DurationandInit durationare very close, but for some reasonBilled Durationis almost 2 times more for the custom runtime with native executable.Could you please explain what is the difference and how I can avoid that? | Why AWS lambda billing differ for different runtimes? |
The issue was the NFS version.
Since it was a windows VM we tried disabling the older NFS versions and tried it did work when we selected the right version.But we moved to SMB since it was easier to setup on Windows VMAWS Data Sync agent uses below command to mount the share foldermount -o uid=65534, gid=65534, file_mode=0755, dir_mode=0755, forceuid, forcegid, noperm, noacl,rsize=1048576, wsize=1048576, soft -o user=awsDS, password=, vers=2.1 -t cifs <MOUNT_TARGET> <MOUNT_PATH> | I am trying to Sync the data from a On Premise VM to AWS S3 bucket using AWS Data Sync, I have already configured the AWS Data Sync Agent on the On Prem VM , The Agent is now Online and we have also created a new task, The task is available in state.As I am trying to sync the data from the NFS File System to S3 bucket using the task we get the below mentioned error:"DataSync could not detect any files in the source NFS filesystem" | AWS DataSync could not detect any files in the source NFS filesystem |
Adding the following lines to the configuration helps:
cloud.aws.region.static=my region
cloud.aws.stack.auto=false
spring.autoconfigure.exclude=org.springframework.cloud.aws.autoconfigure.metrics.CloudWatchExportAutoConfigurationSo Spring uses AWS default chain but only for credentials. AWS SDK uses it for region and other configuration parameters too.So this is Spring bug for sure.It still gives a warning about no connection to instance metadata service once during application start but more or less this solution can be used for local running.If we don't have the last line with excluding CloudWatchExportAutoConfiguration, there will be many exceptions in stack trace while closing the app. I use CloudWatch metrics in my app.I guess rationale behind excluding aws auto configuration is that it has conflicts with boot actuator but I'm not sure. | I need to run Spring Boot based app locally. It uses spring-cloud-starter-aws dependency.The problem is that it tries to connect to EC2 metadata service always. Setting "cloud.aws.*" properties doesn't help.I expect that default AWS credentials chain will be used, credentials and region will be read from one of AWS preferred way (e.g. ~/.aws/config and ~/.aws/credentials files).I tried to set cloud.aws.credentials.useDefaultAwsCredentialsChain property but spring-cloud-starter-aws doesn't careI foundexamplesthat use CloudFormation stack for very strange reason to run the app locally.When I use AWS SDK for Java default AWS chain is used without any issues - I don't need to do anything specific for local running of the application (locally it reads credentials from files and on EC2 it uses instance metadata service). But with Spring Boot it doesn't work out of the box and I need to enable local running somehow.I use 2.2.2.RELEASE version of Spring Boot and 2.2.1.RELEASE version of Spring Cloud. I have a feeling they introduced regression, because in previous versions it worked without problems.Any ideas how to run the app locally? | How to run the app with spring-cloud-starter-aws locally? |
Minio (https://min.io/) would give you a DIY option for S3-compatible on-prem storage. Alternatively, if you are ready to spend a lot of money and keep your data on-prem with AWS S3, take a look at AWS Outposts. | Can we configure/install Amazon S3 on-prem? Is there a way to configure Amazon S3 on-prem instead of Amazon? I know there is a hybrid cloud storage type, but this will store data on-prem and in S3 as well. But I am looking for a solution where the S3 should be managed and store data in the on-prem VPC only. | How to configure/install Amazon S3 on-prem [closed] |
So, I was actually able to achieve this by defining a new provider in the module which assumes the OrganizationAccountAccessRole inside the newly created account. Here's an example:
// Define new account
resource "aws_organizations_account" "my_new_account" {
name = "my_new_account"
email = "[email protected]"
}
provider "aws" {
/* other provider config */
assume_role {
// Assume the organization access role
role_arn = "arn:aws:iam::${aws_organizations_account.my_new_account.id}:role/OrganizationAccountAccessRole"
}
alias = "my_new_account"
}
resource "aws_config_config_rule" "s3_versioning" {
// Tell resource to use the new provider
provider = aws.my_new_account
name = "my-config-rule"
description = "Verify versioning is enabled on S3 Buckets."
source {
owner = "AWS"
source_identifier = "S3_BUCKET_VERSIONING_ENABLED"
}
scope {
compliance_resource_types = ["AWS::S3::Bucket"]
}
}However, it should be noted that defining the provider inside the module leads to a few quirks, notably once you source this moduleyou cannot delete this module. If you do it will throw aError: Provider configuration not presentsince you will have also removed the provider definition.But, if you don't plan on removing these accounts (or are okay with doing it manually when needed) then this should be good! | My goal is to create a Terraform Module which creates a Child AWS accountandcreates a set of resources inside the account (for example, AWS Config rules).The account is created with the followingaws_organizations_accountdefinition:resource "aws_organizations_account" "account" {
name = "my_new_account"
email = "[email protected]"
}And an exampleaws_config_config_rulewould be something like:resource "aws_config_config_rule" "s3_versioning" {
name = "my-config-rule"
description = "Verify versioning is enabled on S3 Buckets."
source {
owner = "AWS"
source_identifier = "S3_BUCKET_VERSIONING_ENABLED"
}
scope {
compliance_resource_types = ["AWS::S3::Bucket"]
}
}However, doing this creates the AWS Config rule in the master account, not the newly created child account.How can I define the config rule to apply to the child account? | Terraform Create resource in Child AWS Account |
The vertex you get back from the query is known as a reference vertex. It will only contain an ID and a label. For the properties you need, you should explicitly ask for them using a step like values, project or valueMap (a one-line example follows after this entry). | I am accessing a Neptune database instance from a Lambda. I have successfully configured the connection to the Neptune database from the Lambda using the following:
Cluster.Builder builder = Cluster.build();
builder.addContactPoint("endpoint");
builder.port(8182);
builder.enableSsl(true);
builder.keyCertChainFile("SFSRootCAG2.pem");
I have even sent update and insert statements to the database using:
GraphTraversalSource g = traversal().withRemote(DriverRemoteConnection.using(cluster));
g.addV("Custom Label").property(T.id, "CustomId1").property("name", "Custom id vertex 1").next();
But when I try to retrieve properties of the vertex:
Vertex vertex = g.V().has(T.id, "CustomId1").next(); System.out.println((String) vertex.value("name"));
I receive the error that the property name does not exist on that vertex: "org.springframework.web.util.NestedServletException: Request processing failed; nested exception is java.lang.IllegalStateException: The property does not exist as the key has no associated value for the provided element: v[CustomId1]:name" Can someone please let me know what the mistake is that I am making here? | Vertex.value() Property Not found Gremlin Neptune Java |
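A hedged one-line illustration of the values step the answer above mentions, asking the server for the property directly instead of reading it off the returned reference vertex (the vertex ID matches the question's example):
String name = (String) g.V("CustomId1").values("name").next();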
As your error message indicates, you are mixing the callback style with the async/await style, and that triggers the warning. I prefer using async/await. This means the handler function is always an async function (with the async keyword); instead of calling the callback function, just return the result, and you don't need the callback parameter in the handler function at all. In the error case, just throw the error (without a try/catch block).
const wrapperHandler: Handler<CognitoUserPoolEvent> = async (
event,
context,
// callback
) => {
// let error = null;
try {
await myAsyncFunc();
} catch (e) {
// error = e;
// Do something with your error
throw e;
}
// callback(error, event);
return event; // just return result for handler function
};
In short:
const wrapperHandler: Handler<CognitoUserPoolEvent> = async (
event,
context,
) => {
await myAsyncFunc();
return event;
};
I created an API Gateway + Lambda for signUp with amazon-cognito-identity-js. Then I implemented a Cognito trigger function for preSignUp in TypeScript. I use the Serverless framework to pack and deploy. The runtime is Node 12.
const wrapperHandler: Handler<CognitoUserPoolEvent> = async (
event,
context,
callback
) => {
let error = null;
try {
await myAsyncFunc();
} catch (e) {
error = e;
}
callback(error, event);
};Everything works fine, it can return the error to the actual endpoint lambda which will then be returned, if no error, the logic will be executed.However, this warning is pretty annoying.The code is forpreSignUpin CloudWatchWARNING: Callback/response already delivered. Did your function invoke the callback and also return a promise? For more details, see:https://docs.aws.amazon.com/lambda/latest/dg/nodejs-prog-model-handler.htmlIn the code, I didn't return anything before calling the callback, why would this happen? and how to solve it. | Why would this AWS lambda cause error: WARNING: Callback/response already delivered |
No - if you have already retrieved an item, you will not retrieve it again in the sameScanoperation even if it is modified. Moreover, if your modification adds anewitem, this new item may or may not be returned by the ongoing scan - you don't know.To understand why this is the case, you need to understand howScanactually works:Although DynamoDB doesn't guarantee any sorting order between the partition keys, in their internal implementation there is some order for this partition keys (based on a hash function of the key - this is why these keys are also known in some DynamoDB documentation ashash keys). AScaniterates over the partition keys in this hash order, and doesn't go back. If the scan passed some position in the hash values, it won't pass it again, and in particular the same item with the same partition key will not be retrieved again, and new items will or won't be retrieved depending on whether the new item's partition key is before or after the current position of the scan.If you really need to get all new modifications as they happen, you should consider using DynamoDB's "Stream" feature instead of - or in addition to -Scan. DynamoDB's Stream feature lets you read all the modifications to the database as their happen. Some applications combine both aScanto read existing items, and a stream to read items modified after the Scan started. | When I run scan operation on DynamoDb, I can useLastEvaluatedKeythat is returned as part of the response to get the scan the next items available. And I can repeat this process until it doesn't returnLastEvaluatedKeyanymore and that indicates there is no more items to scan.My question is, if while in the middle of that scan operation, some items that have been retrieved from the scanning process are updated (put), should I expect those items to appear again? | Scanning DynamoDb while updating some items |
event['body'] is going to return a string, a JSON string. You need to parse it with something like this:
body = JSON.parse(event['body'])
my_int = body['message']
Also, if you were to do puts event.inspect instead of puts "#{event['body']}", you would have been able to see that body returned a string and not an object. I hope that helps and good luck. | So I'm new to Ruby and I have a simple REST API. The POST request looks like this: POST /endpoint, { 'message': 1 } My lambda handler looks like this:
def run(event:, context:)
puts "#{event['body']}"
# prints the request body
end
I'm trying to figure out how to store message, which is an int, in a variable. I was trying to do something like this but it doesn't work:
my_int = event['body']['message'] | Getting the event body attributes in Ruby with AWS Lambda |
Above issue was resolved by explicitly specifying the connection details for aurora serverless cluster (instead of dropdown selection). But the answer to original question of using Aurora serverless DB as source in DMS replication -Yes, if only one time replication is requiredNo, If ongoing replication is required. For ongoing replication, It is required to change the values of binlog_format parameter for source database. Although, Aurora serverless allows changing value for this parameter but it has no impact in actual. Only a few parameters are supported for change which are listedhere | My DMS replication instance (which is in same VPC as of Aurora serverless DB instance) is not able to find DB while creating endpoint in DMS.However, I am able to create a cloud9 instance in same VPC as aurora serverless instance and connect to it from there.Am I missing something here or it is not possible to use AWS DMS for migrating data from Aurora serverless as source? | Can we use AWS Data Migration Service for replication from Aurora Serverless as source? |
Not sure if this is the same case for you but I got the same error when I was under the impression my environment was healthy - I figured out my issue by clicking on "Show all" in the Recent events section, at which point the culprit error logs were displayed. | I get a message when I am trying to deploy my php app on EC2 using Elastic Beanstalk.Environment named *** is in an invalid state for this operation. Must be Ready.I am unable to deploy. The server status is running, Environment Health is OK and no warnings.How can I resolve this? | Invalid Parameter Value: AWS Environment named *** is in an invalid state for this operation. Must be Ready |
As per the Hadoop documentation, different S3 buckets can be accessed with different S3A client configurations, using a per-bucket configuration that includes the bucket name, e.g. fs.s3a.bucket.<bucket name>.access.key. A sketch follows after this entry. Check the URL below: http://hadoop.apache.org/docs/r2.8.0/hadoop-aws/tools/hadoop-aws/index.html#Configurations_different_S3_buckets | I have Spark jobs running on an EKS cluster to ingest AWS logs from S3 buckets.
Now I have to ingest logs from another AWS account. I have managed to use the below setting to successfully read in data from cross account with hadoop AssumedRoleCredentialProvider.
But how do I save the dataframe back to my own AWS account S3? It seems no way to set the Hadoop S3 config back to my own AWS account.spark.sparkContext.hadoopConfiguration.set("fs.s3a.assumed.role.external.id","****")
spark.sparkContext.hadoopConfiguration.set("fs.s3a.aws.credentials.provider","org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider")
spark.sparkContext.hadoopConfiguration.set("fs.s3a.assumed.role.credentials.provider","com.amazonaws.auth.InstanceProfileCredentialsProvider")
spark.sparkContext.hadoopConfiguration.set("fs.s3a.assumed.role.arn","****")
val data = spark.read.json("s3a://cross-account-log-location")
data.count
//change back to InstanceProfileCredentialsProvider not working
spark.sparkContext.hadoopConfiguration.set("fs.s3a.aws.credentials.provider","com.amazonaws.auth.InstanceProfileCredentialsProvider")
data.write.parquet("s3a://bucket-in-my-own-aws-account") | How to use Spark to read data from one AWS account and write to another AWS account? |
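A hedged sketch of the per-bucket S3A pattern from the answer above, applied to the question's own snippet: the assumed-role settings are scoped to the cross-account bucket only (bucket name and role ARN are placeholders), so the default instance-profile credentials still apply when writing to your own bucket.
spark.sparkContext.hadoopConfiguration.set("fs.s3a.bucket.cross-account-log-location.aws.credentials.provider", "org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider")
spark.sparkContext.hadoopConfiguration.set("fs.s3a.bucket.cross-account-log-location.assumed.role.credentials.provider", "com.amazonaws.auth.InstanceProfileCredentialsProvider")
spark.sparkContext.hadoopConfiguration.set("fs.s3a.bucket.cross-account-log-location.assumed.role.arn", "arn:aws:iam::<other-account-id>:role/<role-name>")
val data = spark.read.json("s3a://cross-account-log-location/...")
data.write.parquet("s3a://bucket-in-my-own-aws-account/...")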
It turns out that each step in the SFN has access to the original input, but you have to explicitly insert it using Parameters. All of the execution input is available in $$.Execution.Input. When using Parameters, you are overriding all other input, so you will also have to define what you want to pass along from the previous step. In other words: if Y is passing {fruit: "apple"} to Z, and you want Z to see both that and the original input {Foo: "Bar"}, you would add the following to the Z step in SFN:
{
"Parameters": {
"Foo.$": "$$.Execution.Input.Foo",
"fruit.$": "$.apple"
}Note the double$$in the first case, and the single$in the second. | I have an AWS Step Function that consists of a"Map"(looping over an Array in the input), and then a number of tasks carried out in sequence with all the results. However, in the last step I'dalsolike to be able to access some of the original input data, that was not used in the intermediate steps. How should I best do this?What I have tried so far:Passing all input data through the map tasks. This however fails quickly, because there is a limit on how large results can be passed between functions, and with hundreds of iteration the original input is multiplied hundreds of times.UsingParallelstate, and add an empty state in one branch of the parallel execution that simply passes the input on, while all of the iteration happens in the other. This feels like a hack, because my step structure now poorly reflects the actual logic of the code.What would be the “right” way to do this?Iterate
over Fiz
+--------+ +-------------+ +------------+
{ | +---+ | | | | |
Foo: "Bar", --> -------+ | +---+ Y +----+ Z |
Fiz: [...], | +-+ | | | | |
} +-----+ +-------------+ +------------+In box Z above, I want to have access to both 'Foo', and the output of Y | Pass data across loop in AWS Step Functions |
Try removing the double quotes and execute:
ALTER TABLE herdsysa.temperature ADD IF NOT EXISTS PARTITION (dt='2020-02-03') | ALTER TABLE "herdsysa"."temperature" ADD IF NOT EXISTS PARTITION (dt='2020-02-03')
I am trying to run this query on the Athena workbench but it says missing column at if (service: amazonathena; status code: 400; error code: invalidrequestexception; request id: 935dfae3-a4af-4438-be16-10d7884c9292). Anybody know how to make this work? | Amazon Athena ALTER TABLE ADD PARTITION query giving missing column error |
The problem was related to Mysql version and TLS version. This matrix shows that for MySQL 5.6 only TLS 1.0 is supported. Node.js 12 by default uses TLS 1.2.https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MySQL.html#MySQL.Concepts.SSLSupport | I am trying to connect to RDS through Lambda NodeJS 12.x with SSL. However I am receiving these errors:Error: 4506652096:error:1425F102:SSL routines:ssl_choose_client_version:unsupported protocol:
library: 'SSL routines',
function: 'ssl_choose_client_version',
reason: 'unsupported protocol',
code: 'HANDSHAKE_SSL_ERROR'I am connecting like this:const pool = mysql.createPool({
connectionLimit : 10,
host : 'db.cqgcxllqwqnk.eu-central-1.rds.amazonaws.com',
ssl : {
ca : fs.readFileSync(__dirname + '/rds-ca-2019-root.pem')
},
user : ‘xxxxx’,
password : ‘xxxxxx’,
database : ‘xxxxxx’,
multipleStatements : true
});When I connect with the certificate through MySql Workbench everything works just fine.Any idea on how to solve this?Thanks a lot! | Connection error in AWS RDS SSL in NodeJS |
The--limitseems to work. But it is a little long to retrieve records from the correct shard iterator. | I'm trying to run DynamoDBStreams GetRecords command with--limitoption but nothing is returned...$ aws --version
aws-cli/1.16.266 Python/3.5.2 Linux/5.3.0-28-generic botocore/1.13.2
$ aws dynamodbstreams --profile my-profile --region my-region get-records --shard-iterator my-shard-iterator --limit 2
# Output
{
"NextShardIterator": "my-next-shard-iterator",
"Records": []
}When I remove the--limitoption, Some records are returned:$ aws dynamodbstreams --profile my-profile --region my-region get-records --shard-iterator my-shard-iterator
# Output
{
"Records": [
{
# record 1
},
{
# record 2
}
],
"NextShardIterator": "my-next-shard-iterator"
}According to thedocumentation--limit (integer)
The maximum number of records to return from the shard. The upper limit is 1000.Am I doing something wrong or this option does not work globally ?Thanks | DynamoDBStreams GetRecords with --limit returns empty records |
C# .Net wrapper for extracting Key-Value Pairs from a Form DocumentThis Githublinkmight be useful to all of you. | I'm looking for sample C#.net code to integrate Textract into .Net application. I tried for a sample but all available are with Java and python. | Textract APi Call using C#.net |
This library is not super intuitive and took me a bit to figure out. It sounds like they may be revamping this a lot soon, but as of spring-cloud-aws 2.2.3.RELEASE I got it working like this:
Authentication: make sure you have your profile configuration in a <USER_HOME>/.aws/credentials file with a [default] profile.
Region: in version 2.2.3 there is a bug that defaults the region to us-west-2 unless this is specified in your bootstrap.yml:
aws:
secretsmanager:
region: <whatever region you'd like>
Secrets Manager: make a secret named /secret/application. For now, add a key/value pair "password:secret".
Code: these key/value pairs will be directly mapped to properties, so you should now be able to just throw this in your Spring app. Your password variable will now have the value "secret":
@Service
public class MyService {
@Value("${password}")
public String password;
}That's the basics. If youread the documentationthey describe how to determine which secrets are checked on startup. In your example, by default it would also be checking/secret/com.example.testin addition to/secret/application | I'm trying to integrate Spring Cloud application with the AWS Secrets Manager.While doing, I'm having issue finding example code for Spring Cloud and the AWS Secrets manager integration. I have got the spring-cloud-starter-aws-secrets-manager-config in our pom, looking at theofficial docs.As per this documentation, I need to just add property sources in a certain way, but I'm unsure how it can select the correct secrets?If my application is called com.example.test does that mean my secret should be called secret.com.example.test and anything I add in there will automatically be available as a property source?Do I even need to add any code for this to work? or Could you provide any other sources to complete this? | How to integrate Spring Cloud with AWS Secrets Manager? |
Here's an outline of one way to solve this:
- The client makes a /start API request that triggers Lambda #1.
- Lambda #1 is short-lived and does the following: generates a UUID as a correlator for the task about to be undertaken; creates a new item in DynamoDB, with the UUID as the key; triggers the long-lasting (1 minute) Lambda #2 to start, passing it the UUID; returns the UUID to the client.
- Lambda #2 is long-lived and does the following: whatever work it needs to, periodically updating its status and results on the UUID item in DynamoDB.
- The client can poll a /status?id=UUID API, on whatever schedule it likes, which triggers Lambda #3.
- Lambda #3 is short-lived and does the following: queries the UUID item from DynamoDB and returns the current status and any results to the client.
When the /status?id=UUID API call indicates that the long-lived task is complete (or failed), the client can make a final API request to indicate that it has the result associated with the UUID and the DynamoDB item can be deleted, or you could just implement a TTL on the DynamoDB item. This process looks complicated, but it's really not (a sketch of Lambda #1 follows after this entry). Rather than the client polling the back-end for status and results, it could alternately poll an SQS queue for the same, or subscribe to an SNS topic. | The problem I have is the following. I currently have a system running in .NET. This system makes a call to a service which takes approximately 1 minute. We are currently migrating the solution to AWS, and the problem I find is that the Lambda runs in 1 minute (since it makes the call to the other system that takes 1 minute) and everything works fine. But when I make the call from API Gateway, I have a timeout. Investigating, I found that it has a maximum timeout of 29 seconds. So I need to know what solution I can apply to this problem, considering that I need to wait 1 minute for the Lambda function. One option that occurred to me is to trigger the call from the API, let the Lambda function run, and from the client create a poll to check the status of the transaction. But I don't know how to keep the initial call "in memory" so that when I call the API again to check the status, I know I'm talking about the same request, to get the result data. | Long call to aws api gateway |
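A minimal, hedged Python sketch of Lambda #1 from the outline above; the table name "tasks" and the worker function name "long-running-worker" are placeholder assumptions:
import json
import uuid
import boto3

dynamodb = boto3.client("dynamodb")
lambda_client = boto3.client("lambda")

def start_handler(event, context):
    task_id = str(uuid.uuid4())
    # Record the task so the /status endpoint (Lambda #3) can look it up later
    dynamodb.put_item(
        TableName="tasks",
        Item={"id": {"S": task_id}, "status": {"S": "RUNNING"}},
    )
    # InvocationType="Event" starts the long-running worker asynchronously
    lambda_client.invoke(
        FunctionName="long-running-worker",
        InvocationType="Event",
        Payload=json.dumps({"id": task_id}),
    )
    return {"statusCode": 202, "body": json.dumps({"id": task_id})}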
I asked this question of AWS support, who were able to give me a good answer, so I thought I'd share the summary here for others. In short: AWS Elasticsearch distributes shards based on shard count rather than shard size, so keep your shard sizes balanced if you can. If you have your cluster configured to be spread across 3 availability zones, make your data instance count divisible by 3.
My case: each of my 14 instances gets ~100 shards instead of ~100 GB each. Remember that I have a lot of relatively empty indices.
This translates to a mixture of small and large shards which causes the imbalance when AWS Elasticsearch (inadvertently) allocates lots of large shards to an instance.This is further worsened by the fact that I have my cluster set to be distributed across 3 availability zones and my data instance count (14) is not divisible by 3.Increasing my data instance count to 15 (or decreasing to 12) solved the problem.From the AWS Elasticsearchdocson Multi-AZ:To avoid these kinds of situations, which can strain individual nodes and hurt performance, we recommend that you choose an instance count that is a multiple of three if you plan to have two or more replicas per index.Further ImprovementOn top of the availability zone issue, I suggest keeping index sizes balanced to make it easier for the AWS algorithm.In my case I can merge older indexes, e.g.data-2019-01...data-2019-12->data-2019. | BackgroundI have an AWS managed Elascsearch v6.0 cluster that has 14 data instances.It has time based indices likedata-2010-01,...,data-2020-01.ProblemFree storage space is very unbalanced across instances, which I can see in the AWS console:I have noticed this distribution changes every time the AWS services runs through a blue-green deploy.
This happens when cluster settings are changed or AWS releases an update.Sometimes the blue-green results in one of the instances completely running out of space.
When this happens the AWS service starts another blue-green and this resolves the issue without customer impact. (It does have impact on my heart rate though!)Shard SizeShards size for our indices are gigabytes in size but below the Elasticsearchrecommendationof50GB.
The shard size does vary by index, though. Lots of our older indices have only a handful of documents.QuestionThe way the AWS balancing algorithm does not balance well, and that it results in a different result each time is unexpected.My question is how does the algorithm choose which shards to allocate to which instance and can I resolve this imbalance myself? | AWS Elasticsearch cluster disk space not balanced across data instances |
I have had the same problem. I ended up pulling the necessary libraries + fonts from the Amazon Linux 2 image as follows:
1) Run and enter the docker container for Amazon Linux 2:
docker run -it --rm amazonlinux:2.0.20191217.0
yum install -y yum-utils rpmdevtools
yum install -y libXrender.x86_64 fontconfig.x86_64 freetype.x86_64 libXext.x86_64 libX11.x86_64 expat.x86_64 libxcb.x86_64 libXau.x86_64
yumdownloader libXrender.x86_64 fontconfig.x86_64 freetype.x86_64 libXext.x86_64 libX11.x86_64 expat.x86_64 libxcb.x86_64 libXau.x86_64
rpmdev-extract *rpm
cp /tmp/*/usr/lib64/* /deps
cp -R /tmp/*/etc/fonts /deps/
3) Open a new terminal window and navigate into the PDF lambda folder. Using the docker ps command, locate the container ID and paste the following command:
docker cp <CONTAINER_ID>:/deps/ . && mv deps/* . && rmdir deps
4) Replace the content of <your_lambda_path>/deps/fonts/fonts.conf with this, or provide your own config + font files:
<fontconfig>
<dir>/var/task/fonts/</dir>
<cachedir>/tmp/fonts-cache/</cachedir>
<config></config>
</fontconfig>5) Inside your handler you will need to set following to find the font:process.env['FONTCONFIG_PATH'] = process.env['LAMBDA_TASK_ROOT'] + '/fonts'After doing so, simply zip your package and deploy as you usually have.Hope that helps | I have updated the lambda function from nodejs8 to nodejs12.wkhtmltopdf was working well with the nodejs 8 but now I get this error :"wkhtmltopdf: error while loading shared libraries: libXrender.so.1: cannot open shared object file: No such file or directoryI have tried to put manually the librairie libXrender into the file project but it doesn't work.If someone have the solution on how to make wkhtmltopdf work on aws lambda in nodejs 12 that would be great. Thank you in advance. | AWS Lambda NodeJS12.x - error while loading shared libraries: libXrender.so.1 |
The user who accesses the EC2 instance has a different role from the machine itself. The role and access of the machine to Secrets Manager are defined by either a User Role (chosen upon creation of the EC2 instance in the AWS Console), or the ./aws/credentials and ./aws/configuration files. In my code, I used boto.utils.get_instance_identity() to get the region, then got the access_key and secret_key from boto3.Session().get_credentials().get_frozen_credentials(). You may also want to use botocore.credentials.RefreshableCredentials, since the token from get_frozen_credentials() expires. | I have a Python program running on a Linux EC2 instance. I am trying to get a value from Secrets Manager, but I keep getting a permissions error: An error occurred (AccessDeniedException) when calling the GetSecretValue operation: User: arn:aws:sts::user_id_here:assumed-role/AmazonSSMRoleForInstancesQuickSetup/somestring is not authorized to perform: secretsmanager:GetSecretValue on resource: arn:aws:secretsmanager:eu-west-2:xxx_my_secret. In my IAM settings the user is inside a group with Administrator access, and the user itself has the permission SecretsManagerReadWrite. What permissions do I need to change? | Permission error when accessing AWS secrets manager from an EC2 instance
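A minimal boto3 sketch of the approach described in the answer above; the region and secret name are placeholders, and credentials are expected to come from the instance-profile role:
import boto3

# Credentials are resolved from the EC2 instance-profile role; only the region is passed explicitly.
session = boto3.session.Session(region_name="eu-west-2")  # assumed region
secrets = session.client("secretsmanager")

# Hypothetical secret name - replace with your own.
response = secrets.get_secret_value(SecretId="my-app/db-credentials")
print(response["SecretString"])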
I've made a simple demo of using PDFKit on AWS Lambda with the Serverless framework and a Lambda layer. Check out https://medium.com/@crespo.wang/create-pdf-using-pdfkit-on-serverless-aws-lambda-with-layer-721ca86724b2 | I'm using AWS Lambda to generate a pdf file using a jinja2 template. I am trying to use pdfkit to convert my HTML into pdf. I realize pdfkit has an internal dependency - wkhtmltopdf - which needs to be used as a binary or installed via a package manager. I am not sure how to make this work on AWS Lambda. With my current template and python code using pdfkit, I am getting the following error -
{
"errorMessage": "No wkhtmltopdf executable found: \"b''\"\nIf this file exists please check that this process can read it. Otherwise please install wkhtmltopdf - https://github.com/JazzCore/python-pdfkit/wiki/Installing-wkhtmltopdf",
"errorType": "OSError",
.....
.....
}Any ideas on how can I makepdfkitwork on lambda?Any suggestions forwkhtmltopdfreplacements?Thanks | How to use pdfkit in AWS Lambda? |
If many records are generated in a very short amount of time, for example if many S3 files were uploaded simultaneously, then Amazon may send a list of records in a single event. | I have an infrastructure such as this:S3->SQS->Lambda. When a file is dropped to S3, it puts the event into an SQS queue, which then is consumed by a lambda function.Lambda is written in .net core. In lambda handlers parameter, I receive anSQSEventand within its body, anS3Eventthat is serialized to Json.The class structures are roughly as below. They are straight from the AWS .net SDK.class SQSEvent
{
public List<SQSMessage> Records;
...
}
class SQSMessage
{
public string Body; // json serialized S3Event is put into here
...
}
class S3Event
{
public List<S3EventNotification> Records;
...
}The part I'm curious about is that, bothSQSEventandS3Eventhave a list of records. In my experiments, I always received a single item in those lists. Is it known under which circumstances any of theseRecordslist will contain multiple items in it? I failed to find a document stating about this behaviour. | AWS lambda event parameters: When does Records hold multiple items in it? |
After reading through the AWS documents (https://docs.aws.amazon.com/redshift/latest/dg/udf-python-language-support.html), I figured out that a UDF cannot reference the contents of another UDF. Therefore, my function always throws an exception. I figured out an alternative way to accomplish this using the python library dateutil.parser. Working function below.
create or replace function f_Is_timestamp(val VARCHAR(20000))
returns bool
IMMUTABLE
as $$
from dateutil.parser import parse;
try:
parse(val,ignoretz=True);
except:
return 1==2;
else:
return 1==1;
$$ language plpythonu; | I am trying to write a redshift udf to validate timestamp. But, it always returns false. Can some explain why?create or replace function f_Is_timestamp_sql(VARCHAR(20000))
returns timestamp
STABLE
as $$
select $1::timestamp as a;
$$ language sql;
create or replace function f_Is_timestamp(val VARCHAR(20000))
returns bool
IMMUTABLE
as $$
try:
(f_Is_timestamp_sql(val));
except:
return (1==2);
else:
return 1==1;
$$ language plpythonu;
select f_Is_timestamp('2019-10-09') | Redshift UDF logical issue |
Instructions for migrating from Node.js v8 to v10 are documented atNode Version Update.Amplify doesn't control the runtime; you do, through configuration. | As we all know, AWS have done a good job at informing us that the NodeJS 8.10 EOL is approaching. However, there is limited information on how to update the runtime if we have been using AWS Amplify and the Lambda functions have been automatically created using the Amplify CLI.I have an autogenerated lambda function, "add-to-group", that is triggered on post confirmation during sign up in my React app.I have tried opening the Lambda function in the Lambda Function console and changing the runtime in the dropdown box that sits above the code editor. However, when invoking this lambda by signing up in my app, I get the following error returned to the client:"code":"UserLambdaValidationException","name":"UserLambdaValidationException","message":"PostConfirmation failed with error Cannot find module 'add-to-group'\nRequire stack:\n- /var/task/index.js\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js."How do you update the runtime of a lambda function that was generated by AWS amplify?Thanks! | Update NodeJS runtime from 8.10 to 10.x or 12.x - AWS Amplify |
The general idea of the way that you implement your solution is the proper way, as suggested by AWS.
About the user key that expires after 80 days: user keys do not expire automatically and are valid until you deactivate them, so I imagine this is a process that you set up in order to rotate your credentials for security reasons. That's a very good practice indeed. In order to avoid hard-coding these credentials in the code, which is a bad practice, you can just set up your credentials as an environment variable or store them in the AWS credentials file on your instance. By doing this you can then easily rotate them through your deployment pipeline.
You can make the ./aws/credentials configuration through user data when you start an EC2 instance: you will run "aws configure" and then pass your AWS credentials to set them up on your EC2 instance.
Ideally you will have a CI/CD pipeline which will automate the whole process instead of doing this manually.
Please see below recommended best practices by AWS:
Presigned URL for S3 buckets - recommended ways: https://aws.amazon.com/premiumsupport/knowledge-center/presigned-url-s3-bucket-expiration/
AWS access keys best practices: https://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html
Configuration and Credential File Settings: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html | I want to generate an s3 link to download a file; the link should be live for at least 6 days. I have tried with options InstanceProfileCredentialsProvider(false) (which worked only for 24 hours), ProfileCredentialsProvider (doesn't even create a link), and the Access key of an IAM user, which worked, but this user key will expire after some days, so every time I have to change it in the code, and also I think it is not a good practice to expose the key in the code. Is there any other way I can generate an s3 download link which will expire only after 6 days? Below is the code snippet:
AmazonS3 s3Client = AmazonS3ClientBuilder.standard().withCredentials(new InstanceProfileCredentialsProvider(false))
.build();
java.util.Date expiration = new java.util.Date();
long milliSeconds = expiration.getTime();
milliSeconds += 1000 * 60 * 60 * 24 * 7; // Add 7 days.
expiration.setTime(milliSeconds);
GeneratePresignedUrlRequest generatePresignedUrlRequest = new GeneratePresignedUrlRequest("s3bucket",
"fileLocationpath");
generatePresignedUrlRequest.setMethod(HttpMethod.GET);
generatePresignedUrlRequest.setExpiration(expiration);
link = s3Client.generatePresignedUrl(generatePresignedUrlRequest); | How to create an s3 download link which will expire only after 6 days |
I think this is a misunderstanding of the docs. I was under the impression that the refresh token is being re-issued on every session, thus users should never get to the expiration time while they are active.
Apparently this is not the case: users are issued a refresh token upon login only, and that token is persisted in the client-side storage. No matter whether they are active or not, the token expires after 30 days (or whatever is configured) and then they need to log in again. (Of course I'm aware that this is not an Amplify implementation.) | I'm using React Native and Expo. Also using aws-amplify to manage users with Cognito's user pool.
Every so often my users are getting kicked out of the system because of "Refresh Token has expired" error. Those users were in the system in the previous week so their refresh token should still be valid. Any ideas?
I'm using:
aws-amplify 2.2.0
aws-amplify-react-native 2.2.3
react-native 0.59
expo 35 | AWS Amplify "Refresh Token has expired" after less than configured time (30 days) |
If a single AWS Region becomes isolated or degraded, your application can redirect to a different Region and perform reads and writes against a different replica table. You can apply custom business logic to determine when to redirect requests to other Regions.If a Region becomes isolated or degraded, DynamoDB keeps track of any writes that have been performed but have not yet been propagated to all of the replica tables. When the Region comes back online, DynamoDB resumes propagating any pending writes from that Region to the replica tables in other Regions. It also resumes propagating writes from other replica tables to the Region that is now back online.Refer -AWS DynamoDB DocumentationTo answer - The replication is taken care of by AWS, however you will have to take care of the region where your app will be connecting in the event of downtime. | In DynamoDB, if one region is not available or down (When we have global table and multiple replica) how to redirect request to a different Region and perform reads and writes against a different replica table ?Does DynamoDB handles that internally or do we need to handle it?
If we need to handle it through a program then how should we do that? | DynamoDB Global Table |
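A hedged sketch of the custom redirect logic mentioned in the answer above; the table name and replica regions are hypothetical:
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

REPLICA_REGIONS = ["us-east-1", "us-west-2"]  # assumed regions of the global table

def get_item_with_failover(key):
    # Try each replica region in order and fall back when a region is unreachable or degraded.
    for region in REPLICA_REGIONS:
        table = boto3.resource("dynamodb", region_name=region).Table("my-global-table")  # hypothetical table
        try:
            return table.get_item(Key=key).get("Item")
        except (ClientError, EndpointConnectionError):
            continue  # move on to the next replica region
    raise RuntimeError("All replica regions failed")

item = get_item_with_failover({"id": "123"})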
The answer from @David Adams is out of date. See the Attribute key matching docs. Use "exists": false to return incoming messages that don't include the specified attribute. It is now possible to exclude any messages that have a particular key by using the policy:
{
"key": [
{
"exists": false
}
]
} | We have two types of SNS messages coming in:1.has MessageAttributes empty like this:"MessageAttributes": {}2.has MessageAttributes coming in like this:"MessageAttributes": {
"Generator": {
"Type": "String",
"Value": "some-service"
}
}I would like to use a filter subscription policy that ignores the second type but passes the first type to the subscriber.So I tried this for the policy:{
"Generator": [
{
"exists": false
}
]
}I thought this would mean it will only pass along messages that do NOT contain theGeneratorkey inMessageAttributesHowever I am seeing now that no messages are getting passed along.The AWS Subscription Filter docs seem to support this as a solution, but they only show the opposite way of checking that a key does exist, so I'm not sure if they support checking a key doesn't exist:https://docs.aws.amazon.com/sns/latest/dg/sns-subscription-filter-policies.html#attribute-key-matchingIs this possible? | AWS SNS Subscription Filter policy checking a key in Message Attributes does NOT exist - possible? |
In case it helps someone else: this is for the work I'm doing inside my build scripts executed by CodeBuild. These are the IAM permissions I had to add (finding them one by one as I hit the error).
{
"Action": [
"ecr:GetAuthorizationToken",
"ecr:DescribeRepositories",
"ecr:CreateRepository",
"ecr:InitiateLayerUpload",
"ecr:UploadLayerPart",
"ecr:CompleteLayerUpload",
"ecr:BatchCheckLayerAvailability",
"ecr:PutImage",
"ecs:UpdateService"
],
"Resource": "*",
"Effect": "Allow"
}
I'm sure there are more permissions that may be required if you're doing stuff I'm not doing in your builds. I'm pushing to ECR and forcing the Service (and the related tasks) to deploy the new image. | COMMAND_EXECUTION_ERROR: Error while executing command: $(aws ecr get-login --no-include-email --region us-east-1). Reason: exit status 127. Below is my buildspec.yml file:
version: 0.2
phases:
pre_build:
commands:
- echo Logging in to Amazon ECR...
- aws --version
- $(aws ecr get-login --region ***-east-*)
- REPOSITORY_URI=***********.dkr.ecr.***-east-*.amazonaws.com/repositoryname
- COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION)
- IMAGE_TAG=${COMMIT_HASH:=latest}
build:
commands:
- echo Build started on `date`
- echo Building the Docker image...
- docker build -t $REPOSITORY_URI:latest .
- docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
post_build:
commands:
- echo Build completed
- echo Pushing the Docker images...
- docker push $REPOSITORY_URI:latest
- docker push $REPOSITORY_URI:$IMAGE_TAG
- echo Writing definitions file...
- printf '[{"name":"project-container","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > taskdefinition.json
artifacts:
files: taskdefinition.json | ECR image push AWS CodeBuild issue |
Use the rclone tool (https://rclone.org/). This is what you need.
1) Install rclone.
2) Configure two different S3 providers.
3) Check the buckets:
$ sudo rclone lsd amazon-service:
Output: -1 2020-02-10 20:22:24 -1 bucket-one
$ sudo rclone lsd non-amazon-service:
Output: -1 2020-02-10 20:22:24 -1 bucket-two
4) Sync:
$ sudo rclone sync amazon-service:bucket-one non-amazon-service:bucket-two | How can I sync two S3 buckets, if one is accessible with the --endpoint-url parameter, for instance --endpoint-url=https://s4.us-east-2.stackpathstorage.com, and the other is a normal S3 bucket (s3.amazonaws.com)? | How to sync S3 buckets with different --endpoint-url
I would recommend using the DynamoDB Lock Client to maintain a read-write lock on the SSM parameter. The Amazon DynamoDB Lock Client is a general purpose distributed locking library built for DynamoDB. The DynamoDB Lock Client supports both fine-grained and coarse-grained locking, as the lock keys can be any arbitrary string, up to a certain length. DynamoDB Lock Client is an open-source project that will be supported by the community. Please create issues in the GitHub repository with questions. Reference: https://aws.amazon.com/blogs/database/building-distributed-locks-with-the-dynamodb-lock-client/ | I have a bash script that uses AWS CLI to put a value to a parameter in AWS Systems Manager Parameter Store. The bash script is run on an EC2 instance and there are several instances deployed, so I have no control over the concurrency of the bash scripts. I need the script to retry if there were concurrent updates and the update from the script was rejected. I have checked the AWS documentation and searched other questions and forums for documentation on this topic. All I can refer to is a "TooManyUpdates" 400 error documented here. What is the behaviour of AWS Systems Manager Parameter Store on concurrent updates? | AWS SSM Parameter store concurrent updates gives "TooManyUpdates" error
Please see the similar GitHub issue https://github.com/aws-amplify/aws-sdk-ios/issues/1671. The comments point out that the file is non-sensitive data, so resources that should be accessed by authenticated users should be configured with the appropriate controls. Amplify CLI helps you with this, depending on the resources you are provisioning in AWS. There is also a way to configure it in-memory via AWSInfo.configureDefaultAWSInfo(awsConfiguration). | I'm using awsconfiguration.json for AWS Cognito for my iOS application written in Swift. But I'm afraid of the security implications of awsconfiguration.json being stored in my local directory. How can I protect this json file against a third man attack? | How to protect awsconfiguration.json data details in iOS app?
As for throttling a queue, you could have added a delivery delay or made it long polling, but as yours is event driven this isn't a choice. So this leaves you with throttling your Lambda to however many concurrent executions you want. As for the messages which can't be processed, that depends on whether you are using:
- a standard queue, which won't hold any prioritization of which message is picked up next.
- a .fifo queue, which will try to process it again as it would be next in line chronologically.
But if you caught the error you should send it straight to a dead letter queue to prevent unnecessary retries. Although by throttling you're removing all the scalability of AWS, which is against its native architecture; I'd recommend going back to the database and seeing if any work can be improved there instead, to avoid throttling. | If using SQS as an event source for a Lambda function, is there a way to limit the maximum amount of "active" messages to x? So, imagine there's an SQS queue with 1000 messages, but instead of trying to process as many messages as possible (up to the default concurrency limit of 1000) we only want to process up to x messages at the same time. This obviously means that it'll take more time to process all messages, but it would give us a possibility to better control e.g. writes to a database. Also, in case a message can't be processed (due to e.g. an error that occurred in the Lambda function), is the message appended to the end of the queue (so all other messages are coming first) or is there a way to prioritise them after a certain waiting time (visibility timeout)? Many thanks | SQS and Lambda: Limit max. amount of processed messages
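One concrete way to apply the throttling mentioned in the answer above (not from the original answer) is to set reserved concurrency on the function; the function name and limit are hypothetical:
import boto3

lambda_client = boto3.client("lambda")

# Cap the function at 5 concurrent executions; the SQS event source respects this,
# so at most 5 batches are processed at the same time.
lambda_client.put_function_concurrency(
    FunctionName="my-sqs-consumer",        # hypothetical function name
    ReservedConcurrentExecutions=5,
)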
There is now a way to import existing resources into CloudFormation. This means that you can do a PITR restore and then import the newly created table into your stack. | I would like to be able to perform PITR restoration without losing the benefit of infrastructure-as-code with CloudFormation. Specifically, if I perform a PITR restoration manually and then point the application to the new database, won't that result in the new DynamoDB table falling out of the CloudFormation-managed infrastructure? AFAIK, there is no mechanism at the moment to add a resource to CloudFormation after it was already created.
Has anyone solved this problem? | DynamoDB - restoring table using PITR for DynamoDB table managed by CloudFormation |
In the API Gateway area, you have an option in the left menu called Custom Domain Names where you can set a specific domain you already have and set an alias to the specific Lambda function you want to run. The Route 53 service is not necessary; you only need to register the domain in the certificates area (ACM) to have it available in this Custom Domain Names option in API Gateway. | I've created a couple of AWS Lambda functions which are invoked via API Gateway proxy requests. Note that I am using the Serverless framework for deployment. Also, I am using AWS SAM for testing lambda functions locally. Once I've deployed my lambda function, its API endpoint looks something like this: https://38sp8vme5j.execute-api.us-east-1.amazonaws.com/{STAGE}/{PATH}. I would like to know if there is a way to change the 38sp8vme5j.execute-api part of my API endpoint. Thanks in advance | How can I change name of my Lambda Functions API endpoint
The "Encryption at Rest" is more of a systemic feature than a user-based one. The IAM Role used for Encryption at Rest applies more to the Service and is not meant to be used as an access control for users.Hope this clears up the confusion here. | I'm trying to use AWS Elasticsearch with "Encrytion at Rest". I configured this setting while creating the Elasticsearch domain in Elasticsearch services in AWS.Lets consider there are two users named A and B.
I have created the KMS access policy whereUser A is having permission to perform both es:* and kms:* actionsUser B is having permission to perform only es:,but not kms:actions.Here,
When ES Client is performing indexing and search some data as User A, Then it works.
When ES Client is performing indexing and search some data as User B, Then also it works. But I expect this to fail as User B don't have access to kms:encrypt or kms:decrypt and other kms actions.Any leads would be greatly appreciated. | Encryption at Rest: AWS Elasticsearch |
That format is correct, and I can confirm I have successfully sent messageAttributes using that format using the SDK for NodeJS.You may find that the issue is on the receiving side. The receiver does not get attributes unless you specify which attributes you want to receive in the messageAttributeNames on the ReceiveMessageRequest. The specific syntax to do this differs by language SDK, and on the Java and Swift SDKs you can supply "All" as the attributeNames to get all attributes. For my Swift SDK, I specified this withreceiveMsgRequest.messageAttributeNames = ["All"], then I started receiving the attributes successfully.Also, don't confusemessage.attributeswithmessage.messageAttributeson the receiver side. The former is for system attributes. The latter is what you want. | I was trying to send message to AWS SQS using Node Js.For that I installed the npm package aws-sdk. I need to send a json array as message attribute and its format is{"Header": {"OrganizationName": "testOrg","TYPE": "TestMsg", "UserName": "TestUser"}}but this format does not allow me to send messagevar params = {
DelaySeconds: 10,
MessageAttributes: {
"Title": {
DataType: "String",
StringValue: "The Whistler"
},
"Author": {
DataType: "String",
StringValue: "John Grisham"
},
"WeeksOn": {
DataType: "Number",
StringValue: "6"
}
},
MessageBody: "Information about current NY Times fiction bestseller for week of 12/11/2016.",
// MessageDeduplicationId: "TheWhistler", // Required for FIFO queues
// MessageId: "Group1", // Required for FIFO queues
QueueUrl: "SQS_QUEUE_URL"
};
sqs.sendMessage(params, function(err, data) {
if (err) {
console.log("Error", err);
} else {
console.log("Success", data.MessageId);
}
How to send a JSON array in a Message Attribute? | Unable to send messageAttributes to AWS SQS using node js
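The receiving-side fix described in the answer above, illustrated here with Python/boto3 rather than the asker's Node.js (the queue URL is a placeholder):
import boto3

sqs = boto3.client("sqs")

# Message attributes must be requested explicitly, otherwise they are omitted from the response.
response = sqs.receive_message(
    QueueUrl="SQS_QUEUE_URL",          # placeholder, as in the question
    MessageAttributeNames=["All"],
    MaxNumberOfMessages=1,
)
for message in response.get("Messages", []):
    print(message.get("MessageAttributes"))  # custom attributes, not the system "Attributes"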
From AWS CloudFormation Template Snippets - AWS CloudFormation, it appears that you can reference outputs of nested stacks like this:
{
"AWSTemplateFormatVersion" : "2010-09-09",
"Resources" : {
"myStack" : {
"Type" : "AWS::CloudFormation::Stack",
"Properties" : {
"TemplateURL" : "https://s3.amazonaws.com/cloudformation-templates-us-east-1/S3_Bucket.template",
"TimeoutInMinutes" : "60"
}
}
},
"Outputs": {
"StackRef": {"Value": { "Ref" : "myStack"}},
"OutputFromNestedStack" : {
"Value" : { "Fn::GetAtt" : [ "myStack", "Outputs.BucketName" ] }
}
}
}
So, just use a normal Output in the nested stack (no need to Export), then reference it as above from the top-level stack. | I have one parent stack which calls 2 nested stacks, and I need to import values from these nested stacks into the parent. Example:
NestedStack:
"Outputs": {
"TargetGroup":{
"Value": {
"Ref": "ggTG"
},
"Export": {
"Name": {
"Fn::Sub": "${AWS::StackName}-TargetGroup"
}
}
},
}When I execute all nested stacks I get these output in the child stack but I would like to get this output in the parent stack to access from another independent stack.The reason of that is because if I import in another independent stack I cant use the name of the nested because it is created at runtime.StackImporting:"TargetGroupARN" : {"Fn::ImportValue" : {"Fn::Sub" : "${StackName}-TargetGroup"}}As I said, I only know the name of parent stack, so I must to export from parent and not in the child stack. | Share outputs from CloudFormation nested stack |
You need to set list and get permissions to your bucket, in a bucket policy, not in the role.{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "personalize.amazonaws.com"
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::bucket-dev-personalize"
},
{
"Effect": "Allow",
"Principal": {
"Service": "personalize.amazonaws.com"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::bucket-dev-personalize/*"
}
]
} | I am following the tutorial onGetting Started (Console) - Amazon Personalizeof the recommendation engine on Amazon SageMaker.
When importing User-item interaction data, I got the following error: "There was an error with your dataset import. Insufficient privileges for accessing data in S3. Please look at https://docs.aws.amazon.com/personalize/latest/dg/getting-started.html#gs-upload-to-bucket and fix the bucket policy on recommendation123." I have tried different bucket policies but none of them is allowing me to import the data. The user-item interaction data flag should change from failed to active. | Insufficient privileges for accessing data in S3
It is not supported right now, likely because of this: "If you are capturing logs for Amazon CloudFront, create the firehose in US East (N. Virginia)", which means this stack would need to be creating resources in multiple regions. You can track and vote for the issue on the CloudFormation road-map page here. | I was going through the AWS WAF CloudFormation documentation and I couldn't see a way to enable logging. I can enable logging via the console, however I want to do it via CloudFormation so that it is enabled by default in new stacks. How do I enable logging in an AWS WAF WebACL using CloudFormation? Thanks | How to enable logging for WebACL in AWS WAF using Cloudformation?
There is an example of associating a function with a layer in theAWS documentation:$ aws lambda update-function-configuration --function-name my-function \
--layers arn:aws:lambda:us-east-2:123456789012:layer:my-layer:3 \
arn:aws:lambda:us-east-2:210987654321:layer:their-layer:2 | If I have the Version ARN of a layer, is there a CLI command that I can use to in my Jenkins that will attach the layer to my lambdas function? I am new to using AWS and Jenkins so I have no idea where to begin with this problem.Thanks! | Is there a CLI command to attach a AWS Layer to my Lambda? |
Yes, it's correct. When your app starts reading with the LATEST iterator type it will start reading from the next record coming, so all the data that is already in the stream will be ignored. Which means that if your app has downtime, every message during that downtime will be skipped. You can overcome this by saving the sequence number of the latest message your app read and then using the AFTER_SEQUENCE_NUMBER iterator type, providing the saved sequence number. It's like a checkpoint. If your lambda is deployed for the first time (no previous sequence number saved) you probably want to start with either:
TRIM_HORIZON - start by reading the oldest data in the stream. Might be a bit too much if you have a lot of data and a long retention period.
LATEST - start reading from the next incoming message. | We are trying to determine the best shard-iterator-type for our lambda, but I'm getting mixed information about the functionality of the shard iterator type AFTER a lambda has been deployed for the first time. I have been told that if we use a shard-iterator-type of LATEST, then when we go to deploy an updated version of the lambda we will lose messages, since the lambda will just always pull the most recent messages from kinesis and will ignore the ones it didn't process while it was being deployed. My question is: is this correct? | Does setting a kinesis shard-iterator-type to LATEST risk losing messages in lambda?
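A minimal boto3 illustration of the checkpointing idea from the answer above (not from the original answer; the stream, shard and sequence number are placeholders):
import boto3

kinesis = boto3.client("kinesis")

# Sequence number previously saved by your own checkpoint store.
saved_sequence_number = "49590338271490256608559692538361571095921575989136588898"

# Resume reading right after the last record that was processed before the downtime.
iterator = kinesis.get_shard_iterator(
    StreamName="my-stream",                      # hypothetical stream
    ShardId="shardId-000000000000",
    ShardIteratorType="AFTER_SEQUENCE_NUMBER",
    StartingSequenceNumber=saved_sequence_number,
)["ShardIterator"]

records = kinesis.get_records(ShardIterator=iterator)["Records"]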
Update: DeregisterImage() now does have Condition Keys, so this answer is out-of-date. Based on Actions, Resources, and Condition Keys for Amazon EC2 - AWS Identity and Access Management, it appears that DeregisterImage() does not have any Condition Keys. Therefore, it looks like it would not be possible to restrict this command only to certain AMIs or tags. Some options:
- Restrict this permission only to certain trusted users, or
- Put the AMI in a separate AWS account where users can access it (via sharing), but have no permission to delete it | When cleaning up the old/unused resources, at times we may get into trouble deleting the used/current AMIs.
Have to prevent the accidental delete/deregister of the AMIs.I was thinking to add a tag to the AMI which should never be deleted if that tags exist.In a similar fashion to instance termination protection,
I would like the ability to have CERTAIN AMI's have a double failsafe mechanism to avoid accidental deletion.Please suggest a way for the same. | Prevent certain AWS AMI from accidental deletion |
If you only have the direct API Gateway <-> DynamoDB integration, check DescribeTable - https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DescribeTable.html - and do two calls to get the desired data. But I recommend using a Lambda in between API Gateway and DynamoDB, which will facilitate the pagination. | I have an API Gateway GET endpoint that scans a DynamoDB table and retrieves results according to a Limit parameter:
requestTemplates:
application/json: >-
{
"TableName": "employee",
"Limit": 2
}It is properly working and the response for this request when I send alimit = 2is:{
"Count":2,
"Items":[
{
"id":{
"S":"18"
},
"department":{
"S":"sales"
},
"name":{
"S":"Roger"
}
},
{
"id":{
"S":"16"
},
"department":{
"S":"technology"
},
"name":{
"S":"Petterson"
}
}
],
"LastEvaluatedKey":{
"id":{
"S":"16"
}
},
"ScannedCount":2
}
The problem is: I have 20 records stored in this table, and the ScannedCount and Count are both equal to 2. I really need to know the count of the total amount of records I have stored in order to make a pagination frontend component work. I've looked through the documentation and I see that the expected result for this request would be ScannedCount = 2 and Count = 20. Is there a way to have it? Thanks a lot. | Pagination with AWS API Gateway + DynamoDB
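A hedged sketch of what a Lambda between API Gateway and DynamoDB could do, as the answer above suggests (not from the original answer; note that ItemCount from DescribeTable is only refreshed by DynamoDB roughly every six hours, so it is an approximate total):
import boto3

dynamodb = boto3.client("dynamodb")

# Approximate total row count for the pagination component.
total = dynamodb.describe_table(TableName="employee")["Table"]["ItemCount"]

# One page of results; pass LastEvaluatedKey back as ExclusiveStartKey to fetch the next page.
page = dynamodb.scan(TableName="employee", Limit=2)
items = page["Items"]
next_key = page.get("LastEvaluatedKey")  # absent when there are no more pages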
Create an SNS topic and subscribe to this topic to get notifications.
Create a CloudWatch Event Rule to trigger an action whenever a spot instance is terminated.
Configure your event as shown in the screenshot below. Select SNS Topic as the target and enter the ARN of the topic you created. | Hi, I came to know that I can enable an AWS CloudWatch alarm for an AWS EC2 spot instance if there is an interruption/termination notice; here are more details: "Now CloudWatch users can setup a rule that automatically sends the EC2 Spot two-minute warning to an SNS topic to get a push notification." I have no clue how to set up the SNS topic to get the interruption/termination notice. | How to enable cloud watch alarm / event rule for AWS spot interruption notification?
AWS restricts access to a number of APIs and settings.supported API on AWS:https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-supported-es-operations.html#es_version_7_1 | I am usingAWS Elasticsearchand I am trying to update its cluster setting but got an error:$ curl -XPUT https://vpc-tf-security-search-roxodu3f3fdsfsdfuhu.ap-southeast-2.es.amazonaws.com/_cluster/settings -H 'Content-Type: application/json' -d'
> {"persistent" : { "indices.recovery.max_bytes_per_sec" : "50mb"}}'
{"Message":"Your request: '/_cluster/settings' payload is not allowed."}I wonder why I can't change that. Is it a limitation on AWS Elasticsearch? | How to update AWS elasticsearch cluster setting via its rest API? |
Indeed, AWS Lambda has issues deploying packages that are not pure Python but work with extension modules.
You will have to make sure that your code is compiled for Linux.
Perhaps the following guide can help you:https://markn.ca/2018/02/python-extension-modules-in-aws-lambda/ | I am using fuzzywuzzy on Amazon Aws Lambda. I get the following error:warnings.warn('Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning')I don't have this problem on my local pc because I have:
pip install python-LevenshteinBut how to do this on AWS Lambda?I know python-Levenshtein uses C, and that seems to be the problem. Is there a way to do so? If so, can you provide step by step instructions?I have added python-Levenshtein in my deployment package.But it doesn't seem to prevent the warning. | Install python-Levenshtein on AWS (lambda) to speed up fuzzywuzzy |
The screenshot that you have posted asks you to select the user with whom you'd like to log in, and it has nothing to do with the App Client. Every App Client of a User Pool works with the same set of users in the pool, but with different authentication settings. If you want to select the App Client for your Cognito Authentication Engine, you can specify the App Client ID in your Cognito Hosted UI Domain. An example URL is as follows:
https://auth.example.com/login?response_type=code&client_id=<your_app_client_id>&redirect_uri=<your_callback_url>
If you specify the App Client ID in your Hosted UI Domain/Custom Domain, you can run your Engine only on that App Client. | I am using AWS Cognito for authentication purposes. Every time I hit the Cognito domain name it asks me to select the App client. Is there a way I can stop Cognito asking me to select the App client I want to use for authentication? Please do let me know the way. | Why AWS cognito asks me to which appclient to use everytime?
Check URL settings via Administrator Dashboardor change the URL from wp-config.phpsimply add these two lines of code theredefine('WP_HOME','http://example.com');
define('WP_SITEURL','http://example.com');3. or check deactivating all plugin, anyof plugin maycauing the issueor try using Simple SSL plugin. | I recently migrated from HostGator to AWS. Everything has worked, including the SSL. This happens on all browsers. Here is what I have setup: (Screenshots attached)An EC2 instance that's running the WordPress AMI and has an Elastic IP assignedEC2ACM for both domain.com and www.domain.com, verified by DNS on Route 53ACMTarget Group with port 80 and HTTP protocol being targeted to the EC2
instanceTarget GroupLoad Balancer listening to port 80 and 443 both forwarded to Target GroupLoad BalancerRoute 53 with the A record using the Load Balancer as an aliasRoute 53CloudFront with domain.com as origin domain name(www.domain.com as CNAME), protocol of "Match Viewer", and connected to the ACM certificateCloudFront.What would be causing the error? Where should I start looking? | AWS WordPress ERR_TOO_MANY_REDIRECTS |
From the Elasticsearchdocumentation:The results that are returned from a scroll request reflect the state of the index at the time that the initial search request was made, like a snapshot in time. Subsequent changes to documents (index, update or delete) will only affect later search requests.Therefore, you don't need to delete the scroll context. In fact, you neverneedto delete the context as it will eventually delete itself. However, it is best practice to delete the scroll context when you are finished to free up resources.One use-case for the situation you described would be to see if the program is still using the outdated documents. Depending on the code, you may not want it to be using deleted documents and instead want to retrieve a fresh scroll context. | Should we clear scrolls everytime whenever we are deleting some items from the elastic search cluster? What impact will it have if we don't do that?I saw in some example code, that for deletion, before deleting the items, it first searches the elements and then clear scrolls for that. | Clearing Scroll while deleting elements from Elastic search |
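For completeness (not part of the original answer), explicitly clearing a scroll context can be done against the plain REST API; the endpoint and scroll id below are placeholders:
import requests

# Free the scroll context once you are done paging through results.
requests.delete(
    "https://my-es-domain.example.com/_search/scroll",  # placeholder endpoint
    json={"scroll_id": ["DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAA"]},  # placeholder scroll id
    headers={"Content-Type": "application/json"},
)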
VPC peering is the best choice if you have small infra.Both are used to establish connectivity between multiple VPC's but the main difference is Transit Gateway can establish connectivity
between multiple VPC's and with multiple on-premise Datacenter's. The other disadvantage with VPC peering is that when we have number VPC's we need to do VPC peering with each and every VPC which becomes a mesh. But with Transit Gateway, we can just create one Transit Gateway and connect many VPC's and on-premise Datacenter. Here is the detailed architecture digram that explains better.For more detailed steps, I would suggest you watch thisre:invent video | I'm confronting with a problem in my enviroment.
I have two VPCs (A - B): - An ec2 instance is based on the first one (VPC A).
- A RDS is based on the second one (VPC B).And I have two questions :How can I access to my RDS by EC2 base in a different VPC ?I found two ways but, what is the differences between a Transit Gateway and VPC Peering and what is the best way ?Thank you per advanceBest | A DB Instance in a VPC Accessed by an EC2 Instance in a Different VPC |
I think this is how I would do it. There may be more elegant/efficient solutions:
tar --list -zf file.tar.gz | while read -r item
do
tar -xzvOf file.tar.gz $item | aws s3 cp - s3://the-bucket/$item   # -O (extract to stdout) must come before -f so -f takes the archive name
doneSo you're iterating over the files in the archive, extracting them one-by-one to stdout and uploading them directly to S3 without first going to disk.This assumes there is nothing funny going on with the names of the items in yourtarfile (no spaces, etc.). | I have a very large (~300GB) .tar.gz file. Upon extracting it (with tar -xzvf file.tar.gz), it yields many .json.xz files. I wish to extract and upload the raw json files to s3 without saving locally (as I don't have space to do this). I understand I could spin up an ec2 instance with enough space to extract and upload the files, but I am wondering how (or if) it may be done directly.I have tried various versions of tar -xzvf file.tar.gz | aws s3 cp - s3://the-bucket, but this is still extracting locally; also, it seems to be resulting in json.xz files, and not raw json. I've tried to adapt this response fromthis questionwhich zips and uploads a file, but haven't had any success yet.I'm working on Ubuntu16.04 and quite new to linux, so any help is much appreciated! | How to extract and stream .tar.xz directly to s3 bucket without saving locally |
It looks like you're mixing JSON and YAML syntax for the Ref. Also, just to be safe, you should put quotes around your version as shown below. Your Policy should look more like this:
SNSAddTopicPolicy:
Type: AWS::SNS::TopicPolicy
Properties:
PolicyDocument:
Id: 'accounts-sns-add-policy-dev'
Version: '2012-10-17'
Statement:
Sid: 'accounts-sns-add-statement-dev'
Effect: Allow
# this probably needs narrowed down
Principal:
AWS: '*'
Action: sns:Publish
Resource: !Ref BucketAddEventInterfaceSNSTopic
Topics:
- !Ref BucketAddEventInterfaceSNSTopic | Can't figure out what I am doing wrong, if I comment out the SNSAddTopicPolicy, everything works fine, however once uncommented I get:SNSAddTopicPolicy - Invalid parameter: Policy Error: null (Service: AmazonSNS; Status Code: 400; Error Code: InvalidParameter; Request ID: 26870c3b-4829-5080-bd88-59e9524c08e4).I have tried every single combination but can't get it to work, any help?BucketAddEventInterfaceSNSTopic:
Type: AWS::SNS::Topic
Properties:
TopicName: accounts-bucket-add-interface-dev
SNSAddTopicPolicy:
Type: AWS::SNS::TopicPolicy
Properties:
PolicyDocument:
Id: 'accounts-sns-add-policy-dev'
Version: 2012-10-17
Statement:
Sid: 'accounts-sns-add-statement-dev'
Effect: Allow
# this probably needs narrowed down
Principal:
AWS: '*'
Action: sns:Publish
Resource: { "Ref":"BucketAddEventInterfaceSNSTopic" }
Topics:
- { "Ref": "BucketAddEventInterfaceSNSTopic" } | CloudFormation: Cannot create policy for SNS topic on AWS using serveless framework |
I was able to remove an element from the cart_items field as below.
Table name - cart
Here is my data -
{
"id": "1",
"cart_items": [
"1", "2"
]
}
AWS CLI query to remove the 1st element from the cart_items array -
aws dynamodb update-item \
--table-name cart \
--key '{"id":{"S":"1"}}' \
--update-expression "REMOVE cart_items[0]" \
--return-values ALL_NEW | I have a Dynamodb table containing a Listitems. I'm looking to remove an element from this list, given a specifiedindex(using index 0 to test):docClient.update({
TableName: 'cart',
Key: { 'id': id },
ReturnValues: 'ALL_NEW',
UpdateExpression: "REMOVE #items[0]",
ExpressionAttributeNames : {
"#items": "items"
}
}, function(err, data) {
})Instead of removing element0from the items list, it appends a new element (map) (please see attached pic of the last appended element)...what am I doing wrong? Also, how should I be substituting a variable in place of 0, above? I tried:...
UpdateExpression: "REMOVE #items[:index]",
ExpressionAttributeValues :
":index": index
},
......which results in error:Invalid UpdateExpression: Syntax error; token: \":index\", near: \"[:index]Thank you so much!AWS-SDK: ^2.500.0node: v10.16.0--
EDIT:
1) AWS SDK doesn't supportExpressionAttributeValueswith REMOVE, so I have to"REMOVE List[" + listNumber + "]", instead.
2) ChangingUpdateExpression: "REMOVE #items[0]"toUpdateExpression: "REMOVE #i[0]"forExpressionAttributeValues : { ":i": items }removed element 0 properly; however, DynamoDB is still appending a new list elementIndex : iwhere i was the index that I was removing.Is this a bug? | Dynamodb UpdateExpression: "REMOVE #items[0]" appends new list element? |
Documentation of pyathena is not super extensive, but after looking into the source code we can see that connect simply creates an instance of the Connection class.
def connect(*args, **kwargs):
from pyathena.connection import Connection
return Connection(*args, **kwargs)
Now, after looking into the signature of Connection.__init__ on GitHub, we can see a parameter work_group=None which is named in the same way as one of the parameters of start_query_execution from the official AWS Python API boto3. Here is what their documentation says about it: "WorkGroup (string) -- The name of the workgroup in which the query is being started." After following through usages and imports in Connection we end up with the BaseCursor class, which under the hood makes a call to start_query_execution while unpacking a dictionary with parameters assembled by the BaseCursor._build_start_query_execution_request method. That is exactly where we can see the familiar syntax for submitting queries to AWS Athena, in particular the following part:
if self._work_group or work_group:
request.update({
'WorkGroup': work_group if work_group else self._work_group
})
So this should do the trick for your case:
import pandas as pd
from pyathena import connect
conn = connect(
s3_staging_dir='<ATHENA QUERY RESULTS LOCATION>',
region_name='<YOUR REGION, for example, us-west-2>',
work_group='<USER SPECIFIC WORKGROUP>'
)
df = pd.read_sql("SELECT * FROM <DATABASE-NAME>.<YOUR TABLE NAME> limit 8;", conn) | I have developed different Athena Workgroups for different teams so that I can separate their queries and their query results. The users would like to query the tables available to them from their notebook instances (JupyterLab). I am having difficulty finding code which successfully covers the requirement of querying a table from the user's specific workgroup. I have only found code that will query the table from the primary workgroup.The code I have currently used is added below.from pyathena import connect
import pandas as pd
conn = connect(s3_staging_dir='<ATHENA QUERY RESULTS LOCATION>',
region_name='<YOUR REGION, for example, us-west-2>')
df = pd.read_sql("SELECT * FROM <DATABASE-NAME>.<YOUR TABLE NAME> limit 8;", conn)
df
This code does not work, as the users only have access to perform queries from their specific workgroups and hence get errors when this code is run. It also does not cover the requirement of separating the users' queries into user-specific workgroups. Any suggestions on how I can alter the code so that I can run the queries within a specific workgroup from the notebook instance? | Query a table/database in Athena from a Notebook instance
You could check using AWS APIs, but a simpler alternative (and one that doesn't require making HTTP calls, helping you shave off some latency) is to set an environment variable on your remote server that tells it it's the production server and read it from the code.import boto3
from json import load
from os import getenv
if getenv('IS_REMOTE', False):
params_raw = boto3.client('ssm').get_parameters_by_path(Path='/', Recursive=True)['Parameters']
params = format_params(params)
else:
with open('parameters.txt') as json_file:
params = load(json_file)You could also apply the same logic but defining a variable that equalstruewhen your server is supposed to be the testing one, and setting it on your local testing machine. | I have a python application which should run remotely via an AWS pipeline and use secrets to get parameters such as database credentials. When running the application locally the parameters are loaded from aparameters.jsonfile. My problem is how to to test I run remotely (so replacingIN_CLOUD_TEST):import boto3
from json import load
if [IN_CLOUD_TEST]:
params_raw = boto3.client('ssm').get_parameters_by_path(Path='/', Recursive=True)['Parameters']
params = format_params(params)
else:
with open('parameters.txt') as json_file:
params = load(json_file)I could of course use a try/except, but there must be something nicer. | How to test if process is "in cloud" on AWS |
Assuming your API Gateway is using Lambda Proxy integration, just add content-type: text/html to your response. | I have the following lambda function, with an API Gateway trigger point:
def lambda_handler(event, context):
resp = {
"statusCode": 200,
"headers": {
"Access-Control-Allow-Origin": "*",
},
"body": "Hello, World!"
}
return resp
When I navigate to the API endpoint, I expected to only see the text "Hello, World!". Instead, I see the entire JSON response. How do I change this function so that it interprets the headers and status code as such, and not as content to render in the browser? | Why does this AWS Lambda function return JSON instead of HTML?
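A minimal sketch of the fix suggested in the answer above - adding a Content-Type header to the proxy-integration response (not part of the original answer):
def lambda_handler(event, context):
    # With Lambda proxy integration, API Gateway passes these headers through verbatim,
    # so the browser renders the body as HTML instead of displaying the raw JSON envelope.
    return {
        "statusCode": 200,
        "headers": {
            "Content-Type": "text/html",
            "Access-Control-Allow-Origin": "*",
        },
        "body": "<h1>Hello, World!</h1>",
    }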
You are probably looking for AWS CloudTrail:AWS CloudTrail is a service that enables governance, compliance,
operational auditing, and risk auditing of your AWS account. With
CloudTrail, you can log, continuously monitor, and retain account
activity related to actions across your AWS infrastructure. CloudTrail
provides event history of your AWS account activity, including actions
taken through the AWS Management Console, AWS SDKs, command line
tools, and other AWS services. This event history simplifies security
analysis, resource change tracking, and troubleshooting.AWS CloudTrail increases visibility into your user and resource
activity by recording AWS Management Console actions and API calls.
You can identify which users and accounts called AWS, the source IP
address from which the calls were made, and when the calls occurred.https://aws.amazon.com/cloudtrail/ | I am fairly new to AWS, totally new to IAM. I've set up some user accounts and groups.What I haven't seen yet is a log of user actions. If an EC2 instance gets created, rebooted, stopped, or deleted from the console, I'd like to know which user issued that command. | Where can I obtain activity logs of what AWS users have done |
You could use Amazon Cognito to generate temporary credentials. Users can authenticate to Cognito via username/password, or using federated logins such as Facebook, Google and OpenID. | Using IAM roles you can issue temporary credentials to IAM users to access AWS resources, which are deemed more secure, primarily because access and secret keys are rotated frequently. However, you still have to issue a standard Access and Secret Key to the user to assume the role, which will be saved in the ~/.aws/config file. From a security perspective, if the credentials are stolen, they can still be used to assume the role and access the resources. I am just wondering if temporary credentials prevent such a threat? PS: I understand the benefits of AWS resources assuming roles, cross-account access and ease of user management. | Are AWS temporary credentials safer to use?
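A rough boto3 illustration of exchanging a Cognito identity for temporary credentials (not from the original answer; the identity pool id is hypothetical, and a real setup would normally also pass a login token from an authenticated provider):
import boto3

cognito = boto3.client("cognito-identity", region_name="us-east-1")  # assumed region

# Obtain an identity id, then short-lived credentials scoped by the pool's IAM role.
identity = cognito.get_id(IdentityPoolId="us-east-1:00000000-0000-0000-0000-000000000000")
creds = cognito.get_credentials_for_identity(IdentityId=identity["IdentityId"])["Credentials"]

# Unlike long-lived user access keys, these expire automatically.
print(creds["AccessKeyId"], creds["Expiration"])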
Too late, but I faced the same problem. In my case I found a few messages in my DLQ whose messageIds were not even pulled/processed by my lambda function (why?!?). Clarification: it was a minor percentage (20 out of 10k+ messages), but sometimes it happened. Searching for why this occurred, I found this paragraph in the AWS documentation: "Because Amazon SQS is a distributed system, it is possible for a consumer to not receive a message even when Amazon SQS marks the message as delivered while returning successfully from a ReceiveMessage API method call." So there is a possibility that a few messages couldn't be processed by the SQS service, and those messages are then sent to the DLQ because they are considered "failed". That's why AWS suggests setting the number of maximum receives to at least 5 (the default value is 10) for a DLQ (in my case it was set to 1). Increasing my maxReceiveCount to 5 solved my issue. Hope this answer helps someone :) | Messages are moving to the DLQ without being processed even once. Messages available: 1500. Messages processed: 1430. Moved to DLQ without processing even once: 70. I want to know the reasons why these 70 messages are moved to the DLQ without even one processing attempt. I checked online/stackoverflow but didn't find anything related to this. I checked logs for these messages but didn't find entries like I found for the others. I am using @sqsListener to process 20 messages at a time irrespective of how many messages we get, so there is no possibility of the system hanging. Any help is appreciated. | SQS Messages are moving to DLQ without processing even once
If you used the AWS AppSync Console wizard to create this, you will need to do the following:
type ToDo {
id: ID!
title: String
completed: Boolean # add here
}
input UpdateToDoInput {
id: ID!
title: String
completed: Boolean # add here
}
input CreateToDoInput {
title: String
completed: Boolean # add here
}
input TableToDoFilterInput {
id: TableIDFilterInput
title: TableStringFilterInput
completed: Boolean # add here
}
Now there should be an orange "Save Schema" button in the upper right hand corner of the console. If you press that, it will save your new schema and you can run some new queries against your AWS AppSync API.
Go to the query window and add completed into your mutation and listToDos selection sets.
# Click the orange "Play" button and select the createToDo
# mutation to create an object in DynamoDB.
# If you see an error that starts with "Unable to assume role",
# wait a moment and try again.
mutation createToDo($createtodoinput: CreateToDoInput!) {
createToDo(input: $createtodoinput) {
id
title
completed
}
}
# After running createToDo, try running the listToDos query.
query listToDos {
listToDos {
items {
id
title
completed
}
}
}
Update your query variables to include a value for completed:
{
"createtodoinput": {
"title": "Hello, world!",
"completed":true
}
}
That should be all you need to do for a simple attribute. | I'm using the AWS AppSync web console, I created a new API from scratch. I created a new resource like this:
type ToDo {
id: ID!
title: String!
}
After AWS AppSync created the DynamoDB table and Schema, what can I do if I want to update the schema and add a new field?
type ToDo {
id: ID!
title: String!
completed: Boolean
}
I know AWS Amplify has a command amplify api gql-compile and then amplify push, and it will update the schema and the DynamoDB tables. Is there a way to do this from the AWS AppSync web console? | AWS AppSync Update Schema
Nope. From the docs (emphasis mine): "Keys expiring information is stored as absolute Unix timestamps (in milliseconds in case of Redis version 2.6 or greater). This means that the time is flowing even when the Redis instance is not active." (https://redis.io/commands/expire) If you want backups to exist indefinitely, all keys must be persisted. | I got a snapshot rdb file from the server. At the point of snapshotting there were keys with a TTL defined using the EXPIRE command. After starting the server locally with the flag --dbfilename dump.rdb, all keys with a defined TTL had expired.
For me it seems that there should be keys in binary file anyway.If it can help: the snapshot was created in AWS elasticache environment.Is it possible to start server from backup and restore keys? | Restart redis server from rdb restoring expired keys |
You shouldn't have to worry about separating the jobs onto different instances because the containers the jobs run in are limited in how many vCPUs they can use. For example, if you launch two jobs that each require 4 vCPUs, Batch might spin up an instance that has 8 vCPUs and run both jobs on the same instance. Each job will have access to only 4 of the vCPUs, so performance should be identical to a job running on its own with no other jobs on the instance.However, if you still want to separate the jobs onto separate instances, you can do so by matching the vCPUs of the job with the instance type in the compute environment. For example, if you have a job that requires 4 vCPUs, you can configure your compute environment to only allow c5.xlarge instances, so each instance can run only one job. However, if you want to run other jobs with higher vCPU requirements, you would have to run them in a different compute environment. | I have setup a batch environment withManaged Compute environmentJob QueueJob DefinitionsThe actual job(docker container) does a lot of video encoding and hence uses up most of the CPU. The process itself takes a few minutes (close to 5 minutes to get all the encoders initialized). Ideally I would want one job per instance so that the encoders are not CPU starved.My issue is when I launch multiple jobs at the same time or close enough, AWS batch decides launch both of them in the same instance as the first container is still initializing and has not started using CPUs yet.
It seems like a race condition to me where both jobs see the instance created as available.Is there a way I can launch one instance for each job without looking for instances that are already running? Or any other solution to lock an instance once it is designated for a particular job?Thanks a lot for your help. | AWS batch to always launch new ec2 instance for each job |
Both RVM and rbenv will allow you to install the correct version of Ruby you need for your application. They are both distro agnostic, so you can run installation commands as simple as this:
gpg2 --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB
\curl -sSL https://get.rvm.io | bash -s stable
source /etc/profile
rvm install "ruby-2.6.3"
rvm use 2.6.3 --default
The full list of Ruby managers is here, along with several other tools and installers that explain how to manually install a later version of Ruby. When all else fails, build from source. | I booted up an Amazon Linux machine on which the default ruby version was ruby 2.0.0p648 (2015-12-16 revision 53162) [x86_64-linux]. I want to update it to 2.6.3. I found this article: How to upgrade ruby version in Amazon Linux system? But when I ran sudo yum install -y ruby26 it says ruby26 not found. There was no other article. | How to upgrade ruby version to 2.6.3 in amazon linux
I found an answer in the AWS docs: "Support for Just-In-Time (JIT) capability – RDS PostgreSQL 11 instances are created with JIT capability, speeding evaluation of expressions. To enable this feature, set jit to ON." Doc: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html#PostgreSQL.Concepts.General.version111 | How can I turn this feature on/off on my AWS RDS PG 11 db? About the PG feature: PostgreSQL has built-in support to perform JIT compilation using LLVM when PostgreSQL is built with --with-llvm. | Can be "JIT compilation using LLVM" Postgres 11 func enabled on AWS RDS?
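Since RDS parameters are managed through a DB parameter group, one possible way to flip the setting with boto3 (not from the original answer; the parameter group name is hypothetical, and jit is a dynamic parameter so it can be applied immediately):
import boto3

rds = boto3.client("rds")

# Turn the PostgreSQL 11 JIT feature on for every instance using this parameter group.
rds.modify_db_parameter_group(
    DBParameterGroupName="my-pg11-params",  # hypothetical custom parameter group
    Parameters=[{
        "ParameterName": "jit",
        "ParameterValue": "on",
        "ApplyMethod": "immediate",
    }],
)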
This will be due to the scope down policy that Cognito applies to unauthenticated users. It is further explained here:https://docs.aws.amazon.com/cognito/latest/developerguide/iam-roles.htmlAs stated in the above documentation:If you need access to something other than these services for your
unauthenticated users, you must use the basic authentication flow. | I am trying to allow access to a Kinesis video stream using Cognito Identity Pools, but get anAccessDeniedExceptionwhen callingGetDataEndpoint.IAM Role Policy Doc:{
"Sid": "Stream",
"Effect": "Allow",
"Action": [
"kinesisvideoarchivedmedia:GetHLSStreamingSessionURL",
"kinesisvideo:GetDataEndpoint"
],
"Resource": "arn:aws:kinesisvideo:us-west-2:XXXXXXXXXXXX:stream/<stream-name>/<stream-id>"
}I have tested the policy using the policy simulator, and it shows that theGetDataEndpointaction is allowed on the stream, but when testing it in the browser the access denied exception occurs:AccessDeniedException:
User: arn:aws:sts::XXXXXXXXXXXX:assumed-role//CognitoIdentityCredentials
is not authorized to perform: kinesisvideo:GetDataEndpoint on resource:<resource-name>This is how I'm getting the temporary credentials on the site:AWS.config.region = 'us-west-2';AWS.config.credentials = new AWS.CognitoIdentityCredentials({
IdentityPoolId: <identity-pool>,
});
AWS.config.credentials.get(function (err, data) {
if (!err) {
id = AWS.config.credentials.identityId;
accessKey = AWS.config.credentials.accessKeyId;
secretKey = AWS.config.credentials.secretAccessKey;
token = AWS.config.credentials.sessionToken;
}
});
I've tried using wildcards for the Kinesis video actions and the resource, but still get the same errors. Any advice would be appreciated. | Cognito Identity Credentials are not authorized to perform action on Kinesis video resource
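The question's snippet is JavaScript, but purely as an illustration of the basic (classic) flow the answer refers to, the same idea in Python/boto3 looks roughly like this; the identity pool ID and role ARN are placeholders, and the identity pool must have the classic flow enabled:
import boto3

identity = boto3.client('cognito-identity', region_name='us-west-2')

identity_id = identity.get_id(
    IdentityPoolId='us-west-2:00000000-0000-0000-0000-000000000000'  # placeholder
)['IdentityId']
token = identity.get_open_id_token(IdentityId=identity_id)['Token']

# Assume the identity pool's role explicitly, instead of letting Cognito hand
# out credentials restricted by its scope-down policy for unauthenticated users.
sts = boto3.client('sts')
creds = sts.assume_role_with_web_identity(
    RoleArn='arn:aws:iam::123456789012:role/CognitoKinesisVideoRole',  # placeholder
    RoleSessionName='kvs-session',
    WebIdentityToken=token,
)['Credentials']

kvs = boto3.client(
    'kinesisvideo',
    region_name='us-west-2',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
endpoint = kvs.get_data_endpoint(
    StreamName='<stream-name>',
    APIName='GET_HLS_STREAMING_SESSION_URL',
)['DataEndpoint']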
I think you are missing the region_name parameter. You can set the region_name in your code as: ec2 = boto3.resource('ec2', region_name='us-east-2') Hope it helps. | I am not able to create an EC2 instance using boto3. I am trying to create an instance using boto3:
ec2 = boto3.resource('ec2')
ec2.create_instances(ImageId='ami-0d8f6eb4f641ef691', MinCount=1, MaxCount=1, InstanceType='t2.micro')
My region is US East (Ohio). I am not sure how to find the AMI for a specific region; I just selected what was available. The error message is botocore.exceptions.ClientError: An error occurred (InvalidAMIID.NotFound) when calling the RunInstances operation: The image id '[ami-0d8f6eb4f641ef691]' does not exist, and I copied the AMI ID from the: | Unable to create an ec2 instance using boto3
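As a side note on the question's AMI problem: AMI IDs are region-specific, so an ID copied from another region will not resolve. One common way to pick an AMI that exists in the target region is to read the public SSM parameter for the latest Amazon Linux 2 image; a sketch:
import boto3

region = 'us-east-2'

ssm = boto3.client('ssm', region_name=region)
ami_id = ssm.get_parameter(
    Name='/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2'
)['Parameter']['Value']

ec2 = boto3.resource('ec2', region_name=region)
instances = ec2.create_instances(
    ImageId=ami_id, MinCount=1, MaxCount=1, InstanceType='t2.micro'
)
print(instances[0].id)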
You should definitely try this in your dev environment before doing it in production.
Updating EngineVersion through CloudFormation does currently NOT work like the console. If I recall correctly, CloudFormation will replace your existing instance. | Currently using AWS RDS PostgreSQL 10.6 and would like to upgrade to 10.9.
We're using CloudFormation templates, and I would like to schedule the upgrade for the next maintenance window, instead of immediately. From the RDS FAQs: "By default, the upgrade will be applied or during your next maintenance window. You can also choose to upgrade immediately by selecting the Apply Immediately option in the console API." From this I understand that, by default, changes will be applied during your next maintenance window unless specified to apply immediately. That said, if I update my RDS CloudFormation template, will it apply immediately or during the maintenance window? Is there a way to specify that it update during the maintenance window? Thanks! | Updating RDS Engine Version (Minor) via cloudformation - Immediate or during next window?
Please see https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Bucket.download_file. Per the doc, the first argument is the file key and the second argument is the path for the local file:
s3 = boto3.resource('s3')
bucketname = 'vemyone'
s3.Bucket(bucketname).download_file(train_fns[0], '/path/to/local/file') | I am trying to download a file to SageMaker from my S3 bucket. The path of the file is s3://vemyone/input/dicom-images-train/1.2.276.0.7230010.3.1.2.8323329.1000.1517875165.878026/1.2.276.0.7230010.3.1.3.8323329.1000.1517875165.878025/1.2.276.0.7230010.3.1.4.8323329.1000.1517875165.878027.dcm. The path of that file is stored as a list element at train_fns[0]; the value of train_fns[0] is input/dicom-images-train/1.2.276.0.7230010.3.1.2.8323329.1000.1517875165.878026/1.2.276.0.7230010.3.1.3.8323329.1000.1517875165.878025/1.2.276.0.7230010.3.1.4.8323329.1000.1517875165.878027.dcm. I used the following code:
s3 = boto3.resource('s3')
bucketname = 'vemyone'
s3.Bucket(bucketname).download_file(train_fns[0][:], train_fns[0])
but I get the following error: FileNotFoundError: [Errno 2] No such file or directory: 'input/dicom-images-train/1.2.276.0.7230010.3.1.2.8323329.1000.1517875165.878026/1.2.276.0.7230010.3.1.3.8323329.1000.1517875165.878025/1.2.276.0.7230010.3.1.4.8323329.1000.1517875165.878027.dcm.5b003ba1'. I notice that some characters have appended themselves at the end of the path. How do I solve this problem? | AWS: FileNotFoundError: [Errno 2] No such file or directory
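Applying the answer above to the question's code: pass the S3 key as the first argument and a local path that already exists as the second. A sketch that reuses train_fns[0] from the question; the extra '.5b003ba1' suffix in the error appears to be the temporary file boto3 creates at the destination, which fails because the local 'input/...' directory tree does not exist:
import os
import boto3

s3 = boto3.resource('s3')
bucketname = 'vemyone'

key = train_fns[0]                   # S3 object key from the question's list
local_path = os.path.basename(key)   # write into the current working directory

s3.Bucket(bucketname).download_file(key, local_path)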
I've noticed these as well, sometimes before I even received a Delivery report. It seems like Google is loading the tracking pixel on a server somewhere before even delivering the email to the recipient's inbox. I'm filtering out these events by user agent. This blog post does a great job of explaining it: https://www.gmass.co/blog/false-opens-in-gmail/ It will help when you have to tell clients that their open rates have been inflated. | I'm having a problem with Amazon's SES Open Tracking and Gmail accounts. When I send an email to a Gmail account through SES, I'll sometimes receive an Open tracking event immediately, when I know the email hasn't been opened. That is a very bad thing, because we have to have precise metrics. I've read some things about Google image proxying, but I don't know if it has something to do with that; there was nothing conclusive. The open tracking object comes with this data:
ipAddress: 66.249.89.16
userAgent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.246 Mozilla/5.0
timestamp: 2019-07-09T19:14:31.494Z
Any ideas why this is happening? | False open trackings using SES and Gmail
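A small sketch of the user-agent filtering the answer mentions, for SES event publishing payloads; the marker strings are assumptions, so check them against the userAgent values you actually receive before relying on them:
# Treat opens that look like Google's prefetch/proxy as suspect and exclude
# them from open-rate metrics.
SUSPECT_UA_MARKERS = ('GoogleImageProxy', 'via ggpht.com')

def is_probably_real_open(ses_event: dict) -> bool:
    user_agent = ses_event.get('open', {}).get('userAgent', '')
    return not any(marker in user_agent for marker in SUSPECT_UA_MARKERS)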
Use boto3 send_command to execute a command on EC2. Example for your case:
boto3.client('ssm').send_command(
InstanceIds=[val.id],
DocumentName='AWS-RunShellScript',
Parameters={'commands': ['crontab -r']},
Comment='Crontab remove'
) | I am working on an AWS Lambda script written in Python where I am currently getting all the instances with specific tags and removing the oldest one from them. After that, from the remaining instances, I would like to call a Linux command on the instances. The only thing I require is to call crontab -r, as the oldest instance will have the cron set, and adding those crons on the ASG-generated instances will cause duplicate emails being sent. I am done up to the part of getting all the instances except the oldest one, but how can I call crontab -r on each of those instances? Any ideas? Thank you. Code:
import boto.ec2
import boto3
conn=boto.ec2.connect_to_region("eu-central-1")
reservations = conn.get_all_instances()
instances_list = []
process_instance_list = []
for res in reservations:
for inst in res.instances:
if 'Name' in inst.tags:
if inst.tags['Name'] == 'PROJECT_NAME' :
instances_list.append(inst);
instances_list.sort(key=lambda x: x.launch_time, reverse=False)
non_processed_id=instances_list[0]
for val in instances_list:
if val.id != non_processed_id.id:
        # Call crontab -r here.
Thank you. :-) | AWS Lambda, Python : Call Shell script from Lambda or Linux command
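Tying the answer into the question's loop, a sketch could look like the following; it reuses instances_list and non_processed_id from the code above and assumes every instance runs the SSM agent with an instance profile that allows Systems Manager:
import boto3

ssm = boto3.client('ssm', region_name='eu-central-1')

target_ids = [val.id for val in instances_list if val.id != non_processed_id.id]
if target_ids:
    response = ssm.send_command(
        InstanceIds=target_ids,
        DocumentName='AWS-RunShellScript',
        # AWS-RunShellScript runs as root; use 'sudo -u ec2-user crontab -r'
        # if the crontab belongs to another user.
        Parameters={'commands': ['crontab -r']},
        Comment='Remove crontab on ASG-generated instances',
    )
    print(response['Command']['CommandId'])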
According to your latest comment: "testing by posting a HTTP request from Postman directly to API Gateway for the lambda". This is the cause of the issue you are facing. To explain: when you have an API Gateway proxy to Lambda, the invocation is synchronous, so API Gateway handles the error cases that Lambda sends back (instead of the Lambda service itself, which owns the DLQ configuration and only routes failed asynchronous invocations to a DLQ), and the errors will not end up in the DLQ. In order to implement a DLQ, you need a different design, potentially something like calls going to SNS -> Lambda (an asynchronous invocation), and then on failure Lambda will send those messages to the DLQ. You might also be able to fix this if you don't have a proxy integration, but I haven't tested that personally and I don't know for sure whether it will work. | I am trying to set up a DLQ to capture failed events from a Lambda function. Here is what I have done: (1) Created a DeadLetterQueue (QueueX) in SQS; (2) Set my Lambda function's DLQ resource to 'Amazon SQS'; (3) Set the SQS Queue to QueueX; (4) Created a policy to give all permissions (sqs:*) to all resources (*); VisibilityTimeout=5 mins, MessageRetentionPeriod=3 days; (5) Attached the policy to the role which executes the Lambda function. Now, via 'Queue Actions', I can send a message and see it show up in "Messages Available". But if I send an HTTP request to the Lambda function - I purposely created a malformed JSON whose exception is not caught - I see the error message in CloudWatch but nothing is sent to QueueX. What am I missing? | Why is my failed event not sent to the AWS Dead Letter Queue (DLQ)?
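To see the DLQ path working in isolation, trigger the function asynchronously instead of through API Gateway; a sketch with a placeholder function name and payload:
import json
import boto3

lambda_client = boto3.client('lambda')

# InvocationType='Event' makes this an asynchronous invocation, which is the
# only kind the Lambda service retries and then routes to the configured DLQ.
lambda_client.invoke(
    FunctionName='my-function',                           # placeholder
    InvocationType='Event',
    Payload=json.dumps({'force_error': True}).encode(),   # any payload your code fails on
)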
This is not possible using job bookmarks. From the AWS documentation: "Job bookmarks are implemented for a limited use case for a relational database (JDBC connection) input source. For this input source, job bookmarks are supported only if the table's primary keys are in sequential order. Also, job bookmarks search for new rows, but not updated rows. This is because bookmarks look for the primary keys, which already exist." https://docs.aws.amazon.com/glue/latest/dg/monitor-continuations.html Glue will need to load the entirety of the RDS data into a DynamicFrame or DataFrame. However, that data could be used to perform an upsert into the Redshift database, if what you're trying to avoid is truncating the Redshift table and reloading all the data: https://docs.aws.amazon.com/redshift/latest/dg/c_best-practices-upsert.html | I am trying to load data from AWS RDS (MySQL) into Redshift using AWS Glue, and I want to load the data incrementally. By using job bookmarks, Glue can track only the newly added data but can't track the updated rows. Is there any way to load only the updated data, maybe by using the updated_at field in the source table in MySQL? | aws glue rds incremental load
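A sketch of the staging-table upsert pattern the answer links to, as it might appear inside a Glue (PySpark) job; the connection, database, table, key column, and temp directory names are all placeholders:
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glueContext = GlueContext(SparkContext.getOrCreate())

# Full extract from the MySQL source (job bookmarks cannot give you updates only).
dyf = glueContext.create_dynamic_frame.from_catalog(
    database='mysql_db', table_name='source_table'
)

# Load into a staging table, then merge into the target with pre/post actions.
glueContext.write_dynamic_frame.from_jdbc_conf(
    frame=dyf,
    catalog_connection='redshift-connection',
    connection_options={
        'database': 'analytics',
        'dbtable': 'public.staging_table',
        'preactions': 'TRUNCATE TABLE public.staging_table;',
        'postactions': (
            'BEGIN; '
            'DELETE FROM public.target_table USING public.staging_table '
            'WHERE public.target_table.id = public.staging_table.id; '
            'INSERT INTO public.target_table SELECT * FROM public.staging_table; '
            'END;'
        ),
    },
    redshift_tmp_dir='s3://my-temp-bucket/redshift/',
)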
Option 1: Glue uses a Spark context, so you can set Hadoop configuration in AWS Glue as well, since internally a DynamicFrame is a kind of DataFrame.
sc._jsc.hadoopConfiguration().set("mykey","myvalue")
I think you need to add the corresponding class as well, like this:
sc._jsc.hadoopConfiguration().set("mapred.output.committer.class", "org.apache.hadoop.mapred.FileOutputCommitter")
Example snippet:
sc = SparkContext()
sc._jsc.hadoopConfiguration().set("mapreduce.fileoutputcommitter.algorithm.version","2")
glueContext = GlueContext(sc)
spark = glueContext.spark_session
To prove that the configuration exists:
Debug in Python: sc._conf.getAll()  # print this
Debug in Scala: sc.getConf.getAll.foreach(println)
Option 2: Alternatively, you can try using Glue job parameters: https://docs.aws.amazon.com/glue/latest/dg/add-job.html These are key/value properties, as mentioned in the docs, e.g. '--myKey' : 'value-for-myKey'. You can edit the job in the console and specify the parameters with --conf.
Option 3: If you are using the AWS CLI, you can try the approach here: https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-glue-arguments.html The funny part is that the docs mark this as a "do not set" parameter, but I don't know why it was exposed then.
To sum up: I personally prefer Option 1, since you have
programmatic control. | I haven't been able to figure this out, but I'm trying to use a direct output committer with AWS Glue: spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version=2 Is it possible to use this configuration with AWS Glue? | Use Spark fileoutputcommitter.algorithm.version=2 with AWS Glue
Are you invoking your Lambda via API Gateway? If so, check the Lambda integration for the endpoint you are hitting in API Gateway and see if the version/alias of the Lambda function is hardcoded. You can find this by looking at the value of "Lambda Function" in the Integration Request section of the API Gateway method, e.g.: Lambda Function: my_function:dev In the example above, this means your API Gateway is invoking the "dev" version of the "my_function" Lambda. Then check in the Lambda console whether the version/alias you are invoking in the console, the one with the recent DynamoDB changes, matches the version/alias that is being invoked by API Gateway. I have spent a day or two smashing my head against the keyboard trying to figure out why my updates weren't being executed, only to realize that API Gateway was pointing at a different/older version of my function. | I have a Lambda function invoked from my browser. I know that it is working because the response is correct. In my Lambda, I want to write into a DynamoDB table, so I updated my function to include this logic. When I test my function in the Lambda console it works as expected. When the Lambda is called from the browser (via API Gateway), it does not execute any of the new code that I added. Here is my code:
#set-up table connection
dynamodb = boto3.resource('dynamodb', region_name='us-east-1')
table = dynamodb.Table('XXXX')
tString = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
#print("Received event: " +
# json.dumps(event, indent=2))
#receive parameters
if 'userid' in event:
userid = event['userid']
else:
userid = 'nothing'
if 'token' in event:
token = event['token']
else:
token = 'nothing'
if 'appid' in event:
appid = event['appid']
else:
appid = 'connection'
response = table.put_item(
Item = {
'ID': userid,
'token': 'test2',
'appid': 'test2',
'authApp': 'test2',
'authUser': 'test2'
})
return userid | API Gateway invoking an older version of my lambda function |
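One quick way to check from code which function version or alias an endpoint's integration points at, complementing the console steps in the answer; the REST API ID, resource ID, and method are placeholders:
import boto3

apigw = boto3.client('apigateway')

# Find restApiId via get_rest_apis() and resourceId via get_resources().
# The integration 'uri' embeds the Lambda ARN, including any :alias/:version
# suffix (for example ...:function:my_function:dev).
integration = apigw.get_integration(
    restApiId='a1b2c3d4e5',
    resourceId='abc123',
    httpMethod='POST',
)
print(integration['uri'])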
Since you do not pass the limit argument explicitly in your query, the Request Mapping Template of the journals resolver defaults it to 10 items. If you would like to change this default value, go to your schema page on the AppSync console and navigate to the journals field, found under the Resolvers section of the schema page. This will show the resolver definition for this field, and you can then update the default value of 10 to anything you like. Alternatively, you can pass limit as a query argument. FYI: this default value is defined in the amplify-cli repo on GitHub and can be found here. | I'm new to AppSync and trying to see how this works and what's the proper way to set this up. I created a schema.graphql that looks like the one below.
type User @model {
id: String!
following: [String]
follower: [String]
journals: [Journal] @connection(name: "UserJournals", sortField: "createdAt")
notifications: [Notification] @connection(name: "UserNotifications", sortField: "createdAt")
}
type Journal @model {
id: ID!
author: User! @connection(name: "UserJournals")
privacy: String!
content: AWSJSON!
loved: [String]
createdAt: String
updatedAt: String
}
and AppSync automatically generated this queries.js from it:
export const getUser = `query GetUser($id: ID!) {
getUser(id: $id) {
id
following
follower
journals {
items {
id
privacy
content
loved
createdAt
updatedAt
}
nextToken
}
notifications {
items {
id
content
category
link
createdAt
}
nextToken
}
}
}
`;I noticed that queryinggetUseronly returns 10journalsitems and not sure how to set that to more than 10 or proper way to query and add more journals into that 10 items that were queried bygetUser. | AWS AppSync only returns 10 items on query on connection |