Dataset columns: Response, Instruction, Prompt
The ROLLBACK_COMPLETE status exists only after a failed stack creation. The only option is to delete the stack; this gives you a chance to analyze the reason behind the failure. You can delete the stack from the command line with:

aws cloudformation delete-stack --stack-name <value>

From the documentation of ROLLBACK_COMPLETE: "Successful removal of one or more stacks after a failed stack creation or after an explicitly canceled stack creation. Any resources that were created during the create stack action are deleted. This status exists only after a failed stack creation. It signifies that all operations from the partially created stack have been appropriately cleaned up. When in this state, only a delete operation can be performed."

Normally ROLLBACK_COMPLETE should not happen in production. I would suggest validating your stack in a development environment, or having one successful stack creation in your production environment, before continuously deploying your stack.

Still, you could have a custom script in your CI that checks the stack status (DescribeStacks) and, if it is ROLLBACK_COMPLETE, deletes it (DeleteStack). This script would run before sam deploy.
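A minimal sketch of such a pre-deploy check in Python with boto3 (the stack name is a placeholder; adapt it to your pipeline):

```python
import boto3
import botocore.exceptions

STACK_NAME = "my-sam-stack"  # hypothetical stack name used by sam deploy

cf = boto3.client("cloudformation")

try:
    stacks = cf.describe_stacks(StackName=STACK_NAME)["Stacks"]
except botocore.exceptions.ClientError:
    stacks = []  # stack does not exist yet; nothing to clean up

if stacks and stacks[0]["StackStatus"] == "ROLLBACK_COMPLETE":
    # Only a delete is allowed in this state, so remove the dead stack
    # before the CI job runs `sam deploy` again.
    cf.delete_stack(StackName=STACK_NAME)
    cf.get_waiter("stack_delete_complete").wait(StackName=STACK_NAME)
```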
I am using the sam deploy command to deploy my Lambda to AWS. Sometimes I get this error:

An error occurred (ValidationError) when calling the CreateChangeSet operation: Stack:arn:aws:cloudformation:ap-southeast-2:xxxx:stack/xxxx/xxxx is in ROLLBACK_COMPLETE state and can not be updated.

I know a failure happened on the previous deployment. I can manually delete the stack in the AWS CloudFormation console and retry the command, but I wonder: is there a way to force the command to delete any stack in a rollback state?

I know I can delete the failed stack via the AWS CLI or console, but my deploy script runs on CI and I'd like CI to use the deploy command to override the failed stack. So the scenario is:

1. CI fails on deploying the Lambda function.
2. My team analyzes and fixes the issue in the CloudFormation template file.
3. The fix is pushed to GitHub to trigger the CI.
4. CI is triggered and uses the latest change to override the failed stack.

I don't want the team to manually delete the stack.
How to force deploy to lambda via SAM cli
Are your Dimensions correct? You state the name as "environment"; you may want to use "Stage" or maybe "ApiName". When you look at the metric in the CloudWatch console, what is the actual name of the dimension you are calling "environment"?
I am new to the cloud and I have a requirement to configure CloudWatch to invoke a Lambda in case of a 504 error. For that, I have written the Serverless code below, but on a 504 error the code is not invoking the alarm. In the code I have defined a 29000 millisecond (29 second) threshold, and any request taking at least that long should invoke the alarm. Please help me figure out what I am missing here.

TaskTimeoutAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    Namespace: "AWS/ApiGateway"
    MetricName: "Latency"
    AlarmDescription: "API Gateway timeout"
    Threshold: 29000
    Period: 300
    EvaluationPeriods: 1
    ComparisonOperator: "GreaterThanOrEqualToThreshold"
    AlarmActions:
      - arn:aws:sns:${self:provider.region}:${self:provider.awsAccountId}:${self:custom.alertSnsTopic}
    OKActions:
      - arn:aws:sns:${self:provider.region}:${self:provider.awsAccountId}:${self:custom.alertSnsTopic}
    TreatMissingData: "notBreaching"
    Statistic: "Maximum"
    Dimensions:
      - Name: environment
        Value: ${self:provider.stage}

Edit -----------

The problem was in the key-value pairs passed in Dimensions. This is how it should be:

Dimensions:
  - Name: ApiName
    Value: dev-employee-api
  - Name: Stage
    Value: dev

ApiName is the name of the API, which you can also find in AWS API Gateway. Stage is the name of the stage, like Dev, Staging or Production.
Trigger an AWS Alarm when an API Gateway invocation hits its 29 second timeout and returns a 504 error
It depends on the channel you are going to use, but I know that Lex itself cannot initiate a conversation. Also, channels like Facebook Messenger highly discourage bots that initiate a chat, because they could get flagged as spam bots.

However, you could definitely build a workaround, but it would have to be channel-specific and live outside of Lex. Perhaps something as simple as detecting that a user opened a chat and sending a "hello" to Lex on that user's behalf, so that Lex replies with the welcome message. But something like that depends completely on the channel you use.

Word of warning: initiating a conversation may violate the user agreement or developer guidelines of Amazon Lex or of the chat channel your bot uses, so I don't suggest doing so.
I want to create a Lex bot that would send a welcome message every time the chat gets opened. Does anyone know if this is possible?
Can Lex start the conversation?
It's the application's responsibility to provide the correct MIME type when uploading the object. S3 does not do any interpretation of the payload.

Browsers should not interpret the file based on its extension; they should use only the MIME type from the Content-Type response header -- which S3 sets to whatever you specify when creating the object.
In my application, users can upload a profile pic. The image is sent directly to S3 using presigned URLs.

Now, let's talk a bit about security. Isn't it unsafe? Let's say someone renamed the file from file.pdf to file.png. Now the browser thinks it is a png file, because of the extension.

So, the question is: does S3 in any way detect the MIME type, and can it reject a file if its MIME type is different from what we have specified it to be?
How does S3 detect mime type when using presigned URLs?
Unfortunately, as of May 2019, there is not a way to join datasets from different databases or from different schemas hosted in the same database.

A few options to consider that I have used to work around this:

1) If your data sources are all hosted in the same database but are in different schemas, you could create a view in the database that joins the data there and then pull the data from the view.

2) Use the DMS service to move your data all into the same spot. We ended up creating a data lake (i.e. an S3 bucket) where we used DMS to dump a nightly snapshot of our RDS database tables from different schemas into S3. We then also have other processes that put emailed reports and other streams of data into S3. Once everything is in S3, you can use Glue Crawlers to put the S3 data into a catalog, which can then be imported into QuickSight via Athena tables.
I'm just wondering if there is any way to join data within Amazon QuickSight. I have several data sets, each created from a separate database. I was wondering if there is any way to join these data sets together. Thanks
Joining Multiple Datasets in AWS Quicksight [closed]
Naming the array in the JSON file like this:

{"values": [{"key":"value"}, ...]}

and updating the classifier to:

$.values[*]

fixes the issue... Interested to know if there is a way to query anonymous arrays though; it seems pretty common to store data like that.

Update: In the end this solution didn't work, as Spectrum would never actually return any results. There was no error, just no results, and as of now still no solution other than using individual records per line:

{"key":"value"}
{"key":"value"}
etc.

It does seem to be a Spectrum-specific issue, as Athena would still work. Interested to know if anyone else was able to get it to work...
I have a JSON array of structures in S3 that is successfully crawled & cataloged by Glue:

[{"key":"value"}, {"key":"value"}]

I'm using the custom classifier:

$[*]

When trying to query from Spectrum, however, it returns:

Top level Ion/JSON structure must be an anonymous array if and only if serde property 'strip.outer.array' is set. Mismatch occured in file...

I set that serde property manually in the Glue catalog table, but nothing changed. Is it not possible to query an anonymous array via Spectrum?
Redshift Spectrum: Query Anonymous JSON array structure
ACM certificates can be used with CloudFront, API Gateway, or an ELB/ALB. Just issuing an ACM certificate won't do anything on its own. Where is your website pointing -- is it behind a load balancer or CloudFront? You need to use the new ACM certificate there. What certificate and error do you see when you access your website (e.g. hostname mismatch, cert expired)?
Because they had expired, I just updated the certificates of a website whose DNS is in Amazon Route 53 using this tutorial. For the new certificate I listed a domain (somedomain.com) and several subdomains (a.somedomain.com, b.somedomain.com).

All the steps described in the tutorial worked, and checking in ACM the certificate is already listed as issued. I used the "Create record in Route 53" tool in ACM to write the records in Route 53, and in Route 53's dashboard the CNAMEs for the new certificate are listed.

However, in ACM the certificate is listed as not used and, more importantly, my website shows up as having invalid certificates when accessed from a browser.

Am I missing any step here to update the certificate? Is there something else needed to make the certificate renewal take effect? Any help would be very appreciated.
SSL certificates not working (AWS Route 53)
Could you try these methods to drop all partitions:

ALTER TABLE my_table DROP PARTITION (year > 0.0);

or

ALTER TABLE my_table DROP PARTITION (year > 0);

or change the data type of year to string, then try to drop the partition:

ALTER TABLE my_table DROP PARTITION (year='2019.0')
I have a poorly formatted partition in Athena. I partition on year, month, day, and hour as integer columns, but mistakenly created partitions as floats, i.e.

/year=2019.0/month=4.0/day=22.0/hour=6.0

instead of

/year=2019/month=4/day=22/hour=6

I removed the S3 files responsible and ran MSCK REPAIR TABLE, but the partition wasn't removed. I tried removing the partition manually with:

ALTER TABLE my_table DROP PARTITION (year=2019.0)
ALTER TABLE my_table DROP PARTITION (year='2019.0')

But I got the error:

FAILED: SemanticException [Error 10006]: Partition not found (year = null)

Notice year = null. It seems Athena doesn't know what to do with decimals. How do I get rid of this faulty partition?

EDIT: The only way I was able to resolve this was to recreate the table and repair it. Still looking for another solution, because that would be a bummer in prod.
Unable to Delete Partition in Athena
Use SNS when you want to "notify" other services of an event -- especially if there are multiple services that might be interested in the event.

The whole format-based differentiation seems like an artificial construct that makes using SQS necessary. SQS is best for point-to-point communication.

Posting to 2-3 places from one service opens your service up to atomicity & reliability issues (what happens if the posting service crashes after posting to SQS but before posting to SNS?). Having a single place where the messages are published makes these issues easier to tackle.

In the end, it comes down to what your system requires. If it were up to me, I would change all the downstream services to get notifications over SNS (with a consistent format for all of them).
We were having a debate on the best solution today with neither side coming to an agreement.We have a product which ingests "messages". Every time we get a new message we need to send this data to 3 services for processing.Service #1 requires the data to be in a special format. For this, we put the data in SQS which the service reads from.Service #2 reads message fields: [a, b, c] and we send in a protobuf format.Service #3 reads message fields: [a, b, c, d, e], also protobuf.For service #2 and #3 we are sending the data into 2 separate SQS queues.However, we could send the data in an SNS topic which queue #2 and #3 read off of. For this, we would send service #3's protobuf as it has all fields service #2 needs.The person who wrote service #2 does not want to do this because he does not want to get extra data which they will just ignore.The person who wrote service #3 thinks it's a waste of resources for the system to protobuf and send to 2 separate SQS queues instead of 1 SNS topic when service #2 could simply read protobuf #3 and just ignore the unwanted fields.From an architecture standpoint who is correct?
Multiple SQS queues vs 1 SNS topic
The only way this is possible is to add the "question" entity as a top-level attribute on the item -- in this case the partition key -- in addition to it being embedded in the JSON. Whether that is a good partition key remains to be seen; I cannot comment on that without knowing more about your use case and its access patterns.
I have a JSON document like:

{
  "best_answer": {
    "answers": { "a": "b", "c": "d" },
    "question": "random_question"
  },
  "blurbs": []
}

And I want to create the partition key on the "question" field (nested inside best_answer). How do I do this in the AWS console?
Is there a way to choose a nested field as a partition key in AWS DynamoDB?
I was looking into this also for general education/research purposes. The closest example is featured on the AWS blog, and this is its GitHub repo. From the README.md:

"If the source is a sequence of buffered webcam frames, the browser client posts frame data to an API Gateway - Lambda Proxy endpoint, triggering the lambda/WebApi/frame-converter function. This function uses FFmpeg to construct a short MKV fragment out of the image frame sequence. For details on how this API request is executed, see the function-specific documentation."
I'm developing a web application that captures video from a webcam and saves the stream to Amazon Kinesis. The first approach I came up with is getUserMedia / MediaRecorder / XMLHttpRequest, which posts chunked MKV to my Unix server (not AWS), where a simple PHP backend proxies that traffic to Kinesis with PutMedia.

This should work, but all media streams from users would go through my server, which could become a bottleneck. As far as I know, it's not possible to post chunked MKV to Amazon directly from the browser due to cross-origin problems; correct me if I'm wrong or if there's a solution for this.

Another thing I feel I'm missing is WebRTC. XHR feels a bit like legacy in 2019 for streaming media. But if I want this to work, I will need a stack of three servers: a WebRTC server to establish the connection, a WebRTC-to-RTSP proxy, and the Kinesis GStreamer plugin, which grabs the RTSP stream and pushes it to Kinesis. It looks a bit overcomplicated, and the media traffic still runs through my server. Or maybe there is a better approach?

I need a suggestion on how to make a better architecture for my app. I feel the best solution would be a direct WebRTC connection to some Amazon service which proxies the stream to Kinesis. Is that possible? Thanks!
Streaming video from browser to Amazon Kinesis Video
"as every business have limitations on amount of history that can be stored"

I would argue that this assertion is too broad. There are business cases that need to be able to document historical events indefinitely (or indefinitely, for all practical purposes), either because the data remains relevant or because without retaining the entire history there is no way to conclusively prove that the current state of the database is as it should be... and that is the purpose of QLDB -- maintaining historical records that cannot be modified or deleted, either accidentally or on purpose.

"With QLDB, your data's change history is immutable – it cannot be altered or deleted – and using cryptography, you can easily verify that there have been no unintended modifications to your application's data." https://aws.amazon.com/qldb/

Each transaction builds on the one before it. Oversimplified, it looks like this:

hash(t1) = SHA256(t1)
hash(t2) = SHA256(t2 + hash(t1))
hash(t3) = SHA256(t3 + hash(t2))
...

Those hash values are also stored, so each transaction can be cryptographically verified against its predecessor, all the way back to the beginning of time. Deleting older records removes information necessary for verifying newer records.

A use case where you plan to purge historical data seems like an incorrect application of QLDB.
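To make the chaining idea concrete, here is a minimal Python sketch of the oversimplified scheme above. It is illustrative only -- QLDB's actual verification uses a Merkle-tree-based journal digest, not this exact construction:

```python
import hashlib

def chain_hashes(transactions):
    """Hash each transaction together with its predecessor's hash."""
    hashes = []
    previous = b""
    for tx in transactions:
        digest = hashlib.sha256(tx.encode() + previous).hexdigest()
        hashes.append(digest)
        previous = digest.encode()
    return hashes

# Changing or deleting any earlier transaction changes every later hash,
# which is why trimming old history breaks verification of newer records.
print(chain_hashes(["t1", "t2", "t3"]))
```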
I was looking at the AWS QLDB service to store the audit-trail history of changes made to our application, so that it can be immutable. But at the end of the day it is a database, and we can't just keep adding data forever (storing such a large amount of data is costly). At some point we will need to roll over / archive existing data and start afresh. I was wondering how AWS QLDB handles such scenarios. P.S. I am a newbie to AWS QLDB.
Can we archive the AWS QLDB data, as every business have limitations on amount of history that can be stored
You can specify an AWS::Serverless::Api resource in your SAM template that is configured with an Auth object, which in turn should have AWS_IAM as DefaultAuthorizer. In other words, something like:

Resources:
  ApiWithIamAuth:
    Type: AWS::Serverless::Api
    Properties:
      StageName: Prod
      Auth:
        DefaultAuthorizer: AWS_IAM

Next, you need to create a policy for your users so that they can invoke the API. "Control Access for Invoking an API" provides the reference, and "IAM Policy Examples for API Execution Permissions" contains two examples:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "execute-api:Invoke"
      ],
      "Resource": [
        "arn:aws:execute-api:us-east-1:*:a123456789/test/POST/mydemoresource/*"
      ]
    }
  ]
}

Finally, "Create and Attach a Policy to an IAM User" lists the manual steps to associate the policy with an IAM user, an IAM role or an IAM group.
I'm trying to create an API Gateway which invokes a Lambda function using SAM. I want to restrict access to the API in such a way that only certain IAM accounts/users can access the API. How should I do that? I couldn't find a proper way to attach a resource access policy to an API endpoint in SAM.
Specify which account/user can invoke the API using SAM and API Gateway
You can send URLs (no HTML tags) in the response as a normal message. But how that URL is displayed to the user depends on the channel you are using and its output formatting of that message.

I know that Facebook Messenger will automatically turn a URL string into a link. Most of the other channels probably do too, but the Lex Test Chat will not.

For testing this sort of thing, it is best to do it in the actual channel your Lex bot will use, because a lot of formatting like this works in the actual channel but does not work in the Test Chat.
While creating a chatbot using AWS Lex, I would like to provide the response in hyperlink format, but I don't want to use a response card in this case. As per the AWS Lex docs, I know that hyperlinks can't be given directly in responses. I am new to Lambda functions and tried the following:

exports.handler = (event, context, callback) => {
  callback(null, {
    "dialogAction": {
      "type": "Close",
      "fulfillmentState": "Fulfilled",
      "message": {
        "contentType": "CustomPayload",
        "content": "my link"
      }
    }
  });
};

But I am still getting the result in text format. I am even okay with any other approaches.
Provide AWS Lex response in Hyperlink format
As you say, the intent is triggered by voice. A relatively easy way to do it would be:

1. Generate an audio file expressing the intent using Polly, e.g. "play my song": https://docs.aws.amazon.com/polly/latest/dg/API_SynthesizeSpeech.html
2. Whenever the user clicks on the web link, invoke the intent using the PostContent API -- basically pretending the user said it.

An example invocation would be:

aws lex-runtime post-content --bot-name yourBot --bot-alias "\$LATEST" --user-id youruserid --content-type "audio/l16; rate=16000; channels=1" --input-stream request.wav answer.mp3

where yourBot is your bot name and request.wav is the audio file previously generated with Polly. You will get the audio answer in the file answer.mp3.

The drawback is that you need to use Lex/Lambda for this, not just Flask... Hope it helped! Ester
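The same two steps can be done from Python instead of the CLI. This is only a sketch; the bot name, alias, user ID and phrase are placeholders:

```python
import boto3

BOT_NAME, BOT_ALIAS, USER_ID = "yourBot", "$LATEST", "web-user-123"  # placeholders

polly = boto3.client("polly")
lex = boto3.client("lex-runtime")

# 1. Synthesize raw 16 kHz PCM audio for the utterance that triggers the intent.
speech = polly.synthesize_speech(
    Text="play my song",
    OutputFormat="pcm",
    SampleRate="16000",
    VoiceId="Joanna",
)
audio_bytes = speech["AudioStream"].read()

# 2. Post that audio to Lex as if the user had spoken it.
response = lex.post_content(
    botName=BOT_NAME,
    botAlias=BOT_ALIAS,
    userId=USER_ID,
    contentType="audio/l16; rate=16000; channels=1",
    accept="audio/mpeg",
    inputStream=audio_bytes,
)

# Save the spoken answer, analogous to answer.mp3 in the CLI example.
with open("answer.mp3", "wb") as f:
    f.write(response["audioStream"].read())
```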
I am working on a Flask app that links to Alexa skills. I am trying to build a capability where, when a user clicks on some content (e.g. notifications), Alexa asks if the user wishes to proceed; if the user says 'yes', Alexa takes the user to the relevant webpage.

My question is: is it possible to trigger an Alexa intent with clicks on website content instead of voice? My understanding is that an intent can only be activated through voice. Any thoughts will be much appreciated.
How can I trigger Alexa intent with clicks rather than voice?
Very late on this, but for anyone else googling: the correct link is

https://{api-id}-{vpc-endpoint}.execute-api.{region}.amazonaws.com/{stage}/

I believe the issue in the original post is that the VPC endpoint ID is missing from the link. I've seen the incorrect format referenced in a few places; I don't know about back then, but it definitely won't work without it now.
The current setup is:

- EC2 instance deployed in a VPC in subnet A.
- VPC endpoint for execute-api in the same VPC, in the same subnet (A).
- Private API Gateway with a resource policy allowing both the VPC and the VPC endpoint to invoke the API.
- VPC has all its DNS settings enabled: DNS hostnames & DNS resolution.
- VPC endpoint and EC2 instance both allow all traffic to port 443.

What am I missing here? The EC2 instance cannot seem to resolve the API via its https://(apiID).execute-api.(region).amazonaws.com/(api)
EC2 could not resolve private API Gateway
I can't see anything wrong with your script, but I would suggest the following actions:

1. Add the executable flag to your script:

chmod +x /opt/sonar/bin/linux-x86-64/sonar.sh

2. For debugging purposes, add the following line:

exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1

So your user data should look like:

#!/bin/bash -xe
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
sudo chown -R ec2-user:ec2-user /opt/sonar/temp/conf
chmod +x /opt/sonar/bin/linux-x86-64/sonar.sh
/opt/sonar/bin/linux-x86-64/sonar.sh start

The additional line is taken from the AWS docs: "To troubleshoot issues on your EC2 instance bootstrap without having to access the instance through SSH, you can add code to your user-data bash script that redirects all the output both to the /var/log/user-data.log and to /dev/console. When the code is executed, you can see your user-data invocation logs in your console."
I am facing an issue while launching a CloudFormation template passing user data. It seems like no commands are running inside; the instance comes up healthy without running these commands. Please help me resolve this.

AWSTemplateFormatVersion: 2010-09-09
Description: sonar
Resources:
  Ec2Instance:
    Type: AWS::EC2::Instance
    Properties:
      KeyName: sivakey_feb
      ImageId: ami-04bfee437f38a691e
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -xe
          sudo chown -R ec2-user:ec2-user /opt/sonar/temp/conf
          /opt/sonar/bin/linux-x86-64/sonar.sh start
      InstanceType: t2.large
      Tags:
        - Key: Name
          Value: sonar
AWS Cloudformation userdata issue
You can use the urllib package. From the documentation:

"urllib is a package that collects several modules for working with URLs:
- urllib.request for opening and reading URLs
- urllib.error containing the exceptions raised by urllib.request
- urllib.parse for parsing URLs
- urllib.robotparser for parsing robots.txt files"

A simple GET request using requests:

import requests
r = requests.get('http://www.python.org/')
print(r.text)

Alternative using urllib:

import urllib.request
r = urllib.request.urlopen('http://www.python.org/').read()
print(r)
Can you use Python's requests library in AWS Glue? Is there a replacement for the requests library that can be used with Glue, since Glue only supports pure Python modules?
Python requests library in AWS Glue
Resources:
  Service:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      Parameters:
        ...
        ...
        TaskPolicyArn: !Ref ThisServicePolicy
  DynamoTable:
    Type: "AWS::DynamoDB::Table"
    Properties:
      AttributeDefinitions:
        ...
        ...
        ...
  ThisServicePolicy:
    Type: "AWS::IAM::ManagedPolicy"
    Properties:
      ManagedPolicyName: SomePolicyName
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Action:
              - "dynamodb:GetItem"
              - "dynamodb:BatchGetItem"
              - "dynamodb:Query"
            Resource: "*"
I currently have the following CloudFormation .yaml file:

Resources:
  DynamoTable:
    Type: "AWS::DynamoDB::Table"
    Properties:
      ...
      ...
      ...

How do I give other resources permission to query this table?
DynamoDB Cloudformation Permission
After the user migration lambda is called your pre sign-up lambda will be called, assuming you have implemented it. The parameters received by your lambda will include username with the value being the UID you referenced. Parameters will also include user attributes containing email. You can use this information to update your database.
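As an illustration, a minimal pre sign-up trigger in Python might look like the sketch below. The update_user_record helper is a hypothetical stand-in for however your own database is updated:

```python
def lambda_handler(event, context):
    # Cognito invokes the pre sign-up trigger with the generated username
    # and the user's attributes (including email).
    cognito_username = event["userName"]
    email = event["request"]["userAttributes"].get("email")

    update_user_record(email, cognito_username)  # hypothetical DB update helper

    # Return the event unchanged so sign-up continues normally.
    return event


def update_user_record(email, cognito_username):
    """Hypothetical stand-in for writing the Cognito username to your database."""
    print(f"linking {email} -> {cognito_username}")  # replace with a real DB call
```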
Cognito has a migration lambda that allows us to confirm a user in our DB. The user sends their email and password to Cognito, the lambda fires, we verify the match, and the user is entered into Cognito.

At this point -- behind the scenes -- Cognito generates a username of some kind (a UUID). The problem is, I need a way to get this username into our existing database, because our systems going forward will no longer rely on email and will instead rely on this username.

Ideal flow:

1. Sign in
2. Migration succeeds
3. Cognito generates a username
4. The username is sent to our server

Now, because we have email set to auto-verified, no post-confirmation lambda can be called. The only ways I see to do this with Cognito as-is are to either:

- Ask users who already exist in our system to confirm their email again. This is a non-starter.
- Create a post-auth lambda, check the user's login count through a custom attribute, and if it is 0 (or if the user is not already registered with the service, etc.), migrate the username to the new service.

If there is any other way to do this, please let me know.
AWS Cognito - run another lambda after migration lambda has run
I think you can possibly reach the goal via the k8s module, as it natively supports a kubeconfig parameter which you can use for EKS cluster authentication. You can follow the steps described in the official documentation in order to compose the kubeconfig file. There was a separate thread on GitHub (#45858) about applying Kubernetes manifest files through the k8s module; contributors there were facing an authorization issue, so take a look through the conversation -- maybe you will find some helpful suggestions.
From a local machine, we can apply Kubernetes YAML files to AWS EKS using the AWS CLI + aws-iam-authenticator + kubectl. How do we do this in Ansible Tower / AWX?

I understand that there are a few Ansible modules available, but none seems to be able to apply Kubernetes YAML to EKS. The k8s module doesn't seem to support EKS at the moment, and aws_eks_cluster only allows you to manage the EKS cluster itself (e.g. create, remove).
How to apply Kubernetes YAML files to AWS EKS using Ansible Tower/AWX?
You need to have the bytes of the image. After you get the image from S3 (S3 copy or wget), you can call:

with open(file_name, 'rb') as f:
    payload = f.read()

payload = bytearray(payload)

client.invoke_endpoint(
    EndpointName='sagemaker-model-endpoint',
    ContentType='application/x-image',
    Body=payload)
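Putting the two pieces together, a minimal sketch that first pulls the object from S3 and then invokes the endpoint might look like this (bucket, key, region and endpoint names are placeholders taken from the question):

```python
import boto3

BUCKET, KEY = "mybucket", "imgname"    # placeholder bucket/key from the question
ENDPOINT = "sagemaker-model-endpoint"  # placeholder endpoint name

s3 = boto3.client("s3")
runtime = boto3.client("sagemaker-runtime", region_name="us-east-2")

# Fetch the image bytes from S3 -- invoke_endpoint has no s3:// parameter,
# so the payload itself must carry the image data.
obj = s3.get_object(Bucket=BUCKET, Key=KEY)
payload = obj["Body"].read()

response = runtime.invoke_endpoint(
    EndpointName=ENDPOINT,
    ContentType="application/x-image",
    Body=payload,
)
print(response["Body"].read())
```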
I have an image that I upload to S3 in mybucket. Suppose the S3 path for this data is s3://mybucket/imgname.

I also have a model deployed in SageMaker at sagemaker-model-endpoint. I looked into examples of how to invoke this SageMaker endpoint from a boto client here, but I am not sure how to specify the S3 path s3://mybucket/imgname in the invoke_endpoint call.

client = boto3.client("runtime.sagemaker", region_name='s3.us-east-2.amazonaws.com')
client.invoke_endpoint(
    EndpointName=sagemaker-model-endpoint,
    Body=payload,
    ContentType='image/jpg',
    Accept='Accept')

What should the payload be in this case? Where do I specify the S3 URL?
Invoking SageMaker endpoint with data payload in S3 from a python boto client
Increase the memory limit of the lambda. If the instance is memory constrained, it will run much slower. Additionally (as @Michael points out in the comments), the amount of CPU available to a lambda is proportional to its memory allocation.
I have an AWS lambda function that is taking over a minute just to run imports. My code is doing nothing in the global scope that I can tell. How do I fix this?
AWS Lambda extremely slow on imports
You're waiting for the response of an async call, and it's likely that you aren't getting one. Check the SES API logs in CloudTrail to make sure that the request is actually being made. It sounds like your Lambda function can't access SES, which would happen if you are running it in a VPC; you would need to add a NAT gateway to the VPC. Consider moving your Lambda outside of your VPC -- here is a guide to help determine the trade-offs.
I am using Node.js version 8.10 and the Serverless Framework. In my serverless.yml I have:

provider:
  name: aws
  runtime: nodejs8.10
  region: eu-west-1
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "ses:GetIdentityVerificationAttributes"
      Resource: "*"

and my lambda looks like this:

const AWS = require('aws-sdk');
var ses = new AWS.SES({ region: 'eu-west-1' });

module.exports.handler = async (event, context, callback) => {
  context.callbackWaitsForEmptyEventLoop = false;
  let identityVerif = await ses.getIdentityVerificationAttributes({ Identities: ['email'] }).promise();
}

I don't understand why the getIdentityVerificationAttributes call is never executed. The function exits with a timeout.
aws lambda SES function timeout
'Tags' can be added to your filter as follows:

response = client.get_cost_and_usage(
    TimePeriod={
        'Start': '2019-01-10',
        'End': '2019-01-15'
    },
    Metrics=['BLENDED_COST', 'USAGE_QUANTITY', 'UNBLENDED_COST'],
    Granularity='MONTHLY',
    Filter={
        'Dimensions': {
            'Key': 'USAGE_TYPE',
            'Values': ['APN1-EBS:SnapshotUsage']
        },
        'Tags': {
            'Key': 'keyName',
            'Values': [
                'keyValue',
            ]
        }
    }
)

You can find the exact usage in the boto3 Cost Explorer API reference.

You could also group by tag keys like this:

Filter={
    'Dimensions': {
        'Key': 'USAGE_TYPE',
        'Values': ['APN1-EBS:SnapshotUsage']
    }
},
GroupBy=[
    {
        'Type': 'DIMENSION'|'TAG',
        'Key': 'string'
    },
],

It won't filter out tags, but it will group the returned data by tag key. This will return ALL tag values matching the tag key, so it may be too broad, but you can use it to troubleshoot any additional problems. I'd confirm that your tag values and keys all match up.
I am trying to use the Cost Explorer API with boto3 to get the cost for EC2 snapshots. These snapshots have custom tags associated with them, and what I am trying to retrieve is the cost of snapshots which have a particular tag.

I have written the following script:

import boto3

client = boto3.client('ce')
response = client.get_cost_and_usage(
    TimePeriod={
        'Start': '2019-01-20',
        'End': '2019-01-24'
    },
    Metrics=['BLENDED_COST', 'USAGE_QUANTITY', 'UNBLENDED_COST'],
    Granularity='MONTHLY',
    Filter={
        'Dimensions': {
            'Key': 'USAGE_TYPE_GROUP',
            'Values': ['EC2: EBS - Snapshots']
        }
    }
)

This gives me the cost, but it is the total cost for snapshot usage, i.e. for all the volumes. Is there any way to filter based on tags on the snapshot?

I tried adding the following filter:

Filter={
    'And': [
        {
            'Dimensions': {
                'Key': 'USAGE_TYPE_GROUP',
                'Values': ['EC2: EBS - Snapshots']
            }
        },
        {
            'Tags': {
                'Key': 'test',
                'Values': ['aj']
            }
        }
    ]
}

There is one snapshot where I have added that tag. I checked the date range, and the snapshot was created within that time range and is still available. I tried changing granularity to DAILY too, but this always shows zero cost.
Getting snapshot cost based on tags
Yes, it is possible. When you are building the Lambda authorizer, choose the Lambda payload type to be "Request".

Assuming that you have named your first Lambda parameter event, then inside the Lambda you will have access to your path parameter values via event.pathParameters, as well as access to your query string via event.queryStringParameters, and other request information if needed, such as the authorization token, which you can extract from event.headers.

This uses Node.js syntax; the same logic holds true for Java, but you will need to modify it according to Java syntax.
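As a concrete illustration, a minimal Request-type authorizer in Python could pull "myspecialkey" out of the path like this. It is a sketch only: the policy below simply allows every call once the key is extracted, whereas a real authorizer would validate it first:

```python
def lambda_handler(event, context):
    # With the "Request" payload type, the full request context is available.
    path = event.get("path", "")                       # e.g. /v1/myspecialkey/find
    query = event.get("queryStringParameters") or {}   # e.g. {"a": "b"}
    headers = event.get("headers") or {}

    # The key sits in a fixed position of the path for the example URL.
    parts = path.split("/")
    special_key = parts[2] if len(parts) > 2 else None

    return {
        "principalId": special_key or "anonymous",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Allow",
                "Resource": event["methodArn"],
            }],
        },
    }
```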
In API Gateway I have a GET endpoint like the following (with some request headers too):

http://awesomedomain/v1/myspecialkey/find?a=b

Is there a way the Lambda (authorizer) code can read "myspecialkey"? Thanks in advance.
Accessing URL path in AWS Lambda Authorizer
The first field should include the mount type and the bucket name, e.g.:

s3fs#mybucket /path/to/mountpoint fuse _netdev,allow_other 0 0

The s3fs README has other examples.

(The asker confirmed it worked with the following fstab entry:

myresearchdatasets /var/s3fs-drive-fs fuse.s3fs _netdev,iam_role=EC2-to-S3-Buckets-Role,allow_other,umask=777, 0 0)
I have tried all the answers provided in similar questions, but none is helpful. I installed s3fs so that I can mount an S3 bucket. After the installation, I performed the following steps:

Step 1: Create the mount point for the S3 bucket:

mkdir -p /var/s3fs-drive-fs

Step 2: Mount the S3 bucket in the new directory with the IAM role by running:

s3fs myresearchdatasets /var/s3fs-drive-fs -o iam_role=EC2-to-S3-Buckets-Role -o allow_other

and it works fine.

However, I found that the bucket disappears each time I reboot the system, which means I have to run the command above to remount the S3 bucket after every restart. I found steps to set up an automatic mount at reboot by editing the fstab file with the line below:

s3fs myresearchdatasets /var/s3fs-drive-fs fuse _netdev,allow_other,iam_role=EC2-to-S3-Buckets-Role,umask=777, 0 0

To check whether the fstab entry is working correctly, I tried mount /var/s3fs-drive-fs/, but I got the following error: "mount: can't find /var/s3fs-drive-fs/ in /etc/fstab". Can anyone help me please?
Automatically mounting S3 bucket using s3fs on Amazon CentOS
It appears that your requirement is to use Glacier Vault Lock on some objects to guarantee that they cannot be deleted within a certain timeframe.

Fortunately, similar capabilities have recently been added to Amazon S3, called Amazon S3 Object Lock. This works at the object or bucket level. Therefore, you could simply use Object Lock instead of moving the objects to Glacier. If the objects will be infrequently accessed, you might also want to change the storage class to something cheaper before locking them.

See: Introduction to Amazon S3 Object Lock - Amazon Simple Storage Service

(On the compliance concern raised in the comments: per that documentation, "S3 Object Lock has been assessed for SEC Rule 17a-4(f), FINRA Rule 4511, and CFTC Regulation 1.31 by Cohasset Associates.")
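For illustration, assuming a bucket that was created with Object Lock enabled, a sketch of uploading an object with a retention period via boto3 might look like this (bucket and key names are placeholders):

```python
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")

# Object Lock must have been enabled when the bucket was created.
retain_until = datetime.now(timezone.utc) + timedelta(days=365)

s3.put_object(
    Bucket="my-compliance-bucket",       # placeholder bucket name
    Key="reports/2019/report.json",      # placeholder key
    Body=b"...file contents...",
    ObjectLockMode="COMPLIANCE",         # cannot be shortened or removed by any user
    ObjectLockRetainUntilDate=retain_until,
)
```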
I need to create a Python Flask application that moves a file from S3 storage to S3 Glacier. I cannot use a lifecycle policy to do this, as I need to use Glacier Vault Lock, which isn't possible with the lifecycle method since I won't be able to use any Glacier features on those files. The files will be multiple GBs in size, so I need to download these files and then upload them to Glacier. I was thinking of adding a script on EC2 that will be triggered by Flask and will start downloading and uploading files to Glacier. This is the only solution I have come up with and it doesn't seem very efficient, but I'm not sure. I am pretty new to AWS, so any tips or thoughts will be appreciated.

I am not posting any code as I don't really have a problem with the coding, just with the approach I should take.
Approach to move file from s3 to s3 glacier
After looking at the responses, I did my own research. Here is what I found. Snapshots can be copied across regions, while volumes stay in the same region. You can create a copy of a snapshot, but you can't directly copy a volume; to make a copy of a volume you have to go through a snapshot. Volumes, images and instances all depend on snapshots -- the snapshot is the glue between them. Please add if anyone finds other interesting facts.

(From the comments: think of a volume as a network-attached disk drive for your instance, and of snapshots as optimized incremental backups. You can restore a system from snapshots and create new volumes from them; the two are very different but complementary technologies.)
I see that a snapshot is just a backup of a volume, except that you can create another volume from the snapshot. I guess there would be other differences. Has anyone noticed other differences which really matter?
Difference between AWS Snapshot and Volume
I don't think you should go for DynamoDB or Elasticsearch for this kind of operation. After all, what you want is to store and serve the files, not to query their contents, which both DynamoDB and Elasticsearch would waste time doing.

My suggestion is to use AWS Lambda + S3 to optimize for cost. S3 can have a small delay after a put before the object is available, though (it gets bigger, even minutes, when you have millions of objects in a bucket).

If that delay matters for your operation and the total throughput at any given moment is not too huge, you can create a server (preferably EC2) that serves as a temporary file stash. It will:

1. Receive your file
2. Try to upload it to S3
3. If the file is requested before it's available on S3, serve the file from disk
4. If the file is successfully uploaded to S3, serve the S3 URL and delete the file on disk

(From the comments: you can also route requests through your backend instead of Lambda, but there are trade-offs -- Lambda + S3 will most likely be cheaper, e.g. 10K files is roughly 10K short Lambda runs, versus the cheapest EC2 instance running 24/7 for a month.)
I have a backend that receives, stores and serves 10-20 MB JSON files. Which service should I use for super-fast put and get (I cannot break the file into smaller chunks)? I don't have to run queries on these files -- just get them, store them and supply them instantly. The service should scale to tens of thousands of files easily. Ideally I should be able to put the file in 1-2 seconds and retrieve it in the same time.

I feel S3 is the best option and Elasticsearch the second best option. DynamoDB doesn't allow such object sizes. What should I use? Also, is there any other service? MongoDB is a possible solution, but I don't see it on AWS, so something quick to set up would be great. Thanks
DynamoDB vs ElasticSearch vs S3 - which service to use for superfast get/put 10-20MB files?
Don't trust file extensions blindly. The image provided is not a JPEG. You can download it to another system where you can check it using file or a similar tool. In the case at hand it is a WebP image (WebP is a newer image format pushed by Google).

One possible cause of the confusion is that web servers generate the MIME type from the file extension, so the WebP image is returned with a MIME type of image/jpeg, and this is normally trusted blindly by most software (including your browser).

(The asker noted in the comments that ImageMagick recognises the real format locally and converts it to PNG, but fails in AWS Lambda for lack of the required delegate, so the challenge is installing the delegates in the Lambda environment.)
I have this ImageMagick error with one of the images my site is trying to convert:

{ Error: Command failed: convert: no decode delegate for this image format `/tmp/925bf249f8297827f51f0370642eb560.jpg' @ error/constitute.c/ReadImage/544.
convert: no images defined `/tmp/abdf362d-f7eb-435f-bafe-5a134be0235f.png' @ error/convert.c/ConvertImageCommand/3046.
    at ChildProcess.<anonymous> (/var/task/node_modules/imagemagick/imagemagick.js:88:15)
    at emitTwo (events.js:106:13)
    at ChildProcess.emit (events.js:191:7)
    at maybeClose (internal/child_process.js:886:16)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:226:5)
  timedOut: false, killed: false, code: 1, signal: null }

The weird part is that it's happening only in my AWS Lambda function, not on my machine (Mac). I am reading about versioning, reinstalling ImageMagick and so on, but I can't do that in the Lambda runtime environment. Is there any way around this?
Only in AWS Lambda: ImageMagick Error: Command failed: convert: no decode delegate for this image format
All your external IPs (public IPs) should be available from the kubectl command line. To show this information, run:

kubectl get services --all-namespaces -o wide

If needed, specify your kubeconfig file with the --kubeconfig flag.

Example output:

NAME         TYPE        CLUSTER-IP     EXTERNAL-IP      PORT(S)    AGE
my-service   ClusterIP   10.3.245.137   104.198.205.71   8080/TCP   54s
I've followed the steps to set up an EKS cluster and successfully have one service which exposes port 31515 from a pod, but I'm stuck at finding out what my public URL is. EKS seems to have no such thing, so how do I access it from the outside? Or am I not looking in the right place?
How to get the URL of your EKS cluster?
If you're using this command programmatically, you can pass a negative response to eb init or eb deploy with a Unix command called yes -- the name seems to contradict what you're trying to achieve, but it can be used to pass a user-defined string instead of the default affirmative response. Usage:

yes n | eb deploy

It will behave as if you pressed the 'n' key. Keep in mind that 'n' will be looped (it will be the answer to all the prompts during the command execution).

Another option is using printf:

printf '\n\n\n\n' | eb deploy

This would behave as if you pressed the Enter key 4 times (4 prompts). There are some more alternatives and usage examples in this question.
Every time I use a command from the Elastic Beanstalk CLI, like eb init or eb deploy, it prompts me with:

Do you wish to continue with CodeCommit? (y/N) (default is n):

And I always say 'no'. Is there a way to suppress this prompt or provide a default answer? I have checked the EB CLI documentation but I haven't been able to find anything.
EB CLI - how to suppress code commit prompt "Do you wish to continue with CodeCommit?"
I figured it out, so for anyone else trying to work this out: you need to set the read permissions on the app client to read the "Email Verified" attribute.

Go to: General settings -> App clients -> Show details -> "Set attribute read and write permissions" link, and check off Readable Attributes: Email Verified.

(Note from the comments: the newly readable attribute won't appear in the user's attributes until the access token is renewed.)
I am using the JavaScript AWS Amplify Authentication module. If an existing and confirmed user changes their email address, the user in the Cognito user pool is set to not verified, and the user is sent a verification code at the new email address. However, I can't find any way with the API to determine whether the current user's email is verified or not. How can I find out via the API if the user's email address is verified?
Checking if email is verified in aws cognito using AWS Amplify Authentication module
Unfortunately you can't call an external API directly from a Step Function; you have to wrap the call in a Lambda.

From the AWS documentation: "Step Functions supports the ability to call HTTP endpoints through API Gateway, but does not currently support the ability to call generic HTTP endpoints."
I want to implement a simple sequence of tasks in AWS Step Functions, something like the following. I can't fire-and-forget the external API, because I need a response from it, so it is a bad idea to wrap it in a Lambda function. I also can't implement the external API task in a Lambda function, because the work exceeds Lambda's limitations.

The best way I can see is to call the external API from a Step Functions task. If I understand correctly, this is possible with Activities and a worker. I have seen a Ruby example, but it isn't clear to me. Could anybody suggest a good tutorial with clear examples of a similar implementation?

PS: I could wrap the external API in anything on EC2.
How to call external API from AWS Step Function?
aws dynamodb scan \
    --table-name Movies \
    --projection-expression "title" \
    --filter-expression 'contains(info.genres,:gen)' \
    --expression-attribute-values '{":gen":{"S":"Sci-Fi"}}' \
    --page-size 100 \
    --debug

What you might think this does: "read the entire table, find the rows that satisfy the filter, and then return up to 100 of those."

What DynamoDB actually does: "read 100 rows, find the rows among them that satisfy the filter, and return those." See https://www.dynamodbguide.com/filtering

I think you need an index :)
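If you do want "up to N matching items" client-side, one hedged approach (shown here with Python/boto3 rather than the Node SDK the question uses) is to keep paginating the scan until enough filtered results have been collected:

```python
import boto3
from boto3.dynamodb.conditions import Attr

table = boto3.resource("dynamodb").Table("Movies")

def scan_until(limit):
    """Scan page by page, keeping filtered matches until `limit` are collected."""
    items = []
    kwargs = {"FilterExpression": Attr("info.genres").contains("Sci-Fi")}
    while len(items) < limit:
        page = table.scan(**kwargs)
        items.extend(page["Items"])
        if "LastEvaluatedKey" not in page:
            break  # reached the end of the table
        kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
    return items[:limit]

print(len(scan_until(100)))
```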
I am using Node.js. If you look at this example: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Scan.html

It says:

aws dynamodb scan \
    --table-name Movies \
    --projection-expression "title" \
    --filter-expression 'contains(info.genres,:gen)' \
    --expression-attribute-values '{":gen":{"S":"Sci-Fi"}}' \
    --page-size 100 \
    --debug

where page-size limits the number of result items: "Ordinarily, the AWS CLI handles pagination automatically; however, in this example, the CLI's --page-size parameter limits the number of items per page."

But if you read the Node.js AWS documentation (https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/DynamoDB.html#scan-property), there is no parameter associated with page-size -- only Limit, which limits the number of items being scanned, not returned.

How do I limit the number of returned items (that satisfy my condition)?
How to limit the number of items from an AWS DynamoDB scan?
As you've seen, API Gateway has hard limits on response sizes. This is because it's designed for quick, transactional use cases. (API Gateway will also not keep a connection open longer than 30 seconds, so if you're streaming a file that takes longer than that to download, you'd be in trouble too.)

For these cases you might consider a different pattern, such as:

- Have your EC2 machine upload the result to S3 and have API Gateway return a pre-signed URL to download the response from S3. This would stream the download, but would have to wait for the EC2 -> S3 upload to complete first (a sketch of generating such a URL is shown below).
- Use Elastic Beanstalk; that way you would be in control of the server, able to keep your connections open for as long as you want, and able to send as much data as you want.
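A minimal sketch of the pre-signed URL approach with boto3 (bucket and key names are placeholders for wherever the EC2 worker uploads the finished result):

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket/key where the EC2 worker uploaded the finished result.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "my-results-bucket", "Key": "results/job-42.json"},
    ExpiresIn=900,  # the link stays valid for 15 minutes
)

# The API can now return this URL (or a 302 redirect to it) instead of
# streaming the large payload through API Gateway itself.
print(url)
```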
Is there any way I can stream content in an API response which is backed by AWS API Gateway? My content can be very large and I want to stream it to the requestor. At present I see there is a 10 MB payload size limit on API Gateway.

I also generate the data at runtime, when I get the request, on my EC2 machine, and as soon as some data is generated I want to start streaming it to the requestor. Is this possible? How?
API Gateway stream large size content in response
This is not an error. It's basically saying that you have made changes to the template which the Designer is not reflecting yet, so all you need to do is click refresh in the CloudFormation Designer window.
I am using AWS Cloud Formation using a template and have dragged and dropped an EC2 instance (as this is what I'm working with). I get the error that "designer is out of date, hit refresh". Why would this be? How can I fix this?
AWS Cloud Formation - "Designer is out of date, hit refresh" message
Create a buildspec file for each of the builds you want to run. In the pre-build phase of each buildspec file, change to the appropriate directory. When you invoke the build, point to the necessary buildspec by using buildspecOverride.

In the long term it might be easier to separate your three projects into their own repositories.

(From the comments: see docs.aws.amazon.com/codebuild/latest/userguide/… -- you can also have multiple output artifacts.)
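For example, if the build is started programmatically, boto3's start_build call accepts the override directly. This is a sketch; the project name and buildspec path are placeholders:

```python
import boto3

codebuild = boto3.client("codebuild")

# Kick off a build of only one of the sub-projects by pointing CodeBuild at
# that project's own buildspec file inside the shared repository.
response = codebuild.start_build(
    projectName="my-monorepo-project",               # placeholder project name
    buildspecOverride="frontend-app/buildspec.yml",  # path within the repo
)
print(response["build"]["id"])
```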
This is probably a simple question, but I did not find the answer.I'm using AWS CodeBuild to build my code. However, in my repository (in this case it is on bitbucket), I kind of have 3 projects in the same repository.These projects are in different folders. Two of them are angular projects.So, I want to build just one project at a time in my Continuous Integration with CodeBuild.If I try to use commands like "ng build", I receive an error because my main folder does not have an angular project. And that is right. Because my project is in a folder inside the main folder.So, how can a change the "build path" of my build definition on AWS CodeBuild?Thank you in advance.
How to change the build folder on aws CodeBuild
There actually isn't a queue. While an approval action is in progress it holds the stage "lock" for that stage, so that the change in that stage does not change underneath you while you run manual testing.

While that stage lock is held, there is a "slot" for the change waiting to be promoted into that stage when the lock is released. As newer changes pass the previous stage they replace the change in the slot. Therefore, when you approve or reject a manual approval action, only the most recent pending change is promoted.

Rather than a manual approval, you might want to just disable the transition between staging and prod. Disabling the transition won't hold the lock in either stage, so when you enable it again the most recent change will be promoted.

Transitions are better when you simply want to control when you deploy to prod; manual approvals are better when you want to run manual testing against a consistent version.

See this documentation on transitions: https://docs.aws.amazon.com/codepipeline/latest/userguide/transitions.html
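If you want to toggle that transition from a script rather than the console, a boto3 sketch could look like the following (pipeline and stage names are placeholders):

```python
import boto3

codepipeline = boto3.client("codepipeline")

# Block promotions into the Prod stage until someone is ready to release.
codepipeline.disable_stage_transition(
    pipelineName="my-app-pipeline",   # placeholder pipeline name
    stageName="Prod",                 # placeholder stage name
    transitionType="Inbound",
    reason="Hold releases until manual testing finishes",
)

# Later, re-enable it; the most recent successful change is then promoted.
codepipeline.enable_stage_transition(
    pipelineName="my-app-pipeline",
    stageName="Prod",
    transitionType="Inbound",
)
```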
I have been using AWS for more than a year now. Lately, I have been focusing on building a CI/CD pipeline. My pipeline has these stages:

1. Source (GitHub)
2. Testing (using CodeBuild)
3. Staging (deploys to Staging)
4. Manual Approval
5. Prod (deploys to Staging)

According to this AWS doc, "If no response is submitted within seven days, the action is marked as 'Failed.'"

The pipeline is relatively active (several deploys to staging per day), and what I found is that the approvals "queue" up, and you have to approve many times before the most recent changes get to production. Is there a way to set the expiration time of an approval to less than 7 days?
How to ignore AWS CodePipeline Approval automatically in less than 7 days
There are several ways to do this:

1. Have a single ops server run all the tasks that need to run on only one server. Your Bitbucket pipeline can trigger this ops server for single-server tasks and the others for multi-server tasks.
2. Create a custom Artisan command that acquires a lock (DB or cache) to run migrations while avoiding parallel runs / race conditions.
3. Trigger deployments serially (I don't know if that's possible on Beanstalk).
4. As the OP mentioned, setting the leader_only: true flag on the Elastic Beanstalk container command, so that it runs on only a single instance, does the trick.
We are experiencing a very weird issue at the moment. Our tech stack involves AWS Elastic Beanstalk, EC2 and Laravel, deploying the code with Bitbucket Pipelines.

The problem is that whenever we include a migration in the deploy, it runs twice (as many times as there are EC2 instances in the environment!).

Our scripts are located under the .ebextensions dir:

option_settings:
  "aws:elasticbeanstalk:container:php:phpini":
    document_root: /public

container_commands:
  01initdb:
    command: "php artisan migrate"

We ended up breaking our deploy a few times because the system can't tell that this migration has already run. Has anyone seen this issue before?

Update: We came up with this implementation because the MySQL connection is refused if we add php artisan migrate to the build script.
EC2: Laravel migrations run as many times as the instances
You have to define the template directly with the array field. Here is an example.

Request:

POST apigateway/stage/resource?query=test
{
  "id": "id",
  "list": [1,2,3,4]
}

Mapping:

#set($inputRoot = $input.path('$'))
{
  "query": "$input.params('query')",
  "id": "$inputRoot.id",
  "list": $inputRoot.list
}
I have a POST method in API Gateway that accepts data passed as body params. From API Gateway I managed to get the userName and uuid, but I'm getting an error on the traveledCities field ($inputRoot.traveledCities, line 5 of the mapping template). How can I map an array or an object passed from the body params?
How to map an array or object from Api Gateway integration request mapping template?
The CDK renames the Ref values to make them look like any other property, with a name that is automatically generated from the resource name and the Ref type (typically Name, Id or Arn).

In the particular case you're facing, you need to use the UserPoolResource.userPoolId property (UserPool is the resource type name, and Id is the Ref type).
How do I call the !Ref function in an aws-cdk stack? I have a UserPool resource and a UserPoolClient resource with a userPoolId property:

const userPool = new cognito.cloudformation.UserPoolResource(this, userPoolResourceName, {
  userPoolName,
  usernameAttributes: ['email'],
  autoVerifiedAttributes: ['email'],
  policies: {
    passwordPolicy: {
      minimumLength: 8,
      requireLowercase: false,
      requireNumbers: false,
      requireSymbols: false,
      requireUppercase: false
    }
  }
});

new cognito.cloudformation.UserPoolClientResource(this, userPoolClientResourceName, {
  userPoolId: `!Ref ${userPool.id}`, // failed
  clientName: userPoolClientName
});
!Ref function in aws-cdk
Add the following to the software configuration when creating the EMR cluster (Create cluster -> Step 1: Software and steps -> Edit software settings -> enter configuration):

[
  {
    "Classification": "spark-env",
    "Configurations": [
      {
        "Classification": "export",
        "Properties": {
          "PYSPARK_PYTHON": "/usr/bin/python3"
        }
      }
    ]
  }
]

(Note from the comments: some users report this configuration being ignored when the cluster is provisioned from the AWS CLI, while it does work when entered in the console UI.)
I am using aws with emr, and trying to change to bootstrap script in order to set the default python in pyspark to be python 3, I am followingthistutorialthis is changing the /usr/lib/spark/conf/spark-env.sh file, but does not change the python version in pyspark, I am still getting jobs done with python 2.7. this is only working when I ssh to the machine and specifically use$source /usr/lib/spark/conf/spark-env.sshWhen I try to add this line to the bootstrap script I am getting bootstrap error that the file is not found./bin/bash: /usr/lib/spark/conf/spark-env.sh: No such file or directoryI assume that the file does not exist in this stage. How can I set the pyspark python to be python 3 in the bootstrap script?
aws emr can't change default pyspark python on bootstrap
That is not possible. CloudFront signed URLs do not use IAM -- it's a different system -- so using IAM roles is not possible when generating CloudFront signed URLs. You can use them for S3 signed URLs, but not CloudFront.

One option I have used is to store the CloudFront key pair ID and the private key -- encrypted -- in SSM Parameter Store. Your application can then use the SDK and the IAM role in order to fetch the key pair ID and to fetch and decrypt the CloudFront private key for use when generating the URLs. Parameter Store is free.
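To make the Parameter Store half of that concrete, here is a minimal sketch (shown in Python for brevity; the Java SDK exposes the same GetParameter call). The parameter names are hypothetical, and the actual CloudFront URL signing step is left out:

import boto3

ssm = boto3.client('ssm')

def get_cloudfront_signing_material():
    # Fetch the key pair ID and the encrypted private key from SSM Parameter Store
    key_pair_id = ssm.get_parameter(
        Name='/cloudfront/key-pair-id'      # hypothetical parameter name
    )['Parameter']['Value']
    private_key_pem = ssm.get_parameter(
        Name='/cloudfront/private-key',     # hypothetical parameter name
        WithDecryption=True                 # SecureString value is decrypted via KMS
    )['Parameter']['Value']
    return key_pair_id, private_key_pem

The calling code never holds long-lived secrets itself; the IAM role attached to the instance or container only needs ssm:GetParameter (and KMS decrypt) on those two parameters.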
Currently, we use theCloudFront Key Pair ID and Private Keyto generate the cloudFront signed url which we use to upload the file into s3. CloudFront Key Pair ID and Private Key are being kept in property file which we inject using Spring and construct the signed url. We wanted to change this, instead of keeping CloudFront Key Pair ID and Private Key in properties file, we wanted to use IAM role to find it and construct the signed URL. Is that possible? If yes, how?
How to get CloudFront Key Pair ID and Private Key using IAM role JAVA
The example buildspec file assumes that your build image has Docker already installed. I was wrongly assuming that CodeBuild would install and configure the Docker tools inside the image automatically.
I am using AWS CodeBuild to build my application. I am using example build spec file as given here:https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-exampleI have already uploaded my custom Docker image to AWS ECR having requisites to build my application (Java/Scala based). I get following error:Reading package lists... [Container] 2018/10/26 10:40:07 Running command echo Entered the install phase... Entered the install phase... [Container] 2018/10/26 10:40:07 Running command docker login -u AWS -p ..... /codebuild/output/tmp/script.sh: 4: /codebuild/output/tmp/script.sh: docker: not foundWhy should I get this error ? AWS CodeBuild is supposed to download this Docker image from ECR and then follow the instructions that I provide in the build spec file for building my application.
AWS CodeBuild /codebuild/output/tmp/script.sh: docker: not found
The end solution was to make sure all traffic was forced through OpenVPN. This means anyone connecting to the VPN gets the public IP that was assigned to the VPN server, and that IP was then the only one allowed to access the site via the WAF.
Closed.This question does not meetStack Overflow guidelines. It is not currently accepting answers.This question does not appear to be abouta specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic onanother Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.Closed2 years ago.Improve this questionGoal:Use AWS WAF to filter out traffic that hits CloudFront so that only users connected to the OpenVPN network can access the web application.OpenVPN assigns any connected user to an IP in the network range of 172.xx.yyy.z/a.I therefore whitelisted this range via a a WAF rule to a Web ACL, and blacklisted any other IP's.However, I cannot access the site.Looking through CloudWatch, it becomes clear that this is because the VPN assigned IP is not actually being used to hit the web application. It is a modified IP that is very similar to the Public IP of my device.As far as I can see, there is no way for me to determine a range for these 'custom' ip's. Given this, how do I ensure only VPN connected users can access the site?Have I missed something important?
Blocking IP's using AWS WAF so that only users connected to a VPN can access CloudFront [closed]
You can't increase the policy size, but you can remove old ELK Lambda policies and replace them with a wildcard policy. This can only be done on the AWS command line; as of Aug 2019 AWS does not expose this in the web dashboard.

Three commands to do that (replace us-west-1 with your region):

List all policies:
$ aws lambda get-policy --function-name <your-ELK-lambda-name> --region us-west-1

Delete an individual policy by its statement ID (once you add the wildcard below, all individual policies become redundant and can be removed):
$ aws lambda remove-permission --function-name <your-ELK-lambda-name> --statement-id <statement-id> --region us-west-1

Add the wildcard policy:
$ aws lambda add-permission --function-name <your-ELK-lambda-name> --statement-id WildcardPolicy --action "lambda:InvokeFunction" --principal "logs.us-west-1.amazonaws.com" --source-arn "arn:aws:logs:us-west-1:<your-AWS-account-number>:log-group:*" --source-account "<your-AWS-account-number>" --region us-west-1

Two more issues: as you add new logs, it will keep adding policies, so even with the wildcard policy you will have to delete new individual policies because it's not smart enough to skip them. Also, there is a UI glitch: these newly attached logs will not show up on the ELK Lambda web page properly. But at least this will help get past the policy size limit.
My Use case is to stream all system logs, application logs and aws cloudtrail logs to aws elasticsearch service.work flow isapplication logs --> cloudwatch log group -->default lambda function -->aws esnow i can able to stream 40+ log groups to es. after some point of time i am trying to stream more loggroup to es that time i am unable to stream. i am getting following error"The final policy size is bigger than the limit of 20480 "How to increase policy sizePlease help me on this.updated:My IAM role inline policy{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents" ], "Resource": [ "arn:aws:logs:*:*:*" ] }, { "Effect": "Allow", "Action": "es:ESHttpPost", "Resource": "arn:aws:es:*:*:*" } ]}
The final policy size is bigger than the limit of 20480 - AWS ELK
You need to containerize it before deploying to SageMaker. This might be a good start: https://aws.amazon.com/blogs/machine-learning/train-and-host-scikit-learn-models-in-amazon-sagemaker-by-building-a-scikit-docker-container/
I have already developed a scikit learn based machine learning model and have it in a pickle file. I am trying to deploy it only for inferencing and found sagemaker on aws. I do not see scikit learn based libraries on their available libraries and I also do not want to train the model all over again. Is it possible to only deploy the model that is already trained and present in AWS S3 on sagemaker?
Can i deploy pretrained sklearn model (pickle in s3) on sagemaker?
Make sure that the security group assigned to your RDS instance is correct. By default, a default security group is selected when creating a new RDS instance. If your RDS instance is behind a firewall or other connection-limiting restriction, such as a VPN, then you won't be able to reach the RDS instance without the right security group. The solution is to assign a security group that allows incoming connections from your source IP address on the port designated when you created the RDS instance. The default port for MySQL on RDS, and for MySQL in general, is 3306.
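If you prefer to script that rule rather than click through the console, a rough boto3 sketch is below; the security group ID and the source CIDR are placeholders, not values from the question:

import boto3

ec2 = boto3.client('ec2')

# Allow MySQL (3306) from one trusted address only -- placeholder values
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 3306,
        'ToPort': 3306,
        'IpRanges': [{'CidrIp': '203.0.113.10/32', 'Description': 'trusted client IP'}],
    }],
)

Keeping the range as narrow as a /32 per client is usually preferable to opening 0.0.0.0/0 just to make the handshake error go away.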
I receive an errorhandshake inactivity timeoutwhen trying to connect to a newly created Amazon RDS MySql database instance.
Handshake inactivity timeout connecting to Amazon RDS instance
If the objects were moved from S3 to Glacier via a lifecycle policy, add a "permanently delete" setting to the lifecycle policy to delete the objects after n days. This will delete the objects from both S3 and Glacier. If, instead, the objects were uploaded directly to Glacier, then there is no auto-deletion capability.
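As a rough illustration of such a lifecycle rule set programmatically with boto3 (bucket name and day counts are placeholders, not the asker's values):

import boto3

s3 = boto3.client('s3')

s3.put_bucket_lifecycle_configuration(
    Bucket='my-archive-bucket',          # placeholder
    LifecycleConfiguration={'Rules': [{
        'ID': 'archive-then-expire',
        'Status': 'Enabled',
        'Filter': {'Prefix': ''},        # apply to the whole bucket
        'Transitions': [{'Days': 30, 'StorageClass': 'GLACIER'}],
        'Expiration': {'Days': 120},     # removes the object from S3 and Glacier
    }]},
)

The Expiration days count from the object's creation date, not from the transition date, so pick it accordingly.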
Thanks for reading this.I am able to transfer files from S3 to Glacier after 30 days using lifecycle rule. However, how do I make the same files get deleted from Glacier after 3 months?Thanks.
Amazon Glacier How to delete files after a certain period of time
The other way is to install awsebcli in a virtualenv with Python 3.7.

If you don't have virtualenv, install it first:
pip install virtualenv

Then make a virtualenv with Python 3.7:
virtualenv -p python3.7 <name of virtualenv>

Activate this virtualenv:
cd <name of virtualenv>
source bin/activate

Now install awsebcli:
pip install awsebcli

This virtualenv will now have Python 3.7 as its default Python version.
My system as Python 2.7 and 3.7 installed. I have attempted to install the EB CLI connected to Python 3 but the CLI tool seems only to connect to the 2.7 installation.Attempt 1When I run$ brew install awsebcliI get a version of EB that seems to be associated with 2.7, which is incorrect:$ eb --version EB CLI 3.14.4 (Python 2.7.1)Attempt 2When I attempt to install EB CLI using pip, the installation appears fine but I am unable to access EB.$eb --version -bash: eb: command not foundThe docs suggest this might be to do with not having the path in the .bash_profile I've set up, however I have added the following to my .bash_profile and reloaded the .bash_profile:# Adding path to Elastic Beanstalk CLI export Path=/Library/Python/3.7/bin:$PATH
EB CLI installing with Python 2 rather than Python 3
I've recently had to contact AWS Enterprise support about this. Commonly requested services that aren't receiving tags from CloudFormation include DynamoDB, ElastiCache, IAM resources, ECS clusters, CloudFront distributions, Glue jobs, SQS, and Firehose delivery streams. There is an internal feature request open; however, their suggested action was to just manually tag the resources.
I'm trying to understand the behavior of CloudFormation with respect to applying tags to the resources it creates.As per their documentation -https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-resource-tags.htmlIn addition to any tags you define, AWS CloudFormation automatically creates the following stack-level tags with the prefix aws:: aws:cloudformation:logical-id aws:cloudformation:stack-id aws:cloudformation:stack-nameI created a DynamoDB table from CloudFormation and I visited the DynamoDB console and selected the tags tab and couldn't find any specific tag being added. I also did not find the aws:cloudformation:logical:id tag being added.I then tried to create a S3 bucket using CloudFormation. That seems to work and I was able to visit the S3 console and find the aws:cloudformation:logical-id tag for the S3 bucket.Is this some kind of inconsistency? Is there any specific documentation I can follow to find the list of AWS resources to which CloudFormation applies the tags prefixed with aws: as mentioned in the documentation?Any help would be appreciated. Thanks!
CloudFormation - Applying tags to other AWS resources
If the code doesn't need outbound Internet access at all, place the function in a subnet in a VPC.

If the code needs outbound access but it can be limited to trusted servers, place the function in a private subnet having a route to a NAT Gateway, all in an Internet Gateway-enabled VPC. Then whitelist trusted server IPs in the security group associated with the Lambda.

See also: How a compromised NPM package can steal your secrets (POC + prevention).
I'm running a sensitive AWS Lambda function, which is required to never connect outbound to the Internet. However, lambda function uses several 3rd party open source libraries, which are not trust-able (potentially leak data). Is there a way to block outbound connections entirely from Lambda?
How to disable outbound internet connections on AWS Lambda?
Require your file:

const abc = require('./abc.js');

And in the handler function call your exported code:

abc.yourExportedMethod();
I have node.js file(i.e.abc.js) which will give the output when i run in my node.js editor. I want to run the same file inAWS Lambda.For that, I created a lambda and movedabc.jsto there. To run, it seems i need to implement myabc.jsfile in handler.js(i.e.in lambda way means callback etc).Is there any way to triggerabc.jsfromhandler.jsrather than implementing again the same thing inhandler.js?Checked regarding the above usecase but didn't find much on google.UpdatedMy abc.js filevar AWS = require('aws-sdk'); // Set the region AWS.config.update({ region: "ap-south-1" }); // Create S3 service object s3 = new AWS.S3(); var params= {}; s3.listBuckets(params, bucketList); function bucketList(err, data) { if (err) console.log(err, err.stack); // an error occurred else { console.log(data) } }My handler.js in lambda and modifying it based on my interpretation of your answer.exports.handler = async (event) => { const abc = require('./abc.js'); // TODO implement abc.bucketList(); };This is the error i am gettingResponse: { "errorMessage": "abc.bucketList is not a function", "errorType": "TypeError", "stackTrace": [ "exports.handler (/var/task/index.js:5:5)" ] }Any help is appreciated.
Invoking a file from AWS lambda handler.js?
The handler supports a rewrite functionality that allows you to modify the URL, which is likely the simplest way to achieve this: https://docs.aws.amazon.com/solutions/latest/serverless-image-handler/appendix-b.html

Basically, you can rewrite all URLs to always append /cloudfront_assets/, similar to how the example rewrites to add /fit-in/. Rewriting something like .* should catch pretty much everything. As the code is Python based, you should use Python regexp syntax. The underlying code for the function can be found in the GitHub repo: https://github.com/awslabs/serverless-image-handler/blob/master/source/image-handler/lambda_rewrite.py
Hi i got the Serverless Image Handler up and running (using this template:https://docs.aws.amazon.com/solutions/latest/serverless-image-handler/deployment.html). Deployment worked fine, all good.I pointed it to my already existing bucket "MyBucket", and i can do image rescaling and stuff when placing images into that bucket. However we have all our images in a subfolder to that bucket, called "cloudfront_assets".So after assigning my CNAME to the new cloudfront distribution, i am stuck with having to reference my images like this:https://subdomain.mydomain.com/cloudfront_assets/image.jpginstead ofhttps://subdomain.mydomain.com/image.jpgI tried editing the cloudfront disitrbutions origin settings, and set "Origin Path" from /image to things like /cloudfront_assets or /image/cloudfront_assets.It fixed the path issue, so i didnt have to write the "/cloudfront_assets/" before the image, but regardless of what i set, the image rescaling stopped working.What is the correct way to do this?Please help, currently stuck at the moment Set the log level to debug in the lambda function in order to see whats happening, but it only says its getting "access denied" as far as i can tell
Serverless Image Handler - How to set subfolder as root
There is a service available from Amazon called DMS (Database Migration Service); all it needs is the endpoints and connection details of the source and target database systems. Here your source is your local DB and the target is the AWS Aurora MySQL DB that you created. You can achieve the DB migration by simply following the documentation: https://docs.aws.amazon.com/dms/latest/sbs/DMS-SBS-Welcome.html It is almost of no cost for the first user because it offers an instance for free that you can use as a medium for migration.

(From the comments, the asker still hit a connection error from the schema conversion tool against the Aurora Serverless cluster endpoint: "Connection to 'jdbc:mysql://' wasn't established ... Communications link failure".)
i have a database that i want to move to AWS RDS Amazon Aurora Serverless, i dont have an instance i only have a cluster that i have created which is MySQL Aurora serverless, so is it possible to do a dump from MySQL directly to the cluster instead of dumping to an instance then creating a snapshot to restore on the serverless cluster.
Is it possible to copy data directly from MySql Local to AWS RDS Cluster Aurora Serverless
You will have to update your awsconfiguration.json file to include information about LambdaInvoker so that it can load the default service configuration. Your updated file should look like:

{
  "Version": "1.0",
  "CredentialsProvider": {
    "CognitoIdentity": {
      "Default": {
        "PoolId": "us-east-1:05aab771-99b5-4a9b-8448-de92fe86ba56",
        "Region": "us-east-1"
      }
    }
  },
  "IdentityManager": {
    "Default": {}
  },
  "LambdaInvoker": {
    "Default": {
      "Region": "us-east-1"
    }
  }
}
I'm trying to implement a lambda function with an iOS app. I follow all the steps on this tutorial form AWS:https://docs.aws.amazon.com/aws-mobile/latest/developerguide/how-to-ios-lambda.html.But when I add the following line:let lambdaInvoker = AWSLambdaInvoker.default()it throws this error:*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'The service configuration is `nil`. You need to configure `Info.plist` or set `defaultServiceConfiguration` before using this method.'I added the awsconfiguration.json file to the project with this content:{ "Version": "1.0", "CredentialsProvider": { "CognitoIdentity": { "Default": { "PoolId": "us-east-1:05aab771-99b5-4a9b-8448-de92fe86ba56", "Region": "us-east-1" } } }, "IdentityManager" : { "Default" : { } } }The app runs well importing AWSLambda and the mobileClient, and I'm able to validate credentials with Cognito (I get the "welcome to AWS" message)Any ideas??
The service configuration is `nil` when instantiating AWSLambdaInvoker on Swift
Yes, it's possible. It's a bit cryptic, but here's a filter pattern that will do the trick:

[a != "START" && a != "END" && a != "REPORT" && a != "RequestId:", ...]

When tested against:

START RequestId: 9538d388-c156-4680-b9d0-ba98c73742c7 Version: $LATEST
2019-02-06T20:30:49.096Z 9538d388-c156-4680-b9d0-ba98c73742c7 Hello World
END RequestId: 9538d388-c156-4680-b9d0-ba98c73742c7
REPORT RequestId: 9538d388-c156-4680-b9d0-ba98c73742c7 Duration: 24.45 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 47 MB
RequestId: 9538d388-c156-4680-b9d0-ba98c73742c7 Process exited before completing request

only this will match:

2019-02-06T20:30:49.096Z 9538d388-c156-4680-b9d0-ba98c73742c7 Hello World

(From the comments: you can also name the columns, e.g. [timestamp != "START" && timestamp != "END" && timestamp != "REPORT" && timestamp != "RequestId:", aws_request_id, log].)
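If you manage the subscription filter in code, the pattern goes in as a plain string. A hedged boto3 sketch follows; the log group name and ARNs are placeholders, not the asker's resources:

import boto3

logs = boto3.client('logs')

FILTER_PATTERN = '[a != "START" && a != "END" && a != "REPORT" && a != "RequestId:", ...]'

logs.put_subscription_filter(
    logGroupName='/aws/lambda/my-java-function',                                    # placeholder
    filterName='skip-lambda-report-lines',
    filterPattern=FILTER_PATTERN,
    destinationArn='arn:aws:firehose:us-east-1:123456789012:deliverystream/to-es',  # placeholder
    roleArn='arn:aws:iam::123456789012:role/cwl-to-firehose',                       # placeholder
)

The roleArn must allow CloudWatch Logs to write to the Firehose delivery stream; only events matching the pattern are forwarded, so the START/END/REPORT noise never reaches Elasticsearch.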
I'm using a subscription filter to get logs from a specific log group to Firehose which will eventually put it into Elasticsearch. The logs in this log group are from a Java Lambda. All theSTART RequestId ...,END RequestId ...andREPORT RequestId ...messages also end up in Elasticsearch.Is it possible to have a subscription filter so that these messages don't reach firehose and only the actual log messages from Lambda function reach the firehose. Or, is processing them with a "Transformation Lambda" the only way to achieve this ?
subscription filter for AWS CloudWatch logs to weed out Lambda Report messages
Are you logged in as ec2-user? If so, use sudo to edit the file. If you're setting up several things that require root access, try:
sudo su -
then make all the changes, and when you're finished:
exit
and you'll be back on the ec2-user.
I am trying to install mongodb on my amazon linux 2 server by following the documentationMongoDBbut when I am trying to save the repo file it shows the following errorError writing /etc/yum.repos.d/mongodb-org-4.0.repo: Permission DeniedThe repo file contents are:[mongodb-org-4.0]name=MongoDB Repositorybaseurl=https://repo.mongodb.org/yum/amazon/2/mongodb-org/4.0/x86_64/gpgcheck=1enabled=1gpgkey=https://www.mongodb.org/static/pgp/server-4.0.ascThe error is displayed when I try to save my file. How can I resolve this error? Thanks in advance.
Error writing /etc/yum.repos.d/mongodb-org-4.0.repo : Permission Denied
Ooh, nice question! I found some conversion code here: sf15to18/sf15to18.py at master · mslabina/sf15to18 ("Salesforce.com ID Converter allows you to convert 15 digit, case-sensitive IDs to an 18 digit, case-safe version for use with Salesforce.com records.")

I used that to create an Amazon Redshift user-defined function (UDF):

CREATE OR REPLACE FUNCTION f_salesforce_15_to_18 (id varchar)
RETURNS varchar
STABLE
AS $$
    # Code comes from: https://gist.github.com/KorbenC/7356677
    for i in xrange(0, 3):
        flags = 0
        for x in xrange(0, 5):
            c = id[i * 5 + x]
            # add flag if c is uppercase
            if c.isupper():
                flags = flags + (1 << x)
        if flags <= 25:
            id += 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'[flags]
        else:
            id += '012345'[flags - 26]
    return id
$$ LANGUAGE plpythonu;

Run it with:

SELECT f_salesforce_15_to_18('500A000000D34Xf')

It seems to work, but please test it!
I'm Holding SalesForce Data on Amazon Redshift DB. I would like to create a function on Redshift that will convert SalesForce 15 Char ID to 18 Char ID. I found this topic that gives a direction of how to:salesforce id - How can I convert a 15 char Id value into an 18 char Id value? - Salesforce Stack ExchangeBut non of this functions is working on Redshift and I cannot use that \ create a similar function on Amazon Redshift DB. (Have to say I'm pretty new @ this.Can someone have a code that works on Redshift?
Amazon Redshift: How can I convert a 15 char Salesforce Id value into an 18 char Id value
You should have permission to create S3 buckets. Add a CreateBucket policy to your IAM user.

(The asker notes in the comments that the same bucket resource creates fine from another, barebones CloudFormation template, so the IAM permission alone may not explain the error.)
I am gettingAPI: s3:CreateBucket Access Deniedin CloudFormation template, but when I try the same code to create the S3 bucket, in another barebones template, it worksAWSTemplateFormatVersion : '2010-09-09' Transform: AWS::Serverless-2016-10-31 Description: 'Testing' Parameters: TagCostCenter: Type: String TagDeveloper: Type: String TagProject: Type: String Resources: S3Artifacts: Type: AWS::S3::Bucket Properties: BucketName: !Sub ${AWS::StackName}-artifacts AccessControl: Private Tags: - Key: Cost Center Value: !Ref TagCostCenter - Key: Developer Value: !Ref TagDeveloper - Key: Project Value: !Ref TagProjectWhat is wrong? The stack used to work, all I did was add in the S3 bucket
Cloudformation: API: s3:CreateBucket Access Denied
See How to find unused credentials. Specifically with the awscli, use a combination of:

aws iam list-access-keys to get information about the access keys for a given user
aws iam get-access-key-last-used to see when a given access key was last used

(For console access, the comments point to CloudTrail.)
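If the shell plumbing around those two commands gets awkward, the same calls are available in boto3. A sketch that prints users with no console login and no key use in the last 180 days; the threshold and the plain print output are just illustrative choices:

import boto3
from datetime import datetime, timedelta, timezone

iam = boto3.client('iam')
cutoff = datetime.now(timezone.utc) - timedelta(days=180)

for page in iam.get_paginator('list_users').paginate():
    for user in page['Users']:
        last_seen = [user.get('PasswordLastUsed')]  # console logins, if any
        for key in iam.list_access_keys(UserName=user['UserName'])['AccessKeyMetadata']:
            used = iam.get_access_key_last_used(AccessKeyId=key['AccessKeyId'])
            last_seen.append(used['AccessKeyLastUsed'].get('LastUsedDate'))
        last_seen = [t for t in last_seen if t]
        if not last_seen or max(last_seen) < cutoff:
            print(user['UserName'])

Note that a user with no recorded activity at all is also flagged, which is usually what you want before deleting accounts.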
what aws-cli command should i execute to list all IAM users whose account had last activity more than 180 days ago.Basically we have to filter all those resources so that we can delete their accounts later
AWS CLI command to list all the IAM users with last activity more than 180 days ago
Yes, this is possible, but it will need some overhead: you can pass your own Docker images for training and inference to SageMaker. Inside these containers you can do anything you want, including returning your my_square function. Keep in mind that you have to write your own Flask microservice, including a proxy and WSGI server (if needed). In my opinion this example is the most helpful one.
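To make the idea concrete, the container's web layer can be as small as the following hedged Flask sketch. SageMaker hosting expects a /ping health check and an /invocations endpoint on port 8080; everything else here, including the payload shape and the square logic, is just for illustration:

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/ping', methods=['GET'])
def ping():
    # SageMaker calls this to check that the container is healthy
    return '', 200

@app.route('/invocations', methods=['POST'])
def invoke():
    payload = request.get_json(force=True)   # e.g. {"x": 3}
    return jsonify({'result': payload['x'] ** 2})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)       # SageMaker routes inference requests to 8080

In a production container you would put a proper WSGI server (e.g. gunicorn) in front of this instead of Flask's development server.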
I have been playing with Amazon Sagemaker. They have amazing sample notebooks in different areas. However, for testing purposes, I want to create an endpoint that returns the result from a function. From what I have seen so far, my understanding is that we can deploy only models but I would like to clarify it.Let's say I want to invoke the endpoint and it should give me the square of the input value. So, I will first create a function:def my_square(x): return x**2Can we deploy this simple function in Amazon Sagemaker?
deploy a simple function to amazon sagemaker
There is no inbound SMS capability with Amazon Simple Notification Service. It is only for outbound messaging. However, Amazon Pinpoint can respond to SMS messages, typically as part of a marketing campaign. See: Amazon Pinpoint Launches Two-Way Text Messaging | AWS News Blog
When I receive an SMS from a friend's mobile phone, I can respond to it (as you would expect). When I receive an SMS from my application, via AWS SNS, I can not respond. Why is that? And can I configure a response phone number?
How to respond to SMS from AWS SNS
I have found the quickest and easiest way is to make a backup, copy it to S3, and then tell RDS to import it from there:

"Amazon RDS supports importing MySQL databases by using backup files. You can create a backup of your on-premises database, store it on Amazon S3, and then restore the backup file onto a new Amazon RDS DB instance running MySQL."

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Importing.html
I need to import my MySQL db of size around 25 GB toaws rds.How can i do this. I tried using phpmyadmin of RDS. But my browser hang on.Also my AWS don't have public IP.
Import large db to aws RDS?
To answer your specific question, you need to use JavaScript Promises (I'm assuming you are using Node.js) in your Lambda function. When all of the promises are fulfilled, you can proceed.

However, I do not recommend doing it that way, as your initial Lambda function is sitting idle, and being billed, waiting for the responses from the other functions. IMO, the best way of achieving this parallel execution is using AWS Step Functions. Here you map out the order of events, and you will want to use Parallel States to make sure all tasks are complete before proceeding.
What is the best way to invoke aws lambda function when multiple lambda functions have successfully finished?So for example, LambdaA should run when LambdaB1, LambdaB2, ... LambdaBn have successfully returned success. Also, the last LambdaB function is not guaranteed to finish last...
Invoke AWS Lambda function when multiple Lambda functions are done
Thank you for your suggestion! We will incorporate your feedback into our roadmap planning and prioritize this feature accordingly. As always, we deliver a feature as fast as we can when we see strong customer need for it. Thanks for using Amazon SageMaker!

(In the comments, the asker adds that it would also be handy to get these objective metrics back via the AWS CLI or the SDKs, not only in the web console.)
In the SageMaker hyper parameter tuning jobs, you can use a RegEx expression to parse your logs and output a objective metric to the web console. Is it possible to do this during a normal training job?It would be great to have this feature so I don't need to look through all the logs to find the metric.
Is it possible to have SageMaker output Objective Metrics during a training job?
Yes, you can create a new deployment using the AWS CLI, and as you figured, RebuildEnvironment is not the API call. You are looking for a combination of three calls, one to S3 and two to Beanstalk:

Create a zip file of your application code.

Upload the zip file to S3. Note the bucket and key names (this makes the new version available to AWS and hence to Beanstalk).

Perform a call to Elastic Beanstalk's CreateApplicationVersion API:
aws elasticbeanstalk create-application-version --application-name <beanstalk-app> --version-label <a unique label for this version of code> --description <description of your changes> --source-bundle S3Bucket="<bucket name previously noted>",S3Key="<key name previously noted>"

Perform a call to Beanstalk's UpdateEnvironment API:
aws elasticbeanstalk update-environment --environment-name <name of environment> --version-label <label of app version created above>

Clearly, this is tedious, so I also suggest you look into deploying through the EB CLI, which does all these things for you through a single command: eb deploy
My Settings: - I've got a multidocker application specified in my Dockerrun.aws.json file. - The images of my applications are stored on ECR.In the AWS console for Elastic Beanstalk, I can "upload and deploy" a new Dockerrun.aws.json file. And then Elastic Beanstalk deploys that version.Is it possible to do the same ("upload and deploy") via theaws elasticbeanstalkcommand line?The closest thing I found wasaws elasticbeanstalk rebuild-environment --environment-id $ENVIRONMENT_ID. But that only rebuilds the existing environment with existing Dockerrun.aws.json file. What if I want to deploy my environment with another version of my Dockerrun.aws.json file in the cli?
How to upload and deploy on Elastic Beanstalk with the aws cli?
API Gateway does not support passing the complete body to a custom authorizer. One option is to have two levels of authentication: the first based only on a header or query parameter (which API Gateway supports) and enough to detect spoofed senders, and the second an SHA1 hash over the complete body, which you can implement in your backend.

(From the comments: another option to restrict access without an authorizer is an API Gateway resource policy, though that by itself may not detect spoofed senders.)
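For the second level, the backend check is only a few lines. A hedged Python sketch of verifying Facebook Messenger's X-Hub-Signature header against the raw request body; the header name and the sha1= prefix follow Messenger's webhook convention, and APP_SECRET is a placeholder:

import hashlib
import hmac

APP_SECRET = b'your-facebook-app-secret'   # placeholder

def is_valid_signature(raw_body: bytes, header_value: str) -> bool:
    # Messenger sends e.g. "sha1=<hex digest>" computed over the raw body with the app secret
    expected = 'sha1=' + hmac.new(APP_SECRET, raw_body, hashlib.sha1).hexdigest()
    return hmac.compare_digest(expected, header_value)

compare_digest is used instead of == so the comparison runs in constant time.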
Is there a way to validate request in API Gateway based on its body? I need to calculate SHA1 hash of the body to validate the sender - Facebook messenger events... Is there a workaround for it?
AWS API authorizer include body
boto3.resource('s3').ObjectAcl('S3BUCKETNAME', account_id + "_" + date_fmt + ".json").put(ACL='bucket-owner-full-control')

The above will give the bucket owner full control of any objects created and put into the bucket by the Lambda job with this name format.

(From the comments, equivalent options are boto3.client('s3').put_bucket_acl(Bucket=bucket, ACL='bucket-owner-full-control'), or passing ExtraArgs={'ACL': 'bucket-owner-full-control'} to upload_fileobj.)
I have an s3 bucket that multiple accounts are putting objects in. I'd like the account that owns the bucket to also own these files.The script uses boto3 to name and put the object so Would I set permissions in this script?Or is there an s3 policy that can force ownership on new files?I'd prefer a bucket policy but I am doubtful it's possible. The file name is"account# and the date.json"account_id = (boto3.client('sts').get_caller_identity()['Account']) s3.Object('S3bucketname', account_id + "_" + date_fmt + ".json").put(Body=json.dumps(iplist))edit: I should add, The process that is trying to read from this bucket which has the objects has a role associated with it so I'm assuming my principal would bearn:aws:iam::ACCOUNT_ID_of_bucket:role/ROLENAME
How to automate permissions for AWS s3 bucket objects
The answer to question "Is there any way to put a message to AWS SQS without access & secret key?" isYESWhen you use SDK/CLI from within EC2 then you can simply attach IAM role to EC2 that lets you communicate with your SQS. And then once you have that role correctly setup then you can put a message to AWS SQS without access and secret key. And this is recommended.The answer to question "Is there any way to send a message to queue without any SDK/CLI Support of AWS? Only with Simple REST Call from EC2 instance?" isYESas well.For more detailscheck this.But in that case (using Simple REST Call) you will have tosign the request.ShareFollowansweredJun 26, 2018 at 6:30Arafat NalkhandeArafat Nalkhande11.4k99 gold badges4141 silver badges6666 bronze badgesAdd a comment|
We configured the EC2 instance which has IAM role with full permission for SQS and EC2. Is there any way to send a message to queue without any SDK/CLI Support of AWS? Only with Simple REST Call from EC2 instance?
Is there any way to put a message to AWS SQS without access & secret key?
The purpose of SQS message attributes is to carry message metadata (like a message category or message type), not the message itself. For example, if your application supports both JSON and XML payload types, you can put the payload type in one of the message attributes, and when you fetch the message, choose an XML or JSON processor based on that attribute. This is just a superficial example to explain the usage of body versus attributes. The actual message payload should ideally go in the body of the SQS message.

The following is an extract from the AWS doc:

"Amazon SQS lets you include structured metadata (such as timestamps, geospatial data, signatures, and identifiers) with messages using message attributes. Each message can have up to 10 attributes. Message attributes are optional and separate from the message body (however, they are sent alongside it). Your consumer can use message attributes to handle a message in a particular way without having to process the message body first."
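To illustrate the split, a hedged boto3 example that puts the real payload in the body and only a small routing hint in the attributes; the queue URL, field names and attribute name are made up for the example:

import boto3
import json

sqs = boto3.client('sqs')

sqs.send_message(
    QueueUrl='https://sqs.us-east-1.amazonaws.com/123456789012/my-queue',  # placeholder
    MessageBody=json.dumps({'orderId': 42, 'items': ['a', 'b']}),          # the actual payload
    MessageAttributes={
        'payloadType': {'DataType': 'String', 'StringValue': 'json'},      # metadata / routing hint
    },
)

The consumer can inspect payloadType before parsing the body, which is exactly the use case the documentation describes.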
I think using Message Attributes is the way to go. We only use 4 attributes and are worried that eventually we'll hit the 10 attribute limitation.Is there any benefit to using MessageBody instead of individual attributes other than the 10 attribute limitation?I believeMessageBodydoesn't have a limit except for the total message size limit of 256 KB which is huge. Then again, a single attribute also has the same limit.A better question is when to use one over the other?
Any reason to use a json string in the MessageBody instead of individual attributes?
The fix, for future reference, was:

def upload(self):
    s3 = boto3.client('s3')
    try:
        with open(self.flow_cells + '.zip', 'rb') as data:
            s3.upload_fileobj(
                data,
                self.output_s3_bucket,
                self.flow_cells + '.zip',
                ExtraArgs={'ServerSideEncryption': 'AES256'}
            )
        return True
    except botocore.exceptions.ClientError as error:
        print(error.response['Error']['Code'])
I am getting an accessed denied error due to SSE How do I modify my current code to include SSE in the form of ServerSideEncryption='AES256'def download(self): s3 = boto3.client('s3') try: with open(self.flow_cells +'.zip', 'wb') as data: s3.download_fileobj(self.source_s3_bucket, self.source_key, data) return True except botocore.exceptions.ClientError as error: print(error.response['Error']['Code']) def upload(self): s3 = boto3.client('s3') try: with open(self.flow_cells +'.zip', 'rb') as data: s3.upload_fileobj(data, self.output_s3_bucket, self.flow_cells +'.zip') return True except botocore.exceptions.ClientError as error: print(error.response['Error']['Code'])
Boto3 upload ServerSideEncryption
Each AWS SNS notification will contain no more than one message; see the Reliability section in the SNS FAQ: https://aws.amazon.com/sns/faqs/ Having said that, each Lambda function trigger will have just a single record.

You may then wonder why event.Records is defined as an array at all, and whether it can be triggered by other means with multiple entries. Records is an array because other event sources can send multiple events in one shot (like S3 events or DynamoDB streams), but for SNS, though it is an array, it will have just one SNS message.
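So a handler can safely read Records[0], though iterating costs nothing and keeps the code uniform with other event sources. A minimal Python Lambda sketch:

import json

def handler(event, context):
    # For SNS triggers this loop runs exactly once per invocation
    for record in event['Records']:
        message = record['Sns']['Message']
        print(json.dumps({'received': message}))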
An AWS SNS event has a Records list which contains the message for a given notification. Is it always a single element list?The blueprint code for reading an SNS message in node is..const message = event.Records[0].Sns.Message;and in python it is..message = event['Records'][0]['Sns']['Message']
Are AWS SNS Records always a single element list?
The drop-down lets you select an AWS Lambda function from the region selected in the "Region" drop-down. If you don't see anything populated in the list, it usually means that no Lambda functions exist in that region. Can you check if you have Lambda functions in SA-EAST-1?

(From the comments, toggling the region selector in the Lambda console confirmed this, and creating the function in the right region resolved the issue.)
I am trying to followthis tutorial on the AWS Site.I added theGraphQL Schemabut then when I try to add theLambda Functionin theData SourcesI can't add it because theFunction ARNfield is disabled.How do I add Lambda function to AWS AppSync?
Add Lambda function to AWS AppSync Data Source?
From AWS Developer Forums: How to create only one thumbnail per video, it appears that you could use Amazon Elastic Transcoder to convert the video into thumbnails, and specify a huge thumbnail interval to force it to output only one thumbnail per video.

(The comments sketch the flow: create a pipeline that sits there awaiting jobs, which you can do in the console or with CreatePipeline(), then submit a job with CreateJob(); for Node.js see Class: AWS.ElasticTranscoder in the AWS SDK for JavaScript docs.)
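A rough boto3 sketch of submitting such a job is below. The pipeline ID, preset ID and object keys are placeholders, and the huge-interval trick itself lives in the preset's thumbnail settings, so only the output pattern appears here:

import boto3

et = boto3.client('elastictranscoder')

et.create_job(
    PipelineId='1111111111111-abcde1',               # placeholder pipeline ID
    Input={'Key': 'uploads/video.mp4'},              # placeholder source key in the input bucket
    Outputs=[{
        'Key': 'transcoded/video.mp4',
        'PresetId': '1351620000001-000010',          # assumed generic preset; use your own
        'ThumbnailPattern': 'thumbs/video-{count}',  # produces video-00001, video-00002, ...
    }],
)

With a preset whose thumbnail interval exceeds the video length, only the first thumbnail file is produced, which is the behaviour described above.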
We have a .NET project.We're uploading video files directly to S3.How can we create a thumbnail of the video which is located in S3 Storage.Which service should we use, and can we do that in javasript or using AWSSDK library.Since the video is not uploaded to our servers we need to find a way using services.
How to get thumbnail of video which is uploaded on Amazon S3 Storage
No, it isn't. The fragment is only available to JS running in the browser -- it's never sent to any web server. There's an example here of one way to get it, as mentioned in "Authorization@Edge – How to Use Lambda@Edge and JSON Web Tokens to Enhance Web Application Security", which uses Lambda@Edge rather than API Gateway (the two services have some overlapping functionality).

(From the comments, the sample grabs the token client-side with var results = new RegExp('[\?&#]access_token=([^&#]*)').exec(location.href); before handing it on.)
I have an S3 website that I'm trying to password-protect using a Lambda function and CloudFront. When a user tries to access the site, the Lambda function will redirect them to my Cognito login page, then redirect back to the site with a token.When redirecting back, the access token is in the fragment (after "#"). Is it possible to obtain this token in the Lambda function using Node.js?
Access URL fragment in Lambda Node.js function
If you use any of the AWS SDKs to build your on-premise application, you give the application the IAM access keys (the access key ID and the secret access key); these often end up in your ~/.aws subdirectory, though the location varies per language. Then, each time your on-premise application calls any of the AWS functions through the SDK, the app provides the necessary keys.

These keys should only be given the bare minimum of rights to do what you need; in your case, that would be only the right to post messages to a particular SQS queue.
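In practice that usually means a named profile in ~/.aws/credentials on the server and code like this hedged boto3 sketch; the profile name, region and queue URL are placeholders:

import boto3

# Long-lived IAM user credentials come from the named profile on the server, not from the code
session = boto3.Session(profile_name='sqs-writer')     # placeholder profile
sqs = session.client('sqs', region_name='us-east-1')   # region is an assumption

sqs.send_message(
    QueueUrl='https://sqs.us-east-1.amazonaws.com/123456789012/on-prem-queue',  # placeholder
    MessageBody='event from on-prem server',
)

Unlike temporary session tokens, these IAM user keys do not expire every few hours, which addresses the refresh problem in the question.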
I am trying to post messages to SQS queue from on-prem servers. When I run it locally , I use AWS secret id and key to post messages to SQS. But this is something that I need to generate every few hours. If I want to deploy this solution to a server and not have to refresh the token every few hours , what is the solution that I must adopt?
Access SQS from On-Prem Servers
I managed to solve the problem by using the approach with a NAT gateway. I'm not sure why it did not work earlier; I changed the approach to first create the VPC and then create the ECS cluster and associate it with the previously created VPC.

I created an Elastic IP, a NAT gateway, and a VPC with private and public subnets as described in this article: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-public-private-vpc.html The NAT gateway is associated with the private subnet. I created the ECS cluster in the private subnet, created the load balancer and associated it with the public subnet, and modified the security group for RDS to allow traffic from the Elastic IP configured on the previously created NAT gateway.

With this setup any traffic from the application to RDS goes via the NAT, so I can set up security group rules to allow that traffic. On the other hand, the load balancer in the public subnet is able to communicate with the cluster that sits in the private network.
I have following setup:ECS (Fargate) cluster in VPC-1RDS in VPC-2My application running in ECS uses DNS name to connect to RDS however instead of private IP DNS resolves public IP address.In RDS I want to configure strict security rules to prevent connections from the outside world - I would like to limit it to only accept connections from VPC-1.I tried following things:peering both VPC-1 and VPC-2 - doesn't help, app running in ECS still resolves public IProuting all outbound traffic (0.0.0.0/0) from ECS cluster to a NAT gateway (instead internet gateway) and configuring security group in RDS to accept connections from elastic IP configured for NAT gateway - in this case my app doesn't even want to start, I suspect this is due to the fact that provisioning process fails due to the fact that outbound traffic is routed via NATall VPCs have "DNS resolution" and "DNS hostnames" set to "yes"I'm running out of ideas how to configure it correctly. As soon as I allow all inbound traffic (0.0.0.0/0) for my RDS everything starts to work fine but I don't want that.What am I missing here? Maybe I should use entirely different approach to secure access to my RDS?
AWS Fargate connection to RDS in a different VPC
Instead of returning a String value, you can return an Object or something like a Map (using Map as an example here):

public Map<String, String> handleRequest(Map<String, Object> input, Context context) {
    String brs = "42";
    String rsm = "123";
    Map<String, String> output = new HashMap<>();
    output.put("brs", brs);
    output.put("rsm", rsm);
    return output;
}

This way, if you include "ResultPath": "$.taskresult" in the state definition, you will get this result in the output (along with other elements):

"taskresult": {
    "brs": "42",
    "rsm": "123"
}
I am using a Lambda Function within Step Functions. The function is written in Java. I want to pass multiple values similiar as inthis question.The code of the function looks like so:public String handleRequest(S3Event event, Context context) { //other code... String eventJsonString = event.toJson(); JsonParser parser = new JsonParser(); JsonObject eventJson = parser.parse(eventJsonString).getAsJsonObject(); eventJson.addProperty("value1", value1); eventJson.addProperty("value2", value2); String output = eventJson.toString().replace("\\", ""); logger.log("output: " + output); return output; }The log in CloudWatch is as expected:output: { "Records": [ { "awsRegion": "eu-west-1", "eventName": "ObjectCreated:Put", "eventSource": "aws:s3", "eventTime": "1970-01-01T00:00:00.000Z", "eventVersion": "2.0", . . . }But when i go into the Step Function Console, i can see that the result was passed with escaped quotes:"{\"Records\":[{\"awsRegion\":\"eu-west-1\",\"eventName\":\"ObjectCreated:Put\",\"eventSource\":\"aws:s3\",\"eventTime\":\"1970-01-01T00:00:00.000Z\",\"eventVersion\":\"2.0\",...}This output can not be handled by the State Machine. I am not sure wether is this a only-java problem or also related to aws.How can i pass the json-string with unescaped quotes, so it can be used by following components?
Passing Json as result path in aws Step Functions using java without escaping quotes
I'm not sure this answers your question but, if you need to retrieve the application list for an object with a specific id, this is what you would do:

import boto3
from boto3.dynamodb.conditions import Key

DyDB = boto3.resource('dynamodb')
table = DyDB.Table('YourTableName')

response = table.query(
    ProjectionExpression='application',
    KeyConditionExpression=Key('id').eq(4)  # where 4 is the id you need to query
)

# this is just to test the result
for app in response['Items'][0]['application']:
    print(app['id'])

The response will give you back (in the Items list) the application attribute with the list of applications inside it.
I need some help to get an item from a nested JSONschema in DynamoDB. I will explain to you the schema and you can tell me if it's possible.The schema is:{ "updated_at": "2018/05/02 08:32:10", "created_at": "2018/05/02 08:32:10", "updated_by": "igor", "created_by": "igor", "application": [ { "name": "driver app", "features": [ { "name": "passenger list", "settings": [], "description": "feature for passenger list", "id": 2 } ], "id": 1, "url": "play store" }, { "name": "passsenger app", "features": [], "id": 2, "url": "play store" } ], "address": "New York", "id": 4, "url": "https://airlink.moovex.com", "name": "airlink", "service_locations": [ { "title": "IL", "latitude": 32, "longitude": 35 } ] }I need to fetch from my application list the object by id with a query.
How to search for an object in a list inside an object DynamoDB and Python
It's because this role doesn't include "ssm.amazonaws.com" in its trust relationship. After adding ssm to the trust relationship, it works. Refer to this.
I'd like to install thhe AWS SSM agent to my server to be monitored by CloudWatch and found that I have to create a managed-instance activation first as this article,Create a Managed-Instance Activation for a Hybrid Environment.It always shows an error message:"Not existing role: arn:aws:iam::75....:role/service-role/AmazonEC2RunCommandRoleForManagedInstances".It has the same error even I use my existing IAM role.Anything I need to do before creating activation? Or do I have to create a special role for this?
Error message "Not existing role" when creating activation on AWS
This is a known issue with the AWS CLI. stack-create-complete waits until the stack status is CREATE_COMPLETE. It polls every 5 seconds (not 30!) until a successful state has been reached, and exits with a return code of 255 after 120 failed checks. It was fixed here: https://github.com/aws/aws-cli/pull/2816

(From the comments, the asker was on aws-cli/1.11.126; upgrading the CLI with pip install --upgrade awscli was the suggested fix.)
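Another way around a client-side timeout is to kick off the deployment and then wait separately with a boto3 waiter, which keeps polling until the stack settles. A hedged sketch with a placeholder stack name and an assumed polling budget:

import boto3

cf = boto3.client('cloudformation')

waiter = cf.get_waiter('stack_update_complete')       # or 'stack_create_complete' for a new stack
waiter.wait(
    StackName='my-sam-stack',                         # placeholder
    WaiterConfig={'Delay': 30, 'MaxAttempts': 120},   # poll every 30 s, up to ~60 minutes
)

This decouples "submit the change" from "wait for CloudFront to finish", so a slow distribution update no longer breaks the deploy step.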
I am having issue with one of my cloudformation sam template. In that template, i have aAWS::CloudFront::Distributionblock, that takes more than 10 mins to complete.It seems that theaws cloudformation deploycommand just times out everytime its being run, it could be its default timeout. But how do i increase the timeout or somehow wait for the stack to be completed without exiting the cli command.On the cloudformation web console, the stack gets completed though, its just that the cli exits before it actually gets completed.
aws cloudformation deploy - how to increase wait time
ELB uses listeners. Every listener has an inbound port (the port you connect to on the ELB) and a target port (the port on the machine the traffic is forwarded to). If ABC.com can use a different port (let's say 8081) than DEF.com and XYZ.com, it is possible to create a listener that listens on port 443 (SSL) and is configured to send the traffic to port 8081.
I have an EC2 Instance which is having multiple virtual hosts and serve different websites on different domains. (Let say ABC.com, DEF.com, XYZ.com)For one specific domain let say ABC.com, its running on HTTP. I have been given free credit from AWS. Now I want to run this ABC.com on https without spending any money.So I have decided to use ELB as it will come with a free SSL. And I want to target that to ABC.com on my EC2 instance.I know that with ELB I can target to my instance or my IP. Is it possible to target just one virtual host somehow as this website is not my primary website on a server?
AWS ELB to target one virtual host
Revoking privileges on a schema does not automatically revoke the privileges granted on tables in that schema. Try executing REVOKE ALL PRIVILEGES ON ALL TABLES IN SCHEMA tbl FROM etlglue; along with your other REVOKE statements. Also note that default privileges (privileges applied by default to objects a user creates in a schema) can prevent a user from being dropped; the ALTER DEFAULT PRIVILEGES command can be used to remove them, and you can find them in the Redshift system table PG_DEFAULT_ACL.
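A sketch of the full cleanup sequence, driven here through psycopg2. The connection details are placeholders, the schema, database and user names come from the question, and if PG_DEFAULT_ACL shows default privileges for the user, an extra ALTER DEFAULT PRIVILEGES ... REVOKE step would be needed before the drop:
import psycopg2

# Placeholder connection details for the Redshift cluster.
conn = psycopg2.connect(
    host="my-cluster.example.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="db", user="admin", password="...",
)
conn.autocommit = True
cur = conn.cursor()

# Revoke table-level grants first, then schema- and database-level ones, then drop the user.
for stmt in [
    "REVOKE ALL PRIVILEGES ON ALL TABLES IN SCHEMA tbl FROM etlglue;",
    "REVOKE ALL PRIVILEGES ON SCHEMA tbl FROM etlglue;",
    "REVOKE ALL PRIVILEGES ON SCHEMA public FROM etlglue;",
    "REVOKE ALL PRIVILEGES ON DATABASE db FROM etlglue;",
    "DROP USER etlglue;",
]:
    cur.execute(stmt)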
I've created a user in Redshift for a database, then granted a few SELECT permissions in a schema. Now I need to delete the user, but I can't, because the system insists that even after revoking all permissions the user still has access to some object. When I create the user:
CREATE USER etlglue WITH PASSWORD '******';
grant select on all tables in schema tbl to etlglue;
Now when I try to drop:
REVOKE ALL PRIVILEGES ON SCHEMA tbl FROM etlglue;
REVOKE ALL PRIVILEGES ON SCHEMA public FROM etlglue;
REVOKE ALL PRIVILEGES ON DATABASE db FROM etlglue;
DROP USER etlglue;
I have even tried applying CASCADE to the REVOKE command, without success, and went through the documentation here. But the output:
drop a user in aws redshift
Well, that's embarrassing... Of course, you need to run make before running an sls deploy... If you don't do that, you'll always be deploying stale code. I'll forgive myself, because it's only my second day with Go, but it's silly all the same. I have updated my Makefile by adding deploy and install targets, like so:
build:
	dep ensure
	env GOOS=linux go build -ldflags="-s -w" -o bin/hello hello/main.go
	env GOOS=linux go build -ldflags="-s -w" -o bin/world world/main.go
deploy:
	sls deploy
install: build deploy
make install now builds, then deploys, preventing this issue from happening again.
I created a simple Go Lambda to play with, using the Serverless Framework. I expected (as per the documentation) that all output from fmt.Println or log.Println would show up in CloudWatch. But I don't see it. Here's an example of a line I put in purely for testing purposes:
func Handler(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    fmt.Println("Hello from lambda")
    (...)
I'm certain the permissions are correct, because I see the log group, and there are CloudWatch entries to view for this Lambda. I can actually see the log group being created in CloudFormation, so I'm sure that's not the issue. But I just don't see the output from any Println statements in CloudWatch. This is what I do see:
START RequestId: fd48461b-3ecd-11e8-9e32-594932db04f2 Version: $LATEST
END RequestId: fd48461b-3ecd-11e8-9e32-594932db04f2
REPORT RequestId: fd48461b-3ecd-11e8-9e32-594932db04f2 Duration: 13.82 ms Billed Duration: 100 ms Memory Size: 256 MB Max Memory Used: 21 MB
I've tried various other Print methods (like Printf), but you won't be surprised that didn't change anything. What am I missing?
fmt.Println output does not show up in CloudWatch logs
It's actually not a simple problem to solve. We've been using Lambda layers for a while; they are designed to solve exactly that issue, so you can share common code. The problem with Lambda layers is that you have to deploy twice when you change something inside your layer (the layer plus your Lambda function). That rapidly becomes a pain in the neck, and in terms of CI/CD you might also run into issues. We tried this for some time; now we're back to packaging the code and including it inside each Lambda. Not efficient if you want to avoid code duplication, but at least you don't hit the bugs that come from forgetting to deploy the dependency function.
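For reference, if the layer route is chosen anyway: a layer archive is extracted under /opt in the Lambda runtime, and anything placed under a top-level python/ directory in the layer zip ends up on sys.path. A minimal sketch, where the shared module name shared_utils and its functions are hypothetical:
# Assumed layer zip layout:
#   python/
#       shared_utils.py      (the shared parse_text_file(), parse_dates(), ... helpers)
#
# Function code, packaged and deployed separately from the layer:
import shared_utils  # importable because the layer's python/ directory is on sys.path

def handler(event, context):
    # Call the shared helper exactly as if it were bundled with this function.
    return {"parsed": shared_utils.parse_dates(event.get("dates", []))}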
I have several different Python APIs (i.e. Python scripts) that run using AWS Lambda. The standard approach is to generate a zip file including all the external libraries that are necessary for the Lambda function and then upload it to AWS. Now, I have some functions that are common between different APIs (e.g. custom utility functions such as parsing text files or dates). Currently, I am simply duplicating the file utils.py in every zip file. However, this approach is quite inefficient (I don't like duplicating code). I'd like to have an S3 bucket that contains all my shared .py files and have my APIs load them directly. Is this possible? A simple approach would be to download the files to a tmp folder and load them, but I am not sure this is the best/fastest way:
import boto3
client_s3 = boto3.client("s3")
client_s3.download_file("mybucket", "utils.py", "/tmp/utils.py")
Can this be done in a more elegant way?
Shared python libraries between multiple APIs on AWS
To run RDS on dedicated hardware, you need to create a VPC with dedicated tenancy and then launch the RDS instance into that VPC. You also need to choose a DB instance class that is an approved EC2 dedicated instance type, e.g. db.m3.medium. For more, see Working with a DB Instance in a VPC.
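A rough boto3 sketch of the two pieces; the CIDR block, identifiers, credentials and the DB subnet group name are placeholders, and the subnet group would have to be built from subnets created inside the dedicated VPC first:
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

# 1. Create a VPC whose instances run on single-tenant (dedicated) hardware.
ec2.create_vpc(CidrBlock="10.0.0.0/16", InstanceTenancy="dedicated")

# 2. Launch the RDS instance into a DB subnet group in that VPC, using an
#    instance class that maps to a dedicated-capable EC2 type.
rds.create_db_instance(
    DBInstanceIdentifier="my-dedicated-db",          # placeholder
    DBInstanceClass="db.m3.medium",
    Engine="mysql",
    MasterUsername="admin",                          # placeholder
    MasterUserPassword="change-me",                  # placeholder
    AllocatedStorage=20,
    DBSubnetGroupName="dedicated-vpc-subnet-group",  # placeholder
)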
AWS offers the option to run VMs on hardware hosts that are dedicated to a single customer (for compliance purposes, added security, etc.). This is available when using their Amazon EC2 Dedicated Instances. My question is: do they offer similar hardware-level single-tenancy in their managed DB services (AWS RDS, for example using Oracle or MySQL)? I looked for that option but cannot find it anywhere.
Can AWS RDS services run on hardware hosts dedicated to a single customer?
I heard back from AWS support and got a limit increase, but not as high as I'd like for my use case. It seems you should use filter policies very judiciously.
According to the AWS documentation on SNS limits, you can only have 100 message filter policies per account per region. I can ask for an increase, but I can't find any information on the web about the costs or upper limits for increasing the filter policy count, similar to the question answered here on topic count limits. If I'm hoping to increase this limit by A LOT, what should I expect in terms of pricing, and will it even be allowed? I plan to ask for the increase, but was disappointed not to find any information available on the web from others. I also would have posted this question in their forums, but I don't like their interface as much :)
SNS FilterPolicy Limit increase
There are three things to consider:
1. The first run of any query causes the query to be "compiled" by Redshift. This can take 2-20 seconds depending on how big the query is. Subsequent executions of the same query use the same compiled code, even if the WHERE clause parameters change; there is no re-compile.
2. Data is marked as "hot" when a query has been run against it, and is cached in Redshift memory. You cannot (reliably) clear this manually in any way except by restarting the cluster.
3. Redshift also caches results: depending on your Redshift parameters (enabled by default), it will quickly return the same result for the exact same query if the underlying data has not changed. If your query includes current_timestamp or similar, that stops it from being cached. This can be turned off with SET enable_result_cache_for_session TO OFF;.
Considering your issue, you may need to run some example queries to pre-compile them, or redesign your queries (I guess you have some dynamic query building going on that changes the shape of the query a lot). In my experience, more nodes will increase the compile time: this process happens on the leader node, not the data nodes, and is made more complex by having more data nodes to consider.
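If the pre-running workaround is the route taken, a minimal sketch of a warm-up script with psycopg2 (connection details and queries are placeholders; the result cache is disabled so the run exercises compilation rather than returning cached results):
import psycopg2

# Placeholder connection details.
conn = psycopg2.connect(
    host="my-cluster.example.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="warmup", password="...",
)
conn.autocommit = True
cur = conn.cursor()

# Make the run exercise compilation/execution instead of the result cache.
cur.execute("SET enable_result_cache_for_session TO OFF;")

# One representative query per distinct query shape the application generates.
for q in [
    "SELECT count(*) FROM sales WHERE sale_date >= '2018-01-01';",       # placeholder
    "SELECT region, sum(amount) FROM sales GROUP BY region LIMIT 10;",   # placeholder
]:
    cur.execute(q)
    cur.fetchall()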
Our Redshift queries are extremely slow during their first execution. Subsequent executions are much faster (e.g., 45 seconds -> 2 seconds). After investigating this problem, query compilation appears to be the culprit. This is a known issue and is even referenced on the AWS Query Planning And Execution Workflow and Factors Affecting Query Performance pages. Amazon itself is quite tight-lipped about how the query cache works (tl;dr it's a magic black box that you shouldn't worry about). One of the things we tried was increasing the number of nodes we had; however, we didn't expect it to solve anything, seeing as query compilation is a single-node operation anyway. It did not solve anything, but it was a fun diversion for a bit. As noted, this is a known issue; however, anywhere it is discussed online, the only takeaway is either "this is just something you have to live with using Redshift" or "here's a super kludgy workaround that only works part of the time because we don't know how the query cache works". Is there anything we can do to speed up the compilation process or otherwise deal with this? So far about the best solution that's been found is "pre-run every query you might expect to run in a given day on a schedule", which is... not great, especially given how little we know about how the query cache works.
First-run of queries are extremely slow
Use the replSetReconfig replication command. The replSetReconfig command modifies the configuration of an existing replica set. You can use this command to add and remove members, and to alter the options set on existing members. Use the following syntax:
result = db.command('replSetGetConfig')
config = result['config']
max_member_id = max(member['_id'] for member in config['members'])
config['members'].append(
    {'_id': max_member_id + 1, 'host': 'mongodbd4.example.net:27017'}
)
config['version'] += 1  # update config version
db.command('replSetReconfig', config)
I am creating a replica set using pymongo with this code sample:
client = MongoClient(allIps[0]+':27017', username='mongo-admin', password='${mongo_password}', authSource='admin')
db = client.admin
config = {'_id': 'Harmony-demo', 'members': [
    {'_id': 0, 'host': allIps[0]+':27017'},
    {'_id': 1, 'host': allIps[1]+':27017'},
    {'_id': 2, 'host': allIps[2]+':27017'}]}
db.command("replSetInitiate", config)
Now, if one of my nodes goes down in the future and I want to add a new host to this replica set, again using pymongo, I am unable to do so: it gives me an error that the replica set is already initialized. I can do it from the mongo shell with rs.add( { host: "mongodbd4.example.net:27017" } ), but I want to do the same in Python and haven't found anything in the pymongo documentation.
How to add a new node in mongo to an already initialized replica set using pymongo?
It looks like the issue was that I needed to tell AWS I wanted to REPLACE the metadata. Adding the following line finally allowed me to change the metadata:
copyObjectInput.SetMetadataDirective("REPLACE")
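For comparison, the same point in boto3 terms: the metadata on a copy is only rewritten when MetadataDirective is set to REPLACE. A small sketch with placeholder bucket and key names:
import boto3

s3 = boto3.client("s3")

s3.copy_object(
    CopySource={"Bucket": "my-bucket", "Key": "source/report.pdf"},  # placeholders
    Bucket="my-bucket",
    Key="dest/report.pdf",
    ContentType="application/pdf",
    ContentDisposition='inline; filename="report.pdf"',
    MetadataDirective="REPLACE",  # without this, the copy keeps the source object's metadata
)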
The title says most of it. I have the following code:
copySource := bucket + "/" + sourcePath + "/" + filenameIn
destPath := lambdaParams.DestinationPath + "/" + filenameIn
copyObjectInput := s3.CopyObjectInput{
    CopySource: aws.String(copySource),
    Bucket:     aws.String(bucket),
    Key:        aws.String(destPath),
}
if filepath.Ext(filenameIn) == ".pdf" {
    copyObjectInput.SetContentType("application/pdf").SetContentDisposition("inline; filename=\"" + filenameIn + "\"")
}
_, err := svc.CopyObject(&copyObjectInput)
if err != nil {
    logErrorAndInformGFS(err, "S3 copy error.", c, log, filenameIn)
    return err
}
I am setting both the Content-Type and the Content-Disposition in the hope that the copied object will have the new Content-Type and Content-Disposition values. However, I can see in AWS that the copied file has the same metadata as the original file. What am I leaving out?
Setting Content Disposition and Content Type in AWS SDK (golang) has no effect
You can use $() for command substitution: TAG=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | head -c 8). For example:
version: 0.2
phases:
  install:
    commands:
      - echo Entered the install phase...
      - TAG=$(echo "This is test")
  pre_build:
    commands:
      - echo $TAG
  build:
    commands:
      - echo Entered the build phase...
      - echo Build started on $TAG
Logs:
[Container] 2018/03/17 16:15:31 Running command TAG=$(echo "This is test")
[Container] 2018/03/17 16:15:31 Entering phase PRE_BUILD
[Container] 2018/03/17 16:15:31 Running command echo $TAG
This is test
I am trying to build a Docker image whenever there is a push to my source code, and to push the image to ECR (EC2 Container Registry). I have tried the following buildspec file:
version: 0.2
env:
  variables:
    IMG: "app"
    REPO: "<<zzzzzzzz>>.dkr.ecr.us-east-1.amazonaws.com/app"
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login --region us-east-1
      - TAG=echo $CODEBUILD_RESOLVED_SOURCE_VERSION | head -c 8
  build:
    commands:
      - echo $TAG
      - docker build -t $IMG:$TAG .
      - docker tag $IMG:$TAG $REPO:$TAG
  post_build:
    commands:
      - docker push $REPO:$TAG
      - printf Image":"%s:%s" $REPO $TAG > build.json
artifacts:
  files: build.json
  discard-paths: yes
When I build this, I receive the error invalid reference format at docker build -t. I looked into the documentation and found no help.
How to assign output of a command to a variable in CodeBuild
(From my comment:) This sounds like a bug in the EB CLI. The EB CLI somehow overwrote the platform name in the .config.yml file in an incorrect form. You can easily fix this by replacing the default_platform field in the .elasticbeanstalk/config.yml file with "node.js", or "64bit Amazon Linux 2017.09 v4.4.5 running Node.js".
I'm setting up Elastic Beanstalk for a Node server. When running the 'eb create' command I'm getting the following error: ERROR: NotFoundError - Platform Node.js running on 64bit Amazon Linux does not appear to be valid. I can't find much about it online, and Node.js running on 64-bit Amazon Linux is how my instance is set up, so I'm not sure why it is marked as invalid.
Elastic Beanstalk platform error on eb create
I cross-checked all the other policies attached to this user, and apparently there was a Deny policy attached that was explicitly denying the access. Removed this policy and it worked!
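If you need to track down which policy carries the explicit deny, the IAM policy simulator can be driven from code; a sketch using boto3, where the account ID in the user ARN is a placeholder:
import boto3

iam = boto3.client("iam")

resp = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789012:user/ABC",  # placeholder user ARN
    ActionNames=["firehose:CreateDeliveryStream"],
)

for result in resp["EvaluationResults"]:
    # An EvalDecision of 'explicitDeny' means some attached statement denies the action;
    # MatchedStatements shows which policy the decision came from.
    print(result["EvalActionName"], result["EvalDecision"])
    for stmt in result.get("MatchedStatements", []):
        print("  matched:", stmt.get("SourcePolicyId"))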
I am trying to create a Firehose delivery stream from an EC2 micro instance. The AWS CLI is configured with the access keys of an IAM user ABC. This user has an AWS policy attached with full access to Firehose (policy copied below). Still, the stream creation fails with the error AccessDeniedException: iam user ABC not authorized to perform: firehose:CreateDeliveryStream on resource xxxx with an explicit deny.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "firehose:*",
                "firehose:CreateDeliveryStream"
            ],
            "Resource": [
                "arn:aws:firehose:us-east-1:<ACC_ID>:deliverystream/*",
                "arn:aws:firehose:us-east-1:<ACC_ID>:*",
                "arn:aws:firehose:*:<ACC_ID>:*",
                "arn:aws:firehose:*:<ACC_ID>:deliverystream/*"
            ]
        }
    ]
}
Do I need to add more permissions to this IAM user to allow it to create delivery streams?
Iam user not authorized to perform: firehose:CreateDeliveryStream on resource xxxx with an explicit deny
First things first: a NAT Gateway in the public subnet allows instances in the private subnet to reach the internet (for software updates etc.) via the Internet Gateway. The NAT Gateway plays no role in SSHing into an instance. Try this to test:
1. Attach the default NACL (which allows all inbound and outbound traffic) to the public and private subnets where your EC2 instances reside.
2. Create two security groups, one for the instance in the public subnet (say Pub-SG) and one for the instance in the private subnet (Prv-SG); security groups are attached to instances, not subnets.
3. In Pub-SG, allow SSH from everywhere or from a specific IP.
4. In Prv-SG, allow SSH with Pub-SG as the source, for better security.
If both instances are launched using the same key pair, you can then connect to the private instance through the public instance using SSH agent forwarding.
I have set up the VPC configuration below, but SSH to the instance is not working at the moment:
1. Created a new VPC
2. Created a public and a private subnet
3. Launched an EC2 instance into the public subnet and updated the route table for the Internet Gateway
4. Launched an EC2 instance into the private subnet
5. Associated a NAT Gateway (with an EIP) with the public subnet
6. Updated the route table for the private subnet with the NAT Gateway
SSH from the public instance to the private instance is not working with the key pair. Can you let me know what I have missed here?
AWS: SSH to private subnet EC2 instance from public subnet EC2 instance via NAT gateway is not happening
According to the documentation, !Ref YourQueueLogicalResourceName returns the queue URL, for example https://sqs.us-east-2.amazonaws.com/123456789012/ab1-MyQueue-A2BCDEF3GHI4. So you can pass the !Ref of the queue's logical ID straight into the function's environment variables.
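On the function side the variable is then just read from the environment; a minimal sketch, assuming the SAM template sets an environment variable named QUEUE_URL to the !Ref of the queue:
import os
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["QUEUE_URL"]  # assumed to be populated via !Ref in the SAM template

def handler(event, context):
    # The value of !Ref is already the full queue URL that SQS expects.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody="hello from lambda")
    return {"statusCode": 200}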
How can I put the SQS queue URL into the Lambda environment variables? For example, we can get the ARN for DynamoDB streams with !GetAtt SuperTable.StreamArn. I want to do something like that for the SQS URL.
AWS: get SQS url in SAM template.yml file
According to the documentation (https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html), these lines can be removed. %FIREHOSE_BUCKET_NAME% is just a placeholder value that the console inserts into the generated policy; the % has no special meaning in IAM, and the placeholder is intended to be replaced with your actual bucket name (or the statement removed entirely).
When I use Firehose and enable logging, it automatically generates the following lines for the IAM policy:
Statement:
  - Sid: ''
    Effect: Allow
    Action:
      - s3:AbortMultipartUpload
      - s3:GetBucketLocation
      - s3:GetObject
      - s3:ListBucket
      - s3:ListBucketMultipartUploads
      - s3:PutObject
    Resource:
      - arn:aws:s3:::%FIREHOSE_BUCKET_NAME%
      - arn:aws:s3:::%FIREHOSE_BUCKET_NAME%/*
What I cannot understand is what the % means in the above, i.e. in %FIREHOSE_BUCKET_NAME%. Can anyone explain it?
What does the percent symbol (%) in an IAM policy mean
Verifying queue existence is a good use case for GetQueueUrl; it returns the queue URL if the queue exists, or an error if it doesn't, and it does so without sending a message.
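The same call exists across the SDKs, including the .NET SDK used by the question; sketched here with boto3 for brevity (the queue name is a placeholder), separating the non-existent-queue error code from other failures such as bad credentials:
import boto3
from botocore.exceptions import ClientError

sqs = boto3.client("sqs")

def queue_is_reachable(queue_name):
    """Return True if the queue exists and the configured credentials can see it."""
    try:
        sqs.get_queue_url(QueueName=queue_name)  # no message is sent
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "AWS.SimpleQueueService.NonExistentQueue":
            return False  # valid credentials, but no queue with that name
        raise  # anything else (e.g. invalid credentials) should surface to the caller

# Example: called from the wrapper's initialization hook.
# assert queue_is_reachable("my-queue")  # placeholder queue name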
I've got a C# application which uses AWS SQS. I'm using SQS inside a wrapper/adapter component to hide away the messy details, and this component has an initialization hook that I'd like to fill in to make sure that the connection is valid. I don't see anything in the API for this besides sending a message. I'd like to verify that the credentials are valid and that the queue exists at the specified URL at application startup, if possible, and without sending a message.I can think of some hacky ways to possibly do this (for instance, try sending an invalid message, such as one over the size limit) but would rather not go that route. What am I missing?
How to test an AWS SQS connection?