Columns: Response (string, 8–2k characters), Instruction (string, 18–2k characters), Prompt (string, 14–160 characters)
You can use this to obtain a list of objects with a LastModified before a given date:

    aws s3api list-objects --bucket my-bucket --query "Contents[?LastModified<='2019-03-13'].[Key]" --output text

Note that it uses s3api rather than s3, which has access to more information. You could then take the results and pump them into aws s3 rm to delete the objects.

Frankly, if you wish to get fine-grained like this, I would recommend using Python instead of bash. It would be something like:

    import boto3
    from datetime import datetime, timezone

    s3 = boto3.client('s3', region_name='ap-southeast-2')
    response = s3.list_objects_v2(Bucket='my-bucket')
    # LastModified is timezone-aware, so compare it against an aware datetime
    keys_to_delete = [{'Key': object['Key']}
                      for object in response['Contents']
                      if object['LastModified'] < datetime(2019, 3, 13, tzinfo=timezone.utc)]
    s3.delete_objects(Bucket='my-bucket', Delete={'Objects': keys_to_delete})
How do you delete multiple S3 files with a Last Modified date condition? I have this folder structure on S3:

    dentca-lab-dev-sample
      2019-03-13
        file1  Last modified: Mar 13, 2019 2:34:06 PM GMT-0700
        file2  Last modified: Mar 13, 2019 3:18:01 PM GMT-0700
        file3  Last modified: Mar 13, 2019 2:34:30 PM GMT-0700
        file4  Last modified: Mar 13, 2019 2:32:40 PM GMT-0700

and I want to delete the files (this is just a sample) modified before Mar 13, 2019 2:34:30 PM, so I made this bash script, but it's not working:

    aws s3 ls --recursive s3://dentca-lab-dev-sample/2019-03-13/ | awk '$1 <= "2019-03-13 14:34:30" {print $4}'

(ls is just for testing; I will change it to rm.) I also have this script for testing:

    aws s3 ls --recursive s3://dentca-lab-dev-sample/2019-03-13/

output:

    2019-03-13 14:34:06 11656584 2019-03-13/mandibular.stl
    2019-03-13 15:18:01 11969184 2019-03-13/maxillary.stl
    2019-03-13 14:34:30  9169657 2019-03-13/obj.obj
    2019-03-13 14:32:40 15690284 2019-03-13/upperAIO_50005.stl

but when I add the awk condition it doesn't work. Maybe that's because $1 only catches the argument 2019-03-13 and I'm comparing it to 2019-03-13 14:34:30. I also tried awk '$1 $2 <= "2019-03-13 14:34:30" {print $4}' to catch the second argument, but still got nothing. It's my first bash script, by the way. Thank you! I used this as a reference: aws cli s3 bucket remove object with date condition
Delete multiple s3 bucket files with Last Modified date condition
I would suggest WebSockets as a method to push the notification back to the browser, instead of letting the browser poll (i.e., periodically sending a GetObject API call) for the PDF file in S3. This approach also lets you notify the browser if an error occurs during PDF generation. For more details, please watch https://www.youtube.com/watch?v=3SCdzzD0PdQ (from 6:40). At 10:27 you will find a diagram that matches what you are trying to achieve (replace the DynamoDB component with S3).

I also think the WebSocket-based approach is cheaper than the polling approach, comparing S3 pricing [1] vs WebSocket pricing [2], but you will need to conduct a test (which reflects a production workload) to validate this.

[1] https://aws.amazon.com/s3/pricing/#Request_pricing
[2] "WebSocket APIs" in https://aws.amazon.com/api-gateway/pricing/
I would like to know if AWS SQS is the right service for doing browser polling. For example:

1) A user accesses the application through a browser and requests a large PDF to be generated.
2) The API responds "OK" to the user and forwards the request to SQS.
3) The SQS queue is read by a Lambda which generates the PDF and stores it in S3.

Now, at some point between steps 2 and 3, the user's browser wants to know when the PDF is done (no email). It could do this by polling SQS for a specific message ID (is this even possible?), but I have some questions:

a) Is it "okay" for both the user and the Lambda to be reading the same message from SQS? And what about too many users overloading SQS with polling requests?
b) Can an SQS message be edited/updated? How would the user know that the Lambda finished the PDF and get the download link? Can the Lambda edit the message so that it contains the link to S3? If not, what would be the recommended way/AWS service for the user to know when the PDF is done without wasting too many resources?

And preferably without needing a database just for this... We really don't have too many users, but we're trying to make things right and future-proof. Tagging boto as I'm doing all this in Python... eventually.
Amazon SQS for browser polling?
Make sure you use the latest version of boto3, if you use it at all. Anyway, add "Rotate": "AUTO" to the VideoSelector in your inputs. In that case, EMC (Elemental MediaConvert) will try to automatically rotate the video based on metadata, if it's available. These links were really useful for me:

https://www.mandsconsulting.com/lambda-functions-with-newer-version-of-boto3-than-available-by-default/
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/mediaconvert.html
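For illustration, a minimal Python sketch of where that setting sits in the MediaConvert job settings dict (the bucket/file names are placeholders, and the output groups are assumed to stay as in your existing job):

    # Sketch of the relevant part of the job settings passed to create_job
    settings = {
        'Inputs': [{
            'FileInput': 's3://s3-input/video.mp4',       # placeholder source object
            'VideoSelector': {
                'Rotate': 'AUTO',                         # rotate automatically from the video's metadata
            },
        }],
        # ... keep your existing OutputGroups for the HLS output here ...
    }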
I'm using the AWS Elemental MediaConvert service to get the HLS format of an uploaded video. We are using this as a video-on-demand service. Everything works fine: a video uploaded to the 's3-input' bucket is picked up by a Lambda and processed by the boto3 Elemental MediaConvert client, and the output is stored in the 's3-output' bucket. The one problem is that portrait videos appear in landscape mode in the 's3-output' bucket, and also when the HLS URL is played in a mobile browser.
Portrait video converted to Landscape in AWS Elemental Mediaconvert
I don't think anyone knows besides AWS, but it does make sense for Lex to use the power behind AWS Transcribe (and AWS Polly for returning speech from Lex). Speaking from personal experience: until about a month before Transcribe was announced at re:Invent 2017, I was using Lex to perform STT (speech to text). This was possible because intents could be ignored and passed on to the Lambda handler; the JSON packet given to the Lambda handler contained the recognized speech (as text), and I returned that back to the caller. However, sometime after they announced Transcribe this stopped working, in that intents could no longer be ignored. Any input besides those in the intents would return the configured error response. My guess is they stopped this as they launched Transcribe.

Addendum: AWS Transcribe is pure ASR (automatic speech recognition, or speech to text). It returns the recognized speech and metadata (confidence, etc.). With AWS Lex you can design your own bots to auto-respond to queries (as in Alexa).
I am trying to determine if AWS Lex uses AWS Transcribe for prompt confirmations. For example, Lex asks "What's your phone number?", the user responds with "1-2-3-4". Lex then asks, "Did you say 1-2-3-4?". What does Lex use behind the scenes as an ASR to determine the user said "1-2-3-4"? Is it AWS Transcribe or something different?
Does AWS Lex use AWS Transcribe as the ASR for prompt recognition?
You can use put_object with file-like objects. It returns VersionId in the response dictionary.
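A quick sketch of what that looks like (bucket and key names are placeholders; VersionId is only present when versioning is enabled on the bucket):

    import boto3

    s3 = boto3.client("s3")
    with open("local-file.bin", "rb") as fh:              # any file-like object works
        response = s3.put_object(Bucket="my-bucket", Key="some/key", Body=fh)

    print(response.get("VersionId"))                      # version assigned to this upload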
I want to upload a file to a version-enabled S3 bucket and need its version number, ideally without a separate API call, to avoid any possibility of a race condition. I'm using the following code snippet for the upload (which is working fine):

    s3 = boto3.client("s3")
    s3.upload_fileobj(file_handle, bucket_name, key)

The response of this function is None and I can't really see how it is defined in boto3, so it's hard to dive any deeper into it. The official S3 documentation mentions that the version id is included in the header of the response after upload. However, I can't see how I can access this header with boto3. Is this possible at all? If yes: how? If no: how can I hack boto3 so I can access this response header? FYI, I'm using boto3==1.9.64. Thanks for your help!

EDIT: Here is the link to the S3 documentation that talks about the x-amz-version-id header.
Python boto3: receive versionId after uploading a file to S3
When you say "ec2 configuration" I assume you mean the AWS Elastic Beanstalk application configuration; there is no EC2 configuration that you can save in AWS EB.

When you save an AWS EB configuration, its scope is within the application. You won't see a saved configuration of Application-A in Application-B, and that does make sense. If you want to see it there, you will need to copy the configuration from Application-A to Application-B in S3.

AWS EB configurations are saved in your S3. So if you want to copy the configuration of Application-A into Application-B, here are the steps (replace the region and account id with yours):

1. Go to the Application-A environment and save the configuration.
2. Go to the S3 bucket of your AWS EB: s3://elasticbeanstalk-eu-west-1-598636547766/resources/templates/APPLICATION-A/
3. Copy the configuration you saved.
4. Go to s3://elasticbeanstalk-eu-west-1-598636547766/resources/templates/APPLICATION-B/ (if there is no APPLICATION-B folder, create one).
5. Paste the configuration.
6. Go to the Application-B environment and click Saved Configurations; now you should see the config.
In AWS - Elastic Beanstalk, I am trying to move an ec2 instance from one application into another. I saved the environment configuration for the ec2 instance. Then I went to the application that I want to move it to, but when I go to saved configurations, it doesn't show up. Did I miss a step somewhere? I thought I was able to launch new environments from saved configurations.On the original ec2 instance, I clicked on "Actions" then "Save Configuration". Then I clicked on the application that I want to move the ec2 instance to, and clicked on "Saved Configurations". The saved configuration doesn't show up. I also clicked on Load, but it doesn't give me the option for the Environment I'm trying to move.
AWS - How do I load a saved environment configuration from one application to another application
This issue once caused me a lot of pain to get around. All I had to do was add a header to the Postman request:

    Content-Type: binary/octet-stream

Once I changed this, the file uploaded successfully. Hope this saves someone a lot of trouble down the road.
How do I upload a file to S3 with a signed URL? I tried the following:

    const AWS = require('aws-sdk');
    const s3 = new AWS.S3({
      accessKeyId: "",
      secretAccessKey: ""
    });
    const url = s3.getSignedUrl("putObject", {
      Bucket: "SomeBucketHere",
      Key: "SomeNameHere",
      ContentType: "binary/octet-stream",
      Expires: 600
    });

But when I try uploading with Postman using the following steps, I get the SignatureDoesNotMatch error:

1. PUT method with the URL from the above code
2. Body: binary (radio button), choose file, select a file to upload
3. Hit Send

I can confirm that IAM permissions are not the problem here; I have complete access to the bucket. What's wrong, and how do I test my signed URL?
Can't upload file to S3 with Postman using pre-signed URL. Error: SignatureDoesNotMatch
The assigned dyyyexample.cloudfront.net and dzzzexample.cloudfront.net hostnames that route traffic to your CloudFront distributions go to the same place. CloudFront can't see your DNS alias entries, so it is unaware of which alias was followed. Instead, it looks at the TLS SNI and the HTTP Host header the browser sends, and uses this information to match the Alternate Domain Name for your distribution -- with no change to the DNS.

Your site's hostname, example.com, is only configured as the Alternate Domain Name on one of your distributions, because CloudFront does not allow you to provision the same value on more than one distribution. If you swap that Alternate Domain Name entry to the other distribution, all traffic will move to the other distribution.

In short, CloudFront does not directly and natively support blue/green or canary deployments. The workaround is to use a Lambda@Edge trigger and a cookie to latch each viewer to one origin or another. A Lambda@Edge origin request trigger allows the origin to be changed while the request is in flight.

There is an A/B testing example in the docs, but that example swaps out the path. See the Dynamic Origin Selection examples for how to swap out the origin. Combining the logic of these two allows A/B testing across two buckets (or any two alternate back-ends).
I am currently implementing canary release and blue/green deployment for my static website on AWS S3. Basically, I created two S3 buckets (v1 and v2) and two CloudFront distributions (I didn't append the CNAME). Then I created two A alias records in Route 53 with a 50% weight each in a weighted routing policy. However, I was routed to v1 only, using both my laptop and my mobile to access the domain. I even asked colleagues to open my domain and they were routed to v1 as well. It really puzzles me why no user is being routed to v2. (AWS static web site in S3.)
Canary Release and Blue Green Deployment on AWS
It looks like it isn't possible to specify a different OutputPath per choice for one state, and the solution with proxy states doesn't look graceful. I solved this issue another way, in the state before ChoiceStateX: I set instances of the different types in an output property and only route on it in the ChoiceStateX state.

My input to the ChoiceStateX state looks like:

    {
      "value": value,
      "output": value==0 ? object1 : object2
    }

And the final version of the ChoiceStateX state:

    "ChoiceStateX": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.value",
          "NumericEquals": 0,
          "Next": "ValueIsZero"
        }
      ],
      "OutputPath": "$.output",
      "Default": "DefaultState"
    }

It still isn't perfect, because I implement the same logic in two places.
Let's say part of my Step Function looks like this:

    "ChoiceStateX": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.value",
          "NumericEquals": 0,
          "Next": "ValueIsZero"
        }
      ],
      "Default": "DefaultState"
    },
    "ValueIsZero": {
      "Type" : "Task",
      "Resource": "arn:aws:lambda:******:function:Zero",
      "Next": "NextState"
    },
    "DefaultState": {
      "Type" : "Task",
      "Resource": "arn:aws:lambda:******:function:NotZero",
      "Next": "NextState"
    }

Let's assume that the input to this state is:

    {
      "value": 0,
      "output1": object1,
      "output2": object2,
    }

My issue is that I have to pass output1 to the ValueIsZero state and output2 to the DefaultState. I know that it is possible to change InputPath in the ValueIsZero and DefaultState states, but this isn't acceptable for me because I am calling these states from some other states as well. I tried to modify the ChoiceStateX state like this:

    "ChoiceStateX": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.value",
          "NumericEquals": 0,
          "OutputPath": "$.output1",
          "Next": "ValueIsZero"
        }
      ],
      "Default": "DefaultState"
    }

In that case I got the error: Field OutputPath is not supported.

How is it possible to implement this functionality?

PS: At the moment I am using 'proxy' states between ChoiceStateX and ValueIsZero/DefaultState where I modify the output. I have checked Input and Output Processing and Choice, but haven't found a solution yet.
How to pass different output from Choice state in AWS Step Function?
One of the core concepts behind signed URLs is that they are not vulnerable to tampering -- you can't change a signed URL and have it remain valid.

"CloudFront uses the public key to validate the signature and confirm that the URL hasn't been tampered with. If the signature is invalid, the request is rejected. ... Signed CloudFront URLs cannot contain extra query string arguments. If you add a query string to a signed URL after you create it, the URL returns an HTTP 403 status."

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-urls.html

To add a query string parameter to a CloudFront signed URL, you need to add it before signing the URL... because the addition will change the signature.
I would like to add a query string parameter to my CloudFront URL to be able to get some additional info into the CloudFront log. I have two distributions, one signed and one not signed, pointing to two different S3 buckets (one with audio, one with images). Access to both distributions works fine without added query strings, but if I add a query parameter like the test one below:

    https://x.cloudfront.net/audio.m4a?li=...62&Expires=1544430879&Signature=...QTQ__&Key-Pair-Id=xxx&test=fail
    https://y.cloudfront.net/image.jpg?test=allgood

the first one fails (Access Denied) but the second one works fine. Neither of the distributions forwards the query string to S3. The signed audio distribution has logging enabled while the image distribution doesn't; besides this, their setups are the same. What do I need to do to get the audio distribution to accept my custom query parameter? Thanks /o
How to add a query string parameter to Cloudfront?
As you clearly understood, AWS roles serve the purpose of authentication (with IAM policies for authorization) for AWS services. In contrast, an AWS IAM user maps directly to a human user who obtains credentials to log in to the AWS Management Console.

However, granting access to a user outside the AWS account (e.g., cross-account access, AD authentication federation) requires an IAM role for them to assume the permissions. Referring to the documentation you shared, it's not a direct IAM user who gets the permission; rather, an (external) Active Directory user assumes an IAM role (not a direct IAM user) to get access to the AWS resources.
According to the official AWS documentation, IAM roles can also be attached to IAM users, and not only services. What would be a valid use case for assigning an IAM role to an IAM user? Aren't all the cases covered by directly granting (allow/deny) IAM policies to the users? TBH my initial impression was that IAM roles served the purpose of authorization for AWS services (so that they can interact with other services), since the latter cannot be addressed in the user context.
AWS: Assigning IAM roles to IAM users
The solution is at hand. I suggest that you do the following:

1. Create a separate template for the global resources (yes, I know that you don't like it, but it works well in my experience).
2. Store references to the shared global resources in SSM using AWS::SSM::Parameter.
3. Deploy regional stacks and de-reference the global resources (either using Parameters, such as AWS::SSM::Parameter::Value<String>, or a dynamic reference, e.g. {{resolve:ssm:S3AccessControl:2}}).

You can use either StackSets for your regional stack deployments or create a parameterized build script that deploys the regional stacks one at a time (to be executed either locally or, preferably, by your CI/CD server).
I have a CloudFormation stack template that includes regional resources (lambdas, APIs, topics, etc.) and global resources (users, policies, Route 53, CloudFront, DynamoDB global tables, etc.) and I want to deploy it to multiple regions in the same AWS account. I can't directly deploy this stack template in multiple regions because the global resources will already exist after the first creation. I know I could split everything into two separate stack templates, but I would prefer to avoid this and keep everything in a single stack template. I saw that I could probably use CF Conditions + Parameters to toggle global resource creation only on first creation, but that doesn't look very good... I was wondering if I could leverage some CloudFormation feature like StackSets or something else to achieve this. Any idea on what would be the proper way to do this?
What is the proper way to deploy a multi-region CloudFormation stack that includes global resources?
    import io
    import boto3
    import xlsxwriter
    import pandas as pd

    bucket = 'your-s3-bucketname'
    filepath = 'path/to/your/file.format'

    df = pd.DataFrame({'Data': [10, 20, 30, 20, 15, 30, 45]})

    with io.BytesIO() as output:
        with pd.ExcelWriter(output, engine='xlsxwriter') as writer:
            df.to_excel(writer, 'sheet_name')
        data = output.getvalue()

    s3 = boto3.resource('s3')
    s3.Bucket(bucket).put_object(Key=filepath, Body=data)
I have a project where I need to write dataframes to xlsx in an S3 bucket. It's quite simple to load a file from S3 with pandas:

    df = pd.read_excel('s3://path/file.xlsx')

But writing a file to S3 gives me problems:

    import pandas as pd

    # Create a Pandas dataframe from the data.
    df = pd.DataFrame({'Data': [10, 20, 30, 20, 15, 30, 45]})

    # Create a Pandas Excel writer using XlsxWriter as the engine.
    writer = pd.ExcelWriter('s3://path/', engine='xlsxwriter')
    df.to_excel(writer, sheet_name='Sheet1')
    writer.save()

    FileNotFoundError: [Errno 2] No such file or directory: 's3://path'

So how can I write xlsx files to S3 with pandas, preferably with tabs?
xlsx pandas write to s3 (with tabs)
I once had the same issue and had to read through the AWS docs.

Configure a CRL: configure a certificate revocation list (CRL) if you want ACM PCA to maintain one for the certificates revoked by your private CA. If you want to create a CRL, do the following:

1. Choose Enable CRL distribution.
2. To create a new S3 bucket for your CRL entries, choose Yes for the "Create a new S3 bucket" option and enter a unique bucket name. Otherwise, choose No and select an existing bucket from the list.

If you choose Yes, ACM PCA creates the necessary bucket policy for you. If you choose No, make sure the following policy is attached to your bucket:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "acm-pca.amazonaws.com"
          },
          "Action": [
            "s3:PutObject",
            "s3:PutObjectAcl",
            "s3:GetBucketAcl",
            "s3:GetBucketLocation"
          ],
          "Resource": [
            "arn:aws:s3:::your-bucket-name/*",
            "arn:aws:s3:::your-bucket-name"
          ]
        }
      ]
    }

(From the AWS docs.)
In order to update the SSL certificate on AWS, a CA is required for the CSR. When I try to configure and create the CA, I get this message:

    ValidationException
    The ACM Private CA Service Principal 'acm-pca.amazonaws.com' requires 's3:GetBucketLocation' permissions for your S3 bucket 'MyBucket'. Check your S3 bucket permissions and try again

To move forward with this, the permission settings on Amazon S3 > MyBucket > Permissions > Bucket Policy are:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "PublicReadGetObject",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::MyBucket/*"
        }
      ]
    }

According to the documentation found here, https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETlocation.html, LocationConstraint is required. How do I solve the "s3:GetBucketLocation" issue and create the CA?
How to allow the GetBucketLocation permission on an S3 bucket in order to create a CA
If it's only regex validation, without having to check the input against data in a data source, then you can prepend some validation logic to the resolver's request mapping template.

Below is an example that checks whether the input field is an email from myvaliddomain.com. If it doesn't validate, we just abort and error the field.

    #set($valid = $util.matches("^[a-zA-Z0-9_.+-]+@(?:(?:[a-zA-Z0-9-]+\.)?[a-zA-Z]+\.)?(myvaliddomain)\.com", $ctx.args.input))
    #if (!$valid)
      $util.error("$ctx.args.input is not a valid email.", "ValidationError")
    #end

    ## Rest of your request mapping template below
Is it possible to do input validation with AWS AppSync without adding another "layer" of interaction?I feel like adding a lambda function will defeat the purpose of it.What I would like to accomplish is at least some regexp validation on strings.And if not, then how do people that use AppSync or similar solutions (firebase) do so?
Input validation with AWS AppSync
1) Your Lambda role

Create your Lambda function and, rather than using a template, choose to create a new role and give it a name (let's say MyLambdaRole). Go into IAM, open the Roles menu, and find the MyLambdaRole role. Attach the following policy: AmazonDynamoDBFullAccess.

Note that the Lambda will already have CloudWatch access by default. AmazonDynamoDBFullAccess has more permissions than you strictly need, but it's not unreasonable. If you do want finer-grained permissions, I would at least get it working with AmazonDynamoDBFullAccess and go from there. Also note you shouldn't need any Cognito permissions at all, as all you will be doing is parsing the data sent to you by Cognito.

Make sure you import the relevant DynamoDB library in your script. For example:

    exports.handler = (event, context, callback) => {
        // Load the AWS SDK for Node.js
        var AWS = require('aws-sdk');
        // Set the region
        AWS.config.update({region: 'us-east-1'});

        // Create the DynamoDB service object
        ddb = new AWS.DynamoDB({apiVersion: '2012-10-08'});
    };

2) Accessing the individual Cognito user data

Assuming you are using Node.js, like this:

    event.request.userAttributes.email

More details here.

One final thing: you don't need to assign a trigger to the Lambda function. All you need to do is go into Cognito and assign the Lambda function in the triggers section. That way Cognito calls the Lambda directly -- Lambda doesn't need to listen for any special events.
I'm having trouble implementing a post-confirmation Lambda function in which I take the user-submitted credentials from the sign-up process and write them to a 'Users' DynamoDB table. The specific entries that I'm trying to write to the table are: user name, email, and actual name. In order to distinguish users from one another, I need the primary key to be the 'sub' value, since users can change their usernames and that could cause problems in the future. The main points of confusion I'm having are the following:

1) What role should my Lambda function start with? When creating a Lambda function I need to give it a starting role, and I'm not really sure which starter template I should use. I know I'm going to need DynamoDB write access, but I don't see a template for that.

2) How do I access the individual fields of the Cognito user? As far as I'm aware these values should be stored in the 'event' parameter of the handler function, but I can't find any documentation or sample events that show how to access the individual fields for things like 'sub', 'email', etc.
Writing Cognito user info to DynamoDB through a post-confirmation lambda function?
tkausl answered the question in the comments: "Looks like it returns a dict, so you need to json encode it manually before passing it to put_object."

Update:

    import boto3
    import json

    def allwork():
        client = boto3.client('route53')
        hostzone = client.list_hosted_zones()

        bucket_name = "testlambda"
        file_name = "r53data.txt"
        lambda_path = "/tmp/" + file_name
        s3_path = "10102018/" + file_name

        hostzone2 = json.dumps(hostzone, ensure_ascii=False)

        s3 = boto3.resource("s3")
        s3.Bucket(bucket_name).put_object(Key=s3_path, Body=hostzone2)

    allwork()
I have a hosted zone in Route 53 and would like to store the contents of the hostzone object in S3, but I am getting an error. I think Body is the correct parameter, but maybe this is happening because the object is in JSON format?

    import boto3
    import json

    def allwork():
        client = boto3.client('route53')
        hostzone = client.list_hosted_zones()

        bucket_name = "testlambda"
        file_name = "r53data.txt"
        lambda_path = "/tmp/" + file_name
        s3_path = "10102018/" + file_name

        s3 = boto3.resource("s3")
        s3.Bucket(bucket_name).put_object(Key=s3_path, Body=hostzone)

    allwork()

Here is the error:

    module initialization error: Parameter validation failed:
    Invalid type for parameter Body, value: {u'HostedZones': [{u'ResourceRecordSetCount': 7, u'CallerReference': '814E3.........
AWS Lambda - S3 put_object Invalid type for parameter Body
They are the same times, but in different timezones. The listObjectsV2 response is giving you Zulu time (UTC, or Greenwich Mean Time), which appears to be 6 hours ahead of you.
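To make that concrete, here is a small Python illustration (not from the answer above; it assumes the cli timestamp from the question is local time in a UTC-6 zone):

    from datetime import datetime, timezone

    # LastModified as the API reports it, in UTC ("Zulu") time
    last_modified_utc = datetime(2018, 11, 9, 1, 38, 55, tzinfo=timezone.utc)

    print(last_modified_utc)               # 2018-11-09 01:38:55+00:00
    print(last_modified_utc.astimezone())  # same instant in local time, e.g. 2018-11-08 19:38:55-06:00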
I'm developing a Node.js function that lists the objects in an S3 bucket via the listObjectsV2 call. In the returned JSON results, the date is not the same as the date shown in the S3 console or in an aws cli s3 list; in fact, they are different days. I'm not sure how this is happening. Any thoughts?

aws cli ls:

    aws s3 ls s3://mybucket
    2018-11-08 19:38:55      24294 Thought1.mp3

(Screenshots of the S3 page on AWS and of the JSON results were attached.)
AWS LastModified S3 Bucket different
Here you have a simple snippet. In short, you have to iterate over the files to find the latest modified date among all files, then print the files with that date (there might be more than one).

    from datetime import datetime

    import boto3

    s3 = boto3.resource('s3', aws_access_key_id='demo', aws_secret_access_key='demo')
    my_bucket = s3.Bucket('demo')

    last_modified_date = datetime(1939, 9, 1).replace(tzinfo=None)
    for file in my_bucket.objects.all():
        file_date = file.last_modified.replace(tzinfo=None)
        if last_modified_date < file_date:
            last_modified_date = file_date

    print(last_modified_date)

    # you can have more than one file with this date, so you must iterate again
    for file in my_bucket.objects.all():
        if file.last_modified.replace(tzinfo=None) == last_modified_date:
            print(file.key)
            print(last_modified_date)
I want to get the last modified file from an Amazon S3 directory. For now I tried to print only that file's date, but I am getting the error: TypeError: 'datetime.datetime' object is not iterable.

    import boto3

    s3 = boto3.resource('s3', aws_access_key_id='demo', aws_secret_access_key='demo')
    my_bucket = s3.Bucket('demo')

    for file in my_bucket.objects.all():
        # print(file.key)
        print(max(file.last_modified))
how to get last modified filename using boto3 from s3
You should create a separate AWS account for each client. If you are handling the AWS payments, then you could use AWS Organizations to combine the accounts into a single bill. You will be able to split the billing report into accounts to see exactly what each client owes you for AWS services.This will also allow you to hand over an AWS account to a client, or provide their developers with access if they need it, without compromising your other clients in any way.
I am new to Amazon AWS, and as a freelancer I am not clear on how I would facilitate dozens of clients using AWS. I average 5 clients per month. How would I do billing and set up instances for multiple clients? I have been using GoDaddy for a long time and they have a pro user dashboard that manages all of that.
How do I manage multiple clients on AWS? [closed]
You need the TagSpecifications argument with 'ResourceType' set to 'instance':

    TagSpecifications=[
        {
            'ResourceType': 'instance',
            'Tags': [
                {
                    'Key': 'name',
                    'Value': 'foobar'
                },
                {
                    'Key': 'owner',
                    'Value': 'me'
                },
            ]
        },
    ],

It is in the docs, but you do need to know what you're looking for...
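To show where that argument sits, here is a sketch built around the question's create_instances call (the subnet and security group IDs are placeholders, not values from the answer above):

    import boto3

    ec2 = boto3.resource('ec2')

    instances = ec2.create_instances(
        ImageId='ami-095575c1a372d21db',       # AMI from the question
        InstanceType='t2.micro',
        MaxCount=1,
        MinCount=1,
        NetworkInterfaces=[{
            'SubnetId': 'subnet-xxxxxxxx',     # placeholder subnet id
            'DeviceIndex': 0,
            'AssociatePublicIpAddress': True,
            'Groups': ['sg-xxxxxxxx'],         # placeholder security group id
        }],
        TagSpecifications=[{
            'ResourceType': 'instance',
            'Tags': [
                {'Key': 'name', 'Value': 'foobar'},
                {'Key': 'owner', 'Value': 'me'},
            ],
        }],
    )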
I am new to Boto3 and want to create a VPC, subnets, and some EC2 instances. The basic architecture is a VPC, two subnets in two different availability zones (us-east-1a and b), and a security group that allows SSH and ping. My problem is how to specify additional options for each resource. The Python SDK (unlike how Javadoc works) doesn't show the required arguments and example options, so I'm confused. How can I specify tags for resources (e.g., an EC2 instance)? I need to set name, owner, etc.

    instances2 = ec2.create_instances(ImageId='ami-095575c1a372d21db', InstanceType='t2.micro', MaxCount=1, MinCount=1,
                                      NetworkInterfaces=[{'SubnetId': subnet2.id, 'DeviceIndex': 0,
                                                          'AssociatePublicIpAddress': True,
                                                          'Groups': [sec_group.group_id]}])
    instances2[0].wait_until_running()
    print(instances1[0].id)
How to set tags for AWS EC2 instance in boto3
Using CREATE VIEW is the closest thing to an alias. It also gives you the ability to present a subset of columns and even differently-named columns, which can be handy when migrating to a new schema.
I have the requirement to change several table names to adjust to a convention (it's just in Dev). However, there are several consumers already using those tables (directly, then again it's just Dev and it will not be kept that way). Is there a way to change the name and keep the old one as an alias, for a transition period? I have browsed Redshift documentation but I haven't found anything like that. Thank you!
Is there a way to create an alias to a Redshift table?
Example using the aws-sdk:

    const AWS = require('aws-sdk');
    const cognitoIdentity = new AWS.CognitoIdentity();

    // params: the request parameters for getOpenIdTokenForDeveloperIdentity
    cognitoIdentity.getOpenIdTokenForDeveloperIdentity(params, function (err, data) {
        // handle error and data
    });
I am a beginner at Node.js, trying to follow examples online and learning fairly well from the AWS documentation. So far I am only using the web-based Lambda editor that AWS provides. The following code gives me trouble and states: "errorMessage": "AmazonCognitoIdentity is not defined". Could someone please advise how I can successfully start using Cognito using the web editor only?

    var aws = require('aws-sdk');
    var CognitoUserPool = AmazonCognitoIdentity.CognitoUserPool;

    exports.handler = (event, context, callback) => {
        console.log("Do something here...");
    }
Trouble importing Cognito, "AmazonCognitoIdentity is not defined"
S3 IP addresses come from an AWS-owned network range that differs by geographical location. Your own subnet IPs won't be affected by your S3 endpoints. The article below describes how to find the IP ranges for such a service:

https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html
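As a small illustration of that page's approach, you can pull the published ranges and filter for S3 (the region value below is just an example):

    import json
    import urllib.request

    # AWS publishes its public IP ranges at this well-known URL.
    url = "https://ip-ranges.amazonaws.com/ip-ranges.json"
    with urllib.request.urlopen(url) as resp:
        ranges = json.load(resp)

    # CIDR blocks used by S3 in one example region.
    s3_prefixes = [p["ip_prefix"] for p in ranges["prefixes"]
                   if p["service"] == "S3" and p["region"] == "us-east-1"]
    print(s3_prefixes[:5])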
When we create a VPC, we specify an address range to be used within the VPC as a CIDR block (often 10.0.0.0/16). Then we launch instances within that VPC, and they get private IP addresses from that address range. When we create an S3 bucket and add some resources to it, we get URLs for those resources. If there is a URL, then there must be an IP address. Where does the IP address for those come from? From our VPC address range? Or is there a set of IP addresses within AWS to be used for S3 services, independent of VPCs?
Resource IP addresses of S3 buckets?
You can use com.google.common.util.concurrent.AbstractScheduledService to create a consumer thread and add it to Dropwizard's environment lifecycle as a ManagedTask. Following is the pseudocode:

    public class YourSQSConsumer extends AbstractScheduledService {
        @Override
        protected void startUp() {
            // may be print something
        }

        @Override
        protected void shutDown() {
            // may be print something
        }

        @Override
        protected void runOneIteration() {
            // code to poll on SQS
        }

        @Override
        protected Scheduler scheduler() {
            return newFixedRateSchedule(5, 1, SECONDS);
        }
    }

In Main do this:

    YourSQSConsumer consumer = new YourSQSConsumer();
    Managed managedTask = new ManagedTask(consumer);
    environment.lifecycle().manage(managedTask);
What I am trying to achieve: I want to make a Dropwizard client that polls Amazon SQS. Whenever a message is found in the queue, it is processed and stored. Some information about the processed messages will be available through an API.

Why I chose Dropwizard: it seemed like a good choice for making a REST client, and I need metrics, DB connections, and integration with some Java services.

What I need help with: it is not very clear how and where the SQS polling fits into a typical Dropwizard application. Should it be a managed resource? Or a console reporter? Or something else.
Polling SQS using dropwizard
You will need to work around the given circumstances, since the commands are executed after the files section: create a template that will be renamed only on the leader.

    files:
      "/tmp/mycron.template":
        mode: "000644"
        owner: root
        group: root
        content: |
          #to keep the segments current.

    container_commands:
      enable_cron:
        command: "mv /tmp/mycron.template /etc/cron.d/mycron"
        leader_only: true
I want to know how I can apply leader_only at the files level, if I have to create a file on the leader only. Consider the following code, for example:

    files:
      "/etc/cron.d/mycron":
        mode: "000644"
        owner: root
        group: root
        content: |
          #to keep the segments current.

    commands:
      remove_old_cron:
        command: "rm -f /etc/cron.d/*.bak"

What I know from the documentation is that I can only define leader_only: true at the container_commands level, for example (from the docs page):

    container_commands:
      collectstatic:
        command: "django-admin.py collectstatic --noinput"
      01syncdb:
        command: "django-admin.py syncdb --noinput"
        leader_only: true
      02migrate:
        command: "django-admin.py migrate"
        leader_only: true
      99customize:
        command: "scripts/customize.sh"
Elastic Beanstalk leader_only at files level
At the time you asked this question it was not possible to grant permissions to specific databases and tables, but it is now.

The permissions need to be granted on three levels: the catalog, the database, and the table (table permissions need all three; database permissions only need the first two). Here is an example of how to grant create, update, and delete table permissions on all tables in a database called my_db in account 1234567890 in us-east-1:

    {
      "Effect": "Allow",
      "Action": [
        "glue:CreateTable",
        "glue:UpdateTable",
        "glue:DeleteTable"
      ],
      "Resource": [
        "arn:aws:glue:us-east-1:1234567890:catalog",
        "arn:aws:glue:us-east-1:1234567890:database/my_db",
        "arn:aws:glue:us-east-1:1234567890:table/my_db/*"
      ]
    }

It's possible to use wildcards for the database name, or a partial database name (for example, if you want to grant some permissions to all databases with a specific prefix), and the same goes for table names.
I have a group to which I'd like to grant CreateTable permissions on one database in Athena, while applying lesser permissions such as RunQuery on all databases for the same group. Is it possible to apply permissions to Athena databases on a case-by-case basis? For example, in the IAM policy below I'd like to give this group the ability to create and delete tables in the test database only. From the AWS documentation:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "athena:RunQuery",
            "athena:StartQueryExecution",
            "athena:StopQueryExecution"
          ],
          "Resource": [
            "*" // Apply these permissions on all schemas
          ]
        },
        {
          "Effect": "Allow",
          "Action": [
            "glue:CreateTable",
            "glue:DeleteTable"
          ],
          "Resource": [
            "test" // Apply these permissions to only the test database
          ]
        }
      ]
    }

Thanks!
IAM CreateTable permission for one database only on Athena
An ARN for an Amazon S3 object is of the form:

    arn:aws:s3:::BUCKET-NAME/filename-including-path

For example:

    arn:aws:s3:::acme-inc/staff_photos/bob.jpg

The generic format for an ARN is:

    arn:aws:SERVICE-NAME:REGION:ACCOUNT:RESOURCE

Given the name of an Amazon S3 bucket (which is globally unique), the system can determine the Region and Account, so those fields can be left blank when referring to an S3 bucket/object.
How would I be able to get an ARN string from an s3Object? I could probably do this myself, but I want to know if some method already exists to do this.
Get ARN string from s3Object
Just a little correction to botchniaque's answer: you actually have to do BOTH ResolveChoice and then ApplyMapping to ensure the correct type conversion.

ResolveChoice will make sure you have just one type in your column. If you skip this step and the ambiguity is not resolved, the column becomes a struct and Redshift ends up showing it as null.

So first apply ResolveChoice to make sure all your data is one type (int, for instance):

    df2 = ResolveChoice.apply(datasource0, specs=[("col1", "cast:int"), ("col2", "cast:int")])

Finally, use ApplyMapping to change the type to what you want:

    df3 = ApplyMapping.apply(
        frame=df2,
        mappings=[
            ("col1", "int", "first_column_name", "string"),
            ("col2", "int", "second_column_name", "string")
        ],
        transformation_ctx="applymapping1")

Hope this helps (:
I'm having a bit of a frustrating issue with a Glue job. I have a table which I created with a crawler. It went through some CSV data and created a schema. Some elements of the schema need to be modified, e.g., numbers to strings, and a header applied.

I seem to be running into some problems here: the schema for some fields appears to have been picked up as a double. When I try to convert this into a string, which is what I require, it includes some empty precision, e.g., 1234 --> 1234.0.

The mapping code I have is something like:

    applymapping1 = ApplyMapping.apply(
        frame=datasource0,
        mappings=[
            ("col1", "double", "first_column_name", "string"),
            ("col2", "double", "second_column_name", "string")
        ],
        transformation_ctx="applymapping1"
    )

And the resulting table I get after I've crawled the data is something like:

    first_column_name    second_column_name
    1234.0               4321.0
    5678.0               8765.0

as opposed to:

    first_column_name    second_column_name
    1234                 4321
    5678                 8765

Is there a good way to work around this? I've tried changing the schema in the table initially created by the crawler to a bigint instead of a double, but when I update the mapping code to ("col1","bigint","first_column_name","string") the table just ends up being null.
AWS Glue ApplyMapping from double to string
The Lambda will be invoked with the data that you send with the POST request. For example, let's say that you make a POST request to your API Gateway with this JSON:

    {"data": "some data"}

The Lambda function will receive, in the event argument, a proper Python dictionary:

    {'data': 'some data'}

Then you can do something like this:

    def lambda_handler(event, context):
        data = event.get('data')  # this avoids raising an error if event doesn't contain the 'data' key
        # do whatever you like with data
So I have set up a Lambda function to upload a txt file to S3. How do I send data to the function using API Gateway? I've set up API Gateway with a POST method. Here is my Lambda function:

    import boto3

    s3 = boto3.resource('s3')

    def lambda_handler(event, context):
        data = 'Totally awesome sword design'  # event['data']
        filename = 'awesomeSword2'  # event['filename']
        object = s3.Object(BUCKET_NAME, KEY + filename + '.txt')
        object.put(Body=data)

I just need to know how to send data and filename to the function (and read them).
Sending and reading data to AWS Lambda function
No, you can't add a new column to a struct in Athena. You can delete the schema and then create a new table with the required columns. Deleting the schema or database won't affect your data, because Athena doesn't store the data itself; it just points to the data in S3.
I have a table that tracks user actions on a high-throughput site, defined as (irrelevant fields etc. removed):

    CREATE EXTERNAL TABLE `actions`(
      `uuid` string COMMENT 'from deserializer',
      `action` string COMMENT 'from deserializer',
      `user` struct<id:int,username:string,country:string,created_at:string> COMMENT 'from deserializer')
    PARTITIONED BY (
      `ingestdatetime` string)
    ROW FORMAT SERDE
      'org.openx.data.jsonserde.JsonSerDe'
    STORED AS INPUTFORMAT
      'org.apache.hadoop.mapred.TextInputFormat'
    OUTPUTFORMAT
      'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
    LOCATION
      's3://<path_to_bucket>'
    TBLPROPERTIES (
      'transient_lastDdlTime'='1506104792')

I want to add some more fields to the user data (e.g., level:int to track what level the user was when they performed the action). Is it possible to alter the table definition to include these new properties, and if so, is it possible to configure default values for when they aren't in the source data files?
Is it possible to add fields to struct in an existing AWS Athena table?
I know your question is specific to boto3, so you might not like my answer, but it will achieve the same outcome, and the aws-cli also makes use of boto3. See here: http://bigdatums.net/2016/09/17/copy-local-files-to-s3-aws-cli/

This example is from that site and could easily be used in a script:

    #!/bin/bash
    # copy all files in my-data-dir into the "data" directory located in my-s3-bucket
    aws s3 cp my-data-dir/ s3://my-s3-bucket/data/ --recursive
I am uploading images to a folder, currently on local, like site/uploads. After searching, I found that to upload images to Amazon S3 I have to do something like this:

    import boto3

    s3 = boto3.resource('s3')

    # Get list of objects for indexing
    images = [('image01.jpeg', 'Albert Einstein'),
              ('image02.jpeg', 'Candy'),
              ('image03.jpeg', 'Armstrong'),
              ('image04.jpeg', 'Ram'),
              ('image05.jpeg', 'Peter'),
              ('image06.jpeg', 'Shashank')
              ]

    # Iterate through list to upload objects to S3
    for image in images:
        file = open(image[0], 'rb')
        object = s3.Object('rekognition-pictures', 'index/' + image[0])
        ret = object.put(Body=file,
                         Metadata={'FullName': image[1]}
                         )

Clarification: this is my code to send images and names to S3, but I don't know how to get the image in the line images=[('image01.jpeg','Albert Einstein'), i.e., how can I read that image from /upload/image01.jpeg in this code? And secondly, how can I get images from S3 and show them on my website's image page?
How can I batch-upload images with names to Amazon S3 using boto?
If you look into AWSCognitoIdentityUser, in the getSessionWithUserName:password: method, you will see that there is a ternary operator switching the migration auth flow, driven by the migrationEnabled Boolean value. In order to switch the auth type, just configure the user pool like so:

    let userPoolConfiguration = AWSCognitoIdentityUserPoolConfiguration(
        clientId: clientId,
        clientSecret: nil,
        poolId: userPoolId,
        shouldProvideCognitoValidationData: false,
        pinpointAppId: nil,
        migrationEnabled: true
    )
I'm using AWS Cognito to perform login authentication. When login is successful we get the request body below:

Request body:

    {"UserContextData":{"EncodedData":"eyJ..9"},
     "ClientMetadata":{"cognito:deviceName":"MacBookPro12-01","cognito:bundleShortV":"1.0.0",
       "cognito:idForVendor":"A6FD46FBB205","cognito:bundleVersion":"207",
       "cognito:bundleId":"com.abc.Project-Dev","cognito:model":"iPhone",
       "cognito:systemName":"iOS","cognito:iOSVersion":"11.3"},
     "AuthParameters":{"SRP_A":"a6..627","SECRET_HASH":"vr..Oo=","USERNAME":"[email protected]"},
     "AuthFlow":"USER_SRP_AUTH",
     "ClientId":"123"}

Now, there is a scenario wherein I have to set the "AuthFlow" value to "USER_PASSWORD_AUTH". How can this be done? The headache is that all these values are set in Pods. The code below prints the request body shown above:

    passwordAuthenticationCompletion?.set(result: AWSCognitoIdentityPasswordAuthenticationDetails(username: username, password: password))
AWS cognito login set AuthFlow to USER_PASSWORD_AUTH in iOS
It would be safer to use the mailparser package for parsing:

    const simpleParser = require('mailparser').simpleParser;

    simpleParser(data, (err, mail) => {
        console.log(mail.text);
    });
Here's the section of the Node Lambda function that gets the email stored in S3. How do I get just the 'text/plain' content from the returned data object? Do I need to include an NPM email-parsing dependency with the Lambda function (uploaded as a .zip), or should I use some regex in the Lambda to get the section I want? If so, what would that look like?

    exports.handler = function(event, context, callback) {
        var sesNotification = event.Records[0].ses;

        // Retrieve the email from your bucket
        s3.getObject({
            Bucket: bucketName,
            Key: "ses/" + sesNotification.mail.messageId
        }, function(err, data) {
            if (err) {
                console.log(err, err.stack);
                callback(err);
            } else {
                data
            }
        });
    };
Use a Node Lambda function to parse an email stored in an AWS S3 Bucket by SES
Faking the SDK like this works:

main_test.go

    type fakeDynamoDBClient struct {
        dynamodbiface.DynamoDBAPI
    }

    func (m *fakeDynamoDBClient) GetItemRequest(input *dynamodb.GetItemInput) dynamodb.GetItemRequest {
        return dynamodb.GetItemRequest{
            Request: &aws.Request{
                Data: &dynamodb.GetItemOutput{
                    Item: map[string]dynamodb.AttributeValue{
                        "count": dynamodb.AttributeValue{
                            N: aws.String("10"),
                        },
                    },
                },
            },
        }
    }

    func (m *fakeDynamoDBClient) PutItemRequest(input *dynamodb.PutItemInput) dynamodb.PutItemRequest {
        return dynamodb.PutItemRequest{
            Request: &aws.Request{
                Data: &dynamodb.PutItemOutput{},
            },
        }
    }

    func TestUpdateCount(t *testing.T) {
        err := UpdateCount(10, &fakeDynamoDBClient{})
        if err != nil {
            t.Error("Failed to update badge count on dynamodb", err)
        }
    }

main.go

    func UpdateCount(count int, client dynamodbiface.DynamoDBAPI) error {
        ...
    }
I am still grasping Go interfaces. I can mock the WaitUntilTableExists func, but I am unable to mock PutItemRequest.

Here's my main.go snippet:

    func MyPutItem(d mydata, client dynamodbiface.DynamoDBAPI) error {
        input := &dynamodb.PutItemInput{
            ....
        }
        req := client.PutItemRequest(input)
        result, err := req.Send()
        log.Println(result)
        return err
    }

main_test.go snippet:

    type mockDynamoDBClient struct {
        dynamodbiface.DynamoDBAPI
    }

    func (m *mockDynamoDBClient) PutItemRequest(input *dynamodb.PutItemInput) dynamodb.PutItemRequest {
        // Most probably this is where I need your help
    }

    func TestStoreInDynamoDB(t *testing.T) {
        var mockClient = new(mockDynamoDBClient)
        d := mydata{}
        result := DynampDBPutItem(d, mockClient)
        t.Log(result)
    }
How do I write a unit test for an aws-sdk-go-v2 DynamoDB implementation?
Personally I code in Java, and the Java DynamoDBMapper is the best DynamoDB SDK by a distance. It provides object modelling, optimistic locking, and more. The only other supported high-level SDK at the moment is the .NET Object Persistence Model, which is frankly not even close to being as good as DynamoDBMapper. (If you are using Lambda I personally wouldn't use Java; the functions take too long to run.)

The AWS-supported JavaScript SDK does not provide object modelling. I've seen a few projects that try to fill the gap for a JavaScript DynamoDB object-mapping SDK, such as dynamoose and dynogels. Personally I wouldn't use these, as you simply end up losing functionality offered by DynamoDB. But I'm sure they are good in some circumstances, like prototyping applications rapidly.

I must admit I've not used the new AWS dynamodb-data-mapper (the JavaScript object SDK). However, it's being developed by AWS and it's pretty clear they are serious about it.

Clearly, using the SDK depends on your project and appetite for risk. I get a huge amount of value using DynamoDBMapper (the equivalent Java SDK); my code is massively cleaner and simpler than it would be with a low-level SDK.
I am planning to use the AWS dynamodb-data-mapper library for ORM mapping while creating Lambda functions in Node.js with DynamoDB storage. This library is still in developer preview. Does anyone have experience using this library, and is there a risk in using it while it is still in developer preview? Is there any other, better Node.js library to use for ORM with DynamoDB?
AWS DynamoDB-Data-Mapper NodeJS
Internet access is required when calling an AWS API. There are two ways to give a Lambda function access to the Internet:

1. Do not attach the Lambda function to a VPC, or
2. Attach the Lambda function to a private subnet and configure the private subnet to route Internet-bound traffic through a NAT Gateway (or NAT instance) in a public subnet.

So, if the Lambda function does not need to access any resources in the VPC, simply remove it from the VPC. If it does need access, then add a NAT Gateway.
I have an AWS Lambda (Java) and I am trying to retrieve a password stored in Parameter Store. Here is my piece of code:

    GetParameterRequest parameterRequest = new GetParameterRequest();
    AWSSimpleSystemsManagement client = AWSSimpleSystemsManagementClientBuilder.defaultClient();
    parameterRequest.withName("my-password-key").setWithDecryption(true);
    GetParameterResult parameterResult = client.getParameter(parameterRequest);
    password = parameterResult.getParameter().toString();

The security group (and the NACL) associated with my Lambda has all inbound and outbound traffic open (any port and any IP address). My Lambda runs inside a private subnet. When I execute the Lambda (triggered by an API Gateway event) I get the following error:

    Unable to execute HTTP request: Connect to ssm.eu-central-1.amazonaws.com:443 [ssm.eu-central-1.amazonaws.com] failed: connect timed out: com.amazonaws.SdkClientException

Since the error is a timeout, I don't think it's a role problem. I have no idea where to look. Any help is appreciated. Thanks. C.C.
AWS Lambda cannot connect to Parameter Store
When you receive messages from the queue, they are marked as "in flight". After you successfully process them, you send a call to the queue to delete them; this call includes the IDs of each of the messages.

When the queue is empty, the next read will return an empty Messages array. Usually when I do this I wrap my call to read the queue in a loop (a while loop) and only keep processing if I have Messages after doing a read. It shouldn't make any difference whether it's a FIFO queue or a standard one.
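A minimal boto3 sketch of that loop (the queue URL is a placeholder, and Python is just used for illustration since the question doesn't name an SDK):

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

    while True:
        resp = sqs.receive_message(QueueUrl=queue_url,
                                   MaxNumberOfMessages=10,
                                   WaitTimeSeconds=5)      # long polling
        messages = resp.get("Messages", [])
        if not messages:
            break                                          # nothing visible: treat the queue as empty
        for msg in messages:
            # ... process msg["Body"] here ...
            sqs.delete_message(QueueUrl=queue_url,
                               ReceiptHandle=msg["ReceiptHandle"])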
I want to get all the messages in the queue in order to process them. However, the maximum for MaxNumberOfMessages is 10 (based on the documentation): https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ReceiveMessage.html. How can I read in all the messages so I can process them? Or how would I know when the queue is empty? Thanks.
AWS SQS Receive Messages -- How to Know when Queue is Empty
You are missing the CallerReference argument:

    const cloudfront = new aws.CloudFront();

    async function invalidateFiles() {
      await cloudfront.createInvalidation({
        DistributionId: 'xxxxxxxxxxx',
        InvalidationBatch: {
          CallerReference: `SOME-UNIQUE-STRING-${new Date().getTime()}`,
          Paths: {
            Quantity: 1,
            Items: ['test.js'],
          },
        },
      }).promise();
    }
The AWS SDK for JavaScript allows using promises instead of callbacks when calling the methods of the AWS service classes. The following is an example for S3 (I'm using TypeScript along with the Serverless framework for development):

    const s3 = new S3({ apiVersion: '2006-03-01' });

    async function putFiles() {
      await s3.putObject({
        Bucket: 'my-bucket',
        Key: `test.js`,
        Body: Buffer.from(file, 'binary') // assume that the file variable was defined above
      }).promise();
    }

The above function works perfectly fine, where we are passing the bucket parameters as the only argument to the method. But when I try to do a similar operation by calling the createInvalidation() method on the AWS CloudFront class, it gives me an error saying that the arguments do not match. Following is my code and the error I get:

    const cloudfront = new aws.CloudFront();

    async function invalidateFiles() {
      await this.cloudfront.createInvalidation({
        DistributionId: 'xxxxxxxxxxx',
        InvalidationBatch: {
          Paths: {
            Quantity: 1,
            Items: [`test.js`],
          },
        },
      }).promise();
    }

Can someone help with this issue please?
aws - CloudFront createInvalidation() method arguments error when using promises
DynamoDB is cool. However, before you use it you have to know your data usage patterns. For your case, if you're only ever going to query the DynamoDB table by ID, then it is great. If you need to query by any one column, or a combination of columns, there are solutions for that:

- Elasticsearch in conjunction with DynamoDB (which can be expensive);
- secondary indexes on the DynamoDB table (understand that each secondary index creates a full copy of your DynamoDB table with the columns you choose to store in the index);
- ElastiCache in conjunction with DynamoDB (for tying searches back to the ID column);
- RDS instead of DynamoDB (because a SQL-ish DB is better when you don't know your data usage patterns and you just don't want to think about it);
- etc.

It really depends on how much data you have and how you'll query it; that should define your architecture. For me it would come down to weighing the cost and performance of each of the options available.

In terms of getting the data into your DynamoDB or RDS table, there are a few routes (a small sketch of the programmatic one follows below):

- AWS Glue may be able to work for you;
- AWS Lambda to programmatically get the data into your data store(s);
- perhaps others.
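As a rough illustration of the programmatic route (the table name, key, and file name are made up for the example; empty CSV fields are skipped rather than written as empty strings):

    import csv
    import boto3

    # Assumed table with 'ID' as the partition key; adjust names to your setup.
    table = boto3.resource("dynamodb").Table("Contacts")

    with open("contacts.csv", newline="") as f, table.batch_writer() as batch:
        for row in csv.DictReader(f):
            # Drop empty fields instead of writing empty strings.
            item = {key: value for key, value in row.items() if value}
            batch.put_item(Item=item)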
Firstly, I'm very new to DynamoDB and AWS services in general, so I'm finding it hard when bombarded with all the details. My problem is that I have an Excel file with my data in CSV format, and I'm looking to add said data to a DynamoDB table, for easy access by the Alexa function I'm looking to build. The format of the table is as follows:

    ID, Name, Email, Number, Room
    1534234, Dr Neesh Patel, [email protected], +44 (0)3424 111111, HW101

Some of the rows have empty fields. But everywhere I look online, there doesn't appear to be an easy way to actually achieve this, and I can't find any official means either. So, with my limited knowledge of this area, I am questioning whether I'm going about this entirely the wrong way. Firstly, am I thinking about this wrong? Should I be looking at a completely different solution for a backend database? I would have thought this would be a common task, but with the lack of support or easy solutions, am I wrong? Secondly, if I'm going about this fine, how can it be done? I understand that DynamoDB requires a specific JSON format, and again there doesn't appear to be a straightforward way to convert my CSV into said format. Thanks, guys.
How to populate DynamoDB tables
Terraform uses the state file to keep track of the resources it manages. If it does not have a particular resource (in this case probably your aws_cloudfront_distribution.primary_domain resource), it will create a new one and store the ID of that new resource in your state file.

It looks like you did a terraform apply with your local state file, changed the backend to S3 without porting the state to S3, then ran terraform apply again. This second S3-backed run has a blank state, so it tried to recreate your aws_cloudfront_distribution resources again. The error indicates a conflict from using the same CNAME on two distributions, which is what would happen if you ran Terraform twice without keeping track of state in between.

You have a couple of options to fix this:

1. Go back to using your existing local state file, terraform destroy to remove the resources it created, switch back to S3, then terraform apply to start anew. Be aware that this will actually delete resources.
2. Properly change your backend and reinitialize, then answer "yes" to copying your existing state to S3.
3. terraform import the resources you created with your local state file into your S3 backend. Do this with terraform import aws_cloudfront_distribution.primary_domain <EXISTING CLOUDFRONT DIST. ID>.
I'm trying out terraform to set up an S3 + Cloudfront static site. Initially, I set up the site successfully, following the steps fromhttps://alimac.io/static-websites-with-s3-and-hugo-part-1/However, afterwards I changed the terraform state backend fromlocaltos3Now, when I performterraform applyI get the following error:Error: Error applying plan: 2 error(s) occurred: * aws_cloudfront_distribution.primary_domain: 1 error(s) occurred: * aws_cloudfront_distribution.primary_domain: CNAMEAlreadyExists: One or more of the CNAMEs you provided are already associated with a different resource. status code: 409, request id: <removed> * aws_cloudfront_distribution.secondary_domain: 1 error(s) occurred: * aws_cloudfront_distribution.secondary_domain: CNAMEAlreadyExists: One or more of the CNAMEs you provided are already associated with a different resource. status code: 409, request id: <removed>Any ideas about why this might be happening and what can I do to fix this issue?
Terraform: AWS Cloudfront distribution gives CNAMEAlreadyExists error after changing terraform state backend from local to s3
To the best of my knowledge you can attach a deny portion to any policy or create a deny policy and attach it to any group.For example you have "Administrators" group that has many roles added as well as "MultifactorAuthForce" policy:Example of "MultifactorAuthForce":{ "Version": "2012-10-17", "Statement": [ { "Sid": "DenyAllWithoutMFA", "Effect": "Deny", "Action": "*", "Resource": "*", "Condition": { "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" } } } ] }Update:Just tested it on my account and the policy works. Created an account without MFA, added password and assigned to the group above. When logged as that user I was denied all actions on all resources. After, I added MFA to the user and logged in again. I was able to see the resources.
Is it possible to create an IAM rule or an SCP (organization rule) to enforce MFA for all users in a certain group or with certain rights (e.g. administrators or power user)?
Is it possible on AWS to enforce MFA on group level (e.g. for all with administrator rights)?
Modify your function like this:const response = { status: '302', statusDescription: 'Found', headers: { location: [{ key: 'Location', value: 'http://<domainname>/something/root.html', }], 'x-lae-region': [ { key: 'x-lae-region', value: process.env.AWS_REGION } ], }, };What this does is capture the region where your lambda function is running -- it will show us-east-1 during test, but show an accurate value once deployed.Responses captured by your browser, curl, etc., will now includex-lae-region: some-aws-regionto indicate the region linked to the edge where your specific requests are being handled. Check the logs for that specific region -- you should see logs and invocations there.Also note that for an Origin Request (but not Viewer Request) trigger, CloudFront caches the response generated by Lambda, so the function will only be invoked when there is a cache miss. If CloudFront has cached the response, the trigger will not fire -- the cached response is served without contacting the origin. If you make this function live and you don't see the change in the responses, then you are almost certainly looking at cached responses, and will want to do an invalidation.
I have created the cloud front distribution and attached the lambda with the trigger`Event type: viewer-requestPath pattern: something/index.html` Event type: origin-requestPath pattern: something/index.htmlWhen i hit the endpoint it's redirecting to the page where i want to redirect, according to my lambda.but i was not able to see my lambda logs in any region.it was not showing the invocation count also.Did anyone faced this issue ??Here is my lambda code'use strict'; exports.handler = (event, context, callback) => { /* * Generate HTTP redirect response with 302 status code and Location header. */ console.log('event',event); const response = { status: '302', statusDescription: 'Found', headers: { location: [{ key: 'Location', value: 'http://<domainname>/something/root.html', }], }, }; callback(null, response); };
lambda@edge logs and invocation counts are not showing?
What kind of application are you running (web server, application server, ...)? Maybe an ALB would be more suitable for you, as it works on layer 7 of the OSI model and is therefore able to process HTTP headers, for example.

Back to your question: to forward traffic to EC2 instances that run your application on port 8001, you have to set the port on your target group to 8001. The Auto Scaling group knows nothing about what application is running on the EC2 instances it provisions, nor about the ports used by that application.

So the final flow is: the load balancer listens on port 80 and forwards traffic to the target group on port 8001. The target group then sends traffic to its targets (your EC2 instances) on port 8001.
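A minimal boto3 sketch of that setup, purely for illustration (the VPC ID, names and ASG name are placeholders, not from the question): create the target group on port 8001 and attach it to the Auto Scaling group so new instances register on that port.

import boto3

elbv2 = boto3.client("elbv2")
autoscaling = boto3.client("autoscaling")

# Placeholder VPC ID and names -- substitute your own.
tg = elbv2.create_target_group(
    Name="app-tg-8001",
    Protocol="TCP",          # NLB target groups use TCP
    Port=8001,               # the port instances actually listen on
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Attach the target group to the ASG; instances it launches
# are then registered on port 8001 automatically.
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName="my-asg",
    TargetGroupARNs=[tg_arn],
)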
Recently started using Network Load Balancer which listens on port 80 and forwards traffic to my target group. My autoscaling group is configured to add any new targets to this target group.However, my application on the target EC2 instances runs on port 8001, not 80. So my targets should register under port 8001 in the target group. The auto-scaling configuration doesn't seem to support that. All new instances created by auto scaling are added as targets with port 80 and there is no way to auto specify which port that should be used instead (8001 for me).Any ideas how to implement this?
AWS auto scaling targets in target groups for Network Load Balancers
SageMaker is designed to solve deployment problems at scale, where you want thousands of model invocations per second. For such use cases, you want multiple tasks of the same model on each instance, and often multiple instances of the same model behind a load balancer and an auto scaling group, to allow scaling up and down as needed.

If you don't need such scale, and even a single instance per model is not economical for the requests per second you need to handle, you can take the models that were trained in SageMaker and host them yourself behind a serving framework such as MXNet Model Server (https://github.com/awslabs/mxnet-model-server) or TensorFlow Serving (https://www.tensorflow.org/serving/).

Please also note that you have control over the instance type used for hosting, and you can choose a smaller instance for smaller loads. Here is a list of the instance types you can choose from: https://aws.amazon.com/sagemaker/pricing/instance-types/
I am able to host the models developed inSageMakerby using the deploy functionality. Currently, I see that the different models that I have developed needs to deployed on different ML compute instances.Is there a way to deploy all models on the same instance, using separate instances seems to be very expensive option. If it is possible to deploy multiple models on the same instance, will that create different endpoints for the models?
AWS SageMaker hosting multiple models on the same machine (ML compute instance)
This issue seems to be a shortcoming on Amazon's side; Amazon should have given me a proper error message. Anyway, I noticed that if I go to AWS Directory Service and try to delete the directory, it explains the root of the issue: the directory is still registered to an application (screenshot omitted). So I went to the Amazon WorkDocs service and deleted the WorkDocs site, after which the directory could be deleted (screenshot omitted).
I want to delete an aws directory but when I try, it gives me this error:An Error Has OccurredCannot delete the directory because it still has authorized applications.Please deregister the directory beforeproceeding.As it has said in the error message, I have toderegisterit to be able to delete it. However, the directory is not even registered! and the 'Deregister' command is grayed out:I also went into the 'Applications' section (in the left pane) and found no application.What's going on here?
Cannot Delete AWS Directory
If you create the EIP in another stack, you can export both the allocation ID and the IP address, and import them into your other template.To create the EIP:Resources: MyEIP: Type: AWS::EC2::EIP Outputs: MyEIPAllocationId: Value: !GetAtt MyEIP.AllocationId Export: Name: "MyEIP::AllocationId" MyEIPAddress: Value: !Ref MyEIP Export: Name: "MyEIP::Address"Then in your other template you can use them like this:!ImportValue MyEIP::AllocationId !ImportValue MyEIP::Address
I am creating a Cloudformation template that take as an input parameter the Allocation ID of and existing Elastic IP address. I have code that requires the actual IP address associated with the Allocation ID.How do I get the IP address using the Allocation ID of EIP in the template?If this is not possible, can we go the other way? That is, change the input parameter to the IP address of the existing EIP and somehow get the Allocation ID associated with EIP?I require both the IP and allocation ID of the EIP within the template and I'm trying to avoid passing both in as parameters and instead determine one from the other.
Get Elastic IP from Allocation ID Parameter
Sam Martin has a PowerShell module on GitHub with some PowerShell helper functions for AWS, which you can find here: https://github.com/Sam-Martin/AWSWindowsHelpers/

His approach to this problem can be seen in his Wait-AWSWindowsHelperInstanceToStop and Wait-AWSWindowsHelperInstanceReady cmdlets, and is (as you've already suggested) simply to run a loop with a Start-Sleep until the instance is in the state you expect. E.g.:

While((Get-EC2Instance -InstanceId $InstanceID -Region $Region).Instances[0].State.Name -ne 'stopped'){
    Write-Verbose "Waiting for instance to stop"
    Start-Sleep -s 10
}
I understand that it is quite easy to start and stop instances through Powershell:$awsCreds = Get-AWSAutomationCreds Set-AWSCredentials -AccessKey $awsCreds.AccessKey -SecretKey $awsCreds.SecretKey Set-DefaultAWSRegion -Region us-east-1 $instances = Get-EC2Instance -Filter @{name="tag:Name"; values="SERVERNAMES"} | Select -ExpandProperty Instances $instances.InstanceId | foreach {Stop-EC2Instance $_ -ErrorAction SilentlyContinue}Is there a quick and dirty way that I am just not seeing through the AWS Powershell Cmdlets or even the .NET SDK that would allow me to either wait until the action is complete. And/Or update the collection of instance objects I gathered?Or am I stuck with running the:$instances = Get-EC2Instance -Filter @{name="tag:Name"; values="SERVERNAMES"} | Select -ExpandProperty InstancesCommand over and over until the state completely changes?
Start/Stop Instances in AWS and Wait with powershell
The AWS SDK uses reflection to figure out what AWS service it's connecting to based on the class name. Since that's the case, you might try calling your class SqsClient, e.g.:

use Aws\Sqs\SqsClient as BaseSqsClient;

class SqsClient extends BaseSqsClient {
    //...
}
I'm extending my custom Sqs class like this:class Sqs extends SqsClient { public function __construct() { parent::__construct(array( 'credentials' => array( 'key' => $_ENV['AWS_ACCESS_KEY_ID'], 'secret' => $_ENV['AWS_SECRET_ACCESS_KEY'], ), 'region' => 'us-west-1', 'version' => 'latest' )); } }And then I instantiate and use it like this:$sqs = new Sqs(); $sqs->sendMessage([ 'QueueUrl' => $_ENV['AWS_SQS_URL_PREFIX'] . '/' . $_ENV['AWS_SQS_READER_USER_CREATE'], 'MessageBody' => $user_json, 'MessageGroupId' => 1, 'MessageDeduplicationId' => uniqid(), ]);But I'm getting a weird error:The service \"\" is not provided by the AWS SDK for PHP.
Error on instantiating class extended from AwsClient subclass
That's because you can't. As described in theAmazon's S3 Documentation:You cannot specify GLACIER as the storage class at the time that you create an object. You create GLACIER objects by first uploading objects using STANDARD, RRS, or STANDARD_IA as the storage class. Then, you transition these objects to the GLACIER storage class using lifecycle management.
I can't find a command example for archiving a set of files from a given prefix in S3 into a given vault in Glacier using ONLY COMMAND LINE, i.e. no Lifecycles, no python+boto. Thanks.This doc has a lot of examples but none fit my request:https://docs.aws.amazon.com/cli/latest/reference/s3/mv.html
AWS : how to archive files from S3 to Glacier using only command line
Just an extract from the Amazon usage instructions:

"The default password is your EC2 instance id"

So the password is not neo4j ...
I just deployedNeo4J from Amazon Marketplaceon Amazon ECSI can browse to the GUI but the default credentials (neo4j/neo4j) seems to not work.The error message reads:Neo.ClientError.Security.Unauthorized: The client is unauthorized due to authentication failure.What am I missing?
Neo4j on Amazon Marketplace: Unauthorized with default credentials
You can always try using shorthand syntax:--user-attributes Name="custom:roles",Value="ROLE1,ROLE2"If you really want to use the JSON syntax, try this:--user-attributes '[{"Name" : "custom:roles","Value" : "ROLE1,ROLE2"}]'Ensure that the user-attributes list is enclosed in single quotes
Consider the example:aws cognito-idp admin-update-user-attributes --user-pool-id myUserPollId --username myUser --user-attributes [{"Name": "custom:roles","Value": "ROLE1,ROLE2"}] --region us-east-1This gets me error:Invalid JSON: [{Name:
How pass json value for admin-update-user-attributes operation via cli in aws?
The YAML list isn't valid. You need a space between the - and the policy names. Try:

Resources:
  Get:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: fnStores
      Handler: handler.get
      Runtime: nodejs6.10
      Policies:
        - AmazonDynamoDBReadOnlyAccess
        - AmazonS3ReadOnlyAccess
I have the following AWS SAM file (showing extract) for a lambda function. The problem is that I'm trying to specify multiple policies and this does not work, I get an errorResources: Get: Type: AWS::Serverless::Function Properties: FunctionName: fnStores Handler: handler.get Runtime: nodejs6.10 Policies: -AmazonDynamoDBReadOnlyAccess -AmazonS3ReadOnlyAccessThis is the error I get"ARN -AmazonDynamoDBReadOnlyAccess -AmazonS3ReadOnlyAccess is not valid.On a side note, is it possible to create a custom policy that combines the above two and then use that? If so please provide an example.
AWS Lambda SAM, specify multiple policies
withEndpointConfiguration() is used with S3 clones (either on your localhost, Minio, etc.). It is also used with DynamoDB when installed on your local system.Here is an example using Minio. The region "us-east-1" is just emulated for this API call.EndpointConfiguration endpointConfiguration = new EndpointConfiguration( "http://192.168.178.84:9000", "us-east-1");
what is difference betweenwithRegion()andwithEndpointConfiguration()method inaws S3orSQSclient.UsingEndpointConfiguration needsendPointandsigningRegion. Is thissigningRegionsame as of s3 bucket? If yes, then why we need to specify it twice as region will be part of endpoint also.Example:us-west-2ins3-us-west-2.amazonaws.com
difference between withRegion() and withEndpointConfiguration() method in aws s3 or sqs client
It turned out that I had used S3:// with a capital S instead of a lowercase s; the URI scheme has to be s3://.
I am trying to download certain files from S3 to local machine by running the following code:import subprocess, os ec2_root = '/home/' s3_root_path = "S3://bucket-name/" s3_download_command = ["aws", "s3", "cp", os.path.join(s3_root_path, 'my_video.mp4'), os.path.join(local_root)] p = subprocess.Popen(s3_download_command) p.communicate()But I get the following error:usage: aws s3 cp <LocalPath> <S3Uri> or <S3Uri> <LocalPath> or <S3Uri> <S3Uri> Error: Invalid argument type
Invalid argument type when trying to download specific files from S3 bucket to EC2 using subprocess
When you launch the new instances you can provide theuser-dataat that time, in the same AWS SDK/API call. That's the best place to put any server initialization code.The only other way to kick off a script on the instance via the SDK is via the SSM service's Run Command feature. But that requires the instance to already have the AWS SSM agent installed. This is great for remote server administration, butuser-datais more appropriate for initializing an instance on first boot.
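If you're scripting the launch, the user-data goes in the same call that creates the instance. A minimal sketch in Python/boto3 for brevity (the AMI ID, key name and script body are placeholders); the JavaScript SDK's runInstances call takes an equivalent UserData parameter, base64-encoded.

import boto3

ec2 = boto3.client("ec2")

# Placeholder bootstrap script -- clone a repo, install packages, etc.
user_data = """#!/bin/bash
yum install -y git
git clone https://github.com/example/myrepo.git /opt/myrepo
/opt/myrepo/provision.sh
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key",                  # placeholder key pair
    UserData=user_data,                # runs as root on first boot
)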
I'm currently using AWS's Javascript SDK to launch custom EC2 instances and so far so good.But now, I need these instances to be able to run some tasks when they are created, for example, clone a repo from Github, install a software stack and configure some services.This is meant to emulate a similar behaviour I have for local virtual machine deployment. In this case, I run some provisioning scripts with Ansible that get the job done.For my use case, which would be the best option amongst AWS's different services to achieve this using AWS's Javascript SDK?Is there anyway I could maybe have a template script to which I passed along some runtime obtained variables to execute some tasks in the instance I just created? I read aboutuser-databut I can't figure out how that wraps with AWS's SDK. Also, it doesn't seem to be customisable.At the end of the day, I think I need a way to use the SDK to do this:"On the newly created instance, run this script that is stored in such place, replacing these placeholder values in the script with these I'm giving you now"Any hints?
How to run a script on a newly created EC2 instance via AWS SDK?
macOS uses a different kernel base, and even though the image works in VMware on Windows, it simply won't work on AWS: you can run other Linux/Unix distros, but Sierra will not boot there. Sorry, but plenty of people have tried this.

Even though AWS does boot Windows and Linux VMs, macOS needs extra features at boot time that AWS simply doesn't provide; those features exist on your desktop, which is why your only option is to run the VM on your own machine.

If you need something cloud-like, your best bet is to build and use a hackintosh.

The hardware Amazon uses could in principle run anything, even a hackintosh, but the Mac-specific parts and the kernel support required to boot it are blocked at the platform level, so it won't function. Everything else, Linux and Windows, is allowed.

Sorry. Cheers.
Can you run a VM image of sierra on AWS? Since AWS import supports Linux and Unix Kernels.
macOS Sierra on AWS with VM image
This is totally possible (and a good approach).Step 1.Create custom CloudWatch metric for "Requests in queue". You will have to write your own agent that runspassenger-status, extracts the value and sends it to CloudWatch. You can use any AWS SDK or just AWS CLI:http://docs.aws.amazon.com/cli/latest/reference/cloudwatch/put-metric-data.htmlStep 2.Create alarms for scale up and scale down based on your custom metric.Step 3.Modify scaling policy for your Auto Scaling Group to use your custom alarms to scale up/down.
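For step 1, a rough sketch of such an agent in Python (run it from cron on each instance). The namespace, metric name and dimension value are ones I made up, and the regex just grabs the first "Requests in queue" line from the default passenger-status output.

import re
import subprocess
import boto3

cloudwatch = boto3.client("cloudwatch")

# Parse the first "Requests in queue: N" out of passenger-status output.
output = subprocess.check_output(["passenger-status"]).decode()
match = re.search(r"Requests in queue:\s*(\d+)", output)
requests_in_queue = int(match.group(1)) if match else 0

# Push it as a custom metric; your scaling alarms can then watch it.
cloudwatch.put_metric_data(
    Namespace="Passenger",                      # made-up namespace
    MetricData=[{
        "MetricName": "RequestsInQueue",        # made-up metric name
        "Value": requests_in_queue,
        "Unit": "Count",
        "Dimensions": [
            {"Name": "AutoScalingGroupName", "Value": "my-asg"},  # placeholder
        ],
    }],
)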
I am surprised to find little information regarding EC2 autoscaling with Phusion Passenger.I actually discovered not so long ago a metric "Requests in queue" being exposed upon runningpassenger-statusI am wondering whether this stat would make a nice metric to help with autoscaling.Right now most AWS EC2 Autoscaling guides mention using CPU and Memory to write autoscaling rules but I find this insufficient. When I think about the problem autoscaling should solve, that is being able to scale up to the demand, I'd rather base those rules on the number of pending/completed requests to report a node health or a cluster congestion, and Passenger "Requests in queue" (and also for each process, the "Last Used" and "Processed" count) seems to useful.I am wondering it it would be possible to report this "Requests in queue" stat (and eventually others) periodically as an AWS metric. I was thinking the following rule would be ideal for autoscaling : If the average number of "requests in queue" on the autoscaled instances is to exceed a threshold value, this would trigger spawning a new machine from the autoscaling group.Is this possible ? Has anyone ever tried to implement autoscaling rules based on number of requests in queue this way ?
Passenger - Using "Requests in queue" as an AWS metric for autoscaling
OK, figured this out. The sequence is as follows:

1. Create an empty Target Group.
2. Create a Network Load Balancer and associate it with the empty Target Group.
3. Create an Auto Scaling Group with your desired Launch Config, desired counts, and the Target Group from above. Leave the Load Balancer field empty.
4. Click on Network Interfaces (left-side nav bar in the EC2 services area) and find those associated with your NLB (you can search for the NLB name). The entries found will show the static IP of the NLB.
I need a ELB that has a static IP and fronts an auto scaling group.Looking at the recent announcement, Network Load Balancers can do both of these things. However, when I try setting up a NLB I don't see where to set/get the static IP, nor do I see a way to associate it with an auto scaling group.When I edit my auto scaling group I search for the NLB previously created in its list of ELBs and the NLB isn't present as a choice.1) How do I associate an auto scaling group to a NLB?I'm not sure I understand the concept of target groups with regards to a NLB and auto scaler. If I create a target group, it wants specific instance names or IP's of EC2 instances.2) Given that those names/IPs change when auto scaler adds/removes instances, how do I know?3) How/where do I get a static IP for my NLB?
AWS Network Load Balancer questions
If you need to run these operations once or twice a day, you may want to look into the new AWS Batch service, which will let you run batch jobs without having to worry about DevOps.

If you have enough jobs to keep the machine busy for most of the day, I believe the best option is a Docker-based setup, which lets you manage your image more easily and test on your local host (and move to another cloud more easily if you ever have to). AWS ECS makes this as easy as Elastic Beanstalk.

I have my front end running on Elastic Beanstalk and my back-end workers running on ECS. In my case, my Python workers run in an infinite loop checking for SQS messages, so the server can communicate with them via SQS. I also have CloudWatch rules (as cron jobs) that wake up and call Lambda functions, which then post SQS messages for the workers to handle. I can then have three worker containers running on the same t2.small ECS instance. If one of the workers ever fails, ECS will recreate it.

To summarize: use Python on Docker on AWS ECS.
I am fairly new using AWS and I need to run a batch process (daily ) and store the data in a MySQL database. It would take approximately 30 minutes for extraction and transformation. As a side note, I need to run pandas.I was reading that lambda functions are limited to 5 minutes.http://docs.aws.amazon.com/lambda/latest/dg/limits.htmlI was thinking of using an EC2 micro instance with Ubuntu or an Elastic Beanstalk instance. And Amazon RDS for a MySQL DB.Am I on the right path? Where is the best place to run my python code in AWS?
Where is the best place to run a Python script in AWS?
Use--db-subnet-group-nameto point to aDBSubnetGroup, which contains a list of subnets where the database is permitted to launch.The subnets belong to a VPC.Therefore, the order is:Create a DBSubnetGroup pointing to subnets in your VPCLaunch the RDS Instance into the DBSubnetGroup
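For reference, a sketch of the same sequence in Python/boto3 (subnet IDs, security group ID and names below are placeholders; with the CLI you would run create-db-subnet-group first and then pass --db-subnet-group-name to create-db-instance):

import boto3

rds = boto3.client("rds")

# 1. Create a DB subnet group pointing at subnets in your VPC (placeholder IDs).
rds.create_db_subnet_group(
    DBSubnetGroupName="wind-subnet-group",
    DBSubnetGroupDescription="Subnets for the WIND RDS instance",
    SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],
)

# 2. Launch the instance into that subnet group, which pins it to the VPC.
rds.create_db_instance(
    DBInstanceIdentifier="wind-db",
    DBName="WIND",
    Engine="oracle-ee",
    DBInstanceClass="db.m4.large",
    AllocatedStorage=100,
    MasterUsername="oraadmin",
    MasterUserPassword="change-me",
    DBSubnetGroupName="wind-subnet-group",
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],
    PubliclyAccessible=False,
)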
I am creating an RDS usingcreate-db-instanceHow do I assign it to VPC group? I don't see any tag to achieve that. It is picking default VPC group and assigning to the RDS.Here is the script I am using to create the RDS (I am passing variable in the tags which are defined in the bash script).>>aws rds create-db-instance \ --db-name WIND \ --vpc-security-group-ids $sgGroup_id \ --db-instance-identifier $Instance_Identifier \ --allocated-storage 100 \ --copy-tags-to-snapshot \ --db-instance-class ${arrDbClass[$iDbClass]} \ --engine oracle-ee \ --engine-version ${arrEngVer[$iEngVer]} \ --license-model bring-your-own-license \ --master-username oraadmin \ --master-user-password $oraadminPassword \ --no-auto-minor-version-upgrade \ --no-publicly-accessible \ --backup-retention-period $backup_Retention_Period \ --no-storage-encrypted \ --storage-type gp2 \ --no-enable-iam-database-authentication \ $multi_Az \
Assign RDS to a VPC using AWS CLI
You can create a condition, i.e.AddEDrivethat checks if the parameterEDriveSizeis specified. If it is, then it creates the BlockDeviceMapping, otherwise, do nothing.Per thedocumentation:/dev/sda1is the recommendedDeviceNamefor theC:\/dev/xvd[f-z]is recommendedDeviceNamefor all other additional drives.AWSTemplateFormatVersion: '2010-09-09' Conditions: AddCDrive: !Not [!Equals [!Ref CDriveSize, '']] AddDDrive: !Not [!Equals [!Ref DDriveSize, '']] AddEDrive: !Not [!Equals [!Ref EDriveSize, '']] Parameters: CDriveSize: {Default: '', Type: String} DDriveSize: {Default: '', Type: String} EDriveSize: {Default: '', Type: String} Resources: Instance: Properties: BlockDeviceMappings: - !If - AddCDrive - DeviceName: '/dev/sda1' Ebs: VolumeSize: !Ref CDriveSize VolumeType: gp2 - !Ref AWS::NoValue - !If - AddDDrive - DeviceName: '/dev/xvdf' Ebs: VolumeSize: !Ref DDriveSize VolumeType: gp2 - !Ref AWS::NoValue - !If - AddEDrive - DeviceName: '/dev/xvdg' Ebs: VolumeSize: !Ref EDriveSize VolumeType: gp2 - !Ref AWS::NoValue
I am trying to put "if" condition to skip the addition of the new EBS volumes in launch configuration , if there are already "available" volumes. So logic which I am trying to achieve is that if below check variable is Null then add the new volume else skip because I am going to add "available" volume from user data. $check = Get-EC2Volume -Filter @{ Name="status"; Values="available" }BlockDeviceMappings: - DeviceName: /dev/sda1 Ebs: VolumeType: gp2 VolumeSize: '100' !if $check --> not sure how to put if condition here - DeviceName: /dev/sdb Ebs: DeleteOnTermination: "false" VolumeSize: '50' VolumeType: gp2 - DeviceName: /dev/sdc Ebs: DeleteOnTermination: "false" VolumeSize: '50' VolumeType: gp2
if condition in BlockdeviceMapping
You can useFn::Jointo combine the output of Intrinsic functions (likeRef) with strings. For example:CloudWatchDashboardHOSTNAME: Type: "AWS::CloudWatch::Dashboard" DependsOn: Ec2InstanceHOSTNAME Properties: DashboardName: HOSTNAME DashboardBody: { "Fn::Join": [ "", ['{"widgets":[ { "type":"metric", "properties":{ "metrics":[ ["AWS/EC2","CPUUtilization","InstanceId", "', { Ref: Ec2InstanceHOSTNAME }, '"] ], "title":"CPU Utilization", "period":60, "region":"us-east-1" } }]}' ] ] }Documentation:Fn::Join - AWS CloudFormationRef - AWS CloudFormationAWS::CloudWatch::Dashboard - AWS CloudFormationDashboard Body Structure and Syntax - Amazon CloudWatch
I'm trying to confgure a dashboard with a basic widget to expose CpUUtilization metric. I cannot reference the previous created EC2 instance, since it seems that in the json that describe the dashboard the !Ref function is not interpreted.metrics": [ "AWS/EC2", "CPUUtilization", "InstanceId", "!Ref Ec2Instance" ]Any idea how to reference it by logical name?
AWS CloudWatch dashboard CloudFormation configuration
What I'd suggest is to have your environment variables stored in EC2 Parameter Store which you can reference in your CodeBuild buildspec.yml.To use CodePipeline in your case, you also need different pipelines and different CodeBuild projects for each environment.For example, say you store the following variables in EC2 Parameter Store (or AWS SSM),DEVELOPMENT_DB_PASSWORD='helloworld' STAGING_DB_PASSWORD='helloworld' PRODUCTION_DB_PASSWORD='helloworld'In your CodeBuild project, you have to specify the environment as a variable (e.g.$ENVIRONMENT=DEVELOPMENT). Don't usebuildspecfor this. You can use AWS Console or CloudFormation.Then, yourbuildspec.ymlcan look like this:env: parameter-store: DEVELOPMENT_DB_PASS: "DEVELOPMENT_DB_PASSWORD" STAGING_DB_PASS: "DEVELOPMENT_DB_PASSWORD" PRODUCTION_DB_PASS: "DEVELOPMENT_DB_PASSWORD"These variables are then accessible in your serverless.yml using${env:ENVIRONMENT}_DB_PASSlike so:provider: environment: DB_PASS: ${env:${env:ENVIRONMENT}_DB_PASS}All you have to do now is to create those three CodePipelines each having their own CodeBuild project (with each project using a different$ENVIRONMENT).
I am on a team of developers using Git as our version control.We want to have a minimum of 3 stages of our development process: staging, dev, and production.The only thing that should change between these stages is a single config file, to tell the Serverless framework what to name the lambda functions, S3 buckets, and any other resource that needs to be created for the CloudFormation stack.However, this makes source control a bit harder. If we put the config files directly in the source code, then we have to make sure that those files don't get overridden when we commit/push to origin. But the CodeBuild has to have access to it somehow, and it has to be sure to grab the right config file for the specified stage.I would prefer a solution to this issue that is a part of the AWS ecosystem.
How do you handle config files for AWS CodePipelines?
There is no official API call to retrieve the Instance Types available in each region.

However, you can retrieve and parse the AWS Price List API, which is really a set of static JSON/CSV files that contain pricing for each Instance Type in each Region.

You'll be amazed at how many pricing combinations there are -- the master EC2 price file is 130MB!
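As a sketch of the parsing approach: recent versions of boto3 also expose the same price list data through a pricing client, which saves you from downloading the 130MB file by hand. The location string below is just an example region.

import json
import boto3

# The Pricing API is only served from a couple of regions; us-east-1 is one of them.
pricing = boto3.client("pricing", region_name="us-east-1")

instance_types = set()
paginator = pricing.get_paginator("get_products")
pages = paginator.paginate(
    ServiceCode="AmazonEC2",
    Filters=[{"Type": "TERM_MATCH", "Field": "location", "Value": "US East (N. Virginia)"}],
)
for page in pages:
    for price_item in page["PriceList"]:        # each entry is a JSON string
        attrs = json.loads(price_item)["product"].get("attributes", {})
        if "instanceType" in attrs:
            instance_types.add(attrs["instanceType"])

print(sorted(instance_types))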
I need to list all the instance types that are provided by AWS using boto. I could've used a static list of instances but it doesn't seem a good solution because the instance types might change in future.
List AWS instance types using boto3
Inspecting the network requests sent from the DynamoDB console to CloudWatch revealed that the metrics in the graph are:

- Average(ProvisionedReadCapacityUnits)
- Sum(ConsumedReadCapacityUnits)

But as @Shiplu Mokaddim has noticed in a comment on the other answer, plotting those two in CloudWatch does not result in a graph matching what you see in the DynamoDB console.

It turns out that the DynamoDB console uses the Sum(ConsumedReadCapacityUnits) to compute an average to show in the graph. This is done by dividing the values by the period in seconds, and it can be replicated in the CloudWatch console using a math expression. (Screenshots of the DynamoDB console and the matching CloudWatch console graph omitted.)

Bonus: after realizing how to pull these numbers, I was able to write a script that produces a list of provisioned and consumed capacity for all DynamoDB tables in my AWS account.
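The same math expression can be pulled programmatically with GetMetricData. A sketch in boto3, assuming a table named my-table and a 5-minute period (both placeholders):

from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

resp = cloudwatch.get_metric_data(
    StartTime=datetime.utcnow() - timedelta(hours=3),
    EndTime=datetime.utcnow(),
    MetricDataQueries=[
        {
            "Id": "consumed",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/DynamoDB",
                    "MetricName": "ConsumedReadCapacityUnits",
                    "Dimensions": [{"Name": "TableName", "Value": "my-table"}],
                },
                "Period": 300,
                "Stat": "Sum",
            },
            "ReturnData": False,
        },
        {
            # Same normalization the DynamoDB console does: Sum divided by the period.
            "Id": "consumed_per_second",
            "Expression": "consumed / PERIOD(consumed)",
        },
    ],
)
print(resp["MetricDataResults"])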
I can see two graphs for write capacity: one thru CloudWatch alarms and the other thru the DynamoDB console. Here is what CloudWatch shows me:Looks like the write capacity spikes up to almost 8,000 write capacity units.Then I go to the Dynamo console and this is what I see:Not even close to that high and not over the capacity allocated.Why don't these two agree? Why does the CloudWatch alarm go off?
AWS Dynamo Write Capacity Graphs don't agree
Unfortunately and unbelievably, I think the answer is still no. The web UI (the CodeCommit section of the AWS Console) lets you navigate to any file, view it, and even edit it, but it doesn't give you a raw URL to view or download it. So frustrating. This would be a simple feature for them to add via their GetFile API.
In github, you can click the Raw button and you can get the file's url. Does CodeCommit have that feature?
Is there a way to get the raw url of a file in AWS CodeCommit?
What AWS Lambda does when it encounters an exception depends on how it was invoked. In short: if it was invoked synchronously, an error is returned to the caller; if it was invoked asynchronously, retries happen. For more details please check out https://docs.aws.amazon.com/lambda/latest/dg/retries-on-errors.html

As AWS Lambda's execution model is stateless, an exception only affects the current invocation. Subsequent invocations are handled properly, as if there had been no exception.

(Disclaimer: AWS Lambda is only stateless to a certain extent, as it reuses existing containers. I believe that's not relevant to your question, but if you want to learn more about it I suggest the following article: https://aws.amazon.com/de/blogs/compute/container-reuse-in-lambda/)
Lambda AWS shut down when I throw an exception?In my code, I throw an exception when an illegal state happens. I want to know how Lambda deals with it if the service shut down or not.I can't find any reference to it, in their documentation, it's all about handling the errors/exception. But I want to know if a unhandled exception should shut down my Lambda service.
Lambda AWS - Exception
After going through AWS documentation, I found that for classic load balancers we should provide the following details (loadBalancerName):--load-balancers loadBalancerName=bwce-lb,containerName=launch-test-app,containerPort=8080And for application load balancers (which is my case), we should provide following details (targetGroupArn):--load-balancers targetGroupArn=arn:aws:elasticloadbalancing:us-east-1:750037626691:targetgroup/default/85fd830384028e21,containerName=launch-test-app,containerPort=8080The problem in my previous input values was that, I was providing the LoadBalancer ARN in the 'targetGroupArn' field instead of providing the TargetGroupARN. Once I fixed the traget group ARN issue, it started working fine.
I am trying to add an AWS ELB to a ECS Cluster Service using AWS CLI. I am using the following command:aws ecs create-service --service-name ${SERVICE_NAME} --desired-count 1 --task-definition launch-test-app --load-balancers targetGroupArn=arn:aws:elasticloadbalancing:us-east-1:NNNNNNNNNNNN:loadbalancer/app/bw-test/edfe7f7c15e40d56,containerName=launch-test-app,containerPort=8080 --role arn:aws:iam::NNNNNNNNNNNN:role/service-role/bw-metering-role --cluster ${CLUSTER} --region ${REGION}The Role 'bw-metering-role' has following policies attached:AmazonEC2ContainerServiceFullAccessAmazonEC2ContainerServiceforEC2RoleAnd the Role also has following Trust Relationship:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "ecs.amazonaws.com", "ec2.amazonaws.com" ] }, "Action": "sts:AssumeRole" } ] }But still I am getting following error while executing the above AWS CLI command:An error occurred (InvalidParameterException) when calling the CreateService operation: Unable to assume role and validate the specified targetGroupArn. Please verify that the ECS service role being passed has the proper permissions.I have searched and found some solutions, but with no positive result.
Adding AWS LoadBalancer to Service using AWS CLI
Yes, it is safe to install Windows updates (either automatically or manually).Actually, it is recommended that you always update your Amazon EC2 instances to maintain the latest security patches.They're just normal Windows machines. No need to handle them any differently to how you would normally maintain a Windows server.
I have an AWS EC2 windows machine running Windows Server 2012 R2.I am having an issue with one application and I am suspecting that the machine does not have the latest .Net patches.I looked into Windows Update and noticed it's turned off by default. Can I turn it on and update the machine? Right mow there are 20 important updates waiting...
Is it safe to turn on windows update for AWS EC2 machine?
I guess that's not possible: Step Functions executions are asynchronous, and there's also the API Gateway timeout to contend with.

You don't need to get the results by polling, though; you can combine Lambda, Step Functions, SNS and WebSockets to get your results in real time.

If you want to push a notification to a client (web browser) and you don't want to manage your own infrastructure (scaling socket servers, etc.), you could use AWS IoT. This tutorial may help you get started: http://gettechtalent.com/blog/tutorial-real-time-frontend-updates-with-react-serverless-and-websockets-on-aws-iot.html

If you only need to send the result to a backend (a web service endpoint, for example), SNS should be fine.
Is it possible to invoke a AWS Step function by API Gateway endpoint and listen for the response (Until the workflow completes and return the results from end step)?Currently I was able to find from the documentation that step functions are asynchronous by nature and has a final callback at the end. I have the need for the API invocation response getting the end results from step function flow without polling.
Invoke a AWS Step functions by API Gateway and wait for the execution results
DynamoDB doesn't have aggregate functions like GROUP BY in an RDBMS. Sorting can be performed on the sort key attribute only; for all other attributes you need to do it on the client side (see the sketch below).

An alternative approach is to create a Global Secondary Index (GSI) with the desired sort key. However, a GSI incurs additional cost for read and write capacity units.

Query results are always sorted by the sort key value. If the data type of the sort key is Number, the results are returned in numeric order; otherwise, the results are returned in order of UTF-8 bytes. By default, the sort order is ascending. To reverse the order, set the ScanIndexForward parameter to false.
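A minimal sketch of the client-side grouping and ordering for the query in the question. The attribute names created_on and source come from the question; the table name and permaname value are placeholders.

from itertools import groupby
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("my-table")   # placeholder table name

resp = table.query(KeyConditionExpression=Key("permaname").eq("some-permaname"))
items = resp["Items"]

# ORDER BY created_on DESC, then GROUP BY source -- both done client-side.
# Python's sort is stable, so the created_on ordering survives the second sort.
items.sort(key=lambda i: i["created_on"], reverse=True)
items.sort(key=lambda i: i["source"])                  # groupby needs adjacent keys
grouped = {source: list(rows) for source, rows in groupby(items, key=lambda i: i["source"])}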
I'm looking to request a dynamodb table but I can not figure out how to do a GROUP BY and a ORDER BY with the python 2.7 sdk boto3.I have read and read again the aws documentation and most of the stacks topics but I haven't find clearly answer because my request is quite a bit complex.I would get all the items from my tables which have a 'permaname' GROUP BY 'source' and I would have the result ORDER BY created_on DESC.This is how I'm doing my request my simple request for the moment :response = self.table.query(KeyConditionExpression = key('permaname').eq(self.permaname))I hope someone has the answer
aws DynamoDB boto3 Query GROUP BY and ORDER BY
No, you don't. You can login using aws-sdk like this:const cognito = new aws.CognitoIdentityServiceProvider({ region }); cognito.adminInitiateAuth({ AuthFlow: 'ADMIN_NO_SRP_AUTH', ClientId: clientId, UserPoolId: poolId, AuthParameters: { USERNAME: email, PASSWORD: password, }, });
I have a javascript project where I use the aws-sdk. No I want to useamazon-cognito-identity-js. On the page it says:Note that the Amazon Cognito AWS SDK for JavaScript is just a slimmed down version of the AWS Javascript SDK namespaced as AWSCognito instead of AWS. It references only the Amazon Cognito Identity service.and indeed, I can for example create CognitoIdentityServiceProvider with:CognitoIdentityServiceProvider = new AWS.CognitoIdentityServiceProvider();But how do I do thinks like authenticate a user? According to theamazon-cognito-identity-jsdocumentation:authenticationDetails = new CognitoIdentityServiceProvider.AuthenticationDetails({Userame: ..., Password: ...}); cognitoUser.authenticateUser(authenticationDetails, ...)But the CognitoIdentityServiceProvider object does not have a AuthenticationDetails property.Do I have to do something different when I use the aws-sdk instead of amazon-cognito-identity-js?Or is my assumption wrong, and I need both, the aws-sdk and amazon-cognito-identity-js?
Do I need amazon-cognito-identity-js if I already have the aws-sdk (javascript)?
Your code example is working perfectly well for me! (With my Account ID.)Find the date on a snapshot, then put that date in the query -- one day before and then run it again for one day after. That should help you track down the strange behaviour.$ aws ec2 describe-snapshots --query 'Snapshots[?StartTime >= `2016-08-30`].{id:SnapshotId}' --owner-ids 123456789012 [ { "id": "snap-e044d613" }, { "id": "snap-f4444506" } ] $ aws ec2 describe-snapshots --query 'Snapshots[?StartTime >= `2016-08-31`].{id:SnapshotId}' --owner-ids 123456789012 []
I'm trying to query the snapshots created after a specific date and it is returning no results. The query I am trying is below:aws ec2 describe-snapshots --query 'Snapshots[?StartTime >= `2017-06-01`].{id:SnapshotId}' --owner-ids nnnnnnnnnnnIf I remove the --query section, all snapshots are returned, so I know it's something to do with the query.I tried checking theJMESPath docsbut there isn't much there on date manipulation. I also tried replicating the syntax in the examplehereto no avail.Thanks,
AWS CLI - How to query snapshots created after a specific date
The AWS Application Load Balancer saves log files into Amazon S3.Amazon Athenacan then be used to query the files saved in S3. The important part is knowing the file format.See this excellent article:Athena & ALB Log AnalysisThey use this query to create the table:CREATE EXTERNAL TABLE IF NOT EXISTS logs.web_alb ( type string, time string, elb string, client_ip string, client_port string, target string, request_processing_time int, target_processing_time int, response_processing_time int, elb_status_code int, target_status_code string, received_bytes int, sent_bytes int, request_verb string, request_url string, request_proto string, user_agent string, ssl_cipher string, ssl_protocol string, target_group_arn string, trace_id string ) PARTITIONED BY(year string, month string, day string) ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe' WITH SERDEPROPERTIES ( 'serialization.format' = '1', 'input.regex' = '([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*):([0-9]*) ([^ ]*) ([-0-9]*) ([-0-9]*) ([-0-9]*) ([-0-9]*) ([^ ]*) ([-0-9]*) ([-0-9]*) \"([^ ]*) ([^ ]*) ([^ ]*)\" \"([^\"]*)\" ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*)' ) LOCATION 's3://{{BUCKET}}/AWSLogs/{{ACCOUNT}}/elasticloadbalancing/us-east-1/';
I have access logging configured for my AWS ALB. It dumps these logs into an S3 bucket on an interval.To view them you have to download then unzip the file and look through the text.I'd like to see a list of the ALB HTTP requests in one place without having to go through the process mentioned above.Does AWS offer anything like this?
View AWS ALB access logs in one place
It is now. On 28 Jun 2018, AWS announced "AWS Lambda Adds Amazon Simple Queue Service to Supported Event Sources", so SQS queues can trigger Lambda functions.
AWS Lambda functions have a bunch ofevent sourcesbut SQS ain't one.Why is that? I would have thought it was a good fit.
Why isn't SQS an event source for lambda?
I got it. It was just a stupid typo in the DynamoDB action: I had written Putitem, but it needs to be PutItem.

Have a nice day.
I´ve tried to set up a very simple Table just like in this example but it does not work. When I test it in the AWS Console of API Gateway I always get the following response:Endpoint response body before transformations: {"__type":"com.amazon.coral.service#UnknownOperationException"}My Mapping Table looks like the following:#set($inputRoot = $input.path('$')) { "TableName": "Subscriptions", "Item": { "subscriptionId": { "S": "$inputRoot.subscriptionId" }, "userId": { "S": "$inputRoot.userId" }, "durationInMonth": { "S": "$inputRoot.durationInMonth" }, "sku": { "S": "$inputRoot.sku" } } }And my Requestbody looks like this.{ "userId": "4", "subscriptionId": "5", "sku": "12345", "durationInMonth": "1" }What am I doing wrong?Thanks for helping. Have a nice Weekend.Nathalie
AWS DynamoDB UnknownOperationException
Elastic Load Balancers do not support static IP addresses. They only support DNS CNAMEs (or Aliases if you are using Route 53). This is because ELB DNS entries will resolve to different IP addresses depending on how it is scaling between availability zones. Also, over time, the IP addresses will/may change.The AWS documentation also specifically states to create CNAME-records only when mapping custom DNS entries to your ELB. If you are using Route 53, you can create an Alias record, which look like an A-record to the outside world.If you need a static IP address, then you cannot use ELB.Instead, you will need to manage your own load balancer (HAProxy, nginx, etc.) on an EC2 instance using an Elastic IP address.
I have a load balancer configured to have an IPV4 Ip address. However, the provided IP is a DNS mapped IP address to the load balancer of the format *.ap-south-1.elb.amazonaws.com.I need to configure IOT devices to send data to the load balancer and they do not support DNS. How can I assign a static IP address like...to my load balancer so that I can configure my IOT devices to send data to it.The Elastic IPs section does not provide a facility to allocate it to a load balancer and only supports ec2 instances.Conclusion:I have found a way to use DNS on my IOT device and working on this was vital. I am now aware of the option of manually hosting a load-balancer on an EC2 instance. A simper alternative is forwarding all requests at an elastic IP addressed EC2 instance to the load balancer. However, this will cause a bottleneck at the transparent proxy. Hence, I think using the DNS feature on the IOT device is the best option.
Adding a public static ipv4 address to an AWS load balancer
The response fromsts:AssumeRoleincludes a property calledExpiration:{ "AssumedRoleUser": { "AssumedRoleId": "AROA3XFRBF535PLBIFPI4:s3-access-example", "Arn": "arn:aws:sts::123456789012:assumed-role/xaccounts3access/s3-access-example" }, "Credentials": { "SecretAccessKey": "9drTJvcXLB89EXAMPLELB8923FB892xMFI", "SessionToken": "AQoXdzELDDY//////////wEaoAK1wvxJY12r2IrDFT2IvAzTCn3zHoZ7YNtpiQLF0MqZye/qwjzP2iEXAMPLEbw/m3hsj8VBTkPORGvr9jM5sgP+w9IZWZnU+LWhmg+a5fDi2oTGUYcdg9uexQ4mtCHIHfi4citgqZTgco40Yqr4lIlo4V2b2Dyauk0eYFNebHtYlFVgAUj+7Indz3LU0aTWk1WKIjHmmMCIoTkyYp/k7kUG7moeEYKSitwQIi6Gjn+nyzM+PtoA3685ixzv0R7i5rjQi0YE0lf1oeie3bDiNHncmzosRM6SFiPzSvp6h/32xQuZsjcypmwsPSDtTPYcs0+YN/8BRi2/IcrxSpnWEXAMPLEXSDFTAQAM6Dl9zR0tXoybnlrZIwMLlMi1Kcgo5OytwU=", "Expiration": "2016-03-15T00:05:07Z", "AccessKeyId": "ASIAJEXAMPLEXEG2JICEA" } }TheExpirationvalue is anISO 8601 formatted date. This means, that the date can be in any timezone, but the timezone is specified in the date itself. The example above is UTC due to the "Z" at the end of the date value.To be 100% correct, you should probably anticipate the value could be non-UTC value, which you may need to timezone-shift the value. However, in practice, most likely, the value will be UTC.
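If you need to compare that Expiration against local time, parse it timezone-aware rather than assuming UTC. A small sketch; note that boto3 typically hands you Expiration as a timezone-aware datetime already, so this mainly matters when you work with the raw JSON, e.g. from the CLI.

from datetime import datetime, timezone

expiration = "2016-03-15T00:05:07Z"   # value from the AssumeRole response

# fromisoformat doesn't accept a trailing "Z" on older Pythons, so normalize it.
expires_at = datetime.fromisoformat(expiration.replace("Z", "+00:00"))

remaining = expires_at - datetime.now(timezone.utc)
print(f"Credentials expire at {expires_at.isoformat()} ({remaining} from now)")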
I imagine this is likely, but I haven't found any explicit information saying that it's true.When receiving a Credentials object from AssumeRole, is the Expiration in UTC time?
AWS temporary credentials — is the Expiration time in UTC?
Yes, every AWS Lambda function has a setting for defining its maximum duration. The default is a few seconds, but this can be expanded to 5 minutes.

AWS also has the ability to define Budgets and Forecasts so that you can set a budget per service, per AZ, per region, etc. You can then receive notifications at intervals such as 50%, 80% and 100% of budget.

You can also create Billing Alarms to be notified when expenditure passes a threshold.

AWS Lambda comes with a monthly free usage tier that includes 3 million seconds of compute time (at 128MB of memory).

It is unlikely that you will experience high bills with AWS Lambda if it is being used for its correct purpose, which is running many small functions (rather than for long-running workloads, for which EC2 is better).
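Both knobs can be set programmatically. A small boto3 sketch; the function name, threshold and SNS topic ARN are placeholders, not from the question.

import boto3

# Cap how long any single invocation can run (seconds).
boto3.client("lambda").update_function_configuration(
    FunctionName="my-stress-test-fn",   # placeholder
    Timeout=30,
)

# Billing alarm: fires when estimated charges cross $10.
# Billing metrics only exist in us-east-1.
boto3.client("cloudwatch", region_name="us-east-1").put_metric_alarm(
    AlarmName="monthly-charges-over-10-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,
    EvaluationPeriods=1,
    Threshold=10.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder
)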
AWS Lambda seems nice for running stress tests.I understand that is it should be able scale up to 1000 instances, and you are charged by 0.1s rather than per hour, which is handy for short stress tests. On the other hand, automatically scaling up gives you even less control over costs than EC2. For development having explicit budget would be nice. I understand that Amazon doesn't allow for explicit budgets since they can bring down websites in their moment of fame. However, for development having explicit budget would be nice.Is there a workaround, or best practices for managing cost of AWS Lambda services during development? (For example, reducing the maximum time per request)
Limit AWS-Lambda budget
The lex-runtime is accessible from the Javascript SDKs. AWS documentation is here:http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/LexRuntime.htmlThe trickiest part is authentication. The recommendation from Amazon is usually to route your Lex requests through a Lambda function in front of an API gateway. An alternative is to have a Cognito unauthenticated role that has permissions to call Lex and then have the clients call it directly.The getting started guide may be of use if you are unfamiliar with calling AWS from the browser:http://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/getting-started-browser.html
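If you go the Lambda-behind-API-Gateway route, the Lambda simply forwards the user's text to the Lex runtime. A rough Python sketch; the bot name, alias and the request body shape are assumptions of mine, not something Lex prescribes.

import json
import boto3

lex = boto3.client("lex-runtime")

def handler(event, context):
    """API Gateway proxy handler: pass the user's text to Lex and return the reply."""
    body = json.loads(event.get("body") or "{}")
    resp = lex.post_text(
        botName="MyBot",                       # placeholder bot name
        botAlias="prod",                       # placeholder alias
        userId=body.get("userId", "anonymous"),
        inputText=body.get("text", ""),
    )
    return {
        "statusCode": 200,
        "body": json.dumps({"message": resp.get("message"), "dialogState": resp.get("dialogState")}),
    }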
I am able to create Amazon lex chat bot. I am also able to publish the same in Facebook messenger. Also I found sdk's for iOS and Android.What I want is to publish lex bot as a webservice which can be called from any rest client, so that it can be integrated to any user interface with rest calls.I heard of Javascript sdk's for publishing lex bots as service, but I am not able to find any proper documentation on this.
How to publish Amazon Lex Chatbot as webservice
The AWS S3 API returns a maximum of 1000 keys per response. You will have to make multiple requests, following the truncation marker, to retrieve all of your objects.

You can take a look at the API here: http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html

I have found an example of retrieving all your objects: How to list all AWS S3 objects in a bucket using Java
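In the Java SDK the loop is: check isTruncated() on the ObjectListing and call listNextBatchOfObjects until it returns false. Purely for illustration, the same pagination pattern in Python/boto3 looks like this (bucket name and prefix are placeholders):

import boto3

s3 = boto3.client("s3")

keys = []
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-bucket", Prefix="current/data/dir/"):
    keys.extend(obj["Key"] for obj in page.get("Contents", []))

print(len(keys))   # no longer capped at 1000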
I have a problem with retrieving the data from S3 using Amazon SDK. The problem is that it retrieves only 1000 elements, while indeed I have 10,000 elements in theaws_bucket_data->currentDataDirectory. I do not usesetMaxKeys(...), so the result seems to be weird.BasicAWSCredentials credentials = new BasicAWSCredentials("...", "..."); client = new AmazonS3Client(credentials); ListObjectsRequest listObjectsRequest = new ListObjectsRequest() .withBucketName(aws_bucket_data) .withPrefix(currentDataDirectory); ObjectListing objectListing = client.listObjects(listObjectsRequest); System.out.println(objectListing.getObjectSummaries().size());How can I solve this problem?
Not all data (only 1000 elements) retrieved from S3 using Amazon SDK
The latency metric on ELB is comparable to theTargetResponseTimemetric on ALB.ELB Latency definition:(source)The total time elapsed, in seconds, from the time the load balancer sent the request to a registered instance until the instance started to send the response headers.ALB TargetResponseTime definition:(source)The time elapsed, in seconds, after the request leaves the load balancer until a response from the target is received. This is equivalent to the target_processing_time field in the access logs.Further ReadingAWS Documentation - CloudWatch Metrics for Your Application Load Balancer
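To pull it programmatically, query the AWS/ApplicationELB namespace with the LoadBalancer dimension; the dimension value is the part of the ALB ARN after "loadbalancer/", and the one below is a placeholder. A boto3 sketch:

from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/ApplicationELB",
    MetricName="TargetResponseTime",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/my-alb/50dc6c495c0c9188"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average", "Maximum"],
)
for point in sorted(resp["Datapoints"], key=lambda d: d["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 3), round(point["Maximum"], 3))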
Is there any way to get latency from AWS/ApplicationELB namespace? I know it is available in the AWS/ELB namespace, but I need it for AWS/ApplicationELB, as this is what I use.
How to get latency metric from AWS CloudWatch Application ELB?
1) Your AWS CLI command should look something like the one below (refer to this documentation):

aws rekognition search-faces-by-image \
  --image '{"S3Object":{"Bucket":"bucket-name","Name":"Example.jpg"}}' \
  --collection-id "collection-id" \
  --region us-east-1 \
  --profile adminuser

2) If your AWS CLI is installed on a Windows box, make sure you change "the single quotes to double quotes and the double quotes to escaped quotes".
I have the AWS CLI installed on Windows and am using the Windows command prompt.I am trying to use Rekognition but I cannot seem to get any commands working. The closest I have gotten is with:aws rekognition detect-faces --image S3Object=\{Bucket=innovation-bucket,Name=image.jpg,Version=1\} --attributes "ALL" --region us-east-1This results in:Error parsing parameter '--image': Expected: ',', received: '}' for input: S3Object={Bucket=innovation-bucket,Name=image.jpg,Version=1}Why is it expecting a comma?EDIT:When I try the format from the documentation I also get errors:aws rekognition detect-faces --image '{"S3Object":{"Bucket":"innovation-bucket","Name":"image.jpg"}}' --attributes "ALL" --region us-east-1Error parsing parameter '--image': Expected: '=', received ''' for input: '{"S3Object":{"Bucket":"innovation-bucket","Name":"image.jpg‌​"}}'
Calling Rekognition using AWS CLI
I haven't been able to remove the stage named "Stage", but when I deploy using SAM, I set a dynamic StageName in my API Gateway deployment using:

Properties:
  StageName: !Ref "STAGE_VARIABLE"

I have a different stack for each environment, so there is a prod API with a prod stage and a dev API with a dev stage. I found this easier than having multiple stage deployments of the same API Gateway API.
I'm using AWS SAM (Serverless Application Model) to create a lambda with an API endpoint.In my SAM template.yaml I have a getUser lambda with a /user endpoint.template.yamlResources: GetUser: Type: AWS::Serverless::Function Properties: CodeUri: ./src Handler: handler.getUser Timeout: 300 Runtime: nodejs6.10 Events: GetUser: Type: Api Properties: Path: /user Method: getWhen I deploy this using AWS CLI it successfully creates the lambda and endpoint, but with an API Gateway Stage confusingly named "Stage". I want to change stage name to something else, like "Prod". How do I change stage name?Here's where stage name is defined in the cloudformation template after it is deployed. I want "StageName": "Stage" to be something like "StageName": "Prod"."ServerlessRestApiDeployment": { "Type": "AWS::ApiGateway::Deployment", "Properties": { "RestApiId": { "Ref": "ServerlessRestApi" }, "StageName": "Stage" }
AWS Serverless Application Model (SAM) -- How do I change StageName?
triggers is not a valid argument for an aws_instance resource. The usual way to pass configuration to cloud-init is via the user_data argument, like this:

resource "aws_instance" "bootstrap2" {
  ami                         = "${var.aws_centos_ami}"
  availability_zone           = "eu-west-1b"
  instance_type               = "t2.micro"
  key_name                    = "${var.aws_key_name}"
  security_groups             = ["${aws_security_group.bastion.id}"]
  associate_public_ip_address = true
  private_ip                  = "10.0.0.12"
  source_dest_check           = false
  subnet_id                   = "${aws_subnet.eu-west-1b-public.id}"

  # Pass templated configuration to cloud-init
  user_data = "${data.template_file.test.rendered}"

  tags {
    Name = "bootstrap2"
  }
}
I am trying to initialize AWS instances using cloud-init, I test with terraform code:variable "hostname" {} variable "domain_name" {} variable "filename" { default = "cloud-config.cfg" } data "template_file" "test" { template = <<EOF #cloud-config hostname: $${hostname} fqdn: $${fqdn} mounts: - [ ephemeral, null ] output: all: '| tee -a /var/log/cloud-init-output.log' EOF vars { hostname = "${var.hostname}" fqdn = "${format("%s.%s", var.hostname, var.domain_name)}" } } data "template_cloudinit_config" "test" { gzip = false base64_encode = false part { filename = "${var.filename}" content_type = "text/cloud-config" content = "${data.template_file.test.rendered}" } } resource "aws_instance" "bootstrap2" { ami = "${var.aws_centos_ami}" availability_zone = "eu-west-1b" instance_type = "t2.micro" key_name = "${var.aws_key_name}" security_groups = ["${aws_security_group.bastion.id}"] associate_public_ip_address = true private_ip = "10.0.0.12" source_dest_check = false subnet_id = "${aws_subnet.eu-west-1b-public.id}" triggers { template = "${data.template_file.test.rendered}" } tags { Name = "bootstrap2" } }But it is failing the triggers inside the "bootstrap" resource. So how can I aprovisione this instance with the cloud-config I defined up?
How can I initialize an instance with cloud-init on Terraform
The format for adding these via the CLI is a little non-intuitive.aws apigateway update-rest-api --rest-api-id [ID] --patch-operations "op=add,path=/binaryMediaTypes/image~1jpg" aws apigateway update-rest-api --rest-api-id [ID] --patch-operations "op=replace,path=/binaryMediaTypes/image~1jpg,value='image/gif'"
I'm attempting to configure and update the binary support options of an AWS API Gateway. I can do this through the web UI without issue, but I would like to script this.Using the CLI Command Reference pages:http://docs.aws.amazon.com/cli/latest/reference/apigateway/get-rest-api.htmlhttp://docs.aws.amazon.com/cli/latest/reference/apigateway/update-rest-api.htmlAble to issue a get-rest-api command just fine:C:\> aws apigateway get-rest-api --rest-api-id [ID] { "id": "[ID]", "createdDate": 1490723884, "name": "testbinarymediatypes" }But when attempting to update the binaryMediaTypes:PS C:\> aws apigateway update-rest-api --rest-api-id [ID] --patch-operations op=add,path=binaryMediaTypes,value='image/jpg'An error occurred (BadRequestException) when calling the UpdateRestApi operation: Invalid patch path binaryMediaTypesCan this be done or am I stuck manually adding the types in the web UI every time?
Updateing aws apigateway binaryMediaTypes
Please read the KopsSSH docs:When using the default images, the SSH username will be admin, and the SSH private key is be the private key corresponding to the public key in kops get secrets --type sshpublickey admin. When creating a new cluster, the SSH public key can be specified with the --ssh-public-key option, and it defaults to ~/.ssh/id_rsa.pub.So to answer your questions:Yes, you can set the key using--ssh-public-keyWhen--ssh-public-keyis not specified Kops does not autogenerate a key, but rather uses the key found in~.ssh/id_rsa.pub
I'm using KOPs to launch a Kubernetes cluster in the AWS environment.

Is there a way to set a predefined SSH key when calling create cluster?

If KOPs autogenerates the SSH key when running create cluster, is there a way to download this key to access the cluster nodes?
Define custom SSH key or find autogenerated SSH key in KOPs
TL;DR: Prometheus (usually) works by pulling metrics off a server, so I don't see how you could apply it directly to S3, unless you generate a dynamic page with the number of .png files in S3.

In detail: the way Prometheus works is by pulling metrics, available as HTTP pages, from servers. Your server will need to publish a special page called /metrics, and Prometheus will go there and get its contents.

If you can generate a dynamic public page on S3 that exports the current number of .png files in your bucket, then this should work; just point Prometheus at it.
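One practical way to get such a page is a small sidecar exporter that counts the objects and serves the number on /metrics for Prometheus to scrape. A rough sketch using boto3 and prometheus_client follows; the bucket name, prefix, port and scrape interval are all placeholder assumptions:

import time

import boto3
from prometheus_client import Gauge, start_http_server

BUCKET = 'my-bucket'   # placeholder
PREFIX = 'output/'     # placeholder: where the .png files are written
PNG_COUNT = Gauge('s3_png_objects', 'Number of .png objects under the prefix')

def count_pngs(s3):
    # Paginate so buckets with more than 1000 keys are counted correctly
    paginator = s3.get_paginator('list_objects_v2')
    total = 0
    for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
        total += sum(1 for obj in page.get('Contents', []) if obj['Key'].endswith('.png'))
    return total

if __name__ == '__main__':
    s3 = boto3.client('s3')
    start_http_server(8000)   # serves /metrics on :8000 for Prometheus to scrape
    while True:
        PNG_COUNT.set(count_pngs(s3))
        time.sleep(60)

An alerting rule on that gauge (for example, no increase for X hours after a .zip arrived) then covers the "alert if no .png appears" requirement.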
I'm designing an architecture that is similar to what's described here.

My question is: how do you monitor such an architecture, where independent pieces compose into a logical unit? It's almost as if we need a monitoring system that checks S3 for .zip files and then polls S3 for the corresponding .png files. If after X hours no .png files are found, then alert.

Is there a tool that does timeseries analysis? Does Prometheus do this?
Monitoring a microservice architecture
Add an inbound rule for the security group attached to your server for the specific port you're using.
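For example, if your app-server listens on port 3000, the rule can also be added from a script; a sketch with boto3, where the security group ID and port are placeholders for your own values:

import boto3

ec2 = boto3.client('ec2')

ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',   # placeholder: the group attached to your instance
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 3000,             # placeholder: the port your Node.js API listens on
        'ToPort': 3000,
        'IpRanges': [{'CidrIp': '0.0.0.0/0'}],
    }],
)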
I am building an app in Node.js and I'm using AWS EC2 to host it. However, my HTTP requests are not working.

My app is split into two repositories: app-ui and app-server. app-server contains all of my server-side code/APIs. In app-ui, I am making simple POST requests such as:

$.ajax({
    type: "POST",
    url: "http://ec2-xx-xxx-xx/api/users",
    success: function(data) {
        console.log(data);
    },
    error: function(a) {
        console.log(a);
    }
});

However, I keep getting the net::ERR_CONNECTION_TIMED_OUT error. Does anyone know what might be happening?
HTTP requests not working on aws ec2
setVersionId would be something the SDK library itself uses to populate the versionId returned by the service when the object is created, so that you can retrieve it if you want to know what it is.

Version IDs in S3 are system-generated opaque strings that uniquely identify a specific version of an object. You can't assign them.

The documentation uses some unfortunate examples like "111111" and "222222," which do not resemble real version-ids. There's a better example further down the page, where you'll find this:

Unique version IDs are randomly generated, Unicode, UTF-8 encoded, URL-ready, opaque strings that are at most 1024 bytes long. An example version ID is 3/L4kqtJlcpXroDTDmJ+rmSpXd3dIbrHY+MTRCxf3vjVBH40Nr8X8gdRQBpUMLUo. Only Amazon S3 generates version IDs. They cannot be edited.

You don't get an error here because all this method does is set the versionId inside the PutObjectResult object in local memory after the upload has finished. It succeeds, but serves no purpose.

To store user-defined metadata with objects, such as your release/version-id, you'd need to use object metadata (x-amz-meta-*) or the new object tagging feature in S3.
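As an illustration of the metadata route, here is a sketch in Python with boto3 (the Java SDK exposes the same idea through ObjectMetadata.addUserMetadata); the bucket, key, local file and release label are placeholders:

import boto3

s3 = boto3.client('s3')

# Attach your own release label as user-defined metadata (stored as x-amz-meta-release)
with open('app.jar', 'rb') as body:                # placeholder local file
    s3.put_object(
        Bucket='my-bucket',                        # placeholder
        Key='artifacts/app.jar',                   # placeholder
        Body=body,
        Metadata={'release': '2017.03.1'},         # placeholder label shared by the batch
    )

# The label comes back on HEAD/GET, so objects can be grouped by release later
head = s3.head_object(Bucket='my-bucket', Key='artifacts/app.jar')
print(head['Metadata'])   # {'release': '2017.03.1'}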
I want to upload an object to an Amazon versioned bucket (using the AWS SDK for Java) and set a custom version on this object (the goal is to set the same version on all objects uploaded at once).

PutObjectResult por = amazonS3Client.putObject(...);
por.setVersionId("custom_version");

So, is this the right way to set a version on the uploaded object?
Does this code lead to 2 separate requests to Amazon?
What if the Internet connection is broken while por.setVersionId(..) is being called?
Why does por.setVersionId(..) not throw an exception such as SdkClientException if this method really is trying to set a version ID on the Amazon server?
Upload an object to Amazon S3 with custom version id
Best would be to use mv with the --recursive parameter for multiple files.

When passed with the parameter --recursive, the following mv command recursively moves all files under a specified directory to a specified bucket and prefix while excluding some files by using an --exclude parameter. In this example, the directory myDir has the files test1.txt and test2.jpg:

aws s3 mv myDir s3://mybucket/ --recursive --exclude "*.jpg"

Output:

move: myDir/test1.txt to s3://mybucket/test1.txt

Hope this helps.
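If you want the stronger per-file guarantee the question asks for (only delete a local file once it is confirmed in S3), that check is easy to script. Below is a rough sketch with boto3 that compares sizes via a HEAD request before removing anything; the bucket, key prefix and local directory are placeholders modelled on the question's layout:

import os

import boto3
from botocore.exceptions import ClientError

BUCKET = 'usa-daily'                   # placeholder
PREFIX = 'System/'                     # placeholder: key prefix used by the sync
LOCAL_DIR = r'R:\DB_Backups3\System'   # placeholder

s3 = boto3.client('s3')

for root, _, files in os.walk(LOCAL_DIR):
    for name in files:
        local_path = os.path.join(root, name)
        rel_key = os.path.relpath(local_path, LOCAL_DIR).replace(os.sep, '/')
        key = PREFIX + rel_key
        try:
            head = s3.head_object(Bucket=BUCKET, Key=key)
        except ClientError:
            continue   # not found in S3 (or not visible): keep the local copy
        if head['ContentLength'] == os.path.getsize(local_path):
            os.remove(local_path)   # confirmed uploaded with matching size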
I have backup files in different directories on one drive. Files in those directories can be quite big, up to 800 GB or so. I have a batch file with a set of scripts which upload/sync the files to S3. See the example below:

aws s3 sync R:\DB_Backups3\System s3://usa-daily/System/ --exclude "*" --include "*/*/Diff/*"

The upload time can vary, but so far so good.

My question is: how do I edit the script, or create a new one, which checks in the S3 bucket that the files have been uploaded, and ONLY if they have been uploaded deletes them from the local drive; if not, leaves them on the drive? (Ideally it would check each file.)

I'm not familiar with aws s3, so is there an aws cli command that can do that? Please let me know if I made myself clear or if you need more details. Any help will be very appreciated.
AWS S3, Deleting files from local directory after upload
It depends entirely on the processing requirements and the frequency of processing.

You can use Amazon EMR for parsing the file and running the algorithm, and based on the requirement you can terminate the cluster or keep it alive for frequent processing.
https://aws.amazon.com/emr/getting-started/

You can try the recently launched Amazon Athena service, which will help you parse and process files stored in S3. The infrastructure is taken care of by Amazon.
http://docs.aws.amazon.com/athena/latest/ug/getting-started.html

For complex processing-flow requirements, you can use combinations of AWS services like AWS Data Pipeline (for managing the flow) and AWS EMR or EC2 (to run the processing task).
https://aws.amazon.com/datapipeline/

Hope this helps, thanks
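For the Athena route, the query itself can even be kicked off from a small Lambda function, since Athena does the heavy scanning on its side. A sketch with boto3; the database, table, query and output location are placeholders, and it assumes the file has already been registered as an Athena table:

import boto3

athena = boto3.client('athena')

response = athena.start_query_execution(
    QueryString='SELECT col_a, COUNT(*) FROM my_table GROUP BY col_a',         # placeholder query
    QueryExecutionContext={'Database': 'my_database'},                         # placeholder
    ResultConfiguration={'OutputLocation': 's3://my-bucket/athena-results/'},  # placeholder
)

# The query runs asynchronously; poll get_query_execution() or fetch the results later
print(response['QueryExecutionId'])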
I have a file of more than 30 GB stored in S3, and I want to write a Lambda function which will access that file, parse it and then run some algorithm on it. I am not sure if my Lambda function can take that big a file and work on it, as the max execution time for a Lambda function is 300 sec (5 min). I found the AWS S3 transfer acceleration feature, but will it help?

Considering the scenario, can anyone suggest a service other than a Lambda function to host my code as a microservice and parse the file?

Thanks in advance
Accessing Large files stored in AWS s3 using AWS Lambda functions
You should be surprised only if you get no response when you ping instance B's private address.

Your subnet's routing table will route the public address to outside your VPC. When it goes out of the VPC and the traffic comes back in, the source address will be the public IP (or NAT IP), not the private IP. The routing table comes before the security group. When you send traffic to another machine:

The DNS name is resolved to an IP. In your case you specify an IP, so no DNS resolution takes place.
If the address falls under one of the routing table rules, it will be routed accordingly.
When you specify the private IP, it is most likely routed internally, the security group allows the traffic, and you are able to ping.
When you specify the public IP, it is most likely routed out. Without looking at the subnet routing table, it is hard to guess where the traffic goes. Show us the routing table and I can tell you exactly what is happening. In this case, the source address will be the public IP (or NAT IP), not the private IP.

Use traceroute or lft to track the network hops.
I have two instances (instance A and instance B); both are part of the same security group (say sg-1). The security group has the following inbound rule set:

Type: All traffic
Protocol: All
Port Range: All
Source: sg-1

I can ping instance B from instance A using instance B's private IP, but I get no response when I use instance B's public address. What am I missing?

Edit: If I change the source in the above security configuration to "Anywhere", ping to the public IP works.
Why can't I ping an aws instance by public IP even though I can ping the private IP just fine?
You need to deploy the API first, which will create the deployment and ask you to create a stage. This step is not totally clear, in my opinion.
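If you would rather script that step than click through the console, the same deployment can be created with boto3; a minimal sketch, where the API ID and stage name are placeholders:

import boto3

apigw = boto3.client('apigateway')

# Creating a deployment together with a stage is the scripted equivalent of the
# console's "Deploy API" action
apigw.create_deployment(
    restApiId='abc123',   # placeholder: your REST API's ID
    stageName='dev',      # placeholder: the stage to create for this deployment
)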
I would like to know why the deployment zone is still greyed out. I've tried a lot of things but it's still grey. I'm trying to make an API from a Lambda.

Thanks and regards
Why is the deployment zone greyed out in AWS API Gateway deployment?
Seems like you've granted permissions on the objects, not the bucket. Your policy should allow listing the bucket. Try specifying the bucket name in the policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:*"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::MY_BUCKET"
        }
    ]
}

Note MY_BUCKET instead of MY_BUCKET/*.
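The 403 itself comes from how the existence check works under the hood: it is a HEAD request, and without list permission on the bucket S3 returns 403 for a missing key instead of 404. A quick way to see the distinction, sketched here in Python with boto3 (the Java doesObjectExist call boils down to the same HEAD request; bucket and key are placeholders):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')

def object_exists(bucket, key):
    try:
        s3.head_object(Bucket=bucket, Key=key)
        return True
    except ClientError as e:
        if e.response['ResponseMetadata']['HTTPStatusCode'] == 404:
            return False   # genuinely missing; seeing 404 requires list permission on the bucket
        raise              # 403 here usually means the policy lacks bucket-level permissions

print(object_exists('MY_BUCKET', 'some/key'))   # placeholder bucket/key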
I use the AmazonS3Client from the AWS SDK for Java in version 1.11.66 to check for the existence of a key in S3:

s3client.doesObjectExist(bucketName, key);

If I give it an existing key name, it properly returns true. For non-existing keys I always get an AmazonS3Exception informing me about a 403 coming back from the API. What do I have to change to make it return false?

The IAM policy for the service looks like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:*"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::MY_BUCKET/*"
        }
    ]
}
AWS SDK throws 403 when checking for non-existing S3 key, but returns true for existing key
There are already utils that do this, no need to roll your own: ec2ssh is a Python script that does it, but a Google search for ec2ssh will turn up numerous similar tools in multiple languages that will do the job.

Personally, I set up a bastion with an EIP and jump from there to all the other hosts. This way you don't need to give your instances public IPs just for admin access. If you're not transferring large files, you can get away with a t2.nano as the bastion instance, which with a reservation costs you peanuts a month.

ec2ssh has bastion support, so the config overhead is minimal.
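If you do want a tiny helper of your own, resolving the instance's current address from its instance ID is a single API call; a sketch with boto3, where the region, instance ID and SSH username are placeholder assumptions:

import subprocess

import boto3

ec2 = boto3.client('ec2', region_name='eu-west-1')   # placeholder region

def public_dns(instance_id):
    # Look up the instance's current public DNS name (it changes across stop/start)
    reservations = ec2.describe_instances(InstanceIds=[instance_id])['Reservations']
    return reservations[0]['Instances'][0].get('PublicDnsName')

host = public_dns('i-0123456789abcdef0')                 # placeholder instance ID
subprocess.call(['ssh', 'ec2-user@{}'.format(host)])     # assumes an Amazon Linux username

Note that this lookup needs AWS credentials, so for people without them the bastion approach above is the simpler option.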
I would like to turn some of my Amazon EC2 instances on and off, but this causes the IP and all DNS names to change. Therefore, when I boot my machine again, all my SSH configurations are lost, since I was connecting using the previous DNS name.

Is there a simple way to resolve the DNS of the target machine (with no, or as low as possible, cost) using only its instance ID (or any other parameter that does not change over shutdowns/restarts)? Do I have to use the AWS CLI? What if I want to provide access to an EC2 machine to someone who doesn't have AWS credentials?

Not sure if tags like "service-discovery", "broker", or "proxy" would really make sense here, but for the sake of references I'm adding them to my post. I do not want to pay for Elastic IPs.
SSH to Amazon EC2 using instance ID only
It is a list. So add one more to that list.

"SecurityGroupIngress" : [
  {
    "IpProtocol" : "tcp",
    "FromPort" : "22",
    "ToPort" : "22",
    "CidrIp" : "0.0.0.0/0"
  },
  {
    "IpProtocol" : "tcp",
    "FromPort" : "80",
    "ToPort" : "80",
    "CidrIp" : "0.0.0.0/0"
  }
]
I'm trying to open multiple ports for EC2 instances within a security group using CloudFormation. However, I can't find documentation for proper syntax for doing so (opening multiple ports). Would something like the following work?

"InstanceSecurityGroup" : {
  "Type" : "AWS::EC2::SecurityGroup",
  "Properties" : {
    "GroupDescription" : "Web Security Group",
    "SecurityGroupIngress" : [
      {
        "IpProtocol" : "tcp",
        "FromPort" : "22",
        "ToPort" : "22",
        "FromPort" : "80",
        "ToPort" : "80",
        "CidrIp" : "0.0.0.0/0"
      }
    ]
  }
}

Thanks in advance!
What is the proper syntax for opening multiple ports in a security group with CloudFormation?
Log in to AWS - EC2
Go to "NETWORK & SECURITY" -> "Security Groups"
Find the group your instance is a part of
Click on "Inbound"
Add the HTTP port 80
Apply the changes.

Source: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/authorizing-access-to-an-instance.html
I'm fairly new to AWS and I'm trying to create an Amazon Linux server to run PHP code. I'm following the tutorial on AWS, but I can't connect to my server using either the public DNS or the IP address. The error I'm getting is "This site can't be reached. [Public DNS] refused to connect". The instance is currently running, and I've successfully connected to it using PuTTY and WinSCP. I double-checked the security group for my server, and I have port 80 open to all IP addresses.
AWS - Cannot Connect To EC2 Server With Browser Using Public DNS/IP
Yes, it will immediately and permanently be out of sync with the master. Once promoted, you can't undo it.

That is the reason you promote a replica -- to disconnect it from the master and make it an independent instance.

The Read Replica, when promoted, stops receiving WAL communications and is no longer a read-only instance.

http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
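For reference, the promotion itself can also be done from a script; a one-line sketch with boto3 (the replica identifier is a placeholder), keeping in mind that it cannot be reversed:

import boto3

rds = boto3.client('rds')

# Irreversible: after this call the replica stops replicating and becomes a standalone instance
rds.promote_read_replica(DBInstanceIdentifier='my-read-replica')   # placeholder identifier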
I am trying to understand whether promoting a read replica to a DB instance will maintain the replication/mirroring on the newly created DB instance. I am wondering if the newly created DB instance will go out of sync with the master.
Will promoting an AWS RDS read replica to a DB instance maintain replication for the new DB instance?