Columns: Response (string), Instruction (string), Prompt (string)
You can create CloudWatch agent config files in the /etc/amazon/amazon-cloudwatch-agent/amazon-cloudwatch-agent.d/ directory. The config file should look like:

{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "path_to_log_file/app1.log",
            "log_group_name": "/app/custom.log",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}

Restarting the CloudWatch agent will pick up this configuration automatically. Another way is to append the config file manually using the command:

/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a append-config -m ec2 -s -c file:/path_to_json/custom_log.json

This log group will then be available in the CloudWatch Logs console.
We send up custom metrics to AWS using Python (see existing code below) and separately use the AWS CloudWatch Agent to send up metrics for our EC2 machine. However, we'd like to stop sending the custom metrics through a boto client and instead send them up using the AWS CloudWatch agent. I've found details on how to send up custom metrics from StatsD and collectd, but it's unclear how to send up your own custom metrics. I'm guessing we'll have to export our metrics in a similar data format to one of these, but it's unclear how to do that. In summary, we need to:

Export the metric in Python to a log file in the right format
Update the AWS CloudWatch Agent to read from those log files and upload the metric

Does anyone have an example that covers that?

Existing Code

import boto3

cloudwatch = boto3.client(
    service_name="cloudwatch",
    region_name=env["AWS_DEPLOYED_REGION"],
    api_version="2010-08-01",
)

cloudwatch.put_metric_data(
    Namespace="myNameSpace",
    MetricData=[
        {
            "MetricName": "someName",
            "Dimensions": [
                {"Name": "Stage", "Value": "..."},
                {"Name": "Purpose", "Value": "..."},
            ],
            "Values": values,
            "StorageResolution": 60,
            "Unit": "someUnit",
        },
    ],
)
Logging custom metrics using AWS Cloudwatch Agent and Python
I don't have a Spark Application option because I created a Core Hadoop cluster. When I created the cluster, under Software configuration, I should have chosen Spark; then I would have had the Spark application option under Step type.
According to the docs: "For Step type, choose Spark application." But in Amazon EMR -> Clusters -> mycluster -> Steps -> Add step -> Step type, the only options are:
How to add an EMR Spark Step?
This used to be a limitation due to the way data had been stored in the back end, but it doesn't apply (to the original extent, see jellycsc's comment below) anymore. The reason for this recommendation was that, in the past, Amazon Simple Storage Service (S3) partitioned data using the key. With many files having the same prefix (e.g. all starting with the same year), this could have led to reduced performance when many files needed to be loaded from the same partition. However, since 2018, hashing and random prefixing of the S3 key is no longer required to see improved performance: https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-s3-announces-increased-request-rate-performance/
I was told by one of the consultants from AWS itself that, when naming folders (objects) in S3 with a date, I should use MM-DD-YYYY for faster S3 operations like GetObject, but I usually use YYYY-MM-DD. I don't understand what difference it makes. Is there a difference, and if yes, which one is better?
s3 bucket date path format for faster operations
I worked with AWS Support. The answer is: add an attribute mapping from the application's Subject to the AWS SSO value ${user:subject} with Format unspecified. Just a note: currently there is no AWS-documented requirement to map the Subject for a Custom SAML application; however, it seems to be required. Also, the global Attribute Mappings table documentation is currently hard to find under "Manage Your Identity Source" -> "Connect to Your Microsoft AD Directory" -> "Attribute Mappings" (even though this applies to all application types, not just Microsoft AD).
I would like to configure AWS SSO as an Enterprise SAML Connection. I tried to cobble together the proper configuration by stealing bits of Auth0's other SAML IdP examples, but I have not been able to get it working yet.

In AWS SSO:
- configured a new application
- set the Application ACS URL to https://<AUTH0 TENANT>.auth0.com/login/callback
- set the Application SAML audience to urn:auth0:<AUTH0 TENANT>:<AUTH0 CONNECTION NAME>
- downloaded the cert
- assigned a user

In Auth0:
- configured an Enterprise SAML Connection
- chose IdP domains
- uploaded the cert, pasted the Sign In and Sign Out URLs from AWS SSO

Currently, clicking "Test" on my Auth0 SAML Connection redirects to AWS SSO, I can log in, but then I get the error "Missing nameId format of subject". Has anyone successfully configured AWS SSO as an Auth0 Enterprise SAML Connection? Just to be clear, I'm not trying to configure Auth0 as my AWS IdP, so the Auth0 integrations AWS SSO doc does not apply.
How to set up AWS SSO as an Auth0 Enterprise SAML Connection
EC2 doesn't support HTTPS like this ("out of the box"). There are several ways of doing it, but I suggest you create an Application Load Balancer (https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html) and then configure HTTPS on it (https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html). Other solutions include using CloudFront, or configuring HTTPS directly on the instance (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/SSL-on-amazon-linux-2.html). Hope that makes sense.
I am working on a 2-player card game. The two client-facing pages are hosted on GitHub Pages and the node server is running on AWS. Everything works fine when I view my client side pages locally, but when I try to open them on GitHub Pages I get this error:

Mixed Content: The page at '' was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint ''. This request has been blocked; the content must be served over HTTPS.

So then I change the connection URL to include https like this:

var socket = io.connect("https://ec2-18-191-142-129.us-east-2.compute.amazonaws.com:3000");

And I get this error:

index.js:83 GET https://ec2-18-191-142-129.us-east-2.compute.amazonaws.com:3000/socket.io/?EIO=3&transport=polling&t=N71Cs6c net::ERR_SSL_PROTOCOL_ERROR

Here are my security groups:

Do I need to do something with an SSL certificate? Is it even possible with my current setup, as I don't have access to the domain I am hosting on (GitHub Pages)? If it's not possible, are there any online services I can host my client code on and get an SSL certificate, or do I have to buy a domain and hosting? Any help welcome, but please try to explain it because I am very new to all this. Thank you.
Can't connect to my AWS node server through secure (https) connection
Amazon SQS will not do what you are requesting. Also, I do not recommend doing any "tricks" to force it to delay. I would recommend that you look at AWS Step Functions. It can orchestrate interaction between AWS Lambda functions and can be configured to wait (sleep) for a period before invoking an AWS Lambda function.
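As a rough sketch of that Step Functions approach, the state machine definition below uses a Wait state before a Task state. All names, ARNs, and the exact delay are illustrative assumptions, not values from the original answer.

# Hypothetical sketch: a state machine that waits 3 hours, then invokes a Lambda function.
# The role ARN, Lambda ARN, and state machine name are placeholders.
import json
import boto3

sfn = boto3.client("stepfunctions")

definition = {
    "StartAt": "WaitThreeHours",
    "States": {
        "WaitThreeHours": {
            "Type": "Wait",
            "Seconds": 3 * 60 * 60,   # 3-hour delay before processing
            "Next": "ProcessMessage",
        },
        "ProcessMessage": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-message",
            "End": True,
        },
    },
}

response = sfn.create_state_machine(
    name="delayed-processing",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/states-execution-role",
)
print(response["stateMachineArn"])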
I need a queue to process messages after an x-hour delay. And I need a data-driven, all event-based approach, not using any schedulers and such. The scenario is that I have some live data that I send to an SNS topic and from there to different SQS queues to be consumed by different AWS Lambda functions. One of the Lambda functions needs to process the messages after a 3-hour delay. However, the maximum delivery delay is 15 minutes. If I read the message for the first time, it will automatically be deleted from SQS, as I am using event source mapping triggers to invoke the Lambda function. So, I am wondering how I could avoid deleting the message and make it invisible the first time it is processed? Any thoughts/help would be much appreciated.
How to make a SQS delay queue of x hours
You can use the --include and --exclude arguments:

aws s3 cp s3://myBucket/ /Users/myName/myFolder/ --recursive --exclude "*" --include "results_*"

All files will be excluded from the bucket except for files starting with results_. The order of the --exclude and --include arguments is important.
I have an S3 bucket:

aws s3 ls s3://myBucket/
    PRE 2020032600/
    PRE 2020032700/
    PRE 2020032800/
    PRE results_2020011200/
    PRE results_2020011300/
    PRE results_2020011400/
    PRE results_2020011500/

I want to copy locally only the folders that start with results_:

aws s3 cp s3://myBucket/*something /Users/myName/myFolder/ --recursive
Copy folders from S3 bucket with specific prefix
You can make use of Gateway Responses in API Gateway to modify the HTTP status code and response that goes back to a client. By default, for the scenario you have described, the response is the big message you see and the status code is 403. To change this:

Go to "Gateway Responses" on the left column for your API.
Select "Access Denied" and click on "Edit" on top right.
Click on "application/json" under "Response templates".
Modify the message there as {"message":"Your custom message"} in the "Response body template" section.
Deploy the API and wait for a minute for changes to propagate.

If you see the image below, I have changed the status code to 401 and the message to "Unauthorized".
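If you prefer to script the same change rather than click through the console, a boto3 sketch along these lines should work; the REST API ID, stage name, and message text are placeholders, not values from the original answer.

# Hypothetical sketch: customize the ACCESS_DENIED gateway response programmatically.
import boto3

apigw = boto3.client("apigateway")

apigw.put_gateway_response(
    restApiId="a1b2c3d4e5",
    responseType="ACCESS_DENIED",
    statusCode="401",
    responseTemplates={"application/json": '{"message": "Your custom message"}'},
)

# Deploy the API so the change takes effect on a stage.
apigw.create_deployment(restApiId="a1b2c3d4e5", stageName="prod")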
How can the default 403 body be changed from an AWS API Gateway resource policy error?

{"Message":"User: anonymous is not authorized to perform: execute-api:Invoke on resource:... with an explicit deny"}
AWS API Gateway change access denied response message from resource policy
Yes, you can use Lambda. Specifically, you can set up an Event Source Mapping between your SQS queue and your Lambda function. In this scenario, the Lambda service will be polling the SQS queue for you and invoking your function whenever there are messages. You don't have to do anything, in the sense that you don't have to worry about implementing the polling procedure. The Lambda service takes care of polling. It will also remove the message from the queue if your function completes successfully. Thus you don't have to explicitly delete messages from the queue.
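For illustration, an event source mapping can also be created programmatically; in this boto3 sketch the queue ARN and function name are placeholders, not resources from the original question.

# Hypothetical sketch: wire an SQS queue to a Lambda function with an event source mapping.
import boto3

lambda_client = boto3.client("lambda")

mapping = lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:mws-order-notifications",
    FunctionName="process-order-notification",
    BatchSize=10,   # up to 10 SQS messages per invocation
    Enabled=True,
)
print(mapping["UUID"])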
I am using AWS SQS for the Amazon MWS Order APIs. Every time someone orders from a seller account that has added me as a developer, Amazon will send the notification to my AWS SQS application. I can pull the notifications from there. But for this, I will have to create a scheduler to poll for the notifications. Can I use any other AWS service as a listener just to trigger my own service every time a notification is pushed to my destination by Amazon? Can I use Lambda functions for it? I am new to AWS, so I only know a little about it.
How to trigger an event every time a notification is pushed to my AWS SQS destination?
The page you referenced says:

Getting API Versions
To get the API version for a service, see the Locking the API Version section on the service's reference page, such as https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html for Amazon S3.

Therefore, the API version for Lambda can be found on: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Lambda.html
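Since the question mentions passing the version to boto3, here is a minimal sketch of pinning it there; the "2015-03-31" string is the commonly documented Lambda API version, but verify it against the reference page above.

# Sketch: pin the Lambda API version when creating a boto3 client.
import boto3

lambda_client = boto3.client("lambda", api_version="2015-03-31")

# Confirm which API version the client is actually using.
print(lambda_client.meta.service_model.api_version)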
AWS themselves mention that it's important to lock the API version you use, to prevent any unexpected changes to the API from breaking your code (see Locking API Versions). However, I can't seem to find the latest API version for AWS Lambda. I want to pass this to AWS Boto3 (docs here). Where do I find the latest API version?
Find latest AWS API version for locking the version
Your API calls to S3 are made using AWS credentials. If you want to invoke the HTTP HEAD (or HeadObject) operation on an S3 object, then your credentials need to have permission for the S3 object in question. Check the IAM policies associated with the IAM role that the Lambda function is using. You need the s3:GetObject permission. Note one additional thing with HeadObject: if the object you request does not exist, the error that S3 returns depends on whether or not you also have the s3:ListBucket permission:

If you have the s3:ListBucket permission on the bucket, S3 returns an HTTP status code 404 ("no such key") error
If you don't have the s3:ListBucket permission, S3 returns an HTTP status code 403 ("access denied") error

Here's an example of an S3 policy that would allow the S3 GetObject action against all objects in mybucket and also allow ListBucket on mybucket:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": ["arn:aws:s3:::mybucket/*"]
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": ["arn:aws:s3:::mybucket"]
        }
    ]
}
I'm creating an AWS Lambda function that tries to download a file (s3.download_file) to a temp dir that I create using the tempfile library from Python (3.6). Then I make some transformations to the file and I need to upload it (s3.upload_file) again. I'm confident about the life cycle of my temp dir: when the Lambda finishes its job, the temp dir is going to destroy itself. The Lambda returns an error related to a forbidden HeadObject operation. The exact error is:

"An error occurred (403) when calling the HeadObject operation: Forbidden"

How can I debug this error? I have already checked several sources; some of them talk about adjusting policies and checking permissions, but my question is: is there some step-by-step guide (which AWS doesn't have in its documentation) that would allow me to get past this problem?
ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
After including the Logins property in the parameters to getCredentialsForIdentity it worked:

async function switchRoles(region, identityId, roleArn, cognitoArn) {
  const user = await Auth.currentUserPoolUser();
  const cognitoidentity = new AWS.CognitoIdentity({ apiVersion: '2014-06-30', region });
  const params = {
    IdentityId: identityId,
    CustomRoleArn: roleArn,
    Logins: {
      [cognitoArn]: user
        .getSignInUserSession()
        .getIdToken()
        .getJwtToken(),
    },
  };
  return cognitoidentity
    .getCredentialsForIdentity(params)
    .promise()
    .then(data => {
      return {
        accessKeyId: data.Credentials.AccessKeyId,
        sessionToken: data.Credentials.SessionToken,
        secretAccessKey: data.Credentials.SecretKey,
        expireTime: data.Credentials.Expiration,
        expired: false,
      };
    })
    .catch(err => {
      console.log(err, err.stack);
      return null;
    });
}
I have a web application built with AWS Amplify and Cognito used for authentication/authorization. Cognito User Pools is the identity provider. Users are grouped into Cognito User Pools groups based on what permissions they should have. I want some users to be part of multiple groups (e.g. Admin users), which should have the sum of these groups' permissions. But since the user can only assume one role, I need to be able to switch roles in the app. I tried accomplishing this by using getCredentialsForIdentity:

const cognito_identity = new AWS.CognitoIdentity({ apiVersion: '2014-06-30', region: 'eu-central-1' });

var params = {
  IdentityId: 'some_identity',
  CustomRoleArn: 'arn:aws:iam::<account_id>:role/editors',
};

cognito_identity.getCredentialsForIdentity(params, function(err, data) {
  if (err) console.log(err, err.stack);
  else console.log(data);
});

When invoking the above code it fails with NotAuthorizedException: Access to Identity 'some_identity' is forbidden. What do I need to do to make it work?
How to switch IAM roles for AWS Cognito User belonging to multiple User Pool groups?
#!/bin/bash
for instance in $(aws ec2 describe-instances --query 'Reservations[*].Instances[*].[InstanceId]' --output text)
do
  managed=$(aws ssm describe-instance-information --filters "Key=InstanceIds,Values=$instance" --query 'InstanceInformationList[*].[AssociationStatus]' --output text)
  if [[ "$managed" != "Success" ]]; then managed="Not Managed"; fi
  aws ec2 describe-instances --instance-id $instance --output text --query 'Reservations[*].Instances[*].[InstanceId, Placement.AvailabilityZone, [Tags[?Key==`Name`].Value] [0][0], [Tags[?Key==`App`].Value] [0][0], [Tags[?Key==`Product`].Value] [0][0], [Tags[?Key==`Team`].Value] [0][0] ]'
  echo "$managed"
done

Save and make the script executable, then run:

script.sh > file.tsv

And finally import it into Excel.
I need to audit a large number of AWS accounts to determine which EC2 instances are missing the SSM agent. Then I need to have all those instances and their tags outputted. Running aws ssm describe-instance-information lists all the instances that have the agent installed and are running, but it doesn't list instances that are missing the agent or systems that might be turned off.
AWS SSM Agent - Using the aws cli, is there a way to list all the AWS instances that are missing the SSM agent?
You can install the Linux-compatible package using the following:

rm -rf node_modules/sharp
npm install --arch=x64 --platform=linux --target=10.15.0 sharp

Note that this also specifies a target NodeJS version; ensure it's the same version of Node you're using in your Lambda. This is straight out of the docs (see here).

However, that didn't solve my problems. My serverless configuration (using the serverless-bundle plugin) meant that my modules were being installed again in a separate folder, wiping out the platform-specific modules I had just manually installed. Two choices here:

use serverless-plugin-scripts to hook into the deploy events to run the above patch; or
run serverless in Docker, using a Linux container with a matching Node version.

For my specific edge case I had to go with Docker. The build-scripts approach will affect every function you're deploying -- adding ~30 MB of sharp code -- and Lambda@Edge has limitations on source code size.
I'm developing a Serverless Framework application that is using the Node runtime and is deployed to AWS. One of my AWS Lambda functions uses the sharp library. When I run the AWS Lambda function, the following error occurs:

'darwin-x64' binaries cannot be used on the 'linux-x64' platform. Please remove the 'node_modules/sharp/vendor' directory and run 'npm install'.

I believe this error is occurring because when I run the sls deploy command on my local computer, the application is packaged on macOS and then moved to AWS. I think the application needs to be packaged on an operating system using linux-x64. How can I deploy my Serverless Framework application from my computer and still be able to use the sharp library?
How can I deploy a Serverless Framework application that uses the sharp library from macOS to AWS?
It looks like this is not possible. Though the ApproximateReceiveCount attribute is likely to suffice for my use case, at least.
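For reference, a hedged boto3 sketch of that ApproximateReceiveCount-based workaround; the queue URL and the backoff formula are placeholders, not part of the original answer.

# Hypothetical sketch: adjust a message's visibility timeout based on how many times
# it has already been received.
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-2.amazonaws.com/123456789012/my-queue"

resp = sqs.receive_message(
    QueueUrl=queue_url,
    AttributeNames=["ApproximateReceiveCount"],
    MaxNumberOfMessages=1,
)

for msg in resp.get("Messages", []):
    receive_count = int(msg["Attributes"]["ApproximateReceiveCount"])
    # Simple backoff: a longer timeout each time the message is re-received.
    sqs.change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=msg["ReceiptHandle"],
        VisibilityTimeout=min(30 * receive_count, 43200),  # cap at the 12-hour maximum
    )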
I'm pretty new to SQS, and am sorry if I glossed over something obvious, but is there a way to get the current visibility timeout for a message in SQS? I can see how to update the visibility timeout here, but I don't see any info on getting the current visibility timeout for a message (perhaps you can view it when receiving the message somehow). My use case is changing the visibility timeout based on the current visibility timeout for a given message. Is this possible? (Note: I'm aware I can use the approximate receive time to similar effect and will go that route if getting the current visibility timeout is impossible.)
Get current Visibility Timeout for a message
With kops v1.19 you need to add --admin or --user to update your Kubernetes cluster credentials, and each time you log out of your server you have to export the cluster name and the state-store bucket and then update the cluster config again. This will work.
I have created a kops cluster and am getting the below error when logging in to the cluster.

Error log:

INFO! KUBECONFIG env var set to /home/user/scripts/kube/kubeconfig.yaml
INFO! Testing kubectl connection....
error: You must be logged in to the server (Unauthorized)
ERROR! Test Failed, AWS role might not be recongized by cluster

I am using a script for iam-authentication and logged in to the server with the proper role before connecting. I am able to log in to another server which is in the same environment. I tried with different k8s versions and different configurations. KUBECONFIG doesn't have any problem and has the same entries and token details as the other cluster. I can see the token with the 'aws-iam-authenticator' command. I went through most of the articles and they didn't help.
kubectl : error: You must be logged in to the server (Unauthorized)
I have been able to successfully perform a cross-account CloudFront invalidation from my CodePipeline account (TOOLS) to my application (APP) accounts. I achieve this with a Lambda Action that is executed as follows:

CodePipeline starts a Deploy stage I call Invalidate.
The stage runs a Lambda function with the following UserParameters: the APP account roleArn to assume when creating the invalidation, the ID of the CloudFront distribution in the APP account, and the paths to be invalidated.
The Lambda function is configured to run with a role in the TOOLS account that can sts:AssumeRole a role from the APP account.
The APP account role permits being assumed by the TOOLS account and permits the creation of invalidations ("cloudfront:GetDistribution", "cloudfront:CreateInvalidation").
The Lambda function executes and assumes the APP account role. Using the credentials provided by the APP account role, the invalidation is started.
When the invalidation has started, the Lambda function puts a successful job result.

It's difficult and unfortunate that cross-account invalidations are not directly supported. But it does work!
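A minimal Python sketch of such a Lambda action is shown below; the role ARN, distribution ID, and UserParameters format are illustrative assumptions, not the poster's actual code.

# Hypothetical sketch of the cross-account invalidation Lambda described above.
# UserParameters are assumed to be a JSON string with roleArn, distributionId and paths.
import json
import time
import boto3

def handler(event, context):
    job = event["CodePipeline.job"]
    params = json.loads(job["data"]["actionConfiguration"]["configuration"]["UserParameters"])

    # Assume the role in the APP account.
    creds = boto3.client("sts").assume_role(
        RoleArn=params["roleArn"],
        RoleSessionName="cross-account-invalidation",
    )["Credentials"]

    cloudfront = boto3.client(
        "cloudfront",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

    cloudfront.create_invalidation(
        DistributionId=params["distributionId"],
        InvalidationBatch={
            "Paths": {"Quantity": len(params["paths"]), "Items": params["paths"]},
            "CallerReference": str(time.time()),
        },
    )

    # Report success back to CodePipeline so the stage can continue.
    boto3.client("codepipeline").put_job_success_result(jobId=job["id"])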
I have two AWS accounts (e.g. Account A & Account B). I have created a user and attached a policy (Customer Managed) which has the following permission in Account A:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "cloudfront:CreateInvalidation",
            "Resource": "arn:aws:cloudfront::{ACCOUNT-B_ACCOUNT-ID-WITHOUT-HYPHENS}:distribution/{ACCOUNT_B-CF-DISTRIBUTION-ID}"
        }
    ]
}

From the AWS CLI (which is configured with Account A's user) I'm trying to create an invalidation for the above-mentioned CF distribution ID in Account B. I'm getting access denied. Do we need any other permission to create an invalidation for a CF distribution in a different AWS account?
Create CloudFront invalidations in a cross-account AWS setup
There is no specific API in Retrofit to implement this behavior, because Retrofit uses OkHttp to handle network operations. But you can achieve it by implementing the Authenticator interface of the OkHttp client that you pass to the Retrofit builder. OkHttp calls the Authenticator for credentials when the response code is a 401 unauthorized error and then retries the failed request. You could implement the Authenticator like so:

public class TokenAuthenticator implements Authenticator {
    ApiService apiService;

    TokenAuthenticator(ApiService apiService) {
        this.apiService = apiService;
    }

    @Override
    public Request authenticate(Route route, Response response) throws IOException {
        // Refresh your token using a synchronous request
        String newToken = apiService.refreshToken().execute().body();

        // Add new token to the failed request header and retry it
        return response.request().newBuilder()
                .header("Authorization", newToken)
                .build();
    }
}

And then pass the TokenAuthenticator to OkHttpClient like this:

OkHttpClient okHttpClient = new OkHttpClient().newBuilder()
        .authenticator(new TokenAuthenticator(apiService))
        .build();

And finally, pass the OkHttpClient to the Retrofit builder:

Retrofit retrofit = new Retrofit.Builder()
        .client(okHttpClient)
        .baseUrl(BuildConfig.API_BASE_URL)
        .build();
Currently my login API is done using Lambda, and the response also contains a token along with its expiry time. Is there any way, like the OAuth token refresh in Retrofit, to refresh the Lambda-generated token? Thanks in advance. Any help appreciated.
How to refresh the token generated by lambda API using retrofit?
Well, I was using the Bitnami bncert-tool on Lightsail. I deleted the whole tool and installed it again, and everything worked fine. Here is how I removed any files related to the tool:

sudo rm -rf /opt/bitnami/bncert-tool
sudo rm -rf /opt/bitnami/bncert

Then I followed these steps to generate the certificate.
I am running a web domain against a Bitnami AWS AMI image... I have just changed to an elastic IP address and need to set up HTTPS for the site. I am running the bncert-tool but get the below error:

sudo /opt/bitnami/bncert-tool
----------------------------------------------------------------------------
Welcome to the Bitnami HTTPS Configuration tool.
----------------------------------------------------------------------------
Domains

Please provide a valid space-separated list of domains for which you wish to configure your web server.

Domain list []: blah.com.

The following domains were not included: www.blah.com.au. Do you want to add them? [Y/n]: y

Warning: The domain 'www.blah.com.au' resolves to a different IP address than the one detected for this machine, which is '13.210.101.***'. Please fix its DNS entries or remove it. For more info see: https://docs.bitnami.com/general/faq/configuration/configure-custom-domain/#

I have googled around and tried running:

sudo /opt/bitnami/mysql/bnconfig --machine_hostname blah.com

which does run, but makes no difference. Can anyone help?
Unable to create certificate for my AWS Bitnami wordpress website
I figured out the issue: I had to use the PUSH trigger instead of PULL_REQUEST_MERGED, and I also had corrupted webhooks in my GitHub repository. So this is how I solved it: I deleted all webhooks in GitHub, deleted the CodeBuild project, and added the PUSH trigger. Here's the triggers snippet:

Triggers:
  Webhook: true
  FilterGroups:
    - - Type: EVENT
        Pattern: PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED, PULL_REQUEST_REOPENED
      - Type: BASE_REF
        Pattern: !Sub "refs/heads/${GithubBranchName}$"
        ExcludeMatchedPattern: false
    - - Type: EVENT
        Pattern: PUSH
      - Type: HEAD_REF
        Pattern: !Sub "refs/heads/${GithubBranchName}$"
        ExcludeMatchedPattern: false
SourceVersion: !Sub ${GithubBranchName}

Then I recreated my CodeBuild project, so it recreated the relevant webhooks, and now everything works as expected.
The need: when merging a pull request to a branch, I want CodeBuild to build the latest branch's commit, not the pull request. I'm using CloudFormation; here's the triggers snippet:

Triggers:
  Webhook: true
  FilterGroups:
    - - Type: EVENT
        Pattern: PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED, PULL_REQUEST_REOPENED
      - Type: BASE_REF
        Pattern: !Sub "refs/heads/${GithubBranchName}$"
        ExcludeMatchedPattern: false

I've tried adding PULL_REQUEST_MERGED in the same CodeBuild project, but it builds the PR. I've also tried creating a new CodeBuild project with PULL_REQUEST_MERGED only, and I tweaked the BASE_REF and HEAD_REF, but still no luck; the pull request is built instead of the branch. Even though I'm using CloudFormation, feel free to reply with screenshots referring to the AWS Console. Is it even possible?
CodeBuild+GitHub - How can I build a branch upon PULL_REQUEST_MERGED?
Short answer: replace callback(null, results); with callback(null, event);

Reason: you have to return the result that Cognito will use to continue the authentication workflow. In this case, that is the event object.
I recently started working with AWS. I have integrated AWS Amplify using Cognito user pools for my user management (login & signup) and it went perfectly (the user pool gets updated whenever a new user registers). Now I have added a Cognito Post Confirmation trigger to save the registered email into a database, and here is my trigger code:

var mysql = require('mysql');
var config = require('./config.json');

var pool = mysql.createPool({
    host     : config.dbhost,
    user     : config.dbuser,
    password : config.dbpassword,
    database : config.dbname
});

exports.handler = (event, context, callback) => {
    let inserts = [event.request.userAttributes.email];
    context.callbackWaitsForEmptyEventLoop = false; //prevents duplicate entry
    pool.getConnection(function(error, connection) {
        connection.query({
            sql: 'INSERT INTO users (Email) VALUES (?);',
            timeout: 40000, // 40s
            values: inserts
        }, function (error, results, fields) {
            // And done with the connection.
            connection.release();
            // Handle error after the release.
            if (error) callback(error);
            else callback(null, results);
        });
    });
};

Whenever a user registers and confirms their email, this trigger is invoked and throws the error "Unrecognizable Lambda Output Cognito". Even though it throws this error, in the background my DB is getting inserted with the newly registered email, but I am unable to redirect my page because of it. Any help will be appreciated. Thanks, Aravind
AWS Unrecognizable Lambda Output Cognito error
AWS Glue offers fine-grained access only for tables/databases [1]. If you want to restrict users to only a few columns then you have to use AWS Lake Formation. Refer to this, which has examples. For example, if you want to give access to only two columns, prodcode and location, then you can achieve it as shown below:

aws lakeformation grant-permissions --principal DataLakePrincipalIdentifier=arn:aws:iam::111122223333:user/datalake_user1 --permissions "SELECT" --resource '{ "TableWithColumns": {"DatabaseName":"retail", "Name":"inventory", "ColumnNames": ["prodcode","location"]}}'
How do I give column-level access to particular roles in the Glue Catalog? I want to give Role_A permissions to only column_1 and column_2 of Table XYZ, and Role_B access to all columns of Table XYZ.
AWS Glue Column Level access Control
Your escaping is wrong. This is the right escaping:

mosquitto_pub -t \$aws/things/my-xxxx/shadow/update -m "{\"state\": {\"desired\": {\"temperature\": $1 }}}" -q 1

Remember that variables within single quotes ' are not interpolated. Regards!
I'm getting "JSON format error" from the AWS console when I try to publish a temperature value from a variable; this works correctly:mosquitto_pub -t \$aws/things/my-xxxx/shadow/update -m '{"state": {"desired": {"temperature": 1 }}}' -q 1I want to replace "1" with a variable so, I create a shell with the mosquitto_pub etc.., and I want to pass an argument to the shell, calling "./publish.sh Temperature_Value", where Temperature value is an int:Trying this I get errors from AWS console:DATA=${1} mosquitto_pub -t \$aws/things/my-xxxx/shadow/update -m '{"state": {"desired": {"temperature": $DATA }}}' -q 1What am I doing wrong? Thanks
Formatting JSON string for AWS IoT
aws cloudformation describe-stack-resources --physical-resource-id i-xxxxxxxxxx

Replace i-xxxxxxxxxx with your instance ID, or any other physical resource ID in general.

--physical-resource-id (string): The name or unique identifier that corresponds to a physical instance ID of a resource supported by AWS CloudFormation. For example, for an Amazon Elastic Compute Cloud (EC2) instance, PhysicalResourceId corresponds to the InstanceId. You can pass the EC2 InstanceId to DescribeStackResources to find which stack the instance belongs to and what other resources are part of the stack.
Required: Conditional. If you do not specify PhysicalResourceId, you must specify StackName.
Default: There is no default value.

Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_DescribeStackResources.html
I have an EC2 instance and I want to know which CloudFormation stack it belongs to, using the AWS CLI. To do this using boto in Python, refer to: How to determine what CloudFormation stack an AWS resource belongs to?
How to determine what CloudFormation stack an AWS resource belongs to using AWS CLI?
I had to write a post-deployment script that reads the API key value and inserts it into a DynamoDB table. I used a combination of a bash script and the AWS CLI.

# Get the api KeyName from cloudformation output
awsApiGatewayKeyId=$(get_cf_output ApiGKeyName)

# With the name you call get-api-keys and pass the KeyName.
# When used with --include-values it returns the API key in the 'value' property.
awsApiGatewayKey=$(aws apigateway get-api-keys --name-query $awsApiGatewayKeyId --include-values --query 'items[0].value' --output text)

# Insert the values into DynamoDB for the API authorizer.
$(aws dynamodb put-item --table-name $tokenAuthTable --item '{"ApiKey": { "S": "'"$awsApiGatewayKey"'" }, "HmacSigningKey": { "S": "'"$csapiSecretKey"'"},"Name": { "S": "'"$stackName"'"}}')

By doing this I didn't have to do the manual task of putting the record into DynamoDB. Please refer to this URL for getting the API key: https://docs.aws.amazon.com/cli/latest/reference/apigateway/get-api-key.html
I have a CloudFormation template that outputs variables. One of the output variables is:

ApiGKeyId:
  Description: "Api Key Id"
  Value: !Ref ApplicationApiGatewayApiKey

This returns the ID of the API Gateway key and not the actual value. Is there a way to get the value?
Output API key value in CloudFormation
There is no direct way of monitoring IAM users' data usage in AWS. One (complex) approach to get some monitoring of this kind, if you are using only S3, would be (a sketch follows this list):

Implement AWS CloudTrail for auditing
Use CloudTrail logs to monitor user activity, such as GetObject requests to S3
Implement functions to get the S3 object size and multiply by the number of get requests of each user
Implement alarms related to that consumption per user

It's neither a simple nor an efficient approach, but there is no direct way of doing so in AWS.
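As a rough, hedged sketch of steps two and three above: the code below sums GetObject bytes per IAM user from CloudTrail S3 data-event logs. It assumes data events are enabled and delivered as gzipped JSON; the bucket name, prefix, and the head_object-based size lookup are illustrative assumptions.

# Hypothetical sketch: total GetObject bytes per IAM user from CloudTrail S3 data events.
import gzip
import json
from collections import defaultdict
import boto3

s3 = boto3.client("s3")
usage = defaultdict(int)

logs = s3.list_objects_v2(Bucket="my-cloudtrail-bucket", Prefix="AWSLogs/")
for obj in logs.get("Contents", []):
    body = s3.get_object(Bucket="my-cloudtrail-bucket", Key=obj["Key"])["Body"].read()
    records = json.loads(gzip.decompress(body))["Records"]
    for r in records:
        if r.get("eventName") != "GetObject":
            continue
        user = r.get("userIdentity", {}).get("userName", "unknown")
        req = r.get("requestParameters", {})
        # Look up the object size and attribute it to the requesting user.
        head = s3.head_object(Bucket=req["bucketName"], Key=req["key"])
        usage[user] += head["ContentLength"]

print(dict(usage))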
I have a number of AWS IAM users and I would like to:

See when and how much data each user is accessing.
Limit the amount of monthly data each user can read (to keep costs under control).

Is there a way to do that from within AWS? The scenario I am trying to avoid here is having one user spam AWS S3 requests in a tight loop and generating a huge bill. I would like to be able to block access for that one particular user before they can rack up too huge a bill.
How can I monitor AWS S3 access by user?
As you have moved the domain into AWS, you need to move/create the MX (mail exchange) records in Route 53 too. Just create an MX record entry in Route 53 with the name set to your domain name and the values set to the list of mail servers, which you can grab from GoDaddy. Here is the link describing how to find the mail records on GoDaddy: https://au.godaddy.com/help/checking-and-managing-my-mx-records-7590 For more information about how to add MX records, follow this link: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/ResourceRecordTypes.html#MXFormat
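For illustration, the same MX record can also be created programmatically; the hosted zone ID, domain, priorities, and mail-server hostnames below are placeholders to be replaced with the values listed in your GoDaddy account.

# Hypothetical sketch: create an MX record in Route 53 pointing at your existing mail servers.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com.",
                "Type": "MX",
                "TTL": 3600,
                # Each value is "<priority> <mail server>" -- copy these from GoDaddy.
                "ResourceRecords": [
                    {"Value": "0 smtp.secureserver.net."},
                    {"Value": "10 mailstore1.secureserver.net."},
                ],
            },
        }]
    },
)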
Here's my scenario:

I bought a domain from GoDaddy
I set up email on GoDaddy as an add-on
I hosted a web application on AWS
In order to secure my API calls I needed to transfer my domain from GoDaddy to AWS (I should have bought the domain on AWS to begin with, but I didn't know I could do that)
I have successfully transferred my domain
Now my email (obviously) doesn't work anymore.

My question is: do I have to transfer email over to AWS as well, or is there just some setting that I will have to change on GoDaddy to point to AWS now? Is there a similar service on AWS (hosted email) that I can use?
How to transfer email from GoDaddy to AWS
The other option you have, and it might be a better one, is to use AWS DMS (Database Migration Service). See: Using a PostgreSQL Database as a Source for AWS DMS - AWS Database Migration Service
I have a lot of data in my local Postgres database on my machine. I need to denormalize the data in this local database and get a query set in a specific format which can be loaded directly into Redshift tables using Python. I do have queries that I can run on the local database to get the query set in the specific format that needs to be loaded into Redshift directly. But there is a lot of data that I need to move from local to Redshift. For now, the best way I could think of is exporting the query set I got into a .csv file, uploading it to an S3 bucket, and copying it directly into the Redshift tables using Python. I'm just wondering if there is any alternative way to do this, something like streaming directly from the Postgres database to AWS Redshift. Please let me know if uploading and dumping the .csv is the better way, or if there is any other more efficient way to achieve this.
How to transfer data from Postgres to Amazon Redshift efficiently?
Use foreach and build a new map:

#set($newInput={})
#foreach ($key in $ctx.args.input.keySet())
  #if($key!="firstName")
    $util.qr($newInput.put($key, $ctx.args.input.get($key)))
  #end
#end
I'm writing code for my GraphQL resolvers in AWS AppSync with resolver mapping templates. I know that there is a put method that I can use to add a field to the input object or any other object, like this (for example):

$util.qr($name.put("firstName", "$ctx.args.input.firstName"))

But now I want to remove a field from an object, for example the input object. Is there any method similar to the put method but for removing a field? Something like:

$util.qr($ctx.args.input.remove("firstName"))

I am new to AWS and DynamoDB and AppSync. (You can consider me an absolute beginner.)
In AWS Resolver Mapping Template, is there any method for removing a field from an object?
I followed @Michael-sqlbot's suggestion and found the reason it wasn't working. The problem in these settings is the 'ExpiredObjectDeleteMarker': True that is inside the Expiration key. In the boto3 documentation there is a note about it:

'ExpiredObjectDeleteMarker' cannot be specified with Days or Date in a Lifecycle Expiration Policy.

Fixing it, the settings become:

lifecycle_config_settings = {
    'Rules': [{
        'ID': 'My rule',
        'Expiration': {
            'Days': 30
        },
        'Filter': {'Prefix': myDirectory},
        'Status': 'Enabled'
    }]
}
I'm trying to set the lifecycle configuration of a subdirectory in an Amazon S3 bucket by using boto3 put_bucket_lifecycle_configuration. I used this code from the AWS documentation as a reference:

lifecycle_config_settings = {
    'Rules': [
        {'ID': 'S3 Glacier Transition Rule',
         'Filter': {'Prefix': ''},
         'Status': 'Enabled',
         'Transitions': [
             {'Days': 0,
              'StorageClass': 'GLACIER'}
         ]}
    ]}

I removed Transitions and added Expiration, to better fit my purposes. Here is my code:

myDirectory = 'table-data/'

lifecycle_config_settings = {
    'Rules': [{
        'ID': 'My rule',
        'Expiration': {
            'Days': 30,
            'ExpiredObjectDeleteMarker': True
        },
        'Filter': {'Prefix': myDirectory},
        'Status': 'Enabled'
    }]
}

s3 = boto3.client('s3')
s3.put_bucket_lifecycle_configuration(
    Bucket=myBucket,
    LifecycleConfiguration=lifecycle_config_settings
)

The error I'm receiving is:

An error occurred (MalformedXML) when calling the PutBucketLifecycleConfiguration operation: The XML you provided was not well-formed or did not validate against our published schema

What could be causing this error?
boto3 s3 Object expiration "MalformedXML" error
Try it like this (adjust LD_LIBRARY_PATH to your system):

LD_LIBRARY_PATH=/usr/local/lib:/usr/local/cuda-10.1/targets/x86_64-linux/lib/ ./darknet detector train lamp.data yolov3-lamps.cfg darknet53.conv.74 -gpus 0,1,2,3,4,5,6,7
I'm having trouble with YOLO training in a Jupyter notebook using AWS SageMaker. I want the darknet model to start training, but it doesn't work well. I tried the code below, and all of it runs fine:

! conda install cudatoolkit -y
! conda install cudnn -y
! conda install -c fragcolor cuda10.0 -y
! conda update --all -y

Then I tried to train the model:

! ./darknet detector train lamp.data yolov3-lamps.cfg darknet53.conv.74 -gpus 0,1,2,3,4,5,6,7

but this error happens:

./darknet: error while loading shared libraries: libcudart.so.10.0: cannot open shared object file: No such file or directory

How can I solve this problem?
How to fix ----- ./darknet: error while loading shared libraries: libcudart.so.10.0: cannot open shared object file: No such file or directory
I found an answer, in case anyone needs it, although the documentation is not good. To get a list of files in a particular S3 folder you need to use get_bucket and define a prefix. After this, search the list for the extension .csv to get a list of all .csv files in that S3 folder:

tmp = get_bucket(bucket = "my_bucket", prefix = "folder/subfolder")
list_csv = data.frame(tmp)
csv_paths = list_csv$Key[grep(".csv", list_csv$Key)]
I want to read CSV files in R that are in a given S3 directory. Each file is more than 6 GB in size, and every file is needed for further calculation in R. Imagine that I have 10 files in the S3 folder; I need to read each of them separately before a for loop. Firstly, I tried this, and it works in the case when I know the name of the CSV file:

library(aws.s3)

Sys.setenv("AWS_ACCESS_KEY_ID" = "xyy",
           "AWS_SECRET_ACCESS_KEY" = "yyx")

data <- s3read_using(FUN=read.csv, object="my_folder/file.csv",
                     sep = ",", stringsAsFactors = F, header=T)

However, how can I access multiple files without explicitly giving their names to the s3read_using function? This is necessary because I use partition() in Spark, which divides the original dataset into subparts with generic names (e.g. part1-0839709037fnfih.csv). If I could automatically list the CSV files from an S3 folder and use them in my calculation, that would be great.

get_ls_files <- .... # gives me list of all csv files in S3 folder

for (i in 1:length(get_ls_files)){
    filename = get_ls_files[i]
    tmp = s3read_using(FUN=read.csv, object=paste("my_folder/", filename),
                       sep = ",", stringsAsFactors = F, header=T)
    .....
}
Read files one by one from S3 in R
Although changing from gp2 to io1 might seem like an upgrade and changing back from io1 to gp2 might seem like a downgrade, the truth is more nuanced, because each volume type¹ has certain use cases where it is the best choice. As a rule, you can change from any volume type to any other volume type, with only one class of documented volume-type-related exceptions, due to mounting and size constraints on st1 and sc1 volumes. So changing from one type to another type is generally a non-issue, but the same is not true of size. It is not possible to make an EBS volume smaller; they can only be made larger. EBS volume modifications are a safe operation and can even be done "hot" with the disk still in use... but as a matter of best practice, you should always take an EBS snapshot before attempting a modification.

¹ each volume type except standard. I can't think of any case where a standard volume would be the best choice.
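If you do later want to change the type back, a hedged boto3 equivalent of the CLI command in the question might look like this; the volume ID is a placeholder, and the snapshot step simply follows the best practice mentioned above.

# Hypothetical sketch: change an io1 volume back to gp2.
import boto3

ec2 = boto3.client("ec2")

# Take a snapshot first, as recommended above.
ec2.create_snapshot(VolumeId="vol-0123456789abcdef0", Description="pre-modification snapshot")

# gp2 does not take a provisioned IOPS value, so only the type is specified here.
ec2.modify_volume(VolumeId="vol-0123456789abcdef0", VolumeType="gp2")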
I am currently using a gp2 EBS volume and I want to upgrade to an io1 volume in AWS for high IOPS for a few days. I want to know: once I upgrade it, will I be able to downgrade it back to gp2 after a few days, and if that is possible, is there any loss of data or any such scenario? Please help. I know how to upgrade the volume but I am not sure about downgrading it, so if anyone has ever tried it please help.

Current volume: gp2

I used this command to upgrade:

aws ec2 modify-volume --volume-type io1 --iops 10000 --size 200 --volume-id vol-1

What would be the appropriate way to downgrade it?
Can we downgrade from an io1 SSD to a gp2 SSD EBS volume in AWS?
The user created in AWS IAM that is configured with your AWS CLI (via access_key and secret_key) should have enough privileges to interact with AWS Lambda. I would prefer the AWSLambdaFullAccess policy attached to your user/role. This is just for testing purposes, and later you can reduce the privileges if you want. Once you have done the above, then if you run the command

aws lambda update-function-code --function-name "helloworld" --zip-file "fileb://./helloworld.zip" --region "eu-west-2"

it should work. Note that for update-function-code the only mandatory field is --function-name; the other fields are optional (see the aws cli update-function-code docs). Also please take note of the create-function command; it has just the following mandatory fields and all others are optional (see the aws cli docs):

create-function --function-name <value> --runtime <value> --role <value> --handler <value>

The --role here is the role the Lambda uses while executing to interact with other services (not to be confused with the user above).
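A hedged sketch of attaching that managed policy with boto3 follows; the user name matches the one in the question, and the policy ARN is the standard AWS-managed one, which you should verify in your account.

# Hypothetical sketch: attach the AWS-managed Lambda policy to the IAM user from the question.
# Newer accounts may use the AWSLambda_FullAccess policy instead; verify the ARN first.
import boto3

iam = boto3.client("iam")

iam.attach_user_policy(
    UserName="jeanclaude",
    PolicyArn="arn:aws:iam::aws:policy/AWSLambdaFullAccess",
)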
I want to create a Lambda deployment package in Python (with dependencies) using the Amazon tutorial. When I push the .zip package with

aws lambda update-function-code --function-name my-function --zip-file fileb://function.zip

I get the following error:

An error occurred (AccessDeniedException) when calling the UpdateFunctionCode operation: User: arn:aws:iam::<ACCOUNT-ID>:user/jeanclaude is not authorized to perform: lambda:UpdateFunctionCode on resource: arn:aws:lambda:eu-west-3:<ACCOUNT-ID>:function:my-function

Which policy should I grant to jeanclaude to give him the correct access?
Which policy to grant to IAM user to create lambda deployment package in Python?
Looks like you've already set up AWS Config via the console, so you have to delete the default delivery channel. Using the AWS CLI, run:

aws configservice get-status
aws configservice delete-configuration-recorder --configuration-recorder-name default
aws configservice delete-delivery-channel --delivery-channel-name default
I'm having issues updating/modifying AWS Config via CloudFormation. The use case is to begin streaming changes to an SNS topic on an existing AWS Config delivery channel that currently only has an S3 bucket configured on it. The code looks like the following:

"ConfigDeliveryChannel": {
    "Type": "AWS::Config::DeliveryChannel",
    "Properties": {
        "S3BucketName": {"Ref": "ConfigBucket"},
        "SnsTopicARN": { "Ref": "ConfigTopic" }
    }
},

Pretty basic stuff. I've tried to leave off the bucket since I don't want to update that, but CloudFormation complained it was required. OK, so I now have a parameter to take in the bucket name. The SNS topic is a resource created further down in my template. However, when I execute my stack I get the following error:

Failed to put delivery channel '<name>' because the maximum number of delivery channels: 1 is reached. (Service: AmazonConfig; Status Code: 400; Error Code: MaxNumberOfDeliveryChannelsExceededException; )

So how can this be done? Is this an impossible task via CloudFormation that must be done by hand via the console? Thanks in advance!
Modifying existing AWS Config Delivery channel via CloudFormation
You should be able to access query string parameters in Lambda using event["queryStringParameters"]["zip_code"] as long as "Use Lambda Proxy integration" is checked. If not, then you will need to set up custom mapping. In most use cases, proxy integration is the recommended option. See the docs:

In Lambda proxy integration, when a client submits an API request, API Gateway passes to the integrated Lambda function the raw request as-is, except that the order of the request parameters is not preserved. This request data includes the request headers, query string parameters, URL path variables, payload, and API configuration data.

Non-proxy integration requires custom mapping:

In Lambda non-proxy integration, in addition to the proxy integration setup steps, you also specify how the incoming request data is mapped to the integration request and how the resulting integration response data is mapped to the method response.
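As a minimal illustration (shown here in Python for convenience, although the question's snippets look like Ruby), a proxy-integration handler reads the parameter like this:

# Minimal sketch of a Lambda proxy integration handler reading a query string parameter.
# Assumes the function sits behind API Gateway with "Use Lambda Proxy integration" enabled.
import json

def handler(event, context):
    params = event.get("queryStringParameters") or {}
    zip_code = params.get("zip_code")

    # Proxy integrations must return a response in this shape.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"zip_code": zip_code}),
    }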
My lambda can useevents["zip_code"]when testing it in the management console and configuring the test event to have that.How can I configure my APIgateway to pass zip_code ?I've tried hundreds of different approaches over several hours. this should take about 30 seconds to figure out!Current (stripped down) attempt:I create a lambda. It can refer toevent[zip_code"]no problem. I create an API Gateway that points to it and I can call it. However every attempt I make to refer to the query string parameter in the lambda has failed.I have tried:event["zip_code"] event["query_parameters"]["zip_code"] event["queryStringParameters"]["zip_code"]but they all give nil.I've tried publishing my lambda (probably needed) and I've tried deploying my API to a particular stage, 'DEV' but neither seemed to help.
How to refer to query string param in lambda
Try out the following:

Try to specify the log types as follows:

RDSCluster:
  Type: AWS::RDS::DBCluster
  Properties:
    EnableCloudwatchLogsExports:
      - "error"
      - "general"
      - "slowquery"
      - "audit"

Make sure the DB Cluster Parameter Group used has advanced auditing enabled, for example that server_audit_logging has the value ON. Reference: Using Advanced Auditing with an Amazon Aurora MySQL DB Cluster.

If the above two approaches don't work, try to change the MySQL version.
I've got a CloudFormation template that I'm using to create an RDS aurora-mysql 5.7 cluster. I'm trying to add the EnableCloudwatchLogsExports parameter to it:

RDSCluster:
  Type: AWS::RDS::DBCluster
  Properties:
    EnableCloudwatchLogsExports:
      - String

And there is the question... All my attempts with 'error', 'errors', 'error log', etc. finished with a rollback and this error message:

You cannot use the log types 'error logs' with engine version aurora-mysql 5.7.12. For supported log types, see the documentation. (Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterCombination; Request ID: 16f5c442-6969-44aa-a67e-12f9ca524055)

I'd like to publish the Audit, Error, General and Slow query logs to CloudWatch. I can't find in the docs what the 'allowed'/supported values for this property are.
Create RDS aurora-mysql5.7 via CloudFormation with EnableCloudwatchLogsExports parameter
You will need to use the s3api to handle the query, which uses the JMESPath syntax:

aws s3api list-objects-v2 --bucket BUCKET --query "Contents[?(LastModified>='2019-09-12' && LastModified<='2019-09-15')].Key"

You can also specify the time as well:

aws s3api list-objects-v2 --bucket BUCKET --query "Contents[?(LastModified>='2019-09-12T12:00:00.00Z' && LastModified<='2019-09-15T12:00:00.00Z')].Key"

The downside to this approach is that it must list every object and perform the query. For large buckets, if you can limit the listing to a prefix it will speed up your search:

aws s3api list-objects-v2 --bucket BUCKET --prefix PREFIX --query "Contents[?(LastModified>='2019-09-12T12:00:00.00Z' && LastModified<='2019-09-15T12:00:00.00Z')].Key"

And if your primary lookup is by date, then look to store the objects in date/time sort order, as you can use the prefix option to speed up your searches. A couple of examples:

prefix/20190615T041019Z.json.gz
2019/06/15/T041019Z.json.gz

This will let you use --prefix to narrow the listing to the dates you care about.
I am able to filter a particular date's data but not a date range, like 12-09-2019 to 15-09-2019, using the AWS CLI. E.g. to filter 2019's data I am using --recursive --exclude "*" --include "2019".
How can I download files from an S3 bucket for a given date range, like 8th Aug to 15th Aug, using the AWS CLI?
As has already been mentioned in the responses, LD_LIBRARY_PATH needs to be set before the script starts. So, a way to avoid using LD_LIBRARY_PATH is to set the rpath in the .so file. Below are the steps needed.

You will need to update the rpath in your .so file. This can be done using the patchelf package. Please also include libaio.so.1 with your .so files, which you might have obtained by running sudo apt-get install libaio1.

Installing patchelf:

sudo apt-get update
sudo apt-get install patchelf

To update rpath to point at your lib directory:

patchelf --set-rpath <absolute_path_to_library_dir> libclntsh.so

Upload the .so files with the updated rpath to your Glue env lib directory. In your script you can then load the library:

from ctypes import *
cdll.LoadLibrary('<absolute_path_to_library_dir>/libclntsh.so')
I am working on AWS Glue Python Shell. I want to connect the Python shell to Oracle. I was successful installing the psycopg2 and mysql libraries, but when I tried to connect to Oracle using cx_Oracle, I installed the library successfully yet I am facing the error:

DatabaseError: DPI-1047: Cannot locate a 64-bit Oracle Client library: "libclntsh.so: cannot open shared object file: No such file or directory"

I have tried the following things:

I downloaded the .so files from S3 and placed them in a lib folder parallel to the code file
I set LD_LIBRARY_PATH and ORACLE_HOME using os.environ

I am using the following code:

import boto3
import os
import sys
import site
from setuptools.command import easy_install

s3 = boto3.client('s3')
dir_path = os.path.dirname(os.path.realpath(__file__))
#os.path.dirname(sys.modules['__main__'].__file__)
install_path = os.environ['GLUE_INSTALLATION']
easy_install.main( ["--install-dir", install_path, "cx_Oracle"] )
importlib.reload(site)

import cx_Oracle

conn_str = u'{username}/{password}@{host}:{port}/{sid}'
conn = cx_Oracle.connect(conn_str)
c = conn.cursor()
c.execute(u'select * from hr.countries')

for row in c:
    print(row[0], "-", row[1])

conn.close()
print('hello I am here')

I should be able to connect to Oracle on AWS Glue Python Shell.
Connection with Oracle cx_Oracle problem with AWS Glue Python Shell
Found the solution here: github issues. Relevant quote: "I found the source of my problem... My ~/.aws/config file contained entries called [default] and [profile default], which causes the symptom." So I removed the [default] entry and just left my [profile default], and then amplify init went through normally!
Using amplify init, right after choosing which profile to use, I get this error and am not sure why:

✖ Root stack creation failed
init failed
TypeError: Cannot redefine property: default

I tried changing a different user to be my default in my credentials file and then picking the default profile in the amplify init step for that - same error. I also tried saying I didn't want to use a profile and instead putting in my access key ID and secret key manually; that also didn't work.
Amplify Init Error - ✖ Root stack creation failed init failed TypeError: Cannot redefine property: default
You ask: "Is there any way...that I can get thesame message in both the terminal windows at the same time."This is not the way Amazon SQS operates. The general flow of Amazon SQS is:Messages are sent to the queueThe messages sit in the queue for up to 14 days (can be extended)A consumer callsReceiveMessages(), asking for up to 10 messages at a timeWhen a message is received, it is marked asinvisibleWhen a consumer has finished processing the message, the consumer callsDeleteMessage()to remove the message from the queueIf the consumer doesnotcallDeleteMessage()within theinvisibility timeout period, the message willreappearon the queue and will be available for a consumer to receiveThus, messages are intentionally only available to one consumer at a time. Once a message is grabbed, it is specifically not available for other consumers to receive.If your requirement is for two consumers to receive the same message, then you will need to redesign your architecture. You do not provide enough details to recommend a particular approach, but options include using multiple Amazon SQS queues or sending messages directly via Amazon SNS rather than Amazon SQS.
I have created an Amazon SNS topic. I have one Amazon SQS queue subscribed to the topic. I have created a default SQS queue (not a FIFO queue). I am using the sqs-consumer API for long polling the SQS queue.

const app = Consumer.create({
  queueUrl: 'https://sqs.us-east-2.amazonaws.com/xxxxxxxxxxxx/xxxxxxxxxxx',
  handleMessage: async (message) => {
    console.log(message);
  },
  sqs: sqs //new AWS.SQS({apiVersion: '2012-11-05'})
});

app.on('error', (err) => {
  console.error(err.message);
});

app.on('processing_error', (err) => {
  console.error(err.message);
});

app.on('timeout_error', (err) => {
  console.error(err.message);
});

app.start();

When I run this JS file from a single terminal by doing node sqs_client.js, everything works perfectly fine and messages come in the proper order. But if I open another terminal window and run node sqs_client.js, then the order of incoming messages becomes very random. Newer messages may come in the first terminal window or the second terminal window in any order. Why is this happening? And is there any way to prevent it, so that I can get the same message in both terminal windows at the same time?
When I am trying to poll the same Amazon SQS queue from two different terminal tabs, I am not getting the same message in both of them
When using Lambda, your handler function receives three parameters: event, context and callback. You make use of callback when using synchronous functions. When using async you should return a promise.

const AWS = require('aws-sdk');

exports.handler = async (event, context, callback) => {
    const ec2 = new AWS.EC2({ region: event.instanceRegion });

    return ec2.stopInstances({ InstanceIds: [event.instanceId] }).promise()
        .then(() => `Successfully stopped ${event.instanceId}`)
        .catch(err => console.log(err));
};

In fact, when you use the async keyword you are actually returning a promise, but by returning nothing you are resolving it with null as the response, so your code will just terminate and stopInstances will not finish its work.
I am trying to start and stop an EC2 Windows instance using Lambda. I am using Node.js 8.10 to write the start and stop script. When I test the script, it executes successfully but the EC2 instance is not affected. I am giving the instance details and script below:

const AWS = require('aws-sdk');

exports.handler = async (event) => {
    const ec2 = new AWS.EC2({ region: event.instanceRegion });

    ec2.stopInstances({ InstanceIds: [event.instanceId] }).promise()
        .then(() => callback(null, `Successfully stopped ${event.instanceId}`))
        .catch(err => callback(err));
};

The script executed successfully. Below are the instance details.

This is the stop script, but it is not able to stop the instance. Please help me, I am new to AWS. Thanks in advance.
How to start and stop EC2 instance using Lambda
I work on the AWS AppSync team. AWS AppSync does not currently support custom timeout values for resolvers or a fallback mechanism when timeouts occur. As you mentioned, a lambda resolver is the best option right now if you require this functionality.I will pass this along as a feature request. It would be helpful to know what sort of things you are wanting to do as part of the cleanup process for HTTP and DynamoDB resolvers.
Given the scenario that I am using a simple VTL resolver for either HTTP or DynamoDB, is there a way to e.g. execute some cleanup in case of the data source timing out (e.g. the DynamoDB service not responding in, let's say, 2s)? I can't find any reference to AppSync and timeouts anywhere on the internet unfortunately, and I would like to be able to:

1. specify a timeout threshold for resolvers that is lower than the default AppSync timeout of 30 seconds
2. have a fallback mechanism in the case of the aforementioned timeout

I think that should be easy to do with a lambda resolver, but at the moment I am trying to avoid that because of cold starts.

Thank you
AWS AppSync resolver internal timeout configuration
Users do not require any permissions to login to the AWS Management Console. (However, they won't be able to see/do anything to the services themselves.)

Therefore, if you are unable to login to the console, you either have the wrong login information (Account, Username, Password) or the user does not have a Console Password enabled.

In the IAM management console, go to the User and look in the Security credentials tab to obtain the right console sign-in link and to verify that a password has been enabled.
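If you want to verify programmatically whether a console password (login profile) exists for the user, a small boto3 sketch (the user name is just an example):

```python
import boto3
from botocore.exceptions import ClientError

iam = boto3.client("iam")

try:
    # Succeeds only if the user has a console password (login profile)
    profile = iam.get_login_profile(UserName="Bob")
    print("Console password enabled, created:", profile["LoginProfile"]["CreateDate"])
except ClientError as err:
    if err.response["Error"]["Code"] == "NoSuchEntity":
        print("No console password set for this user")
    else:
        raise
```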
In a Chrome browser, I logged into an AWS account with a user name (Administrator) that is part of the Administrators group. I created an IAM user (Bob) with a custom managed policy (Demo1) as shown below. In Firefox, I tried to log in with user Bob; below is the error. Bob is part of no group. With or without the policy (Demo1) attached to user Bob, user Bob could not log in...

{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": "ec2:*", "Resource": "*" }, { "Sid": "VisualEditor1", "Effect": "Deny", "Action": "ec2:RunInstances", "Resource": "*", "Condition": { "ForAllValues:StringNotEquals": { "ec2:InstanceType": "t1.*,t2.*,m3.*" } } } ] }

Why can user Bob not log in?
Could not access AWS through IAM user
From Amazon RDS now supports Storage Auto Scaling:

RDS Storage Auto Scaling automatically scales storage capacity in response to growing database workloads, with zero downtime.

The message you highlight suggests that downtime would only be caused by other changes in the "pending modifications queue" (eg a requested change of instance type).
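For reference, enabling it on an existing instance amounts to setting a maximum storage threshold; a hedged boto3 sketch (identifier and size are placeholders):

```python
import boto3

rds = boto3.client("rds")

# Storage Auto Scaling is enabled by setting MaxAllocatedStorage above the
# instance's current allocated storage; this is an online change.
rds.modify_db_instance(
    DBInstanceIdentifier="my-postgres-instance",
    MaxAllocatedStorage=500,   # upper bound (GiB) the storage may grow to
    ApplyImmediately=True,
)
```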
I want to enable storage autoscaling for the first time on an AWS RDS PostgreSQL instance. Does anyone know, or have some documentation to clarify, whether this requires downtime? I can't find any articles or documentation explicitly about "enabling autoscaling for the first time". Thanks in advance.
Can enabling storage autoscaling in AWS RDS PostgreSQL produce downtime?
I see now:

- AttributeName returns AWS's attributes, like "ApproximateFirstReceiveTimestamp"
- MessageAttributeName returns message (user-specified) attributes
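A small boto3 sketch showing the two parameters side by side (the queue URL is a placeholder):

```python
import boto3

sqs = boto3.client("sqs")

response = sqs.receive_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111111111111/my-queue",
    # System attributes maintained by SQS itself
    AttributeNames=["ApproximateFirstReceiveTimestamp", "SentTimestamp"],
    # Custom attributes attached by the sender via MessageAttributes
    MessageAttributeNames=["All"],
    MaxNumberOfMessages=1,
)

for msg in response.get("Messages", []):
    print("SQS attributes:     ", msg.get("Attributes"))
    print("Message attributes: ", msg.get("MessageAttributes"))
```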
The SQS "ReceiveMessage" endpoint has two params that seem to do the same thing and I don't understand the API docs. Can someone explain the difference:https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ReceiveMessage.htmlAttributeName.N A list of attributes that need to be returned along with each messageMessageAttributeName.N The name of the message attribute, where N is the index. ... When using ReceiveMessage, you can send a list of attribute names to receive, or you can return all of the attributes by specifying AllIt seems that they both do the same thing, i.e. specifying which attributes should be returned on fetched messages. Is there any difference? If not, which is preferred?
What is the difference between "MessageAttributeName.N" and "AttributeName.N" in SQS ReceiveMessage
This changed today. ACM Private CA now supports creating roots and subordinates. https://forums.aws.amazon.com/ann.jspa?annID=6894
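For reference, a minimal boto3 sketch of creating a private root CA with ACM Private CA (subject fields are placeholders; after this call you still need to fetch the CSR, self-sign it and import the root certificate to activate the CA):

```python
import boto3

pca = boto3.client("acm-pca")

response = pca.create_certificate_authority(
    CertificateAuthorityConfiguration={
        "KeyAlgorithm": "RSA_2048",
        "SigningAlgorithm": "SHA256WITHRSA",
        "Subject": {
            "Country": "US",
            "Organization": "Example Corp",
            "CommonName": "example-root-ca",
        },
    },
    CertificateAuthorityType="ROOT",
)

print("Root CA ARN:", response["CertificateAuthorityArn"])
```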
In the below option from AWS Certificate Manager, I have an option to create a subordinate CA but not a root CA. The goal is to first create a root CA certificate and then create a subordinate CA that is signed (issued) by the root CA's private key. The documentation also talks about creating a subordinate CA but not a root CA. Does AWS Certificate Manager allow creating a private root CA? If yes, how do I create a private root CA with AWS Certificate Manager?
AWS Cert Mgr - How to create root CA certificate?
The service role policy should have the service-role path. For example the ARN should be in the format arn:aws:iam::{ACCOUNT_ID}:role/service-role/{role_name}

The trust relationship should be:

{ "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "Service": "cognito-idp.amazonaws.com" }, "Action": "sts:AssumeRole", "Condition": { "StringEquals": { "sts:ExternalId": "{External ID}" } } } ] }

And the inline policy of the role should be:

{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "sns:publish" ], "Resource": [ "*" ] } ] }
I am trying to create a Cognito user pool through a Lambda function, using Go. The IAM role, IAM policy and trust relationship policy are getting created successfully. But when I try to create the Cognito pool, I am getting an error:

InvalidSmsRoleTrustRelationshipException: Role does not have a trust relationship allowing Cognito to assume the role.

The trust relationship policy is

{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "cognito-idp.amazonaws.com" }, "Action": "sts:AssumeRole" } ] }

The create user pool API call is as below:

newUserPoolData := &cognitoidentityprovider.CreateUserPoolInput{
    PoolName:               aws.String(poolName),
    Policies:               &userPoolPolicyType,
    AutoVerifiedAttributes: autoVerifiedAttributes,
    UsernameAttributes:     userNameAttributes,
    SmsConfiguration:       &smsConfingType,
}

Am I missing something here?
AWS Cognito IAM : InvalidSmsRoleTrustRelationshipException: Role does not have a trust relationship allowing Cognito to assume the role
You probably aren't hitting the S3 request rate limit (https://docs.aws.amazon.com/AmazonS3/latest/dev/optimizing-performance.html), but it is worth trying to copy the same S3 file under another prefix.

One possible solution is to avoid querying S3 at all by putting the JSON file into the function code. Additionally, you may want to add it as a Lambda layer and load it from /opt in your Lambda: https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html In this case you can automate the function update when the S3 file is updated by adding another Lambda that is triggered by the S3 update and calls https://docs.aws.amazon.com/lambda/latest/dg/API_UpdateFunctionCode.html

As a long-term solution, check Fargate (https://aws.amazon.com/fargate/getting-started/), with which you can build low-latency container-based services and put the file into a container.
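One additional low-effort mitigation, directly addressing the "reuse the content in memory" part of the question: cache the parsed JSON outside the handler so that warm invocations skip the S3 download entirely. A sketch (bucket and key are placeholders):

```python
import json
import boto3

s3 = boto3.client("s3")

# Module-level cache: survives between invocations of the same warm container
_cached_data = None

def _load_data(bucket_name):
    global _cached_data
    if _cached_data is None:
        obj = s3.get_object(Bucket=bucket_name, Key="myfile.json")
        _cached_data = json.loads(obj["Body"].read())
    return _cached_data

def handler(event, context):
    data = _load_data("my-bucket")   # placeholder bucket name
    # ... use data ...
    return {"records": len(data)}
```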
I am reading a large json file from an S3 bucket. The lambda gets called a few hundred times in a second. When the concurrency is high, the lambdas start timing out.

Is there a more efficient way of writing the below code, where I do not have to download the file from S3 every time, or can I reuse the content in memory across different instances of lambda? :-)

The contents of the file change only once a week! I cannot split the file (due to the json structure) and it has to be read at once.

s3 = boto3.resource('s3')
s3_bucket_name = get_parameter('/mys3bucketkey/')
bucket = s3.Bucket(s3_bucket_name)

try:
    bucket.download_file('myfile.json', '/tmp/' + 'myfile.json')
except:
    print("File to be read is missing.")

with open(r'/tmp/' + 'myfile.json') as file:
    data = json.load(file)
Increase S3 read performance of Lambda code
For anyone who ventures down this path: if you get this error then it is 99.9% caused by a typo in the AWS credentials provided. It could be an extra \n, maybe just an empty space at the end of a string, or a missing character. But if you are smart like me then your typo could be elusive, like this:

I googled for "aws region codes" so I could get the proper code for the US East region. Google provided a table with a list of codes, so I copied and pasted it into my credentials and I kept getting this Invalid character issue.

It wasn't until I got a credentials file from a coworker that the error went away. So then I compared line by line and noticed absolutely no difference in characters... But I was wrong. I took the region code from my configuration, put it into a Unicode converter and discovered this:

us‑east‑1 translates to us\u{2011}east\u{2011}1

whereas us-east-1 translates to us-east-1

You can guess which version I had copied from Google. I've learned my lesson and I hope anyone venturing here doesn't make the mistake I made.
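If you suspect the same problem in your own configuration, a quick way to make the hidden code points visible:

```python
# A non-breaking hyphen (U+2011) looks identical to "-" but breaks the header
suspect = "us\u2011east\u20111"                # what a bad copy/paste can give you
print(suspect.encode("unicode_escape"))        # b'us\\u2011east\\u20111'
print("us-east-1".encode("unicode_escape"))    # b'us-east-1'
```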
node s3_listbuckets.js

Error TypeError [ERR_INVALID_CHAR]: Invalid character in header content ["Authorization"]
    at ClientRequest.setHeader (_http_outgoing.js:470:3)
    at new ClientRequest (_http_client.js:219:14)
    at Object.request (https.js:305:10)
    at features.constructor.handleRequest
    ... XXX ...
{ message: 'Invalid character in header content ["Authorization"]',
  code: 'NetworkingError',
  region: 'XXX',
  hostname: 's3.XXX.amazonaws.com',
  retryable: true,
  time: XXX }
Authorization Error Using AWS S3 Buckets
As it turns out, @Kumar Swaminathan's answer was mostly correct. The Video on Demand template from AWS does not include a MediaConvert template for portrait resolutions, and the steps leading up to conversion do not handle rotation at all. The right way to solve the problems appears to be to:

- Update the media-encode step to use the latest AWS SDK (by using layers), and pass the Rotate flag as AUTO through to MediaConvert when creating the conversion ("Rotate": "AUTO")
- Add MediaConvert profiles for portrait resolutions
- Enhance the media-profiler step to look for the rotate mediainfo property, and choose one of the new portrait profiles for encoding

Update: I implemented support for portrait videos and submitted a PR to AWS. https://github.com/awslabs/video-on-demand-on-aws/pull/29
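For reference, the relevant piece of the MediaConvert job settings is the Rotate field on the input's VideoSelector; a hedged fragment of what the boto3 create_job call might contain (role ARN, outputs and all other settings are omitted, and the paths are illustrative only):

```python
# Fragment of the Settings dict passed to mediaconvert.create_job()
settings_fragment = {
    "Inputs": [
        {
            "FileInput": "s3://my-source-bucket/video.mov",
            "VideoSelector": {
                # AUTO reads the rotation metadata written by phones
                # and rotates the whole frame accordingly
                "Rotate": "AUTO"
            },
        }
    ]
}
```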
I am using the VOD (video on demand) template in AWS for media conversion. It creates a Lambda function that in turn pushes a Job into AWS MediaConvert. Recently, AWS added support for a Rotate property, which when set to AUTO reads the metadata from the source file and applies the appropriate rotation to the video during conversion. It is rotating the video; however, it appears to shrink the video in the process. See below.

You can see that rather than the overall video being rotated, it rotates it to fit inside a wide aspect ratio container. The source file is a .mov from an iPhone.

Looking for help on how to get MediaConvert to rotate the full video rather than rotating it and then shrinking it to fit inside the original source video dimensions.
AWS MediaConvert Rotate Aspect Ratio Changed
Amazon S3 Object Lifecycle Management can transition storage classes and/or delete (expire) objects.

It can also work with Versioning, such that different rules can apply to the 'current' version and 'all previous' versions. For example, the current version could be kept accessible while previous versions could be transitioned to Glacier and eventually deleted.

However, it does not have the concept of a "monthly backup" or "weekly backup". Rather, rules are applied to all objects equally.

To achieve your monthly/weekly objective, you could:

- Store the first backup of each month in a particular directory (path)
- Store other backups in a different directory
- Apply Lifecycle rules differently to each directory

Or, you could use the same Lifecycle rules on all backups but write some code that deletes unwanted backups at various intervals (eg every day delete a week-old backup unless it is the first backup of the month). This code would be triggered as a daily Lambda function.
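A sketch of the per-prefix idea with boto3 (the bucket name, prefixes and retention periods are assumptions to adapt):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-gitlab-backups",
    LifecycleConfiguration={
        "Rules": [
            {
                # Daily backups: keep one week, then expire
                "ID": "daily-backups",
                "Filter": {"Prefix": "daily/"},
                "Status": "Enabled",
                "Expiration": {"Days": 7},
            },
            {
                # First-of-month backups: move to Glacier, keep one year
                "ID": "monthly-backups",
                "Filter": {"Prefix": "monthly/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            },
        ]
    },
)
```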
I have set up GitLab to save a daily backup to an Amazon S3 bucket. I want to keep monthly backups for one year in Glacier and daily backups for one week in standard storage. Is this cleanup strategy viable and doable using S3 lifecycle rules? If yes, how?
Automatically delete old backups from S3 and move monthly to glacier
Check the AWS server-side settings related to caching. It may be possible that you are getting a cached response from the server or network.
I use amazon-cognito-identity-js for Cognito pool data. Please look at my forgetPassword.js code:

const response = await AwsForgetPassword(this.state.email) .then(response => { console.log(response); }) .catch(error => { console.log(error); });

and below my AwsForgetPassword.js code:

const AmazonCognitoIdentity = require("amazon-cognito-identity-js"); global.navigator = () => null; export const AwsForgetPassword = email => { const poolData = { UserPoolId: "XX_XXXX-XXX", // Your user pool id here ClientId: "xxxxxxxxxxxxx" // Your client id here }; const userPool = new AmazonCognitoIdentity.CognitoUserPool(poolData); var userData = { Username: email, Pool: userPool }; //console.log(userData); var cognitoUser = new AmazonCognitoIdentity.CognitoUser(userData); return new Promise((resolve, reject) => { cognitoUser.forgotPassword({ onSuccess: function(data) { // successfully initiated reset password request // console.log(data); return resolve(data); }, onFailure: function(err) { // console.log(err); return reject(err); //] alert(err.message || JSON.stringify(err)); } }); }); };

I don't get any response in forgetPassword.js, yet I do get a verification code in the mail for a valid email. Something must be missing; please let me know, I have spent a lot of time on it.
Cognito forgot password API does not give any response for verification code sent (success) or failure
Yes that is correct, once a lambda function does not receive any traffic for a period of time (in my experience approximately 15 mins), the container is destroyed and the next request will result in a new container being started (known as a cold-start).One thing to note however, is that lambda containers can be shut down at any time between requests, so even if you have constant traffic to the lambda, you may still experience the occasional cold-start.Additionally, each lambda container will only process a single request at a time, so if you have one lambda container which is "warm" and two requests come in simultaneously, one request will be processed by the pre-warmed lambda, and the other will encounter a cold-start.
It is said that Lambda shuts down the container when there is no traffic, and when the first request comes after a long time there is a cold start problem. Is that right? E.g. if I am running a Dropwizard application on AWS Lambda, will the server get shut down if no traffic is coming, and will it start the server again on the first request? Is that correct? Or does it not shut down the server running in the container but do something else? Please explain.
Does Lambda shut down the entire container when the traffic is zero?
This is because the content type is missing, so the browser doesn't know that your file should be interpreted as HTML. Please add ContentType: 'text/html' in the parameters passed to s3.upload.

See also the explanations and links given in Upload Image into S3 bucket using Api Gateway, Lambda funnction
I am looking to allow public users to view HTML files located in an AWS S3 bucket in their browser. These HTML files are created and uploaded to my S3 bucket via node.js, and a URL linking to the file is generated.

I am using this method to upload the HTML files:

s3.upload({
  Bucket: bucket,
  Key: "HTMLFiles/file.HTML",
  Body: fileStream,
  ACL: 'public-read'
}, function (err, data) {
  if (err) {
    console.log("Error: ", err);
  }
  if (data) {
    console.log("Success: ", data.Location);
  }
}).on('httpUploadProgress', event => {
  console.log(`Uploaded ${event.loaded} out of ${event.total}`);
});

When the script is run, the generated URL looks something like this: https://bucket-name.s3.region.amazonaws.com/HTMLFiles/file.html (obviously this is only an example URL and not the actual URL).

When a user goes to this URL, instead of viewing the HTML file, the browser instead downloads the file. How can I specify that this file is meant to be loaded in the browser and viewed, not downloaded?
AWS S3 - Allow public to view HTML files
You might be thinking of the new 'Lambda Layers' feature:

You can configure your Lambda function to pull in additional code and content in the form of layers. A layer is a ZIP archive that contains libraries, a custom runtime, or other dependencies. With layers, you can use libraries in your function without needing to include them in your deployment package.

Layers let you keep your deployment package small, which makes development easier. You can avoid errors that can occur when you install and package dependencies with your function code. For Node.js, Python, and Ruby functions, you can develop your function code in the Lambda console as long as you keep your deployment package under 3 MB.

https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html
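If you want to try it out programmatically, a hedged boto3 sketch that publishes a zip of libraries as a layer and attaches it to an existing function (the names and zip path are placeholders):

```python
import boto3

lam = boto3.client("lambda")

# Publish a zip that contains the libraries under python/ (for Python runtimes)
with open("my-libs-layer.zip", "rb") as f:
    layer = lam.publish_layer_version(
        LayerName="shared-libs",
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.7"],
    )

# Attach the layer to an existing function; its contents appear under /opt
lam.update_function_configuration(
    FunctionName="my-function",
    Layers=[layer["LayerVersionArn"]],
)
```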
During my preparation for AWS certification I found the following question on various mock-exam resources (the description is slightly re-formulated in order not to violate legal rules):

We have a lambda function which uses some external libraries (which are not part of the standard Lambda libraries). How do we optimise the lambda compute time consumed?

In all these resources the answer marked as right is like this: Install the external libraries in Lambda to be available to all Lambda functions.

I am finding it "a bit" confusing. I always thought that the only proper way to use external libraries is to include them in the deployment package. Or am I missing some new feature? Please, enlighten me.
AWS lambda and external libraries
You should remove the website configuration from your S3 bucket and use an Origin Access Identity. The rest of your setup is fine.

You don't need to configure your S3 bucket as a website endpoint because you are not going to serve your content directly via S3. With the Origin Access Identity, your bucket will be available only from CloudFront (unless you add something else in the bucket policy) and this is what you want.

See also https://medium.com/@sanamsoodan/host-a-website-using-aws-cloudfront-origin-access-identity-s3-without-static-website-hosting-43995ae2a9bd
I would like to deploy a website to an S3 bucket and use static website hosting. However, I have some strict security restrictions:

- HTTPS must be used
- Access to the website must be restricted to a specific IP range

Here is my plan:

- Use AWS WAF to restrict the IPs that can access the website
- Use CloudFront to leverage HTTPS
- Format the CloudFront distribution to forward a custom header that will act like an access key to S3
- Restrict the S3 bucket security policy to only allow traffic that includes the custom header from CloudFront mentioned above.

Here is a diagram:

I have one big concern though: Is forwarding a custom header between CloudFront really the best way to do this? AWS docs say that an Origin Access Identity can't be used for S3 buckets that act as a website (https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html). The custom header seems less secure than an Origin Access Identity and much harder to maintain. It makes me uncomfortable using a random string as the only security preventing someone from bypassing my CloudFront distribution and directly accessing the S3 bucket. If a malicious party guesses the

My other option here is to just bite the bullet and move to servers where I have more control over a lot of the security, but I would like to leverage the convenience of an S3 website.
S3 static website with CloudFront restricted to specific IP range using custom header
I ran into this issue as well. I ran aws sts get-caller-identity and noticed that the ARN did not match what I was expecting. It turns out that if you have AWS configurations set in your bash_profile or bashrc, the awscli will default to using these instead.

I changed the environment variables in bash_profile and bashrc to the proper keys and everything started working.
I launched an EC2 instance and created a role with a full S3 access policy for the instance. I installed awscli on it and configured my user's access key. My user has admin access and the full S3 access policy too. I can see the buckets in the AWS console, but when I try to run aws s3 ls on the instance it returns:

An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied.

What else do I need to do to add permission to the role or my user properly, to be able to list and sync objects between S3 and the instance?
Access denied when trying to do AWS s3 ls using AWS cli
Spring Boot and AWS Lambda don't naturally go together IMO.

Lambda is pure code; it does not present itself as an HTTP server, it is just triggered by one of the other AWS services (API Gateway, CloudWatch, S3, DynamoDB, Kinesis, SDK, etc.). The handler receives a JSON request from the calling service, and processes the request. Here is an example.

API Gateway does much of what Spring Boot provides for you. API Gateway is always online waiting for HTTP requests to come in, for which you only pay for incoming requests; you do not pay for idle (the definition of serverless IMO). Once a request comes in, API Gateway wraps the request payload with some additional environmental data and sends it to your Lambda handler, which processes the request and returns some response to the client.

Saying that, if you don't want to restructure your service, there are a couple of options open to you:

- Wrap it into a Docker image and use an AWS container service, either ECS or ElasticBeanstalk; neither of these are considered to be serverless.
- I have not tried this, but according to AWS: You can use the aws-serverless-java-container library to run a Spring Boot application in AWS Lambda. You can use the library within your Lambda handler to load your Spring Boot application and proxy events to it.

See links to Convert your SpringBoot project and Deploy it to AWS Lambda.

Hope this helps.
I have developed a simple microservice, REST based, using Java 8 and Spring Boot 2.0. It has its own REST endpoints which I can call using Postman and I get the response very well. Now I have doubts about the design and architecture if I want to deploy the same application to the AWS cloud. I want my application to behave as serverless, so I want to deploy it on AWS using its Lambda service.

Please assist in clearing my following doubts:

1) First, can I upload my whole application code to AWS Lambda in order to make it serverless?

2) If yes, then do I need to use AWS API Gateway (compulsorily) to invoke my Lambda function when the request passes through it?

3) If yes (point 2), then will the endpoints in my original microservice code become ineffective and be overridden by new API Gateway endpoints?

My whole doubt is about endpoints: which endpoint will be used to invoke the Lambda functions?

Please assist in clarifying my doubt. If there is any sample reference material then it will be really great. Cheers
How to deploy a Spring Boot microservice application (RESTful) as serverless to AWS Lambda?
The issue is that generate_series() can be run on the Leader node, but not on a compute node.

Therefore, it is possible to run a statement like this:

SELECT '1970-01-01'::date + generate_series(0, 20000)

However, it is not possible to use that statement in a FROM because that would involve the compute nodes.

Solution: Create a table of information externally and load the results into a date table, or use generate_series() directly to generate the desired values, save the results and import them into a date table.
I am trying to generate a time series in Redshift and insert it into a table, but no luck. What I have tried so far:

insert into date(dateid,date)
SELECT
  to_char(datum, 'YYYYMMDD')::int AS dateid,
  datum::date AS date
FROM (
  select '1970-01-01'::date + generate_series(0, 20000) as datum
) tbl;

Getting the following error:

SQL Error [500310] [0A000]: [Amazon](500310) Invalid operation: Specified types or functions (one per INFO message) not supported on Redshift tables.;

Any ideas or workaround?
Redshift - how to insert a generated time series into a table
The way to do that is by using CTAS query statements.

A CREATE TABLE AS SELECT (CTAS) query creates a new table in Athena from the results of a SELECT statement from another query. Athena stores data files created by the CTAS statement in a specified location in Amazon S3.

For example:

CREATE TABLE new_table
WITH (
    external_location = 's3://my_athena_results/new_table_files/'
) AS (
    -- Here goes your normal query
    SELECT * FROM old_table;
)

There are some limitations though. However, for your case the most important are:

- The destination location for storing CTAS query results in Amazon S3 must be empty.
- The same applies to the name of the new table, i.e. it shouldn't exist in the AWS Glue Data Catalog.

In general, you don't have explicit control of how many files will be created as a result of a CTAS query, since Athena is a distributed system. However, you can try this workaround, which uses the bucketed_by and bucket_count fields within the WITH clause:

CREATE TABLE new_table
WITH (
    external_location = 's3://my_athena_results/new_table_files/',
    bucketed_by=ARRAY['some_column_from_select'],
    bucket_count=1
) AS (
    -- Here goes your normal query
    SELECT * FROM old_table;
)

Apart from creating new files and defining a table associated with them, you can also convert your data to a different file format, e.g. Parquet, JSON etc.
I have a table in AWS Glue which uses an S3 bucket for its data location. I want to execute an Athena query on that existing table and use the query results to create a new Glue table.

I have tried creating a new Glue table, pointing it to a new location in S3, and piping the Athena query results to that S3 location. This almost accomplishes what I want, but:

- a .csv.metadata file is put in this location along with the actual .csv output (which is read by the Glue table, as it reads all files in the specified S3 location).
- the csv file places double quotes around each field, which ruins any fieldSchema defined in the Glue table that uses numbers.

These services are all designed to work together, so there must be a proper way to accomplish this. Any advice would be much appreciated :)
Duplicate Table in AWS Glue using AWS Athena
You probably already checked the AWS page about "Task Networking in AWS Fargate". The key to being able to reach the internet is a NAT, so if it's not working you should start from that in checking for errors. You can see how important it is from the following description taken from the page I linked:

In this configuration, the tasks can still communicate to other servers on the internet via the NAT gateway. They would appear to have the IP address of the NAT gateway to the recipient of the communication. If you run a Fargate task in a private subnet, you must add this NAT gateway. Otherwise, Fargate can't make a network request to Amazon ECR to download the container image, or communicate with Amazon CloudWatch to store container metrics.

If the NAT for some reason is not working, another approach could be to ENABLE Auto Assign Public IP, but define a security group that blocks any attempt to connect to your tasks in the private VPC. In this way the task will be able to reach the DNS server required to resolve commondatastorage.googleapis.com
My VPC consists of 2 public and 2 private subnets, the private subnets having a NAT gateway to access the internet. My Docker instance is running in a private subnet; it receives external URLs (http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4) as input, downloads the content/file and stores the files in S3.

I have an application load balancer set up in a public subnet which connects to the Fargate instance. When I try to run it, the logs say NAME cannot be resolved commondatastorage.googleapis.com. I understand that the Docker container does not have an internet connection.

What am I doing wrong here and what needs to be corrected?

PS: While creating the Fargate service I DISABLED Auto Assign Public IP, as the instance should be in a private subnet.
Docker instance running on private subnet AWS Fargate
One method would be to use AWS Data Pipeline to export the DynamoDB data to S3, and then import the data from S3 into the relational database of your choice.

More info here: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBPipeline.html
I'm quite new to AWS. I have a running database on DynamoDB and now want to migrate to Amazon RDS (Aurora). I don't have any clue how to do that. Any kind of help or assessment is welcome. I also need to consider the downtime and transformation tools for NoSQL to relational DB.
How to migrate DynamoDb to RDS (Aurora)
Check the AWS CLI version:

aws --version

It looks like the AWS CLI needs an update. To upgrade an existing AWS CLI installation, use the --upgrade option:

pip install --upgrade awscli

If you have pip3 then:

pip3 install --upgrade awscli

or

sudo pip3 install --upgrade awscli

Also remember that aws sts assume-role --role-arn returns credentials with an expiring token, so you need to run this command again to get AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN to continue. What I did was prepare a temp profile for these credentials and use this profile in the next aws quicksight commands, e.g.

aws configure set AWS_ACCESS_KEY_ID XXXXXXX --profile tempprofile
aws configure set AWS_SECRET_ACCESS_KEY XXXXXXXX --profile tempprofile
aws configure set AWS_SESSION_TOKEN XXXXXXX --profile tempprofile

In my case I also set

aws configure set REGION ap-southeast-2 --profile tempprofile

and then in the next aws quicksight commands used --profile tempprofile
I am trying to embed a QuickSight dashboard and am following the current steps: https://aws.amazon.com/blogs/big-data/embed-interactive-dashboards-in-your-application-with-amazon-quicksight/

I'm at step 3 and able to assume the role and

export AWS_ACCESS_KEY_ID="access_key_from_assume_role"
export AWS_SECRET_ACCESS_KEY="secret_key_from_assume_role"
export AWS_SESSION_TOKEN="session_token_from_assume_role"

However when I try to do the next step of calling "aws quicksight ..." I'm getting the following error:

aws: error: argument command: Invalid choice, valid choices are:

I've installed pip and made sure the command line text matches with the correct details. Has anyone experienced this, or has any idea why the aws quicksight command wouldn't be working in the CLI?
AWS QuickSight Embedding CLI error - aws: error: argument command: Invalid choice, valid choices are:
A 504 Gateway Timeout means the client trying to access the server doesn't get a response in a certain amount of time. According to the AWS documentation:

Description: Indicates that the load balancer closed a connection because a request did not complete within the idle timeout period.

Which means, the 504 response you get in your browser (or other client) when trying to access your Django app is generated by the Elastic Load Balancer that's in front of your actual server after closing the connection. Since your ELB is an external networking tool and has no actual control over your server, it cannot control your code and which processes are running or not. Meaning, the process will keep running until it has to return an HTTP response and it fails because of the closed connection.
I have a Django server running in Elastic Beanstalk and I am not sure if the process continues to run in the server or the process gets killed. Does anyone have any insight on this? There is no application logic to stop the request in case of a disconnection. Would Elastic Beanstalk kill off the process along with the client connection or will the process continue to run regardless of the timeout?
What happens during AWS Elastic Beanstalk 504 Gateway Timeout
This is a perfect example for a Step Function; you can have it scheduled by a CloudWatch Event instead of the Lambda. The Step Function can call your Lambda and handle the retry logic on failure, with configurable exponential back-off if needed.

Here is an example of a Step Function:

{
  "Comment": "Call lambda with retry",
  "StartAt": "Scraper",
  "States": {
    "Scraper": {
      "Type": "Task",
      "Resource": "<LAMBDA_ARN>",
      "Retry": [
        {
          "ErrorEquals": [
            "States.ALL"
          ],
          "IntervalSeconds": 20,
          "MaxAttempts": 5,
          "BackoffRate": 2
        }
      ],
      "End": true
    }
  }
}
I have a serious question and I need your help. I cannot find any solution on the Internet after spending a lot of time.

I made a bot to get data, which is a really heavy task, because I need to set up a scraper which then extracts data from a webpage through many steps (login, logout, click, submit button, ...) and, after getting the result, posts it to an API to make a report.

I use a CloudWatch event to make my lambda function run at a certain time every day.

The problem is that although I set my lambda function to its max settings (3GB RAM and 15 minutes timeout; the metrics are from Jan 2019), sometimes my lambda function fails while executing (maybe the scrape tasks take too many steps, or maybe the webpage I try to scrape is not stable). It fails rarely, only about 5% of the time I think. But I want to know if there is any approach to deal with this situation: I want my lambda function to be automatically retried when it fails, without doing it manually.
Best solution to retry an AWS Lambda function when it times out
If you want to point a custom domain name to the site you are hosting in S3, then the bucket name must match the domain name, so you can't have multiple sites with multiple domains in the same bucket. Also, the static site hosting settings are at the bucket level, not the "folder" level.

NOTE: There is actually no such thing as a folder in S3, just key prefixes.
I am trying to host static websites on S3. I know how to do it using S3 buckets (Link).

Question: Can I do it using an S3 folder instead of an S3 bucket, so that I don't have to create a new bucket every time I want to host a static site, and can simply host it by creating a new folder in the same bucket?
Hosting a Static website in S3 Bucket Folder
Try altering your NewDownloader() to this. See https://docs.aws.amazon.com/sdk-for-go/api/service/s3/s3manager/#NewDownloader

// Create a downloader with the session and custom options
downloader := s3manager.NewDownloader(sess, func(d *s3manager.Downloader) {
    d.PartSize = 64 * 1024 * 1024 // 64MB per part
    d.Concurrency = 4
})

The list of options that can be set with d. in the func can be found here: https://docs.aws.amazon.com/sdk-for-go/api/service/s3/s3manager/#Downloader
I am writing a function to download a large file (9GB) from an AWS S3 bucket using the aws-sdk for Go. I need to optimize this and download the file quickly.

func DownloadFromS3Bucket(bucket, item, path string) {
    os.Setenv("AWS_ACCESS_KEY_ID", constants.AWS_ACCESS_KEY_ID)
    os.Setenv("AWS_SECRET_ACCESS_KEY", constants.AWS_SECRET_ACCESS_KEY)

    file, err := os.Create(filepath.Join(path, item))
    if err != nil {
        fmt.Printf("Error in downloading from file: %v \n", err)
        os.Exit(1)
    }
    defer file.Close()

    sess, _ := session.NewSession(&aws.Config{
        Region: aws.String(constants.AWS_REGION)},
    )
    downloader := s3manager.NewDownloader(sess)

    numBytes, err := downloader.Download(file, &s3.GetObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(item),
    })
    if err != nil {
        fmt.Printf("Error in downloading from file: %v \n", err)
        os.Exit(1)
    }

    fmt.Println("Download completed", file.Name(), numBytes, "bytes")
}

Can someone suggest a solution to extend this function?
AWS S3 parallel download using golang
You can use either the aws_iam_group_policy_attachment resource or the aws_iam_policy_attachment resource to attach a policy to a group.

As mentioned in the aws_iam_policy_attachment resource docs, this resource creates an exclusive attachment of that policy to the specified users, groups and roles and isn't normally what you want, so I'd recommend the aws_iam_group_policy_attachment resource.

This might look something like this:

resource "aws_iam_group" "aws_config_group" {
  name = "AWSConfigGroup"
  path = "/"
}

resource "aws_iam_group_policy_attachment" "aws_config_attach" {
  group      = "${aws_iam_group.aws_config_group.name}"
  policy_arn = "arn:aws:iam::aws:policy/service_role/AWSConfigRole"
}

Note that you don't actually need the aws_iam_policy data source here, as you are already building the ARN to pass into the data source and that's all that's needed by the aws_iam_group_policy_attachment resource.
I've got this so far:

data "aws_iam_policy" "config_role" {
  arn = "arn:aws:iam::aws:policy/service_role/AWSConfigRole"
}

But I'm not sure how to attach this to a group.
How do you add a managed policy to a group in terraform?
Not all given message template placeholders work for all custom message workflows. For instance, for email verification I couldn't make any placeholder work except the verification code {####}. This is not mentioned in the AWS documentation, but that's my experience.

I managed to achieve it using Lambda custom message triggers. They are much easier to implement and provide a lot more customisation options.
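As a rough illustration, a Custom Message trigger in Python might look like the sketch below; the domain and link format are assumptions, and the codeParameter placeholder is substituted by Cognito when the message is sent:

```python
# Lambda handler for the Cognito "Custom message" trigger.
# event['request']['codeParameter'] is a placeholder that Cognito replaces
# with the real verification code before sending the message.
def handler(event, context):
    if event["triggerSource"] == "CustomMessage_SignUp":
        username = event["userName"]
        code = event["request"]["codeParameter"]
        # Hypothetical verification URL on your own site
        link = f"https://www.example.com/verify/{username}/{code}"
        event["response"]["emailSubject"] = "Verify your account"
        event["response"]["emailMessage"] = (
            f'<a href="{link}">Click on the link</a>'
        )
    return event
```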
I'm trying to configure AWS Cognito to send a verification email containing a custom one-click link. Following this guide I added this link to my template:

<a href="https://www.example.com/verify/{username}/{####}">Click on the link</a>

Since {username} is a valid template token, I expected it to be changed into the actual username when a verification email is sent, but it's not. I also tried a couple of advanced tokens, like {ip-address} and {country}, without success. What am I missing here?

Edit: I'm trying this on eu-central-1, the verification type is code, and here's a screenshot from the AWS Console:
Message Template placeholders on AWS Cognito
My guess is that the version of boto3 that the AWS Lambda console uses has not been updated/refreshed yet to support Layers.

That's completely right. AWS usually updates the available libraries on AWS Lambda regularly, but hasn't updated them for several months now for unknown reasons.

The supported API endpoints are actually not defined in boto3, but in botocore. Currently botocore 1.10.74 is available on AWS Lambda, while support for AWS Lambda Layers got added in botocore 1.12.56.

To avoid such incompatibilities between your code and the versions of the available libraries, you should create a deployment package containing boto3 and botocore in addition to your AWS Lambda function code, so your code uses your bundled versions instead of the ones AWS provides. That's what AWS suggests as part of their best practices as well:

Control the dependencies in your function's deployment package.

The AWS Lambda execution environment contains a number of libraries such as the AWS SDK for the Node.js and Python runtimes (a full list can be found here: Lambda Execution Environment and Available Libraries). To enable the latest set of features and security updates, Lambda will periodically update these libraries. These updates may introduce subtle changes to the behavior of your Lambda function. To have full control of the dependencies your function uses, we recommend packaging all your dependencies with your deployment package.
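A quick way to confirm which SDK versions your Lambda runtime actually provides is to log them from inside a function; if the reported botocore is older than 1.12.56, the Layers parameter will not be recognised:

```python
import boto3
import botocore

def handler(event, context):
    # Versions of the SDK bundled in the Lambda runtime (unless you
    # packaged your own copies with the deployment package)
    return {"boto3": boto3.__version__, "botocore": botocore.__version__}
```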
If I try to use boto3 Lambda create_function() to create a Lambda function, and I try to include Layers via the Layers=['string'] parameter, I get the following error message:

Unknown parameter in input: "Layers", must be one of: FunctionName, Runtime, Role, Handler, Code, Description, Timeout, MemorySize, Publish, VpcConfig, DeadLetterConfig, Environment, KMSKeyArn, TracingConfig, Tags

... any ideas? The documentation suggests that this should work, but something is clearly off here. NOTE: I also have a similar problem with "Layers" in update_function_configuration() as well.

My guess is that the version of boto3 that the AWS Lambda console uses has not been updated/refreshed yet to support Layers. Because when I run the same code locally on a machine with a fairly recent version of boto3, it runs without any problems. I have already tried using both listed Python runtimes of 3.6 and 3.7 in the AWS console, but neither worked. These runtimes have respective boto3 versions of 1.7.74 and 1.9.42. But my local machine has 1.9.59. So perhaps the addition of Lambda Layers occurred between 1.9.42 and 1.9.59.
Problem creating Lambda function that has a Layer using boto3
No, the maximum length of a VARCHAR data type is 65535 bytes and that is the longest data type that Redshift is capable of storing. Note that length is in bytes, not characters, so the actual number of characters stored depends on their byte length.

If the data is already in parquet format then possibly you don't need to load this data into a Redshift table at all; instead you could create a Spectrum external table over it. The external table definition will only support a VARCHAR definition of 65535, the same as a normal table, and any query against the column will silently truncate additional characters beyond that length - however the original data will be preserved in the parquet file and potentially accessible by other means if needed.
I have a text field in a parquet file with max length 141598. I am loading the parquet file into Redshift and got an error while loading, as the max a varchar can store is 65535. Is there any other datatype I can use, or another alternative to follow?

Error while loading:

S3 Query Exception (Fetch). Task failed due to an internal error. The length of the data column friends is longer than the length defined in the table. Table: 65535, Data: 141598
AWS Redshift: How to store text field with size greater than 100K
The Lambda backend polls SQS on your behalf and invokes a Lambda function if a message is returned. If the invocation succeeds the message will be deleted; if however the function fails, the message will be returned to the queue (or DLQ depending on your redrive policy) after the visibility timeout has expired. Check this blogpost.

Check if you can see any error metrics for the function in CloudWatch. Your Lambda function might be failing before it gets a chance to run any code. When this happens there's an error metric but no invocation metric/logs, and it's most likely due to an incorrect permission.
Currently I'm using the SQS - Lambda integration.

- Concurrency for the Lambda is available.
- The SQS batch is set to 1 record, 0 delay.
- Visibility timeout for SQS is 15 minutes, Lambda max execution time is 15 minutes.

I have noticed that sometimes SQS messages are stuck in flight without being processed by any Lambda at all (they fall into the dead letter queue after 15 minutes; CloudWatch shows no Lambda being invoked with the message).

Has anyone faced the same issue? I run the Lambda inside a VPC, if that matters.
SQS Lambda Integration - Lambda does not process the queue message
To request a specific file part you can either do it yourself or use one of the AWS managed services S3 Select or Athena. The difference between the two is simple: S3 Select works over one file, while Athena can execute a request over a whole bucket.

Depending on your situation you may use one or the other; you will have to think about the performance needed and the admissible costs. In any case you cannot just plug API Gateway directly into one of these services, you need a middleware processing the requests.

Still, I need to mention that it is possible to use S3 Select or Athena directly, by-passing API Gateway. If you do so you will have to be really careful with the rights related to the access keys used. You can create in IAM a specific access (very narrow) to S3 and then use an SDK to directly process your queries from the client side. You have more security issues to handle, but you avoid the use of both API Gateway and Lambda.
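For reference, this is roughly what an S3 Select call looks like from that middleware (e.g. a Lambda) with boto3; the bucket, key and query are placeholders:

```python
import boto3

s3 = boto3.client("s3")

resp = s3.select_object_content(
    Bucket="my-data-bucket",
    Key="data/prices.csv",
    ExpressionType="SQL",
    # Filter rows and project two columns straight on the object
    Expression="SELECT s.symbol, s.price FROM S3Object s "
               "WHERE CAST(s.price AS FLOAT) > 100",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
    OutputSerialization={"CSV": {}},
)

# The response payload is an event stream; collect the Records chunks
rows = b"".join(
    event["Records"]["Payload"] for event in resp["Payload"] if "Records" in event
)
print(rows.decode("utf-8"))
```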
My aim is to use S3 in AWS to store csv files and API Gateway to query those objects and ideally select rows and columns from within the csv files and return them in my web app.

In AWS, there is a method for selecting content from S3 objects. It acts as a filter on a csv file, for example to only return certain columns. It can be written in SQL, see here: https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectSELECTContent.html

There is also a way to use API Gateway as a proxy for S3 to create an API into the bucket, see here: https://docs.aws.amazon.com/apigateway/latest/developerguide/integrating-api-with-aws-services-s3.html

Can these methods be combined so that I can map API Gateway requests directly to a SQL SELECT content from S3 Object query, or do I need to use a Lambda function in the middle or some other technique?
Can SELECT content from S3 Object be used with API Gateway?
You can use a combination of count & element like so:

variable "s3_bucket_name" {
  type    = "list"
  default = ["prod_bucket", "stage-bucket", "qa_bucket"]
}

resource "aws_s3_bucket" "henrys_bucket" {
  count         = "${length(var.s3_bucket_name)}"
  bucket        = "${element(var.s3_bucket_name, count.index)}"
  acl           = "private"
  force_destroy = "true"
}

Edit: as suggested by @ydaetskcoR you can use the list[index] pattern rather than element.

variable "s3_bucket_name" {
  type    = "list"
  default = ["prod_bucket", "stage-bucket", "qa_bucket"]
}

resource "aws_s3_bucket" "henrys_bucket" {
  count         = "${length(var.s3_bucket_name)}"
  bucket        = "${var.s3_bucket_name[count.index]}"
  acl           = "private"
  force_destroy = "true"
}
Creating a bucket is pretty simple.

resource "aws_s3_bucket" "henrys_bucket" {
  bucket        = "${var.s3_bucket_name}"
  acl           = "private"
  force_destroy = "true"
}

Initially I thought I could create a list for the s3_bucket_name variable but I get an error: Error: bucket must be a single value, not a list

variable "s3_bucket_name" {
  type    = "list"
  default = ["prod_bucket", "stage-bucket", "qa_bucket"]
}

How can I create multiple buckets without duplicating code?
Terraform - creating multiple buckets
Yes, Google Cloud Platform provides Deployment Manager to write and provision your infrastructure as code.

If you want, you can go through this blog on how to get started with Deployment Manager; it's as simple as CloudFormation and you can code everything in YAML: https://medium.com/google-cloud/2018-google-deployment-manager-5ebb8759a122
As a big fan of AWS, I think CloudFormation (CFN) is such a good tool for IaC. So I'm interested in whether GCP has a similar tool. Thanks.
Does GCP have an IaC tool just like AWS CloudFormation?
One option would be to launch multiple EC2 instances from the same AMI in a single RunInstances request and have each EC2 instance read the same JSON file from S3.

Each instance would then query its own ami-launch-index from its metadata service. That ami-launch-index is going to be unique on each EC2 instance related to a given RunInstances request, and will be numbered from 0 to N-1 (where N is the number of instances that you launched).

Each EC2 instance could then process a subset of the list of jobs in the JSON file, based upon its local ami-launch-index (let's call that K), for example the jobs at index K, K + N, K + 2N, ...

Another option would be to write a script that parses the JSON file upfront, decides which jobs each of the N EC2 instances should process, and then passes that subset of the list into each EC2 instance in userdata, e.g. writing it to a json file on the instance. The application running on the instance would read that local file and process the relevant jobs.
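A sketch of the first approach as it might run on each instance; the metadata path is the standard one, while the job file location, the instance count and run_job() are assumptions:

```python
import json
import urllib.request

# Which instance am I within the RunInstances request? (0 .. N-1)
launch_index = int(urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/ami-launch-index"
).read())

N = 500  # total number of instances launched in the same request

with open("jobs.json") as f:      # same JSON file downloaded from S3 on every instance
    jobs = json.load(f)

# Process every N-th job starting at my launch index: K, K+N, K+2N, ...
for job in jobs[launch_index::N]:
    run_job(job)                  # hypothetical function that executes one job
```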
I have a json file with tens of thousands of individual job details. These jobs can be executed by a single script and finish relatively quickly.

I calculate that 500 instances in AWS will finish the job in under 1 hour and keep my costs affordable.

How can I get each instance to run a different chunk of the data?
How to split data file over hundreds of AWS instances?
If you read the table carefully you will notice that the last column has the header "Can Be Increased" and the value "Yes" for "Maximum number of API keys per account per region".

Just contact support once you are getting close to your limit and ask for an increase. It may take up to 2-3 work days, but otherwise it should be only a matter of asking.
I'm running a business API on AWS, through API Gateway and Lambda. Currently, I handle rate limiting with the built-in usage plans and API keys. Each account tier (think basic, medium, premium) is associated with a usage plan, to which each customer's API key is linked.

I just found out that there is a hard (but increasable) limit of 500 API keys that a single AWS account can have per region (https://docs.aws.amazon.com/fr_fr/apigateway/latest/developerguide/limits.html).

Is it sustainable to rely on API keys to rate limit each customer? We will get to the 500 limit eventually. Are there other solutions we could use?

Thanks a lot
How to rate limit per user in API Gateway?
You could fall back to a default value for the optional fields from the payload with #if-else.

#set($req = $input.path('$'))
#if($req.optional_field != "")
  #set( $my_default_value = $input.path('$.optional_field'))
#else
  #set ($my_default_value = "no_data")
#end
{
    "TableName": "A_Table",
    "Item": {
        "id": { "S": "$context.requestId" },
        "optional_field": { "S": "$my_default_value" }
    }
}
I've been working with mapping templates on AWS API Gateway, in particular for DynamoDB integration, and I found it very inconvenient to check for optional fields. For example I have a JSON payload like this:

{ "optional_field": "abcd" }

Now, to put it into the database I use a mapping like this:

#set($hasOptionalField = $input.path('$.optional_field') != "")
{
  "TableName": "A_Table",
  "Item": {
    "id": {"S": "$context.requestId"}
    #if($hasOptionalField), "optional_field": {"S": "$input.path('$.optional_field')"} #end
  }
}

According to the Apache Velocity Reference I should be able to use a much simpler syntax to check for null, empty, false or zero and fall back automatically to some alternative value, something beautiful like this:

{
  "TableName": "A_Table",
  "Item": {
    "id": {"S": "$context.requestId"},
    "optional_field": {"S": "${input.path('$.optional_field')|'no_data'}"}
  }
}

I could just leave it as is without any fallback, but the DynamoDB API gives you an error if you try to put an empty string as an attribute value. It seems like API Gateway mapping templates do not 100% implement the Apache Velocity specification?
API Gateway Mapping Template optional field
This issue has been fixed in Fargate Platform Version 1.3.0.

In 1.3.0, along with Secrets support, AWS fixed the issue of pulling images from a private registry which runs on HTTPS ports other than 443.

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/platform_versions.html

https://aws.amazon.com/about-aws/whats-new/2018/12/aws-fargate-platform-version-1-3-adds-secrets-support/
I am having an issue pulling private images from Artifactory to AWS Fargate. It is showing an "access violation" error. Is anybody getting the same error while running a task in AWS Fargate?

Status reason: CannotPullContainerError: API error (500): Get https://xxx.artifactory.xx:xxx/v2/: Access violation
Not able to pull Private Images from Artifactory to AWS Fargate
The answer is yes, but it's probably a bit of a premature optimisation.

Lambda has two parts to its performance:

1. Transfer, build and init the container for each concurrent execution
2. Run the code for each execution

.pyc files offer you some optimisation of 1, or the "cold start" time. This is because you can ship only the pyc files, and they tend to be smaller (reducing transfer time), and because you have already compiled to byte code, which takes away a step of the build process (note that Python is still compiled further, but it's an optimisation nonetheless).

Frankly, I'd be surprised if this made enough of a difference to justify the added complexity at deployment and the resulting opaqueness of the code in the Lambda console. And so I would challenge you to profile using something like X-Ray before you commit to this optimisation over anything in your actual code.

(n.b. MapBox have a good article about reducing size and discussing the effect of .pyc deployments: https://blog.mapbox.com/aws-lambda-python-magic-e0f6a407ffc6)
In Bash I can do:

python3 -OO -m py_compile myscript.py

and build a deployment zip with __pycache__ inside; for multiple scripts I can run:

python3 -OO -m compileall .

executing this on the same underlying AMI image. Is it wise for AWS Lambda performance improvement?
Is it a good practice to *compile* Python3 for Lambda?
You could use lerna. Lerna will also help you in case you have dependencies between your packages.

Basically you just have to add a lerna.json in your root directory and install your dependencies using lerna.

lerna.json:

{
  "lerna": "2.11.0",
  "packages": [
    "lambda-functions/*"
  ],
  "version": "0.0.0"
}

I assume you are using AWS CodeBuild, so here are some examples on how you could configure your install phase:

buildspec.yml with lerna:

version: 0.2
phases:
  install:
    commands:
      - echo Entered the install phase...
      - npm install --global lerna
      - lerna bootstrap --concurrency=1 -- --production
  ...

lerna bootstrap will create node_modules for every single package.

If you don't want to use lerna, you could add one command for each package. Something like:

buildspec.yml with yarn:

version: 0.2
phases:
  install:
    commands:
      - echo Entered the install phase...
      - npm install --global yarn
      - yarn --cwd lambda-functions/function-1 --production install
      - yarn --cwd lambda-functions/function-2 --production install
      - yarn --cwd lambda-functions/function-3 --production install
  ...

or:

buildspec.yml with npm:

version: 0.2
phases:
  install:
    commands:
      - echo Entered the install phase...
      - cd lambda-functions/function-1 && npm install --production
      - cd lambda-functions/function-2 && npm install --production
      - cd lambda-functions/function-3 && npm install --production
  ...
I am using AWS CloudFormation for my backend with the following project file structure:

template.yaml
lambda-functions
  function-1
    function.js
    package.json
  function-2
    function.js
    package.json

In the AWS buildspec I do aws cloudformation package followed by aws cloudformation deploy. If I want it to work, I need to do npm install in both the function-1 and function-2 subfolders and commit the node_modules subfolders to the git repo.

How can I run npm install on all my subfolders directly from the buildspec so I don't have to commit node_modules subfolders?
How to npm install all functions directories with AWS CodeBuild
This can be done via YAML's literal block scalar, as follows:

container_commands:
  07_run_command: |
    mkdir -p /var/cache/tomcat8/temp/.m2/repository
    chmod 777

More documentation on the same can be found here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#linux-commands-options
All I've found about this is: https://forums.aws.amazon.com/thread.jspa?threadID=112988

I know that I can do this:

container_commands:
  07_run_command: "mkdir -p /var/cache/tomcat8/temp/.m2/repository && chmod 777"

But can I do this?

container_commands:
  07_run_command: mkdir -p /var/cache/tomcat8/temp/.m2/repository && chmod 777

And do I still need the && to separate the commands, or are they executed as separate commands? Or is it still only one command?
Can I put an eb extension container command on multiple lines, and if so, how?
We have the application hosted locally not on EC2. Is it possible to access the AWS S3 using the IAM Role instead of profile or credentials from java?

Service roles are bound to AWS services, so - long story short - for your on-premises server you need to use AWS API keys.

The security team has raised concern about storing the credentials locally as it is vulnerable.

Unfortunately - in the end you need to store the credentials somewhere. Even using services such as Cognito or STS you will need to store the credentials for the service somewhere (effectively - for any external or cloud service, regardless of what cloud or service you may use).

IMHO the best you can do is use dedicated AWS credentials (API keys) with only the permissions that are really needed.
We have the application hosted locally, not on EC2. Is it possible to access AWS S3 using an IAM role instead of a profile or credentials from Java? The security team has raised a concern about storing the credentials locally, as it is vulnerable.

As far as I have googled, I have found options to access using credentials stored in the environment or in .aws as a profile. If we need role-based authentication, then the application is supposed to be deployed on EC2. But we have the server hosted locally. Please provide any suggestions you have.
access AWS S3 from java using IAM Role
You will be able to access that URL even if your instance does not have internet access. Another way you can get the id is by using the AWS CLI. The get-caller-identity command returns the account, user id and the ARN. You will want to make sure your EC2 instance has permissions to call this.

aws sts get-caller-identity

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:GetCallerIdentity",
            "Resource": "*"
        }
    ]
}
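The same check from Python with boto3, pulling just the account ID out of the response:

```python
import boto3

# The instance needs a route to an STS endpoint
# (public internet via NAT, or an STS VPC interface endpoint).
sts = boto3.client("sts")
identity = sts.get_caller_identity()

print(identity["Account"])  # the AWS account ID
print(identity["Arn"])      # ARN of the calling identity (e.g. the instance role)
```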
We have a requirement where we need to validate the AWS accountID from our code running on EC2 instance. One way I found is to get this information from AWS metadata IP at this URL:http://169.254.169.254/latest/dynamic/instance-identity/documentbut what if I dont have access to internet. Is it saved and retrievable from Instance without pinging any outside URL.
Get AWS Account ID from instance
"Also the environment creation and setup being fully scripted sounds like a lot of work" - it is. its also the correct thing to do. it allows you to not only version your code but the environments that the code runs in. automating your deployment is more than just your code. i'd recommend this.
So my app stack looks like this in prod:Backend: AWS API Gateway + Lambda + DynamoDB + ElastiCache(redis)Backend - algo: Long running process - dockerized Java app running on ECS (Fargate)Frontend: Angular app, served from S3I'd like to usehttps://www.cypress.io/for end-to-end testing and I'd like to usehttps://circleci.com/for my build server.How do I go about creating an environment to allow the end-to-end tests to run?Options:1) Use Terraform to script the infrastructure and create/tear down a whole environment every time we run the end-to-end tests. This sounds like a huge overhead in terms of spin up time. Also the environment creation and setup being fully scripted sounds like a lot of work!2) Create a dedicated, long lived environment that we deploy to incrementally. This sounds like it'll get messy - not ideal for a place to run tests.3) Make it so we can run the environment locally. So perhaps use use AWS'sSAMor something like this projecthttps://github.com/gertjvr/serverless-plugin-simulateThat last option may also answer the question of the local dev environment setup however everything that mocks serverless tech locally seems to be in beta and I'm concerned that if I go down that road I might hit some issues after investing a lot of time....
How can I automate the end-to-end testing of my serverless web app?
You should be able to rely on AWS_SAM_LOCAL=true per this commit. SAM Local sets that variable in the environment of the container it runs your function in, so in the Java handler you can check System.getenv("AWS_SAM_LOCAL") instead of passing your own "profile" variable.
I am playing with AWS SAM Java serverless application. I am using the eclipse AWS serverless plugin to create simple Dynamo DB based CRUD application. Application takes an http request and depending on the HTTP method tries the corresponding CRUD operation on DynamoDB.So all is working good except that I am not able to figure out how to pass an environment variable or a property file to my Lambda java code to determine whether lambda is running locally or in AWS environment. Depending on that I want to use local Dynamo DB client or AWS DB client. Here is the code snippet for that:String environment = System.getenv("profile"); AmazonDynamoDB dynamoDBclient = null; if("local".equalsIgnoreCase(environment)) { dynamoDBclient = AmazonDynamoDBClientBuilder.standard().withEndpointConfiguration( new AwsClientBuilder.EndpointConfiguration("http://172.16.123.1:8000", "local")) .build(); } else { dynamoDBclient = AmazonDynamoDBClientBuilder.standard().build(); } dynamoDBMapper = new DynamoDBMapper(dynamoDBclient);Trying to figure out how to pas this environment variable "profile". In SAM local run/debug config, I don't see any option to do that.
How to determine whether Lambda is running locally or under AWS under Java AWS serverless framework setup
One option is to pass the AWS_REGION as a job parameter. For example, if you trigger the job from Lambda:

import os

response = client.start_job_run(
    JobName='a_job_name',
    Arguments={'--AWS_REGION': os.environ['AWS_REGION']}
)

Alternatively, if you define your jobs using the AWS::Glue::Job CloudFormation resource:

GlueJob:
  Type: AWS::Glue::Job
  Properties:
    Role: !Ref GlueRole
    DefaultArguments:
      "--AWS_REGION": !Sub "${AWS::Region}"
    Command:
      ScriptLocation: !Sub s3://${GlueScriptBucket}/glue-job.py
      Name: glueetl

Then you can extract the AWS_REGION parameter in your job code using getResolvedOptions:

import sys
from awsglue.utils import getResolvedOptions

args = getResolvedOptions(sys.argv, ['AWS_REGION'])
print('region', args['AWS_REGION'])
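Since the goal here is to create a boto3 client without hard-coding the region, the resolved argument can be passed straight through; a short sketch along those lines:

import sys
import boto3
from awsglue.utils import getResolvedOptions

args = getResolvedOptions(sys.argv, ['AWS_REGION'])

# Build the client from the region passed in as a job argument.
glue = boto3.client('glue', region_name=args['AWS_REGION'])
databases = glue.get_databases()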
How can I get the region in which the current Glue job is executing?When the Glue job starts executing, I see the outputDetected region eu-central-1.In AWS Lambda, I can use the following lines to fetch the current region:import os region = os.environ['AWS_REGION']However, it seems like theAWS_REGIONenvironment variable is not present in Glue and therefore aKeyErroris raised:KeyError: 'AWS_REGION'The reason why I need the region is I am trying to fetch all databases and tables as described inthis questionand I do not want to hard code the region when creating the boto client.
AWS region in AWS Glue
AWS answers this specifically for RDS in their docs: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAM.ServiceLinkedRoles.html. Attach a statement like the following to the principal that runs the CloudFormation stack so that RDS can create its service-linked role:

{
    "Action": "iam:CreateServiceLinkedRole",
    "Effect": "Allow",
    "Resource": "arn:aws:iam::*:role/aws-service-role/rds.amazonaws.com/AWSServiceRoleForRDS",
    "Condition": {
        "StringLike": {
            "iam:AWSServiceName": "rds.amazonaws.com"
        }
    }
}
This is my cloudformation json code to create DBInstance. I have successfully created VPC and EC2 instances and added this code to create DBInstance. But I am having following error while updating my stack with new json file including this DBInstance code.Unable to create the resource. Verify that you have permission to create service linked role. Otherwise wait and try again later (Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterValue; Request ID: 1b64b02f-255a-4f5d-b68a-b0bacf6f2dba)"myDBInstance" : { "Type" : "AWS::RDS::DBInstance", "Properties" : { "DBName" : { "Ref" : "DBName"}, "AllocatedStorage" : "20", "DBInstanceClass" : { "Ref" : "DBInstanceClass"}, "Engine" : "MySQL", "EngineVersion" : "5.7.17", "MasterUsername" : { "Ref" : "DBUser"}, "MasterUserPassword" : { "Ref" : "DBPassword"}, "DBParameterGroupName" : { "Ref" : "myRDSParameterGroup"} } }, "myRDSParameterGroup" : { "Type" : "AWS::RDS::DBParameterGroup", "Properties" : { "Family" : "MySQL5.6", "Description" : "Cloudformation database parameter group", "Parameters" : { "autocommit" : "1", "general_log" : "1", "old_passwords" : "0" } } }enter image description here
AWS::RDS::DBInstance is not being created from cloudformation
Had the exact same thing; just found the answer in their documentation: "If you are using macOS, use HTTPS to connect to an AWS CodeCommit repository. After you connect to an AWS CodeCommit repository with HTTPS for the first time, subsequent access will fail after about fifteen minutes. The default Git version on macOS uses the Keychain Access utility to store credentials. For security measures, the password generated for access to your AWS CodeCommit repository is temporary, so the credentials stored in the keychain will stop working after about 15 minutes." https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-https-unixes.html#setting-up-https-unixes-credential-helper
I have two account AWS: DEV and PRD. I need to setupCodeCommiton PRD account. I test on DEV first. It worked well. Both DEV and PRD are setup CodeCommit on EU(Ireland) region. Then I clone all policy from DEV to PRD account.When I tried to clone repo CodeCommit on PRD account, I have a problem like that:fatal: unable to access 'URL xxx': The requested URL returned error: 403.As I researched, I checked git and curl version. Git version isgit version 2.14.4, curl version iscurl 7.53.1 (x86_64-redhat-linux-gnu) libcurl/7.53.1 NSS/3.28.4 zlib/1.2.8 libidn2/0.16 libpsl/0.6.2 (+libicu/50.1.2) libssh2/1.4.2 nghttp2/1.21.1 Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smb smbs smtp smtps telnet tftp Features: AsynchDNS IDN IPv6 Largefile GSS-API Kerberos SPNEGO NTLM NTLM_WB SSL libz HTTP2 UnixSockets HTTPS-proxy PSLPolicyAWSCodeCommitPowerUserwas attached for IAM user. Of course, I edited exactly repo as I want to user can access. IAM user also enabled HTTP credentials to access RepoCodeCommit.I also tried to add credentials:[credential] helper = !aws codecommit credential-helper $@ UseHttpPath = trueI don't know what am I miss ? On DEV account, I setup the same. It worked. Why did error appear on PRD account ? Could anyone explain for me ?Thank you!UpdateI got that error cause by on PRD account, I've to enable MFA for IAM user. It was resolved! Thank all!
AWS: Can't clone repo Codecommit, The requested URL returned error: 403
1) You do not need to issue an unzip command. The files section in appspec.yml specifies the files in your archive (source) that you wish to copy to the file system on the EC2 instance (destination), and the copy is done by the CodeDeploy agent. See https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-files.html

2) Create a run script that issues the java -jar command under the ApplicationStart hook. See https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html

Example appspec.yml (os: linux):

version: 0.0
os: linux
files:
  - source: ./
    destination: /home/ubuntu/myapp
hooks:
  ApplicationStart:
    - location: runapp.sh
      runas: ubuntu

runapp.sh:

#!/usr/bin/env bash
echo 'Starting my app'
cd '/home/ubuntu/myapp'
java -jar myapp.jar

In this example, you would include runapp.sh in your deployment package.
I want to use AWS CodeDeploy to deploy a jar file and then run my java -jar command from there once it is on the EC2 instance. But I've noticed that AWS CodeDeploy only pulls zip, tar and tar.gz from S3. I'm thinking I will use the CLI from my local Jenkins to push a .zip file (containing the jar) to S3, then run another CLI command to start AWS CodeDeploy, which will pull the .zip from S3. However, I do have a question about the details of AWS CodeDeploy: can I use the appspec.yml to issue two commands, 1) unzip the .zip from S3 once it is on the EC2 instance, and 2) issue the java -jar on a specific location? Thanks
AWS CodeDeploy jar
In the FIFO case, when you receive a message with a given message group ID, no more messages for the same message group ID are returned unless you delete the message or it becomes visible again (i.e. until you have successfully processed the first message with that group ID, or you have proven unable to process it within the time allowed, in which case it becomes visible again at the head of the queue of messages with that group ID).

In the non-FIFO (standard) case, I would expect processing of the remaining messages to continue regardless.

For more, see Amazon SQS FIFO (First-In-First-Out) Queues - Amazon Simple Queue Service.
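A minimal boto3 sketch of the receive/process/delete cycle described above; the queue URL and the process() function are placeholders:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue.fifo"  # placeholder

resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=5,
    WaitTimeSeconds=10,
)

for msg in resp.get("Messages", []):
    try:
        process(msg["Body"])  # placeholder for your processing logic
        # Deleting the message is what allows the next message for the same
        # message group ID to be delivered on a FIFO queue.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
    except Exception:
        # Leave the message alone; it becomes visible again after the visibility
        # timeout and blocks later messages in its group until it is handled.
        pass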
If I have 5 messages in a batch, and 4th message fails during receive due to network failure. Will the message processing be blocked, 5th message will be processed or not for both standard and FIFO queues.
AWS SQS- If message fails to process, will transmission of message stop
using (var amazonClient = new AmazonS3Client())
{
    var getObjectMetadataRequest = new GetObjectMetadataRequest()
    {
        BucketName = RawBucketName,
        Key = fileName
    };

    // Send the request and read the object's metadata from the response.
    var response = amazonClient.GetObjectMetadata(getObjectMetadataRequest);
    foreach (var key in response.Metadata.Keys)
    {
        Console.WriteLine(key + " = " + response.Metadata[key]);
    }
}

(On .NET Core the call is asynchronous: GetObjectMetadataAsync.) Methods link: https://docs.aws.amazon.com/sdkfornet1/latest/apidocs/html/T_Amazon_S3_Model_GetObjectMetadataRequest.htm
I am wondering how I could pull metadata for an object in an S3 bucket. I am using AWS SDK for .NET. I already know how to pull a list of objects but I need to know how to pull metadata for each objects. Please help me out.
How can I pull metadata for an object in a S3 bucket using C#?
Instead of using --exclude "*" --include "$b.txt", I just used aws s3 mv myfolder/"$b.txt" s3://mybucket/. I'm pretty sure I tried the same thing earlier without the quotes around $b.txt and it didn't work because there was whitespace at the front of the variable's value.
I am getting the error message"Unknown options: s3://mybucket/"when using the following set of commands to mv files to S3. The output that I am getting fromecho $bis exactly what I am expecting so I know I am targeting the correct file. The error occurs on the lineaws s3 mv ...tag=$( tail -n 2 /var/log/cloud-init-output.log ) if [[ ${tag} == *"Processed"* ]]; then b=${tag##*"from"} b=${b%%.*} # retain the part before the colon aws s3 mv myfolder/ s3://mybucket/ --recursive --exclude "*" --include "$b.txt" fiAfter messing around with it for a long time, I believe the$bvariable in the mv command is the issue because it will work if I substitute the output ofecho $bfor$bin$b.txt. However, I cannot figure out how to fix it.Here is the output when I runaws --version:aws-cli/1.14.8 Python/2.7.14 Linux/4.14.47-64.38.amzn2.x86_64 botocore/1.8.12which is the latest version and I have already tried running (I have python3 installed):pip3 install --upgrade awscliI know wildcards are weird with the aws-cli but I don't see why I would get an error using a variable. Thanks in advance.
Unknown Options when Using aws s3 mv
If you haven't created any keys, that is effectively the same as them having been deleted, so the check passes. AWS strongly recommends that you don't use root account keys; in fact, they strongly recommend not using the root account at all, but instead creating an IAM user for yourself.
I have no understanding on why this status is shown as checked and I don't recall anything as like I deleted any keys. Can someone let me know why this shows as checked by default ? I haven't created any keys yet.
Why "Delete your root access keys" security status is shown as ticked in AWS IAM console?
This is a common case with stateless JWT tokens issued by Cognito for authentication. Once a user has got hold of a token that is valid for 1 hour, the token itself acts as the proof of authentication. The token is signed and issued by AWS, and validating it only requires a signature verification using a public key.

The approach you can take is to handle this at the authorization layer in your application, where you can check whether the user is active/deactivated in your database after the user successfully authenticates. You can further delete the user from Cognito so that they are not able to log in again.
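If you also want to block the user on the Cognito side, the admin APIs can disable the account and revoke its refresh tokens; note that access/ID tokens that were already issued still remain valid until they expire. A sketch with a placeholder pool ID and username:

import boto3

cognito = boto3.client("cognito-idp")

user_pool_id = "us-east-1_EXAMPLE"  # placeholder
username = "some-user"              # placeholder

# Prevent the user from signing in again.
cognito.admin_disable_user(UserPoolId=user_pool_id, Username=username)

# Invalidate the user's refresh tokens so no new access tokens can be issued.
# Existing access/ID tokens remain valid until they expire.
cognito.admin_user_global_sign_out(UserPoolId=user_pool_id, Username=username)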
I am trying to use AWS Cognito user pools with Cognito federation as auth for my APIs on api-gateway. I got the authentication & authorization part (using roles) to work, but now stuck on how to revoke access. After login & getting the federated identity, I deleted the identity from identity browser (console) & deleted the user from cognito user pool. But that does not invalidate access using the earlier generated tokens, till they expire (which is a minimum of 1 hour).I also tried settingServerSideTokenCheckto true, but that doesn't work either. The only way to "revoke" access seems to bethis. But this does not work for us as our use case assigns roles to a group. I cannot have groups of users lose access to revoke/deny access to one user.Is there anything I have missed to get this done? I cannot fathom an auth service which does not give me easy way to revoke access to user.
Deleting cognito user & identity has no affect on user access
async became a reserved keyword in Python 3.7 (it was introduced for coroutines in 3.5, where it could still be used as an identifier). As you are running this code on Python 3.7, using it as a parameter name raises a syntax error. If you ran this code on 2.7, it would work just fine.

It looks like this line is not in the most recent version of paramiko, which renames this variable to async_:

def _close(self, async_=False):
    # We allow double-close without signaling an error, because real

Simply upgrading paramiko to the most recent version should solve your problem:

sudo pip install -U paramiko
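After upgrading, it can be worth confirming which paramiko the Python 3.7 interpreter actually imports; a quick check:

import paramiko

# Prints the version of the package the interpreter picks up.
print(paramiko.__version__)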
I'm trying to set up Bees with Machine Guns and noticed that regardless of the command for the bees I'm getting a syntax error inside the paramiko library:File "/usr/local/lib/python3.7/site-packages/paramiko/sftp_file.py", line 66 self._close(async=True) ^ SyntaxError: invalid syntaxThoughts on how to handle this?/how to get Bees with Machine Guns running? I was looking athttps://gist.github.com/mattheworiordan/1892979but I don't think thats for the same issue.
Bees with Machine Guns syntax error involving paramiko. (self._close(async=True))
It is possible to point your domain at an S3-hosted static website without using Route 53.

Go to your S3 console and select your bucket. Select the "Properties" tab, then select "Static website hosting". Once this is set up you should see an endpoint URL, similar to this: "Endpoint: http://xxxx.yyy.s3-website.xxxx.amazonaws.com".

Copy this URL, then create a CNAME record with EuroDNS and paste this link as the alias for "@" & "www" (optional). In the case of EuroDNS, set the host to your domain (gopropel.io & www.gopropel.io) and the canonical name to the URL. Allow a few minutes for the change to propagate and your domain should resolve to the S3 bucket.

This is not an ideal solution as it will limit certain features such as SSL (HTTPS). The recommended approach is to go with Route 53; it should cost less than $1.
I want to host a static website on Amazon S3. Created the relevant buckets - testing them ok. Now I have a domain name i've registered with EuroDNS - www.gopropel.io - I can't find how to connect it to my AWS S3 bucket. Do I need to create a route 53 hosted zone? Went over the AWS documentation and they all assume you are registering your domain with them.
Connecting external domain name to AWS S3 website
The --query capability in the AWS Command-Line Interface (CLI) is a feature of the CLI itself, rather than being performed during an API call. If you are using the boto3 list_objects_v2() command, a full set of results is returned. You can then use Python to manipulate the results.

It appears that you are wanting to list the most recent object in the bucket/path, so you could use something like:

import boto3

client = boto3.client('s3', region_name='ap-southeast-2')
response = client.list_objects_v2(Bucket='my-bucket')
print(sorted(response['Contents'], key=lambda item: item['LastModified'])[-1])
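Since the CLI command also used --prefix and a bucket can return more than 1,000 keys per response, a paginator-based sketch may be closer to the original command; the bucket name and prefix below are placeholders:

import boto3

client = boto3.client('s3', region_name='ap-southeast-2')
paginator = client.get_paginator('list_objects_v2')

latest = None
for page in paginator.paginate(Bucket='my-bucket', Prefix='path1/path2'):
    for obj in page.get('Contents', []):
        # Keep whichever object has the most recent LastModified timestamp.
        if latest is None or obj['LastModified'] > latest['LastModified']:
            latest = obj

print(latest)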
Would like to know the python boto3 code for below AWS CLIaws s3api list-objects-v2 \ --bucket myBucket \ --prefix path1/path2 \ --query 'reverse(sort_by(Contents,&LastModified))[0]'i didnt see any query option for list_objects_v2https://boto3.readthedocs.io/en/stable/reference/services/s3.html#S3.Client.list_objects_v2
boto3 version of list-objects-v2 --query command in AWS CLI
There are multiple S3 events that can trigger a Lambda function, so when selecting the events on the S3 side, choose the right ones (the original answer included a screenshot of the event-type selection). Put and Post events trigger the Lambda when files are uploaded; in the Lambda, issue a copy request, which raises a Copy event and therefore will not trigger the Lambda again. Problem solved.

Otherwise it will create a loop doing the same thing, and your Lambda will throttle and cost you money.
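If you prefer to set the trigger up in code, a hedged boto3 sketch that subscribes only to the Put and Post events; the bucket name and function ARN are placeholders, and note that this call replaces the bucket's existing notification configuration:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="my-bucket",  # placeholder
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:add-metadata",  # placeholder
                # Only direct uploads trigger the function; the copy request the
                # function issues raises s3:ObjectCreated:Copy, which is not listed.
                "Events": ["s3:ObjectCreated:Put", "s3:ObjectCreated:Post"],
            }
        ]
    },
)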
I have a lambda that gets triggered when an object gets created in my s3 bucket. It adds metadata to the object and does some validation. In order to add the metadata I issue a copy request, and delete the old object.But this creation also triggers my lambda. Is there any easy way around this?
My AWS Lambda is triggered by S3 object creation and it issues a copy object request. This creates an infinite loop. Any way around this?