Response | Instruction | Prompt
I've already got it. There's another parameter called DesiredCount, and it makes x replicas, depending on the number of tasks you need. So if you want to put 5 tasks in your service, the DesiredCount parameter you have to set is 5. Here is the link if you want to look up the DesiredCount parameter for more info.
Hi, I'm a newbie on AWS CloudFormation, and I'm working right now with an ECS Service with 1 task, but I would like to run more tasks using CloudFormation. However, among the properties of the AWS ECS Service there's one called TaskDefinition, and it only allows 1 task definition. How can I configure it in order to use more tasks? I'm doing the project in the same Region. Thanks. (Task Definition Property)
Multiple Tasks Definition on ECS Service using CloudFormation
From the documents (here): "Data sharing via datashare is only available for ra3 instance types." The document lists ra3.16xlarge, ra3.4xlarge, and ra3.xlplus instance types for producer and consumer clusters. So, if I were in your place, I would first go back and check my instance type. If still not sure, drop a simple CS ticket and ask them if anything has changed recently and the documentation is not updated.
I am trying to create a datashare in Redshift by following this documentation. When I type this command: CREATE DATASHARE datashare_name I get this message: ERROR: CREATE DATASHARE is not enabled. I also tried to create it using the console, but same issue. So how do I enable data sharing in Redshift?
How to enable datasharing in Redshift cluster?
From AWS: "The call to kms:Decrypt is to verify the integrity of the new data key before using it. Therefore, the producer must have the kms:GenerateDataKey and kms:Decrypt permissions for the customer master key (CMK)." https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-key-management.html#sqs-what-permissions-for-sse
I noticed that in both of the following scenarios: S3 - PutObject to an encrypted bucket, and SQS - SendMessage to an encrypted queue, I need to have the kms:Decrypt permission (in addition to the kms:GenerateDataKey permission), otherwise it throws an "unauthorized" exception. Why would that be the case?
AWS KMS - why do I need the "kms:Decrypt" permission when I try to encrypt data?
You need, at least, to perform three actions:
# Download the zip file from S3; note the use of the S3 URI, not HTTPS
aws s3 cp s3://aws-lake/test/test.zip temp.zip
# Decompress the zip file into a temp directory
unzip -d temp_zip_contents temp.zip
# Sync up the contents of the temp directory to the S3 prefix
aws s3 sync temp_zip_contents s3://aws-lake/test/test_contents
# And optionally, clean up the temp files and directory
# Unix:
rm -rf temp.zip temp_zip_contents
# Windows:
rd /s/q temp_zip_contents
del temp.zip
It's possible to write a program to download the file into memory, read the contents of the zip file, and upload the individual files decompressed, but that requires more than a few command line commands.
Can you unzip a file from S3 and push the unzipped version back into S3 using the AWS CLI? Trying the below, no success yet: unzip aws s3 cp https://aws-lake/test/test.zip
Unzip S3 File and push back into unzipped file into S3 via AWS CLI
Is it possible to attach an ACM certificate to an ALB from a different region using Terraform? Sadly, it's not possible. ACM certs can only be used in the region where they were created, not counting global resources such as CloudFront. For your ALB, you have to create a new ACM certificate in the ALB's region and register it for the same domain. From the AWS blog: "ACM certificates must be requested or imported in the same AWS Region as your load balancer. Amazon CloudFront distributions must request the certificate in the US East (N. Virginia) Region."
I have my AWS infrastructure setup in ap-southeast-1 using terraform, however, I want to link my ACM certificate created in us-east1 to my load balancer using aws_alb_listener resource.resource "aws_alb_listener" "https" { load_balancer_arn = aws_lb.main.id port = 443 protocol = "HTTPS" ssl_policy = "ELBSecurityPolicy-2016-08" certificate_arn = var.acm_certificate_arn depends_on = [aws_alb_target_group.main] default_action { target_group_arn = aws_alb_target_group.main.arn type = "forward" } }When I do terraform apply, it raises an error.Is it possible to attach an ACM certificate to alb from a different region using terraform?My use case is this cert will also be used in AWS CloudFront as a CDN.
How to attach an ACM certificate from a different region (us-east1) to an application load balancer in another region using terraform
What you are looking for is the Screen command. This allows you to create a virtual, detached terminal session that is retained even if your SSH connection closes. The screen's processes continue to execute gracefully in the background. (Screen is not pre-installed on all systems. See your distribution's documentation for installation instructions.) Create a new session: screen -S java_session (run your Java command here). To detach (go back to SSH): CTRL + A, then D. To re-attach (in a new SSH session): screen -r. More information on Screen: Documentation
I am running a Java CLI utility tool on an EC2 instance. It does some process and it writes the result to the console and also to a log file. I am running it with the following command:java -jar <jar_file_name> |& tee output_file.txtThis processes should run for a few days. But when I get disconnected (the SSH session is closed) from the EC2 instance the process stops. How can I make the process running even when I get disconnected? (or turn off my local machine)
Process running on EC2 instance stops when get disconnected from EC2
I've just found an answer using an old script to delete IAM users I had:aws lambda list-functions --region us-east-1 | jq -r '.Functions | .[] | .FunctionName' | while read uname1; do echo "Deleting $uname1"; aws lambda delete-function --region us-east-1 --function-name $uname1; done
I have around 200 lambda functions that I need to delete. Using the console I can only delete one at a time, which would be really painful. Does anyone know a cli command to bulk delete all the functions? Thanks!
How to bulk delete lambda functions using AWS Cli
In the Regional Availability paragraph of the Amazon Cognito developer guide it is stated that: "Amazon Cognito is available in multiple AWS Regions worldwide. In each Region, Amazon Cognito is distributed across multiple Availability Zones. These Availability Zones are physically isolated from each other, but are united by private, low-latency, high-throughput, and highly redundant network connections. These Availability Zones enable AWS to provide services, including Amazon Cognito, with very high levels of availability and redundancy, while also minimizing latency." Additionally, the current version (published March 6, 2019) of the Amazon Cognito SLA (Service Level Agreement) has defined an uptime of "three nines" for any given month: "AWS will use commercially reasonable efforts to make Cognito available with a Monthly Uptime Percentage for each AWS region, during any monthly billing cycle, of at least 99.9% [...]. In the event Cognito does not meet the Service Commitment, you will be eligible to receive a Service Credit as described [...]"
I'm writing a report about high availability in my application. Today we use Cognito as the authentication service. In the AWS documentation I found this page about resilience in Cognito: https://docs.aws.amazon.com/cognito/latest/developerguide/disaster-recovery-resiliency.html But I want to understand: does this really mean that Cognito is highly available? In case of a failure of an availability zone, does Cognito still work?
Is Cognito High Available service by default?
You should set up origin request policies for your cache behavior. You can try the AWS managed Managed-AllViewer policy, or create a new one just to forward the query strings.
I created a lambda function with a API gateway and Cloudfront distribution in the frontin the cloudfront behaviors I disabled cachingthis is the lambda function:exports.handler = async (event) => { const response = { statusCode: 200, body: JSON.stringify('rawQueryString is: ' + event.rawQueryString), }; return response; };calling the api gateway I see the querystring in the lambda responsehttps://xxx.execute-api.us-east-1.amazonaws.com/api?name=johnrawQueryString is: '?name=john'calling the cloudfront distribution i can't see the querystring in the lambda responsehttps://xxx.cloudfront.net/api?name=johnrawQueryString is: ''I tried with "Origin Request Policy"but now when i callhttps://xxx.cloudfront.net/api?name=johnI get{ "message": "Forbidden" }
pass query params from cloudfront to api gateway
You can't filter messages with SQS alone; however, you can do that with SNS. You can publish the messages to an SNS topic. The message filter feature of SNS enables endpoints subscribed to an SNS topic to receive only the subset of topic messages they are interested in, so you can ensure only the relevant messages with specific attributes are enqueued to the consumer's queue. Refer to Filter Messages Published to Topics and SNS subscription filter policies.
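To make that concrete, here is a minimal boto3 sketch of subscribing a queue to a topic with a filter policy, so SNS only delivers messages whose attributes match. The topic/queue ARNs and the event_type attribute are hypothetical placeholders, and the queue's access policy must separately allow SNS to send to it.

```python
import json
import boto3

sns = boto3.client("sns")

# Subscribe the consumer's SQS queue to the topic with a filter policy,
# so only messages carrying event_type="order_created" reach the queue.
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:my-topic",   # hypothetical topic ARN
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:123456789012:my-queue",   # hypothetical queue ARN
    Attributes={"FilterPolicy": json.dumps({"event_type": ["order_created"]})},
)

# Publishers must set the message attribute for the filter to apply.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:my-topic",
    Message="order payload",
    MessageAttributes={"event_type": {"DataType": "String", "StringValue": "order_created"}},
)
```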
I have set up a standard queue with AWS SQS and I want to poll this queue for messages containing a specific attribute, preferably using the boto3 library in Python. I know that boto3 has a method receive_message() which polls messages from the queue. However, I want to only get those messages which contain a specific attribute. A naive approach is to iterate through the receive_message() output and check if a message contains the attribute, but I was wondering if there is another solution to this problem.
Polling an AWS SQS queue for messages with certain attributes
Please take a look at these docs: CodeBuild - GitHub webhooks, and CodeBuild Artifacts. The main idea: you can create a webhook from GitHub to CodeBuild. Every PUSH in GitHub will start a CodeBuild container, and this container gets the artifacts (repository). In CodeBuild itself you already have access to the artifacts, so for example you can use boto3 (Python) or the aws-cli to interact with the S3 bucket and upload the artifacts.
I want to copy a latest code of gitHub into amazon s3 bucket with the help of only code build service.I know it's pretty much straight forward to do using codepipeline and codedeploy service. But I am more interested to see how its doable only with the codebuild service.At the moment, I do not have any kind of java/scala/node js code in githib repository and also do not want to execute compile,run or test the code. I just simply looking to copy files from github to s3 bucket when there is PUSH event occured.Please guide me ,how to configure codebuild service for this use case.Please share your thoughts and links with respect to this problem.Thanks in advance.
How to copy github code to s3 bucket using only aws codebuild service?
This is not a service quota issue. The issue you are facing is due to excessive authentication requests from your side. This issue will not occur if you renew your SMTP credentials in the SES console to use Sigv4 credentials.https://docs.aws.amazon.com/ses/latest/DeveloperGuide/smtp-credentials.html
The errorThrottling failure: Maximum SigV2 SMTP sending rate exceeded.suddenly started to appear in our .NET application though there were no exceeding any quota (14 mails per second or 50000 per day) in our AWS Sending Statistics.I can see many similar issues aboutThrottling – Maximum sending rate exceededon StackOverflow but I'm confused aboutSigV2in my error message.Searching inother resources like this onegave me the idea that this issue started to happen recently from about October 20, 2020, and there is no exact answer to why this happened. The only solution I can see is to migrate from using SigV2 signing process tothe new method.The question is: Why this happened and can this issue be solved without changes in the application code?
Amazon AWS SES Error - Throttling failure: Maximum SigV2 SMTP sending rate exceeded
With SNS publish you can only publish messages one by one; AWS does not provide a way to publish messages in bulk or batch to SNS. Below is a possible solution you could try: use the send-message-batch API and send bulk messages to SQS. That SQS queue will be subscribed by an SNS topic. Below is an image showing how to create the subscription to SQS:
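As a rough illustration of the SQS side, a boto3 sketch that sends 1000 messages in batches of 10 (the maximum number of entries per send_message_batch call); the queue URL is a hypothetical placeholder.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # hypothetical

messages = [f"message {i}" for i in range(1000)]

# send_message_batch accepts at most 10 entries per call, so chunk the list
for start in range(0, len(messages), 10):
    chunk = messages[start:start + 10]
    sqs.send_message_batch(
        QueueUrl=queue_url,
        Entries=[
            {"Id": str(start + i), "MessageBody": body}  # Id must be unique within the batch
            for i, body in enumerate(chunk)
        ],
    )
```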
I have a scenario where every time lambda runs, it will send 1000 messages to a SNS topic. I can loop through the list of messages and call the publish method 1000 times, one message at a time but I was wondering if there is a way to send multiple messages in one call. In that case I can batch the messages and call publish let say 10 with 100 messages on each execution.I would really appreciate if someone can help on this.
Sending multiple messages to SNS on a single call
Amazon S3 does not have the concept of a 'Directory'. Instead, the full path of an object is stored in its Key (filename). For example, an object can be stored in Amazon S3 with a Key of: invoices/2020-09/inv22.txt This object can be created even if the invoices and 2020-09 directories do not exist. When viewed through the Amazon S3 console, it will appear as though those directories were automatically created, but if the object is deleted, those directories will disappear (because they never existed). If a user clicks the "Create Folder" button in the Amazon S3 management console, a zero-length object is created with the same name as the folder. This 'forces' the folder to appear even if there are no objects 'inside' the folder. However, it is not actually a folder. Therefore, it is not possible to "check if the listed path is a file or a directory" because directories do not exist. Instead, I recommend that you assume everything is a 'file' unless it is zero-length.
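A rough Python/boto3 illustration of that rule (the Scala SDK's headObject call behaves the same way); treating zero-length keys, or keys ending in '/', as folder markers is a heuristic based on how the console creates folders, not an S3 guarantee.

```python
import boto3

s3 = boto3.client("s3")

def is_file(bucket: str, key: str) -> bool:
    """Heuristic: zero-length objects and keys ending in '/' are folder markers."""
    head = s3.head_object(Bucket=bucket, Key=key)
    return head["ContentLength"] > 0 and not key.endswith("/")

# Example usage with a hypothetical bucket and keys
print(is_file("my-bucket", "invoices/2020-09/inv22.txt"))  # True for a real object
print(is_file("my-bucket", "invoices/2020-09/"))           # False for a console-created folder marker
```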
I have functionality for listing AWS S3 directories with Scala and I would like to check if the listed path is a file or a directory. How can I implement this functionality (isFile method) using amazon-sdk-s3?Here is how it looks like:def listContents(): Seq[T] = val paths = s3Client.list(inputPath) for { path <- paths if isFile(new Path(path)).getOrElse(false) res <- transform(path.toString) } yield res def isFile(path: String) = ??? //implementation I need
Check if path is file Amazon S3
This is the expected behavior of the put operation. From the docs (emphasis mine): "Creates a new item, or replaces an old item with a new item. If an item that has the same primary key as the new item already exists in the specified table, the new item completely replaces the existing item."
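If what you actually want is insert-only behaviour, a put can be made to fail instead of overwrite by adding a condition; the JavaScript DocumentClient accepts the same ConditionExpression parameter. A hedged boto3 sketch, assuming fieldOne is the table's partition key:

```python
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("Table")

try:
    table.put_item(
        Item={"fieldOne": "a", "fieldTwo": "b"},               # hypothetical item
        ConditionExpression="attribute_not_exists(fieldOne)",  # reject the put if the key already exists
    )
except ClientError as err:
    if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
        print("Item already exists; put was rejected instead of replacing it")
    else:
        raise
```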
I have a simple logic where I want toINSERTrecord in the table usingAWS.DynamoDB.DocumentClient. Below is code which I am using. Here, I am noticing that.putis updating the record and not sure why.While checking the logs in CW, I am not seeing any error so not sure what am I missing here.Code:async addEvent(incomingModel: MyModel): Promise <MyModel> { const dynamoModel: MyModel = { fieldOne: incomingModel.fieldOne, fieldTwo: incomingModel.fieldTwo, "One#two#three": `${incomingModel.one}#${incomingModel.two}#${incomingModel.three}`, fieldThree: incomingModel.fieldThree, fieldFour: incomingModel.fieldFour, }; var params = { TableName: 'Table', Item: dynamoModel, }; return this.documentClient .put(params) .then((data) => { this.logger.debug( "Response received from Dynamo after adding an incomingModel", data, this.constructor.name, ); return data as MyModel; }) .catch((error) => { const errorMessage = JSON.stringify(error); this.logger.error( `Error creating incomingModel with body of ${errorMessage}`, error, this.constructor.name, ); throw error; }); }
Why "documentClient.put" is doing UPDATE instead of INSERT in DynamoDB?
The error is actually related to CodeBuild, not CodePipeline. It seems like CodeBuild does not have valid permissions for its attached service role. From the console you can find the attached service role by performing the following:
1. Go to the CodeBuild console.
2. Click "Build Projects" in the menu on the left-hand side.
3. Click the radio button next to the build project you're using, then on the top menu click "Edit" and select the "Edit Source" option.
4. At the bottom of the page will be a section titled "Service role permissions" with the ARN below it.
This IAM role will need to be granted the permissions it requires (in your case "s3:PutObject") if they are not already there. AWS provides a full policy in the Create a CodeBuild service role documentation.
Have been trying to setup an AWS pipeline following the tutorial here:https://docs.aws.amazon.com/lambda/latest/dg/build-pipeline.htmlBut the pipeline continously fails with below error logs:Here are some of the actions, I tried already:Granted full access of S3 to "cfn-lambda-pipeline" role associated with Cloud Formation and Code Pipeline Service Role.Allowed public ACL access to S3 bucket.Below is my buildspec.ymlversion: 0.2 phases: install: runtime-versions: nodejs: 12 build: commands: - npm install - export BUCKET=xx-test - aws cloudformation package --template-file template.yaml --s3-bucket $BUCKET --output-template-file outputtemplate.yml artifacts: type: zip files: - template.yml - outputtemplate.ymlBelow is my template.yamlAWSTemplateFormatVersion: '2010-09-09' Transform: AWS::Serverless-2016-10-31 Description: > helloWorld DZ Bank API Gateway connectivity helloWorld Globals: Function: Timeout: 3 Resources: HelloWorldFunction: Type: AWS::Serverless::Function Properties: CodeUri: ./ Handler: app.lambdaHandler Runtime: nodejs12.x Events: HelloWorld: Type: Api Properties: Path: /hello Method: get
Unable to execute AWS Pipeline Error: "An error occurred (AccessDenied) when calling the PutObject operation: Access Denied"
I was able to add a "Resource-based policy" entry by using the base CfnEventBusPolicy class and referencing the corresponding bus by its name:const defaultBus = event.EventBus.fromEventBusName(this, 'default-bus', 'default'); new event.CfnEventBusPolicy(this, 'xaccount-policy', { statementId: 'AllowXAccountPushEvents', action: 'events:PutEvents', eventBusName: defaultBus.eventBusName, principal: 'account-id-goes-here', });
What I am trying to do is send an event from a different AWS account to my account which contains the eventbus.For that I am trying to attach a role/policy to EventBus but I am not able to. I tried to use grantPutEvents but no luck there too. How to do this? (add/attach a Policy)Also if I attach policy withPrincipalas account ID of the other AWS account andresourceas the ARN of the EventBus, Will this allow me to send events ? Or do I need to do something more?
Attach Policy to EventBus using CDK and send cross-account events to Eventbus
You can copy the contents of one prefix (folder) to another prefix (folder) by using this command: aws s3 cp --recursive s3://<bucket-name>/<source-folder-name> s3://<bucket-name>/<target-folder-name> --region <region-name>
Trying to copy data from one folder in the S3 bucket to another folder in the same bucket. Is it possible from GUI or AWS CLI?
How to copy data from one AWS S3 folder to another folder in the same bucket?
You can get the certificate ARN using
data "aws_acm_certificate" "certificate" { domain = "your.domain" statuses = ["ISSUED"] most_recent = true }
and then attach it to the listener:
resource "aws_lb_listener_certificate" "ssl_certificate" { listener_arn = aws_lb_listener.alb_front_https.arn certificate_arn = data.aws_acm_certificate.certificate.arn }
I have certificate in aws certificate manager.How I connect this certificate toaws_alb_listenerin terraform?Right now I take the certs from files in my computer.resource "aws_alb_listener" "alb_front_https" { load_balancer_arn = "${aws_alb.demo_eu_alb.arn}" port = "443" protocol = "HTTPS" ssl_policy = "ELBSecurityPolicy-TLS-1-2-Ext-2018-06" certificate_arn = "${aws_iam_server_certificate.lb_cert.arn}" default_action { target_group_arn = "${aws_alb_target_group.nginx.arn}" type = "forward" } } resource "aws_iam_server_certificate" "lb_cert" { name = "lb_cert-${var.app}" certificate_body = "${file("./www.xxx.com/cert.pem")}" private_key = "${file("./www.xxx.com/privkey.pem")}" certificate_chain = "${file("./www.xxx.com/chain.pem")}" }I want toaws_alb_listenerto use certificate on aws certificate manager.How to do that in terraform?
How to connect aws certificate manager to aws_alb_listener in terraform?
There are many ways to build an AMI. An AMI is built by creating an image of an existing instance. This instance can be configured manually or using a configuration tool (such as Ansible, Chef or Puppet). By using these tools you can automate the build of your servers to always follow a suite of instructions that are reproducible. A workflow known as pre-baked/golden AMIs involves running these configuration tools on a new server and then creating the image from that server. This would then be rolled out, replacing other servers. Amazon has a blog post on the Golden AMI Pipeline that helps explain how this can be automated. AWS has also recently launched a tool named the EC2 Image Builder, which allows full automation as well as validation of the AMI after it has been created.
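On the API side, the CreateImage call is what turns a configured instance into an AMI; a small boto3 sketch (instance ID and AMI name are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",        # hypothetical instance to snapshot
    Name="my-app-golden-ami-2020-06-01",     # hypothetical AMI name
    Description="Golden AMI baked from a configured instance",
    NoReboot=False,                          # allow a reboot for a consistent filesystem image
)
image_id = response["ImageId"]

# Optionally block until the AMI is ready to launch from
ec2.get_waiter("image_available").wait(ImageIds=[image_id])
print(image_id)
```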
These are steps I found in an AWS document: You can create an AMI using the AWS Management Console or the command line.The following diagram summarizes the process for creating an Amazon EBS-backed AMI from a running EC2 instance. Start with an existing AMI, launch an instance, customize it, create a new AMI from it, and finally launch an instance of your new AMI.Is it possible for these to be automated? I am not sure which service to use at the moment or if it even is possible. any ideas appreciated
Is it possible to automate the creation of AMIs in AWS?
I found examples with YAML input but not with JSON input online. While YAML has its advantages, sometimes JSON is easier to work with in my opinion (in bash/gitlab CI scripts for example).The way to callaws deployusing JSON without the use of S3 and constructing the Appspec content in a variable:APPSPEC=$(echo '{"version":1,"Resources":[{"TargetService":{"Type":"AWS::ECS::Service","Properties":{"TaskDefinition":"'${AWS_TASK_DEFINITION_ARN}'","LoadBalancerInfo":{"ContainerName":"react-web","ContainerPort":3000}}}}]}' | jq -Rs .)Note thejq -Rs .at the end: the content should be a JSON-as-String and not be part of the actual JSON. Usingjqwe escape the JSON. Replace the variables as needed (AWS_TASK_DEFINITION_ARN, ContainerName and ContainerPort etc.)REVISION='{"revisionType":"AppSpecContent","appSpecContent":{"content":'${APPSPEC}'}}'And finally we can create the deployment with the new revision:aws deploy create-deployment --application-name "${AWS_APPLICATION_NAME}" --deployment-group-name "${AWS_DEPLOYMENT_GROUP_NAME}" --revision "$REVISION"Tested on aws-cli/2.4.15
Is there a way to runAWS Codedeploywithout the use of anappspec.ymlfile?I am looking for a way to create a 100% purely command line way of running create-deployment without the use of any yml files in S3 bucket
AWS cli create deployment without appspec file
From the docs: "Batch size – The number of items to read from the queue in each batch, up to 10. The event might contain fewer items if the batch that Lambda read from the queue had fewer items." Thus, based on this, you should get 3 messages. Lambda should not wait for 5.
I am working with AWS SQS and Lambda. I wanted to know: if the batch size = 5 and there are 3 SQS messages left, will the Lambda be triggered with a batch of 3 messages, or will SQS wait for the message count to become 5?
AWS SQS Batch size and messages
The problem is that output files from Amazon Athena are being mixed in with your source files. To fix it, go to the Athena console and click Settings. Then, change the Query result location to a different location that does not point to the location where you are storing the source data files. The Query result location is where Athena stores the output of queries, in case you need the results again or want to use them as input to future queries.
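The same separation applies when you run queries programmatically; a boto3 sketch with a hypothetical database, table and results bucket, where the results location is kept apart from the source-data prefix:

```python
import boto3

athena = boto3.client("athena")

athena.start_query_execution(
    QueryString="SELECT * FROM my_table LIMIT 10",                      # hypothetical query
    QueryExecutionContext={"Database": "my_db"},                        # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # NOT the source-data location
)
```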
The first time I ran the query, it returned 2 rows with columns names. I edited the table and added skip.header.line.count - 1 and reran(First time), but it returned same result with double inverted commas. Then reran again(Second time), and this changed everything.First time Query run output:https://i.stack.imgur.com/k6T2O.pngSecond time Query run output:https://i.stack.imgur.com/6Cxrf.png
AWS Athena query returns results in incorrect format when query is run again
You cannot create a local secondary index on an existing DynamoDB table. From the AWS documentation: "To create one or more local secondary indexes on a table, use the LocalSecondaryIndexes parameter of the CreateTable operation. Local secondary indexes on a table are created when the table is created. When you delete a table, any local secondary indexes on that table are also deleted." You would need to either use a global secondary index, or migrate to a new table on which you create the LSI at creation time.
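For reference, this is roughly what declaring the LSI at table-creation time looks like with boto3 (table, attribute and index names are hypothetical); note the LSI reuses the table's partition key and only changes the sort key:

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="Orders",
    AttributeDefinitions=[
        {"AttributeName": "CustomerId", "AttributeType": "S"},
        {"AttributeName": "OrderId", "AttributeType": "S"},
        {"AttributeName": "OrderDate", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "CustomerId", "KeyType": "HASH"},
        {"AttributeName": "OrderId", "KeyType": "RANGE"},
    ],
    LocalSecondaryIndexes=[
        {
            "IndexName": "ByOrderDate",
            "KeySchema": [
                {"AttributeName": "CustomerId", "KeyType": "HASH"},  # same partition key as the table
                {"AttributeName": "OrderDate", "KeyType": "RANGE"},  # alternative sort key
            ],
            "Projection": {"ProjectionType": "ALL"},
        }
    ],
    BillingMode="PAY_PER_REQUEST",
)
```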
i am trying to create a local secondary index via the management console. After selecting the desired table in DynamoDB, I choose the "index" tab and click "Create Index". Then I am prompted with the following UI.I cannot state, if the index should be local or global. AWS does not automatically check, wheter the index partition key equals the table partition key. Even when it is the same partition key, AWS specifies the Index as "GSI" instead of "LSI". I can configure a local secondary index via the cli correctly - this works perfectly fine.I was just wondering, can you actually do this right in the management console?
How can I create a local secondary index in AWS DynamoDB?
CodePipeline notifications are handled and set up using AWS::CodeStarNotifications::NotificationRule. The events supported, and which should be used in the rule, are described in: Events for Notification Rules on Pipelines
I'm building a CodePipeline stack using CloudFormation. Everything works flawlessly. One element I am unable to add is the Notification rule using CloudFormation Template and I cannot find any documentation on it apart from the console method.I tried adding aNotificationArnlike this but this doesn't work as I found out it is specific toManuapprovalaction.CodePipelineSNSTopic: Type: AWS::SNS::Topic Properties: Subscription: - Endpoint: !Ref Email Protocol: email . . . - Name: S3Source Actions: - Name: TemplateSource ActionTypeId: Category: Source Owner: AWS Provider: S3 Version: '1' Configuration: S3Bucket: !Ref 'S3Bucket' S3ObjectKey: !Ref 'SourceS3Key' NotificationArn: !Ref CodePipelineSNSTopic OutputArtifacts: - Name: TemplateSource RunOrder: '1'Is there a documentation that I am unable to find? Please help me
Create CodePipeline Notification Rule using CloudFormation
You shouldn't make an SQS queue public so that anyone without AWS credentials could use it; it's not a good practice. A better option is to use API Gateway in front of your SQS queue: Creating an AWS Service Proxy for Amazon SQS. This way you can make your API Gateway endpoint public and control its throughput, limits, throttling, and access using API keys, and more. The API Gateway would be integrated with your SQS queue, which would allow you to trigger your Lambda function. With the use of API keys or Lambda authorizers you will be able to control access of your devices/agents to the API Gateway, and subsequently, to SQS.
My Agents running on various environments/devices are going to drop periodic messages from public network. These messages will be processed by my AWS Lambda. The systems are asynchronous.I am thinking of using SQS to feed the Lambda. Just that, SQS endpoint will be open to internet. How can I validate the messages posted on AWS SQS.Most of the devices/agents pushing messages will be on customer VPN. So, establishing a private-vpn-link is a possible solution.
Publishing AWS SQS message from Public Internet
So... I had two ways: send my param as &value1=[1,2,3], or use aws_proxy and access value1 from the event's multiValueQueryStringParameters. I chose the latter.
I'm using aws integration api gateway with lambda and i have data mapping template. The url with query is likehttps://example.com/query?value1=val1&value1=val2&value1=val3I'm trying to pass all those params to lambda, but have no luck - only last value is passed. Here is part of data mapping template."queryStringParameters": { #foreach($queryParam in $input.params().querystring.keySet()) "$queryParam": "$util.escapeJavaScript($input.params().querystring.get($queryParam))" #if($foreach.hasNext),#end #end },I know that there is multivaluequerystringparameters in aws proxy integration but had no luck finding them using data mapping template. Here is test results:Method request query string: {value1=[val1,val2,val3]} Endpoint request body after transformations: "queryStringParameters": {"value1": "val3"}Tried to iterate through that parameter like in VTL using #foreach but had no luck with that too
How to pass multi value query string parameter to lambda on api gateway?
Assuming you have 242 unique IDs among the 243 log events, you can group on ID and keep only the IDs that occur more than once, like this: stats count() as cnt by ID | filter cnt > 1
I have my Android app's logs in CloudWatch. One event I am tracking is giving data like this. Using 'count_distinct' it gives a count of 242, while using 'count' it gives 243. So there is one duplicate entry. I have an id field as well, and I guess it might be repeating. How can I filter it out?
How to filter out duplicate values from CloudWatch logs?
Based on https://forums.aws.amazon.com/message.jspa?messageID=947039 - for me it was very helpful to troubleshoot via AWS CloudTrail:
1. Navigate to the CloudTrail console: https://console.aws.amazon.com/cloudtrail/
2. Click on the Event History tab.
3. Use Event Source as the filter and, in the time range, select the timestamp around the cluster launch.
4. From the buttons on the right side, click on the gear icon (show/hide columns) and select the Error Code column checkbox.
5. Once all the above is done, go through the list of events and expand the one which has an error code like AccessDenied, Client.UnauthorizedOperation or any other exception.
Once you know which API call is being denied, you can then investigate further regarding the same.
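The same investigation can be scripted instead of clicked through; a rough boto3 sketch that pulls recent CloudTrail events and prints those whose payload mentions an access error (the EventSource value and one-hour window are just examples):

```python
from datetime import datetime, timedelta
import boto3

cloudtrail = boto3.client("cloudtrail")

resp = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventSource", "AttributeValue": "ec2.amazonaws.com"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    MaxResults=50,
)

for event in resp["Events"]:
    payload = event["CloudTrailEvent"]  # raw JSON string; the errorCode lives inside it
    if "AccessDenied" in payload or "UnauthorizedOperation" in payload:
        print(event["EventTime"], event["EventName"])
```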
I'm trying to create a new EMR cluster (tried emr-5.30.0 and emr-6.0.0 versions) but I'm receiving the validation message error: "Terminated with errorsService role EMR_DefaultRole has insufficient EC2 permissions".I've tried this workaroundhttps://aws.amazon.com/premiumsupport/knowledge-center/emr-default-role-invalid/recreating the default roles for EMR but the validation message error still happening.Any guidance or recommendations on how to resolve this issue are much appreciated!Thank you
AWS EMR - EMR_DefaultRole has insufficient EC2 permissions
I had the same result, but the error was in how I was setting up the sync command.
a) From local to S3 bucket, e.g. create a file 'hi.txt' inside the path '/home/ec2-user/src_folder'. The sync command should be similar to: aws s3 sync src_folder s3://your-bucket/dest_folder Verify your files with: aws s3 ls s3://your-bucket/dest_folder/
b) From S3 bucket, all content to local: aws s3 sync s3://your-bucket .
I have the proper access and role in IAM. I have administrator access. Myaws s3 lscommand works alright. I have doneaws configureas well. But when I run the command :aws s3 sync s3://my-bucket local_destination_folderIt only downloads until the last folder of the bucket and that folder is empty. I even tried theaws s3 cpcommand with--recursiveswitch, but nothing gets downloaded. I don't get any error as well.
Getting no error but AWS S3 sync does not work at all
I am Max from the DataGrip team, and the correct answer is: it could be a JDBC driver issue, and the desired method hasn't been implemented yet, since you're trying to run a pure cqlsh command as SQL. Follow the issue DBE-10638.
I'm trying to perform inserts on Amazon's Managed Cassandra service from IntelliJ's DataGrip IDE, however I recieve the following error:Consistency level LOCAL_ONE is not supported for this operation. Supported consistency levels are: LOCAL_QUORUMThis is due to Amazon using theLOCAL_QUORUMconsistency level for writes.I tried to set the consistency level withCONSISTENCY LOCAL_QUORUM;before running other queries but it returned the following error:line 1:0 no viable alternative at input 'CONSISTENCY' ([CONSISTENCY])From my understanding, this is becauseCONSISTENCYis a cqlsh command and not a CQL command.I cannot find any way to set the consistency level from within DataGrip so that I can run scripts and populate my tables.Ultimately, I will use plain cqlsh if I cannot find a solution but I was hoping to use DataGrip as I find it useful and have many databases already configured. I hope someone can shed some light on the issue, this seems like it should be a basic feature.
Cannot set consistency level when querying Amazon Keyspaces service from DataGrip
Another way: date(format('%d-%d-%d', 2020, 3, 31)) Based on "Calculate date and weekending date on Presto"
Entries in my table are saved with a date, as distinct fieldsday,monthandyear. I want to read the dates as Date type.What's the correct way to do it?
How to combine day, month, year fields into a date in Presto?
"And this is the desired behaviour because it shouldn't delete any network interfaces." This is an incorrect assumption. If your build project uses a VPC configuration, CodeBuild will create a network interface in your account and attach it to the build container so that the build container can access VPC resources, e.g. a database. CodeBuild will delete this network interface once the build finishes. The requirement for "ec2:DeleteNetworkInterface" is clearly documented in the CodeBuild documentation: https://docs.aws.amazon.com/codebuild/latest/userguide/auth-and-access-control-iam-identity-based-access-control.html#customer-managed-policies-example-create-vpc-network-interface I agree that the dry-run behaviour may have changed, but it does not change the fact that you need the 'DeleteNetworkInterface' permission every time your project uses a VPC configuration.
I have a CodeBuild service that gets this errorUNAUTHORIZED_OPERATION_DELETE_NETWORK_INTERFACE: The service role is not authorized to perform ec2:DeleteNetworkInterfaceThe service role that I am using has the necessary permissions forec2:DeleteNetworkInterface, but it is blocked by a global deny policy - which has been fine until recently because previously CodeBuild has been runningDeleteNetworkInterfacewith the--dry-runflag. It is just checking that I have the permissions instead of actually executing it. And this is the desired behaviour because it shouldn't delete any network interfaces. This has been working for months.However, right now it is failing because the--dry-runflag is no longer set. I'm really stumped as to why, because the pipeline hasn't been updated and it was working fine up until now.We've also detected these differences between working vs failed sequences of commands:** Working sequence: "DescribeVpcs" is presented DescribeSubnets DescribeVpcs DescribeNetworkInterfaces DeleteNetworkInterface (Client.DryRunOperation) ** Failed sequence: DescribeVpcs is missed DescribeSubnets DescribeNetworkInterfaces DeleteNetworkInterface (Client.UnauthorizedOperation)I've checked that my service role has all the above permissions.Could someone point me to a possible cause for this? I'd really appreciate it. Thank you.
AWS CodeBuild failing due to UNAUTHORIZED_OPERATION_DELETE_NETWORK_INTERFACE error
To stick with that you're already doing, you could run the AWS CLI from within your userdata script:"UserData": { "Fn::Base64": { "Fn::Join": [ "\n", [ "#!/bin/bash", "yum update -y", "yum install -y httpd24 php56", "service httpd start", "chkconfig httpd on", "groupadd DMO", "usermod -a -G DMO ec2-user", "chgrp -R DMO /var/www", "chmod 2775 /var/www", "aws s3 cp s3://MYBUCKET/MYFILE.zip /tmp", "unzip -d /var/www /tmp/MYFILE.zip", "rm /tmp/MYFILE.zip", "find /var/www -type d -exec chmod 2775 {} +", "find /var/www -type f -exec chmod 0664 {} +" ] ] } }In order to do this, you EC2 instance profile must grant permission to read the file from S3.An alternative is to useAWS::CloudFormation::Init: it's a predefined metadata key that you can attach to either anEC2::InstanceorAutoScaling::LaunchConfigurationresource, which allows you to configure packages, services, and individual files (including retrieving and unzipping a file from S3).There's a tutorialhere
I've created a CloudFormation template that launches an AutoScaling group. During the launch, a policy allowings3:GetObjectaccess is attached to each EC2 instance. After this, I use User Data to install an Apache web server and PHP, and then change the settings for the relevant folders. I then need to copy multiple files from an S3 bucket (which has no public access) to the /var/www/html folder in each instance, but I can't work out how to do so without reverting to manually copying or syncing the files with the CLI after the CloudFormation stack has completed - this has to be an entirely automated process.The user data in the template is as follows:"UserData": { "Fn::Base64": { "Fn::Join": [ "\n", [ "#!/bin/bash", "yum update -y", "yum install -y httpd24 php56", "service httpd start", "chkconfig httpd on", "groupadd DMO", "usermod -a -G DMO ec2-user", "chgrp -R DMO /var/www", "chmod 2775 /var/www", "find /var/www -type d -exec chmod 2775 {} +", "find /var/www -type f -exec chmod 0664 {} +" ] ] } }
How do I copy data from AWS S3 to EC2 in a CloudFormation template?
If you want to use the Cron object you have to import it from the chalice package, and then each value is a positional parameter to the Cron object:
from chalice import Chalice, Cron
app = Chalice(app_name='sched')
@app.schedule(Cron(0, 0, '*', '*', '?', '*'))
def my_schedule():
    return {'hello': 'world'}
Here are the docs for Cron with more info. Or alternatively use this syntax, which works without the extra import:
@app.schedule('cron(0 0 * * ? *)')
def dataRefresh(event):
    print(event.to_dict())
I am trying to follow the documentation from https://chalice.readthedocs.io/en/latest/topics/events.html I tried this:
@app.schedule('0 0 * * ? *')
def dataRefresh(event):
    print(event.to_dict())
and got this error: botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the PutRule operation: Parameter ScheduleExpression is not valid. And so I tried this:
@app.schedule(Cron('0 0 * * ? *'))
def dataRefresh(event):
    print(event.to_dict())
and got this other error: NameError: name 'Cron' is not defined Nothing works... what's the correct syntax?
What is the chalice @app.schedule syntax for cron events?
Here is what I gathered a while back from ACloudGuru cert training; these may change or be removed in the future as AWS changes them. Probably a few more that I'm missing.
D for Density
R for RAM
M for Main choice for general purpose apps
C for Compute
G for Graphics
I for IOPS
F for FPGA
T for cheap general purpose (think t2.micro)
P for graphics (think pics)
X for eXtreme memory
H for High disk throughput
A for ARM-based processor instances
You will find these prefixes inaws documentationover various instances types: 'a', 'm', 't', 'r', 'c', 'u', 'x', 'd', 'i', 'f', 'g'.I canonlyassumeC stands for Compute R stands for RAM G stands for GPU or Graphics I stands for I/O M stands for medium??I wonder if other prefixes have some kind of meaning too.
Is there a meaning of in names of AWS instance type prefixes? or it just random?
API Gateway is not intended to be a data transfer gateway, but a lightweight API definition layer. The most suitable approach is to generate a temporary pre-signed upload URL and redirect (30X) requests there. API Gateway should define an endpoint calling a Lambda function which generates a pre-signed S3 URL and redirects the POST request there (after the user's authentication, of course). Please refer to an example of an app with API Gateway and pre-signed S3 URLs to upload files, and the API documentation for generating pre-signed S3 URLs in Python, the AWS CLI and even Go.
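A minimal boto3 sketch of generating such a pre-signed upload URL from the Lambda behind API Gateway (bucket and key are hypothetical placeholders):

```python
import boto3

s3 = boto3.client("s3")

# URL the client can use to upload directly to S3, bypassing API Gateway's payload limit
upload_url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "my-upload-bucket", "Key": "uploads/report.pdf"},
    ExpiresIn=900,  # valid for 15 minutes
)

# The caller then performs an HTTP PUT to upload_url with the file as the request body.
print(upload_url)
```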
I'm working on the application that will receive files from users and then upload to Amazon S3. The application is accessed using API Gateway. The service API Gateway has limits for payload size for both WebSocket and REST APIs. Is there any way to access my service from the Internet through API Gateway?
Is there any way to to send large file through API Gateway?
It largely depends on scale. If you'll only have a few scheduled at any point in time then I'd use the CloudWatch Events approach. It's very low overhead and doesn't involve running code that does nothing. If you expect a LOT of schedules then the DynamoDB approach is very possibly the best approach. Run the Lambda on a fixed schedule and see which records have not yet been run and are past/equal to the current time. In this model you'll want to delete the records that you've already processed (or mark them in some way) so that you don't process them again. Don't rely on the schedule running at certain intervals and checking for records between the last time and the current time unless you are recording when the last time was (i.e. don't assume you ran a minute ago because you scheduled it to run every minute). Step Functions could work if the time isn't too far out. You can include a delay in the step that causes it to just sit and wait. The delays in Step Functions are just that, delays, not scheduled times, so you'd have to figure out that delay yourself, and hope it fires close enough to the time you expect it. This one isn't a bad option for mid to low volume. Edit: Step Functions now include a wait-until option on wait states. This is a really good option for what you are describing.
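For the CloudWatch Events approach, a one-shot schedule can be expressed as a cron with an explicit year; a hedged boto3 sketch (rule name, Lambda ARN and date are hypothetical, the Lambda still needs a resource policy allowing events.amazonaws.com to invoke it, and the rule should be deleted after it fires):

```python
import boto3

events = boto3.client("events")
lambda_arn = "arn:aws:lambda:us-east-1:123456789012:function:send-message"  # hypothetical

# cron(minutes hours day-of-month month day-of-week year): fires at 2020-11-05 09:30 UTC
events.put_rule(
    Name="send-message-user-42",
    ScheduleExpression="cron(30 9 5 11 ? 2020)",
)
events.put_targets(
    Rule="send-message-user-42",
    Targets=[{"Id": "1", "Arn": lambda_arn, "Input": '{"userId": "42"}'}],
)
```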
I have to implement functionality that requires delayed sending of a message to a user once on a specific date, which can be anytime - from tomorrow till in a few months from now.All our code is so far implemented as lambda functions.I'm considering three options on how to implement this:Create an entry in DynamoDB with hash key being date and range key being unique ID. Schedule lambda to run once a day and pick up all entries/tasks scheduled for this day, send a message for each of them.Using SDK Create cloudwatch event rule with cron expression indicating single execution and make it invoke lambda function (target) with ID of user/message. The lambda would be invoked on a specific schedule with a specific user/message to be delivered.Create a step function instance and configure it to sleep & invoke step with logic to send a message when the right moment comes.Do you have perhaps any recommendation on what would be best practice to implement this kind of business requirement? Perhaps an entirely different approach?
Scheduling one time tasks with AWS
The biggest consideration when "archiving" the data is ensuring that it is in a useful format should you ever want it back again. Amazon RDS recently added the ability to export RDS snapshot data to Amazon S3. Thus, the flow could be:
1. Create a snapshot of the Amazon RDS database
2. Export the snapshot to Amazon S3 as a Parquet file (you can choose to export specific sets of databases, schemas, or tables)
3. Set the Storage Class on the exported file as desired (e.g. Glacier Deep Archive)
4. Delete the data from the source database (make sure you keep a Snapshot or test the Export before deleting the data!)
When you later wish to access the data:
1. Restore the data if necessary (based upon Storage Class)
2. Use Amazon Athena to query the data directly from Amazon S3
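Step 2 can also be driven from the API; a boto3 sketch of exporting a snapshot to S3, where every identifier, bucket, role and KMS key is a hypothetical placeholder:

```python
import boto3

rds = boto3.client("rds")

rds.start_export_task(
    ExportTaskIdentifier="orders-archive-2020-06",
    SourceArn="arn:aws:rds:us-east-1:123456789012:snapshot:orders-snap",  # snapshot to export
    S3BucketName="my-archive-bucket",
    IamRoleArn="arn:aws:iam::123456789012:role/rds-s3-export-role",       # role with write access to the bucket
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/my-export-key",      # required for the export
    ExportOnly=["mydb.orders"],                                           # optionally limit to specific tables
)
```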
I am looking for options to archive my old data from specific tables of an AWS RDS MySQL database. I came across AWS S3, AWS Glacier and copy the data to either one using some Pipelines or Buckets, but from what I understood they copy the data to vault or backups the data, but don't move them.Is there a proper option to archive the data by moving from RDS to S3 or Glacier or Deep Archive? i.e., deleting from the table in AWS RDS after creating an archive. What would be the best option for the archival process with my requirements and would it affect the replicas that already exist?
Archiving AWS RDS mysql Database
EMR Notebooks can only be created manually using the AWS EMR console. From the documentation (https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-managed-notebooks-create.html):You create an EMR notebook using the Amazon EMR console. Creating notebooks using the AWS CLI or the Amazon EMR API is not supported.Since there is no API for this I don't think there will be a way to create notebooks using CloudFormation or similar tools.
I am aware of AWS cloudformation EMR resource to create Clusters. But, I could not find any instructions about EMR notebooks. Is there a cloudformation resource for EMR notebooks or similar alternative?
EMR notebooks template for cloudformation
As per this, EXPLAIN statements are not supported in Athena as of now. If you want to do the same, you can do it with Presto on EMR by integrating it with the AWS Glue catalog.
Can I use 'explain' statement in AWS Athena? (For reviewing query's plan) I tried to use explain statement in Athena, but I met below error message.Your query has the following error(s):Queries of this type are not supported (Service: AmazonAthena; Status Code: 400; Error Code: InvalidRequestException
AWS Athena - explain query plan
There is no "official tool" to do this. It could be done by iterating through the existing parameters and creating them in the target.I found this tool that somebody has written:aws-ssm-copy · PyPI: Copy parameters from a AWS parameter store to anotherIt looks like it can copy between Regions and between AWS Accounts (by providing multiple Profiles).
Consider that I have got a AWS account that already has some parameter store data.Is there a way to migrate these data from this parameter store to another:parameter store?region?AWS account?I would prefer official tools to do this, but tools similar to dynamoDB dump are also welcome.
How to migrate parameter store data to other Region / AWS Account
Automatic key rotation does not create a new CMK, it just rotates the HSM backing key. The old key is only used for decryption, whereas the new backing key is used for encrypting and decrypting new objects. On the other hand, manual key rotation requires you to create a new CMK and update the key alias to point to the new CMK. That means you have to maintain more than one CMK as long as you have objects encrypted with the old one. When KMS encrypts an object, the generated ciphertext contains the HSM backing key identifier in cleartext. That is how KMS retrieves the key to decrypt encrypted messages. As a result, KMS can decrypt a message as long as the backing key stored in the ciphertext is not deleted. A backing key is only deleted if you delete a CMK. Coming back to your questions:
1. Yes, but you have to create a new CMK, as stated above.
2. You can just update the key alias to point to the new CMK instead. It becomes much easier to rotate a CMK, especially if it is used to encrypt multiple buckets.
3. You cannot decrypt the objects if you delete the CMK that is used to generate data keys. You should either re-encrypt all the objects using the new CMK or retain the old key.
Scenario - I created:
1. One S3 bucket
2. Two KMS keys
3. Enabled Default encryption on the S3 bucket, using KMS key #1
4. Uploaded a file to the bucket
5. Checked the object details; it showed Server-side encryption: AWS-KMS and the KMS key ID: ARN of KMS key #1
6. Changed the AWS S3 Default encryption and now chose KMS key #2
7. The old object still showed KMS key ID: ARN of KMS key #1
Questions -
1. Can the KMS key rotation be done before 1 year?
2. Is what I did the correct way to rotate an AWS KMS key? If not, what's the correct way?
3. What happens to the older objects if the key gets deleted?
How to do AWS S3 SSE KMS key rotation?
It depends on which construct you are using. For low-level constructs, so-called CFN resources, you can use the ref property. For high-level constructs, you should check the API for an xxx_id property. In the example below, the CFN resource uses the ref property, whereas the high-level VPC construct uses the vpc_id property.
my_vpc = _ec2.Vpc(...)
tgw = _ec2.CfnTransitGateway(...)
tgw_attachment = _ec2.CfnTransitGatewayAttachment(
    self,
    id="tgw-myvpc",
    transit_gateway_id=tgw.ref,
    vpc_id=my_vpc.vpc_id,
    ...
)
I am currently using aws-cdk to generate a cloudformation template and I would like to access a parameters defined withCfnParameter(self, id="Platform", type="String", default="My Platform")with a reference (Like!Ref Platformin a cloudformation template)Any of you know what is the equivalent of a Ref in the aws cdk.Here is the yaml equivalent of the parameter I defined aboveParameters: Platform: Type: String Default: "My Platform"
Equivalent of !Ref in aws cdk
Revoke the default privileges: ALTER DEFAULT PRIVILEGES FOR USER "ted.mosby" IN SCHEMA tv_shows REVOKE ALL ON TABLES FROM "ted.mosby"; You can use \ddp in psql to see if any default privileges are left.
I'm trying to delete a user from my database but I'm getting an error: user "ted.mosby" cannot be dropped because some objects depend on it. Detail: owner of default privileges on new relations belonging to user ted.mosby in schema tv_shows. How can I fix this error and remove the user from my database? I've changed the owner and already revoked all permissions from ted.mosby.
Redshift: cannot drop user owner default privileges
"Type the name of your domain in the Domain name box and choose Next. In this example, I type www.example.com. You must use a domain name that you control. Requesting certificates for domains that you don't control violates the AWS Service Terms." So, in short, you cannot use the LB DNS name, because you do not control the LB DNS name; it is controlled by AWS. easier-certificate-validation-using-dns-with-aws-certificate-manager Now, the question is how you will validate the DNS, as AWS ACM requires you to validate ownership of the DNS name. You may request a certificate, but you will have to validate it, and for validation you need to place a CNAME record in your DNS provider's settings or use email validation.
I have an application running on AWS ELB and want to set up an https listener. I tried to request an SSL certificate using AWS ACM but was unable to, because the ELB is using the default AWS DNS name. Is it possible to request ACM for a DNS name like the one below? abc-123455.us-east-2.elb.amazonaws.com
how to request ACM using AWS default DNS for ELB
Option 1: Within Terraform Enterprise, you can use Sentinel to enforce policies how a resource should look like. See the Hashicorp example for enforced tags:https://github.com/hashicorp/terraform-guides/blob/master/governance/second-generation/aws/enforce-mandatory-tags.sentinelOption 2: If you don't have Terraform Enterprise, create modules with parameters that are filling the tags within the module, and discourage usage of "plain" aws resources.Option 3: Make tag inspection part of your automated test suite (e.g. with terratest), and let tests fail when they do not have appropriate tags.
I was wondering if there is a good way to enforce the tagging of AWS resources for all developers. Or at least provide a predefined set of tags that are inserted automatically. The reason behind this is that some team members forget to tag their resources or using a different set of tags. Furthermore if you want to change the tags for future deployments you have to change it everywhere. So, my idea up to now is to create a map that includes all tags that should be set by default (project, version, cost allocation). Now everyone can use this default list and add further tags if needed for their resources. But there is no guarantee that everybody is using this map for default tagging. I don't know a way how I could achieve that but maybe someone has a good idea to do this...
Forced tagging for terraform resources in AWS
I think the best approach is to use the kubectl wait command: "Wait for a specific condition on one or many resources. The command takes multiple resources and waits until the specified condition is seen in the Status field of every given resource." It will only return when the Job is completed (or the timeout is reached): kubectl wait --for=condition=complete job/myjob --timeout=60s If you don't set a --timeout, the default wait is 30 seconds. Note: kubectl wait was introduced in Kubernetes v1.11.0. If you are using older versions, you can create some logic using kubectl get with --field-selector: kubectl get pod --field-selector=status.phase=Succeeded
This question already has answers here:Tell when Job is Complete(7 answers)Closed4 years ago.I'm looking for a way to wait for Job to finish execution Successfully once deployed.Job is being deployed from Azure DevOps though CD on K8S on AWS. It is running one time incremental database migrations usingFluent migrationseach time it's deployed. I need to readpod.status.phasefield.If field is "Succeeded", then CD will continue. If it's "Failed", CD stops.Anyone have an idea how to achieve this?
Waiting for K8S Job to finish [duplicate]
#!/bin/bash
INSTANCE_ID=$(curl http://169.254.169.254/latest/meta-data/instance-id)
PRIVATE_IP=$(curl http://169.254.169.254/latest/meta-data/local-ipv4)
DOMAIN_NAME=$(aws route53 get-hosted-zone --id "<Hosted Zone ID >" --query 'HostedZone.Name' --output text | sed 's/.$//')
hostnamectl set-hostname hostname."${DOMAIN_NAME}"
# Third octet of the private IP decides which record to upsert
CN=$(echo $PRIVATE_IP | cut -d . -f 3)
echo $CN
a=5
if [ $CN == $a ]
then
  aws route53 change-resource-record-sets --hosted-zone-id "<Hosted Zone ID >" --change-batch '{"Changes": [{"Action": "UPSERT","ResourceRecordSet": {"Name": "'"Dns Name"'","Type": "A","TTL": 60,"ResourceRecords": [{"Value": "'"${PRIVATE_IP}"'"}]}}]}'
else
  aws route53 change-resource-record-sets --hosted-zone-id "<Hosted Zone ID >" --change-batch '{"Changes": [{"Action": "UPSERT","ResourceRecordSet": {"Name": "'"< Dns Name>"'","Type": "A","TTL": 60,"ResourceRecords": [{"Value": "'"${PRIVATE_IP}"'"}]}}]}'
fi
I am using Amazon EC2 Auto Scaling in my environment, whenever Auto Scaling triggers a new instance, I need to change the IP manually in Route 53. I want to automate this process.Tried usingLifecycle Hooksbut didn't see any update for Route 53.
Updating Route 53 automatically when Auto Scaling brings up new instance
You just need to use two single quotes: SELECT * FROM Test.testing WHERE "last" = 'O''Hara';
Sample data, stored in a file in S3. As you can see the format of my data is one json per line{"first": "John", "last": "Smith"} {"first": "Mary", "last": "O'Hara"} {"first": "Mary", "last": "Oats"}My ultimate objective is to query by the last name and using the like operator together with a user provided substring. So I go step by step from easy to difficult:This query works and returns all rows:select s.* from s3object sGood! Let's continue. The next query I tried works and returns, as expected, John Smithselect s.* from s3object s where s."last" = 'Smith'The next step is to try by a substring of the surname. Let's find all persons whose last name starts with an "O".select s.* from s3object s where s."last" like 'O%';This works and returns the two Marys in my dataset.The next step is the one that doesn't work. I want to find all users whose last name starts with an O and an apostrophe. This I can't make to work. I tried:select s.* from s3object s where s."last" like 'O'%' select s.* from s3object s where s."last" like 'O\'%' select s.* from s3object s where s."last" like "O'%"None of them works. How can I put a single quote (') inside a string literal in s3 select?
escape single quote in s3 select query
The quickest way would be to use the AWS CLI: aws glue get-job --job-name <value> where value is the specific job that you are trying to replicate. You can then alter the S3 path and JDBC connection info in the JSON that the above command returns. Also, you'll need to give it a new unique name. Once you've done that, you can pass that in to: aws glue create-job --cli-input-json <value> where value is the updated JSON that you are trying to create a new job from. See the AWS command line reference for more info on the glue command line.
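The same clone-and-tweak flow can be scripted with boto3; a hedged sketch where the job names, the --s3_source_path argument and the connection name are hypothetical, and where, depending on the job, you may need to drop or re-set other returned fields (e.g. capacity/worker settings) before calling create_job:

```python
import boto3

glue = boto3.client("glue")

job = glue.get_job(JobName="client-a-load")["Job"]  # existing job to clone

# Remove fields returned by get_job that create_job does not accept (or that we override)
for field in ("Name", "CreatedOn", "LastModifiedOn", "AllocatedCapacity", "MaxCapacity"):
    job.pop(field, None)

# Point the clone at a different source path and JDBC connection
job.setdefault("DefaultArguments", {})["--s3_source_path"] = "s3://my-bucket/client-b/"
job["Connections"] = {"Connections": ["client-b-jdbc"]}

glue.create_job(Name="client-b-load", **job)
```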
I have a large number of clients who supply data in the same format, and need them loading into identical tables in different databases. I have set up a job for them in Glue, but now I have to do the same thing another 20 timesIs there any way I can take an existing job and copy it, but with changes to the S3 filepath and the JDBC connection?I haven't been able to find much online regarding scripting in AWS Glue. Would this be achievable through the AWS command line interface?
Is there a simple way to clone a glue job, but change the database connections?
From Docker you can send the KILL signal "SIGPWR", i.e. "Power failure (System V)": docker kill --signal="SIGPWR" And from Kubernetes: kubectl exec <pod> -- /killme.sh with a script killme.sh along these lines:
#!/bin/bash
# Define process to find
kiperf=$(pidof iperf)
# Kill all iperf processes (or your command line)
kill -30 $kiperf
Signal 30 you can find here.
I have my rook-ceph cluster running on AWS. It's loaded up with data. Is there any way to simulate a POWER FAILURE so that I can test the behaviour of my cluster?
How to simulate Power Failure In Kubernetes
You don't need to do anything. That is exactly how it works. Almost.

Your configuration should be:

- An Amazon VPC with at least one Public Subnet and at least one Private Subnet
- An Internet Gateway attached to the Public Subnets (which is what makes them "Public")
- A NAT Gateway in a Public Subnet with an Elastic IP address
- A Route Table configuration on the Private Subnets that sends Internet-bound traffic (0.0.0.0/0) to the NAT Gateway

Then, any traffic that comes out of instances in a Private Subnet will be routed to the NAT Gateway, which forwards the traffic to the Internet. It will come from the Elastic IP address. Return traffic will flow back through the NAT Gateway to the instance in the Private Subnet.

Incoming traffic to the VPC (that is not a response to traffic from the NAT Gateway) will not be able to reach the instances in the Private Subnet, because they are not directly connected to the Internet. This is intentional.

Please note that the Internet Gateway attaches the VPC to the Internet and is used for both inbound and outbound traffic. Just think of it as plugging a cable into the Internet.
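For the route-table piece specifically, a minimal boto3 sketch (the route table and NAT Gateway IDs are hypothetical placeholders, not values from the question):

import boto3

ec2 = boto3.client("ec2")

# Send all Internet-bound traffic from the private subnet's route table
# through the NAT Gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",    # placeholder
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId="nat-0123456789abcdef0",    # placeholder
)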
I need a static IP to hit a URL. For that, I am planning to put a NAT Gateway with a static IP in front of the instances for outbound traffic. So the question here is how to use the IGW for inbound traffic and the NAT for outbound traffic.
How to use NAT Gateway for only outbound traffic
If I understand correctly, you are trying to call an API Gateway endpoint that is behind the built-in Cognito Authoriser.

I think you've misunderstood how you call a Cognito-authorised API Gateway:

1. Authorise against Cognito to get an id_token
2. Call API Gateway with the Authorization header set to the id_token
3. Renew the id_token every hour

By enabling ADMIN_NO_SRP_AUTH you're allowing the first step (sign-in to Cognito) to be simplified so that you can more easily do it manually. (If you hadn't, then you would need to do SRP calculations.)

One way to get the id_token is to use the aws cli (further ways are shown in the documentation):

aws cognito-idp admin-initiate-auth --user-pool-id='[USER_POOL_ID]' --client-id='[CLIENT_ID]' --auth-flow=ADMIN_NO_SRP_AUTH --auth-parameters="USERNAME=[USERNAME],PASSWORD=[PASSWORD]"

You can then use the result (AuthenticationResult.IdToken) as the Authorization header in Postman (no need for the AWS v4 signature; that is only for IAM authentication).

n.b. a much fuller explanation with images can be found here.
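If you prefer to script the token retrieval, a hedged boto3 sketch (the pool ID, client ID, credentials and API URL are all placeholders, and it assumes the third-party requests package is available):

import boto3
import requests

idp = boto3.client("cognito-idp")

resp = idp.admin_initiate_auth(
    UserPoolId="us-east-1_EXAMPLE",       # placeholder
    ClientId="EXAMPLECLIENTID",           # placeholder
    AuthFlow="ADMIN_NO_SRP_AUTH",
    AuthParameters={"USERNAME": "user@example.com", "PASSWORD": "secret"},  # placeholders
)

id_token = resp["AuthenticationResult"]["IdToken"]

# Call the API Gateway stage with the id_token as the Authorization header.
r = requests.get(
    "https://abc123.execute-api.us-east-1.amazonaws.com/prod/resource",  # placeholder
    headers={"Authorization": id_token},
)
print(r.status_code, r.text)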
I've managed to successfully log in to the API Gateway I've made via my iOS device and Cognito. The problem is I'd like to use Postman to test the API calls and then implement them on the phone. Currently, Postman cannot authenticate (despite AWS saying it can). No matter what I do I get a 401 error (visible in the screenshots).

What I've tried: I downloaded the Postman collection from AWS API Gateway, then imported it into Postman and switched the authentication to "AWS Signature". Here is a screenshot of the Postman-generated header info.
Cannot test Cognito authenticated API Gateway call in Postman (its an ADMIN_NO_SRP_AUTH pool)
Okay, so we found the answer! It took so long to find that I'm going to save you the trouble if you happen to have the same problem/configuration as us.

1. You need port 53 outbound in the NACL and the security group. That's the way Kubernetes checks DNS. (DNS problem on AWS EKS when running in private subnets)
2. In the connection string's Data Source, we previously had "Data Source=DNSName;etc". We changed it to "Data Source=tcp:DNSName".

That was it. Two days for that. :D

EDIT: I might add that I faced the same problem in another environment/AWS account (port 53 was the answer, but slightly differently): Pods in EKS: can't resolve DNS (but can ping IP)
Once again I shall require help from Stack Overflow :)

We have a fresh public-access-endpoint EKS cluster, with an app inside the nodes that returns something from RDS. The VPC of the cluster is peered with the private VPC that holds the RDS. We also have Accepter DNS resolution enabled; the Accepter is the RDS VPC.

When SSH-ing into my worker nodes and telnetting the RDS, the name resolves. Initially, the connection string was established with the endpoint, and it didn't reach the database. I changed it to the IP of the RDS and it worked.

When using the DNS name, it 1) takes lots of time to load, and 2) returns: "Unable to retrieve Error: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached."

Therefore I was wondering if any of you have faced this issue and how you solved it? There seems to be a lot of fun regarding DNS resolution with EKS, and I'm not exactly sure why the instance can resolve but not the pod.

Thank you for your help!
Node in EKS doesn't resolve DNS names of RDS (IP working)
There is now the concept of Message Groups for FIFO (First-In, First-Out) SQS queues. This allows you to send a message to the SQS queue with a particular group ID, and the queue will keep the messages within each group in the order it receives them.

Consuming code calls ReceiveMessage (Node.js link provided) against the queue; by requesting the "MessageGroupId" attribute it can see which group each returned message belongs to, and messages that share a group ID are delivered in the order they were sent.

A basic high-level concept would be to have the code always check the high-priority message group until it determines it is empty, and then move to the medium (and then the low) priority queues to process work there until a new entry in a higher-priority queue is found.
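As a hedged illustration (the queue URL is a placeholder, and the queue is assumed to be a FIFO queue with content-based deduplication enabled so that MessageDeduplicationId can be omitted):

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/work.fifo"  # placeholder

# Tag each message with a priority group; FIFO queues preserve order per group.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"task": "do-something"}',
    MessageGroupId="high",  # or "medium" / "low"
)

# Consumers can ask for the MessageGroupId attribute on received messages.
resp = sqs.receive_message(
    QueueUrl=queue_url,
    AttributeNames=["MessageGroupId"],
    MaxNumberOfMessages=10,
)
for msg in resp.get("Messages", []):
    print(msg["Attributes"]["MessageGroupId"], msg["Body"])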
I have a messaging use-case question.

As of now, we have a queue in AWS SQS, say origQueue, and out-of-the-box Lambda-based message consumption on that queue.

Now, to cater to one particular feature of priority-based (High, Medium, Low) message consumption (on the basis of a 'priority' number set within the message), I am thinking of having a set of 3 queues, where each queue pertains to a different priority level. On the highest-priority queue, the out-of-the-box Lambda-based message consumption would continue to happen. A batch process would keep running at an interval of 5 minutes to promote some messages from the mid- and low-priority queues. The logic of this batch process has not been thought through yet, but it could be anything, say pick up 10 medium-priority messages and 5 low-priority messages, both aged more than 1 hour, and promote them to the high-priority queue so that they can be consumed by the above-mentioned out-of-the-box Lambda-based message consumption.

So before going that way, I just wanted to gather other potential ideas. Is there any out-of-the-box AWS feature or any pattern to solve this priority-based message consumption problem?

P.S. Another (not chosen) approach I came up with was to 'insert' the items into the queue considering the priority, which would keep the queue always ordered by priority. But this 'run-time dynamic insertion' does not seem feasible as the stream of incoming messages is always on.
Priority queue with AWS SQS and Lambda as consumer
You do not need an EC2 instance to connect to the Redis ElastiCache cluster.

Yes, you can connect to ElastiCache using Lambda. There is a not very well documented "gotcha": make sure that your Lambda is running in the same VPC as the ElastiCache cluster AND keep your Lambda warm; Lambdas running inside VPCs can have significant cold start times. Also, don't forget to set your security groups to allow traffic from Lambda to the cluster.

You can read more about connecting to ElastiCache from Lambda here. The tutorial connects to Memcached, however the same process applies to Redis: https://docs.aws.amazon.com/lambda/latest/dg/vpc-ec.html
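A minimal sketch of reading and writing from Python, assuming the redis-py package is bundled with the function and that the endpoint shown is a placeholder for your cluster's primary endpoint:

import redis

# Create the client at module level so warm invocations reuse the connection.
r = redis.Redis(
    host="my-cluster.abc123.ng.0001.use1.cache.amazonaws.com",  # placeholder endpoint
    port=6379,
)

def lambda_handler(event, context):
    r.set("greeting", "hello")
    value = r.get("greeting")
    return {"greeting": value.decode()}

The same client code works from a local script or an EC2 instance, as long as the machine running it has network access to the cluster (same VPC or a VPN/peering path) and the security group allows port 6379.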
I have created a Redis ElastiCache cluster in AWS and would like to read and write data to the cluster using a Python script which will eventually become a Lambda function. I've read that the typical way to connect to the cluster is via EC2. I have set up an EC2 instance and connected to it successfully using SSH and a key pair.

My questions are:

1. Do I need an EC2 instance, or can I connect directly to the cluster using Python?
2. If I need to connect via EC2, what is the best way to do it so I can read and write the data from the Redis cluster, and are there any examples? At the moment I have to go to EC2 and then Redis in an SSH session. I was thinking I would have to run the same commands in Python, but I'm not sure how I would execute a Redis command through an EC2 connection in Python.

Thanks for any help.
Connect to AWS Elasticache Redis cluster using Python
It's currently not possible to import a DynamoDB table with the AWS CDK (see "Importing a DynamoDB table").

Still, you can reach your goal by using the EventSourceMapping class from @aws-cdk/aws-lambda directly:

import iam = require('@aws-cdk/aws-iam');
import lambda = require('@aws-cdk/aws-lambda');

const fn = new lambda.Function(...);

new lambda.EventSourceMapping(this, 'DynamoDBEventSource', {
  target: fn,
  batchSize: ...,
  eventSourceArn: <your stream arn>,
  startingPosition: lambda.StartingPosition.TrimHorizon
});

fn.addToRolePolicy(
  new iam.PolicyStatement()
    .addActions('dynamodb:DescribeStream', 'dynamodb:GetRecords', 'dynamodb:GetShardIterator', 'dynamodb:ListStreams')
    .addResource('<your stream arn>/*')
);
I have a Lambda function which reads from a DynamoDB stream. I have the DynamoDB stream ARN exported from another stack in the same AWS account. Now, while adding the event source in Lambda, it asks for a Table construct:

const function = new lambda.Function(...);
function.addEventSource(new DynamoEventSource(table, {
  startingPosition: lambda.StartingPosition.TrimHorizon
}));

Ref: https://awslabs.github.io/aws-cdk/refs/_aws-cdk_aws-lambda-event-sources.html#dynamodb-streams

But I have the stream ARN. Is there any way I can make use of this to add the event source, or do I have to export the table itself?
Adding eventSource to Lambda by ARN in CDK
This error message almost always means the request did arrive at CloudFront, but that CloudFront does not recognize the hostname that was contained in the HTTP headers. This suggests that you overlooked the need to set the Alternate Domain Name in your CloudFront distribution.

"If you want to use your own domain name, such as www.example.com, instead of the cloudfront.net domain name that CloudFront assigned to your distribution, you can add an alternate domain name to your distribution for www.example.com."

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/CNAMEs.html

By default, Cloudflare doesn't change the Host header the browser sends, so CloudFront sees your custom domain name in the incoming request and doesn't know what to do next, because no configured distribution has this name configured as an Alternate Domain Name, so it returns this error.
I'm using Cloudflare for my DNS and have SSL set to Full. I'm trying to set up a subdomain, https://humlor.myrenas.se, and point it to an AWS S3 bucket. I have set up CloudFront with a default CloudFront certificate (*.cloudfront.net) and the site is available at https://d2ufhnw2kk1vh9.cloudfront.net/ (without styling, because of some absolute paths to the CSS).

I have then created a CNAME record in Cloudflare: CNAME humlor is an alias of d2ufhnw2kk1vh9.cloudfront.net

But https://humlor.myrenas.se/ gives: "403 ERROR The request could not be satisfied. Bad request."

Do I need another certificate in CloudFront? Or what is missing?
Cloudflare HTTPS subdomain to Cloudfront/S3-bucket gives 403
"The Amazon SQS queue must be in the same region as your Amazon S3 bucket."

https://docs.aws.amazon.com/AmazonS3/latest/user-guide/setup-event-notification-destination.html

It isn't a supported configuration for S3 to reach across a regional boundary to send a notification to SQS. The reason why is not specifically stated in the documentation.

But there is a workaround.

An S3 bucket in us-west-1 can send an event notification to an SNS topic in us-west-1, and an SQS queue in us-east-1 can subscribe to an SNS topic in us-west-1... so S3 (us-west-1) → SNS (us-west-1) → SQS (us-east-1) is the solution here. After subscribing the queue to the topic, you may want to enable the "raw message delivery" option on the subscription, otherwise the message format will differ from what you expect, because otherwise SNS will add an outer wrapper to the original event notification payload.
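A hedged boto3 sketch of the SNS-to-SQS leg (the topic and queue ARNs are placeholders, and the queue's access policy must also allow the topic to send to it, which is not shown here):

import boto3

# The topic lives in us-west-1, next to the bucket.
sns = boto3.client("sns", region_name="us-west-1")

sns.subscribe(
    TopicArn="arn:aws:sns:us-west-1:123456789012:s3-events",        # placeholder
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:123456789012:s3-events-queue",  # placeholder
    Attributes={"RawMessageDelivery": "true"},
)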
I have an SQS queue configured in us-east-1 and an S3 bucket in the US West (N. California) region. Is there any way that I can configure the SQS queue in the other region to be notified at the time of an S3 event?

I am able to set up the event notification when:

- S3 - same region - same account; SQS - same region - same account
- S3 - same region - different account; SQS - same region - in another account

NOT WORKING:

- S3 - different region; SQS - different region

Can someone help me, please?
S3 Notifications invoke sqs topic in other region
Almost a year late, but I figured out a way to do this. You can't do it from the console itself, but you can get it to work via the CLI.

Most of the steps are covered in this guide, but make it a WebSocket API instead of a REST API.

You'll notice that it isn't possible to specify the HTTP header, nor is it possible to specify the mapping template. For some reason, this isn't available on the console. You'll have to do this via the CLI instead. Save the following as a JSON file:

{
    "PassthroughBehavior": "NEVER",
    "RequestParameters": {
        "integration.request.header.Content-Type": "'application/x-www-form-urlencoded'"
    },
    "RequestTemplates": {
        "application/json": "Action=SendMessage&MessageBody=$util.urlEncode($input.body)"
    }
}

And update the integration like so:

aws apigatewayv2 update-integration \
    --api-id API_ID \
    --integration-id INTEGRATION_ID \
    --cli-input-json file://update.json

You'll see the API ID on the console on the API overview, but the integration ID you'll have to find via the CLI, like so:

aws apigatewayv2 get-integrations --api-id API_ID

This results in the body itself being sent as plain text to the SQS queue.

Note: You can forego all of this if you create the API with CloudFormation/SAM, as that enables you to set RequestParameters and RequestTemplates directly.
I am trying to integrate an API Gateway WebSocket route with SQS. I have configured the SQS integration with the properties below:

AWS Region: ap-southeast-1
AWS Service: SQS
HTTP Method: POST
Path override: 111111110111/my-queue

Configured the request template as:

"Action=SendMessage&MessageBody=$util.urlEncode($input.body)##set($context.requestOverride.header.Content-Type="application/x-www-form-urlencoded")##"

When I try to send the data to SQS it fails with the error below:

(VK1mEHZSyQ0FlZg=) Endpoint request body after transformations: Action=SendMessage&MessageBody=foobar
(VK1mEHZSyQ0FlZg=) Sending request to https://sqs.ap-southeast-1.amazonaws.com/111111110111/my-queue
(VK1mEHZSyQ0FlZg=) Received response. Integration latency: 16 ms
(VK1mEHZSyQ0FlZg=) Endpoint response body before transformations: Unable to determine service/operation name to be authorized
apigateway websocket sqs integration
Another alternative is to use AWS CloudFormation. You can define all the AWS resources you want to create (not only Glue jobs) in a template file and then update the stack whenever you need, from the AWS Console or using the CLI.

A template for a Glue job would look like this:

MyJob:
  Type: AWS::Glue::Job
  Properties:
    Command:
      Name: glueetl
      ScriptLocation: "s3://aws-glue-scripts//your-script-file.py"
    DefaultArguments:
      "--job-bookmark-option": "job-bookmark-enable"
    ExecutionProperty:
      MaxConcurrentRuns: 2
    MaxRetries: 0
    Name: cf-job1
    Role: !Ref MyJobRole # reference to a Role resource which is not presented here
I have a PySpark script which I can run in AWS Glue. But every time, I am creating the job from the UI and copying my code into the job. Is there any way I can automatically create a job from my file in an S3 bucket? (I have all the libraries and the Glue context which will be used while running.)
AWS Glue automatic job creation
The configured provisioned capacity is per second, while the data you see in CloudWatch is per minute. So your configured 5 WCU per second translate to 300 WCU per minute (5 WCU * 60 seconds), which is well above the consumed 22 WCU per minute.

That should already answer your question, but to elaborate a bit on some details:

A single write of 7 KB with a configured amount of 5 WCU would in theory never succeed and would cause throttling, as 7 KB requires 7 WCU to write while you only have 5 WCU configured (and we can safely assume that your write occurs within one second). Fortunately the DynamoDB engineers thought about that and implemented burst capacity. While you're not using your provisioned capacity, you save it up for up to 5 minutes and can spend it when you need more than the provisioned capacity. That's something to keep in mind when increasing the utilization of your capacity.
Input data

DynamoDB free tier provides:

- 25 GB of storage
- 25 units of read capacity
- 25 units of write capacity

Capacity units (SC/EC - strongly/eventually consistent):

- 1 RCU = 1 SC read of a 4 kB item/second
- 1 RCU = 2 EC reads of a 4 kB item/second
- 1 WCU = 1 write of a 1 kB item/second

My application:

- one DynamoDB table with 5 RCU, 5 WCU
- one Lambda, which runs every 1 minute, writes 3 items of ~8 kB each to DynamoDB, and executes in <1 second

The application works OK, no throttling so far.

CloudWatch

In my CloudWatch there are some charts (ignore the part after 7:00):

- the value on the first chart is 22 WCU
- on the second chart it is 110 WCU - actually I figured it out - this chart's resolution is 5 min, and 5*22=110 (leaving it here in case my future self gets confused)

Questions

We have 3 writes of ~8 kB items/second - that's ~24 WCU. That is consistent with what we see in CloudWatch (22 WCU). But the table is configured to have only 5 WCU. I've read some other questions and, as far as I understand, I'm safe from paying extra if the sum of WCUs in my table configurations is below 25.

Am I overusing the write capacity for my table? Should I expect throttling or extra charges? As far as I can tell my usage is still within the free tier limits, but it is close (22 of 25). Am I to be charged extra if my usage gets over 25 on those charts?
AWS DynamoDB. Am I overusing my write capacity?
The moment your code throws an unhandled/uncaught exception, Lambda fails. If you have the max receive count set to 1, the message will be sent to the DLQ after the first failure; it will not be retried. If your max receive count is set to 5, for example, the moment the Lambda function fails, the message will be returned to the queue after the visibility timeout has expired.

The reason for this behaviour is that you are giving Lambda permission to poll the queue on your behalf. If it gets a message, it invokes a function and gives you a single opportunity to process that message. If you fail, the message returns to the queue and Lambda continues polling the queue on your behalf; it does not care whether the next message is the same as the failed message or a brand new message.

Here is a great blog post which helped me understand how these triggers work.
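For reference, a hedged boto3 sketch of how the redrive policy (including maxReceiveCount) and the visibility timeout are attached to the source queue; the queue URL and DLQ ARN are placeholders, not values from the question:

import json
import boto3

sqs = boto3.client("sqs")

sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",  # placeholder
    Attributes={
        "VisibilityTimeout": "900",  # 15 minutes, matching the question's setup
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:my-dlq",  # placeholder
            "maxReceiveCount": "1",
        }),
    },
)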
The documentation states that a Lambda function can fail for any of the following reasons:

- The function times out while trying to reach an endpoint.
- The function fails to successfully parse input data.
- The function experiences resource constraints, such as out-of-memory errors or other timeouts.

For my case, I'm using a C# Lambda with the SQS integration:

"If the invocation fails or times out, every message in the batch will be returned to the queue, and each will be available for processing once the Visibility Timeout period expires."

My question: what happens if, using an SQS Lambda integration (.NET):

- my function throws an Exception,
- my SQS visibility timer is set to 15 minutes, the max receive count is 1, and a DLQ is set up?

Will the function retry? Will the message be put into the DLQ when exceptions are thrown after all retries?
SQS Lambda Integration - what happens when an Exception is thrown
A custom header can be passed through the following structure:

request.origin.custom.customHeaders

Ref: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-event-structure.html#lambda-event-structure-request

So the code should look like this:

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;
    request.origin.custom.customHeaders['custom_header'] = [{
        key: 'custom_header',
        value: 'custom_header'
    }];
    return callback(null, request);
}
I have a CloudFront distribution with a custom origin. I want to use a Lambda@Edge Origin Request to modify and add some extra headers to be forwarded to my origin server.

Below is my Lambda function. The custom_header is visible in the CloudWatch logs for my Lambda, but doesn't show up in my custom origin's request headers :(

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;
    headers['custom_header'] = [{
        key: 'custom_header',
        value: 'custom_header'
    }];
    return callback(null, request);
}

I expect custom_header to be visible in my Node.js route under req.headers.
How to add add custom header in Cloudfront Lambda@Edge Origin Request?
iOS 12.1.1 introduced Apple's new Certificate Transparency policy. From Apple's release notes:

"iOS 12.1.1 requires that publicly-trusted Transport Layer Security (TLS) server authentication certificates issued after October 15, 2018 meet the Certificate Transparency policy to be evaluated as trusted on Apple platforms."

This policy is becoming a widespread standard, which Google already enforces in its Chrome browser. Amazon knew this was coming and, in response to these new policies, released updates to their MQTT backend (AWS IoT) to include appropriate certification on a new endpoint. See https://aws.amazon.com/blogs/iot/aws-iot-core-ats-endpoints/:

"You must explicitly request an Amazon Trust Services endpoint for each region in your account. Any existing customer endpoint you have is most likely a VeriSign endpoint. If your endpoint has '-ats' at the end of the first subdomain, then it is an Amazon Trust Services endpoint. For example, 'asdfasdf-ats.iot.us-east-2.amazonaws.com' is an ATS endpoint."

In short, for my iOS app, we were using our AWS-provided MQTT endpoint asdfasdf.iot.us-east-2.amazonaws.com (just an example), without the -ats. I updated the endpoint to asdfasdf-ats.iot.us-east-2.amazonaws.com and we were able to accomplish our SSL handshake.

I hope this helps with your issue! Good luck!
When connecting to AWS IoT using the wss protocol on iOS version 12.1.1, we were able to connect to IoT successfully, but immediately we could see an onError event being triggered from IoT, and then the connection gets closed. It tries to reconnect again but without any luck. The error we are getting from IoT is "{IsTrusted : true}". We are not using any certificates, just a profile access key and secret key.

The same build is able to connect properly on iOS 12.0.1 and 12.1.

iOS version: 12.1.1 (not-working version)
AWS IoT SDK: 2.0.0
AWS IOT connection is getting closed on IPAD OS v12.1.1
AWS currently does not support a custom sender ID in some countries. To find the list of countries that are supported by AWS, click on the link below: AWS custom sender ID supported regions
I am using an AWS Cognito User Pool for user sign-up. I am using phone number as the attribute, and I have set up verification of the mobile number and enabled Multi-Factor Authentication.

I get messages from AWS, the number gets verified, and everything is working fine. But the sender of the message is "AXNOTICE". I need to change "AXNOTICE" to my business ID. I tried changing "Default sender ID" in the "Text messaging preferences" of the SNS Dashboard, but this didn't work.

Please let me know whether this is the correct place to change it, or whether I need to change it somewhere else. Any help is appreciated.
How to change the default sender id in AWS Cognito messages to verify mobile number?
The ${AWS::Region} substitution is not supported - only the function name can be substituted. See https://github.com/awslabs/serverless-application-model/issues/79
The following is the code I put for x-amazon-apigateway-integration; please let me know if I am missing something. Thanks.

x-amazon-apigateway-integration:
  httpMethod: post
  type: aws
  uri:
    Fn::Sub:
      - arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${FunctionArn}/invocations
      - { FunctionArn: !GetAtt PriceAPIFunction.Arn }
  responses:
    default:
      statusCode: '200'
"Unable to parse API definition because of a malformed integration at path /price. (Service: AmazonApiGateway; Status Code: 400;
To indicate an attribute of type SS (String Set) using the boto3 DynamoDB resource-level methods, you need to supply a set rather than a simple list. For example:

import boto3

res = boto3.resource('dynamodb', region_name=region)
table = res.Table(config[region]['table'])

sched = {
    "begintime": '09:00',
    "description": 'Hello there',
    "endtime": '14:00',
    "name": 'james',
    "type": "period",
    "weekdays": set(['mon', 'wed', 'fri'])
}

table.put_item(Item=sched)
I'm trying to create an item in AWS DynamoDB using boto3 and, regardless of what I try, I can't manage to get an item of type 'SS' created. Here's my code:

client = boto3.resource('dynamodb', region_name=region)
table = client.Table(config[region]['table'])

sched = {
    "begintime": begintime,
    "description": description,
    "endtime": endtime,
    "name": name,
    "type": "period",
    "weekdays": [weekdays]
}

table.put_item(Item=sched)

The other columns work fine, but regardless of what I try, weekdays always ends up as an 'S' type. For reference, this is what one of the other items from the same table looks like:

{'begintime': '09:00', 'endtime': '18:00', 'description': 'Office hours', 'weekdays': {'mon-fri'}, 'name': 'office-hours', 'type': 'period'}

Trying to convert this to a Python structure obviously fails, so I'm not sure how it's possible to insert a new item.
Creating a 'SS' item in DynamoDB using boto3
The answers by John and Stefan are both correct. There's no way to trigger a simple "roll this EC2 instance back to an earlier snapshot" feature on AWS.

There is a way to "roll back" an instance's filesystem to a snapshot by restoring the snapshot to a new EBS volume, detaching and deleting the old one, and attaching the new one. And, of course, AWS is eminently automatable. You could definitely write your own automation to make that happen.

Having said all of that, if you're trying to test instance creation scripts, I have to agree with John: tearing down and rebuilding the instance is the most reliable way to make sure you're testing it accurately, and shouldn't really be more costly than restoring to a snapshot.

The other path you might consider, particularly if you want the instance to start in a known state that doesn't match a particular predefined AMI, is to build an AMI of your own (e.g. with Packer) and use that as the basis for your test. Then instead of restoring to a snapshot, you're creating a new instance from an AMI you've prepared.
Is there a convenient way to roll back an EC2 instance to a previously saved snapshot, in the same manner that you can with VMware and other virtualisation platforms? In my investigations so far, it seems you have to deploy a new instance and select the snapshot as the starting volume.

I am doing a lot of testing with new EC2 instance initialisation scripts at present, and having to configure and deploy a new instance for every test is tedious and costly. If I could roll back to a snapshot of the initial state of the system quickly, this would save a lot of time and effort.
Revert Amazon EC2 instance to snapshot?
I think the best way to achieve this is to just convert your existing key with puttygen:

puttygen mykey.pem -o mykey.ppk

This resource explains exactly what you need.
I want to convert a .pem file to a .ppk file on macOS. The requirement comes because my clients use Windows, so I have to provide them a .ppk file, while I use a Mac.
How do I convert .pem file to .ppk file on mac?
It's not exactly what you want, but it looks like a VCS can fit your needs. You can use GitHub (if you already use it) or CodeCommit (free private repos). Details and additional ways, such as syncing the target dir with an S3 bucket: https://aws.amazon.com/blogs/machine-learning/how-to-use-common-workflows-on-amazon-sagemaker-notebook-instances/
We have a notebook instance within SageMaker which contains many Jupyter Python scripts. I'd like to write a program which downloads these various scripts each day (i.e. so that I can back them up). Unfortunately, I don't see any reference to this in the AWS CLI API. Is this achievable?
How do I download files within a Sagemaker notebook instance programatically?
One of the most common reasons to get "Internal server error" is that your Lambda function is either crashing or not returning what is expected by the triggering service. In this case I suspect a bit of both.

When you proxy through API Gateway, your event payload isn't just what you POSTed. You can find out more about the shape of events, including those of an API Gateway request, here: https://docs.aws.amazon.com/lambda/latest/dg/eventsources.html#eventsources-api-gateway-request

Your Lambda is crashing because the event you get from API Gateway cannot be cast into your type MyEvent struct, as it does not have a name property; in fact, the body of the request is actually in event.body as a string which has to be decoded.

A good guide to the events you are expected to handle and the responses API Gateway expects from Lambda can be found here: https://serverless.com/framework/docs/providers/aws/events/apigateway/
I'm using literally the example function from the Go docs:

package main

import (
    "context"
    "fmt"

    "github.com/aws/aws-lambda-go/lambda"
)

type MyEvent struct {
    Name string `json:"name"`
}

func HandleRequest(ctx context.Context, name MyEvent) (string, error) {
    return fmt.Sprintf("Hello %s!", name.Name), nil
}

func main() {
    lambda.Start(HandleRequest)
}

If I use the test event console and input { "name": "John" }, it works fine.

But then I go to Add Triggers, click API Gateway, then click Create a new API, set Security to Open, leave everything else default, then click Add and then Save.

If I visit the URL it lists at the bottom as "API endpoint:", I get "Internal server error".

If I do

curl -XPOST -d "{ \"name\": \"Paul\" }" https://AWS-URL-ENDPOINT/amazonaws.com/default/mytestfunction

I get "Internal server error".

What am I doing wrong?
Setting up AWS Lambda with Go, why do I always get "Internal server error" with this simple function?
I am not aware of any third-party libraries that support the top cloud vendors with the quality that I would use in production.

I work with AWS, Google, Alibaba and Azure. Their feature sets are very similar, but also different enough that you really need to pay attention to the little details. This is very true when it comes to security.

I do not recommend working with a bunch of cloud vendors. Unless you have a really large infrastructure that requires cross-vendor support, stick with one cloud and know it very well. If you do have a large infrastructure, then have an expert for each cloud vendor. The cloud vendors are moving so fast with new products, services and features that it really takes time to become an expert with just one vendor, let alone three or four.
We currently have services distributed across the big 3 (e.g. S3 on AWS, VMs on Azure, Functions on GCloud, etc.), and accessing these services with their separate APIs is becoming unwieldy. They're all different and the documentation is hit or miss. I'm looking for a wrapper (Node.js or Python) to control all three APIs from one place.

For example, I want to write something like .create("vm", "azure") to create a VM, or .list("all") to list everything I have running on all 3.

Googling around, I couldn't find anything that does this except for some rogue GitHub repositories. Does anyone know of any solutions, open source or otherwise, that do this?
Is there a wrapper for AWS / Azure / Gcloud APIs?
First, according to the RDS FAQs, there should be no downtime at all as long as you are only increasing the storage size and not upgrading the instance tier:

"Q: Will my DB instance remain available during scaling? The storage capacity allocated to your DB Instance can be increased while maintaining DB Instance availability."

Second, according to the RDS documentation:

"Baseline I/O performance for General Purpose SSD storage is 3 IOPS for each GiB, which means that larger volumes have better performance. ... Volumes below 1 TiB in size also have the ability to burst to 3,000 IOPS for extended periods of time (burst is not relevant for volumes above 1 TiB). Instance I/O credit balance determines burst performance."

I cannot say for certain why, but I guess that when RDS increases the disk size, it may defragment the data or rearrange data blocks, which causes heavy I/O. If your server is under heavy usage during the resizing, this may fully consume the I/O credits and result in less I/O and longer conversion times. However, given that you started with 200 GB, I suppose it should be fine.

Finally, I would suggest you use a Multi-AZ deployment if you are so worried about downtime or performance impact. During maintenance windows or snapshots, there will be a brief I/O suspension for a few seconds, which can be avoided with standby or read replicas.
We are currently working with a 200 GB database and we are running out of space, so we would like to increase the allocated storage. We are using General Purpose (SSD) storage and a MySQL 5.5.53 database (without Multi-AZ deployment).

If I go to the Amazon RDS menu and change the allocated storage to a bit more (from 200 to 500), I get the following "warnings":

- "Deplete the initial General Purpose (SSD) I/O credits, leading to longer conversion times": What does this mean?
- "Impact instance performance until operation completes": And this is the most important question for me. Can I resize the instance with 0 downtime? I mean, I don't care if the queries are a bit slower as long as they work while it's resizing, but what I don't want to do is stop all my production websites, resize the instance, and open them again (aka have downtime).

Thanks in advance.
Resize Amazon RDS storage
What you are trying to do (use a map element as a key attribute for an index) is not supported by DynamoDB.

"The index partition key and sort key (if present) can be any base table attributes of type string, number, or binary." (Source)

You cannot use (an element of) a map attribute as a key attribute for an index, because the key attribute must be a string, number, or binary attribute from the base table.

Consider using the adjacency list design pattern for your data. It will allow you to easily add both the left and right dispensers to your index.
I wonder if it's possible to create an index that could look like this:

{
  "dispenserId": "my-dispenser-123", // primary key
  "users": ["user5", "user12"],
  "robotId": "my-robot-1",
  "enabled": true,
  "side": left
}

based on DynamoDB documents that look like this:

{
  "robotId": "my-robot-1", // primary key
  "dispensers": {
    "left": "left-dispenser-123",
    "right": "right-dispenser-123",
    "users": ["user5", "user12"]
  },
  "enabled": true,
  "users": ["user1", "user32"]
}

I can't figure out how to point at either dispensers.left or dispensers.right and use that as a key, nor can I figure out how to make a side: left/right attribute based on the path of the dispenser ID.

Can it be achieved with the current structure? If not, what document structure would you suggest instead which allows me to hold the same data?
Creating indexes from nested structure in DynamoDB
The AWS CLI has a set of configuration options to control multipart transfers:

- multipart_threshold - The size threshold the CLI uses for multipart transfers of individual files.
- multipart_chunksize - When using multipart transfers, this is the chunk size that the CLI uses for multipart transfers of individual files.

You can also set these via the command line:

aws configure set default.s3.multipart_threshold 64MB

(See the Configuration Values reference guide.)

You can also use the low-level API, which does not use multipart transfers:

aws s3api put-object --bucket mybucket --key myfile.txt --body mylocalfile.txt
I am transferring items between buckets. 'aws s3 sync' does not preserve metadata if the item was uploaded via multipart upload or is more than 5 GB. Luckily, all my items are only a few megabytes. How can I disable multipart upload to prevent metadata loss?
How to disable multipart upload for 'aws s3 sync' to prevent metadata lose
You can't get back a result from a Step Functions execution in a synchronous way. Instead of polling for the result of the Step Function on completion, have the final Lambda function send the result to an SNS topic or SQS queue for further processing, or model the whole process in the Step Functions state machine.
I am using AWS Step Functions to invoke a Lambda function like this:

return stepfunctions.startExecution(params).promise().then((result) => {
  console.log(result);
  console.log(result.output);
  return result;
})

And the result is:

{ executionArn: 'arn:aws:states:eu-west-2:695510026694:...........:7c197be6-9dca-4bef-966a-ae9ad327bf23',
  startDate: 2018-07-09T07:35:14.930Z }

But I want the result to be the output of the final Lambda function.

I am going through https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/StepFunctions.html#sendTaskSuccess-property. There are multiple functions there, and I am confused about which one could be used to get back the result of the final Lambda function.

The same question is on Stack Overflow: "Api gateway get output results from step function?"

I don't want to call any function periodically and keep checking the status. Even if I use the DescribeExecution function periodically, I will only get the status of the execution but not the result I wanted. Is there any way, or any function, which returns a promise that is resolved once all the Lambdas have executed, and gives back the result?
How to get result of AWS lambda function running with step function
I imagine you would use something like the Gremlin Console to connect to your Neptune instance. I think the documentation is pretty good: http://tinkerpop.apache.org/docs/3.3.3/tutorials/getting-started/#_the_first_five_minutes

Unzip the console from the downloaded file and then just run bin/gremlin.bat (on Windows).
I have an Amazon EC2 instance (Windows) and an Amazon Neptune cluster, both in the same VPC. I would like to connect to Neptune from EC2 using either SPARQL or Gremlin and don't know how to do this. I found https://docs.aws.amazon.com/neptune/latest/userguide/access-graph-sparql.html and https://docs.aws.amazon.com/neptune/latest/userguide/access-graph-gremlin.html

Neither of the two explains how to call Neptune using SPARQL or Gremlin (is it from a terminal, or do they have studios of their own?). Thanks for any hint.
access Amazon Neptune from EC2 windows
It seems like the prod user does not have superuser privileges. As stated in the AWS docs (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.html):

1. create role testuser with password 'testuser' login;
   CREATE ROLE
2. grant rds_superuser to testuser;
   GRANT ROLE

Point 1 has already been done, as there is a prod user. Thus, you need to run the command from point 2 to grant the privileges.
I am trying to enable some Postgres extensions for a specific user in an AWS RDS Postgres instance.

1) I have tried through deployment using a Rails migration; it didn't work.

class InstallPgTrgmContribPackage < ActiveRecord::Migration[5.1]
  def change
    enable_extension "fuzzystrmatch"
    enable_extension "pg_trgm"
    enable_extension "unaccent"

    # execute "CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;"
    # execute "CREATE EXTENSION IF NOT EXISTS pg_trgm;"
    # execute "CREATE EXTENSION IF NOT EXISTS unaccent;"
  end
end

2) I also tried SSH-ing into Postgres and creating it from there:

psql -h blabla.us-east-1.rds.amazonaws.com -p 5432 -U prod -d prod

prod=> CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;
returns:
ERROR: permission denied to create extension "fuzzystrmatch"
HINT: Must be superuser to create this extension.

By default the RDS instance role is test, and I am able to create the extension as the test user. I am trying to install it for the prod and dev users.

The Rails application is deployed through Elastic Beanstalk.

Question: How do I add superuser privileges to a new user role?
AWS RDS Extension Installation
This is Amazon's management service. This is the message you would get if someone shuts the machine down via the web UI, or if Amazon's infrastructure shuts the machine down (for auto scaling, etc.).

If you need to know who's doing this, you should consider enabling AWS CloudTrail for the EC2 instances.
I have hosted a web server on an EC2 instance running Windows Server 2012 R2, and suddenly the instance became unavailable. I ran into the issue a couple of times, and when I checked the AWS Console, the status of the instance had changed to Stopped.

Interestingly, when I checked the system logs in Event Viewer, I found this error message:

The process C:\Program Files\Amazon\XenTools\LiteAgent.exe (EC2AMAZ-******) has initiated the shutdown of computer EC2AMAZ-****** on behalf of user NT AUTHORITY\SYSTEM for the following reason: No title for this reason could be found
Reason Code: 0x8000000c
Shutdown Type: shutdown
Comment:

Any idea why it happened and what LiteAgent.exe does?
Spontaneous shutdowns in AWS EC2 instance
I would argue that using anything other than DynamoDB would be overkill.

DynamoDB is made for this exact purpose, and it is virtually free for your use case. It will only add roughly 10 ms to your run time, which is negligible compared to the cold starts your Lambda will be getting.

If you really want to trim both your DynamoDB cost and your Lambda runtime, you can cache the counter inside the Lambda container (outside your handler). Assuming there are no concurrent invocations (triggered via scheduled events only), I would do something like this for each invocation:

1. Check for a cached counter value.
2. If counter == 0, read the value from DynamoDB. If counter > 0, use the cached value.
3. Do whatever you want to do.
4. Increment the counter in DynamoDB and the cached value.
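A minimal sketch of the DynamoDB increment, assuming a hypothetical table named "counters" with a string partition key "name" (none of these names come from the question):

import boto3

table = boto3.resource("dynamodb").Table("counters")  # hypothetical table

def next_index():
    # Atomically increment the counter and return the new value.
    resp = table.update_item(
        Key={"name": "api-index"},
        UpdateExpression="ADD #v :inc",
        ExpressionAttributeNames={"#v": "value"},
        ExpressionAttributeValues={":inc": 1},
        ReturnValues="UPDATED_NEW",
    )
    return int(resp["Attributes"]["value"])

Because the ADD expression is atomic on the server side, this also stays correct if the scheduled invocations ever do overlap.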
Suppose I wanted to launch a Lambda function every 2 minutes that makes a call to an API with an index number. How would I store the index number for the Lambda to read upon initialisation, and increment it by one every time the Lambda function makes a successful API call? I think that having a DynamoDB table just for a counter is overkill.
How would I create a global Counter for AWS Lambda functions?
It was slightly more complicated, but here is the command:

aws iam get-policy-version --policy-arn arn:aws:iam::XXXXXXXXXXXXXXX:policy/developer_allow --version-id v1

You need to specify the version.
It might seem a silly question, but I haven't found any command in the AWS CLI to show the policy body. I have a managed policy attached to a role. I can display the ID and other information, but not the body. Am I missing anything?

I run aws iam get-policy --policy-arn <arn> and get something like:

{
    "Policy": {
        "PolicyName": "developer_allow",
        "CreateDate": "2017-03-28T12:57:11Z",
        "AttachmentCount": 1,
        "IsAttachable": true,
        "PolicyId": "XXXXXXXXXXXXXXXXXXXXX",
        "DefaultVersionId": "v1",
        "Path": "/",
        "Arn": "arn:aws:iam::xxxxxxxxxxx:policy/developer_allow",
        "UpdateDate": "2017-03-28T12:57:11Z"
    }
}
Describe a policy in AWS using the CLI
I believe I discovered the answer to what I am looking for, which is to define the domain as "data" instead of as a resource in my TF code:

data "aws_route53_zone" "my_zone" {
  name = "myzone.net"
}

resource "aws_route53_record" "myzone_net_mx_record" {
  zone_id = "${data.aws_route53_zone.my_zone.zone_id}"
  name    = "*.myzone.net."
  type    = "MX"
  records = [ "10 inbound-smtp.us-west-1.amazonaws.com" ]
  ttl     = "300"
}

This allows me to reference the zone ID without hardcoding the random string in there, but won't touch the base of the domain itself.

Thanks!
I'm using Terraform with AWS to manage an environment and want to be able to reference an existing Route 53 domain, add/modify records in the domain, etc.

If I run "terraform destroy" I want to delete all of the records added by the Terraform code, but I do not want to delete the domain itself.

Is there a "proper" method for accomplishing this within the Terraform config? Currently I have the domain information (zone ID, etc.) hardcoded into the .tf files, but if there is a way to reference this from the resource itself without allowing TF to destroy the domain, that would seem ideal.

Any help would be appreciated!

Thanks,
Chris
Terraform - AWS Route53 Prevent Domain Deletion
It turns out this was due to my instance having been created with this:

# enable termination protection
disable_api_termination = true

This will apparently prevent normal termination behaviour from Terraform.
I have a set of .tf files that reflect an AWS infrastructure. The files in my terraform folder are more or less:

eip.tf
instance.tf
key.tf
provider.tf
rds.tf
route53.tf
securitygroup.tf
terraform.tfstate
terraform.tfstate.1520442018.backup
terraform.tfstate.backup
terraform.tfvars
terraform.tfvars.dist
vars.tf
vpc.tf

I created the infra and I want to destroy it. I see that the internet gateway destruction takes forever:

aws_internet_gateway.my-gw: Still destroying... (ID: igw-d53fa0b2, 14m50s elapsed)

By browsing my AWS console, I see that this is because my EC2 instance is still up and running.

Why is Terraform trying to destroy the internet gateway without making sure the EC2 instance is down? How can I prevent this from happening again? The same scripts have executed (apply/destroy) many times before without any issues.
Terraform keeps destroying internet gateway forever
The warning is there because many people unintentionally make information public. However, if you are happy for these particular files to be accessed by anyone on the Internet at any time, then you can certainly make the individual objects public or create an Amazon S3 bucket policy to make a particular path public.

The alternative method of granting access is to create an S3 pre-signed URL, which is a time-limited URL that grants access to a private object.

Your application would be responsible for verifying that the user should be given access to a particular object. It would then generate the URL, supplying a duration for the access. Your application can then insert the URL into the src field and the image would appear as normal. However, once the duration has passed, the URL will no longer be accessible.

This is typically used when providing access to private files -- similar to how Dropbox gives access to a private file without making the file itself public.
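A minimal boto3 sketch of generating such a URL (the bucket, key and one-hour expiry are placeholders/example values):

import boto3

s3 = boto3.client("s3")

# Generate a time-limited URL for a private object (valid for 1 hour here).
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-avatar-bucket", "Key": "avatars/user-123.jpg"},  # placeholders
    ExpiresIn=3600,
)

# The URL can then be used directly as the src of an <img> tag until it expires.
print(url)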
I am, for the first time, implementing file uploads using S3 (in this case specifically user profile avatar images) using Flysystem. I'm currently at the point where I have created an S3 bucket, and a user can upload an image, which is then visible online in the bucket console.

I now need the ability to display those images when requested (i.e. when viewing that user's profile). I assumed that the process for this would be to generate the URL (e.g. https://s3.my-region.amazonaws.com/my-bucket/my-filename.jpeg) and use that as the src of an image tag; however, to do this, the file (or bucket) must be marked as public. This seemed reasonable to me because the files within are not really private. When updating the bucket to public status, however, you are presented with a message stating:

"We highly recommend that you never grant any kind of public access to your S3 bucket."

Is there a different, or more secure, way to achieve direct image linking like this that a newcomer to AWS is not seeing?
AWS S3 - storing and serving non-private images
This is pretty common.

In the second account, create a zone for your domain. That will create two records - NS and SOA.

Go back to the first account, and under Registered Domains select the appropriate domain. Then edit the Name Server records, pointing them to the values in the zone you created in the first step.
I have a domain example.com purchased and owned by one AWS account, and a hosted zone exists in its Route 53 service.

I would like to delegate all DNS queries for the apex and any subdomains to a second AWS account, without transferring the domain to the second AWS account.

Is this possible? If so, how can I do it?
How can I delegate DNS to a separate AWS account?
Since you're using Lambda Proxy integration for your method, you'll need to:

(1) Provide the Access-Control-Allow-Origin header as part of the Lambda response. For example:

callback(null, {
    statusCode: 200,
    headers: {"Content-Type": "application/json", "Access-Control-Allow-Origin": "*"},
    body: JSON.stringify({message: "Success"})
});

(2) Add Access-Control-Allow-Origin as a 200 response header in your Method Response config.
I created a REST API using AWS API Gateway and AWS Lambda, and when I configured CORS I ran into the following issue: I was able to configure the CORS response headers for the OPTIONS method, but not for the GET method.

I did it according to the Amazon documentation, but when I called the GET method I didn't see the required headers (Access-Control-Allow-Methods, Access-Control-Allow-Headers, Access-Control-Allow-Origin) in the response. Because of that, I got errors on the client side:

Failed to load #my_test_rest#: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin #my_test_rest_url# is therefore not allowed access.

As a temporary fix I hardcoded the required headers in the code of the Lambda function, but that doesn't look like the right solution and I'd like to understand why it doesn't work for me. Any ideas on what I'm doing wrong?
How to Enable CORS for an AWS API Gateway Resource
"But then another lambda event could override the global.info before the anotherLib is called."

This is actually a non-issue in Lambda functions. It will never happen, by design.

Only one invocation of your function is ever running at a time in each container. Global scope is perfectly safe in Lambda function code.

If a function invocation is running and another one needs to start, it will always, without exception, run in an entirely different container than any other currently running function. A container won't be reused for another invocation of the function until the first invocation has completely finished.

As your concurrency increases, new containers are automatically created. When a container has been around for a few minutes without enough traffic to justify its existence, it is automatically destroyed by the service.

This allows all kinds of things that feel a little dirty but are convenient and quite safe, like creating a global "stash" object instead of passing values around.

This makes sense when you remember that you are paying a per-n-gigabyte-milliseconds charge for each function invocation to have exclusive use of the container in which it is running.
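To illustrate the "global stash" idea, here is a generic sketch in Python rather than the question's Node.js, with made-up function names; the same pattern applies in any Lambda runtime:

import json

# Module-level (global) scope: initialised once per container and never shared
# with a concurrently running invocation, because each concurrent invocation
# gets its own container.
stash = {}

def handler(event, context):
    stash["info"] = event.get("info")
    do_something()
    return {"statusCode": 200, "body": json.dumps({"ok": True})}

def do_something():
    # Downstream code can read the stash without the value being passed around.
    place_where_info_is_really_needed(stash["info"])

def place_where_info_is_really_needed(info):
    print(info)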
Is there a scope in AWS Lambda (with Node.js) that is request-safe? One that is globally visible but won't be changed by another incoming Lambda event? For instance, right now I have something like this:

module.exports.handler = (event, context, callback) => {
  dependency.doSomething(event.info)
}

const doSomething = info => {
  anotherLib(info)
}

const anotherLib = info => {
  placeWhereIReallyNeedInfo(info)
}

I would like to do something like this:

module.exports.handler = (event, context, callback) => {
  global.info = event.info
  dependency.doSomething()
}

const doSomething = () => {
  anotherLib()
}

const anotherLib = () => {
  placeWhereIReallyNeedInfo(global.info)
}

But then another Lambda event could override global.info before anotherLib is called. This is especially a problem when I have a lot of different files and asynchronous code, and need to keep passing parameters that a function doesn't need.

Thanks in advance
Is there something like Request Scoped in AWS Lambda with node?
I run VACUUM (ANALYZE, DISABLE_PAGE_SKIPPING); after our nightly snapshot restores for our staging DB to get everything running smoothly again.
I have restored a snapshot of a PostgreSQL instance as a new instance with exactly the same configuration as the original instance. However, running queries takes much longer on the new instance. A query that takes less than 0.5 ms to execute on the original instance takes over 1.2 ms on the new one. A nightly Python script that runs in 20 minutes on the old instance is now taking over an hour on the new one. This has been going on for several days now.
AWS RDS instance created from snapshot very slow
You seem to be confusing buckets, folders, and object keys. Your code should look something like this (where the key contains both the folder path and file name, and the bucket contains only the S3 bucket name):

obj = s3.get_object(Bucket='bucketname', Key='folder1/folder2/filename.csv')
I'm trying to load a CSV file into pandas from an S3 bucket in AWS. Boto3 seems to fall short in providing functionality for loading files from subfolders. Let's say I have the following path in S3: bucket1/bucketwithfiles1/file1.csv

How do I specify how to load file1.csv? I know S3 doesn't have a directory structure.

import boto3
import pandas as pd

s3 = boto3.client('s3')
obj = s3.get_object(Bucket='/bucket1/creditdefault-ff.csv')
df = pd.read_csv(obj['Body'])
Loading files from s3 with subfolders with python
Instead of ADD, you could use SET with the list_append function (in general, AWS recommends using SET rather than ADD).

(Note: the list_append function name is case-sensitive.)

var params = {
    TableName: "rides",
    Key: {
        "rid": data2.Items[0].rid
    },
    UpdateExpression: "SET #c = list_append(#c, :vals)",
    ExpressionAttributeNames: {
        "#c": "cord"
    },
    ExpressionAttributeValues: {
        ":vals": [{date: secondStartDate.toString(), latitude: xcorpassed, longitude: ycorpassed}]
    },
    ReturnValues: "UPDATED_NEW"
}

docClient.update(params, function (err, data) {
    if (err) console.log(err);
    else console.log(data);
});
This doesn't work. Is there another way to do this? cord is a list to which I want to add a map.

var params5 = {
    TableName: 'rides',
    Key: {
        'rid': data2.Items[0].rid
    },
    UpdateExpression: 'add cord :x',
    ExpressionAttributeValues: {
        ':x': [{date: secondStartDate.toString(), latitude: xcorpassed, longitude: ycorpassed}]
    },
    ReturnValues: 'UPDATED_NEW'
}

docClient.update(params5, function (err5, data5) {
    ...
}
DynamoDB: Appending an element to a list using Node.js
This response was cached several hours ago:

age: 17979

CloudFront won't go back and gzip what has already been cached.

"CloudFront compresses files in each edge location when it gets the files from your origin. When you configure CloudFront to compress your content, it doesn't compress files that are already in edge locations. In addition, when a file expires in an edge location and CloudFront forwards another request for the file to your origin, CloudFront doesn't compress the file if your origin returns an HTTP status code 304, which means that the edge location already has the latest version of the file. If you want CloudFront to compress the files that are already in edge locations, you'll need to invalidate those files."

http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html

Do a cache invalidation, wait for it to complete, and try again.

http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html
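If you want to script the invalidation, a hedged boto3 sketch (the distribution ID is a placeholder, and '/*' invalidates everything, which may be broader than you need):

import time
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_invalidation(
    DistributionId="E1EXAMPLE12345",  # placeholder distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),  # must be unique per request
    },
)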
I have an Angular app which, even when built in prod mode, has multiple large files (more than 1 MB). I want to compress them with the gzip compression feature present on CloudFront.

I activated the "Compress Objects Automatically" option in the CloudFront console. The origin of my distribution is an S3 bucket. However, the bundles downloaded when I'm loading the page via my browser are not compressed with gzip. Here's an example of a request/response.

Request header:

:authority:dev.test.com
:method:GET
:path:/vendor.cc93ad5b987bea0611e1.bundle.js
:scheme:https
accept:*/*
accept-encoding:gzip, deflate, br
accept-language:fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4
cache-control:no-cache
pragma:no-cache
referer:https://dev.test.com/console/projects
user-agent:Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36

Response header:

accept-ranges:bytes
age:17979
content-length:5233622
content-type:text/javascript
date:Tue, 07 Nov 2017 08:42:08 GMT
etag:"6dfe6e16901c5ee5c387407203829bec"
last-modified:Thu, 26 Oct 2017 09:57:15 GMT
server:AmazonS3
status:200
via:1.1 9b307acf1eed524f97301fa1d3a44753.cloudfront.net (CloudFront)
x-amz-cf-id:9RpiXSuSGszUaX7hBA4ZaEO949g76UDoCaxzwFtiWo7C-wla-PyBsA==
x-cache:Hit from cloudfront

According to the AWS documentation everything is OK:

- Accept-Encoding: gzip
- Content-Length present
- file between 1,000 and 10,000,000 bytes
- ...

Do you have any idea why CloudFront doesn't compress my files?
Gzip compression with CloudFront doesn't work
Rather than trying to enforce ordering when adding records to the stream, order the records when you read them. In your use case, every binlog entry has a unique file sequence, starting position, and ending position, so it is trivial to order them and identify any gaps.

If you do find gaps when reading, the consumers will have to wait until they're filled. However, assuming no catastrophic failures, all records should be close to each other in the stream, so the amount of buffering should be minimal.

By enforcing ordering on the producer side, you are limiting your overall throughput to how fast you can write individual records. If you can keep up with the actual database changes, then that's OK. But if you can't keep up, you'll have ever-increasing lag in the pipeline, even though the consumers may be lightly loaded.

Moreover, you can only enforce order within a single shard, so if your producer ever needs to ingest more than 1 MB/second (or > 1,000 records/second) you are out of luck (and in my experience, the only way you'd reach 1,000 records/second is via PutRecords; if you're writing a single record at a time, you'll get around 20-30 requests/second).
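A hedged sketch of the consumer-side ordering idea; the record layout and field names are assumptions, not taken from the question:

import json

def order_binlog_records(raw_records):
    """Sort decoded Kinesis record bodies by binlog position and flag gaps.

    Assumes each body is JSON with hypothetical 'binlog_file', 'start_pos'
    and 'end_pos' fields.
    """
    events = [json.loads(r) for r in raw_records]
    events.sort(key=lambda e: (e["binlog_file"], e["start_pos"]))

    gaps = []
    for prev, curr in zip(events, events[1:]):
        same_file = prev["binlog_file"] == curr["binlog_file"]
        if same_file and curr["start_pos"] != prev["end_pos"]:
            gaps.append((prev["end_pos"], curr["start_pos"]))
    return events, gaps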
I am writing an application which reads MySQL bin logs and pushes changes into a Kinesis stream. My use case requires perfect ordering of MySQL events in the Kinesis stream, for which I am using the putRecord operation instead of putRecords and also including the 'SequenceNumberForOrdering' key. But one point of failure still remains, i.e. the retry logic. Since it is an async function (using the AWS JS SDK), how can I ensure order in case of a failure during the write operation to Kinesis? Is a blocking write (blocking the event loop till the callback is received for the put record) too bad a solution? Or is there a better way?
How to ensure ordering while putting records in kinesis stream asynchronously?
I would check two things:

1. Whether you are authenticating to AWS correctly. You can specify the access and secret keys explicitly in the client:

client = boto3.client(
    'ec2',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
)

2. Whether the user has the ec2:RunInstances IAM permission on the resource you are trying to create.
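As a quick sanity check on the first point, you can ask STS which identity your script is actually running as before calling create_instances. This is just a diagnostic sketch:

import boto3

# Prints the account, principal ARN and user ID that boto3 resolved from your
# credentials (environment variables, ~/.aws/credentials, instance profile, ...).
sts = boto3.client('sts')
identity = sts.get_caller_identity()
print(identity['Account'], identity['Arn'], identity['UserId'])

You can then verify in the IAM console that this exact principal is allowed ec2:RunInstances, and also iam:PassRole for the instance profile you are attaching, since launching with an instance profile requires passing its role.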
I've read this question How do I use Boto3 to launch an EC2 instance with an IAM role? and tried to launch an instance with an IAM role in a Python script. Here's the code:

instance = ec2.create_instances(
    ImageId='ami-1a7f6d7e',
    KeyName='MyKeyPair',
    MinCount=1,
    MaxCount=1,
    SecurityGroups=['launch-wizard-3'],
    InstanceType='t2.micro',
    IamInstanceProfile={
        'Arn': 'arn:aws:iam::627714603946:instance-profile/SSMforCC'}
)

However, I got this error after running the script:

botocore.exceptions.ClientError: An error occurred (UnauthorizedOperation) when calling the RunInstances operation: You are not authorized to perform this operation.

I found this question, how do I launch ec2-instance with iam-role?, which provides a solution in Ruby. Can anybody tell me if there's a way to solve this problem in Python with Boto3?
Unauthorized operation error occurs when using Boto3 to launch an EC2 instance with an IAM role
No spot instance is launched while the request is still active (unfulfilled), so there is no question of terminating your spot instances. Your request will expire once the ValidUntil time is reached. You didn't specify the type of this spot request:

Type='one-time'|'persistent'

By default, the value is one-time. In that case, the request expires and is removed once the ValidUntil time is reached. If you do not specify ValidUntil, then the request is effective indefinitely.

From request_spot_instances:

ValidUntil (datetime) -- The end date of the request. If this is a one-time request, the request remains active until all instances launch, the request is canceled, or this date is reached. If the request is persistent, it remains active until it is canceled or this date and time is reached.

Default: The request is effective indefinitely.
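If instances did get launched before the request expired and you want to clean them up yourself, one approach is to look up the instance IDs attached to your spot requests and terminate them explicitly. A rough sketch, where the request IDs are placeholders taken from whatever request_spot_instances returned to you:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')  # pick your region

# Request IDs come from the response of request_spot_instances
# (response['SpotInstanceRequests'][i]['SpotInstanceRequestId']).
request_ids = ['sir-example1', 'sir-example2']  # placeholders

described = ec2.describe_spot_instance_requests(SpotInstanceRequestIds=request_ids)

# Cancel the requests so they can't be fulfilled again (mostly relevant for persistent ones)...
ec2.cancel_spot_instance_requests(SpotInstanceRequestIds=request_ids)

# ...and terminate any instances that were already launched for them.
instance_ids = [r['InstanceId'] for r in described['SpotInstanceRequests'] if 'InstanceId' in r]
if instance_ids:
    ec2.terminate_instances(InstanceIds=instance_ids)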
I'm using boto3 to deploy spot instances. My request expires after a period of time (as I defined). When the request expires, I expect the machine to terminate. To create the spot request I used this script:

client = boto3.client('ec2', region_name=regions[idx][:-1])
client.request_spot_instances(
    DryRun=False,
    SpotPrice=price_bids,
    InstanceCount=number_of_instances_to_deploy,
    LaunchSpecification={
        'ImageId': amis_id[idx],
        'KeyName': 'MyKey',
        'SecurityGroups': ['SG'],
        'InstanceType': machine_type,
        'Placement': {
            'AvailabilityZone': regions[idx],
        },
    },
    ValidUntil=new_date,
)

How can I terminate the spot instances when the request is not valid anymore?
Terminate spot instances when the request expires
If you are using the Facebook integration with a Cognito User Pool (under Federation -> Identity providers), you can then map the access_token from the Facebook integration to a usable Cognito attribute by going to Federation -> Attribute mapping -> Facebook tab. The Facebook ID is the username, minus the "Facebook_" prefix.

Hope this helps!
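For reading that mapped value inside Lambda: assuming the API sits behind API Gateway with a Cognito User Pool authorizer and a Lambda proxy integration, and assuming you mapped the Facebook access_token to a custom attribute named custom:fb_access_token (a name made up for this example), the token claims show up in the request context. A sketch in Python:

def lambda_handler(event, context):
    # Claims injected by the Cognito User Pool authorizer (Lambda proxy integration).
    claims = event['requestContext']['authorizer']['claims']

    username = claims['cognito:username']            # e.g. "Facebook_1234567890"
    facebook_id = username.replace('Facebook_', '', 1)

    # Only present if you actually created and mapped this custom attribute.
    fb_access_token = claims.get('custom:fb_access_token')

    return {
        'statusCode': 200,
        'body': f'user {facebook_id}, token present: {fb_access_token is not None}'
    }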
I'm creating a small API using Amazon's AWS Cognito as well as Lambda and a Facebook login. When a user / my app sends an API request to Lambda, Cognito does a good job and authenticates the user with their Facebook login on the fly. My point is that, as far as I can see, Cognito isn't passing along any information about the user (like an ID or the Facebook access token), unless I provide it in my request myself. In my case, I'd like to get the user's Facebook access token in AWS Lambda to do some stuff with it. Does anyone know how to get any information about the current user hitting the API (like the Facebook access token), or is Cognito a closed system in this way?
Get Facebook access token in AWS Cognito and Lambda
No, this is not a built-in feature of the DynamoDB API.

You have to implement it yourself by adding an attribute (e.g. UpdatedTime) to each item and setting it to the current time on every write.
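For illustration, a minimal boto3 sketch of that pattern; the table name, key schema, and attribute names are placeholders for whatever your schema uses:

from datetime import datetime, timezone
import boto3

table = boto3.resource('dynamodb').Table('MyTable')  # placeholder table name

# Stamp the item with the current time as part of the same update.
table.update_item(
    Key={'id': 'item-123'},  # placeholder key
    UpdateExpression='SET #data = :data, #updated = :now',
    ExpressionAttributeNames={'#data': 'payload', '#updated': 'UpdatedTime'},
    ExpressionAttributeValues={
        ':data': 'new value',
        ':now': datetime.now(timezone.utc).isoformat(),
    },
)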
Is there a way to check when a DynamoDB item was last updated without adding a separate attribute for this? (For example, are there any built-in attributes or metadata that could provide this information?)
AWS DynamoDB - a way to get when an item was last updated?
The ARN you have provided for the IAM role is a policy ARN. It needs to be a role ARN. Please go to the role you generated and use its ARN instead. It should look something like this:

*:role/AmazonDynamoDBFullAccess-201709151726
I'm trying to follow this tutorial, but when I try to test the API I've created, I get the following message:

API Gateway does not have permission to assume the provided role

The API request should be posting to a DynamoDB table I've created. I've created an IAM role and attached the AmazonDynamoDBFullAccess policy. I've also tried attaching this policy to my administrator user. Here is the integration request in my API:

Any help is much appreciated.
API Gateway does not have permission to assume the provided role DynamoDB
You need to build a path and use it in the put method instead of $imageName, so the file is stored at that particular path in the bucket. The S3 driver will create the folders itself according to the path.

For example, if you set the path as $path = "folder_1/folder_2/file.pdf", the S3 driver will store file.pdf in folder_2, which is inside folder_1.

$imageName = time().'.'.$request->image->getClientOriginalExtension();
$image = $request->file('image');
// file stored in a folder named image_folder
$path = "image_folder/".$imageName;
$t = Storage::disk('s3')->put($path, file_get_contents($image));
I am currently using this package for Amazon file uploads and it works; the only problem is I don't know how to specify a folder in my chosen bucket.

Package used: "aws/aws-sdk-php": "~3.0"

This is how I currently upload to the bucket:

$imageName = time().'.'.$request->image->getClientOriginalExtension();
$image = $request->file('image');
$t = Storage::disk('s3')->put($imageName, file_get_contents($image), 'public');
$imageName = Storage::disk('s3')->url($imageName);
Upload to S3 bucket folder with Laravel