Columns: Response (string, 8 to 2k chars), Instruction (string, 18 to 2k chars), Prompt (string, 14 to 160 chars)
As a quick suggestion, check the SES time by making an HTTP request to SES (e.g. wget -S "https://email.us-east-1.amazonaws.com"), and compare to your server's time. Update the server's time (or use NTP if you aren't already) and see if the problem resolves itself. Thanks @cyberx86.
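As an illustration of that clock check, here is a minimal Python sketch (not from the original answer; only the endpoint URL above is taken from it, the rest is assumed):

# Compare the local clock against the Date header returned by the SES endpoint.
import email.utils
import urllib.error
import urllib.request
from datetime import datetime, timezone

url = "https://email.us-east-1.amazonaws.com"
try:
    headers = urllib.request.urlopen(url).headers
except urllib.error.HTTPError as e:
    headers = e.headers  # the endpoint may answer with an error status, but it still sends a Date header

remote = email.utils.parsedate_to_datetime(headers["Date"])
local = datetime.now(timezone.utc)
print(f"Clock skew: {abs((local - remote).total_seconds()):.0f}s")  # SES rejects requests off by more than ~300s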
I am not able to send mail from Amazon SES since Monday (30-10-2012); previously it was working fine and then it suddenly stopped. The following is the error: A PHP Error was encountered Severity: User Warning Message: SimpleEmailService::getSendQuota(): Sender - RequestExpired: Request timestamp: Wed, 31 Oct 2012 11:50:32 UTC expired. It must be within 300 secs/ of server time. Request Id: e472fb5a-2351-11e2-8183-8138c6c456cf Filename: libraries/ses.php Line Number: 363. But this same code is working fine on another server. What is the issue? Please help, I could not find a solution for this.
AWS amazon SES stopped working suddenly
To confirm: You're wanting to share a) the same database and b) the same files between two instances, each in a different region (not different availability zone)? Sharing the same database across multiple regions isn't easy. Your best option is either asynchronous, multi-master replication or a manual sharding strategy. However, to do any of these safely and effectively, it has to be carefully planned and your application has to be written with this architecture in mind. This article will give you a good overview of the caveats associated with multi-master replication with MySQL. Sharing files across regions can also be somewhat of a challenge because you have so many options, but it is generally easier than doing the same with a SQL database. Variables include: How much replication latency is acceptable? Are you going to be writing to both locations? What kind of I/O performance do you need locally? How frequently do the files change? GlusterFS is frequently a good compromise for most use cases. However, just yesterday AWS announced a new feature that allows inter-region copies of EBS snapshots. Depending on your needs, this may also be worth researching.
In an Amazon AWS setup, how can two cloned instances (in different regions) refer to and share the same "base" dynamic web server files and read to / write from the same main database? Our current AMI instance is on an EBS-backed volume right now. However, apparently trying to share an EBS volume between two instances is a bad idea. I gathered that much from this older 2009 answer, but it doesn't break down what alternatives there might be, other than Amazon S3. Or is S3 the only option? Is the OP's reasoning still valid 3 years later? What are our storage options for sharing DB data live, in real time, by multiple instances? If you want shared data, you can set up a server that all your instances can access. If you want a simple storage area for all your instances, you can use Amazon's S3 storage service to store data that is distributed and scalable. Moving to the cloud, you can have the exact same setup, but you can possibly replace the fileserver with S3, or have all your instances connect to your fileserver.
Amazon AWS: Multiple Instances, Same DB
The item_count value is only updated every six hours or so. So, I think boto is returning the value exactly as it is reported by the service, but that value is probably not up to date.
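For illustration, here is a small sketch using boto3 (the modern successor to the boto library in the question; the table name is a placeholder) showing both the cached count and an exact count:

import boto3

client = boto3.client("dynamodb")

# Approximate count, refreshed by the service roughly every six hours.
approx = client.describe_table(TableName="Table Name")["Table"]["ItemCount"]

# Exact count via a paginated Scan with Select='COUNT' -- accurate but consumes read capacity.
exact, kwargs = 0, {"TableName": "Table Name", "Select": "COUNT"}
while True:
    page = client.scan(**kwargs)
    exact += page["Count"]
    if "LastEvaluatedKey" not in page:
        break
    kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]

print(approx, exact)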
I'm using the Python library Boto to interact with AWS DynamoDB. But when I use "item_count" it returns the wrong value. What's the best way to retrieve the number of items in a table? I know it would be possible using the Scan operation, but this is very expensive on resources and can take a long time if the table is quite large.
Boto's DynamoDB API returns wrong value when using item_count
According to this documentation page, Metadata is provided by Amazon and User Data is specified by the user: Amazon EC2 instances can access instance-specific metadata as well as data supplied when launching the instances. You can use this data to build more generic AMIs that can be modified by configuration files supplied at launch time. For example, if you run web servers for various small businesses, they can all use the same AMI and retrieve their content from the Amazon S3 bucket you specify at launch. To add a new customer at any time, simply create a bucket for the customer, add their content, and launch your AMI.
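As a small sketch (assumed, not part of the original answer), this is how both kinds of data can be read from inside an instance via the instance metadata endpoint (IMDSv1 style shown; IMDSv2 additionally requires a session token header):

import urllib.request

BASE = "http://169.254.169.254/latest"

def fetch(path):
    # Read a value from the instance metadata service; only works from inside an EC2 instance.
    with urllib.request.urlopen(f"{BASE}/{path}", timeout=2) as resp:
        return resp.read().decode()

instance_id = fetch("meta-data/instance-id")   # metadata supplied by Amazon
user_data = fetch("user-data")                 # free-form string supplied at launch
print(instance_id, user_data)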
Amazon EC2 instances can be created with 'User Data' (a long string), or metadata tags (a number of key/value pairs). What is the difference between these? Why do these two systems exist in parallel? In particular, I wish to pass certain pieces of custom data (i.e. a connection string and two resource URLs) to an EC2 machine on startup so it can configure itself. Presumably these are best sent as three key/value pairs?
AWS: Difference between User data and Metadata tags when creating EC2 instance
My recommendation is to create an IAM role (not IAM user) with CloudFormation and assign this role to the instance (again using CloudFormation). The role should be allowed to delete snapshots as appropriate. One of the easiest ways to delete the snapshot using the IAM role on the instance is to use the boto Python AWS library. Boto automatically finds and uses the correct credentials if you run it on the instance with the assigned IAM role. Here is a simple boto script I just used to delete snapshot snap-51930522 in us-east-1:

#!/usr/bin/python
import boto.ec2
boto.ec2.connect_to_region('us-east-1').delete_snapshot('snap-51930522')

Alternatively, you might have an external server run the snapshot cleanup instead of running it on the instances themselves. In addition to simplifying credential management and cron job distribution, it also lets you clean up after stopped or terminated instances.
I'm trying to create a CloudFormation template that would create an EC2 instance, mount a 2GB volume and do periodic snapshots, while also deleting the ones that are, say, a week or more old. While I could get and integrate the access and secret keys, it seems that a signing certificate is required to delete snapshots. I could not find a way to create a new certificate with CloudFormation, so it seems like I should create a new user and certificate manually and put those in the template parameters? In this case, is it correct that the user would be able to delete all the snapshots, including the ones that are not from that instance? Is there a way to restrict snapshot deleting to only the ones with a matching description? Or what's the proper way to handle deleting old snapshots?
How to allow an instance to delete old snapshots of an attached volume?
After playing around, I found out that changing the code to connect in this fashion works:

import boto
from boto.ec2.connection import EC2Connection
from boto.dynamodb import connect_to_region

key = 'abc'
secret = '123'
regions = EC2Connection(key, secret).get_all_regions()
for r in regions:
    con = connect_to_region(aws_access_key_id=key, aws_secret_access_key=secret, region_name=r.name)
    table = con.get_table('Table Name')  # no problem
    # -- rest of code --
I'm using the boto library in Python to connect to DynamoDB. The following code has been working for me just fine:

import boto
key = 'abc'
secret = '123'
con = boto.connect_dynamodb(key, secret)
table = con.get_table('Table Name')
# -- rest of code --

When I try to connect to a specific region, I can connect just fine, but getting the table to work on is throwing an error:

import boto
from boto.ec2.connection import EC2Connection
key = 'abc'
secret = '123'
regions = EC2Connection(key, secret).get_all_regions()
# some filtering after this line to remove unwanted entries
for r in regions:
    con = boto.connect_dynamodb(key, secret, region=r)
    table = con.get_table('Table Name')  # throws the error below
    # -- rest of code --

Using the second block of code above, I get a ValueError: No JSON object could be decoded. Calling con.list_tables() shows the table I'm looking for in the first code block, but throws the same error when I try it in the second code block. Can anyone help me out?
Amazon DynamoDB -- region-specific connection
Unless your config.php file will output the tokens when it is run, you should be safe. To take extra precaution, you could place the config.php file outside the document root of your website so that users aren't even able to request that file directly. Your PHP is executed on the server, and as long as no output containing the tokens is sent to the client, the contents of that file will never be sent to the client. Therefore, they would have no way of reading the file, because the contents never leave the server, only the output from running the script.
I am using Amazon Web Services in my PHP application. Is it safe to store the secret AWS access tokens in a config.php file that is linked to my PHP web service? I have been unable to download the file to look at the content, but isn't it possible to use a packet sniffer or something and be able to read the key and pass phrase? I know Amazon recommends using a token vending machine to create temporary credentials, instead of using the AWS creds directly, but we are hoping to be able to skip implementing one.
Storing secret keys in php files
Yes, you can use Route53 to map DNS names to EC2 instances. An Elastic IP address is the basic way to point to an EC2 instance in a permanent fashion. It can be associated with a replacement instance if you decide your original instance is no longer suitable, and it needs to be re-associated with the instance after stop/start (unless you're in a VPC). When adding it to your DNS, I recommend using a CNAME to the Elastic IP address DNS name. Auto Scaling can automatically start a replacement instance if it detects that an instance has failed or is no longer passing the health check. However, it will not automatically re-associate Elastic IP addresses. You can combine Auto Scaling with Elastic Load Balancing to have a permanent DNS entry to access the healthy instance including any replacements. You would map your DNS entries as CNAME pointers to the ELB DNS name as described in the docs. I'm not sure how exactly your question title relates to the question body, but if you are interested in what stop/start does, I've written an article on all the ways it differs from simply rebooting an instance: Rebooting vs. Stop/Start of Amazon EC2 Instance
We need to dynamically spin up EC2 instances for new customers, and assign them a subdomain: customer1.mydomain.com, customer2.mydomain.com. I'd like to do this programmatically using the AWS SDK. I'd like to use Route 53 to assign the subdomains to instances. Questions: Is it possible to point Route 53 at an instanceId, instead of an IP? Or do I also need to assign an Elastic IP to each instance dynamically? What happens when the hardware crashes? I haven't been able to figure out how to get CloudWatch and Auto Scaling to detect when an instance goes down, and then automatically spin up the (EBS-backed) instance on new hardware and reattach the subdomain.
How to ensure that an EC2 instance survives a stop/start?
Your concerns would be valid if you had to give your secret key out for clients to pull data from the queue. However, the typical workflow involves using your AWS account ID for creating and modifying queues and perhaps pushing data onto the queues. Then you can set permissions with either the SQS AddPermission action or set up a more finely controlled access policy. This means you would give read access to only a specific AWS account or to anonymous usage, but you would not allow other modifications. So basically you have a couple of options. You could compile into your client application AWS public and private keys that you have set up in advance with restricted permissions. A better approach in my opinion is to make the public and private key files a configurable option on your client and tell the users of the client they are responsible for getting their own AWS account and keys; they can tell you what their AWS key is and you can give them as fine-grained control as you want on a per-client basis. These resources would be good for you to look at: Using the Access Policy Language; Controlling User Access to your AWS account; the AddPermission action for SQS.
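As a sketch of the permission-granting idea (using boto3, which is an assumption here since the original answer predates it; the queue name and account ID are placeholders), another account can be given read-only access without sharing your own credentials:

import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="client-queue")["QueueUrl"]

sqs.add_permission(
    QueueUrl=queue_url,
    Label="client-read-only",            # arbitrary label for this permission statement
    AWSAccountIds=["111122223333"],      # the client's AWS account ID (placeholder)
    Actions=["ReceiveMessage", "DeleteMessage", "GetQueueAttributes"],
)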
So this might be a silly question, but what is the point of using Amazon SQS if it requires a private and public key? If the client has the private and public key they could probably discover the keys by decompiling or some other means... The only secure way I could think of would be to use a proxy (like PHP) that has the private and public keys. But then what is the point of using SQS in the first place? The main benefit of SQS (as I can see) is that it can scale upwards and you don't have to worry about how many messages you are receiving. But if you are going to be using a proxy then you will have to scale that too... I hope my concerns make sense? Thanks
Why use Amazon SQS when private keys are exposed
When you start the temporary instance, specify --instance-initiated-shutdown-behavior terminate. Then, when the instance has completed all its tasks, simply run the equivalent of sudo halt or sudo shutdown -h now. With the above flag, this will tell the instance that shutting down from inside the instance should terminate the instance (instead of just stopping it).
I configured an Ubuntu server (AWS EC2 instance) as a cron server; 9 cron jobs run between 4:15-7:15 and 21:00-23:00. I wrote a cron job on another system (EC2 instance) to stop this cron server after 7:15 and start it again at 21:00. I want the cron server to stop by itself after the execution of the last script. Is it possible to write such a script?
To stop the EC2 instance after the execution of a script
For anyone else looking for this, entering the following phrase in Google should be helpful for changes up through around 2014: site:aws.typepad.com pricing s3. For more recent changes: site:aws.amazon.com/blogs/aws/ pricing s3. This searches the AWS blog for "pricing s3", which brings up most of their previous price change announcements.
Is there a site that offers price history for various Amazon Web Services such as EC2, CloudFront, etc.? Something like: on 1/1/2009 a small on-demand EC2 instance in the US East region cost $x.xxxx, on 1/1/2010 it cost $x.xxxx. I would like to be able to forecast that if something like a small on-demand EC2 instance costs $0.085 per hour today, it will likely halve in cost to $0.043 per hour a year from now. Similarly, if I have 10GB of files in S3 storage, how will the cost be affected over a similar span of time? I can only imagine that, like all technology, the cost will go down. I cannot seem to find any pricing information aside from this site, which lists only the fluctuating cost of spot instances: http://thecloudmarket.com/stats#/spot_prices. And this statement made by Amazon on 8/20/2009 claiming that reserved instance pricing had been reduced by 30%: http://aws.amazon.com/about-aws/whats-new/2009/08/20/New-Lower-Prices-for-Amazon-EC2-Reserved-Instances/. Any suggestions?
Amazon Web Services Price History?
You can remove the "Server: Apache/2.2.22 (Unix) ..." line in the header as follows: Download the Apache httpd tarball and unpack it in the usual way. Change include/ap_release.h from:

#define AP_SERVER_BASEVENDOR "Apache Software Foundation"
#define AP_SERVER_BASEPROJECT "Apache HTTP Server"
#define AP_SERVER_BASEPRODUCT "Apache"

to

#define AP_SERVER_BASEVENDOR "-"
#define AP_SERVER_BASEPROJECT "-"
#define AP_SERVER_BASEPRODUCT "-"

Then recompile with your usual configure / make / make install procedure. Finally, in your httpd.conf file, include the line ServerTokens Prod. Restart your server and the Apache header line will simply become "Server: -".
I cannot remove the "Server" header from the response headers. I am using Amazon EC2. I have added this in the Apache config:

ServerSignature Off
Header unset Server
RequestHeader unset Server

It does not do anything. I can still see the server header saying "Apache (Amazon)" in the response headers. Any clue?
Cannot remove server in response headers (Amazon AWS)
I am assuming each user's data is independent of the other users' data, which seems logical to me. If that's not the case, please ignore this answer. Since you have mutually independent data (that is, each user's data is independent from other users'), there is no need to use MapReduce. MR is just a paradigm in programming that simplifies data manipulation when the data is not independent (map prepares the data, then there is a sorting phase, then reduce pulls the results from the sorted records). In your case, if you want to use more computers, just split the load between them; each computer should process ~10000 users per hour (very rough estimate). Then users can be distributed among computers beforehand, or they can be requested in chunks of 1000 or so users, so the machines that finish sooner can process more users (a sketch of this approach follows below). BUT there is an added bonus in using an MR framework (such as Hadoop), even if you only use one phase (map only). It does the error handling for you (nodes failing, jobs failing, ...) and it takes care of distributing the input among the nodes. I'm not sure if MR is worth all the trouble to set it up; it depends on your previous experience - YMMV.
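A minimal sketch of the "just split the load" idea in Python (assumed details throughout: the helper names refresh_user and load_all_user_ids are hypothetical stand-ins for the API calls and database reads described in the question):

from concurrent.futures import ThreadPoolExecutor

def refresh_user(user_id):
    # hypothetical: call the four social-network APIs and write the results to the database
    ...

def refresh_chunk(user_ids):
    for uid in user_ids:
        refresh_user(uid)

def chunks(seq, size=1000):
    # yield the user list in chunks of ~1000, as suggested above
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

user_ids = load_all_user_ids()  # hypothetical helper that reads IDs from the RDS instance

with ThreadPoolExecutor(max_workers=8) as pool:
    pool.map(refresh_chunk, chunks(user_ids))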
I have a website set up on an EC2 instance which lets users view info from 4 of their social networks. Once a user joins, the site should update their info every night, to show up-to-date and relevant information the next day. Initially we had a cron job which went through each user and did the necessary calls to the APIs and then stored the data in the DB (Amazon RDS instance). This operation should take between 2 to 30 seconds per person, which means doing it 1 by 1 would take days to update. I was looking at MapReduce and would like to know if it would be a suitable option for what I'm trying to do, but at the moment I can't tell for sure. Would I be able to give an .sql file to MapReduce, with all the records I want to update, plus a script that tells MapReduce what to do with each record, and have it process them all simultaneously? If not, what would be the best way to go about it? Thanks for your help in advance.
Amazon MapReduce with cronjob + APIs
http://aws.amazon.com/ec2/: There is no Data Transfer charge between Amazon EC2 and other Amazon Web Services within the same region (i.e. between Amazon EC2 US West and Amazon S3 in US West). http://aws.amazon.com/s3/: There is no Data Transfer charge for data transferred between Amazon EC2 and Amazon S3 within the same Region, or for data transferred between the Amazon EC2 Northern Virginia Region and the Amazon S3 US Standard Region. So there is no need for a separate internal address.
According to the EC2 documentation, it's more cost-effective to use the internal address to communicate between EC2 instances. What is the optimal way to communicate between EC2 and S3? Is there a notion of an "internal address" for S3, and is it any faster/more cost-effective than fetching from the public address?
Optimize EC2 -> S3 performance/cost [closed]
As Wukerplank suggested: "You can make it difficult, but you can't make it impossible."
I'm trying to wrap my head around CloudFront. We notice some video sites don't allow us to download the video, i.e. there is no physical link to the file. Or at least, I am not able to locate it in the flash player's source code using Firebug. On some sites, a typical block of code could look like the following:

<object width="496" height="24" type="application/x-shockwave-flash" id="media_player" name="media_player" data="/flash/jwplayer/player.swf" ....>
  <param name="flashvars" value="file=http://some_bucket_name.s3.amazonaws.com/uploads/users/1/foo.mp3&amp;title=Test&amp;author=Foobar&amp;plugins=&amp;autostart=true&amp;controlbar=bottom&amp;repeat=none&amp;screencolor=000000">
</object>

Above, you notice from the HTML source code that the file can be 'cleverly' downloaded through the physical link: http://some_bucket_name.s3.amazonaws.com/uploads/users/1/foo.mp3. I understand what a CDN is. A good explanation can be found here. If we use CloudFront, will this disallow end users from 'cleverly' downloading media files directly from our app, since the files will be streamed?
Does Amazon Cloudfront hide the file from being downloaded directly?
When creating the key pair it's best to pipe the output straight into a file so that there are no formatting issues, using: ec2-add-keypair ec2-keypair | sed '1d' > ec2-keypair. Max.
I have set up an Amazon EC2 instance using the command line tools. I have created a key pair for it etc. and it is up and running. I try to SSH into it using the following (I am running bash in Snow Leopard): $ ssh -i ec2-keypair [email protected]. Snow Leopard pops up a box saying "Enter your password for the SSH key "ec2-keypair"". Can someone please tell me what I should do? If I don't provide a password it just asks me for one in the bash terminal. Thanks for the response. I created a key pair for Amazon EC2 using: ec2-add-keypair ec2-keypair. I have created a password-protected SSH key and now have two files in my .ssh directory: id_rsa and id_rsa.pub. Do I need to transfer one of these to my EC2 instance? Which one? What is the best way of doing this and where shall I put it? Max. Any help greatly appreciated as I have spent some while trying to sort this out. Max.
Amazon EC2 instance
I've updated RTurk and continued development; it should work for most jobs. Here's a little tutorial I wrote up on it: http://squarepush.com/ps/2009/08/11/mechanical-turk-in-ruby/. I'll be doing a more Rails-specific one in the next few days.
Does anybody know of any tutorials/resources that discuss integrating Amazon's Mechanical Turk and Rails? (besides those resources that Amazon already provides) Thanks!
Mechanical Turk tutorials or how-to guides
With a bit of digging around in the S3 library I'm using, I have found the problem here. When you upload a file to S3 you have to set the Content-Type header. In my situation I was uploading two files: one was an original PDF file with a Content-Type of application/pdf, the other was a thumbnail preview in PNG format. The library I was using to upload to S3 does set the Content-Type header, but it was setting the header to application/pdf for both the original PDF and the PNG thumbnail. It seems that Firefox and IE will happily render a PNG image from S3 even though it has the wrong Content-Type header, whereas Safari doesn't like this at all and consequently won't render the image. So, patching the S3 library I'm using such that the correct Content-Type header is set on the PNG thumbnails solved the issue. Phew.
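For reference, this is what "set the Content-Type explicitly on upload" can look like, shown here in Python with boto3 rather than the Ruby-side library the answer actually patched (bucket and key names are placeholders taken loosely from the question):

import boto3

s3 = boto3.client("s3")
with open("small.png", "rb") as f:
    s3.put_object(
        Bucket="mybucket",
        Key="attachments/30/small.png",
        Body=f,
        ContentType="image/png",   # without this, a wrong default such as application/pdf can stick to the object
    )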
I'm using Amazon S3 to host images. The S3 bucket is private, so I generate a temporary URL (using Right AWS) with a 5-minute expiry to allow the image to be rendered. The URL looks like this (note: URL below will not work): https://mybucket.s3.amazonaws.com:443/attachments%2F30%2Fsmall.png?Signature=J%2BXzQd95myCNv0Re8arMhuTFSvk%3D&Expires=1235511662&AWSAccessKeyId=1K3MW21E6T8LWBY94C01. This works fine, and I can paste the URL into Firefox and the image is displayed. Same for IE. However, when I try it in Safari the URL appears to resolve but no image is displayed. Similarly, if I try and use the URL in the src attribute of an IMG tag on a web page, nothing is rendered by Safari (fine in all other browsers), e.g. (screenshot: http://lylo.co.uk/screenshot.png). Has anyone seen this behaviour before and can you point out what, if anything, I might be doing wrong?
Amazon S3 temporary URL to image works in IE and Firefox but not Safari
As Allan Chua suggested, using the SES SMTP interface with a tool like nodemailer works.

nodemailer.createTransport({
  port: 587,
  host: `email-smtp.${region}.amazonaws.com`,
  auth: { user: smtpUsername, pass: smtpPassword }
});
I have already set up the necessary VPC endpoints as stated here. However, when inside my Lambda function (using Node.js) I do:

const sesClient = new SESClient({ });
...
await sesClient.send(sendEmailCommand);

it times out after 5 minutes. Should a specific endpoint be specified when initializing the SES client?
Access AWS SES from a lambda function that is inside a VPC
According to this doc, the format should be:

"valueFrom": "arn:aws:secretsmanager:region:aws_account_id:secret:appauthexample-AbCdEf:username1::"

But you have:

"arn:aws:secretsmanager:<region>:<accountid>:secret:/rds/rds_secret-D2fBVv:SecretString:password"

That should be:

arn:aws:secretsmanager:<region>:<accountid>:secret:/rds/rds_secret-D2fBVv:password::
Below is a portion of my CloudFormation template for an ECS task. It fetches a secret /rds/rds_secret-D2fBVv which contains a JSON key-value pair secret like {"password":"1234ABCD","dbname":"my_db"}...

TaskDefinitionAPI:
  Type: AWS::ECS::TaskDefinition
  Properties:
    ContainerDefinitions:
      - Name: api
        Secrets:
          - Name: "DB_PASSWORD"
            ValueFrom: "arn:aws:secretsmanager:<region>:<accountid>:secret:/rds/rds_secret-D2fBVv:SecretString:password"

as per this documentation here. However, when creating the stack, I get the following error:

ResourceInitializationError: unable to pull secrets or registry auth: Execution resource retrieval failed: unable to retrieve secret from asm: service call has been retried 1 time(s): secrets manager: failed to retrieve secret from arn:aws:secretsmanager:::secret:/rds/rds_secret-D2fBVv:SecretString:password: unexpected ARN format with parameters when trying to retrieve ASM secret

I suspect it is because I have a JSON key-value pair as the secret. I have tried many modifications to this, but CloudFormation still complains.
Unexpected ARN format with parameters when trying to retrieve ASM secret
Role B (in account B) cannot use identity providers defined in another AWS account. You can either: (1) create and use an identity provider in account B, or (2) create a role in account A that can be assumed with web identity, then use role chaining to assume a role in account B that trusts this role in account A. (In other words, assume role A, then assume role B using role A.)
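As an illustration of option (2), here is a role-chaining sketch in Python with boto3 (an assumption, since the question itself uses a GitHub Action; the ARNs are placeholders and github_oidc_token stands for the token issued by GitHub Actions):

import boto3

sts = boto3.client("sts")

# Step 1: assume the role in account A with the OIDC token.
step1 = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::ACCOUNT_A:role/github-oidc-role",   # placeholder ARN
    RoleSessionName="OidcSession",
    WebIdentityToken=github_oidc_token,                       # placeholder: token from the identity provider
)["Credentials"]

sts_a = boto3.client(
    "sts",
    aws_access_key_id=step1["AccessKeyId"],
    aws_secret_access_key=step1["SecretAccessKey"],
    aws_session_token=step1["SessionToken"],
)

# Step 2: chain into the role in account B, which trusts the role in account A.
step2 = sts_a.assume_role(
    RoleArn="arn:aws:iam::ACCOUNT_B:role/ROLE_B",
    RoleSessionName="ChainedSession",
)["Credentials"]  # use these credentials against account B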
I have 2 AWS accounts. Account A contains an Identity Provider (IdP). Account B has an IAM role (ROLE B); this role is also configured to allow assuming the role via AssumeRoleWithWebIdentity against the Identity Provider set up in Account A.

"Effect": "Allow",
"Principal": {
  "Federated": "arn:aws:iam::ACCOUNT_A:oidc-provider/token.actions.githubusercontent.com"
},
"Action": [
  "sts:TagSession",
  "sts:AssumeRoleWithWebIdentity"
],
"Condition": {
  "StringLike": {
    "token.actions.githubusercontent.com:sub": "repo:my-org/my-repo:*"
  },
  "StringEquals": {
    "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
  }
}

Then I use this action to do OIDC authentication:

- name: Login to AWS use OIDC flow
  uses: aws-actions/configure-aws-credentials@v1
  with:
    role-to-assume: <ROLE_B_ARN>
    role-duration-seconds: 1200
    role-session-name: OidcSession

I get the error "Not authorized to perform sts:AssumeRoleWithWebIdentity". My question is: is it possible to do OIDC across AWS accounts?
Setup AWS OIDC across accounts
The configuration you posted seems not really optimal. I would suggest always running a minimum of two instances in a production environment. Also, because you are scaling based on 'UnHealthyHostCount', I would expect that in some cases the unhealthy instance has already reached its capacity limit, which means that it's struggling to handle the current workload. Scaling up your application may help, but it won't fix the underlying issues with the unhealthy instance; therefore I don't find it weird that it is slow. If you always run two instances, I would think this could be solved without changing your scaling triggers. If this was already your policy before, it could also be that you have some special configuration, not included in your question, which stopped working on upgrading. I can only give you some pointers on what could have happened when upgrading: I also see that you upgraded not only to Python 3.8 but also changed OS. On the changelog of the Elastic Beanstalk Python platform I see that Python 3.6 was running on Amazon Linux and not on Amazon Linux 2. I am not familiar with the Python platform, but when I upgraded my PHP Elastic Beanstalk to Amazon Linux 2 I needed to change a lot of my .ebextensions. I used an Amazon doc to migrate to Amazon Linux 2; in this doc there are also some platform-specific considerations.
So we have recently uploaded a new environment for our product. We are using Python 3.8, and Elastic Beanstalk is of the application type. This is my Auto Scaling group configuration, and these are my scaling triggers configuration (screenshots). I can see there are a few spikes in the monitoring, but these things were happening before as well and we were not facing this issue then. Is there something that I am missing in my configuration? Because we have used exactly the same configuration before and it was working fine. We have just upgraded from Python 3.6 to Python 3.8. Any help would be appreciated.
Elastic Beanstalk server slows down whenever a instance is added or removed on auto-scaling (Python 3.8 running on 64bit Amazon Linux 2)
"name": "*ubuntu/images/hvm-ssd/ubuntu-focal-22.04-amd64-server-*",You name still hasfocalin it. Change it tojammyAlso the leading*isn't needed (though I suppose it isn't actively hurting anything either).For ref, this is my working filter:source_ami_filter { filters = { architecture = "x86_64" name = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*" root-device-type = "ebs" virtualization-type = "hvm" } most_recent = true owners = ["099720109477"] }I get the owner ID by looking up the AMI in my region (https://cloud-images.ubuntu.com/locator/ec2/) & then doing anaws ec2 describe-images.E.g.aws ec2 describe-images --filters "Name=image-id,Values=ami-00874d747dde814fa" | grep -e Name -e Architecture -e OwnerId=>"Architecture": "x86_64", "OwnerId": "099720109477", "DeviceName": "/dev/sda1", "DeviceName": "/dev/sdb", "VirtualName": "ephemeral0" "DeviceName": "/dev/sdc", "VirtualName": "ephemeral1" "Name": "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20230115", "RootDeviceName": "/dev/sda1",
I'm planning to update from Ubuntu 20.04 to 22.04, and I had changed the config as below:

"source_ami_filter": {
  "filters": {
    "virtualization-type": "hvm",
    "name": "*ubuntu/images/hvm-ssd/ubuntu-focal-22.04-amd64-server-*",
    "root-device-type": "ebs"
  }
}

I'm getting an error that no matching filters were found. So far I have changed the filters to 22.04 but it didn't work.
Packer builder source_ami_filter for ubuntu 22.04
This button is not available for FIFO queues. There is a note about this in the AWS documentation: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-dead-letter-queue-redrive.html
I have a queue set up with a redrive policy that automatically sends failed messages to a dead letter queue. When looking at the dead letter queue, there is a button to "Start DLQ redrive" which should allow me to reprocess the failed messages in the original queue. Unfortunately this button is not enabled on my queue and I cannot figure out why. Relevant article here: https://aws.amazon.com/blogs/compute/introducing-amazon-simple-queue-service-dead-letter-queue-redrive-to-source-queues/
In AWS SQS, the "Start DLQ Redrive" button is disabled
Possible alternative: Consider using AWS Athena to view and query your application logs that are stored in S3. For example, here's a guide to using Athena to query CloudFront access logs that are stored in S3: https://docs.aws.amazon.com/athena/latest/ug/cloudfront-logs.html
My application is storing logs in S3 in a specific format. But currently, I'm not able to view those logs directly. Can we use AWS CloudWatch to view logs that are stored in S3? When I checked, I saw that we can use CloudWatch Logs to create a log group and then store logs in it using the CloudWatch agent. But is there a way to import logs from S3 into CloudWatch and view them in the CloudWatch Logs section?
Can we view Logs stored in S3 using CloudWatch?
When the pipeline page opens, select the "Actions" dropdown menu, and then you can choose the "Update runtime" option. Under resolver type you can then choose a Unit type resolver, which is probably what you are looking for. Something did change: there is a new feature where you can write resolvers using JavaScript, and I guess they've made a PIPELINE resolver the default one.
I have created an AppSync API and a Lambda data source and a resolver to provide data for it. I managed to successfully run queries a couple of days ago. I wanted to attach a second Lambda resolver for a different GraphQL query. I added a new Lambda as a data source, but when I click Attach in the schema next to the query, I am forwarded to the page for creating a pipeline resolver and there is no way to choose a Lambda resolver instead. Now, even when I just create a new copy of the previous AppSync API with one query and want to attach a Lambda resolver to that single query, there is no way to attach a Lambda resolver any more. Though it was possible earlier this week. And I can see that old APIs are still using Lambda resolvers. Has anything changed on AppSync recently? Or how can I attach a Lambda resolver to a query, not a pipeline resolver?
AWS AppSync: No option to add Lambda resolver
Yes, you can use the resource-level classes such as Table with both the real DynamoDB service and DynamoDB Local via the DynamoDB service resource, as follows:

resource = boto3.resource('dynamodb', endpoint_url='http://localhost:8000')
table = resource.Table(name)
I have some existing code that uses boto3 (Python) DynamoDB Table objects to query the database:

import boto3
resource = boto3.resource("dynamodb")
table = resource.Table("my_table")
# Do stuff here

We now want to run the tests for this code using DynamoDB Local instead of connecting to DynamoDB proper, to try and get them running faster and save on resources. To do that, I gather that I need to use a client object, not a table object:

import boto3
session = boto3.session.Session()
db_client = session.client(service_name="dynamodb", endpoint_url="http://localhost:8000")
# Do slightly different stuff here, 'cos clients and tables work differently

However, there's really rather a lot of the existing code, to the point that the cost of rewriting everything to work with clients rather than tables is likely to be prohibitive. Is there any way to either get a table object while specifying the endpoint_url so I can point it at DynamoDB Local on creation, or else obtain a boto3 DynamoDB table object from a boto3 DynamoDB client object? PS: I know I could also mock out the boto3 calls and not access the database at all. But that's also prohibitively costly, because for all of the existing tests we'd have to work out where they touch the database and what the appropriate mock setup and use is. For a couple of tests that's perfectly fine, but it's a lot of work if you've got a lot of tests.
Can I get a boto3 DynamoDB table object from a client object?
Solved it, it was because of the signature version in boto3. Apparently boto doesn't set signature version 4 by default (I think I read in the docs somewhere it should). Here are the changes I had to make:

from botocore.config import Config

config = Config(signature_version='s3v4')
s3 = boto3.client('s3', config=config)
response = s3.get_object(
    Bucket=os.getenv('S3_BUCKET'),
    Key=f'path/abc.json'
)
I am getting an error when I am trying to open a presigned URL for an encrypted file. Here's my line to create the URL:

client.generate_presigned_url('get_object', Params={'Bucket': 'bucket1', 'Key': a})

Here's the error I am getting:

<Error>
  <Code>InvalidArgument</Code>
  <Message>Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4.</Message>
  <ArgumentName>Authorization</ArgumentName>
  <ArgumentValue>null</ArgumentValue>
  <RequestId>F6VK4TD1S0G4K6YR</RequestId>
  <HostId>HOTh/YUsnxC4sSBYVsK5psX5vBz21q1M/qx+pVmKa6s7Np4EbRUbBV4toRJ52OAtqpHIejY03Zk=</HostId>
</Error>

Note, I am using the defaults in boto3, so it should be using signature 4 out of the box. My bucket is encrypted using default encryption and I am using S3 Bucket Keys and a KMS key auto-generated by AWS. What am I missing here?
Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4
BuildImage = LinuxBuildImage.FromDockerRegistry("public.ecr.aws/docker/library/node:16-bullseye-slim")
I've successfully tested out a manual CodePipeline/CodeBuild test action to run Cypress UI tests with the public Docker image public.ecr.aws/cypress-io/cypress/browsers:node16.17.0-chrome106. Now, on trying to codify the setup into CDK, I can't find a complete code example on how to reference the custom image. The docs only offer a very terse block that doesn't explain where the (public) ECR repository comes from. https://docs.aws.amazon.com/cdk/api/v2/dotnet/api/Amazon.CDK.AWS.CodeBuild.html

Environment = new BuildEnvironment
{
    BuildImage = LinuxBuildImage.FromEcrRepository(ecrRepository, "v1.0")
}

Checking out the AWS.ECR namespace doesn't explain how to reference public repositories with the static methods either. https://docs.aws.amazon.com/cdk/api/v2/dotnet/api/Amazon.CDK.AWS.ECR.Repository.html#methods How is the CodeBuild project's BuildEnvironment object supposed to be coded to properly use that public custom image?
AWS CDK: How to reference custom Docker image from public ECR repository
Found the solution:

task: AWSCLI@1
inputs:
  awsCredentials: "$$$"
  regionName: us-east-2
  awsCommand: cloudfront
  awsSubCommand: create-invalidation
  awsArguments: --distribution-id DistributionID --paths "/fd/cm/latest/remoteEntry.js"
displayName: Cache Invalidation
I have created automatic invalidations in GitHub and it is working fine:

run: aws cloudfront create-invalidation --distribution-id ${{ secrets.AWS_DISTRIBUTION_ID }} --paths "/fd/cm/latest/remoteEntry.js"

but I'm not sure how to create the same task in Azure DevOps. Any help would be greatly appreciated.
How to Create auto CloudFront invalidations in AzureDevops?
This is a common security feature for avoiding user enumeration, i.e. identifying whether a given username/email is valid in the platform, which can lead to attacks like brute-forcing or credential stuffing. In order to avoid this vulnerability, it is recommended that the response content (and timing) for operations like sign in, sign up and password reset is the same for valid or invalid usernames, and this is what Cognito is doing by sending a fake response stating that a code has been sent to a simulated email address or phone number, when none is actually sent. From the Cognito Developer Guide on Managing error responses: ForgotPassword: When a user isn't found, is deactivated, or doesn't have a verified delivery mechanism to recover their password, Amazon Cognito returns CodeDeliveryDetails with a simulated delivery medium for a user. The simulated delivery medium is determined by the input user name format and verification settings of the user pool.
I'm using amazon-cognito-identity-js to reset a user password. I call user.forgotPassword() and that all works fine; the user receives a verification code, etc. However, something strange happens when I enter a non-existing username! I do everything properly: I create a user = new CognitoUser(...) object with my pool and some random username. And then, when I call user.forgotPassword(...), onSuccess is triggered, and I get something like this as a response:

CodeDeliveryDetails: Object { AttributeName: "phone_number", DeliveryMedium: "SMS", Destination: "+*******5651" }

or, if I insist on email recovery instead of SMS:

CodeDeliveryDetails: Object { AttributeName: "email", DeliveryMedium: "EMAIL", Destination: "4***@g***" }

Is Cognito really sending random people SMSs and emails?!? I swear I don't have users with any similar email or phone in my User Pool. O_o
AWS Cognito forgotPassword strange response when user does not exist
It turns out I wasn't naming my resource correctly. After some trial and error, I ran a destroy plan for my entire infrastructure (terraform plan <insert module runtime params> -destroy). Using the output from that, I found the name of the resource I wanted to destroy. The format was module.<submodule>.<resourcetype>.<resourcename>. Once I acquired the resource name directly from Terraform, I first ran the terraform plan -destroy -target="module.<submodule>.<resourcetype>.<resourcename>" command to verify the outcome, then the terraform destroy -target="module.<submodule>.<resourcetype>.<resourcename>" command, and it worked!
I have a DMS task that failed and isn't resuming or restarting. Unfortunately, according to AWS Support, the only recourse is to destroy and recreate it. I have a large infrastructure that takes several hours to destroy and recreate with Terraform. I'm running Terraform version 1.2.X with the AWS provider version 4.17.0. I tried running terraform plan -destroy -target="<insert resource_type>.<insert resource_name>". I tried with and without quotes, double hyphens prior to the target option, module names, etc. Every time the result comes back with this error: "Either you have not created any objects yet or the existing objects were..." My hierarchy is this: main module -> sub module -> resource. My spelling and punctuation are correct. I've Googled it. I find only the Hashicorp documentation, which specifies the syntax but not the naming convention, as well as bug reports from years ago. How do I selectively destroy a resource?
How can I destroy a specific Terraform managed resource?
!GetAtt only works for resources created in the template; you can't use it to get a property of a parameter. To achieve the desired workflow, you'll have to either: create the launch template in the same template (then you can use !GetAtt as you are doing now), or pass in the version number as an additional parameter. If the launch template was created with another CloudFormation template, then there is a third option: export the launch template resource from the other template and import it in your template; then you'll be able to use !GetAtt.
I am writing a CloudFormation (CF) template to provision an Auto Scaling group. I prepared the launch template ahead of time and want to use its latest version. I am getting the LaunchTemplateId as a parameter, but I am not sure how to use the latest launch template version. My CF template looks like this:

---
AWSTemplateFormatVersion: 2010-09-09
Description: Create Auto Scaling Group with 2 min, 2 desired, 4 max
Parameters:
  ...
  TemplateID:
    Description: Launch Template for ASG EC2s
    Type: String
    Default: lt-xxxxxxxxxxxxxxxx
Resources:
  ASG:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      ...
      LaunchTemplate:
        LaunchTemplateId: !Ref TemplateID
        Version: !GetAtt
          - !Ref TemplateID
          - LatestVersionNumber
      ...

but I always get this linting error when I run taskcat for testing:

[ERROR ] : Linting detected issues in: /Users/zaidafaneh/Desktop/RealBlocks/repos/cloudformation-temaplates/templates/saas/asg.yml
[ERROR ] : line 23 [1010] [GetAtt validation of parameters] Invalid GetAtt TemplateID.LatestVersionNumber for resource ASG

I am thinking about using a Lambda custom resource as a workaround but I feel it's too much. Is there any way to do this through CloudFormation?
Use latest launch template version in AWS Cloudformation
Up to 7 days - yes. More than 7 days - no. It is important to know that the maximum expiration time differs depending on how the presigned URL is created: using the S3 console - max 12 hours; AWS Explorer for Visual Studio - max 7 days; AWS SDK - max 7 days; AWS CLI - max 7 days. Docs for reference. It is essential to understand that presigned URLs are meant for limited-time access, not long durations. In a broader scope, you are giving access to your bucket to some external identity; why would you give that access for so long?
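For illustration, a minimal SDK sketch in Python with boto3 (an assumption; bucket and key are placeholders) requesting the maximum allowed lifetime of 7 days (604800 seconds):

import boto3
from botocore.config import Config

s3 = boto3.client("s3", config=Config(signature_version="s3v4"))
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "report.pdf"},  # placeholder bucket/key
    ExpiresIn=7 * 24 * 3600,                              # 604800 seconds = 7 days, the SDK/CLI maximum
)
print(url)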
Just wanted to know: is there a way to use an AWS S3 presigned URL for more than 7 days, using V4 presigned URLs?
AWS Presigned URL valid for more than 7 days
I found the answer to this in this thread: https://github.com/aws-solutions/serverless-image-handler/issues/375#issuecomment-1227290911. tl;dr: Make sure your "concurrent Lambda executions quota" is large enough (for me it was 10, increased to 1000); otherwise, if you request multiple images at once, some of them will get dropped and fail, and then CloudFront will also cache the failures for a few minutes. So the fix is to request an increase of the concurrent Lambda executions quota here: https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html. Hope this fixes the issue for you as well.
I am using the AWS Serverless Image Handler as a CloudFormation stack to resize images in my S3 buckets, which I distribute via CloudFront. When requesting an object, I generate signed URLs on my backend and use them as the src attribute in my frontend. Now, while some images are working and getting displayed correctly, some other images aren't. When I open such an image in a new tab, strangely I can view it. Dev Tools gives me a 500 error, but since the image is visible in a new tab, it can't be an issue with the Serverless Image Handler, could it? In addition, there's not really a pattern which decides which images will not be displayed; it changes over time and appears random. What could be the problem here? Thanks!
AWS Serverless Image Handler: images aren't displayed even though they are generated
To answer my own question, the answer is no. I found this on the Recommendations page.
I am currently looking into purchasing a bunch of ElastiCache reserved instances. I know from EC2 and RDS that it does not matter if you purchase e.g. a t3.medium or 2 t3.small instances, because the reserved instances are size-flexible. Is this also the case for reserved ElastiCache instances? I cannot find any hint about it here or here...
Are AWS ElastiCache Reserved instances size-flexible?
It's instructive how differently the CloudFormation docs and CDK docs present Nested Stacks. The CloudFormation docs focus on their role in component reuse. The CDK docs don't mention reuse, instead presenting them as a workaround for the per-stack resource limit. Of course, Nested Stacks do both things. CDK Constructs are more composable and portable than the Nested Stacks it inherited from CloudFormation. The CDK docs recommend Constructs for composition here and here. Apart from overcoming the per-stack resource limit, there is a backwards-compatibility rationale for using Nested Stacks when including existing CloudFormation templates into CDK apps.
I'm trying to understand the use case of Nested Stacks vs. Constructs, specifically in CDK. The AWS docs say the following: "Stacks are the unit of deployment: everything in a stack is deployed together. So when building your application's higher-level logical units from multiple AWS resources, represent each logical unit as a Construct, not as a Stack. Use stacks only to describe how your constructs should be composed and connected for your various deployment scenarios." This makes sense when evaluating whether to use a Construct or Stack, but what about Nested Stacks? Both Constructs and Nested Stacks solve the problems of: reusability of component architectures; controlled information sharing between components / mitigating import and export (deadly embrace) issues. And both Constructs and Nested Stacks are deployed together from the root Stack (from what I understand, Nested Stacks are rarely deployed alone and are intended to be deployed as part of a group of Nested Stacks under one parent Stack). So what's the benefit of using Nested Stacks over Constructs besides working around the resource limitations of a single Stack (i.e. when should I use one over the other)?
What are the tradeoffs between NestedStacks and Constructs?
I think you could create an IAM role in the account with Parameter Store, give that role permission to access Parameter Store, and configure it to let the IAM user you created in the other account assume that role and do what it needs. Something like:

aws sts assume-role --role-arn "arn:aws:iam::123456789012:role/example-role" --role-session-name AWSCLI-Session

and then

aws ssm get-parameter --name "MyStringParameter"
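The same flow in Python with boto3 (a sketch, assumed rather than taken from the answer; the role ARN and parameter name mirror the CLI example above):

import boto3

# Assume the role defined in the account that owns the parameters.
creds = boto3.client("sts").assume_role(
    RoleArn="arn:aws:iam::123456789012:role/example-role",
    RoleSessionName="cross-account-ssm",
)["Credentials"]

# Build an SSM client with the temporary credentials and read the parameter.
ssm = boto3.client(
    "ssm",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

value = ssm.get_parameter(Name="MyStringParameter", WithDecryption=True)["Parameter"]["Value"]
print(value)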
The use case: the database credentials are stored in Parameter Store in a source AWS account and we need to share those credentials with another AWS account. I know the recommendation is to use Systems Manager, but that is not a valid option for custom reasons. We won't access Parameter Store from a Lambda inside another AWS Account/VPC. Instead, we need to access those keys from the AWS CLI to fill in the application environment variables at build time; again, it's not ideal. 🤷‍♂️ In summary, we have an AWS cross-account / same region / IAM user (another account) scenario to access the Parameter Store keys from the source AWS account. Thanks in advance for any kind of guidance/direction 👊
Is it possible to share Parameter Store keys in another AWS Account for same region?
If your deployment is purely using Fargate, not EC2, then there's really no technical reason to split it into a separate ECS cluster, but there's also no reason to keep it in the same cluster. There's no added cost to create a new Fargate cluster, and logically separating your services into separate ECS clusters can help you monitor them separately in CloudWatch.
I have a cluster with a mixture of services running on EC2 and Fargate, all being used internally. I am looking to deploy a new Fargate service which is going to be publicly available over the Internet and will get around 5000 requests per minute. What factors do I need to consider so that I can choose whether a new cluster should be created or whether I can reuse the existing one? Would sharing of clusters also lead to security issues?
How to choose if a new cluster is needed for an AWS Fargate Service?
I think you're on the right track. I'd design it as a table with two Global Secondary Indexes: a base table, GSI1 and GSI2 (their layouts were shown as table images in the original answer). Now for the why: this design allows you to easily update the following things: a user's department; a ticket's status if you know the ticket ID; a ticket's user if you know the ticket ID; a ticket's department if you know the ticket ID. You can get a bunch of information from this model (a query sketch follows below): Fetch a department: query the base table with the department name, or list all departments. Fetch all users in a department: query GSI1 with the department name and filter the sort key using begins_with = USER#. Fetch a given user in a department: it sounds like you know the user ID, so do a GetItem on the base table; if that's not the case, do the query mentioned in "Fetch all users in Department". Fetch all tickets belonging to the department: query GSI1 with the department name as the PK and filter the SK using begins_with = Ticket#. Fetch all tickets assigned to the user: query GSI2 with the user ID as the PK and filter the SK using begins_with = Ticket#.
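Here is what one of those queries can look like with boto3 (a sketch under assumptions: the table name and the GSI1PK/GSI1SK attribute names are illustrative, since the original answer's table layouts were images):

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("TicketManagement")  # placeholder table name

# "Fetch all tickets belonging to a department": query GSI1 with the department as the
# partition key and begins_with on the sort key to keep only ticket items.
resp = table.query(
    IndexName="GSI1",
    KeyConditionExpression=Key("GSI1PK").eq("DEPARTMENT#Support")
    & Key("GSI1SK").begins_with("Ticket#"),
)
tickets = resp["Items"]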
I am working in DynamoDB for the first time. My assignment is a Ticket Management System which has 3 entities: Department, User and Ticket, with relationships between the entities (shown in a diagram). I have identified the following access patterns: (1) fetch a department; (2) fetch all users in a department; (3) fetch a given user in a department; (4) fetch all tickets belonging to the department; (5) fetch all tickets assigned to the user. For these I defined the following data model, and I am thinking of creating a GSI with Ticket as PK and User as SK to do 4 and 5. On a higher level I need to perform 2 updates: I can update the user to which the ticket is assigned, and I can update the ticket status as in-progress or resolved. In the table I have the ticket details as a JSON object as below. I need help from experienced people on whether my understanding and approach is efficient.
Data modelling in dynamo db for Ticket Management System
local.bucket_name executes in your bash, not in TF. You have to actually provide the full name:

terraform import aws_s3_bucket.trigger_pipeline "my-bucket-name"
I wanted to create an event notification on an existing S3 bucket (which was not set up by me in this current Terraform code). I came across this answer: "terraform aws_s3_bucket_notification existing bucket", so I tried this. Here, local.bucket_name is the name of the existing bucket.

notification.tf:

resource "aws_s3_bucket" "trigger_pipeline" {
  bucket = local.bucket_name
}

terraform import aws_s3_bucket.trigger_pipeline local.bucket_name

However, I am not sure how to use this import statement. Do I use it after the resource block? Do I use it at the beginning of the same file? If I use it as it is, under the resource block, I get this error: Invalid block definition: Either a quoted string block label or an opening brace ("{") is expected here. HCL points at the dot here: aws_s3_bucket.trigger_pipeline.

Edit: So first I defined the S3 resource as shown in the question above. Then I ran terraform init. Next, I ran terraform import aws_s3_bucket.trigger_pipeline "myoriginalbucketname" on the CLI. However, I still get the error that:

Before importing this resource, please create its configuration in the root module. For example:
resource "aws_s3_bucket" "trigger_pipeline" {
  # (resource arguments)
}

I guess I am getting the sequence of events wrong.
import an existing bucket in terraform
You can use the --query logic to filter the listed objects locally to only those that are zero bytes big:

aws s3api list-objects-v2 --bucket example-bucket --query 'Contents[?Size==`0`]'

Or, if you just want to see the list of keys without other metadata, you can further filter the list:

aws s3api list-objects-v2 --bucket example-bucket --query 'Contents[?Size==`0`].Key'

(For both of these, replace the outer ' with " when running on Windows.) Further, if the goal is to remove these objects, you can use jq and a subshell to construct a query that deletes the targeted objects:

aws s3api delete-objects --bucket example-bucket --delete \
  "$(aws s3api list-objects-v2 --bucket example-bucket --query 'Contents[?Size==`0`].Key' |\
  jq '{"Objects": map({"Key":.})}')"

There isn't a direct way to do this same sort of construct with Windows's command interpreter.
At the moment some files being uploaded are getting corrupted: they'll have a file size of 0 bytes. May I ask how I can query my S3 bucket and filter by a specific size? I'm trying to query for objects whose size is 0 bytes. At the moment I have two queries. The first one lists all the files recursively in the bucket, but with no sorting:

aws s3 ls s3://testbucketname --recursive --summarize --human-readable

The second one sorts, but only when provided a prefix; in my case the prefix is the folder name. My current bucket structure is {accountId}/{filename}:

aws s3api list-objects-v2 --max-items 10 --bucket testbucketname --prefix "30265" --query "sort_by(Contents,&Size)"

30265 is the accountId/folder name. When the prefix isn't provided, the sort doesn't quite work. Any help would be greatly appreciated. This query works well for filtering the name, which is a string:

aws s3api list-objects --bucket testbucketname --query "Contents[?contains(Key, '.jpg')]"

Unfortunately I couldn't use contains for Size and there isn't an equals.
How to get query s3 bucket by specific file size
Your variable default values can't be dynamic. They must be static values. Thus, instead of having var.lambdas, in your case it would be better to use locals:

variable "lambdas" {
  type = map(string)
  default = {
    "lambda1_name" = "lambda_function1",
    "lambda2_name" = "lambda_function2"
  }
}

locals {
  lambdas = { for key, value in var.lambdas : "${key}-${local.global_suffix}" => value }
}

Then you would:

resource "aws_lambda_function" "main" {
  for_each = local.lambdas
  ....
}
I am trying to use a map variable (which has 2 lambda names). Also, I want to pass a local variable inside the key values, as shown in the example below. However, I get an error saying variables are not allowed here. Any suggestions/advice?

variables.tf:

variable "lambdas" {
  type = map(string)
  default = {
    "lambda1_name-${local.global_suffix}" = "lambda_function1",
    "lambda2_name-${local.global_suffix}" = "lambda_function2"
  }
}

locals {
  global_suffix = "${var.env}-${var.project}${var.branch_hash}"
}

main.tf:

resource "aws_lambda_function" "main" {
  for_each         = var.lambdas
  function_name    = each.key
  handler          = "${each.value}.${var.handler}"
  filename         = "${path.module}/modules/lambda-main/${each.value}.zip"
  source_code_hash = data.archive_file.init[each.key].output_base64sha256
  role             = module.lambda_iam_role.arn
  runtime          = "python3.6"
  memory_size      = "2048"
  timeout          = "900"
  tags             = local.tags
  description      = "${var.project} Lambda Function"
}

I am trying to use one lambda resource block to create 2 lambda functions (hence using the map variable).
Local variables inside map variables
PHP 8.0 is only supported if you use the Ubuntu standard:5.0 CodeBuild image. The fact that you are getting this error means that you are using a different image than Ubuntu standard:5.0.
I am working with AWS CodeBuild, and I am facing an issue where I am building a docker image for deployment. However, AWS shows me the error message Unknown runtime version named '8.0' of php. This build image has the following versions: 7.3, 7.4. However, in the documentation it says that it supports PHP 8.0. Note that I am working with Laravel 9.0 with Laravel Sail, MySQL, Redis and MailHog containers. I am attaching my buildspec.yml file for reference:

version: 0.2
phases:
  install:
    runtime-versions:
      php: 8.0
    commands:
      - echo "PHP Version ⬇"
      - php -v
      - echo "Initiation of build 🏠"
      - echo "Installing Composer 🎺"
      - curl -s https://getcomposer.org/installer | php
      - mv composer.phar /usr/local/bin/composer
      - echo "Installing dependencies 📦"
      - composer install
      - echo "Installed dependencies 📦"
  build:
    commands:
      - echo "Building 🏗"
      - composer build
      - echo "Built 🏗"
  post_build:
    commands:
      - echo "Post build 🏗"
      - echo "Build completed on `date` 🏗"
AWS CodeBuild Error, unsupported runtime version PHP 8.0
I think the equivalent of the Ubuntu packages on Amazon Linux 2 (which Lambda uses) would be:

FROM public.ecr.aws/lambda/nodejs:14
COPY index.js package.json cad/ ${LAMBDA_TASK_ROOT}
RUN yum install -y libgl1-mesa-devel libx11-devel mesa-libGL-devel
RUN npm install
CMD ["index.handler"]
I am writing an AWS Lambda function using Node.js which is deployed via a container image. I have used the base Node.js Dockerfile image for Lambda provided at the link below to configure my image. This works well. My image is deployed and my Lambda function is running.

https://docs.aws.amazon.com/lambda/latest/dg/images-create.html#images-create-from-base

Here is the Dockerfile:

FROM public.ecr.aws/lambda/nodejs:14
COPY index.js package.json cad/ ${LAMBDA_TASK_ROOT}
# Here I would like to install libgl1-mesa-dev, libx11-dev and libglu1-mesa-de
RUN npm install
CMD ["index.handler"]

However, I now need to install additional dependencies on the image. Specifically I need OpenGL to use PDFTron to convert CAD files to PDF, according to the PDFTron documentation here. So I require libgl1-mesa-dev, libx11-dev and libglu1-mesa-de.

The information in the AWS documentation above states:

Install any dependencies under the ${LAMBDA_TASK_ROOT} directory alongside the function handler to ensure that the Lambda runtime can locate them when the function is invoked.

If this was an ubuntu or alpine image I could install using apt-get or apk add. But neither is available on this base AWS Lambda Node image since this isn't an ubuntu or alpine image. So my question is, how do I install libgl1-mesa-dev, libx11-dev and libglu1-mesa-de on this image so that the Lambda runtime can locate them when the function is invoked?
How to install dependencies in base AWS Lambda Node.js Dockerfile image
You are basically asking for a failover mechanism. You want to route requests to the primary target group, and then if it goes unhealthy, route to the next target group based on the priority score. However, this article specifically mentions that failover between target groups is not supported by default.

The Application Load Balancer distributes traffic to the target groups based on weights. However, if all targets in a particular target group fail health checks, then the Application Load Balancer doesn't automatically route the requests to another target group or failover to another target group. In this case, if a target group has only unhealthy registered targets, then the load balancer nodes route requests across its unhealthy targets. A weighted target group should not be used as a failover mechanism when all the targets are unhealthy in a specific target group.

However, if your problem is just the Lambda cold start, you can always schedule a ping to your Lambda every 5 minutes. I would think this is much easier than having to duplicate the same endpoint in ECS, especially from an ops and maintenance perspective. Finally, maybe obvious, but if you really want it, you can always implement a script to do so.

EDIT: There is another way using Route 53, since it has a failover mechanism. Create 2 ALBs, one backed by ECS and another by Lambda, and use failover routing.
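To illustrate the warm-up suggestion, here is a rough Python sketch (not part of the original answer; the "warmup" marker field is an assumption about how the scheduled event's input is configured) where the handler returns early on scheduled pings so they stay cheap:

def handler(event, context):
    # Scheduled EventBridge pings carry a hypothetical "warmup" marker in their input.
    if isinstance(event, dict) and event.get("warmup"):
        return {"statusCode": 200, "body": "warm"}
    # ... normal request handling continues here ...
    return {"statusCode": 200, "body": "handled"}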
There is an ability to forward requests to multiple weighted target groups. Is there a way to set up priorities for target groups, not weights? What I'm trying to achieve is: I want to have a rule with 2 target groups, one with ECS services scaled down (to 0) and another one with lambdas. If I need a performance boost (to avoid lambda warmups) I want to be able to scale up the ECS cluster and have it take over the request handling. That would be very nicely possible if I could set up two target groups as I described. When the number of healthy targets in ECS is 0, it won't be able to handle any requests and the infra would route all the requests to the lambda target group. When the number of healthy targets in ECS is 1 or more, based on the priority, all the requests would go to it. Is that possible? If not, what is the alternative approach to achieve what I have described?
Is there a way to set up priorities for target groups, not weights?
Sadly you can't do this. Instead, it would be better to send all your events to a lambda function, and the function would take care of filtering and re-distributing the events further down your message pipeline.
I'm creating a rule for EventBridge and I want the rule to match the following structure:

{
  "id": "<string>",
  "changes": {
    "name": {
      "old": "old name",
      "new": "new name",
    },
  }
}

Inside of the changes object, I don't know what the keys are. Once an attribute is modified, I'll receive the key name, the old value, and the new value in the format referenced above. I want to verify that the changes object exists. According to their docs, I can only use the Exists matcher on leaf nodes and not intermediate nodes. Is there a wildcard character I could use to do something of the following:

{
  "detail": {
    "id": [{ "exists": true }],
    "changes": [{
      *: {
        "old": [{ "exists": true }],
        "new": [{ "exists": true }],
      }
    }]
  }
}
Is there a wildcard character in AWS EventBridge rules?
They come preinstalled, as per the official documentation:

Your code runs in an environment that includes the SDK for Python (Boto3), with credentials from an AWS Identity and Access Management (IAM) role that you manage.

As regards your task timeout issue, try increasing the timeout of your Lambda function in the General Configuration section of your Lambda settings. It defaults to 3 seconds, which is probably too short.
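If you prefer to raise the timeout from code rather than the console, a small boto3 sketch (the function name is a placeholder) would be:

import boto3

lambda_client = boto3.client("lambda")
# Bump the timeout from the 3-second default to something more workable.
lambda_client.update_function_configuration(
    FunctionName="my-function",  # placeholder name
    Timeout=60,
)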
My lambda function is failing (without an error message, just saying that the task had timed out), but running the same function with the same permissions locally works fine. The only difference I can think of is that locally I needed to install boto3 and botocore, and in Lambda I didn't do any of that, as I expected that they come preinstalled. But the function failing made me suspect: is there a chance that botocore.exceptions or boto3 do not come preinstalled?
Do Lambda functions come preinstalled with AWS libraries? (boto3, botocore)
Answering my own question. I ended up using SageMaker Processing jobs for this, as initially suggested by the other answer. I found this library, developed a few months ago: Sagemaker run notebook, which helped me keep my notebook structure and cells as I had them, run the notebook using a bigger instance, and modify it in a smaller one. The output of each cell, along with the plots I had, was saved in S3 as a Jupyter notebook. I see that no constant support is given to the library, but you can fork it, make changes to it, and use it as per your requirements, for example creating a docker container based on your needs.
I'm currently using a SageMaker notebook instance (not from SageMaker Studio), and I want to run a notebook that is expected to take around 8 hours to finish. I want to leave it overnight and see the output from each cell; the output is a combination of print statements and plots. However, when I start running the notebook and make sure the initial cells run, I close the JupyterLab tab in my browser, and some minutes after, I open it again to see how it is going, but the notebook is stopped. Is there any way I can still use my notebook as it is, see the output from each cell (prints and plots), and not have to keep the JupyterLab tab open (turn my laptop off, etc.)?
Run Sagemaker notebook instance and be able to close tab
As commenter @luk2302 already said, it is not possible. The fact that you are trying to solve your problem with regex points to an interesting and often underestimated issue: how to effectively model your data in S3. S3 is commonly thought of as just a "disk in the cloud". But it is more like a database. Usually, we design our data in databases with relations etc. and use indexes. We create a data model based on the data we have and the way we want to access it. You can do the same in S3.

You could think of the prefix as a kind of index. Everything that belongs together and that you want to have fast access to should have the same prefix.

For example: Imagine your application stores images. Furthermore, imagine you went on a nice vacation in France in 2019. You could structure it like this (in terms of prefixes):

/images/2019_france

Now you would need to use a "regex" if you want all pictures taken in France:

/images/*_france

And as we know, that does not work. So if you find yourself often looking for pictures taken in a specific country, you probably should structure your data in S3 like this:

/images/france/2019

Now it is easy to find all the pictures taken in France by using the prefix:

/images/france

If you more often look for pictures by year, then maybe you need to structure your bucket like this:

/images/2019

I hope the point is clear: prefix design is important when working with S3.
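To make the prefix-as-index idea concrete, here is a small boto3 sketch (bucket and prefix names are illustrative) that lists only the keys under one prefix:

import boto3

s3 = boto3.client("s3")
# Only keys beginning with "images/france/" come back; no client-side regex needed.
for page in s3.get_paginator("list_objects_v2").paginate(Bucket="my-photo-bucket", Prefix="images/france/"):
    for obj in page.get("Contents", []):
        print(obj["Key"])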
Can I use regex as a prefix in listObjects from Amazon S3 in the Java AWS SDK
Does amazons3.listObjects(bucket,prefix) accept regex as a prefix to find objects?
Firehose uses a lambda function to transform records before they are delivered to the destination, in your case OpenSearch (ES), so it is only used to modify the structure of the data and can't be used to influence CRUD actions. Firehose can only insert records into a specific index. If you need a simple option to remove records from an ES index after a certain period of time, have a look at the "Index rotation" option when specifying the destination for your Firehose stream.

If you want to use CRUD actions with ES and keep using Firehose, I would suggest sending records to an S3 bucket in the raw format and then triggering a lambda function on the object upload event that will perform a CRUD action depending on fields in your payload.

A good example of performing CRUD actions against ES from lambda: https://github.com/chankh/ddb-elasticsearch/blob/master/src/lambda_function.py

This particular example is built to send data from DynamoDB streams into ES, but it should be a good starting point for you.
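As a rough sketch of that S3-triggered approach (the endpoint URL, index name and payload fields are assumptions, not taken from the original answer), the handler could read the uploaded record and pick the HTTP verb to send to the cluster:

import json
import boto3
import requests  # assumes the requests library is packaged with the function

s3 = boto3.client("s3")
ES_DOC_URL = "https://search-mydomain.example.com/my-index/_doc"  # placeholder endpoint

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = json.loads(s3.get_object(Bucket=bucket, Key=key)["Body"].read())
        doc_id = body["id"]                    # assumed payload field
        action = body.get("action", "upsert")  # assumed payload field
        if action == "delete":
            requests.delete(f"{ES_DOC_URL}/{doc_id}")
        else:
            requests.put(f"{ES_DOC_URL}/{doc_id}", json=body)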
I'm not seeing how an AWS Kinesis Firehose lambda can send update and delete requests to ElasticSearch (AWS OpenSearch Service).

The Elasticsearch document APIs provide for CRUD operations: https://www.elastic.co/guide/en/elasticsearch/reference/current/docs.html

The examples I've found deal with the Create case, but don't show how to do delete or update requests.

https://aws.amazon.com/blogs/big-data/ingest-streaming-data-into-amazon-elasticsearch-service-within-the-privacy-of-your-vpc-with-amazon-kinesis-data-firehose/
https://github.com/amazon-archives/serverless-app-examples/blob/master/python/kinesis-firehose-process-record-python/lambda_function.py

The output format in the examples does not show a way to specify create, update or delete requests:

output_record = {
    'recordId': record['recordId'],
    'result': 'Ok',
    'data': base64.b64encode(payload)
}

Apart from the examples, I'm not finding the definition of the output format for what the Kinesis Firehose lambda handler should return.
How can AWS Kinesis Firehose lambda send update and delete requests to ElasticSearch?
Managed to resolve by using --context instead of --parameter:

self._resource_name = self.node.try_get_context('resourcename')

cdk deploy --context resourcename=value
I am trying to use a CfnParameter in the AWS Python CDK to pass in a value; this will then be included in subsequent resource names.

_resource_name_param = CfnParameter(self, 'resourcename', type='String', description='base name for res')
self._resource_name = _resource_name_param.value_as_string

e.g. used in ec2 naming:

instance_name=self._resource_name + '-ec2'

When I run cdk deploy --parameters resourcename=xyz-123 however it returns an error...

jsii.errors.JSIIError: ID components may not include unresolved tokens: ${Token[TOKEN.199]}-ec2

Any help is appreciated. Thanks very much!
ID components may not include unresolved tokens: ${Token[TOKEN.199]}-ec2 when using CfnParameter
To integrate saml2aws with Okta, you need to create a profile in saml2aws first.

Configure Profile

saml2aws configure \
  --skip-prompt \
  --mfa Auto \
  --region <region, ex us-east-2> \
  --profile <awscli_profile> \
  --idp-account <saml2aws_profile_name> \
  --idp-provider Okta \
  --username <your email> \
  --role arn:aws:iam::<account_id>:role/<aws_role_initial_assume> \
  --session-duration 28800 \
  --url "https://<company>.okta.com/home/amazon_aws/......."

The URL, region, etc. can be taken from the Okta integration UI.

Login

saml2aws login --idp-account <saml2aws_profile_name>

That should prompt you for a password and MFA if it exists.

Verification

aws --profile=<awscli_profile> s3 ls

Then finally, just export AWS_PROFILE with

export AWS_PROFILE=<awscli_profile>

and use awscli directly:

aws sts get-caller-identity
I use saml2aws with Okta authentication to access AWS from my local machine. I have added the k8s cluster config to my machine as well. While trying to connect to k8s, for example to list pods, a simple kubectl get pods returns an error:

[Errno 2] No such file or directory: '/var/run/secrets/eks.amazonaws.com/serviceaccount/token'
Unable to connect to the server: getting credentials: exec: executable aws failed with exit code 255

But if I do saml2aws exec kubectl get pods I am able to fetch pods. I don't understand if the problem is with how credentials are stored, or where to even begin to understand the problem. Any kind of help will be appreciated.
SAML2AWS connecting to k8s issues
No. Edge Locations are used for Amazon Route 53, Amazon CloudFront and AWS Lambda@Edge. They are located in places outside of regions.

I think of Local Zones as Availability Zones that are outside of the Region's normal geographic description. Normally, all AZs belonging to a region are within a particular distance of each other. A Local Zone is outside of that distance, sort of 'attaching' itself to a more-distant region. It's a way that AWS can provide more geographic coverage without having to create yet another multi-AZ region (which is rather expensive!).
Is it okay to say local zones are an extension of edge locations, as they both focus on bringing services closer to the end user, with local zones allowing more services such as EC2 instances and edge locations focusing on CDN and Route 53?
Aws local zone vs Edge locations [closed]
Your issue seems to be caused by the AWS Command-Line Interface (CLI), which has a pager. You can turn off the pager by appending this to your aws command:

--no-paginate

Alternatively, put these lines in the ~/.aws/config file:

[default]
cli_pager=

For more information, see: Using AWS CLI pagination options - AWS Command Line Interface
AWS Lambda bash deploy script:

01 #!/usr/bin/env bash
02 cp package.json ./dist
03 cp -a ./env/ ./dist/env
04 cd ./dist
05 npm install --silent --only=prod
06 zip -q -r ../function.zip .
07 cd ../
08 aws lambda update-function-code --function-name $FUNCTION_NAME --zip-file fileb://function.zip
09 rm ./function.zip

The above script works fine, however, the prompt is not released after the L08 aws lambda update-function-code step; instead, I see the output from the AWS CLI in a text file and the prompt just hangs (must press Q to quit).

{
  "FunctionName": "foo",
  "FunctionArn": "arn:aws:lambda:xxx:xxxx:function:xxx",
  "Runtime": "nodejs14.x",
  "Role": "arn:aws:iam::xxx:role/xxxxxxxx",
  "Handler": "index.handler",
  "CodeSize": 24471144,
  "Description": "",
  "Timeout": 3,
  "MemorySize": 128,
  "LastModified": "2022-01-27T23:38:54.000+0000",
  "CodeSha256": "frtAS71k17XuXNB1xQUTyaBsTcE/8aBUWoBgpGrfFYA=",
  "Version": "$LATEST",
  "TracingConfig": {
    "Mode": "PassThrough"
  },
  "RevisionId": "61ea91d6-9e85-4aa1-93e1-0784c359f5b4",
  "State": "Active",
  "LastUpdateStatus": "InProgress",
  "LastUpdateStatusReason": "The function is being created.",
  "LastUpdateStatusReasonCode": "Creating",
  "Architectures": [
    "x86_64"
  ]
}

It seems the script gets stuck inside VIM? How do I get out and proceed to the next step?
Bash script release prompt after Lambda update
So for anyone else stuck where I was: I cannot say this is the right way to do it, because I never did get it working, but the closest I got to at least forming up my credentials and trying to make a connection was this:

const client = new LexRuntimeV2Client({
  region: "us-east-1",
  credentials: new AWS.Credentials({
    accessKeyId: "my_IAM_access_key_id",
    secretAccessKey: "my_secret_access_key"
  })
});

const lexparams = {
  "botAliasId": "my alias_id",
  "botId": "bot_id_from_lex",
  "localeId": "en_US",
  "inputText": "hello, this is a test sample",
  "sessionId": "some_session_id"
};

let cmd = new StartConversationCommand(lexparams);

try {
  const data = await client.send(cmd);
  console.log("Success. Response is: ", data.message);
} catch (err) {
  console.log("Error responding to message. ", err);
}

As said, buyer beware, and best of luck out there! I hope this might help somebody in some slight way. We are taking a momentary break on this problem until a member of our team with better aws fu can take a look at it. :-)
My context is this: I am attempting to build a chat bot into my Mozilla Hubs client, which is a node js / React project. I have a lex bot created on AWS, and I have installed the client-lex-runtime-v2 package and can import it successfully, but I have no idea how to set up a StartConversationCommand, give it credentials, etc. Most of the javascript examples seem to go the other way, where Lex calls a lambda function after processing a request, but I have user input in my app and I need to send it to Lex, and then deliver the resulting text back to the user inside my application.This seems like very basic Lex usage - if anyone could point me to a code example I would be most grateful.
How do I initiate a conversation with AWS LEX from node js?
As of now it is not supported. The FSx documentation mentions this here: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/wfsx-volumes.html

You cannot use FSx for Windows File Server volumes in a Windows containers on Fargate configuration.
I am trying to figure out if I can use Fargate for our app. As it uses network storage, we decided to use FSx. However, I am unable to find documentation from AWS on how to mount FSx storage on a Fargate Windows container. Is it possible? I have only found this AWS article on how to mount FSx storage on an ECS EC2 container host. Could somebody confirm if FSx on Fargate Windows containers is possible?
Is it possible to mount FSx storage on a Fargate windows containers?
Solved the problem. So, confusingly, despite the $default catch-all route being a route, you don't actually specify it using the AddRoutes() method. The clue is in the fact that it takes an enum called HttpMethod. Instead, a $default route is applied automatically when you set the DefaultIntegration attribute on the HttpApiProps object of your HttpApi.

So in my case, httpApi from the examples in the question was an instance of the HttpApi class.

var httpApi = new HttpApi(this, Constants.API_GATEWAY_ID, new HttpApiProps()
{
    ApiName = "Your API name",
    CreateDefaultStage = true,
    DefaultIntegration = lambdaProxyIntegration
});

Once you specify the DefaultIntegration, AWS will set the $default route as expected.
Within the AWS console, it is possible to add a route to an API Gateway with the value of $default. This then removes the ability to input an HTTP method for the route. The AWS console describes it as: "You can also specify one $default route per API. The $default route is invoked when the request to the API matches no other routes."

I am trying to recreate this using AWS CDK v2 (C# in this case), however am having no luck. I integrate my API Gateway with a Lambda function based on an image stored in ECR; unfortunately I cannot get my API Gateway to be created with the $default route.

httpApi.AddRoutes(new AddRoutesOptions()
{
    Path = "$default",
    Integration = lambdaProxyIntegration
});

Doing the above defaults the HTTP method to POST. It is also possible to specify an HTTP method like so:

httpApi.AddRoutes(new AddRoutesOptions()
{
    Path = "$default",
    Integration = lambdaProxyIntegration,
    Methods = new [] {HttpMethod.ANY}
});

But there is no option equivalent to when you enter $default within the AWS console and it greys out the drop down to select an HTTP method. Any ideas? Thanks
How can I declare the $default catch-all route using CDK for an API Gateway?
The roles are defined here. Looking at the definitions you can see what they are used for:

- FilePublishingRole - access to S3 with the associated KMS key
- ImagePublishingRole - access to ECR
- LookupRole - role to perform lookups with the various fromLookup methods
- DeploymentActionRole - access to CloudFormation, KMS and S3
The following IAM roles were found in the bootstrap of cdk:

- FilePublishingRole
- ImagePublishingRole
- LookupRole
- DeploymentActionRole
- CloudFormationExecutionRole

I understand the meaning of CloudFormationExecutionRole, but in what situations are the other IAM roles used? I would like to know if there is any documentation that clearly states this.
About the IAM role in cdk bootstrap
As @jordanm said, you can't. github.com/hashicorp/terraform/issues/22544 - the last comment here contains a workaround, but not a great one.

EDIT: The not-great workaround in question is:

As a workaround, since we use the S3 backend for managing our Terraform workspaces, I block the access to the Terraform workspace S3 bucket for the Terraform IAM user in my shell script after Terraform has finished creating the prod resources. This effectively locks down the infrastructure in the workspace and requires a IAM policy change to re-enable it.
I have a variable defined locally, called local.protect, and defined in variables.tf with default = true and type = bool. How do I get around the constraint on using variables in the prevent_destroy argument? I thought I could local-ize it (eg, locals { protect = var.protect }) but that doesn't work, either.

│ Error: Variables not allowed
│
│   on main.tf line 105, in resource "aws_eip" "backend_eip":
│  105:     prevent_destroy = local.protect
│
│ Variables may not be used here.
╵
╷
│ Error: Unsuitable value type
│
│   on main.tf line 105, in resource "aws_eip" "backend_eip":
│  105:     prevent_destroy = local.protect
│
│ Unsuitable value: value must be known

In main.tf:

resource "aws_eip" "backend_eip" {
  vpc        = true
  depends_on = [module.vpc.igw_id]

  lifecycle {
    prevent_destroy = local.protect # line 105
  }
}

In variables.tf:

variable "protect" {
  type        = bool
  description = "Whether (true) or not (false) to protect EIP from deletion via `terraform destroy`."
  default     = true
}

Use case here is being able to set this flag at runtime, for a set of resources (like five EIPs), all at once.
How to parameterize prevent_destroy lifecycle configuration in Terraform?
You can't delete items through secondary indexes. Secondary indexes act like a read-only view of the base table, so if you want to delete data, you'll have to delete it from the base table.

Ideally you project the key attributes from the base table into the Global Secondary Index. If you want to delete an item, you can just delete it from the base table using the values you can see in the GSI.

Note: This will delete the item from the base table and thus from all secondary indexes as well. If you just want to remove it from the GSI, you need to remove the attributes that are used as the keys for the GSI from the item in the base table; then it won't be replicated to the GSI.
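To illustrate that pattern in Python (the table and index names come from the question, but the base-table key attribute "pk" is an assumption), you query the GSI for the item and then delete through the base table's key:

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("projectDB")  # placeholder table name

def delete_by_project_id(project_id):
    # 1. Query the GSI; it returns the projected attributes, including the base-table key.
    resp = table.query(
        IndexName="projectId-index",
        KeyConditionExpression=Key("projectId").eq(project_id),
    )
    # 2. Delete each match through the base table's primary key ("pk" is assumed).
    for item in resp["Items"]:
        table.delete_item(Key={"pk": item["pk"]})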
I have a table with a GSI and am trying to delete an item using the GSI as below. But I get the following error: "The provided key element does not match the schema". I tried to query using the same key and it worked, so I already confirmed the provided key element is right. I tried to search whether a delete operation by GSI is possible or not but could not find good documentation. Could anyone tell me what the best approach is to delete an item using a GSI?

async function deleteProject(projectDB:string,projectId:string):Promise<any>{
  const params={
    TableName:projectDB,
    IndexName:'projectId-index',
    Key:{
      'projectId':projectId,
    },
    ExpressionAttributeNames: { '#a': 'projectId' },
    ExpressionAttributeValues: {":val": projectId},
    ConditionExpression:"#a = :val",
  }
  const result=await db.delete(params).promise();
  console.log('result',result);
  return result;
}
Can we delete item in DynamoDB using GSI by AWS SDK?
I think we are losing the benefits of the serverless model here, as we need a permanent TCP connection opened by the WS client. EC2 looks like a good match in general. If we know the periods of time we need to listen for market data, then one of the options could be to trigger a Fargate instance by a CloudWatch event, listen for messages for some time, and then close the connection and the Fargate instance. The pricing/benefits against EC2 will depend on the range of time we need to listen for incoming data.
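For reference, a minimal Python sketch of such a listener (using the third-party websocket-client package; the feed URL and function name are placeholders) that forwards each incoming message to a Lambda function:

import json
import boto3
import websocket  # third-party "websocket-client" package

lambda_client = boto3.client("lambda")

def on_message(ws, message):
    # Fire-and-forget invoke so the listener keeps up with the feed.
    lambda_client.invoke(
        FunctionName="market-data-handler",  # placeholder function name
        InvocationType="Event",
        Payload=json.dumps({"message": message}),
    )

ws = websocket.WebSocketApp("wss://example-market-feed", on_message=on_message)
ws.run_forever()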
Using AWS, I want my backend to listen to a websocket connection from an external server (that I do not control). This websocket connection emits market data. Each time the external server pushes data onto the websocket, I want a Lambda function to be triggered. To be clear: in this situation, AWS acts as the client. Is it possible to achieve this functionality in a serverless manner (without using EC2)? I looked into AWS IoT pub/sub and API Gateway with WS, but these services do not act as the client (I might be wrong though).
How to create a serverless websocket client on AWS
The answer is no, you don't need to deploy it more than once. If you don't specify a namespace in the args section of your external-dns deployment using --namespace=, it works for all of the namespaces in the cluster.
I have successfully installed external-dns in my Kubernetes cluster following the official steps on github; it creates a Route53 record and I am able to access it correctly. I installed this in a specific namespace. My question is, do I need to deploy external-dns in each namespace (and then create the service account, cluster role binding and deployment), or can I use the same deployment across namespaces?
External DNS: Configure it in all namespaces
No, that won't work:

LISTEN test;
ERROR: cannot execute LISTEN during recovery
NOTIFY test;
ERROR: cannot execute NOTIFY during recovery
I created a read replica of a PostgreSQL 10 instance in AWS RDS. I was assuming that my clients would be able to LISTEN for notifications on the replica, but that does not seem to be the case. I have tried to research the limitation, but I have not found anything concrete. Can clients LISTEN/UNLISTEN for NOTIFY events on a read replica?
Can you perform LISTEN/UNLISTEN on a PostgreSQL read replica?
Unfortunately, it is cached only for that instance of the lambda. Extensions run inside the same container as the lambda. Therefore, they will not share memory between different instances of the lambda. More specifically, every time a lambda has a cold start, a fresh process of the extension is started.

Disclaimer: I just published a post explaining more about extensions: https://aws.amazon.com/blogs/apn/zero-friction-aws-lambda-instrumentation-a-practical-guide-to-extensions/

I believe that it will help you understand more about the power of extensions, and how they can help you in other ways.
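Separately from extensions, a common way to cut repeated Secrets Manager calls within one warm instance is to fetch the secret at module load time so it is reused across invocations of that instance; a minimal sketch (the secret id is a placeholder):

import json
import boto3

# Runs once per container/instance, not on every invocation.
_secrets = boto3.client("secretsmanager")
_CACHED = json.loads(
    _secrets.get_secret_value(SecretId="my-app/secret")["SecretString"]  # placeholder id
)

def handler(event, context):
    # _CACHED is reused for every invocation served by this warm instance.
    return {"statusCode": 200, "body": f"loaded {len(_CACHED)} secret keys"}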
I'm trying to improve the cold start performance of a lambda. One of the things that takes time at startup is fetching information from Secrets Manager. I've found a few solutions that talk about caching information from Secrets Manager using lambda extensions.

https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/cache-secrets-using-aws-lambda-extensions.html
https://github.com/square/lambda-secrets-prefetch
https://github.com/hariohmprasath/aws-lambda-extensions

If you cached a request from Secrets Manager using the lambda extension approach, is it cached only for that instance of the lambda or is it cached for all instances of the lambda? If it's cached for all instances, then in theory it would help me reduce cold start times.
Are lambda extensions shared across multiple instances of a lambda?
With CDKTF you can specify multiple providers like this:

class MyStack extends TerraformStack {
  constructor(scope: Construct, ns: string) {
    super(scope, ns);

    new AwsProvider(this, "aws", {
      region: "eu-central-1",
    });

    const provider = new AwsProvider(this, "aws.two", {
      region: "us-east-1",
      alias: "aws.two",
    });

    // this cert is created in the us-central-1 region
    // it defaults to the provider without an alias
    new acm.AcmCertificate(this, "cert", {
      domainName: "cdktf.com",
      validationMethod: "DNS",
    });

    // this cert is created in the us-east-1 region
    const cert = new acm.AcmCertificate(this, "cert", {
      domainName: "example.com",
      validationMethod: "DNS",
      provider,
    });
  }
}

This is documented in the examples: https://github.com/hashicorp/terraform-cdk/blob/main/examples/typescript/aws-cloudfront-proxy/main.ts
I am using CDKTF and Python for a project where I am generating JSON output that will be interpreted by Terraform. I have a use case where I need to send in multiple aliased AWS providers. I am able to specify a single provider for the stack by using the add_provider method, but I cannot add a secondary aliased provider without using add_override. Is there a way for me to do this without getting name conflicts in the keys, where CDKTF gives an error that I am specifying the aws key twice? Basically I am asking if there is a way for me to specify the key I use when specifying the keys in providers, so that I get something like:

"providers": {
  "aws": "aws.account-one",
  "aws.two": "aws.account-two"
}

Kindly let me know if I am doing this wrong. Thanks in advance.
Multiple AWS providers using CDKTF
For HTTP APIs you have to use apigatewayv2:

aws apigatewayv2 get-apis
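The same split exists in boto3 if you are scripting this in Python; a quick sketch:

import boto3

# REST APIs (v1) and HTTP/WebSocket APIs (v2) live behind different clients.
rest_apis = boto3.client("apigateway").get_rest_apis()
http_apis = boto3.client("apigatewayv2").get_apis()

print([api["name"] for api in rest_apis["items"]])
print([api["Name"] for api in http_apis["Items"]])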
The command aws apigateway get-rest-apis returns only REST APIs. As you can see in the following screenshot I have 3 APIs, but the command returns only one API (the REST protocol API). How do I get all 3 APIs?

aws apigateway get-rest-apis

{
    "items": [
        {
            "id": "xxxxxxxx",
            "name": "zabbixPy-API",
            "description": "Created by AWS Lambda",
            "createdDate": "2021-10-31T10:16:23+00:00",
            "apiKeySource": "HEADER",
            "endpointConfiguration": {
                "types": [
                    "REGIONAL"
                ]
            },
            "disableExecuteApiEndpoint": false
        }
    ]
}
AWS CLI get APIGateway returns only REST API's
S3 static websites require public access. There is no such thing as a private S3 website in a VPC or accessible only through a VPC endpoint. To make your S3 website work, you must set your bucket to public, or use CloudFront, which is also accessible only through the internet. But at least your bucket can be private when you front it with CloudFront (though not the website itself).
I am looking to host a static website on AWS, using an S3 bucket. I followed these steps. The site is a usual directory with subdirectories:

app
│   index.html
└───scripts
│   │   things.js
│   │   stuff.js
└───images
│   img1.png
│   img2.jpg

I want to make the website accessible only to people inside our VPC. I attached the following type of policy to the bucket holding the site files (adding my specific bucket name and VPC id):

{
  "Version": "2012-10-17",
  "Id": "Policy1415115909152",
  "Statement": [
    {
      "Sid": "Access-to-specific-VPCE-only",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my_bucket*",
      "Condition": {
        "StringEquals": {
          "aws:sourceVpce": "vpce-blahblahblah"
        }
      }
    }
  ]
}

I also set up a VPC endpoint, with the endpoint ID set as the value for aws:sourceVpce inside the bucket policy. I set up the VPC endpoint following these steps. But I still cannot access this site in my browser (I'm assuming that since I am accessing the AWS console with the same browser, AWS is aware I am inside the VPC).

<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>blahblahblah</RequestId>
  <HostId>blahblahblah</HostId>
</Error>
Cannot access static website hosted on S3 bucket, from within VPC
You have defined the Lambda function runtime but you haven't mentioned where the entry point to the function is. That is what the handler argument specifies - it is the method in your function code that processes events. It should have a format similar to:

def handler_name(event, context):
    ...
    return some_value

The value of the handler argument is comprised of the below, separated by a dot:

- The name of the file in which the Lambda handler function is located
- The name of the Python handler function

e.g. ingest-existing-files-lambda.lambda_handler calls the lambda_handler function defined in ingest-existing-files-lambda.py.

If your Lambda handler method is called lambda_handler & is inside ingest-existing-files-lambda.py, this should work:

resource "aws_lambda_function" "this" {
  filename      = "${path.module}/src/existing-files-lambda.zip"
  function_name = "ingest-existing-files-lambda"
  handler       = "ingest-existing-files-lambda.lambda_handler"
  role          = aws_iam_role.lambda.arn
  runtime       = "python3.9"
  timeout       = 900

  environment {
    variables = {
      source_bucket_arn      = var.source_bucket_arn
      destination_bucket_arn = var.destination_bucket_arn
    }
  }
}
I have defined a Lambda function with Terraform like this:

resource "aws_lambda_function" "this" {
  filename      = "${path.module}/src/existing-files-lambda.zip"
  function_name = "ingest-existing-files-lambda"
  role          = aws_iam_role.lambda.arn
  runtime       = "python3.9"
  timeout       = 900

  environment {
    variables = {
      source_bucket_arn      = var.source_bucket_arn
      destination_bucket_arn = var.destination_bucket_arn
    }
  }
}

resource "aws_iam_role" "lambda" {
  name               = "${var.prefix}-lambda-ingest"
  path               = "/service-role/"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "lambda.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

My python file is just this:

import os

def lambda_handler(event, context):
    print('Hello world from Terraform')
    return {
        'statusCode': 200,
    }

However, I am currently getting an error that:

│ Error: handler and runtime must be set when PackageType is Zip
│
│   with module.ingest_lambda.aws_lambda_function.this,
│   on ingest_lambda/main.tf line 8, in resource "aws_lambda_function" "this":
│    8: resource "aws_lambda_function" "this" {

What do I put as handler here? I already have runtime specified.
Why do I get "Error: handler and runtime must be set when PackageType is Zip" when deploying a Lambda function using Terraform?
You need to set the correct Content-Type for each object in S3, for example application/pdf or image/png. You can do this when uploading the object, or you can use the AWS S3 Console to modify it afterwards. Note that Content-Type is considered metadata.

Setting the correct Content-Type on the object means that when the object is served by S3 or CloudFront, that Content-Type will be conveyed to the client, allowing it to decide to display or download, as appropriate.
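For example, a boto3 sketch of setting the Content-Type at upload time (the local file, bucket and key are placeholders):

import boto3

s3 = boto3.client("s3")
s3.upload_file(
    "preview.png",                    # local file, placeholder
    "our-namespace",                  # bucket, placeholder
    "documents/7443912/preview.png",  # key, placeholder
    ExtraArgs={"ContentType": "image/png"},  # lets browsers render instead of download
)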
I have an AWS presigned download URL with a 20 second expiration:

https://our-namespace.s3.amazonaws.com/documents/7443912/ffb9bbc5-5f4f-4315-a4e8-418bc31dbef2.png?X-Amz-Security-Token=123&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20211007T004004Z&X-Amz-SignedHeaders=host&X-Amz-Expires=20&X-Amz-Credential=ASIA4SLAKW7L216GHWOI%2F20278006%2Fus-east-1%2GF3%2Faws4_request&X-Amz-Signature=123

When I load this URL in the browser, it forces a download. I'm looking for a way to display this as an image preview within the browser, instead of initiating a file download. My initial thought was to convert this URL into a blob and then display that blob in an image preview modal. The only issue is, I'm unsure how to do that. I found the following package: https://www.npmjs.com/package/rn-fetch-blob but it looks like this is no longer maintained. What would be the optimal way of displaying an image as a preview from the AWS download-only link?
How do I preview an image via a presigned S3 URL in React?
You can use the aws rds describe-db-snapshots CLI command to get a list of DB snapshots, and then run a local query using --query to get the latest DB snapshot using the SnapshotCreateTime field.

SnapshotCreateTime -> (timestamp)
Specifies when the snapshot was taken in Coordinated Universal Time (UTC). Changes for the copy when the snapshot is copied.

Something like this:

aws rds describe-db-snapshots \
  --db-instance-identifier your-id \
  --query "sort_by(DBSnapshots, &SnapshotCreateTime)[-1].{id:DBSnapshotIdentifier,time:SnapshotCreateTime}"

Note that this query sorts snapshots by their ascending SnapshotCreateTime and then simply takes the last one in the list (as dictated by [-1]), which will be the one that was last created.

[Added] If you're looking for snapshots of Aurora DB clusters then you'd have to use describe-db-cluster-snapshots in place of describe-db-snapshots, but otherwise the process is similar: use DBClusterSnapshots and DBClusterSnapshotIdentifier (in place of DBSnapshots and DBSnapshotIdentifier).
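If you would rather automate this from Python, a boto3 sketch of the same idea (the instance identifier is a placeholder):

import boto3

rds = boto3.client("rds")
snapshots = rds.describe_db_snapshots(DBInstanceIdentifier="your-id")["DBSnapshots"]

# Ignore snapshots still being created (they have no SnapshotCreateTime yet).
completed = [s for s in snapshots if "SnapshotCreateTime" in s]
latest = max(completed, key=lambda s: s["SnapshotCreateTime"])
print(latest["DBSnapshotIdentifier"], latest["SnapshotCreateTime"])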
Using the AWS CLI, how can I get the most recent snapshot for a particular DB instance? I can get them through the GUI easily but I'd like to automate it.
How can I get the name of the most recent snapshot for an RDS DB instance using the AWS CLI?
Can I use S3 as a source?

Sadly, you can't. You have to write a bash/powershell script which is going to use the AWS CLI to perform the copy from S3 to the instance. Then you invoke the script as part of your appspec.yml. In addition to this, your instances will need to have an instance role with permissions to S3.
I'm using aws-cdk for my architecture and I need one last piece of the puzzle. Is it possible to move files from S3 to specific directories on the EC2 instance? I have my .env file and nginx config files in an S3 bucket, and once the update has rolled out using CodePipeline/CodeDeploy I want to move the files into their place. I don't want to manage these files on GitHub or by having to SSH into the instance.

// appspec.yml
files:
  - source: /
    destination: /path/to/webserverdir

Can I use S3 as a source?
CodePipeline/CodeDeploy move files from S3 to EC2
In my case the issue was related to the outbound rules on the Application Load Balancer. I had to ensure that outbound traffic on port 443 was allowed.
I have an AWS Cognito User Pool configured to talk to a SAML IDP and that's working fine; the SAML assertion from the IDP to https://XXXX.auth.eu-west-1.amazoncognito.com/saml2/idpresponse works fine. A request is then made to the target group, such as https://xxxxxx:443/oauth2/idpresponse?code=2f6aab53-ad64....&state=....., which is based on the settings in Cognito's App Client Settings (via the callback URL), and I am getting an internal server error.

HTTP/2.0 500 Internal Server Error
server: awselb/2.0

I have traced the logs and extracted the salient elements:

ELB Status Code: 500 Actions Executed: Authenticate Lambda Reason Error AuthTokenEpRequestTimeout

I am guessing that the Cognito ALB authenticate process uses Lambda as part of its internal process, maybe to build the X-AMZN-OIDC* headers before forwarding to the target group. Our application is not using Lambda, and the Cognito client app has no triggers enabled (i.e., where you can customize the workflow); we have no customization of the workflow process. So there seems to be some internal error during the authentication process, and I can't see where this AuthTokenEpRequestTimeout timeout could be fixed. Anyone have ideas why this issue might happen, or pointers to help resolve it?

I just want to clarify a little about the AWS setup: the load balancer is internet facing. We allow internet traffic on port 443 and port 80. We have no outbound restrictions. We can see that the SAML assertion is working fine.
Application Loadbalancer authenticate with Cognito Internal 500 Error
Neptune by default uses Set cardinality. Each time you add a value you are expanding that Set, as you are not explicitly using Cardinality.single. Moreover, elementMap will only return one element of a Set cardinality property. To see them all, use valueMap instead.
I have a very weird issue with AWS Neptune DB. I can only change a property to a new value and can't use any of the previous values. I'm using gremlin and node.js. That sounds so weird, so let me add some code:

const DriverRemoteConnection = gremlin.driver.DriverRemoteConnection;
const dc = new DriverRemoteConnection(process.env.DB_URL, { authenticator, connectOnStartup: false });
await dc.open();
const g = gremlin.process.AnonymousTraversalSource.traversal().withRemote(dc);

await g.V().hasLabel('Wtf').drop().iterate();
await g.addV('Wtf').property('test', 'foo').iterate()
await g.V().hasLabel('Wtf').property('test', 'bar foo').iterate();
await g.V().hasLabel('Wtf').property('test', 'foo').iterate();
const wtf = await g.V().hasLabel('Wtf').elementMap().toList();
console.log(wtf);

So I think I should have a response like:

Map(3) {
  EnumValue { typeName: 'T', elementName: 'id' } => '7cbdd4d6-9d72-905f-c4b1-0ee9a31db29a',
  EnumValue { typeName: 'T', elementName: 'label' } => 'Wtf',
  'test' => 'foo'
}

but I have:

Map(3) {
  EnumValue { typeName: 'T', elementName: 'id' } => '7cbdd4d6-9d72-905f-c4b1-0ee9a31db29a',
  EnumValue { typeName: 'T', elementName: 'label' } => 'Wtf',
  'test' => 'bar foo'
}

Interesting thing: if I use any single word (without spaces) instead of bar foo, I get the proper result. And I have the same issue when I query the database not one after another. Did someone see such a problem? Do you know how to solve it?
Neptune DB doesn't change property value to the previous value
As far as I know the inventory report does not run on-demand. It's quite a heavy operation for AWS, as many buckets have billions of objects, so I can understand why they don't provide that service for free. The aws cli can of course be used to get an inventory, but it's incredibly slow (it takes HOURS if not days just to list all objects in a bucket of a few million objects). Basically the only real option for large buckets is custom scripting with parallel execution. There are quite some open source projects out there that do this. But since your original question is about the inventory report itself, I'm afraid there is no real alternative.
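For completeness, the simplest form of such a custom listing in Python (single-threaded, so only reasonable for smaller buckets; the bucket name is a placeholder) writes an ad-hoc inventory locally:

import csv
import boto3

s3 = boto3.client("s3")

# Walk the bucket and write a minimal ad-hoc "inventory" to a local CSV file.
with open("inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["key", "size", "last_modified"])
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket="example-bucket"):
        for obj in page.get("Contents", []):
            writer.writerow([obj["Key"], obj["Size"], obj["LastModified"].isoformat()])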
Is there any way to manually kick off an Amazon S3 Inventory report job?

I'm working on a project that creates daily inventory reports to another account but I can't seem to find a way to manually kick off the run. We're in the design / development phase of a data telemetry project and are tweaking our inventory configurations, but having to wait for the daily job to run to see if the configuration satisfies our requirements is really inconvenient and slowing us down.

Is there a way to manually kick off an inventory report run after making a configuration change? I've tried looking in the api documentation as well as the boto3 documentation and all I have found is a call to create a bucket inventory configuration but nothing to actually perform a run.

Thanks, Bill
Manually trigger an Amazon S3 Inventory Report
One reason for that is that you can very easily scale your architecture if you publish to SNS first. This is due to being able to implement a fanout scenario: Now you may only need your message in a single SQS queue, but later you may want to add a second one to process the same messages, maybe also invoke some lambda functions independently from your queue, or send the messages to some HTTP endpoint as well.
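A small boto3 sketch of that fanout wiring (the topic and queue ARNs are placeholders, and the SQS queue policy that allows SNS to deliver messages is omitted):

import boto3

sns = boto3.client("sns")

# Subscribe an existing SQS queue to an existing topic; publishers only talk to SNS.
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:orders",        # placeholder
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:123456789012:orders-queue",  # placeholder
)

sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:orders",
    Message='{"orderId": "42"}',
)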
I am currently building a microservices based backend for my E-Commerce setup. I need to push all the transactions to a queue service, but the AWS documentation says that I should publish my message to SNS and then subscribe my queue to a topic, while the SQS documentation also describes a way to send messages directly to SQS.

PS: I have already searched stackoverflow but none of the questions answer my specific use case. So why are there two solutions for the same thing, and why should I use SNS and pay extra money?
Advantage of Publishing Message to SNS rather directly pushing to SQS
As noted by Adrian Klaver, \gset is available only in psql. Try something like this, instead:

cur.execute("""
    with get_uri as (
        select aws_commons.create_s3_uri(
            'data-analytics-bucket02-dev',
            'output',
            'ap-southeast-1') AS s3_uri_1
    )
    select e.*
    from get_uri g
    cross join lateral aws_s3.query_export_to_s3(
        'select * from sample_table',
        g.s3_uri_1
    ) e
""")

Or, you can just nest the function call:

cur.execute("""
    select *
    from aws_s3.query_export_to_s3(
        'select * from sample_table',
        aws_commons.create_s3_uri(
            'data-analytics-bucket02-dev',
            'output',
            'ap-southeast-1'
        )
    )
""")
I'm trying to run the below PostgreSQL queries using Python.

SELECT aws_commons.create_s3_uri('data-analytics-bucket02-dev','output','ap-southeast-1') AS s3_uri_1 \gset

"SELECT * FROM aws_s3.query_export_to_s3('SELECT * FROM sample_table', :'s3_uri_1');"

The below is my Python code:

print('PostgreSQL database version:')
# cur.execute('SELECT version()')
select_query = "SELECT aws_commons.create_s3_uri('data-analytics-bucket02-dev','output','ap-southeast-1') AS s3_uri_1 \gset"
cur.execute(select_query)
cur.execute("SELECT * FROM aws_s3.query_export_to_s3('SELECT * FROM sample_table', :'s3_uri_1');")

Getting the below error
Executing Postgresql Query in Python
I created a second NAT gateway and added the route in the second private subnet. Now I'm able to create the environment without any issues.
I'm creating Apache Airflow in the console.

Status: Create failed
Last update: Error code: INCORRECT_CONFIGURATION
Message: You may need to check the execution role permissions policy for your environment, and that each of the VPC networking components required by the environment are configured to allow traffic. Troubleshooting: https://docs.aws.amazon.com/mwaa/latest/userguide/troubleshooting.html

I have read the network configuration, created two private subnets in my default VPC, created a NAT gateway, and added the NAT gateway route to the private subnet route table. What else am I missing?
Creating Amazon Managed Workflows for Apache Airflow [MWAA]: INCORRECT_CONFIGURATION
You have to install awsebcli to be able to use the eb CLI command.

python3 -m pip install awsebcli

More info regarding installation: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-install-advanced.html
I'm trying to deploy my Django application with AWS Elastic Beanstalk following this documentation: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-django.html

Everything works up till this point:

~/ebdjango$ eb init -p python-3.6 student-archive

When I run that line of code I get this error:

eb : The term 'eb' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ eb init -p python-3.6 studentarchive
+ ~~
    + CategoryInfo          : ObjectNotFound: (eb:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException

What could be some causes of this? Here's my django.config:

option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: StudentArchive/wsgi.py

Here's my requirements.txt:

asgiref==3.4.1 autopep8==1.5.7 backports.entry-points-selectable==1.1.0 certifi==2021.5.30 distlib==0.3.2 Django==3.2.6 filelock==3.0.12 pipenv==2021.5.29 platformdirs==2.2.0 pycodestyle==2.7.0 pytz==2021.1 six==1.16.0 sqlparse==0.4.1 toml==0.10.2 virtualenv==20.7.0 virtualenv-clone==0.5.6
eb : The term 'eb' is not recognized as the name of a cmdlet, function, script file, or operable program
As you noticed, boto3 throws the same exception type, ClientError, on all errors that were received from the server. There are other exceptions it can throw on other occasions (such as when boto3 finds an error in the request before even sending it to the server, or when it can't connect to the server), but all errors that DynamoDB itself returns are bunched together as a ClientError. You can parse this exception's string content to see if it contains a ResourceInUseException or not.

The boto3 documentation contains a (not very clear) explanation of this, and an example:

try:
    logger.info('Calling DescribeStream API on myDataStream')
    client.describe_stream(StreamName='myDataStream')
except botocore.exceptions.ClientError as error:
    if error.response['Error']['Code'] == 'LimitExceededException':
        logger.warn('API call limit exceeded; backing off and retrying...')
    else:
        raise error

It also notes that there is a nicer alternative which you may prefer, using a dynamic map of exceptions in the "client" object:

except client.meta.client.exceptions.BucketAlreadyExists as err:
    print("Bucket {} already exists!".format(err.response['Error']['BucketName']))
    raise err
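Applied to the create-table case from the question, a minimal sketch using that dynamic exception map (the table name and schema are illustrative):

import boto3

client = boto3.client("dynamodb")

try:
    client.create_table(
        TableName="my-table",  # placeholder
        AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
        BillingMode="PAY_PER_REQUEST",
    )
except client.exceptions.ResourceInUseException:
    print("Table already exists, continuing")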
So this is the exception:

botocore.errorfactory.ResourceInUseException: An error occurred (ResourceInUseException) when calling the CreateTable operation: Cannot create preexisting table

I am going through their tutorials, and have searched for some code examples of Python and DynamoDB exceptions, but so far no luck. I only see the ClientError exception, but not this specific one.

https://docs.aws.amazon.com/search/doc-search.html?searchPath=documentation&searchQuery=python%20dynamodb%20exceptions

I have tried several variants such as:

except boto3.ResourceInUseException:
except botocore.errorfactory.ResourceInUseException:

and various others, but no such exception exists. I am not sure what the proper way is to catch such an exception (when a table already exists). Thank you.
How to catch DynamoDB ResourceInUseException Python?
Your step function isn't being triggered because the PutObject events aren't being published to CloudTrail. S3 operations are classified as data events, so you must enable data events when creating your CloudTrail trail. The tutorial says next, next and create, which seems to suggest no additional options need to be selected. By default, Data events on the next step (step 2 - Choose log events - as of this writing) is not checked. You have to check it and fill in the bottom part to specify whether all buckets/events are to be logged.
After setting up the EventBridge rule, the S3 put object event still cannot trigger the Step Function. However, when I tried to change the event rule to EC2 status, it's working!!! I also tried to change the rule to all S3 events, but it is still not working.

Amazon EventBridge:

Event pattern:

{
  "source": ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["s3.amazonaws.com"],
    "eventName": ["PutObject"],
    "requestParameters": {
      "bucketName": ["MY_BUCKETNAME"]
    }
  }
}

Target(s):

Type: Step Functions state machine
ARN: arn:aws:states:us-east-1:xxxxxxx:stateMachine:MY_FUNCTION_NAME

Reference: https://docs.aws.amazon.com/step-functions/latest/dg/tutorial-cloudwatch-events-s3.html
Amazon EventBridge rule S3 put object event cannot trigger the AWS StepFunction
The Actions, resources, and condition keys for Amazon S3 - Service Authorization Reference documentation page lists the conditions that can be applied to the CreateBucket command. Tags are not included in this list. Therefore, it is not possible to restrict the CreateBucket command based on tags being specified with the command.
I want to create an IAM policy to only allow the "Test" user to create an S3 bucket with the "Name" and "Bucket" tags while creating it, but I am not able to do so. I have tried this, but even with the specified condition, the user is not able to create a bucket.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Deny",
      "Action": "s3:CreateBucket",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestTag/Name": "Bucket"
        }
      }
    }
  ]
}

Thanks in advance.
Need help to deny S3 bucket creation without specific Tags
"The Distributor has not been installed correctly. Could not enable database for publishing." indicates that the source is not correctly configured to allow ongoing replication, because the Distributor has not been installed correctly.

To check if distribution has already been configured, run the following command when connected to the source server:

sp_get_distributor

If the result is NULL for the column distribution, distribution isn't configured.

Please note that you can use either MS-Replication or MS-CDC to enable continuous replication for an on-prem SQL Server. You can use MS-Replication for the tables with primary keys, and for the tables without primary keys you can choose to enable MS-CDC at the database and the individual table level.

To use MS-Replication, the distribution database has to be configured as per the steps below.

To set up distribution:
=> Connect to your SQL Server source database using the SQL Server Management Studio (SSMS) tool.
=> Open the context (right-click) menu for the Replication folder, and choose Configure Distribution. The Configure Distribution Wizard appears.
=> Follow the wizard to enter the default values and create the distribution.

Once the distribution database is set up, DMS should be able to create a publication and then add articles to the publication, after which the task should be able to continue with the replication.
I am using an AWS DMS instance to migrate and replicate an on-premises database to another SQL instance in the AWS cloud. When I use a migration task of type Full load, the instance successfully executes the migration, but with the same mapping rules, task migrations of types Full load and/or ongoing replication fail:

Last failure message
Last Error Fatal error has occurred Task error notification received from subtask 0, thread 0 [reptask/replicationtask.c:2859] [1022505] Failed (retcode -1) to execute statement; RetCode: SQL_ERROR SqlState: 42000 NativeError: 20028 Message: [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]The Distributor has not been installed correctly. Could not enable database for publishing. Line: 1 Column: -1; Failed while preparing stream component 'st_0_6SULBUTK4MXZHA6HQ'.; Cannot initialize subtask; Stream component 'st_0_6SULBUTK434OZAJOXANFLHA6HQ' terminated [reptask/replicationtask.c:2866] [1022505] Stop Reason FATAL_ERROR Error Level FATAL
The Distributor has not been installed correctly. Could not enable database for publishing
Restoring to an existing DB is not supported. From the docs:

You can't restore from a DB cluster snapshot to an existing DB cluster; a new DB cluster is created when you restore.

You could use mysqldump to get data from your new cluster and import it into the existing one.
We had an AWS Aurora MySQL RDS DB cluster (say my-dbcluster) and DB instance (say my-instance) set up. However, due to some issue the DB instance got deleted. We do have backups at the cluster level and I can see those in the AWS Console under my-dbcluster -> Maintenance & backups tab -> Snapshots section.

Based on the AWS documentation to Restore from DB snapshot, it should allow creating the DB instance by restoring from the DB snapshot. So on the AWS Console I went on to select the latest snapshot and tried to restore by providing the original DB Instance Identifier and other details. My expectation was that it would create the DB instance under the same DB cluster (i.e. my-dbcluster), but it created an altogether new DB cluster and created the DB instance under that. I tried to look for ways to move the DB instance under the original DB cluster but could not find anything.

My question is, why does it not create/restore the DB instance under the original DB cluster? If this is not the default behavior, it should at least give an option to restore the DB instance under the DB cluster of our choice. How can I achieve that?
Restore (DB Instance) from AWS Aurora RDS DB backup snapshot to existing DB Cluster
You should be able to import an existing resource from AWS into Terraform via the terraform import command. In this case, you would need to do terraform import aws_organizations_account.prod_account AWSAccountID, as mentioned over here: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/organizations_account#import
I have an existing AWS account that I would like to invite into my organization using Terraform. I am able to do this using the console but have not figured out if it is possible as code.

Currently I created several organization accounts using the following code:

resource "aws_organizations_account" "prod_account" {
  name                       = "prod"
  email                      = "<new_email>"
  iam_user_access_to_billing = "DENY"
  parent_id                  = aws_organizations_organizational_unit.production.id
}

This works great when I am creating a new account; however, I am not able to use the same resource block by specifying the email of my existing 'dev' account. I get an EMAIL_ALREADY_EXISTS error, which makes sense because it is trying to create a new account using an existing email address.

So how do I invite my existing 'dev' account into my organization using Terraform? Is this even possible?
Is it possible for Terraform to invite an existing aws account into an Organization?
You may be able to use coalesce(J, B), which won't set J to 2 but can be assigned to a new field (e.g. fields coalesce(J, B) as newB) that can be used for display or additional logic. coalesce takes 2+ arguments and returns the first value that isn't blank.

https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_QuerySyntax.html (search for coalesce)
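If you also want to run that kind of query programmatically, here is a minimal boto3 sketch. The log group name and the time window are placeholders; the query string just demonstrates coalesce.

import time
import boto3

logs = boto3.client("logs")

query = "fields B, J, coalesce(J, B) as newB | limit 20"

# Start a Logs Insights query over the last hour of a hypothetical log group.
start = logs.start_query(
    logGroupName="/my/log/group",
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString=query,
)

# Poll until the query finishes (simplified; no error handling).
while True:
    result = logs.get_query_results(queryId=start["queryId"])
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

print(result["results"])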
I am working with data that comes in a format similar to this in Insights. Sometimes the value of J may be missing, and I want to set it to the value of B when that is the case. Is there any way to do conditional logic like this on data in CloudWatch Insights? I have explored ispresent() but cannot figure out how to do the conditional logic.

Example:

B | J
1 | 3
2 | 4
2 |

For the last row I would like J to be set to 2 when I run the query.
AWS CloudWatch Logs Insights
It should work like this: within your serverless.yml you can reference .env parameters with ${env:keyname} and AWS parameters using the ${param:keyname} syntax. If you need to support both of them you just need to write ${env:keyname, param:keyname}.

Here's an example:

provider:
  ...
  environment:
    ALLOWED_ORIGINS: ${env:ALLOWED_ORIGINS, param:ALLOWED_ORIGINS}
    AUTHORIZER_ARN: ${env:AUTHORIZER_ARN, param:AUTHORIZER_ARN}
    MONGODB_URL: ${env:MONGODB_URL, param:MONGODB_URL}
I'm using the Serverless framework and NodeJS to develop my AWS Lambda function. So far, I have used a .env file to store my secrets, so I can get access to them in serverless.yml like this:

provider:
  ...
  environment:
    DB_HOST: ${env:DB_HOST}
    DB_PORT: ${env:DB_PORT}

But now I need to use AWS Parameter Store instead of the .env file. I have tried to find information about how to emulate it on my local machine, but I couldn't.

I think I have to use one serverless config file for both local and staging. I need a way to somehow select env values either from the .env file (if it's my local machine) or from Parameter Store (if it's AWS Lambda). Is there any way to do it? Thanks!
How to emulate AWS Parameter Store on local computer for lambda function development?
It's certainly possible to stop a currently executing step function.

Using the AWS CLI as an example, you can call aws stepfunctions stop-execution with an execution ARN:

$ aws stepfunctions stop-execution
    --execution-arn <value>
    [--error <value>]
    [--cause <value>]
    [--cli-input-json <value>]
    [--generate-cli-skeleton <value>]

https://docs.aws.amazon.com/cli/latest/reference/stepfunctions/stop-execution.html

The stop-execution operation is also available in the various AWS SDKs and client libraries.
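For completeness, the equivalent call from Python with boto3 might look like the sketch below; the execution ARN and the error/cause values are placeholders.

import boto3

sfn = boto3.client("stepfunctions")

# Stop a running execution; error and cause are optional metadata.
sfn.stop_execution(
    executionArn="arn:aws:states:us-east-1:123456789012:execution:MyStateMachine:my-execution",
    error="UserCancelled",
    cause="Cancelled by follow-up request",
)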
I need a step function that waits 2 days before processing a request. Within that two-day period it's possible for a user to cancel the request with a follow-up request. Is this achievable with step functions?
Cancel a Step Function in the Wait step
I think this should actually cover what you're looking for, and it comes straight from the source: https://aws.amazon.com/blogs/media/vod-automation-part-1-create-a-serverless-watchfolder-workflow-using-aws-elemental-mediaconvert/

Essentially, you'll be making use of AWS Lambda, a serverless code execution product. Lambda works by allowing you to hook directly into "triggers" or events from within the AWS ecosystem (like uploading a file to S3). The Lambda can then execute code in a number of supported languages like JavaScript or Python, which can be used to start a MediaConvert job on the triggering object (the file uploaded to S3).
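As a rough illustration (not the exact code from the AWS blog post), a stripped-down Python Lambda handler for this flow could look like the sketch below. The IAM role ARN and the job template name are placeholders, and real job settings and error handling are omitted.

import boto3

def handler(event, context):
    # S3 put event: pull out the bucket and key of the uploaded video.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # MediaConvert requires an account-specific endpoint.
    endpoints = boto3.client("mediaconvert").describe_endpoints()
    mc = boto3.client("mediaconvert", endpoint_url=endpoints["Endpoints"][0]["Url"])

    # Submit a job based on a pre-built HLS job template (placeholder names).
    mc.create_job(
        Role="arn:aws:iam::123456789012:role/MediaConvertRole",
        JobTemplate="my-hls-template",
        Settings={
            "Inputs": [{"FileInput": f"s3://{bucket}/{key}"}]
        },
    )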
I am new to AWS. Most of the examples I have seen need an input file name from an S3 bucket for MediaConvert. I want to automate this process. What is the best way to do it? I want to achieve the following:

An API to upload a video (mp4) to an S3 bucket.

Trigger a MediaConvert job to process the newly uploaded video and convert it to HLS.

I know how to create an API as well as a MediaConvert job. What I need help with is automating this workflow. How can I pass a recently uploaded video to the MediaConvert job dynamically?
AWS how to Trigger mediaconvert after video upload automatically
AWS recommends mocking DynamoDB and the rest of AWS with aws-sdk-client-mock, with documentation included with the module.

Install aws-sdk-client-mock and probably aws-sdk-client-mock-jest:

npm i aws-sdk-client-mock aws-sdk-client-mock-jest

Import the modules:

import {mockClient} from "aws-sdk-client-mock";
import 'aws-sdk-client-mock-jest';

Create a mock and reset it after each test:

describe('Winner purge service', () => {
    const ddbMock = mockClient(DynamoDBClient)

    beforeEach(() => {
        jest.clearAllMocks()
        ddbMock.reset()
    })

    // ... tests go here
})

Define responses:

ddbMock
    .on(ScanCommand).resolvesOnce({
        Items: [marshall(configItem, {removeUndefinedValues: true})],
        Count: 1
    })
ddbMock.on(QueryCommand)
    .resolvesOnce({
        Items: [marshall(olderWinner)],
        Count: 1
    })
    .resolves({
        Items: [],
        Count: 0
    })

Test calls:

expect(ddbMock.commandCalls(DeleteItemCommand).length).toBe(0)
I am using the Amazon DynamoDB package from aws-sdk v3 for JavaScript.

Here are the docs I have followed: https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/dynamodb-examples.html

I have installed the "@aws-sdk/client-dynamodb" package to perform CRUD operations from code, and I have imported commands from the package this way:

import { DynamoDBClient, PutItemCommand, DeleteItemCommand, UpdateItemCommand, GetItemCommand } from "@aws-sdk/client-dynamodb";

const dynamodbClient = new DynamoDBClient({ region: process.env.DYNAMODB_REGION, endpoint: process.env.DYNAMODB_ENDPOINT });

const result = await dynamodbClient.send(new PutItemCommand(params));

I have tried to mock Amazon DynamoDB following the Jest docs, but it was calling the real Amazon DynamoDB locally. How do I mock the "@aws-sdk/client-dynamodb" package in Node.js? Please provide an example in Node.js!
How to mock an Amazon DynamoDB v3 in Nodejs using Jest?
On a SageMaker notebook instance, to write to the user-sized EBS volume you need to write your data within the directory /home/ec2-user/SageMaker. If you run a df -h, you will see that your user-sized EBS volume (the 1024 GB storage) is mounted on /home/ec2-user/SageMaker. If you don't write inside this directory, your data won't be persisted when you shut down your notebook instance. In your case, I am assuming you are writing to the 100 GB storage and hence running out of space.
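If you want to confirm which volume you are actually writing to from inside the notebook, a quick Python check could be:

import shutil

# Compare free space on the root volume vs. the user EBS volume mounted
# at /home/ec2-user/SageMaker (the path SageMaker persists).
for path in ("/", "/home/ec2-user/SageMaker"):
    usage = shutil.disk_usage(path)
    print(path, f"free: {usage.free / 1e9:.1f} GB of {usage.total / 1e9:.1f} GB")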
I am running a SageMaker instance which always throws an exception at the same place in the cycle, even if I allocate more storage. So it might not be a storage issue, but I am at a loss for why it fails.

I get the error at the same spot whether I allocate 1024 GB or 100 GB of storage (estimator volume_size). The DiskUtilization sits at 23% when it crashes with 100 GB allocated (however, that number does not update in real time, so it is probably higher).

2021-06-10T13:25:32.141+02:00 terminate called after throwing an instance of 'dmlc::Error'
2021-06-10T13:25:45.144+02:00 what(): [11:25:31] src/io/local_filesys.cc:38: Check failed: std::fwrite(ptr, 1, size, fp_) == size: FileStream.Write incomplete
2021-06-10T13:25:45.144+02:00 Stack trace: [bt] (0) /usr/local/lib/python3.6/dist-packages/mxnet/libmxnet.so(+0x3c58ea9) [0x7f0280805ea9]

I am loading parquet files and saving them as ndarrays of 10000 rows per file, and around 120000 rows it crashes. I am doing this in order to give mxnet a dataset with random access, which I cannot do with just parquet files.

Any help is appreciated.
Storage issue on Sagemaker, even when more provided
Sadly, there is no such way with pure CloudFormation (CFN), as this is not how CFN (or Terraform, as a matter of fact) was designed to work. From CFN's perspective, a given resource either exists and is managed by CFN, or it does not exist at all. There is no middle ground.

If your resource already exists, you have to import it into CFN so that it gets managed by CFN. Alternatively, you have to create a custom resource in the form of a Lambda function. The function can perform any action you want based on the existing resources, including checking whether the resource exists or not.
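As a rough sketch of the custom resource approach for the SQS example in the question, the handler below checks for an existing queue before creating it. It assumes the function is deployed as inline code so that AWS's cfnresponse helper is available (otherwise you would bundle it yourself), and the QueueName property is just an illustration of what you could pass from the template.

import boto3
import cfnresponse

sqs = boto3.client("sqs")

def handler(event, context):
    # Hypothetical property passed from the CloudFormation template.
    queue_name = event["ResourceProperties"]["QueueName"]
    try:
        if event["RequestType"] in ("Create", "Update"):
            try:
                # Queue already exists: reuse it instead of failing.
                url = sqs.get_queue_url(QueueName=queue_name)["QueueUrl"]
            except sqs.exceptions.QueueDoesNotExist:
                url = sqs.create_queue(QueueName=queue_name)["QueueUrl"]
            cfnresponse.send(event, context, cfnresponse.SUCCESS, {"QueueUrl": url}, queue_name)
        else:
            # Delete: decide for yourself whether a possibly shared queue should be removed.
            cfnresponse.send(event, context, cfnresponse.SUCCESS, {}, queue_name)
    except Exception as exc:
        cfnresponse.send(event, context, cfnresponse.FAILED, {"Error": str(exc)}, queue_name)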
While configuring resources, is there any way to make Serverless not throw an error if the resource is already present?

E.g. don't throw this error if the following resource is already present:

Error: An error occurred: PaymentQueue - dev_payment_cron_queue already exists in stack

resources:
  Resources:
    PaymentQueue:
      Type: "AWS::SQS::Queue"
      Properties:
        QueueName: ${self:provider.stage}_payment_cron_queue
        VisibilityTimeout: 40
How do I ignore resource creation if already present in serverless
I believe the issue is that you are using the ScheduledEvent class as your input. When you specify the input from a rule, you do not get any of the rest of the schedule data (from AWS::Events::Rule Target - Input):

If you use this property, nothing from the event text itself is passed to the target.

In order to get the data you are expecting, you need the class to be something that can be deserialized from the input you specify.
I have a Lambda which gets triggered by a CloudWatch rule. This is the CloudWatch rule created via CloudFormation:

Resources:
  CloudWatchRule:
    Type: AWS::Events::Rule
    Properties:
      Name: !Sub CloudWatchRule-${Stage}
      Description: "Scheduled event to fetch keys for score generation every hour"
      ScheduleExpression: "rate(2 minutes)"
      Targets:
        - Arn:
            Fn::GetAtt: [FetcherLambda, Arn]
          Id: "CloudWatchEvent"
          Input: '{"hours": 3}'
  CloudWatchRulePermission:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName:
        Ref: FetcherLambda
      Action: "lambda:InvokeFunction"
      Principal: "events.amazonaws.com"
      SourceArn:
        Fn::GetAtt: [CloudWatchRule, Arn]

I can see in the console that the input is there in the Constant (JSON) field. But in the handler, the logged event is empty:

public Void handleRequest(final ScheduledEvent scheduledEvent, final Context context) {
    log.info("Received event: {}", scheduledEvent);
    return null;
}

I am getting a log like: Received event: {}

Am I missing something, or is there anything else needed to get the input here?
Not getting input in scheduled event
If you create multiple buckets which differ only by one or a few arguments (e.g. name), you should be using count or for_each and provide the names as a list. For example:

variable "buckets" {
  default = ["a", "b", "c"]
}

resource "aws_s3_bucket" "bucket" {
  for_each = toset(var.buckets)  # for_each needs a set or map, so convert the list
  bucket   = each.key
  # ...
}

resource "aws_s3_bucket_policy" "abc" {
  for_each = toset(var.buckets)
  bucket   = aws_s3_bucket.bucket[each.key].id
  ...
}

Update

You can also do:

locals {
  buckets = [aws_s3_bucket.a, aws_s3_bucket.b, aws_s3_bucket.c]
}

resource "aws_s3_bucket_policy" "abc" {
  for_each = {for idx, bucket in local.buckets: idx => bucket}
  bucket   = each.value.id
  ...
}
I have some S3 buckets which are created using terraform code as below:resource "aws_s3_bucket" "a" { ... } resource "aws_s3_bucket" "b" { ... } resource "aws_s3_bucket" "c" { ... }Now I want to create bucket policy and apply this policy for all existing s3 bucket (a, b, c). How can I get s3 bucket id and do a loop or something like that? Please advise me more. Thanks a lot!!!resource "aws_s3_bucket_policy" "abc" { bucket = aws_s3_bucket.*.id ... }
Create s3 bucket policy for multiple existing s3 bucket using terraform
You can't do what you are trying to do. AWS needs your credentials to be clear text.

If this is your local machine and you are uncomfortable with having your credentials in your .aws/credentials file, try using a tool like aws-vault. It will store your credentials in your operating system's local keystore and require you to enter a password when you want to use your credentials.
I am trying to encrypt AWS credentials on AWS CLI. I want to have my access_key and access_id not readable.I have tried: aws kms encrypt --key-id --plaintext fileb:///.aws/credentials but I can still view content in plaintextAny Ideas?
Encrypting .aws/credentials
It looks like there is an open feature request on the kubernetes-sigs/aws-ebs-csi-driver repo, but no progress on it. So I guess it is not supported at the moment, but you can monitor the issue for updates.
I want to enable the ReadWriteMany access mode on an EKS Persistent Volume and came across the io2 volume type from EBS. So, using an io2 volume:

storage_class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: io2
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: io2
  iopsPerGB: "200"

persistent_volume.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  accessModes:
    - ReadWriteMany
  awsElasticBlockStore:
    fsType: ext4
    volumeID: <IO2 type volume ID>
  capacity:
    storage: 50Gi
  storageClassName: io2
  volumeMode: Filesystem

pv_claim.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
  volumeMode: Filesystem
  volumeName: pv
  storageClassName: io2

When 3 replicas of a pod are deployed across 2 nodes in the same AZ, 2 replicas (on one node) successfully mount the io2 volume and start running, but the third replica on the other node does not mount the volume.

Error -> Unable to attach or mount volumes: unmounted volumes['']

Also, I want to understand whether io2 volumes are meant to be mounted to multiple nodes (EC2 instances in the same AZ as the volume) in EKS with the ReadWriteMany access mode.
How to enable ReadWriteMany access mode using an io2 EBS Volume
ECS emits events to EventBridge (EB). You can set up a rule in EB to capture events of interest and trigger your Lambda function as the target for those events.

An example EB rule could be:

{
  "source": ["aws.ecs"],
  "detail-type": ["ECS Task State Change"],
  "detail": {
    "lastStatus": ["RUNNING"]
  }
}

Other customizations of the rule are possible.

"also how to stop the task when the Lambda function finishes."

Your Lambda can use the AWS SDK for ECS and stop the tasks. The EB event it captures will have info about which task was started.

You could also orchestrate your Lambda and ECS tasks through Step Functions: Manage Amazon ECS or Fargate Tasks with Step Functions
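To make that concrete, here is a minimal Python sketch of a Lambda wired to such a rule; it pulls the cluster and task ARN from the event, runs a placeholder do_daily_work() step, and then stops the task. The do_daily_work() function is an assumption standing in for your real logic.

import boto3

ecs = boto3.client("ecs")

def do_daily_work():
    pass  # placeholder for the work that depends on the running container

def handler(event, context):
    # "ECS Task State Change" events carry the cluster and task ARNs in detail.
    detail = event["detail"]
    cluster_arn = detail["clusterArn"]
    task_arn = detail["taskArn"]

    do_daily_work()

    # Stop the task that triggered this run so it doesn't keep billing.
    ecs.stop_task(
        cluster=cluster_arn,
        task=task_arn,
        reason="Daily Lambda job finished",
    )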
I am new to AWS. I have a Lambda function which I want to run daily at 4:00 AM GMT. The Lambda function depends on an AWS ECS container task being up and running. Instead of keeping the ECS container task running all the time (because that costs a lot for me), I want to be able to trigger it, run the Lambda task when it is ready, and finally stop the task when the Lambda function finishes.

I looked into this and found that I can run a Lambda function using Amazon EventBridge rules. I know I can use the cron expression 0 4 * * ? * to run it at 4:00 AM every day. However, I am not sure how to first start the ECS container task, and also how to stop the task when the Lambda function finishes.

Other info: The Lambda function uses the Node.js environment.
How can I run an AWS ECS Task and then run a Lambda function after its ready then finally stop the Task?
This is not doable via the API, but you can use is_paused_upon_creation. This flag specifies whether the DAG is paused when it is created for the first time; if the DAG already exists, the flag is ignored. You can set is_paused_upon_creation=False in the DAG constructor:

dag = DAG(
    dag_id='tutorial',
    default_args=default_args,
    is_paused_upon_creation=False,
)

Another option is to do it via the unpause CLI:

airflow dags unpause [-h] [-S SUBDIR] dag_id
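If you need to flip an existing DAG on or off from outside Airflow (for example right after uploading it), one common workaround on MWAA is to call the Airflow CLI through the environment's CLI token endpoint. A rough boto3 + requests sketch is below; the environment name and DAG id are placeholders, and the response carries base64-encoded stdout/stderr from the Airflow CLI.

import base64
import boto3
import requests

mwaa = boto3.client("mwaa")

# Get a short-lived CLI token for the (placeholder) environment.
token = mwaa.create_cli_token(Name="my-mwaa-environment")

# POST the Airflow CLI command to the MWAA CLI endpoint.
resp = requests.post(
    f"https://{token['WebServerHostname']}/aws_mwaa/cli",
    headers={
        "Authorization": f"Bearer {token['CliToken']}",
        "Content-Type": "text/plain",
    },
    data="dags unpause my_dag_id",  # or "dags pause my_dag_id"
)

result = resp.json()
print(base64.b64decode(result["stdout"]).decode())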
We are using AWS MWAA. We add our DAG .py files to our S3 bucket programmatically. They then show up in the UI. However, they are "OFF" and you must click the "ON" button to start them.

EDIT: Also, we may sometimes want to turn a DAG that's ON to OFF (programmatically).

I am looking to do this programmatically; however, I cannot figure out how to.

The API does not seem to have it: https://docs.aws.amazon.com/mwaa/latest/userguide/mwaa-actions-resources.html

Boto does not seem to have it: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/mwaa.html

Is it possible to toggle a DAG's status between OFF and ON via an API?
AWS MWAA (Managed Apache Airflow); Programmatically enable DAGs
"just stopping the instance would suffice in order for the billing meter to stop ticking"

The price will go down significantly, but you will still be charged for storage. To fully eliminate the cost you have to terminate the DB instance, and also check whether you have any existing manual backups or snapshots of it. If you terminate the instance while keeping the backups, you will still be charged for their storage.
I'm new to Amazon Web Services and, to be honest, I have not read the billing rules. Recently I set up a non-free-tier instance and I was shocked because I was billed an enormous amount of money even though I've been using a local database for my development. So I decided to temporarily stop this specific RDS instance.

My question is: should I completely delete it, or would just stopping the instance suffice in order for the billing meter to stop ticking ^_^.
Stopping AWS RDS Instance to avoid payment
You can use the aws_lambda_invocation data source:

Use this data source to invoke custom lambda functions as a data source. The lambda function is invoked with the RequestResponse invocation type.
How can I invoke an existing Lambda function from one of my AWS accounts in my Terraform code? I have the Lambda name, ARN and ID, along with the account number that is hosting the Lambda.
Invoke remote lambda function in terraform