Dataset columns: Response (string, length 8–2k), Instruction (string, length 18–2k), Prompt (string, length 14–160).
Read replicas are read-only by default until you customize them to be read/write. You can configure an Amazon RDS DB instance read replica to be read/write by setting the read_only parameter to false in the DB parameter group that you create for your DB instance(s). Follow this developer guide to achieve read/write for your read replica; then you will be able to achieve your use case: https://aws.amazon.com/premiumsupport/knowledge-center/rds-read-replica/
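If you prefer to script the change rather than use the console, a minimal boto3 sketch of the same steps might look like the following; the parameter group and replica names are placeholders, not values from the question:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Set read_only to 0 (false) in a custom DB parameter group.
rds.modify_db_parameter_group(
    DBParameterGroupName="my-replica-params",   # hypothetical parameter group
    Parameters=[{
        "ParameterName": "read_only",
        "ParameterValue": "0",
        "ApplyMethod": "immediate",
    }],
)

# Attach the parameter group to the read replica so the setting takes effect.
rds.modify_db_instance(
    DBInstanceIdentifier="my-read-replica",      # hypothetical replica name
    DBParameterGroupName="my-replica-params",
    ApplyImmediately=True,
)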
I would like to add an index to a MySQL table in a database on Amazon RDS, but I do not want to stop using the database while the index is created. This answer suggests using read replica promotion, first creating the index on a read replica and then promoting the read replica. I created a read replica to try this approach, but when I try to change the index on the read replica I get ERROR 1290 (HY000): The MySQL server is running with the --read-only option so it cannot execute this statement. How do I make it so that I can edit the read replica, and will this prevent Amazon from continuing to update the read replica to match the master database?
Using Read Replica Promotion to Create Index On AWS RDS MySQL Database
Don't use Lambda to schedule the jobs. Linux servers already have a job scheduling service on them called Cron. Do some searches for "cron" or "crontab" to learn how to schedule jobs on Linux.
I'm trying to achieve the ability to run Node.js code on an AWS EC2 instance on a scheduled interval. It is similar to how AWS Lambda works, but Lambda doesn't supply the amount of resources that I need. I have a working Node.js app already, I just need to get it running on EC2 (I think). I'm new to servers and EC2, so I'm lost on how to achieve this. I am able to set up and run an EC2 instance just fine, but running the code is a different deal. My thought is to host the Node.js app on an EC2 instance, but run a Lambda function on a schedule that invokes the application to start in EC2. I just don't know where to start to learn how to do this. As always, thanks for the help!
Running Node.js functions in AWS EC2 on Schedule
'aws s3 ls --recursive' was added in version 1.2.11 - you are using version 1.2.9, an outdated version. Please upgrade to the latest version: pip install -U awscli
I am trying to list the contents of an Amazon S3 bucket using the following command (documentation): aws s3 ls s3://mybucket --recursive However, I get the following error: Unknown options: --recursive The following is the version information for my Ubuntu Linux EC2 instance: $ aws s3 ls --version aws-cli/1.2.9 Python/3.4.3 Linux/3.13.0-85-generic How can I enable the --recursive option on my aws-cli?
aws s3 ls Unknown options: --recursive
Previous Answer: Your server has to be public in order for API Gateway to access it. The best solution at this point is to use Client Side SSL Certificates, so that your server can easily reject any traffic not originating from API Gateway. http://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started-client-side-ssl-authentication.html Updated Answer: You can now place a private Network Load Balancer inside your VPC that is not publicly accessible, and enable VPCLink in API Gateway to allow API Gateway to send requests to the private NLB.
We have our API running on an AWS EC2 instance. We are interested in AWS API Gateway to authenticate API calls using Cognito and version control. Can anyone tell me how I can allow access to the API hosted on the EC2 instance without making it public? I only want API Gateway to access that API. I was not able to find any solution in the documentation.
Only allow AWS API Gateway to access EC2 instance
The Amazon documentation is incorrect, so if you copy their example you will not be able to run the command. There were two things wrong with the CLI command: 1) There should not be s3:// in front of the bucket name. 2) There should be quotes around the TagSet, i.e. "TagSet=[{Key=xxxxx,Value=ddddd}]" (this is not in the AWS documentation).
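If shell quoting keeps getting in the way, the same call can also be made from Python with boto3, which sidesteps the quoting problem entirely; this is just an illustrative sketch using the tag values from the question and a placeholder bucket name:

import boto3

s3 = boto3.client("s3")

# Note: pass the bare bucket name, not an s3:// URI.
s3.put_bucket_tagging(
    Bucket="my-bucket",  # placeholder bucket name
    Tagging={
        "TagSet": [
            {"Key": "Name", "Value": "FHWA_Packaging_Logs"},
            {"Key": "Project", "Value": "FHWA_Processing"},
            {"Key": "Team", "Value": "Production"},
        ]
    },
)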
I'm using the following aws cli command. I've looked over it time after time and can't figure out what is wrong with the command.aws s3api put-bucket-tagging --bucket s3://****edited**** --tagging TagSet=[{Key=Name,Value=FHWA_Packaging_Logs},{Key=Project,Value=FHWA_Processing},{Key=Team,Value=Production}]I get the following error:Unknown options: TagSet=[Key=Name,Value=FHWA_Processing,Key=Team], TagSet=[Key=Name,Value=FHWA_Processing,Value=Production], TagSet=[Value=FHWA_Packaging_Logs,Key=Project,Key=Team], TagSet=[Value=FHWA_Packaging_Logs,Key=Project,Value=Production], TagSet=[Value=FHWA_Packaging_Logs,Value=FHWA_Processing,Key=Team], TagSet=[Value=FHWA_Packaging_Logs,Value=FHWA_Processing,Value=Production], TagSet=[Key=Name,Key=Project,Value=Production]What is wrong with the command?
AWS cli s3api put-bucket-tagging not recognizing my TagSet
You are missing the fact that folders do not actually exist in S3. Everything that looks like a folder is only a convenient illusion presented by the console, based on the / delimiters in the object keys. "The Amazon S3 data model does not natively support the concept of folders, nor does it provide any APIs for folder-level operations. But the Amazon S3 console supports folders to help you organize your data." - http://docs.aws.amazon.com/AmazonS3/latest/UG/about-using-console.html To delete a "folder" using the API, you have to delete the objects that appear to be "in" it. So, why don't you get an error in your code? That's because the DELETE REST verb is idempotent. After your delete request, there isn't an object at the path you deleted, so, technically, you "succeeded," and the operation succeeds no matter how many times you delete something, whether it exists or not. The console still shows a folder because there are objects with that prefix still in the bucket. When you delete a folder from the console, the console takes care of the actual deletion of the "contained" objects by sending one or more additional requests to delete the underlying objects. Note also that there is no need to create a folder before storing objects "in" it using the API. It will implicitly appear in the console if you create just one object with / slashes in the key.
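To illustrate the "delete the objects that appear to be in the folder" approach, here is a minimal sketch in Python with boto3 (rather than the asker's raw REST client, purely for illustration); the bucket and prefix names are placeholders:

import boto3

s3 = boto3.client("s3")
bucket = "my-company-s3-account"   # placeholder bucket
prefix = "App_Root/"               # the "folder" to remove

# List every object whose key starts with the prefix, then delete them in batches.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    objects = [{"Key": obj["Key"]} for obj in page.get("Contents", [])]
    if objects:
        s3.delete_objects(Bucket=bucket, Delete={"Objects": objects})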
I have an application that communicates with my Amazon S3 bucket via REST API. The authentication works correctly and I am able to perform many operations (e.g., create folder, upload file, download file, GET objects).I have one remaining problem: although my REST requests to delete a folder succeed (i.e., AWS returns a "204 No Content" response), the "deleted" folder can still be accessed via the AWS web console.Before issuing the DELETE request, I can see via the AWS web console that the folder (and its subfolders and files) exists. My bucket has versioning DISABLED.Now the app issues the REST DELETE request:DELETE /App_Root/ HTTP/1.1 Accept: */* User-Agent: libcurl/7.28.0 OpenSSL/0.9.8j App/2.1.105-Windows Host: my-company-s3-account.s3.amazonaws.com Date: Sat, 20 Feb 2016 18:24:08 +0000 Authorization: AWS [signed string]Here is the response received from Amazon S3:HTTP/1.1 204 No Content x-amz-id-2: 6in0UAKZZWfgw2ifNhLVT8+UhNLGAo/8948L2SUqhg/OB5agr6X8q8ceQ/3Z4emO4n/XgfXqIUo= x-amz-request-id: 42802B620F593699 Date: Sat, 20 Feb 2016 18:23:44 GMT Server: AmazonS3Refreshing the AWS web console shows the deleted folder still exists. Issuing another GET object request to Amazon S3 shows the folders and files still exist as before.Am I missing something?
AWS DELETE folder request succeeds but the folder is still there
It really depends on the cluster size you are using. DISTSTYLE ALL will copy the data of your table to all nodes - to mitigate data transfer requirements across nodes. You can find out the size of your table and the available size of your Redshift nodes; if you can afford to copy the table multiple times per node, do it! Also, if you have a requirement of joining other tables with this table very, very frequently, like in 70% of your queries, I believe it is worth the space if you want better query performance. If your join keys across tables are the same in terms of cardinality, then you can also afford to distribute all tables on that key so that similar keys lie in the same node, which will obviate replication of data. I would suggest trying out the two options above, comparing average query run times of around 10 queries, and then coming to a decision.
How small should a table using Diststyle ALL be in Amazon Redshift? It says here: http://dwbitechguru.blogspot.com/2014/11/performance-tuning-in-amazon-redshift.html that for very small tables, Redshift should use diststyle ALL instead of EVEN or KEY. How small is small? If I were to specify a row number in the where clause of the query select relname, reldiststyle from pg_class, how many rows should I specify?
How small should a table using Diststyle ALL be in Amazon Redshift?
Keep in mind that S3 is not a filesystem, but it is an object store. There's a huge difference between the two, one being that directory-style activities simply won't work. Suppose you have an S3 bucket with two objects in it: /path/to/file1.txt and /path/to/file2.txt. When working with these objects you can't simply refer to /path/to/ like you can when working with files in a filesystem directory. That's because /path/to/ is not a directory but just part of a key in a very large hash table. This is why the error message indicates an issue with a key. These are not filename paths but keys to objects within the object store. In order to copy all the files in a location like /path/to/ you need to perform it in multiple steps. First, you need to get a listing of all the objects whose keys begin with /path/to, then you need to loop through each individual object and copy them one by one. Here is a similar question with an answer that shows how to download multiple files from S3 using Java.
I have a folder named output inside a bucket named BucketA. I have a list of files in the output folder. How do I download them to my local machine using the AWS Java SDK? Below is my code: AmazonS3Client s3Client = new AmazonS3Client(credentials); File localFile = new File("/home/abc/Desktop/AmazonS3/"); s3Client.getObject(new GetObjectRequest("bucketA", "/bucketA/output/"), localFile); And I got the error: AmazonS3Exception: The specified key does not exist.
How to download files from Amazon S3?
IMO, creating an unlimited number of queues with a single message in each is a really bad design, even if theoretically it would work. If it were me, I'd try to make sure each video had some sort of unique identifier that stays the same even if the user 'double-clicked' the process button. I would envision a system where the video, with a unique name (such as a GUID), is uploaded to S3, a message gets put in the queue, your threads pick up the message from the queue and do the encoding, and then write the video back to a different S3 bucket, but with the same base name. Before processing any video, I would first check the 'output bucket' to see if there is already an encoded video there with the matching name, and if there is, I'd skip the reprocessing and delete the message. If everything is running on an EC2 local disk (and you are not using S3), then the same could be done using an input and output directory on the hard disk (but that would assume that multiple machines aren't doing the processing). It's important to remember that it's possible for the same message to be delivered by SQS more than once - even if the user only submitted it once. It happens, though rarely, so whatever system you set up, you need to make sure that if/when you do get the occasional duplicate it doesn't break anything.
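A rough Python/boto3 sketch of the "check the output bucket before encoding" idea described above; the bucket names, the key scheme, and the queue URL are made up for illustration only:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/encoding-queue"  # placeholder

def already_encoded(video_key: str) -> bool:
    """Return True if an encoded copy with the same base name already exists."""
    try:
        s3.head_object(Bucket="encoded-videos", Key=video_key)  # placeholder output bucket
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "404":
            return False
        raise

def handle_message(message: dict) -> None:
    video_key = message["Body"]  # assuming the message body is just the video key
    if already_encoded(video_key):
        # Duplicate work request: drop the message without re-encoding.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
        return
    # ... run the encoder, upload the result, then delete the message ...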
I'm using SQS as a queue for video encoding and want to ensure that only a single encoding is performed per video. SQS works fine in that when a message is queued, it will only be received by a single thread. However, it's possible that multiple messages could be sent to the queue for the same video/encoding, meaning the message content would be the same for the particular 'encoding' queue. Is there any way to de-duplicate, to ensure that for a specific queue the messages in the queue, or received from the queue, are unique? One option I thought of would be to create a new queue for each encoding type as the message is sent. So the queue could be named something like encoding-video-id, which would only have a single message, and I could check to ensure that the queue does not yet exist. The only "issue" is that there could be 1000's to 10's of thousands of these queues created.
Amazon SQS unique message
You can't see the associated EC2 instances via the Elastic Beanstalk dashboard. Go to the EC2 Management Console and then search instances by tag:elasticbeanstalk:environment-name with your Elastic Beanstalk environment name as the value.
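The same lookup can be scripted; here is a small boto3 sketch (my own addition, not part of the original answer) that filters instances on that tag - replace the environment name with yours:

import boto3

ec2 = boto3.client("ec2")

response = ec2.describe_instances(
    Filters=[{
        "Name": "tag:elasticbeanstalk:environment-name",
        "Values": ["my-eb-environment"],  # your environment name here
    }]
)

for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance.get("PublicDnsName", ""))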
Is there a way to find out what EC2 instance is associated with my Elastic Beanstalk Application from the dashboard?I have checked every single page under "Configuration" but I couldn't find any info about what EC2 instance is running the application.
Find out EC2 instance associated with Elastic Beanstalk Application?
CloudFormation gives you the following benefits: You get to version control your infrastructure. You have a full record of all changes made, and you can easily go back if something goes wrong. This alone makes it worth using. You have full and complete documentation of your infrastructure. There is no need to remember who did what on the console when, and exactly how things fit together - it is all described right there in the stack templates. In case of disaster you can recreate your entire infrastructure with a single command, again without having to remember just exactly how things were set up. You can easily test changes to your infrastructure by deploying separate stacks, without touching production. Instead of having permanent test and staging environments you can create them automatically whenever you need to. Developers can work on their own, custom stacks while implementing changes, completely isolated from changes made by others, and from production. It really is very good, and it gives you both more control and more freedom to experiment.
I'm trying to understand the real-world usefulness of AWS CloudFormation. It seems to be a way of describing AWS infrastructure as a JSON file, but even then I'm struggling to understand what benefits that serves (besides potentially "recording" your infrastructure changes in VCS). What use do CloudFormation's JSON files serve? What benefits do they have over using the AWS web console and making changes manually?
AWS CloudFormation vs. Web Console?
To remove the authentication credentials in the query string, set AWS_QUERYSTRING_AUTH = False in your settings.py. From the django-storages documentation at https://django-storages.readthedocs.io/en/latest/backends/amazon-S3.html: AWS_QUERYSTRING_AUTH (optional; default is True). Setting AWS_QUERYSTRING_AUTH to False removes query parameter authentication from generated URLs. This can be useful if your S3 buckets are public.
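For concreteness, a minimal settings.py sketch; the bucket name is a placeholder and the storage backend line assumes the boto3-based django-storages backend is the one in use:

# settings.py
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
AWS_STORAGE_BUCKET_NAME = "my-bucket"   # placeholder bucket
AWS_QUERYSTRING_AUTH = False            # plain public URLs, no signed query params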
I am using django-storages and Amazon S3 for file storage. In my model I have: avatar = models.ImageField(_('Avatar'), upload_to='avatars/profiles/', blank=True, null=True) The image is uploaded successfully on save, but the full url with credentials is saved. In my retrieve requests (when I read the url from the db via the console) I get something like: https://subdomain.amazonaws.com/avatars/profiles/filename.jpg?X-Amz-Algorithm=XXX&X-Amz-Expires=XXX&X-Amz-SignedHeaders=XXXX&X-Amz-Signature=XXXX&X-Amz-Date=XXXXXX&X-Amz-Credential=XXXX How can I prevent this? I could strip the url before responding, but I do not need and therefore do not want to save them in this format, because all files can be accessed publicly; there is also no need for credentials. PS: I thought of using the post_save hook but it seemed like a hack to me.
Django S3 uploaded file urls show credentials
You can connect to the instance using any tool that does database dumps for the type of database you are running - for instance, mysqldump if you have a MySQL or Aurora database. If the database instance is not accessible to the internet, you will need to make the dump from an EC2 instance that is in the correct subnet and security groups to talk to the database, or ssh tunnel through an instance to run mysqldump. Note that RDS is configured by default to take daily snapshots of your database (which are stored in AWS, so you cannot download them), but you can restore from them if anything goes wrong. You can also take a manual snapshot at any time using the AWS web console or the API. You could also launch a new database from a snapshot and connect to it to create your local dump from the snapshot instead of the active database.
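If you want to script the snapshot-then-restore route mentioned above, a boto3 sketch could look like this; the identifiers are placeholders, and the dump itself would still be taken with mysqldump against the restored instance:

import boto3

rds = boto3.client("rds")

# Take a manual snapshot of the live instance.
rds.create_db_snapshot(
    DBSnapshotIdentifier="mydb-manual-snapshot",   # placeholder
    DBInstanceIdentifier="mydb",                   # placeholder
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="mydb-manual-snapshot")

# Spin up a temporary instance from the snapshot to dump from.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="mydb-dump-copy",
    DBSnapshotIdentifier="mydb-manual-snapshot",
)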
I would like to have a copy of the database from Amazon AWS RDS on my local computer. How can I do that?
How to take a backup of data to a local machine from AWS?
This might not be an exact answer, but it's too long for a comment. You may want to check the mtu setting on the server where you are performing the execution. Redshift wants to operate on 1500-byte frames, and all EC2 instances are set with jumbo frames by default (9000). In order for you to run queries without problems you need to have the same mtu setting. To check what you currently have, run this command: ip addr show eth0. An example output would be like this: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc pfifo_fast state UP qlen 1000. In this case mtu is 9001, so you need to change it to 1500 by running: /sbin/ifconfig eth0 mtu 1500 up
I use Amazon Redshift and sometimes the query execution hangs without any error messages, e.g. this query will execute: select extract(year from date), extract(week from date),count(*) from some_table where date>'2015-01-01 00:00:00' and date<'2015-12-31 23:59:59' group by extract(year from date), extract(week from date) and this one will not: select extract(year from date), extract(week from date),count(*) from some_table where date>'2014-01-01 00:00:00' and date<'2014-12-27 23:59:59' group by extract(year from date), extract(week from date) But it happens only when I deploy the project to the server; on my local machine all queries execute without any problems. I already set autoCommit=true for the connection in my code. Also, all the things listed above I do with Grails using this library: compile 'com.amazonaws:aws-java-sdk-redshift:1.9.39' Any ideas?
Amazon Redshift: query execution hangs
Amazon is using Xen as the hypervisor (it provides the virtual machines). VirtualBox does the same and needs access to the processor instructions and state that are already used by Xen, so your EC2 instance doesn't have access to them. The Vagrant instructions are for the development setup on the dev machine, not to be used in a server setup. You will need to follow the manual instructions to set up the server (http://www.spikaapp.com/en/build/server).
I have installed VirtualBox 3.2 on an AWS Ubuntu 14.04 instance, but it is not running because of the issue "Running VirtualBox in a Xen environment is not supported". ubuntu@ip-172-31-20-204:~$ sudo /etc/init.d/vboxdrv setup * Stopping VirtualBox kernel modules [ OK ] * Uninstalling old VirtualBox DKMS kernel modules [ OK ] * Trying to register the VirtualBox kernel modules using DKMS [ OK ] * Starting VirtualBox kernel modules * Running VirtualBox in a Xen environment is not supported Can anyone please help me overcome this? An Ubuntu instance is mandatory for me, so is there any way to remove Xen or use something else? (My primary aim is to set up a Spika server. For that, Vagrant with VirtualBox is mandatory.)
Virtual box installation issue "Running VirtualBox in a Xen environment is not supported" in AWS ubuntu 14.04 instance
Your error is in how you formatted the value of the hash key in your query expression. The v2 AWS SDK for Ruby (aws-sdk gem) accepts all attribute values as vanilla Ruby values. A value can be: String; Numeric (Integer, Float, BigDecimal, etc.); Boolean; IO (blob type); Set (of Numeric/String); Array (of values); Hash (String => value). You do not need to provide the type hint as was required with the v1 AWS SDK for Ruby. ddb = Aws::DynamoDB::Client.new ddb.query({ table_name: 'TEST_TABLE', key_conditions: { 'ID' => { comparison_operator: 'EQ', attribute_value_list: ['test-123'] } } }) Also, not directly related to your question, but you may find the following blog series helpful when working with DynamoDB from the aws-sdk gem: part 1 - projection expressions; part 2 - condition expressions; part 3 - update expressions.
I've got a hash (string) and range (number) table in DynamoDB. I'm trying to run a query using the ruby SDK v2.0.30 but keep getting the following error:aws-sdk-core-2.0.30/lib/seahorse/client/plugins/raise_response_errors.rb:15:in `call': One or more parameter values were invalid: Condition parameter type does not match schema type (Aws::DynamoDB::Errors::ValidationException)Here is my code:gem 'aws-sdk', '~> 2' require 'aws-sdk' dynamodb = Aws::DynamoDB::Client.new(region: 'eu-west-1', credentials: creds) resp = dynamodb.query( table_name: "TEST_TABLE", key_conditions: { 'ID' => { comparison_operator: 'EQ', attribute_value_list: [{ 's' => 'test123' }] } })I'm new to ruby and have tried looking online and on AWS docs but can't find anything. Any help would be appreciated.Thanks
AWS ruby sdk v2 - dynamodb query
It looks like you would want to use the Query method on the SDK to find the items you're looking for. It seems that "EndsWith" is not available as a comparison operator in the SDK though, so you would need to use CONTAINS and then check your results locally. This should lead to the best performance, letting DynamoDB do the initial heavy lifting and then further pruning the results once you receive them. http://docs.aws.amazon.com/sdkforruby/api/Aws/DynamoDB/Client.html#query-instance_method
Is it possible, using the AWS Ruby SDK (or just DynamoDB in general), to get an item or items from a table that uses a primary key only, and where that primary key ends with a certain string?I haven't come across anything in the docs that explicitly answers this question, either in the ruby ddb docs or the general docs for ddb. I'm not saying the question is not answered, but if it is, I can't find it.If it is possible, could someone provide an example for ruby or link to the docs where an example exists?
Is it possible to get items from DynamoDB where the primary key ends with a given string?
You can get all the messages in the queue, you just can't get them all at once. You request messages and specify the max you want, up to a maximum of 10 at a time; any more than that and you'll need to request another set of messages until your queue is empty (and even then you need to constantly poll SQS if there is a possibility that new messages will be coming in at any time). It is also important to remember that even if you have fewer than 10 messages in the queue and you request the max of 10 (and even if there are no other clients currently polling), you still may not get all of the messages in the queue on a given call - you need to poll repeatedly.
I am trying to retrieve all the messages in the queue using the AWS PHP SDK. Earlier there used to be a get_queue_size() method to get the queue size, and then I would iterate through the loop to get all the messages. In the newest SDK I don't see such a method (link). Can someone tell me how I can receive all the messages in the queue using the latest SDK for PHP?
Get all messages in SQS queue using AWS SDK for PHP
Cognito is only supported via the v2 Ruby SDK. Here is a minimal example for GetOpenIdTokenForDeveloperIdentity using the v2 SDK: require 'aws-sdk' cognito = Aws::CognitoIdentity::Client.new(region:'us-east-1') resp = cognito.get_open_id_token_for_developer_identity( identity_pool_id: 'IDENTITY_POOL_ID', logins: {'MY_PROVIDER_NAME' => 'USER_IDENTIFIER'}) IDENTITY_POOL_ID - the ID of your pool; MY_PROVIDER_NAME - the provider name you configured on your identity pool; USER_IDENTIFIER - the unique identifier for this user in your system. The response (when successful) will contain an identity_id and token for your user, which can be passed back to your mobile application.
I am trying to follow the steps to upload files to Amazon S3 from an iOS app. According to the AWS iOS SDK docs, before uploading it is required to authenticate the app users for secure access to AWS resources via my backend server: http://docs.aws.amazon.com/mobile/sdkforios/developerguide/cognito-auth.html#providing-creds What is the right way to call the AWS Cognito Identity GetOpenIdTokenForDeveloperIdentity service from a Rails (version 4.1) server? This service is not part of the aws-sdk gem.
Upload to Amazon S3 and Calling Amazon Cognito Identity from Rails server
Use Elastic IP addresses and assign them to your EC2 instances. Configure Route 53 to resolve your DNS entries to those IP addresses.
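As a concrete illustration of that advice, here is a hedged boto3 sketch of allocating an Elastic IP, attaching it, and pointing a Route 53 record at it; the instance ID, hosted zone ID, and record name are all placeholders:

import boto3

ec2 = boto3.client("ec2")
route53 = boto3.client("route53")

# Allocate an Elastic IP and attach it to the instance.
alloc = ec2.allocate_address(Domain="vpc")
ec2.associate_address(InstanceId="i-0123456789abcdef0",  # placeholder instance ID
                      AllocationId=alloc["AllocationId"])

# Point an A record at the (now stable) public IP.
route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",                           # placeholder zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": alloc["PublicIp"]}],
            },
        }]
    },
)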
I have multiple AWS EC2 instances running, and I use Route 53 for public DNS.I know that I can point Route 53 DNS records to the public IP address or the public DNS name of an instance. These two values change, however, whenever an instance is started or stopped, so every time an instance is stopped, I need to reconfigure Route 53.Is there any way to statically link an AWS Route 53 record to an EC2 instance, either by instance name, private IP address, private DNS name, or some other identifier?Obviously, for the DNS record to work for the public, Route 53 would have to resolve the new DNS record to either a public IP address or a public DNS name. I'm just hoping that Route 53 will substitute the current public IP address for an EC2 instance for whatever static identifier it might use to statically link a DNS record to the EC2 instance.
Statically link AWS Route 53 DNS record to EC2 instance
Ran debug var=instance and got: TASK: [debug var=instance] **************************************************** ok: [54.90.128.104] => { "instance": { "changed": true, "msg": "All items completed", "results": [ { "changed": true, "image_id": "ami-be14b9d6", "invocation": { "module_args": "wait=yes aws_access_key=**** aws_secret_key=**** instance_id=i-393284d2 region=us-east-1 name=blah", "module_name": "ec2_ami" }, "item": "i-393284d2", "msg": "AMI creation operation complete", "state": "available" } ] } } Given that output, - debug: var=instance.results[0].image_id gave the correct results.
I'm building an EC2 instance with Ansible, then creating an AMI from the instance. I'm sure I'm missing something here, but how do I get the ID of the newly created AMI? I've tried: tasks: - name: create an ami in us-east-1 ec2_ami: wait=yes aws_access_key={{ ec2_access_key }} aws_secret_key={{ ec2_secret_key }} instance_id={{ item }} region={{ region1 }} name=data-mgmt-qa-006 with_items: hostvars[inventory_hostname]['ansible_ec2_instance_id'] register: ec2_ami_info - debug: var=item with_items: ec2_ami_info.image_id and: tasks: - name: create an ami in us-east-1 ec2_ami: wait=yes aws_access_key={{ ec2_access_key }} aws_secret_key={{ ec2_secret_key }} instance_id={{ item }} region={{ region1 }} name=data-mgmt-qa-006 with_items: hostvars[inventory_hostname]['ansible_ec2_instance_id'] register: instance - debug: var=item with_items: instance.image_id The latter 'register' is copied from the docs, but I'm not able to get the right with_items, obviously. The AMI is being created fine. Any suggestions would be much appreciated.
Get the AMI ID of an AMI created with Ansible
Found the answer. The problem was the way I was getting the JSON. I needed to use JSON.load(result['Message']) instead of JSON.parse(...).
I have an SQS queue which is subscribed to a SNS topic. When I publish a new notification to the topic, I use the following code (within a Sinatra app):jsonMessage = { "announcement" => { "first_name" => results['first_name'][:s], "last_name" => results['last_name'][:s], "loc_code" => results['location'][:s], "note" => params['note_content'] } } msgid = @announcments_topic.publish(jsonMessage.to_json, {subject: "Note Created", message_structure: 'json' })When my queue listener picks up this notification, the message section of the corresponding hash looks like this:"Message"=>"{\"announcement\":{\"first_name\":\"Eve\",\"last_name\":\"Salt\",\"loc_code\":\"Location\",\"note\":\"test\"}}"In my queue listener, I want to use this hash, but when I try to useJSON.parse(result['Message'])I get an unexpected token error because of the escaped double quotes. Any suggestions on how I can fix this? Am I not sending my notification as JSON properly? How can I get sns/sqs to not escape the double quotes?
AWS SQS JSON format when receiving message from SNS with Ruby SDK
It doesn't come prepackaged in the case of Amazon Linux. It's an add-on module (assuming you are using Amazon Linux). You can run: sudo yum install php55-mysqlnd
I have mySQL and PHP 5.5 installed on an AWS EC2 instance. However, when I try$db = new mysqli($args)PHP kicks me to the autoloader, as if it can not find the constructor for the mysqli object. I have uncommented extension=mysql.so in the php.ini file, but that does not seem to have accomplished anything. At startup, I getPHP Startup: Unable to load dynamic library '/usr/lib64/php/5.5/modules/msql.so' - /usr/lib64/php/5.5/modules/msql.so: cannot open shared object file: No such file or directory in Unknown on line 0I thought that mysqli/native driver came prepackaged with PHP5.5, but maybe I was wrong about that. Can someone give me a hint as to how PHP5.5 plays with mysqli?
PHP 5.5 not finding MySQLi constructor
You can definitely ssh to the instance and look around. But remember that your changes are not persistent. You should look at .ebextensions config files as the way to re-run your commands on the host, plus more. It might take some time to see where Elastic Beanstalk stores configuration files and all the other interesting things. To get you started, your app files are located at /opt/python/current/app, and if you are using Python, it is located in a virtual environment at /opt/python/run/venv/bin/python27. The Customizing the Software on EC2 Instances Running Linux guide contains detailed information on what you can do: Packages - install packages; Sources - retrieve archives; Files - operations with files; Users - anything with users; Groups - anything with groups; Commands - execute instance commands; Container_commands - execute commands after the container is extracted; Services - launch services; Option_settings - configure container settings. See if that satisfies your requirements; if not, come back to StackOverflow and ask more questions.
I think I'm on the right path. I can use .ebextensions to change some of the conf files for the instance I'm running. Since I'm using Elastic Beanstalk, and a lot of the software is shrink-wrapped (which I'm fine with), I should be using .ebextensions as a means of modifying the environment. I want to employ some form of mod_rewrite config, but I know nothing of this Amazon Linux. I don't even know what the web server is. I've been through the console for the past few hours and see no trace of the things I want to override. Apparently I can set up a shell to take a look around, but modifying things that way will cause things to be overridden since Beanstalk is handling config. I'm not entirely sure on that last point. Should I just ssh and play in userland like a typical unix host?
How do I know what .ebextensions config file to create?
I have tried downgrading from a Multi-AZ deployment to a standard deployment. The entire process took around 2-3 minutes (the transition time should depend upon your database size). The transition was seamless; we did not experience any downtime, and our website was working as expected during this period. Just to ensure that nothing gets affected, I took a snapshot and a manual database dump before downgrading. Hope this helps.
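For reference, a hedged boto3 sketch of the same downgrade (snapshot first, then flip Multi-AZ off); the instance and snapshot identifiers are placeholders:

import boto3

rds = boto3.client("rds")

# Safety net: manual snapshot before changing the deployment.
rds.create_db_snapshot(
    DBSnapshotIdentifier="pre-downgrade-snapshot",  # placeholder
    DBInstanceIdentifier="my-database",             # placeholder
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="pre-downgrade-snapshot")

# Switch the instance from Multi-AZ to a single-AZ (standard) deployment.
rds.modify_db_instance(
    DBInstanceIdentifier="my-database",
    MultiAZ=False,
    ApplyImmediately=True,
)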
What might happen if I downgrade my Multi-AZ deployment to a standard deployment? Is there any possibility of an I/O freeze or data loss? If yes, what might be the proper way to minimize downtime of data availability?
Downgrade Amazon RDS Multi AZ deployment to Standard Deployment
I've encountered two (potentially unrelated) problems in this context: 1) You might have missed an important prerequisite still (it's easy to miss, I did as well ;) - see Setting up the Development Environment: "If you are using the Eclipse development environment, [...] install the AWS Toolkit for Eclipse using the update site http://aws.amazon.com/eclipse/. Be sure to install the Amazon Simple Workflow Service (SWF) Tools. Among other things, this plug-in processes the annotations and generates the client classes." [emphasis mine] Once I fixed this oversight of mine, compile-time weaving started generating classes on build as expected. 2) My initial answer addresses a subsequent problem of the AspectJ runtime missing due to an apparent conflict between AspectJ provided via the SpringSource Tool Suite (STS) and the AspectJ developer tools for Eclipse. I still haven't figured out whether this might have been a local problem of my STS installation only - please see my answer for details, in case this conflict applies to you as well.
Has anyone managed to get the AWS SDK samples for Simple Workflow and the Flow Framework to work properly? I've followed the Eclipse set-up instructions (http://docs.amazonwebservices.com/amazonswf/latest/awsflowguide/setup.html) to the letter, but no classes get generated. As a result my project won't build because there are missing *Client classes all over the place. I've tried this with both the samples in the SDK and the ImageProcessing sample that is offered when one first logs into the SWF Admin Console. Colleagues similarly can't get it to work.
AWS SWF Flow Framework - Eclipse AspectJ Load-Time Weaving
The "instances" attribute of the LoadBalancer class only contains a tiny bit of information about the instance - it's not a full Instance object. To get the full instance object you must use the instanceId, which is available, to query for more info. This code snippet extends yours with the required calls:#Create connection to ec2, credentials stored in environment ec2_conn = connect_ec2() conn = regions[3].connect(aws_access_key_id= access, aws_secret_access_key = secret_key) loadbalancers = conn.get_all_load_balancers() for lb in loadbalancers: for i in lb.instances: #Here, 'i' is an InstanceInfo object, not a full Instance instance_id = i.id #Query based on the instance_id we've got #This function actually gives reservations, not instances reservations = ec2_conn.get_all_instances(instance_ids=[instance_id]) #pull out our single instance instance = reservations[0].instances[0] #Now we've got the full instance and we can get parameters from here print(instance.public_dns_name)
I get the ELB details of a specific region, say Europe. Then I am able to get the instances that are related to the ELB. The problem is I am not able to get the public DNS of those instances. What I do is: conn = regions[3].connect(aws_access_key_id= access, aws_secret_access_key = secret_key) loadbalancers = conn.get_all_load_balancers() for lb in loadbalancers: print lb.instances How do I get the public_dns_name of these instances? When I try: for i in lb.instances: i.public_dns_name I get: AttributeError: 'InstanceInfo' object has no attribute 'public_dns_name'
How do I get the public dns of an instance in AWS
Check out the docs for getObject: you need to pass the remote file name as the 2nd param, and then in the options set the value of 'fileDownload' to a file name or an OPEN file resource. Example: $s3->getObject('myBucket','myRemoteFile', array('fileDownload' => 'localFileName'));
How would I get a file from Amazon S3 to the local system using PHP? I am trying to do this but it's not working: $s3 = new AmazonS3("key 1", " acces pass"); $s3->getObject("Bucket/filename"); //write to local $fp = fopen('/tmp/filename.mp4', 'w'); fpassthru($fp); EDIT: I am trying to save the file to my local server from S3.
s3 file to local system php
It looks like the AWS Console creates empty folders by creating a 0-byte file that ends in a '/', e.g. PUT a 0-byte file called ThisisAnEmptyFolder/. Then when listing the objects in a folder, the AWS tools you use may return a 'file' called ThisisAnEmptyFolder/ - which users won't want to see. So you may need to include logic something like (this is NOT PHP!): if (object.key != prefix) show the file to the user
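The asker is on the PHP SDK, but the trick is SDK-agnostic; as an illustrative sketch, here it is in Python with boto3 (the bucket and folder names are placeholders):

import boto3

s3 = boto3.client("s3")

# Create an "empty folder": a zero-byte object whose key ends in '/'.
s3.put_object(Bucket="my-bucket", Key="ThisisAnEmptyFolder/", Body=b"")

# When listing, skip the placeholder object itself so users don't see it as a file.
resp = s3.list_objects_v2(Bucket="my-bucket", Prefix="ThisisAnEmptyFolder/")
for obj in resp.get("Contents", []):
    if obj["Key"] != "ThisisAnEmptyFolder/":
        print(obj["Key"])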
I'm using the AWS SDK for PHP, specifically the Amazon S3 portion, and I'm not quite sure where to proceed. The CMS I'm developing includes the ability to manage files both locally and remotely using an S3 account. I want administrators to have the ability to create folders in the S3 bucket, but because S3 is a flat-file system, I'm not sure how to create an empty "folder", or at least a blank object that looks like one. A guide I was reading (dated 2009..) mentioned suffixing the object name with _$folder$, but I tried that and it doesn't seem to work. It must be possible to create empty folders in an S3 bucket because the AWS console has the ability to do it, so what is the method for creating empty folders in Amazon S3?
Adding folders programmatically to S3 with the AWS SDK for PHP
What I get from your question is that you need Amazon as an underlying service for your application. Have a look at the Amazon Web Services API and particularly these: Checkout by Amazon; What Your Customers See; Amazon Simple Pay. Hope this helps.
I need to buy an item from Amazon.com programmatically, without redirecting to the Amazon website for the payment.
Buy item from amazon programmatically with C#
You can add a description using comment. The AWS console uses "Description" (the console is misleading :D), however the CloudFront API uses comment to denote the description. Terraform mapped the API name comment into its HCL argument: comment (Optional) - Any comments you want to include about the distribution.
Terraform's docs don't say anything about any Description field. Neither does Googling (which is problematic in the first place, as Description is a super-common word). I tried adding a Description tag, but it doesn't show up in the Description column of the CloudFront > Distributions page.
How to set CloudFront Distribution Description?
I faced the same issue with Terraform. The user had some access tokens and MFA devices configured on their account. They had created them manually, hence Terraform didn't know about them, so it was not able to delete the user due to the exact same error. Deleting the MFA tokens and the manually generated access tokens fixed the issue. Perhaps you can automate it with Java?
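The answer suggests automating it in Java; as a sketch of the same cleanup in Python/boto3 purely for illustration (the user name is a placeholder), you would remove the keys and MFA devices before the final delete:

import boto3

iam = boto3.client("iam")
user = "the-user-to-delete"  # placeholder

# Delete access keys the user created manually.
for key in iam.list_access_keys(UserName=user)["AccessKeyMetadata"]:
    iam.delete_access_key(UserName=user, AccessKeyId=key["AccessKeyId"])

# Deactivate MFA devices (and delete them, assuming they are virtual devices).
for mfa in iam.list_mfa_devices(UserName=user)["MFADevices"]:
    iam.deactivate_mfa_device(UserName=user, SerialNumber=mfa["SerialNumber"])
    iam.delete_virtual_mfa_device(SerialNumber=mfa["SerialNumber"])

# Now the user can be deleted.
iam.delete_user(UserName=user)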
While trying to delete an AWS user via the AWS Java SDK, I am getting the following error: Cannot delete entity, must remove tokens from principal first. The relevant code snippet is: DeleteUserRequest deleteUserRequest = DeleteUserRequest.builder().userName(userName).build(); iam.deleteUser(deleteUserRequest);
Cannot delete entity, must remove tokens from principal first
You can split the keys on "/" and only keep the first level: level1 = set() #Using a set removes duplicates automatically for key in s3_client.list_objects(Bucket='bucketname')['Contents']: level1.add(key["Key"].split("/")[0]) #Here we only keep the first level of the key #then print your level1 set logger.debug(level1) /!\ Warning: the list_objects method has been revised and it is recommended to use list_objects_v2 according to the AWS S3 documentation. This method only returns some or all (up to 1,000) keys, so if you want to make sure you get all the keys, you need to use the continuation_token returned by the function: level1 = set() continuation_token = "" while continuation_token is not None: extra_params = {"ContinuationToken": continuation_token} if continuation_token else {} response = s3_client.list_objects_v2(Bucket="bucketname", Prefix="", **extra_params) continuation_token = response.get("NextContinuationToken") for obj in response.get("Contents", []): level1.add(obj.get("Key").split("/")[0]) logger.debug(level1)
I am trying to list S3 objects like this: for key in s3_client.list_objects(Bucket='bucketname')['Contents']: logger.debug(key['Key']) I just want to print the folder names or file names that are present on the first layer. For example, if my bucket has this: bucketname folder1 folder2 text1.txt text2.txt catalog.json I only want to print folder1, folder2 and catalog.json. I don't want to include text1.txt etc. However, my current solution also prints the file names present within the folders in my bucket. How can I modify this? I saw that there's a 'Prefix' parameter but I'm not sure how to use it.
List S3 objects only at the first level
Second, Redshift defaults to lower case for all column names, so FirstName is being seen as firstname. You can enable case-sensitive column names by setting the enable_case_sensitive_identifier connection variable to true and quoting all column names that require upper-case characters: SET enable_case_sensitive_identifier TO true; and changing my_json.FirstName to my_json."FirstName". See: https://docs.aws.amazon.com/redshift/latest/dg/r_enable_case_sensitive_identifier.html https://docs.aws.amazon.com/redshift/latest/dg/super-configurations.html
I'm trying to access fields of a SUPER column which have camel-case names, something like: {"FirstName": "Mario", "LastName": "Maria"} So let's say I store this field in Redshift as a column my_json; then I'd query it with SELECT my_json.FirstName FROM my_table, but I get only a null result instead of the real value. How do I handle this use case?
Redshift SUPER type: accessing camel case fields returning null result
By default an EKS managed node group will have the default cluster security group attached, which is created by AWS. Even if you provide an additional security group to the EKS cluster during creation, that additional security group will not be attached to the compute instances. So, to get this working, you have to use Launch Templates.
I need help with an EKS managed node group. I've created a cluster with one additional SG. Inside this cluster I've created a managed node group. All the code is in Terraform. Once the managed node group creates a new instance, only one security group is attached (the SG created by AWS). Is there a way to also attach an additional security group to the instances? Thanks in advance for the help!
Additional security group in EKS managed node group
No, Network Load Balancers do not have security groups. You should add security groups directly to the EC2 targets, based on IP addresses or CIDR blocks. See: Target Security Groups - Elastic Load Balancing
In the doc https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-update-security-groups.html it says the following about editing the security group of load balancers: Update the associated security groups. You can update the security groups associated with your load balancer at any time. To update security groups using the console: 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. On the navigation pane, under LOAD BALANCING, choose Load Balancers. 3. Select the load balancer. 4. On the Description tab, under Security, choose Edit security groups. 5. To associate a security group with your load balancer, select it. To remove a security group from your load balancer, clear it. 6. Choose Save. However, for my Network Load Balancer, I cannot select the "Edit security groups" option - it is greyed out. How do I edit its security group then? Currently requests to the NLB cannot be delivered to the EC2 instance in the target group because of security group configurations. Why can't I edit it?
AWS EC2: does Network Load Balancer have security groups?
Terraform is not at fault here. You simply cannot change the encryption setting on an RDS instance after it was originally created - not with Terraform, not via the AWS console, nor via any AWS API. Instead you can/need to create a snapshot of the current DB, copy + encrypt the snapshot, and then restore from that snapshot: https://aws.amazon.com/premiumsupport/knowledge-center/update-encryption-key-rds/ This will cause downtime of the DB, and Terraform does not do it for you automatically; you need to do this manually. After the DB is restored, Terraform should no longer try to replace the DB since the expected config now matches the actual config. Technically you can ignore_changes the storage_encrypted property, but of course that causes Terraform to simply ignore any storage encryption changes.
I have a Postgres RDS instance in AWS that I created using Terraform: resource "aws_db_instance" "..." { ... } Now I'm trying to encrypt that instance by adding: resource "aws_db_instance" "..." { ... storage_encrypted = true } But when I run terraform plan, it says that it's going to force replacement: # aws_db_instance.... must be replaced ... ~ storage_encrypted = false -> true # forces replacement What can I do to prevent Terraform from replacing my db instance?
Terraform - Encrypting a db instance forces replacement
Solution based on watkinsmatthewp's answer: public class TaskStatusConverter implements AttributeConverter<TaskStatus> { @Delegate private final EnumAttributeConverter<TaskStatus> converter; public TaskStatusConverter() { converter = EnumAttributeConverter.create(TaskStatus.class); } } The task status attribute looks like this: @Getter(onMethod_ = {@DynamoDbConvertedBy(TaskStatusConverter.class)}) TaskStatus status;
I am trying to implement a simple java event-handler lambda for AWS. It receives sqs events and should make appropriate updates to the dynamoDB table.One of the attributes in this table is a status field that has 4 defined states; therefore I wanted to use an enum class in java and map it to this attribute.Under AWS SDK v1 I could use the @DynamoDBTypeConvertedEnum annotation. But it does not exist anymore in v2. Instead, there is the @DynamoDbConvertedBy() which receives a converter class reference. There is also an EnumAttributeConverter class which should work nicely with it.But for some reason, it does not work. The following is a snip from my current code:@Data @DynamoDbBean @NoArgsConstructor public class Task{ @Getter(onMethod_ = {@DynamoDbPartitionKey}) String id; ... @Getter(onMethod_ = {@DynamoDbConvertedBy(EnumAttributeConverter.class)}) ExportTaskStatus status; }The enum looks as follows:@RequiredArgsConstructor public enum TaskStatus { @JsonProperty("running") PROCESSING(1), @JsonProperty("succeeded") COMPLETED(2), @JsonProperty("cancelled") CANCELED(3), @JsonProperty("failed") FAILED(4); private final int order; }With this, I get the following exception when launching the application:Class 'class software.amazon.awssdk.enhanced.dynamodb.internal.converter.attribute.EnumAttributeConverter' appears to have no default constructor thus cannot be used with the BeanTableSchema
How can I use Java Enums with Amazon DynamoDB and AWS SDK v2?
The problem was with my Lambda functions: they were defined for my application and some of them were unrecognized. Fixing them and giving permissions to the user is the only solution.
I have deployed a Next.js server-side rendering app on AWS Amplify. I am new to AWS and don't know exactly why I am encountering this error. I have read so many articles and documentations but I am unable to solve this issue. I am using getServerSideProps to get params and props from an API etc. On Vercel and Netlify my app is running fine, but I am getting errors on AWS Amplify. My app is loading static pages, but giving me an error on dynamic pages, e.g. www.example.com/test-1, where test-1 is a dynamic route "/:id". The error I get: 503 ERROR The request could not be satisfied. The Lambda function associated with the CloudFront distribution is invalid or doesn't have the required permissions. We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner. If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by reviewing the CloudFront documentation. I know this error points towards permissions, but I don't know how to configure them.
503 ERROR The request could not be satisfied (AWS Amplify)
I know the tutorial this is from. Use - name: ACTIONS_ALLOW_UNSECURE_COMMANDS run: echo 'ACTIONS_ALLOW_UNSECURE_COMMANDS=true' >> $GITHUB_ENV before - uses: chrislennon/[email protected] and it should work.
My CI/CD pipeline that is using github workflows is failing giving the following error:Error: Unable to process command '##[add-path]/opt/hostedtoolcache/aws/0.0.0/x64' successfully. Error: Theadd-pathcommand is disabled. Please upgrade to using Environment Files or opt into unsecure command execution by setting theACTIONS_ALLOW_UNSECURE_COMMANDSenvironment variable totrue. For more information see:https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/This is my container.yml filename: deploy-container on: push: branches: - master - develop paths: - "packages/container/**" defaults: run: working-directory: packages/container jobs: build: runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 - run: npm install - run: npm run build - uses: chrislennon/[email protected]- run: aws s3 sync dist s3://${{ secrets.AWS_S3_BUCKET_NAME }}/container/latest env: AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}Any idea why this might be happening. Thanks in advance
Github Workflows CI/CD failing
You need to set up your Go project properly for dependency management. First follow the steps for initializing the project as described in Tutorial: Get started with Go: go mod init YOUR_PROJECT_NAME And then add your dependencies: go get github.com/aws/aws-sdk-go/aws go get github.com/aws/aws-sdk-go/service/dynamodb
Why am I getting this error message? I'm a beginner at using aws sam and Go.Error: GoModulesBuilder:Build - Builder Failed: main.go:9:2: no required module provides package github.com/aws/aws-sdk-go/aws; to add it: go get github.com/aws/aws-sdk-go/aws main.go:10:2: no required module provides package github.com/aws/aws-sdk-go/aws/session; to add it: go get github.com/aws/aws-sdk-go/aws/session main.go:11:2: no required module provides package github.com/aws/aws-sdk-go/service/dynamodb; to add it:<br> go get github.com/aws/aws-sdk-go/service/dynamodbThis is my code in vscode package mainimport ( "logs" "github.com/aws/aws-lambda-go/events" "github.com/aws/aws-lambda-go/lambda" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/session" "github.com/aws/aws-sdk-go/service/dynamodb" )
no required module provides package github.com/aws/aws-sdk-go/aws
Your replicate_region should be a string, not a list of strings. It should be, e.g.: variable "replicate_region" { description = "value" type = string default = "us-east-1" } Update: iteration using a dynamic block: variable "replicate_region" { description = "value" type = list(string) default = ["us-east-1", "ap-southeast-1", "ap-south-1"] } resource "aws_ecr_replication_configuration" "replication" { replication_configuration { rule { dynamic "destination" { for_each = toset(var.replicate_region) content { region = destination.key registry_id = "xxxxxxxx" } } }}}
I am trying to replicate my AWS ECR repository to multiple regions within the same account using Terraform. I tried it manually from the AWS console and it works fine, but from Terraform I am not able to find the solution. What I tried: I made a separate variable for the region called replicate_region and tried to provide the regions in a list, but it keeps on giving me an error: Inappropriate value for attribute "region": string required. Here is the variable code: variable "replicate_region" { description = "value" type = list(string) } Here is my code for ECR replication: resource "aws_ecr_replication_configuration" "replication" { replication_configuration { rule { destination { region = var.replicate_region registry_id = "xxxxxxxx" } }}} Can anyone please help me out? Thanks,
Cross region replication of AWS ECR repository
Since I did not find any solution, I raised an AWS support ticket and got the answer. This issue appears when the ES cluster is migrating from an Elasticsearch version which did not support Auto-Tune to one that does (in my case 5.6 to 6.8). As a solution, the AWS service team manually deployed the Auto-Tune agent on the cluster, which took about a day. After that I could enable or disable Auto-Tune as well. BTW, new domains are created with the Auto-Tune feature enabled already, so in the case of a new cluster you should not face this.
I just updated AWS Elasticsearch from version 5.6 to 6.8, and an Auto-Tune feature tab appeared in the console. But it looks like it does not work and shows only "Error" in front of Auto-Tune and nothing else. After enabling Auto-Tune it shows as Enabled, but after the page is reloaded it changes back to the Error status. Are there any solutions to fix this, or additional ways to get a more detailed error message?
AWS Elasticsearch Auto-Tune feature shows Error
I figured out that I can get it to work by using cdk deploy --app 'cdk.out/' <my-stack>. Then it will refer to the CDK assets in cdk.out and build from there, so there is no need to upload all the files and do a "cdk synth" all over again.
I would like to know if it's possible to do something like this: Do a CDK synth locally, which will create a cdk.out folder (it should have all the required files for a cdk deploy?). Upload the cdk.out folder to an S3 bucket. Do a CDK deployment based on the content of the cdk.out folder in S3 using "cdk deploy" in CodeBuild. I can manage to do those three steps if I upload the full CDK project including the TypeScript files to the S3 bucket. So my question is if there is a workaround so you only need the cdk.out folder and not the TypeScript scripts themselves - I mean, the project has already been synthesized? When I try to do it from the cdk.out folder, it complains about needing the cdk.json file. If I upload that file with the cdk.out folder, it complains about not finding the TypeScript files. I think it's because I do have "app": "npx ts-node bin/app.ts" in cdk.json. I am not sure how to go about this and if it's even possible. If it is not possible, I will just have to upload the full project and not only the cdk.out folder...
Deploying CDK Stack from cdk.out folder on S3 bucket (AWS)
Yes, you can make roles that assume roles. The process is called Role chaining: "Role chaining occurs when you use a role to assume a second role through the AWS CLI or API." The key thing to remember about this is that once A assumes B, all permissions of A are temporarily lost, and the effective permissions are those of role B. So the permissions of roles A, B and C do not add up.
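A minimal boto3 sketch of chaining A to B to C (the role ARNs are placeholders); each hop uses the temporary credentials returned by the previous assume-role call:

import boto3

def assume(client, role_arn, session_name):
    """Assume a role and return an STS client that uses its temporary credentials."""
    creds = client.assume_role(RoleArn=role_arn, RoleSessionName=session_name)["Credentials"]
    return boto3.client(
        "sts",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

# Start as role A (the externally trusted role), then chain to B, then to C.
sts_a = boto3.client("sts")  # assumes role A's credentials are already configured here
sts_b = assume(sts_a, "arn:aws:iam::111111111111:role/RoleB", "hop-to-b")
sts_c = assume(sts_b, "arn:aws:iam::111111111111:role/RoleC", "hop-to-c")

# sts_c's credentials now carry only Role C's permissions (e.g. the S3 access).
print(sts_c.get_caller_identity()["Arn"])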
Question same as the title: I want to know if a role can assume a role that can assume another role. Example: Role A - a role that is trusted by an external account and has a policy that can assume any role. Role B - this role is assumed by A and it also has a policy that can assume Role C. Role C - this role has a policy that can access an S3 bucket, for example.
aws iam- can a role assume a role and that role assume another role?
Yes, you can. resource takes a region_name parameter: dynamo_client = boto3.resource('dynamodb', region_name='<your-other-region>') dynamo_table = dynamo_client.Table('table')
I am wondering if we can write a Lambda function in one region, for example us-east-1, to query a DynamoDB database present in another region. I feel that there is a provision for this; just wondering what the syntax would be to achieve it. dynamo_client = boto3.resource('dynamodb') dynamo_table = dynamo_client.Table('table') Above is a normal example of a DynamoDB connection in the same region. Wondering what the syntax would be when we want to access DynamoDB from another region. Thank you
lambda function to query DynamoDB present in another region?
The question refers to the EBS in-tree storage plugin, kubernetes.io/aws-ebs, not the CSI one, ebs.csi.aws.com, which already supports gp3 volumes. According to this, gp3 support will not be backported to the deprecated in-tree provisioner: "We do not have any plans to backport gp3 support to the deprecated in-tree provisioner. Our focus right now is on getting the EBS CSI driver to 1.0 release in preparation for enabling the CSIMigration flag on EKS clusters."
I have a question regarding support of AWS gp3 on Kubernetes. AWS announced the gp3 EBS volume type, which shows better performance than the existing gp2: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html. But, from the Kubernetes docs, I could not see gp3 support by Kubernetes. As I understand it, Kubernetes does not support gp3 for AWS yet. Is my understanding correct, or is there other documentation that I might have missed?
AWS EBS gp3 volume support on Kubernetes
For the exact details of your Linux distro you can use the command cat /etc/os-release. An example output for Ubuntu (default user is ubuntu) is: NAME="Ubuntu" VERSION="20.04.1 LTS (Focal Fossa)" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 20.04.1 LTS" VERSION_ID="20.04" HOME_URL="https://www.ubuntu.com/" SUPPORT_URL="https://help.ubuntu.com/" BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" VERSION_CODENAME=focal UBUNTU_CODENAME=focal while for Amazon Linux 2 (default user is ec2-user) it is: NAME="Amazon Linux" VERSION="2" ID="amzn" ID_LIKE="centos rhel fedora" VERSION_ID="2" PRETTY_NAME="Amazon Linux 2" ANSI_COLOR="0;33" CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2" HOME_URL="https://amazonlinux.com/" Since it seems you are using Amazon Linux 2, you should use yum to install and update your packages, not apt nor apt-get, e.g.: sudo yum update Alternatively, when you create your instance, choose an Ubuntu image for it rather than the default Amazon Linux 2.
I am using an instance of AWS EC2. I want to use the apt-get command, but it throws an error: 'apt-get not found'. How do I get to use the apt-get command?
How to install apt-get on AWS EC2 instance?
The application lives under the following directory: /var/app/current/. As for the second question, sudo cd does not work because cd is a shell built-in rather than an executable, so sudo cannot run it; use sudo -s (or sudo su) to get a root shell first and then cd into the directory.
I have deployed a Python Flask server to AWS EB. I have been able to SSH into the EC2 instance, and when I go to the root directory and type in the command ls I get this: bin boot dev etc home lib lib64 local media mnt opt proc root run sbin srv sys tmp usr var. After looking around I made the assumption that my application code is located in /home/webapp. But here I have a problem: I am unable to cd into the directory as I get a permission error: -bash: cd: webapp/: Permission denied. And when I use sudo I don't get an error but the directory does not change: [ec2-user@ip-###-##-##-### home]$ sudo cd webapp/ [ec2-user@ip-###-##-##-### home]$. I have two questions: Where is my application code? Assuming my application code is in the directory webapp, why is it that sudo cd does not work?
Where does my application code sit on AWS Elastic Beanstalk EC2 Instance?
If you use server-side encryption then your data is protected by policies only. If you accidentally give access to someone (or someone steals your AWS access keys) then it does not matter if it is stored encrypted or not. With client-side encryption you manage the key and without it nobody can access the contents of the files. If you mess up the policies, the keys protect your data.
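To make the difference concrete, here is a rough Python (boto3) sketch contrasting the two approaches; the bucket name is hypothetical and the Fernet key handling is deliberately simplified:

import boto3
from cryptography.fernet import Fernet

s3 = boto3.client("s3")
data = open("report.csv", "rb").read()

# Server-side encryption: S3 encrypts at rest, but anyone with s3:GetObject
# on the bucket still downloads readable plaintext.
s3.put_object(Bucket="my-bucket", Key="report.csv", Body=data,
              ServerSideEncryption="AES256")

# Client-side encryption: the application holds the key, so S3 only ever sees
# ciphertext. In practice the key would come from your own key store, not be
# generated inline like this.
key = Fernet.generate_key()
s3.put_object(Bucket="my-bucket", Key="report.csv.enc",
              Body=Fernet(key).encrypt(data))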
I want to store a lot of files in Amazon S3 for my application. I have the option to use server-side encryption, client-side encryption, or both. By server-side encryption, I mean using the Amazon S3 encryption feature to encrypt files. And by client-side encryption, I mean that I will encrypt files in my application and then store them in S3. Which one is preferred? Both methods have different advantages: server-side encryption will be good for processing, as Amazon has fully optimized it, but with client-side encryption I am not dependent on Amazon; in the future I can easily transfer my files to another file system and my encryption will stay intact. Also, if someone gets access to my Amazon S3 UI they can easily download decrypted files with the server-side encryption method. Also, Amazon S3 encryption comes with a cost. Please help me in deciding this.
Server Side encryption vs Client Side encryption - Amazon S3
Your /etc/nginx/conf.d/nginx.timeouts.conf does not work because this is a valid file for EB platforms based on Amazon Linux 1 (AL1). However, as confirmed in the comments, you are using AL2. For AL2, the nginx settings should be in .platform/nginx/conf.d/, not in .ebextensions, as shown in the docs in the "Reverse proxy configuration" section. Therefore, you could try creating the following .platform/nginx/conf.d/myconfig.conf file with the content of:
client_header_timeout 5;
client_body_timeout 10;
send_timeout 940;
proxy_connect_timeout 2;
proxy_read_timeout 940;
proxy_send_timeout 10;
I'm trying to increase the timeout on Amazon Elastic Beanstalk but I still get a 504 Gateway timeout.Here's what I've done so far:.ebextensions/timeouts.config:option_settings: - namespace: aws:elb:policies option_name: ConnectionSettingIdleTimeout value: 940 - namespace: aws:elbv2:loadbalancer option_name: IdleTimeout value: 940 files: "/etc/nginx/conf.d/nginx.timeouts.conf": mode: "644" owner: "root" group: "root" content: | client_header_timeout 5; client_body_timeout 10; send_timeout 940; proxy_connect_timeout 2; proxy_read_timeout 940; proxy_send_timeout 10; container_commands: 01_update_nginx: command: "sudo sed -i 's/keepalive_timeout 65;/keepalive_timeout 940;/g' /etc/nginx/nginx.conf" 02_restart_nginx: command: "sudo service nginx restart"Procfile:web: gunicorn --bind :8000 --workers 10 --timeout 935 --graceful-timeout 935 main:appDespite this, I still get a "504 Gateway Time-out" after exactly 60.1 seconds.What am I missing that should make it work?
Unable to increase the timeout on AWS Elastic Beanstalk
You can use either the aws s3 cp command or, if you want to only synchronise new files, the aws s3 sync command. The syntax is below: aws s3 cp s3://mybucket . --recursive. The documentation is available below: aws s3 cp, aws s3 sync.
How do I copy files which are newly updated in an S3 bucket to a local machine using the AWS CLI? Can we compare the logs and do the copy?
How to copy files from AWS S3 to local machine?
AWS AppConfig helps you apply configuration changes to running applications. "Use AWS AppConfig, a capability of AWS Systems Manager, to create, manage, and quickly deploy application configurations. AppConfig supports controlled deployments to applications of any size and includes built-in validation checks and monitoring. You can use AppConfig with applications hosted on EC2 instances, AWS Lambda, containers, mobile applications, or IoT devices." AWS CodeDeploy helps you roll out new versions of applications. "AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications."
Both these services seem to do similar things; when would you use each one?
What is the difference between AWS App Config and Code Deploy, and when to use each one
Yes, you can run the EC2 instance even whilst the EBS volume is optimising. Whilst it is occurring you might find that the performance varies between both configurations; however, it will never be lower than the minimal performance of either the previous or the new configuration. "While the volume is in the optimizing state, your volume performance is in between the source and target configuration specifications. Transitional volume performance will be no less than the source volume performance. If you are downgrading IOPS, transitional volume performance is no less than the target volume performance." More information is available in the documentation.
I stopped our instance and modified the size of one of the EBS volumes. Now that EBS volume is stuck in the 'in-use - optimizing (60%)' status. I understand optimizing can sometimes take a long time, up to 24 hours, but we need to start our EC2 instance as soon as possible. I'm just wondering if it is possible to start the EC2 instance while one of the EBS volumes has not yet completed optimizing. That EBS volume is not the root volume, but it is an important volume containing database files. Any advice would be appreciated.
Can we start up EC2 instance during attached EBS in optimizing status?
Amazon SNS is essentially just a pub-sub system that allows you to publish a single message that gets distributed to one or more subscribed endpoints. To process multiple messages in a single downstream process, you can add an Amazon SQS queue that picks up the messages from the SNS topic and a Lambda function that retrieves the messages in batches from the queue.
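For illustration, a minimal Python Lambda handler for such an SQS batch might look like the sketch below; process() is a hypothetical function standing in for your own logic, and the nested "Message" field assumes the queue is subscribed to the SNS topic without raw message delivery:

import json

def handler(event, context):
    # With an SQS event source, Lambda delivers up to the configured batch
    # size of messages in one invocation instead of one message at a time.
    for record in event["Records"]:
        body = json.loads(record["body"])
        sql = body.get("Message", record["body"])  # unwrap the SNS envelope
        process(sql)  # hypothetical processing function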
I have a process that in a very short period of time publishes many SNS messages that would be processed by Lambda. However, Lambda is processing them one at a time. Is there a way to apply something similar to SQS long polling? I have this code: exports.saveLog = async (event) => { console.log('Event : ', event.Records.length); event.Records.forEach(record => { const sql = record.Sns.Message; ... I would like Lambda to receive a set of messages if they are published in a short period of time - is it possible?
AWS SNS processing multiple message in one Lambda
You can perform a deleteItem operation for an item and get its old value (before the delete) by setting "ReturnValues": "ALL_OLD" in the request params. To delete an item you must specify its primary key, so you can only delete one item (in your case, From doesn't seem to be the primary key). See the DeleteItem doc. You can perform a delete within a batchWriteItem operation to deal with multiple items at once, but note that batchWriteItem is not atomic, i.e. some delete ops may fail, and you can find them in batchWriteItem's response. See the BatchWriteItem doc.
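A rough boto3 sketch of a single conditional delete that also returns the removed item; the table name and key schema are assumptions, while the From attribute follows the question:

import boto3

table = boto3.resource("dynamodb").Table("Orders")  # hypothetical table name

resp = table.delete_item(
    Key={"OrderId": "1000"},                      # assumed primary key
    ConditionExpression="#f = :v",                # only delete if it matches
    ExpressionAttributeNames={"#f": "From"},      # "From" is a reserved word
    ExpressionAttributeValues={":v": "Kartik"},
    ReturnValues="ALL_OLD",                       # the deleted item is returned
)
deleted_item = resp.get("Attributes")
# If the condition does not match, a ConditionalCheckFailedException is raised
# instead, and nothing is deleted.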
We want to run a query, in which all the items that are returned, are deleted. More clearly, what we want to do exactly is run a query, in which if an item matches the condition, it should be included in the response, and be deleted from Amazon DynamoDB. And then the query should go with the second option.So, after the query would respond, there would no such orders exist in database, since they were deleted on the go.An example workflow with 5 items (items sample img. below) would look like -A Query runs checking ifFrom=Kartik.The query comes on 1st item (1000) & finds that it matches the condition.It captures the item, and deletes it from the Table. Now, only the response contains this item, not the table.The query moves onto further items (1001&1002) and finds that they don't fit under the condition, so it doesn't even capture them, and does not delete too.The query finds the 4th item (1003) matching the condition. So, it captures it in the response, and deletes it from the table.Same as above for the 5th item (1004).Now, the query completes, and returns a response containing ONLY the 1st, 4th & 5th Item. Now if I go and look for them in DynamoDB, it would return an error because they were deleted from there.So, that's how I want the flow to be. Any chances of this being possible to do?Any help is appreciated! Thanks!
Can we query and delete item in Amazon DynamoDB at the same time?
From the Amazon.Lambda.CloudWatchEvents NuGet package you can use the CloudWatchEvent type. The trick is that CloudWatchEvent is a generic class whose type parameter depends on the event source. There are some event detail types defined in Amazon.Lambda.CloudWatchEvents, but depending on your event type you might have to create your own POCO to be used for the generic parameter, with the fields you care about.
I am trying to write a Lambda function in C# (.NET Core) that will handle when a CloudWatch event occurs in my account. I am using the Serverless Application Framework (https://www.serverless.com/) and have previously been successful with writing the handler code to respond to ApiGateway Requests/events. For the ApiGateway request handlers, the methods signature always had the same two parameters:public APIGatewayProxyResponse SampleHandler(RequestAPIGatewayProxyRequest request, ILambdaContext context)Per the docs (https://docs.aws.amazon.com/lambda/latest/dg/csharp-handler.html), the first parameter is defined as the "inputType" and is typically specific to the event that trips the function and the second parameter is the generic Lambda function context information. Currently, I've been unsuccessful in finding the corresponding object type of a Cloudwatch event.My serverless application framework YAML file has the event for wired up like so:functions: NewRevision: handler: CsharpHandlers::AwsDotnetCsharp.Handlers::NewDataExchangeSubscriptionRevision memorySize: 1024 # optional, in MB, default is 1024 timeout: 20 # optional, in seconds, default is 6 events: - cloudwatchEvent: event: source: - 'aws.dataexchange' detail-type: - 'Revision Published To Data Set'My question is, does anyone know what the appropriate object type that should be used in the method signature for a CloudWatch event?
C# .NET Core AWS Lambda Function Handler Signature for Cloudwatch Event
AWS always works with incremental snapshots. Even if you take an EBS volume snapshot, it will be incremental. Here is the link to the AWS document; please search for the word "incremental" on this page: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Backups.html
RDS Snapshot backup is full backup in the first time, and the second snapshot is incremental backup. I can find out about this in the following documents.https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.htmlThe first snapshot of a DB instance contains the data for the full DB instance. Subsequent snapshots of the same DB instance are incremental, which means that only the data that has changed after your most recent snapshot is saved.I'd like to know Aurora's snapshot taking is a full backup or a differential. Does anyone have any information on this?I've checked the following in the manual, but I can't confirm that Aurora's snapshot works with this text.https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Backups.htmlAurora backs up your cluster volume automatically and retains restore data for the length of the backup retention period. Aurora backups are continuous and incremental so you can quickly restore to any point within the backup retention period.And, I've checked the AWS re:Invent 2019 materials below. I thought take a full image snapshot of in each segment(per 10GB protection groups), does this right?https://youtu.be/Ul-j5fKfv2k?t=1095AWS re:Invent 2019: [REPEAT 1] Deep dive on Amazon Aurora with PostgreSQL compatibility (DAT328-R1)
Amazon Aurora Snapshot backups are full or incremental?
I struggled for months too because of the lack of tutorials online on how to deploy Angular Universal to AWS Elastic Beanstalk. And you will now be very happy to know how easy it is.First, run the commandnpm run build:ssrto build for production.Inside the dist folder, you will probably find a folder with your project name. Inside this folder you will find a "browser" folder and a "server" folder. Inside the "server" folder it is the main.js file.Your setup might be slightly different, but you will be able to adjust this explanation to your situation after you read my entire answer.Zip the dist folder.Let's now configure the environment in AWS Elastic Beanstalk.1) When you create an environment in Elastic Beanstalk, choose "Web server environment", and then on Platform branch config, choose the last option: "Node.js running on 64bit Amazon Linux". This is a very important step, since this is the only option that will enable you to configure the Container Options.2) On the Application code, choose "Upload your code" and upload your zip file.3) Click on Configure more options4) Click on the Edit button on the Software box.5) On the Node command field,typenode dist/yourProjectFolderName/server/main.jsThat's it!! Save and create your environment. Your app will work now. :-)
I have been trying and failing for over three days now to get this working, and am growing increasingly frustrated with my own lack of understanding on the topic - so this is my search for an answer that I've not yet found.I am using Angular 9.x and Angular Universal 9.x and am unable to work out how to deploy this to Elastic Beanstalk on a server running node. There are zero tutorials that explain how this should be done, as they are all aimed at those wanting to use Lambda on AWS. If someone could please point me in the right direction that would be great. I run npm run build:ssr --prod, and get the following in my dist folder:[I have tried deploying this folder by uploading it zipped, as well as triedeb deploywith my whole app - but all of these result in errors like the following (for eb deploy method)>[email protected]start /var/app/current > ng serve sh: ng: command not foundCould someone please point me in the correct direction?
Angular Universal - Deploying to AWS Elastic Beanstalk
Change sets are changes based on a template that are about to be applied. Drift is changes that you've manually made to your infrastructure. So it's a drift between the original template and the state of the infrastructure.
Can anyone explain the difference between Drift and Change Sets in AWS CloudFormation? Both seem to be lists of changes that have occurred since the last time the CloudFormation template was applied. Thanks!
What is the difference between CloudFormation drift and change sets?
As per the docs, you need to use a Base64 decoding tool or use the KCL library to get the data in the format it was sent: "The first thing you'll likely notice about your record in this part of the tutorial is that the data appears to be garbage – it's not the clear text testdata we sent. This is due to the way put-record uses Base64 encoding to allow you to send binary data. However, the Kinesis Data Streams support in the AWS CLI does not provide Base64 decoding because Base64 decoding to raw binary content printed to stdout can lead to undesired behavior and potential security issues on certain platforms and terminals. If you use a Base64 decoder (for example, https://www.base64decode.org/) to manually decode dGVzdGRhdGE= you will see that it is, in fact, testdata. This is sufficient for the sake of this tutorial because, in practice, the AWS CLI is rarely used to consume data, but more often to monitor the state of the stream and obtain information, as shown previously (describe-stream and list-streams). Future tutorials will show you how to build production-quality consumer applications using the Kinesis Client Library (KCL), where Base64 is taken care of for you. For more information about the KCL, see Developing KCL 1.x Consumers."
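If you read the stream with an SDK instead of the CLI, the Base64 step disappears; a small boto3 sketch using the stream and shard from the question (the region is an assumption):

import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")  # region assumed

shard_iterator = kinesis.get_shard_iterator(
    StreamName="mystream",
    ShardId="shardId-000000000000",
    ShardIteratorType="TRIM_HORIZON",
)["ShardIterator"]

for record in kinesis.get_records(ShardIterator=shard_iterator)["Records"]:
    # boto3 returns Data as raw bytes (already Base64-decoded), unlike the CLI.
    print(json.loads(record["Data"]))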
I have a Kinesis stream in AWS and can send data to it (JSON) using kinesis command and can get it back from a stream with:SHARD_ITERATOR=$(aws kinesis get-shard-iterator --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON --stream-name mystream --query 'ShardIterator' --profile myprofile) aws kinesis get-records --shard-iterator $SHARD_ITERATOR --profile myprofileOutput of this looks like something like:HsKCQkidmlkZW9Tb3VyY2UiOiBbCgkJCXsKCQkJCSJicmFuZGluZyI6IHt9LAoJCQkJInByb21vUG9vbCI6IFtdLAoJCQkJImlkIjogbnVsbAoJCQl9CgkJXSwKCQkiaW1hZ2VTb3VyY2UiOiB7fSwKCQkibWV0YWRhdGFBcHByb3ZlZCI6IHRydWUsCgkJImR1ZURhdGUiOiAxNTgzMzEyNTA0ODAzLAoJCSJwcm9maWxlIjogewoJCQkiY29tcG9uZW50Q291bnQiOiAwLAoJCQkibmFtZSI6ICJTUUVfQVRfUFJPRklMRSIsCgkJCSJpZCI6ICJTUUVfQVRfUFJPRklMRV9JRCIsCgkJCSJwYWNrYWdlQ291bnQiOiAwLAoJCQkicGFja2FnZXMiOiBbCgkJCQl7CgkJCQkJIm5hbWUiOiAiUEVBQ09DSy1MVEEiLAoJCQkJCSJpZCI6ICJmZDk5NTRmZC03NDYwLTRjZjItOTU5Ni05YzBhMjcxNTViODgiCgkJCQl9CgkJCV0KCQl9LAoJCSJ3b3JrT3JkZXJJZCI6ICJTUUVfQVRfSk9CX1NVQk1JU1How do I get actual JSON message in raw format (to look as JSON) - same way as it was in original when I sent it?Thanks
How to read data from Kinesis stream using AWS CLI?
If your S3 files are in an OK format you can use Redshift Spectrum. 1) Set up a hive metadata catalog of your S3 files, using AWS Glue if you wish. 2) Set up Redshift Spectrum to see that data inside Redshift (https://docs.aws.amazon.com/redshift/latest/dg/c-getting-started-using-spectrum.html). 3) Use CTAS to create a copy inside Redshift: create table redshift_table as select * from redshift_spectrum_schema.redshift_spectrum_table;
I have some JSON files in S3 and I was able to create databases and tables in Amazon Athena from those data files. That's done; my next target is to copy those created tables into Amazon Redshift. There are other tables in Amazon Athena which I created based on those data files. I mean, I created three tables using those data files which are in S3, and later I created new tables using those three tables. So at the moment I have 5 different tables which I want to create in Amazon Redshift, with data or without data. I checked the COPY command in Amazon Redshift, but there is no COPY command for Amazon Athena. Here is the available list: COPY from Amazon S3, COPY from Amazon EMR, COPY from Remote Host (SSH), COPY from Amazon DynamoDB. If there are no other solutions, I plan to create new JSON files based on the newly created tables in Amazon Athena in S3 buckets. Then we can easily copy those from S3 into Redshift, can't we? Are there any other good solutions for this?
How to directly copy Amazon Athena tables into Amazon Redshift?
LIST requests are charged at $0.005 per 1000 requests, so this shouldn't have a big impact on your charges. If you are frequently listing large buckets, you might consider using Amazon S3 Inventory. It can provide a daily CSV file with a listing of all objects in the bucket, including metadata.
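As a back-of-the-envelope check, assuming us-east-1 pricing and the 10,000 objects from the question:

import math

objects = 10_000
price_per_1000_lists = 0.005  # USD per 1,000 LIST (ListObjectsV2) requests, pricing assumed

# Each LIST call returns at most 1,000 keys, regardless of how the keys are
# spread across prefixes, so a full recursive listing needs:
requests = math.ceil(objects / 1000)            # 10 requests
cost = requests / 1000 * price_per_1000_lists   # about $0.00005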
Suppose I have 10,000 files in an AWS S3 bucket, each placed in a subdirectory /year/month/day/hour/file. A LIST request can return up to 1000 objects. Will a recursive list on this bucket be billed as 10 LIST operations, or as 10,000 operations plus LIST operations for the root directories?
How to estimate recursive `aws s3 ls` costs?
Check out all the environment variables that are present in the AWS Lambda runtime environment, specifically AWS_LAMBDA_LOG_GROUP_NAME and AWS_LAMBDA_LOG_STREAM_NAME.
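A small sketch of reading those variables inside a Python Lambda and attaching the log stream name to the trace as an X-Ray annotation; it assumes the aws-xray-sdk is bundled with the function and the subsegment name is arbitrary:

import os
from aws_xray_sdk.core import xray_recorder

def handler(event, context):
    log_group = os.environ["AWS_LAMBDA_LOG_GROUP_NAME"]
    log_stream = os.environ["AWS_LAMBDA_LOG_STREAM_NAME"]  # or context.log_stream_name
    # In Lambda the root segment is managed by the service, so annotations
    # go on a subsegment.
    with xray_recorder.in_subsegment("log-info") as subsegment:
        subsegment.put_annotation("log_group", log_group)
        subsegment.put_annotation("log_stream", log_stream)
    # ... rest of the handler ...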
Context:We have a Lambda python function with many simultaneous invocations of the same function creating many unique log streams in cloudwatch. Additionally we use Xray on this lambda function. Using Xray we can quickly find an erroring invocation, however going from Xray to Cloudwatch is a pain, because the "Search Log Group" feature in the AWS console does not work as it will simply not load. Loading a specific log stream will work easily, hence we would like to annotate the Xray events with the name of the log stream.Question:The unique identifier of the log stream uniquely identifies the container on which the lambda is running. I do not know however to get this id from inside to function such that I can pass it to xray. How to get the unique identifier of the log stream from inside a lambda function?
How to get the unique identifier of the log stream for a AWS lambda function?
I think this is what you are looking for:
I have a recurring task where I need to clone an existing EMR cluster (except with a different name). I have been doing this in the AWS Console (basically, finding the EMR cluster in the console, clicking "Clone", changing the name, then "Create cluster"). Is there a way to do this on the command line so that I can automate it? I have checked aws emr create-cluster help but nothing seems relevant. Thanks!
How to clone an AWS EMR cluster in command line?
The recommended way to handle secret storage within AWS is AWS Secrets Manager. Secrets Manager stores secrets in a secured fashion as key-value pairs. The key benefit is that it allows you to administer access to those secrets via IAM roles and permission abstractions, and retrieve them with the SDK of your choice, such as boto3 for example. Secrets Manager is actually also used by Amazon SageMaker for git credential storage in the case of third-party git integrations.
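A rough boto3 sketch of reading such a secret from a SageMaker notebook or job; the secret name and key names are placeholders, and the execution role needs secretsmanager:GetSecretValue on that secret:

import json
import boto3

secrets = boto3.client("secretsmanager")

secret = json.loads(
    secrets.get_secret_value(SecretId="db-credentials")["SecretString"]
)
db_user = secret["DB_USER"]   # key names are whatever you stored in the secret
db_pass = secret["DB_PASS"]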
I used to hide connection credentials in environment variables (.bash_profile). Recently, working with SageMaker, I tried a similar process with the terminal available in SageMaker, but I am getting the following error: NameError: name 'DB_USER' is not defined. Is there any efficient way to hide the credentials in SageMaker?
Hiding Secret Keys in SageMaker (Environment Variables?)
You can use artifacts to upload the dist folder to S3. I would suggest not using a post_build command to achieve this, because the post_build command runs even when the build has failed; this is a known limitation of CodeBuild. Just replace your buildspec with the following:
version: 0.2
phases:
  install:
    commands:
      - npm install
  build:
    commands:
      - npm run generate
artifacts:
  files:
    - '**/*'
  base-directory: 'dist'
'**/*' means it will upload all the files and folders under the base directory "dist". You need to specify your bucket name in the AWS console (browser). Also make sure that your CodeBuild IAM role has sufficient permission to access your bucket.
I am trying to configure a CodePipeline on AWS that it takes my Nuxt website on Github, run the commandnpm run generateto generate the static website then upload thedistfolder on an S3 bucket.Here what mybuildspec.ymlit looks like:version: 0.2 phases: install: commands: - npm install build: commands: - npm run generate post_build: commands: - aws s3 sync dist $S3_BUCKETThe error I get is:The user-provided path dist does not exist.Is anyone know how to correct this? I read a lot about artefacts but I never use them before…Thanks in advance,
How to upload a generated folder content into S3 using CodeBuild?
API Gateway has a hard limit of 30 seconds. If your lambdas regularly take over 30 seconds (and you really need to use an API endpoint instead of a schedule, SQS or other source), you should use the lambda behind the gateway to trigger another lambda that does the actual work and give a response something like { "file_id": "some_id", "status": "in_progress" }. Then fetch the result of the work from another API endpoint. And ideally you should also have another endpoint to check the status of the work, so the user of the API knows when it is done and the results are ready for download.
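A minimal Python sketch of the front Lambda in that pattern; the worker function name and payload are placeholders:

import json
import boto3

lambda_client = boto3.client("lambda")

def api_handler(event, context):
    # Kick off the long-running worker asynchronously and return right away,
    # well inside API Gateway's 30-second limit.
    lambda_client.invoke(
        FunctionName="long-running-worker",   # hypothetical worker function
        InvocationType="Event",               # asynchronous invocation
        Payload=json.dumps({"file_id": "some_id"}),
    )
    return {
        "statusCode": 202,
        "body": json.dumps({"file_id": "some_id", "status": "in_progress"}),
    }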
I'm using a Lambda function to process a large amount of data (which takes more than 30s) and I am receiving a message from AWS Gateway: "Endpoint request timed out". I understand this is obviously because of the default timeout with AWS Gateway; however, my Lambda function is set to run for up to 15 minutes. What is the best way to increase this timeout? Surely this can be done, considering lambdas can be set to execute for a much longer time. Thanks
AWS Gateway timeout
See the documentation here. In your case you have to use RunTask. Run Task API: "The RunTask action is ideally suited for processes such as batch jobs that perform work and then stop. For example, you could have a process call RunTask when work comes into a queue. The task pulls work from the queue, performs the work, and then exits." Start Task API: "Custom schedulers use the StartTask API operation to place tasks on specific container instances within your cluster. Custom schedulers are only compatible with tasks using the EC2 launch type. If you are using the Fargate launch type for your tasks, the StartTask API does not work."
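For reference, a boto3 sketch of roughly what a RunTask call for a Fargate task looks like; the cluster, task definition and subnet ID are placeholders, and the caller needs ecs:RunTask (plus iam:PassRole for the task's roles):

import boto3

ecs = boto3.client("ecs")

ecs.run_task(
    cluster="my-cluster",
    launchType="FARGATE",
    taskDefinition="my-batch-job:1",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)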
I was writing AWS IAM permissions. Our use case is only AWS Fargate. After reading the documentation it seems like StartTask is not required for AWS Fargate, but I will need to give permissions for RunTask. I could not find any document related to this. Can anybody confirm or point to the docs? Thanks
StartTask V/S RunTask in Fargate
After further investigation (and AWS support help), the working (only on creation) example looks like this:
Tags:
  Name: !Ref Identifier
Additionally, tags cannot be modified (the docs actually state that tags changes require replacement); when I tried, a slightly confusing error showed up: "CloudFormation cannot update a stack when a custom-named resource requires replacing. Rename kafka-eu-west-1-dev and update the stack again."
When creatingAWS::MSK::Clusterwith Cloud Formation I am not able to setTagsin the usual way:Tags: - Key: Name Value: !Ref IdentifierBecause of this error:Property validation failure: [Value of property {/Tags} does not match type {Map}]As of the time of writing, the documentation states that, instead of the usualType: List of Tag, I should use:Type: Json.Also the same documentation states that:You can specify tags in JSON or in YAML, depending on which format you use for your template
AWS MSK Cloud Formation Tags problems
(As I was about to post my question I figured out the answer by looking into howUnixTimeworked).To use a custom Marshaler and Unmarshaler you can create a custom type.type MillisTime time.Time func (e MillisTime) MarshalDynamoDBAttributeValue(av *dynamodb.AttributeValue) error { millis := timeAsMillis(time.Time(e)) millisStr := fmt.Sprintf("%d", millis) av.N = &millisStr return nil } func (e *MillisTime) UnmarshalDynamoDBAttributeValue(av *dynamodb.AttributeValue) error { millis, err := strconv.ParseInt(*av.N, 10, 0) if err != nil { return err } *e = MillisTime(millisAsTime(millis)) return nil } func timeAsMillis(t time.Time) int64 { nanosSinceEpoch := t.UnixNano() return (nanosSinceEpoch / 1_000_000_000) + (nanosSinceEpoch % 1_000_000_000) } func millisAsTime(millis int64) time.Time { seconds := millis / 1_000 nanos := (millis % 1_000) * 1_000_000 return time.Unix(seconds, nanos) }NOTE: Example above uses the new number literal syntax introduced in go 1.13.You can easily marshal and unmarshal structs usingMarshalMapandUnmarshalMapbut the downside is that the fields in your struct type have to use MillisTime instead oftime.Time. Conversion is not easy but is possible.The SDK defines aUnixTimetype which will handle marshaling and unmarshaling betweentime.Time<=> seconds since epoch.
The SDK by default marshals time.Time values as RFC3339 strings. How can you choose to marshal and unmarshal in other ways, e.g. millis since epoch? The SDK mentions the Marshaler and Unmarshaler interfaces but does not explain how to use them.
DynamoDB Marshal and unmarshal golang time.Time as millis since epoch
Yes, you can. Take a look at the AWS CLI documentation, "Use of Exclude and Include Filters": "Currently, there is no support for the use of UNIX style wildcards in a command's path arguments. However, most commands have --exclude "<value>" and --include "<value>" parameters that can achieve the desired result. These parameters perform pattern matching to either exclude or include a particular file or object." For example, if the filter parameters passed to the command were --exclude "*" --include "*.txt", all files would be excluded from the command except for files ending with .txt.
I have an S3 bucket from which I would like to copy: the entire directory structure (all directories and child directories, at any depth), and, wherever they are in the directory structure, all files that match a certain file-name pattern (e.g. *.log, *070719*.csv, etc.). Is there any way to do this from the AWS CLI?
'aws s3 sync' only copy files with a certain extension
These are all published to Maven Central, so first check which Hadoop version spark 2.3 depends on. It says hadoop-client 2.6.5. Luckily, hadoop-aws follows the same versioning, so this is the corresponding AWS dependency. Finally, we can see that hadoop-aws 2.6.5 depends on aws-java-sdk 1.7.4.
I use Spark 2.3.0 and Scala 2.11.8. What are the compatible versions of the following libraries: hadoop-aws and aws-java-sdk?
hadoop-aws and aws-java-sdk versions compatible for Spark 2.3
You have to use array buffer in body stream to pass data object. As per the aws documentation you can pass data stream, string, array buffer or blob data type in body parameter.Please check below code, which will resolve your issue,import fs from "react-native-fs"; import { decode } from "base64-arraybuffer"; uploadImageOnS3 = async() => { var S3 = require("aws-sdk/clients/s3"); const BUCKET_NAME = "testtest"; const IAM_USER_KEY = "XXXXXXXXXXXXX"; const IAM_USER_SECRET = "XXXXX/XXXXXXXXXXXXXXXXXXXXXX"; const s3bucket = new S3({ accessKeyId: IAM_USER_KEY, secretAccessKey: IAM_USER_SECRET, Bucket: BUCKET_NAME, signatureVersion: "v4" }); let contentType = "image/jpeg"; let contentDeposition = 'inline;filename="' + this.state.s3BucketObj + '"'; const fPath = this.state.fileObj.uri; const base64 = await fs.readFile(fPath, "base64"); //console.log(base64); const arrayBuffer = decode(base64); //console.log(arrayBuffer); s3bucket.createBucket(() => { const params = { Bucket: BUCKET_NAME, Key: this.state.s3BucketObj, Body: arrayBuffer, ContentDisposition: contentDeposition, ContentType: contentType }; s3bucket.upload(params, (err, data) => { if (err) { console.log("error in callback"); console.log(err); } // console.log('success'); console.log(data); }); }); };
I am usingaws-sdkfor upload image on the s3 bucket. Please look at my code below I already spend one day in it.uploadImageOnS3 = () => { var S3 = require("aws-sdk/clients/s3"); const BUCKET_NAME = "testtest"; const IAM_USER_KEY = "XXXXXXXXXXXXX"; const IAM_USER_SECRET = "XXXXX/XXXXXXXXXXXXXXXXXXXXXX"; const s3bucket = new S3({ accessKeyId: IAM_USER_KEY, secretAccessKey: IAM_USER_SECRET, Bucket: BUCKET_NAME }); let contentType = "image/jpeg"; let contentDeposition = 'inline;filename="' + this.state.s3BucketObj + '"'; let file= { uri: this.state.fileObj.uri, type: this.state.fileObj.type, name: this.state.fileObj.fileName }; s3bucket.createBucket(() => { const params = { Bucket: BUCKET_NAME, Key: this.state.s3BucketObj, Body: file, ContentDisposition: contentDeposition, ContentType: contentType }; s3bucket.upload(params, (err, data) => { if (err) { console.log("error in callback"); console.log(err); } // console.log('success'); console.log(data); }); }); };Error:Unsupported body payload objectPlease help me to short out I am also usingreact-native-image-pickerfor image upload.
react-native through upload image on s3 Bucket using aws-sdk
The accepted answer here from mavriksc simply states that you need to do exactly what the error message says ("delete the login profile"), but does not offer any clues as to how to do that. If you need to just delete a user's login profile manually (not using the Java SDK), you can do so using the AWS CLI delete-login-profile method: aws iam delete-login-profile --user-name=username. After doing that, the Java SDK deleteUser() method should succeed.
Is it possible to delete a user while using the AWS Java SDK? I have tried to delete a user and there is an error message: "Cannot delete entity, must delete login profile first." The relevant code snippet is: AWSIam.deleteUser(new DeleteUserRequest().withUserName(user));
Cannot delete entity, must delete login profile first
The question is about connecting the datapoints in the graph with a line. Gaps in your graph tell you that there are no datapoints for those timestamps. As mickzer explained, drawing a line where there are no datapoints has nothing to do with the Link graphs feature from the screenshot. To have CloudWatch bridge the gaps in your metric, you can use the FILL() metric math function. Given your data, you can fill the gaps: with a static value (0 in this case), by repeating the last value, or by linearly interpolating the missing values.
I upload some data points to AWS CloudWatch and a Line widget. I choose Action -> Link Graphs and it doesn't work. The first two data points are linked, but not the rest.
How to set cloudwatch to link my data point
Firstly, please note that keypairs are an industry standard for accessing Linux systems. Amazon EC2 supports their use, but the concept of keypairs was not created by AWS. Therefore, any method of using keypairs with Linux systems in general will also apply to Amazon EC2 Linux instances. When you ssh into a Linux instance, you supply a username and the private half of a keypair. The Linux system will look in the nominated user's .ssh/authorized_keys file and will attempt to find the matching public half of the keypair. If found, it will allow you to start the ssh session. Therefore, any keypair can be added to a user's .ssh/authorized_keys file. It can include multiple keypairs, all of which would be permitted to log in as that user. As a convenience, Amazon EC2 allows you to create or upload keypairs to AWS. They will appear in the Key Pairs section of the console. Then, when launching a new Amazon EC2 instance, you can nominate one of those keypairs. Software installed on the EC2 instance will copy the public half of the keypair to the /home/ec2-user/.ssh/authorized_keys file. Bottom line: You can use the same keypair on multiple instances, and you can also use multiple keypairs for the same user on an instance.
We know that a key pair must exist in order to access an EC2 instance. I created a key pair when I created the EC2 instance, but I saw the phrase that I could use an existing key pair. Does this mean that if you are using an existing key pair, you can access multiple instances with one key pair?
Can I use an existing key pair when creating a new EC2 instance?
If a message has been received from a FIFO Amazon SQS queue and it is still invisible ("in-flight"), then SQS will not provide another message with the same MessageGroupId. Therefore, multiple consumers on the same queue will receive messages with different MessageGroupIds, and message order within a given MessageGroupId will be retained. The important thing here is to use the same MessageGroupId for messages whose order you wish to retain, but do not use the same MessageGroupId for every message. See: AWS SQS FIFO - How to get more than 10 messages at a time?
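As a sketch of the producer side in Python, where all events of one aggregate share a MessageGroupId (queue URL and IDs are placeholders; no MessageDeduplicationId is needed because content-based deduplication is enabled):

import json
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111111111111/events.fifo"  # placeholder

# Events of one aggregate share a MessageGroupId, so their order is kept and
# only one consumer works on that aggregate at a time, while different
# aggregates can be processed in parallel by the other consumers.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"event": "SomethingHappened"}),
    MessageGroupId="aggregate-42",   # hypothetical aggregate id
)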
I have three SQS FIFO queues where each one has a data projection listener daemon in EC2 instance as docker pods (SQL Server, PostgreSQL, Elastic Search, etc.)All queues have the same settings as below (Dead-Letter Queues to setup later).Queue Type: FIFO Messages Delayed: 0 Content-Based Deduplication: Enabled Default Visibility Timeout: 30 seconds Message Retention Period: 14 days Maximum Message Size: 256 KBThis is all part of an Event Sourcing architecture I am designing using DynamoDB Stream => Lambda SQS Router => SQS FIFO Queues (due to SNS not supporting FIFO queues as subscribers)Content-Based Deduplicationis enabled to avoid duplicate messages in the queue since an error is always possible in the Lambda Router for any of the queues.Now, I also have set theMessageGroupIdfor each message to the AggregateId to group them but don't really understand how that is utilized by the consumer side;I have only one consumer per SQS queue at the moment but what if I want to scale consumers. Is an application concern to make sure multiple consumers will not process messages from the same MessageGroupId; - which is unacceptable since using FIFO queues is due to order retention of events in the system!
How to scale a SQS FIFO queue multiple listeners
They are the same hardware. The difference is that m5.metal is a Bare Metal server that allows you to use a different hypervisor, such as Hyper-V. Unless you really know what you are doing, just select a normal m5.24xlarge instance.
Are there any differences betweenm5.24xlargeandm5.metal?According toAWS, both instance types cost the same ($4.608 per Hour) and have the same specifications:ECUs (EC2 Compute Unit): 345vCPU: 96Physical Processor: Intel Xeon Platinum 8175Clock Speed: 2.5 GHzMemory: 768GiBInstance Storage (GB): 4 x 900 (SSD)Network Performance: 25 GigabitCost: $4.608 per Hour
AWS EC2: m5.24xlarge vs m5.metal
With aws-sdk you can turn an Item from a DynamoDB response into a more normal looking object using the Converter class available in the SDK:So ifdata1looks like this:const data1 = { Item: { "AlbumTitle": { S: "Songs About Life" }, "Artist": { S: "Acme Band" }, "SongTitle": { S: "Happy Day" } } }Passdata1.Iteminto theunmarshallfunction like so:const flat = AWS.DynamoDB.Converter.unmarshall(data1.Item);And nowflatwill look like this:{ "AlbumTitle": "Songs About Life", "Artist": "Acme Band", "SongTitle": "Happy Day" }So you can access the properties like normal:console.log(flat.Artist) #=> "Acme Band"
getting an item from my aws database. 'test2' from below prints correctly as an item in my console. But I want to get a attribute/variable from it in the item, and return it as var test. How would I do that? For example if i wanted to get the attribute name 'problem' and return it?var test; ddb.getItem(param, function(err, data1) { if (err) { console.log("Error", err); } else { var test2 = JSON.stringify(data1); console.log("Get Success", test2); test = JSON.stringify(data1, undefined, 1); } }); speechOutput = `Ok ${test}. Thanks, I have reported this. Do you have anything else to report?`; callback(sessionAttributes, buildSpeechletResponse(cardTitle, speechOutput, repromptText, shouldEndSession));
How to retrieve specific object from a getItem dynamoDB (JavaScript)?
Every time your application sends a request that exceeds your capacity you get a ProvisionedThroughputExceededException message from Dynamo. However, your SDK handles this for you and retries. The default Dynamo retry time starts at 50ms, the default number of retries is 10, and backoff is exponential by default. This means you get retries at: 50ms, 100ms, 200ms, 400ms, 800ms, 1.6s, 3.2s, 6.4s, 12.8s, 25.6s. If after the 10th retry your request has still not succeeded, the SDK passes the ProvisionedThroughputExceededException back to your application and you can handle it how you like. Note that you can change the default retry behaviour of your SDK, for example: new AWS.DynamoDB({maxRetries: 13, retryDelayOptions: {base: 200}}); This would mean you retry 13 times, with an initial delay of 200ms. This would give your request a total of 819.2s to complete rather than 25.6s.
I have a table which has throttled write request at a specified time. I want to understand more about how AWS-SDK handle them.For my current understanding, DynamoDB will return an error to my Lambda. That's why I will have user errors in DynamoDB Table Metrics. However, AWS-SDK has error-handling and retry strategy which helps me to retry and write the throttled requests back to the table. Is it correct?
AWS DynamoDB Throttled Write Request Handling
There is not. Athena will always write the results to S3 (even with the new semi-private "streaming" API that is used by the JDBC driver). The only way to know when an Athena query has completed is to poll using the GetQueryExecution API call. Even seemingly synchronous APIs like the JDBC driver use this method internally. However, there is no need to read the response from S3; there is also the GetQueryResults API call that returns the result along with type information. If there are fewer than 1000 rows in the response, or performance is not the top priority, it's a better way to retrieve the results than reading the CSV file from S3. If you're using Athena from Lambda, my suggestion is to look at Step Functions. Unless your Athena queries never run more than a few seconds, you can save a lot of money by building a simple state machine that executes the query. You can find a good blueprint in the job poller sample project.
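For completeness, a boto3 sketch of the start/poll/fetch cycle; the database, query and output location are placeholders, and a Step Functions poller would replace the sleep loop if you don't want to pay for the Lambda wait time:

import time
import boto3

athena = boto3.client("athena")

query_id = athena.start_query_execution(
    QueryString="SELECT * FROM my_table LIMIT 10",
    QueryExecutionContext={"Database": "my_database"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]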
I am looking to query data in my S3 buckets using Athena from my AWS Lambda. When I looked at some of the examples the call from Lambda to Athena seems to be asynchronous. The Lambda makes a call to Athena and waits for Athena to write the results to S3 bucket. Is there a way to directly retrieve the response instead of having to write it to a S3 bucket?
Synchronous call from AWS Lambda to Athena
Fixed. The Value key simply needed the ".$" postfix."Parameters": { "ContainerOverrides": { "Environment": [ { "Name": "PARAM_1", "Value.$": "$.param_1" }
What's the proper way to send part of a Step Function's input to a Batch Job?I've tried setting and env var using Parameters.ContainerOverrides.Environment like this:"Parameters": { "ContainerOverrides": { "Environment": [ { "Name": "PARAM_1", "Value": "$.param_1" }Step function input looks like this:{ "param_1": "value-goes-here" }But the batch job just ends up getting invoked with literal "$.param_1" in the PARAM_1 env var.
How to pass Step Function input to Batch Job
I know it's late, but I think the issue is with the "Serde serialization lib". In AWS Glue, click on the table --> Edit Table --> check "Serde serialization lib"; its value should be "org.apache.hadoop.hive.serde2.OpenCSVSerde". Then click Apply. This should solve your issue. Below is a sample image for your reference.
When I query my files from Data Catalog using Athena, all the data appears wrapped with quotes. Isit possible to remove those quotes?I tried addingquoteCharoption in the table settings, but it didnt helpUPDATEAs requested, the DDL:CREATE EXTERNAL TABLE `holidays`( `id` bigint, `start` string, `end` string, `createdat` string, `updatedat` string, `deletedat` string, `type` string) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' WITH SERDEPROPERTIES ( 'quoteChar'='\"') STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' LOCATION 's3://pinfare-glue/holidays/' TBLPROPERTIES ( 'CrawlerSchemaDeserializerVersion'='1.0', 'CrawlerSchemaSerializerVersion'='1.0', 'UPDATED_BY_CRAWLER'='pinfare-holidays', 'averageRecordSize'='84', 'classification'='csv', 'columnsOrdered'='true', 'compressionType'='none', 'delimiter'=',', 'objectCount'='1', 'recordCount'='29', 'sizeKey'='2494', 'skip.header.line.count'='1', 'typeOfData'='file')
AWS Glue/Data catalog showing quotes around data
You need to create a new Security Group inbound rule: 1. Go to "Security group rules" (under "Connectivity & security"). 2. Click the "default" security group. 3. Click "Actions" > "Edit inbound rules" > "Add rule". 4. Select Type: "All traffic", Source: "My IP", then click "Save rules".
I am trying to manipulate db directly from pgAdmin4 but I cannot connect.What I checked and did areI read this doc and followinghttps://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ConnectToPostgreSQLInstance.html#USER_ConnectToPostgreSQLInstance.Troubleshootingand input the information about the db instance as the doc does.However I couldn't connect and I checked the security group. The VPC security group is like this.What else should I check? I totally have no idea how I can fix this. The only concern is the password's current value is always going to be empty even after I set a password.Anyone could help me? I have to connect and manipulate db directly.
AWS RDS and pgAdmin4 : Unable to connect to server: could not connect to server: Connection timed out
Ran into the same problem and was able to figure it out by writing a few lines with the aws-cdk to generate the filter pattern template and see the difference between that and what I had. It seems like it needs each piece of criteria wrapped in parentheses:
- FilterPattern: '{ $.priority = "ERROR" && $.message != "*SomeMessagePattern*" }'
+ FilterPattern: '{ ($.priority = "ERROR") && ($.message != "*SomeMessagePattern*") }'
It is unfortunate that the AWS docs for MetricFilter in CloudFormation have no examples of JSON patterns.
I am trying to define a metric filter, in an AWS CloudFormation template, to match JSON-formatted log events from CloudWatch. Here is an example of the log event:{ "httpMethod": "GET", "resourcePath": "/deployment", "status": "403", "protocol": "HTTP/1.1", "responseLength": "42" }Here is my current attempt to create a MetricFilter to match the status field using the examples given from the documentation here:FilterAndPatternSyntax"DeploymentApiGatewayMetricFilter": { "Type": "AWS::Logs::MetricFilter", "Properties": { "LogGroupName": "/aws/apigateway/DeploymentApiGatewayLogGroup", "FilterPattern": "{ $.status = \"403\" }", "MetricTransformations": [ { "MetricValue": "1", "MetricNamespace": "ApiGateway", "DefaultValue": 0, "MetricName": "DeploymentApiGatewayUnauthorized" } ] } }I get a "Invalid metric filter pattern" message in CloudFormation.Other variations I've tried that didn't work:"{ $.status = 403 }" <- no escaped characters { $.status = 403 } <- using a json object instead of stringI've been able to successfully filter for space-delimited log events using the bracket notation defined in a similar manner but the json-formatted log events don't follow the same convention.
How do I define an AWS MetricFilter FilterPattern to match a JSON-formatted log event in CloudWatch?
For anyone looking for the answer: include only the Dockerrun.aws.json file in the artifacts of buildspec.yml and point its image field to the ECR image.
I'm trying to combine ECR and Elastic Beanstalk with the following CodePipeline setup:Source : CodeCommitBuild :buildspec.ymlwhich Builds a docker image and pushes it to ECR repositoryDeploy: Elastic BeanstalkNote that Step 2 doesn't contain any artifacts, it merely builds the new image from the source code by usingdocker build -t <my-image> .and pushes it to ECR with the latest tag.My Questions are:How do you trigger beanstalk from step 3 to use the latest ECR image?Which artifacts should be included (if any) from step 1/2?Is the artifact is just the sameDockerrun.aws.jsonwhich point to the ECR image file every time?Alternative way: Should I just deploy the entire source code to beanstalk and let it use the Dockerfile in the package instead so it will build it?if so - Where can I see the build process of the image?Is there a way to select a different Dockerfile from the source code?
CodePipeline: How to integrate ECR with Elastic Beanstalk?
The option is called "Block public and cross-account access if bucket has public policies". When this was TRUE, it meant that the bucket policy only applied to the bucket owner. For details, refer to: Using Amazon S3 Block Public Access - Amazon Simple Storage Service. These four new settings could be considered annoying because they are additional blockages on trying to make content public, but they are probably going to save many organizations a lot of embarrassment by preventing accidental public exposure of data.
Im trying to make files on my s3 bucket (CSS JS files) accessible to a Django application running in heroku.I think I have the settings.py configured correctly.However when I try to make changes to permissions in the S3 bucket i get access denied.I added cors and bucket policy is set to public.Ultimately when I load the application from heroku Im getting 403 errors when trying to access the static files.Bucket policy:{ "Version": "2012-10-17", "Statement": [ { "Sid": "AddPerm", "Effect": "Allow", "Principal": { "AWS": "*" }, "Action": "s3:*", "Resource": "arn:aws:s3:::NameOfBucket/*" } ] }CORs configuration:<?xml version="1.0" encoding="UTF-8"?> <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"> <CORSRule> <AllowedOrigin>*</AllowedOrigin> <AllowedMethod>GET</AllowedMethod> <MaxAgeSeconds>3000</MaxAgeSeconds> <AllowedHeader>Authorization</AllowedHeader> </CORSRule> </CORSConfiguration>How can I get the permissions to make changes in the s3 bucket please?
How do I resolve access denied aws s3 files?
Problem:I also faced same issue. The reason was that I was using difference artefact versions for my aws libraries: aws-java-sdk-core and aws-java-sdk-s3.Solution (Maven):In case you are using Maven, Amazon suggests to use BOM dependencyManagement. This way you ensure that the modules you specify use the same version of the SDK and that they're compatible with each other.<dependencyManagement> <dependencies> <dependency> <groupId>com.amazonaws</groupId> <artifactId>aws-java-sdk-bom</artifactId> <version>1.11.522</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>And as you already declared the SDK version in the BOM, you don't need to specify the version number for each component. Like this:<dependencies> <dependency> <groupId>com.amazonaws</groupId> <artifactId>aws-java-sdk-s3</artifactId> </dependency> <dependency> <groupId>com.amazonaws</groupId> <artifactId>aws-java-sdk-sqs</artifactId> </dependency> </dependencies>Source:https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/setup-project-maven.html#configuring-maven-individual-components
I am facing below error while using AWS SES Mail sending example?"Exception in thread "main" java.lang.NoSuchMethodError: com.amazonaws.client.AwsSyncClientParams.getAdvancedConfig()Lcom/amazonaws/client/builder/AdvancedConfig; at com.amazonaws.services.simpleemail.AmazonSimpleEmailServiceClient.<init>(AmazonSimpleEmailServiceClient.java:277) at com.amazonaws.services.simpleemail.AmazonSimpleEmailServiceClient.<init>(AmazonSimpleEmailServiceClient.java:261) at com.amazonaws.services.simpleemail.AmazonSimpleEmailServiceClientBuilder.build(AmazonSimpleEmailServiceClientBuilder.java:61) at com.amazonaws.services.simpleemail.AmazonSimpleEmailServiceClientBuilder.build(AmazonSimpleEmailServiceClientBuilder.java:27) at com.amazonaws.client.builder.AwsSyncClientBuilder.build(AwsSyncClientBuilder.java:46) at saurabh.aws.learning.awsLearning.SendMailService.main(SendMailService.java:50) "
AWS SES Service For sending mail using java
I want you to consider the following behaviour of a Lambda function: let's say you spin one lambda up, and then you send a second message to the lambda. If your first lambda finished before you sent the second message, the same lambda instance will run the message. So this is why you see it changed the file: it's on the same instance with the same files. I would suggest loading the JSON into memory and not changing the file directly. That will solve your problem.
I have an AWS Lambda function. which have an array on a .json file. now the thing is that I want to modify that .json but after the run, the json remains exactly the same than before the run. The logs I place there make me think that is actually being modified, but, I wonder if a lambda goes back to its definition before the run. tbh the information that I need to hold in that json is going to be always just a small amount of settings but those are going to be easy to modify without making a deploy and im trying to avoid using a db or an s3 bucket.Regards, Daniel
Can an AWS Lambda modify a json file on itself?
Let me answer your questions inline. "Whenever I make a change to my S3 bucket my CloudFront doesn't update to the new content. I have to create an invalidation every time in order to see the new content." Yes, this is the default behavior in CloudFront unless you have defined the TTL values to be zero (0). "Is there another way to make CloudFront load the new content whenever I push content to my S3 bucket?" You can automate the invalidation using AWS Lambda. To do this: create an S3 event trigger to invoke a Lambda function when you upload any new content to S3; inside the Lambda function, write the code to invalidate the CloudFront distribution using the AWS CloudFront SDK createInvalidation method. Note: make sure the Lambda function has an IAM role with policy permission to trigger an invalidation in CloudFront.
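A rough Python sketch of such a Lambda function triggered by the S3 event; the distribution ID is a placeholder and the function's role needs cloudfront:CreateInvalidation:

import time
from urllib.parse import unquote_plus

import boto3

cloudfront = boto3.client("cloudfront")
DISTRIBUTION_ID = "E1234567890ABC"  # placeholder distribution id

def handler(event, context):
    # Invalidate exactly the objects that were just uploaded to S3.
    paths = ["/" + unquote_plus(r["s3"]["object"]["key"])
             for r in event["Records"]]
    cloudfront.create_invalidation(
        DistributionId=DISTRIBUTION_ID,
        InvalidationBatch={
            "Paths": {"Quantity": len(paths), "Items": paths},
            "CallerReference": str(time.time()),  # must be unique per call
        },
    )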
Whenever I make a change to my S3 bucket my CloudFront doesn't update to the new content. I have to create an invalidation every time in order to see the new content. Is there another way to make CloudFront load the new content whenever I push content to my S3 bucket?
AWS CloudFront Not Updating
Replica lag for Aurora is very small relative to non-Aurora read replicas, but is still a non-zero value which you can monitor with the CloudWatch metrics AuroraBinlogReplicaLag and AuroraReplicaLag - documented more extensively at https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Monitoring.html. Specific to your question, Aurora doesn't write to all 6 copies of the storage synchronously - only 4. A 4-part blog super deep dive on how this storage system works can be found at https://aws.amazon.com/blogs/database/amazon-aurora-under-the-hood-quorum-and-correlated-failure, and I encourage everyone to read it. You can also read more about Aurora Replication at https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html. But Steve Buzonas is correct - if you need guaranteed read-after-write SERIALIZABLE reads, then you need to read from the writer instance endpoint: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.Endpoints.html
From what I understand from reading Amazon aurora documentation, even if Aurora master node synchronously write the WAL log to 4 of 6 storage nodes. Unless there is switch of master, the Aurora slave are only kept in sync using asynchronous log shipping directly from the master node.If this is true, I would assume that it's possible for a client to write and commit a value to master node and then immediately send a read only query to one of the slave and observe the old value instead of the latest value that was just written.this would mean it can only support snapshot isolation mode on the slave.this seem like a very big limitation! And I wanted to make sure this is correct.
Does Amazon Aurora offer serializable isolation for read-only transaction running on slave nodes?
One general solution to this kind of requirement is reactive, not proactive. Write automation based on CloudTrail Logs or AWS Config or by simply enumerating the current state of your AWS account periodically, and raise alerts (or terminate resources) if your policies have not been complied with.
I am trying to see if its possible to restrict(set some max limit) the number of EC2 instances which are created by an IAM user? Can i create custom policy for this?Note:I am looking for IAM user level permission. Not AWS Account level restriction.Similarly i am also looking for restricting EBS storage limit per IAM user.
AWS IAM user - Limit number EC2 instances and limiting EBS storage
The maximum length for a Redshift statement is 16MB. Please see https://docs.aws.amazon.com/redshift/latest/dg/c_redshift-sql.html. It is much faster to move the data to S3 first and then use the Redshift COPY command if you need to load a lot of data regularly.
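If you do stay with multi-row INSERTs, a simple Python sketch of batching rows so each statement stays under that limit (naive value formatting, for illustration only):

MAX_STATEMENT_BYTES = 16 * 1024 * 1024  # Redshift's 16 MB statement limit

def batched_inserts(rows, table="xxxx"):
    """Yield INSERT statements that each stay under the 16 MB limit."""
    prefix = f"INSERT INTO {table} VALUES "
    values, size = [], len(prefix)
    for row in rows:
        chunk = "(" + ", ".join(str(v) for v in row) + ")"  # use real quoting/escaping in practice
        if values and size + len(chunk) + 2 > MAX_STATEMENT_BYTES:
            yield prefix + ", ".join(values)
            values, size = [], len(prefix)
        values.append(chunk)
        size += len(chunk) + 2
    if values:
        yield prefix + ", ".join(values)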
I am trying to batch multiple rows of data into a RedshiftINSERTquery. In order to keep it efficient, I want to know the largest length I can go before I need to start a new batch. If there is a better way to do this, please let me know.EDIT:I was a little vague. I am trying to got from Elasticsearch to Redshift. This results in a JSON format that I convert into:INSERT INTO xxxx VALUES (a1, a2, a3), (b1, b2, b3), (c1, c2, c3)
What is the max size for a Redshift insert query?
Since you are using AWS::EC2::SecurityGroup, you need to use the VPCSecurityGroups property to reference your imported security group instead of DBSecurityGroups; it fails because the SG you've specified is not a DBSecurityGroup. In other words, replace DBSecurityGroups with something like VPCSecurityGroups: [ !ImportValue SGRDS ]. There are two ways to set security groups for an RDS instance, which are described here:

1. DBSecurityGroups: security groups of type AWS::RDS::DBSecurityGroup. This was the older way of securing RDS instances.
2. VPCSecurityGroups: security groups of type AWS::EC2::SecurityGroup, which lets you specify VPC security groups to secure your RDS instance.
Created and exported a SG from one template/stack:

Resources:
  RDSSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: "sg-name"
Outputs:
  SGRDS:
    Description: security group of rds instances
    Value: !Ref RDSSecurityGroup
    Export:
      Name: SGRDS

However, although the export is created, when trying to use this SG in an RDS creation using another template (and stack)

Resources:
  MYRDS:
    Type: AWS::RDS::DBInstance
    Properties:
      DBSecurityGroups:
        - !ImportValue SGRDS

it fails with the following error:

DBSecurityGroup not found: sg-0983409kdje5999

Update: This does not seem to be a problem related to the exported value; assigning the specific SG to my RDS instance for some reason fails either way (I explicitly used the SG name, but I get the above "not found" error with the name instead of the id this time). For some reason it fails to find the SG.
AWS CloudFormation: Unable to find existing SG to assign to RDS instance
Actually it returns a Psr7\Stream object, so if we need to get the contents from the PSR-7 stream we have to call the getContents() method on the object.

<?php
$s3Client = new Aws\S3\S3Client(array(
    'stats' => TRUE,
    'http' => array(
        'verify' => FALSE,
        'connect_timeout' => 30
    ),
    'version' => 'latest'
));

$result = $s3Client->getObject(array(
    'Key' => $filename,
    'Bucket' => $bucketName
));

echo $result['Body']->getContents();

// Also you can get metadata like this
print_r($result['Body']->getMetadata());

Hope this will help someone who is actually using SDK version 3. The specification is here: https://docs.aws.amazon.com/aws-sdk-php/v3/api/class-GuzzleHttp.Psr7.Stream.html
I have tried to read content from a s3 object through the below code.

$content = $s3Client->getObject(array(
    'Bucket' => $bucketName,
    'Key' => $pathToObject,
    'ResponseContentType' => 'text/plain',
));

And I got below response

GuzzleHttp\Psr7\Stream Object
(
    [stream:GuzzleHttp\Psr7\Stream:private] => Resource id #87
    [size:GuzzleHttp\Psr7\Stream:private] =>
    [seekable:GuzzleHttp\Psr7\Stream:private] => 1
    [readable:GuzzleHttp\Psr7\Stream:private] => 1
    [writable:GuzzleHttp\Psr7\Stream:private] => 1
    [uri:GuzzleHttp\Psr7\Stream:private] => php://temp
    [customMetadata:GuzzleHttp\Psr7\Stream:private] => Array
        (
        )
)

Any help will be appreciated to read object content in S3.
AWS S3 getObject is not able to read object content through PHP SDK
The basic difference between a Dev/Test configuration for Amazon RDS and a Production configuration is that the Production configuration has Multi-AZ activated. This means that a secondary database is provisioned in case of failure of the primary database or the Availability Zone in which the database is running. Such failures are rare, and given that you are cost-conscious and you are not providing a commercial-grade application, using the Dev/Test configuration would be acceptable. Please note that the intention of the AWS Free Tier is "to gain free, hands-on experience with the AWS platform, products, and services". It is not intended as a way to host applications for free.
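If you ever want to confirm which configuration an existing instance is actually using, the MultiAZ flag on the instance tells you. A minimal boto3 sketch; the DB instance identifier is a placeholder for this example.

import boto3

rds = boto3.client("rds")

# "my-database" is a placeholder DB instance identifier.
response = rds.describe_db_instances(DBInstanceIdentifier="my-database")
instance = response["DBInstances"][0]

print("Engine:", instance["Engine"])
print("Instance class:", instance["DBInstanceClass"])
# True for a Production-style Multi-AZ deployment, False for a Dev/Test single-AZ one.
print("Multi-AZ:", instance["MultiAZ"])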
I am a student using an AWS free tier account. I don't have money to pay for the resources. I have developed an application for the society, and I don't want my website to face any downtime in production just because of the free service provided by Amazon. So, can I go with the Dev/Test RDS for my production application?
Is using Dev/Test RDS a bad idea for your project over Production RDS?
With an SNS trigger, or any other asynchronous trigger, there isn't any 'server' that receives the return value of the Lambda. For that reason, the Dead Letter Queue is the feature that makes it possible to handle errors in such a case, and it might be what you are looking for. If you wish to verify every message returned (and not only failures of the Lambda), you may configure the function to send its return value to another queue or topic (SQS/SNS) and use another Lambda to make the verification. If you just want to monitor your application (so you don't have any immediate action to take on a verification failure), you may look at a monitoring solution - whether CloudWatch metrics, Sentry or another.
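As a rough sketch of the "publish the return value to another topic" idea, shown in Python for brevity (the equivalent operations exist in the AWS SDK for Java); the results topic ARN is a placeholder for this example.

import json
import os
import boto3

sns = boto3.client("sns")

# Placeholder topic used only for this sketch; the real ARN would come from
# your own configuration.
RESULT_TOPIC_ARN = os.environ.get(
    "RESULT_TOPIC_ARN",
    "arn:aws:sns:us-east-1:123456789012:lambda-results",
)

def handler(event, context):
    results = []
    for record in event["Records"]:
        message = record["Sns"]["Message"]
        # ... do the actual work of L1 here ...
        results.append({"input": message, "status": 200, "detail": "processed"})

    # Because the SNS invocation is asynchronous, nobody receives the return value,
    # so publish it somewhere a verifier (another Lambda, a test, etc.) can read it.
    sns.publish(
        TopicArn=RESULT_TOPIC_ARN,
        Message=json.dumps(results),
    )
    return results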
Lambda L1 is subscribed to SNS topic S1. L1 returns a status code and a message every time it is invoked. I can check L1's response every time it is invoked independently, but when I invoke L1 by publishing a message to S1, how can I verify the message returned from L1? I need to do this programmatically in Java. Any pointers are appreciated.
AWS Lambda to SNS response after invocation
It's straightforward with the AWS SDK's CognitoIdentityServiceProvider class. You can use the same access token that we get from the user authentication:

var params = {
    AccessToken: "string"
};
var cognitoidentityserviceprovider = new AWS.CognitoIdentityServiceProvider();
cognitoidentityserviceprovider.getUser(params, function(err, data) {
    if (err) {
        console.log(err, err.stack); // an error occurred
    } else {
        console.log(data); // successful response
    }
});
I'm using amazon-cognito-identity-js to authenticate my user pool users. After authenticating, it returns an access token, an ID token and a refresh token, and the user ID (the user's ID in the user pool) is also there. Is there any way to get user attributes like nickname, birthday and address with these tokens or with the user ID in AWS Cognito?
How to get extra attributes of a user in user pool in AWS Cognito
There is a hard limit to the number of pods that can run on a particular worker instance type. This is because, by default, Amazon's VPC CNI assigns a subnet IP to each pod. This page lists how many interfaces, and how many IPv4 addresses per interface, a particular instance type can have. The primary IP of each interface is reserved for the node, and two host-network pods don't consume VPC IPs, so you get your answer from (maximum network interfaces) * (IPv4 addresses per interface - 1) + 2. For example, with a t2.medium (3 interfaces with 6 addresses each) you get 17 pods. Picking the right type, as mentioned by RickyA, is an exercise you will have to go through.
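A small helper to estimate the default pod limit from that formula. The ENI and IP-per-ENI numbers below are copied from the EC2 network-interface limits table for a few common instance types and should be double-checked for the types you actually use.

# Estimate the default EKS pod limit per instance type with the VPC CNI.
ENI_LIMITS = {
    # instance type: (max ENIs, IPv4 addresses per ENI)
    "t2.medium": (3, 6),
    "t3.medium": (3, 6),
    "m5.large": (3, 10),
    "m5.xlarge": (4, 15),
}

def max_pods(instance_type: str) -> int:
    enis, ips_per_eni = ENI_LIMITS[instance_type]
    # Primary IP of each ENI is reserved for the node; +2 covers host-network pods.
    return enis * (ips_per_eni - 1) + 2

if __name__ == "__main__":
    for itype in ENI_LIMITS:
        print(f"{itype}: {max_pods(itype)} pods")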
I am new to EKS and looking for the number of pods per node and the sizes of EC2 instances for nodes that AWS recommends for EKS, for better performance and HA. I found the limitations set by Kubernetes.io here. But I want to know AWS's school of thought when we run our clusters with EKS. You may share your experience too. This is not for polling, but to know the standard usages.
AWS Recommended POD sizes in Kubernetes - EKS